414 Comments
deleted · Mar 4, 2022 · edited Mar 4, 2022
Comment deleted

Well, personally I would consider that claim in your final sentence to be excellent evidence of a low IQ, all by itself. I would react to it the same way I would to a claim that the stars rule our destinies and horoscopes are the key to happiness. It's perhaps acceptable if you were raised in a 13th-century Russian shtetl where only the rabbi knew how to read, but not otherwise.

deleted · Mar 4, 2022 · edited May 10, 2023
Comment deleted

Pinker is not a devoted member of the LW rationalist community! He is just an academic who, by pure coincidence, has some similar ideas.

Mar 4, 2022·edited Mar 4, 2022

"Pinker cited the rationalist community as an example of a group of good reasoners" a quote from lesswrong.

Maybe devotion is a big word, but his ideas were definitely shaped by them.


That doesn't follow--I could cite someone as an example of a great dancer even if my ideas of what it meant to dance well predated that person's birth.


Whether he shares the ideas coincidentally or through inspiration, in both cases he's advocating for a form of rationality that Rationalists believe in, and that I can disagree with without being anti-rational. That, I think, is my point.

Mar 4, 2022·edited Mar 4, 2022

Sure, I agree that you are not ipso facto anti-rational for disagreeing with Steven Pinker lol


Would you mind saying why Pinker is irrational?


Rationality is about doing and believing what one has reason to do and believe. I take it then that no one really opposes rationality, but rather there exist disputes as to what counts as a reason to do or believe something. Members of one class of potential reasons have, for unfortunate historical reasons, been lumped together under the banner of "rational" reasons. Then, the debate about rationality is whether there exist reasons beyond those which commonly bear the title of "rational" reasons.

author

I think by saying "rationality is doing...what you have reason to do", you're passing the buck ( https://www.lesswrong.com/posts/rw3oKLjG85BdKNXS2/passing-the-recursive-buck ), in the sense of defining rationality by means of the almost-identical word "reason"

Mar 4, 2022·edited Mar 4, 2022

Not in any way which fails to make progress in the current discussion, though. By framing rationality as that pattern of behaviour/belief which conforms to the existing reasons for action, we can understand what debates about the importance of rationality consist in, namely, which facts constitute reasons for action/belief and which don't (recall the above, where I maintain that we have sequestered a subset of candidate reasons and labelled them "rational" reasons).

To worry about buck passing is to confuse the question of what rationality is (sensitivity to reasons for action) with the question of what the rational act is in a given case (whatever the weight of our reasons points towards). Debates about "rationality" as such and what counts as rational pertain to the first question. Here we have discussions about whether desires constitute reasons, whether moral facts (if such there be) constitute reasons, etc. One can debate these questions without any view on what the actual reasons are. And it is these questions we are debating when Pinker and Gardner are hashing things out.


Personally, I'm not sure we need both words.


Niko Kolodny wrote a paper on whether or not we have reasons to be rational (which he presented as his job talk that got him hired at Berkeley, if I recall correctly), concluding that the answer is no: http://www.mapageweb.umontreal.ca/laurier/textes/Phi-6330-H12/Kolodny-Why-rationality-05.pdf

However, I'm pretty sure that the summary of his thought is that if you do everything according to the reasons you have, then you will ipso facto be rational. The point is just that the fact that it's rational does not itself give an additional reason to do it - it's just a summary of the reasons we already have.

Even though I'm an academic philosopher, and this is very close to the kind of stuff I work on, I find this sort of thing very difficult going, so I can't guarantee I'm reporting it correctly.


Well, after the "reason" part you still have to actually *do* the thing, which requires willpower. Could rationality have something to do with willpower? I mean, they're surely not the same thing, but I think they might have something to do with each other (rationalists are more likely to go "I broke up with him because it was logical to do so" instead of "I knew it was the right decision but couldn't bring myself to do it").

founding

Rationality recognizes the utility of personal traits/attributes, but doesn't require them.

It's the logical thing in some global context, but it's also logical that, locally, it doesn't apply to me, because my emotional attachment and fear of experiencing loss and loneliness, etc., mean that first I need to logically strengthen my resolve (and my ability to do emotionally hard things, ask for help, find strategies that can somehow work around this problem, etc.)

founding

Systematic winning is "applied intelligence" (if intelligence is building correct models).

The only addendum needed is to notice that winning is subjective. Money and happiness are very useful, but maybe someone values having less happiness but also fewer migraines.


The problem is often a lot of unstated assumptions which differ between the sides of the argument. E.g. a while back I got into an argument, and about a day afterwards I decided that what it was really about was how one should discount future expectations. That was never mentioned during the argument, but it led to totally different choices of action. If you strongly discount future expectations, then "live for the moment" is the rational thing to do.


At various points over the past couple years I've wondered how different the musical Rent would be if it were about the covid pandemic instead of the AIDS pandemic (or tuberculosis, as in the original opera, La Boheme). I came to the conclusion that the central line would need to be almost completely reversed: "There's only us there's only this / forget regret, or life is yours to miss / no other day no other way / no day but today" - probably useful advice when there's a slow-spreading pandemic that cuts you down to a few years of life expectancy if you get it, but probably bad advice when there's a fast-spreading pandemic where the regret and thinking about tomorrow can save you during the few tough weeks.

Mar 5, 2022·edited Mar 5, 2022

I don’t really get this though because your comparison is of the already HIV positive with those who don’t have COVID, right? (I’ve never seen Rent.) It seems like in both scenarios “no day but today” is terrible advice for anyone not yet positive.

Edit to add: I guess I’m asking to hear more about your reinterpretation of Rent! Why doesn’t it have a lot of characters dying of Covid or with Long Covid or something who sing No Day But Today?


Being HIV positive is a part of someone's lifelong identity, while being covid positive is just a fact you (usually) have to deal with for a week. If you're HIV positive in the early 1990s (the characters have AZT, but they don't yet know that this means they now have 50 years of life ahead of them), the important part is not to dwell on expected death a couple years from now, but to find love and friendship and fun where you can in this brief time. But if you're covid positive in 2021, the important thing is to stay home for a few days now so that you can have fun with your friends next week, rather than having them all home sick in bed once you've recovered.

I think "No Day But Today" doesn't seem like it captures the experience of dying of covid - you're not temporarily well looking for some fun before you inevitably become bedridden, but you're either already bedridden, or pretty confident you'll get better. If someone is dying of covid, they can maybe sing "Goodbye Love", and you can maybe sing "Your Eyes" to them, just as when Mimi appears to be dying of AIDS. But you're definitely not going to sing "Out Tonight" or "Light My Candle", because you're on a hospital bed with a ventilator, not out looking for fun.

Long covid may be a more apt comparison, but I suspect it ties in to a different set of philosophical questions about the nature of urban life. I don't think long covid gives people a sense of impending mortality, the way HIV did in the 80s and 90s, but it sounds more like it gives a sense of impairment combined with (probably?) a standard life expectancy. I bet there would be a totally different and interesting musical about that. (I haven't actually seen La Boheme, so I don't know the comparison about how those characters interpreted living with consumption/tuberculosis, while not knowing the germ theory.)


Rationality is whatever helps us improve the reliability of the conclusions of "slow thought."

I'm using "slow thought" in the Kahneman sense, contrasted with "fast thought." Rationality can help "fast thought" only indirectly. Each kind of thought is useful in different contexts.

"Slow thought" goes by reasoning, and reasoning from incorrect assumptions or using incorrect tools is easy to mislead. Even in the absence of bad actors, incorrect thinking leads to stupidity. With bad actors, careless reasoning is easy to exploit (looking at you, QAnon). Rationality is trying to remove errors in "slow thought."

I think the Pinker/Gardner debate is talking past each other because they don't seem to be directly addressing the difference between fast and slow thought, and each has picked one of those to champion - which is silly, because they're tools for very different purposes.


Definitely agree with the talking past each other point.


I think rationality is also about when to use slow thinking rather than fast thinking (or intuition, in Scott’s phrase). A good example of this is using the EMH to guide your investments rather than using an investment strategy which is vulnerable to various human biases. Actually, I wish more realistic cases of rationality were used as examples rather than slightly esoteric ones like Newcomb's paradox, which nobody will ever encounter in real life and which ends up in various debates about practicality.


Careful here. Intuition can be either slow or fast thought. It's not-quite orthogonal. Intuition is what you use when either you don't have time to think things through, or you don't have the data to do so. It's not the same as "that thing's not worth thinking through thoroughly" and it's not the same as habit. For an example of intuition as slow thought, look up the discovery of the benzene ring structure in chemistry: https://en.wikipedia.org/wiki/August_Kekul%C3%A9

There are various mechanisms for thought, and many of them can be either slow or fast thought, though logic, calculation, etc. seem to be entirely slow thought.


A little-appreciated facet of Kahneman's ideas is the constant back-and-forth he pointed out between Systems 1 and 2. E.g. the "instinct" of the expert who can reliably smell the truth or not (in his field), clearly a form of System 1 thinking, is the *result of* years to decades of training and painful System 2 thought, which at some point results in some kind of strange emergent phenomenon where a trained System 1 can on occasion outperform System 2, even in the expert.

And I guess going the other way, peeling back the murk surrounding System 1 so that you can trace or mimic its machinations by System 2, is what psychology and psychiatry are all about.


Hmm. So figuring out some problem in your sleep and getting the answer during the morning shower is part of system 1 (fast thinking)? That doesn't smell right to me, but what do I know. :^)


It is, actually. System 1 isn't *only* characterized by speed, it's also not especially conscious, and not especially effortful. It's what goes on outside the spotlight of conscious awareness. So when Kekule went to sleep and dreamed of a snake biting its tail (they say) and awoke having solved the structure of benzene, that was System 1 at work. Had he sat down and worked it out logically and consciously, that would've been System 2.


Huh, so system 1 is all the stuff underneath 'consciousness'. Well then I'm going to agree with the evil Gardner*: rational system 2 is what we do, but it's not as important as system 1. AFAIK system 1 is where my 'smarts' are, though clearly trained by system 2. (Years of reading physics books and doing problems.)

*Yes, Howard Gardner... Let me just say that when I hear Gardner, I have nothing but positive thoughts. Martin Gardner is a hero to me, and the next gardener I think of is Samwise Gamgee... (Well, I started reading "The Fellowship of the Ring" again.)


I think it's a bit more complicated than that, and I am sure I am not doing it justice, but roughly -- yeah. One of the points Kahneman makes is that System 1 is what keeps us alive and chugging along, most of the time. We don't have to work out ballistic physics to duck when something is thrown at our heads, or to slam on the brakes when all the taillights turn abruptly red in front of us. We trundle through life making a million judgments and decisions a day, effortlessly and rapidly, and almost always correctly. It's only at rare intervals, typically when confronted with some new and strange problem, that we fire up System 2 and start burning serious glucose.

I think what he and his coworkers spend their time looking at is the dividing line: under what circumstances do we recognize the need for effortful, conscious, careful thought (System 2), and when do we understand that the heuristic seat-of-the-pants System 1 is good enough? Clearly there are cases where we screw it up: we rely on "intuition" when we should be thinking things out, and still other cases where we consciously reason ourselves into absurdity or even horror, which our "intuitive common sense" would've helped us avoid. The general question of how we decide which system to use, and how and why we make mistakes in that decision, is fascinating and complex.

I'm just adding that they point out that the two systems are not rigidly separated, and certain decision-making processes and patterns can get transferred back and forth. Skilled workers in any field build a System 1 that lets them get stuff done correctly much faster and with less effortful thought than a newbie (consider the analogy of how much less energy efficient a newbie is than a skilled athlete at, say, swimming or skiing). But we also spend a lot of time and effort using System 2 to try to figure out what the hell System 1 is doing, when it seems to be leading us wrong. ("Why do I always date the wrong kind of guy?")


I feel like (intuit) system 1 is using the product of extreme chunking which has been built up by system 2 over time.


Hmm. I make the analogy that thinking slow = rational thought. But I agree that not recognizing the fast/slow distinction leads to some needless disagreements.

deleted · Mar 4, 2022 · edited Mar 4, 2022
Comment deleted

OK, you can call them different names: slow is rational, conscious thought; fast is everything going on underneath. It doesn't have to be fast, so I agree "fast" can be a confusing descriptor.


System 2 builds models. System 1 interprets them. This is why a chess master can walk by a chess board and, off the cuff, say, "mate in 4," using only system 1.

Reminds me of the saying, "Neurotics build castles in the sky. Psychotics live in them. Psychiatrists collect the rent."


Hmm, I haven't read much Pinker or Gardner. Given that this article doesn't really define what it means to be rationalist, I'm going to fall back on my intuitions.

No, no, I joke! But I do have a couple of comments:

Intuition has multiple meanings or nuances; Scott is only addressing one of them, the "Intuition is really a reaction to a complex set of observations, and the observer isn't aware of how that complex set leads them to the conclusion they reached." But there is another meaning of intuition, which is "The ability to tap into knowledge which is not available to the intellect," such as through prayer or meditation, for the sufficiently attained or lucky. Now, it may be possible to train an AI to do that, or to interrogate a bunch of world-class meditators about how they did it, but it seems unlikely in the near to medium term future, at best.

Does Pinker address the Spock-as-strawman (straw Vulcan?) rationality? I mean, Gardner seems to be making the "what does your rationality have to say about emotions, eh?" argument.

And I think that's a valid argument, in that most self-described rationalists I deal with (mostly libertarians, unfortunately) seem to have a really hard time dealing with emotions, and in particular to recognizing when they are buying into emotionalist appeals.

Lastly, rationality seems to be about intellect. And as Galef points out, emotions are often the source of our goals and desires. I mean, rationality can tell me whether one banana is better than another banana, but it can't tell me whether I like bananas more than oranges. Sure, it might be able to tell me which one is better for me, but it can't tell me which one I like more.


I think Yudkowsky's "systematized winning" has some significant overlap with your final idea if you focus more on the "systematized" part.


Love the post, but one aspect I think Scott might have missed... the culture of it.

I think when Gardner says he doesn't like rationalism, he's at least partially saying, "I don't like this culture of people who are into math and board games, like or at least don't dislike polyamory, celebrate weird holidays like solstice but not in a way I am familiar with, are mostly white, too often male,... etc." He doesn't see rationalism as systematized winning, but as this culture.

author

I don't think Gardner knows about "the rationalist community". He's responding only to Pinker's book.


Yes, but there is still a broad popular conception of a "rational person". The kind of person that is good at math, that likes arguing, that favors "right brain" instead of "left brain". Or even just Ben Shapiro saying "facts don't care about your feelings". The point is that there is a common, intuitive idea of a rational person, and that draws a visceral reaction and then people argue based on such reaction.


It sounds like you're saying, "rationalism looks suspiciously like a flag to a lot of people" - is that fair?


Appreciate you steel-manning my argument :)


I think you mean left brain.


+1 to this idea. Like most political arguments, I think a lot of arguments for or against "rationality" are motivated primarily by tribal affiliations, and are only superficially about discovering truth or whatever.


The type of humor in this essay is, I suspect, one of the things people miss about older Scott posts. Been a while since it was this concentrated.


+1

Mar 4, 2022·edited Mar 4, 2022

I wonder if you're overthinking this a bit. I agree that

> Everybody follows some combination strategy of mostly depending on heuristics, but [sometimes] using explicit computation

In my view, describing oneself as a "rationalist" or "non-rationalist" is just a way to say which side it's better to err on, in close cases. (Compare "trusting" vs "distrustful" — everyone agrees that you should trust *some* claims. But a generally trusting person errs on the side of trusting more claims).

If we wanted to be more jargon-y, we could rephrase the above by saying: "everyone agrees to sometimes use heuristics; a rationalist uses the meta-heuristic of using explicit calculation in cases where the best strategy is unclear whereas a non-rationalist uses the meta-heuristic of relying on heuristics in those cases"
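
A toy sketch of that framing in Python (every name here is hypothetical, just to pin down the distinction):

def decide(problem, heuristic, calculate, is_clear_case, rationalist):
    if is_clear_case(problem):
        return heuristic(problem)  # everyone uses the cheap heuristic here
    # in the close/unclear cases the two temperaments err on different sides:
    return calculate(problem) if rationalist else heuristic(problem)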


But 'overthinking' is exactly what makes rationalism valuable! Much of the time you're just stating the obvious (or at least, the already-known) in a needlessly verbose show-your-working way, but occasionally showing that working produces new insights. It makes sense for Scott to show his working when defending/investigating the working-showing mentality :P

That said, I think you've pretty much nailed the TL;DR.


"And even if at the end of the day, the bad guys turn out to be more correct scientifically than I am, life is short. And we have to make choices about how we spend our time, and that’s where I think the multiple intelligences way of thinking about things will continue to be useful, even if the scientific evidence isn’t supportive."

https://www.youtube.com/watch?v=ESGLRnitp4k&t=45m


That is not *obviously* a stupid conclusion, though. There are plenty of highly functional delusions, e.g. if each of us was constantly consciously aware of our #1 existential problem -- mortality -- we would probably be mostly frozen in fear or apathetic with despair all the time, useless. Most people in very long-term marriages conclude that a certain amount of blindness to, or psychological denial about, the characteristics of their spouses is highly functional. So he may be quite correct that the idea is (1) incorrect but also (2) extremely useful. That can happen.


I don't think I would be frozen in fear or apathetic with despair.


Good for you. Let us know how it works out the moment you actually get the final diagnosis. If you nod calmly and continue annotating your grocery list, you will be one of the remarkable few.


I'm pretty sure there's a large spectrum of possibilities between "would be frozen in fear or apathetic with despair from the knowledge that you are eventually going to die" vs "wouldn't have much reaction at all to receiving a terminal diagnosis".


Let me introduce you to this concept called rhetorical hyperbole. It helps get the point across concisely without requiring a boring scholarly exegesis that runs to many paragraphs. If you agree that the problem of undeniable mortality is one that *would* occupy your mind to an unhealthy degree -- regardless of whether you would literally exist in the extreme state -- then you already agree with the point and are merely adding a quibbling footnote, which to be sure would be reasonable -- if this were a scholarly debate and not Internet ephemera.


I think it would occupy my mind to a healthy degree. Insofar as it occupies one's mind, one has two options that wouldn't be available if one ignored it. First, one can try to avert whatever it is that one expects one is going to die by. Second, one can make decisions that take into account the fact that one has limited time left.


I’m watching my dad die right now. And honestly there seems to be an incredible freedom in realizing that death is nothing to fear, because it’s going to happen to me no matter what. All I can choose is how I’ll spend the little precious time I have. And cowering in fear just seems like a waste, because all it does is ruin what’s left.


I don't think it's about any delusion. For me, the problem just doesn't register emotionally, for some reason. Usually. I had periods where it did, and it sucked.


Yah that's called denial. It gets a bad rap, but denial is a powerful and useful psychological tool, it allows us to focus on the problems of the moment without being overwhelmed by the (perhaps much bigger, perhaps even insoluble) problems of the future. Another much less pejorative word for it is "focus." Psychological focus is a form of healthy denial. Unhealthy denial is what we call obsession or blindness et cetera.

I'm just pointing out that there's far more to our functioning, even mental functioning, than our conscious beliefs, and for that reason we have ways of essentially shunting the conscious beliefs to one side -- through denial or functional delusion, among other things -- when they get in the way of getting stuff done. So when Gardner says "this may be wrong but it's still very useful" that isn't self-contradictory nonsense. It may be correct. It may also be wrong, of course. But it isn't *obviously* wrong.


I'm not sure if this is actually relevant, but your final paragraph reminds me of an argument I've had several times about the "pie rule". ( https://en.wikipedia.org/wiki/Pie_rule )

There's some game that's unfair, and someone suggests making it fair by allowing one player to assign a handicap to one side and then the other player to choose which side to play, given that handicap. This incentivizes the first player to pick a handicap that makes the game as fair as possible.

The person making this suggestion often argues that if you are good at playing the game, then you should automatically also be good at picking a handicap that would be fair. And thus, the pie rule is meta-fair in the sense that the winner of the-game-with-pie-rule should always be the same as the winner of the-game-starting-from-a-hypothetical-fair-position.

I disagree.

I think the intuition for their claim is something like: One possible way of picking winning moves is to consider every position you could move to, estimate your degree of advantage in each one, and then pick the move that leads to the largest advantage. And if you can determine the degree of advantage of an arbitrary game-position, then you can also pick a fair handicap just by choosing the advantage that is closest to zero, instead of the largest advantage.

But that is not the ONLY possible way of picking winning moves. You could pick winning moves even if you can only RANK positions, and cannot score them on any absolute scale. You could even pick moves by applying some heuristic to the moves themselves, rather than to the resulting game-positions.

If you just have a black box that outputs the best move, you can't use that black box to pick a fair handicap.
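
A minimal sketch of that asymmetry (all names are hypothetical, with moves modeled as functions from positions to positions): an evaluation function buys you both move selection and handicap selection, while a bare move oracle buys you only the former.

def best_move(position, legal_moves, evaluate):
    # an evaluator scores any position for us; pick the successor scoring highest
    return max(legal_moves(position), key=lambda move: evaluate(move(position)))

def fairest_handicap(handicaps, start, evaluate):
    # the same evaluator also picks handicaps: choose the starting position
    # whose score is closest to even
    return min(handicaps, key=lambda h: abs(evaluate(h(start))))

def play(position, oracle):
    # a black-box move oracle still plays well...
    return oracle(position)

# ...but there is no fairest_handicap(oracle): the oracle never tells you
# *how good* a position is, only what it would do next.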

This sounds a little bit like the idea that "skill at X" and "skill at studying X" are not (necessarily) the same.

X and studying-X are fundamentally related in a way where if you had a complete understanding of the nature of X that allowed you to predict exactly what would happen in any given situation, then that should make you good both at X and at studying-X.

But this doesn't rule out the possibility that there could be lesser strategies that make you good at one but not at the other. A black box that makes money doesn't (necessarily) give you a theory of how money-making works. A pretty-good-but-not-perfect theory of money-making won't necessarily let you outperform the black box.


But even if, because of the uncertainties you mention, one cannot assign a fair handicap, it seems like this pie division method will at least provide a fairer game, probably immediately, but if not at least after some iterations.


There are situations where the pie rule is useful, but it merely gives someone an *incentive* to make the game fair; it doesn't bestow the *capability* to make the game fair. If you add the pie rule to a game, you haven't solved the fairness problem; you have merely delegated that task to someone else.

In most cases, if you are designing a game, you are better able to pick a fair starting position than most of your players will be. If you can't pick one, then you probably can't expect your players to be able to pick one, either.

The pie rule is potentially useful for players who understand the game better than the game's designer, or if you have procedurally-generated content such that the player is handicapping a unique scenario that the designer never saw.


Spot on. I play a lot of games, some with asymmetric starting factions, some of which seem imbalanced - weaker than others. I consider myself a game designer and yet I recognise it's a very hard task to come up with a fair house rule that makes the weaker faction stronger while not disturbing many other balances that exist within the game.


Why is it hard? Just introduce the element of chance, and shazam! the weaker guys have just as good a chance as the stronger guys. But why would you want to do this? Isn't the *purpose* of games to prove out which people are better at Skills A and B, whatever A and B might be? We have basketball games to see which set of five guys are better at throwing a ball through a hoop, as well as passing it to each other, and strategy encompassing the same. Why would we want to introduce a rule that arbitrarily makes it easier for people with less skill in those areas to win?

On the other hand, I can see why we might change the rules so that Skill C becomes part of the winning mix. We could prohibit any speech during a basketball game, and now the skill of nonverbal communication starts to become essential for winning. Is that the kind of thing you mean?


We're not talking about games like basketball, so much as games like Fox and Hounds https://en.wikipedia.org/wiki/Fox_games or Tafl, traditionally, or modern games like Gaia Project or Cosmic Encounter. What's the purpose of games? Whole books could be written on that. Having fun with friends? Mental exercise? And yes, one part of it is strategic challenge, "can I beat you?" (Or "beat the game" for cooperative or solo games.) But there's many aspects to that, especially with a large number of asymmetric factions or board states such that it's impractical to prepare for all combinations in advance: it rewards being able to quickly analyse a new strategic space, and even more so if new hidden or random information is added to the game partway through. There's delight in exploring interesting rules interactions ("spotting a combo" that nobody else has seen yet), finding a tactical move to satisfy the constraints of the actions of several other players, and so on. All of which may mean that if the factions are significantly asymmetric, it's very hard to predict the consequences of a proposed house rule or balancing tweak.


The point of the pie rule is not only to make the game fair, it's to forestall a certain kind of dispute. In the ur-example, maybe the pie-cutter has poor motor control, and the two halves end up being different. If the pie-cutter also suggested the rule "I'll cut, you pick", they can't complain about the process, because it's all on them. (I mean, maybe they didn't know they had such poor motor control, but that's not the opponent's fault.)


Let me just state, as an aside, that the meta-rational view here is that you need to understand why you're playing the game.

MOST games (and this includes most of the games of life) are played to be played. If you prioritize winning over everything else, you'll soon find no-one else wants to play with you.

So obsessing over details like above (and the comments below) is missing the point! You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing. If you miss that point, you miss everything.

(Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way.

This point has been made by a variety of rather different thinkers, from Jordan Peterson to David Chapman. Yet it still seems to be a kind of esotericum that should have, but has not, made its way into the "Rationality Community".)


> You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing.

That's exactly what we're doing. It just so happens that one of the heuristics that people commonly use to predict which rules will be worth playing is "fair games tend to be better than unfair ones".

(You really think that "obsessing over details" is going to make someone a WORSE game designer? Do you avoid painters or engineers who "obsess over details" of their craft?)

Mar 7, 2022·edited Mar 8, 2022

But you see, that's exactly where we differ. You think "fairness" is the most important thing; it's not! And so you obsess over the wrong details; you obsess over whether changes make the game more fair (whatever that means), but the point is not an abstract one (is this more fair in the eyes of Ahura Mazda), it is an empirical one, namely: do enough of the parties involved in the game want to continue playing, or would they prefer to bow out?

For example: a person sometimes fights with his wife. Maybe they are basically a great match, but she has one particular small habit that leads to fights, like she frequently mis-remembers what happened in some situation in the past.

OK, we can treat this "rationally", go to the mat over these fights, drag up evidence and testimony, and generate a divorce within a year.

Or we can treat it "fairly", keeping score and allowing her to win exactly half the arguments (while each side secretly seethes), and generating a divorce within three years.

OR both parties can realize that what matters is keeping the game of marriage going, not winning the game of marriage, so he accepts that she is just that way and lets her say whatever mistaken memory she has, she accepts that his dress is somewhat sloppy but whatever, it's just not that important, and they have a happy life and a 50th yr wedding anniversary.

The same holds for business, for international negotiation, even for high stakes sports (where the rules are constantly tweaked, not for "fairness", but to ensure that everyone involved finds it worth continuing to play).


I feel like I'm saying "this motor would be 20% more efficient if you used this material instead of that material" and you're saying "you fool, why are you discussing efficiency instead of asking whether you need an engine at all!"

There certainly exist situations where you don't need the engine at all. But there are also SOME situations where you DO want an engine, and in those situations it probably matters how efficient it is, so learning how to make engines more efficient is still a useful skill. It's a reasonable thing to study and discuss.

Also, I am (mostly) using the word "fair" as a technical term of art that means "starting from this position, if you don't know which person is playing which side of the board, you would predict equal payouts for all sides on average."


Yes, exactly, that's precisely my point (as it is The Last Psychiatrist's point).

And yes, there are times when it is appropriate to optimize the engine. (And my day job involved and involves large amounts of this, both as physics and as optimizing computation in various ways.) But there are also times when it is appropriate to ask the meta-question...

And that's what this post was about. There are (at least) three levels

- Tribal idiot (which is where Gardner mostly seems to be)

- Rationalist (useful, very powerful, but ultimately short-sighted)

- Metarationalist (use the tools of rationality but think at a more abstract level about the goals to which those tools should be applied)

And yes, who's to say that my goals are more meta-rational than your goals? Well, that gets us into different discussion, one that's too far afield to pursue here.


I am a few days late, but I really wanted to give you a high five for "Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way."

The question of goal setting, and how often that goal is simply "keep the game going," is such a huge aspect of human existence, possibly life's existence in general, but totally overlooked by most economists using game theory. Especially considering how often 'Step Zero - Choose Goal' is in fact Step N-zillion in some earlier process. Every game is a subgame within a veritable tessellation of games. Which is why normal people are vastly better at Ultimatum Games than economists, for example: the logical "winning" move from within the current game is highly illogical within the context of the ur-game. One sees it in politics as well, when one team wants to win over the other once and for all when really the correct goal is to keep the game going.
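
A toy Ultimatum Game sketch of that point (the acceptance threshold and payoffs are invented numbers): the move that "wins" the one-shot game is ruinous once the game is embedded in repeated play.

def responder_accepts(offer, one_shot):
    if one_shot:
        return offer > 0  # in isolation, any positive offer beats nothing
    return offer >= 30    # in ongoing play, people punish unfair splits

def proposer_total(offer, one_shot, rounds=100):
    per_round = (100 - offer) if responder_accepts(offer, one_shot) else 0
    return per_round if one_shot else per_round * rounds

print(proposer_total(1, one_shot=True))    # 99: the one-shot "winning" move
print(proposer_total(1, one_shot=False))   # 0: the same move kills the game
print(proposer_total(30, one_shot=False))  # 7000: keeping the game going wins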


Isn't he explicitly talking about Tooby & Cosmides' ecological rationality? This isn't "an argument about heuristics. It’s the old argument from cultural evolution: tradition is the repository of what worked for past generations."

author

Who? Pinker or Gardner?


Pinker. I have not read the whole book but I did ask him via email if he would be mentioning Tooby & Cosmides' ecological rationality and he said yes. As an EP I'd be expecting him to be supporting it. https://www.researchgate.net/publication/231982306_Evolutionary_Psychology_Ecological_Rationality_and_the_Unification_of_the_Behavioral_Sciences


Also see their hypothesis of emotions as a superordinate programme (if you don't know it already). https://www.cep.ucsb.edu/emotion.html


and Hoffman's "fitness beats truth" hypothesis


The individual action of engaging in rationality clears the dust out of the intuition pipes. Then the next time the angel flies by to deliver an inspiration, they have a direct shot.

This is flippant, but in terms of discussing the symbiosis of rationality/nonrationality, it’s a necessary point. Lots more to say but busy now - this was a good read.


Most explicit anti-rationalism I encounter boils down to "think locally & politically, because abstract first-principles thinking sometimes leads to socially undesirable conclusions." Of course that's mostly a complaint about how some perceived rationalists behave (antirational "read the room" vs rational "just asking questions") rather than a principled objection to rationalism in the abstract, but then that's exactly what a rationalist would say...


There is the concept of rational irrationality in which it is instrumentally rational to not be so epistemically rational. Being rational is the best way to be correct. However, being correct is not always worthwhile, but it frequently is.

I think that most people have little to gain by being rational about many topics, and frequently being correct lowers the quality of their life and wellbeing. I'm thinking of religious apostates and political dissidents who are actually correct about their beliefs, but not in line with the larger society. It is also not useful to try to analyze every situation in depth, like in the case of the spam emails that you mentioned.


I couldn't agree more. Oftentimes being perfectly rational doesn't pay off.


"The Elephant in the Brain" explicitly says readers may be worse off as a result of reading the book, because evolution selected us not to be fully rational. Robin Hanson continues on that here: https://www.overcomingbias.com/2022/02/rationality-as-rules.html


There are also major benefits to being irrational as a bargaining position.

"I'm willing to blow up this whole deal if you don't give me what I want, and I don't care that this will damage us both" is a fantastic bargaining position. "I am willing to take any deal as long as it's marginally better than my BATNA" is a terrible one.


It's a fantastic bargaining position until someone calls your bluff, and it also means you never develop any soft power (in politics) or genuine interpersonal relationships (in life).


It's a great position the first time you use it, if you already have some goodwill built up using soft skills. It's a terrible position to use repeatedly, because your potential bargaining partners will need to adjust their strategy if it's used repeatedly, and they will develop a strategy specifically to beat yours. Most likely, they will find a third party to work with and cut you out of any deal, if possible. If that's not possible, they will look for ways to make your own strategy cause you more harm, or perhaps just use an even more damaging tactic, such as fighting/war.

Nobody thinks North Korea (or Iran, or Syria) is in a better position because it's willing to be irrational. That only makes sense if you assume their current level of development is the normal one. Compare North Korea to South Korea and realize how much NK has missed out on due to their irrationality.


I agree that Pinker is being knowingly glib by equating the mere act of reasoning with the explicit practice of rationality. I haven't read his book (yet), but Gardner's objection brings to my mind Elizabeth and Zvi's Simulacra levels.

As in, committed, self-identified rationalists (and chauvinistic monists, for that matter) appear to make something of a cultural fetish of the object level. Instead of also applying rationality to the higher simulacra levels - how people think, and how people wish to influence the ways others think, and so on - they make it their mission to oppose and reduce the chaos produced by these considerations. Julia Galef's book is pretty much about that.

Gardner seems to be against this project of narrowing down discourse toward the object level, deeming it both impossible and potentially harmful in the process. At the very least, the project shouldn't cloak itself with the purity of mere reasoning. The moment you have a mission, you're playing the game.

(The fact that all the simulacra levels do eventually add up on the object level in the end is both true and mostly irrelevant - about as useful as pointing out all the hormones and gene expression to a man in the throes of love.)


Do they say they're not, uh, "playing the game"?


#1 Don't knock drug-fueled orgies until you've tried them. The risk/reward ratio is probably favorable compared to many other edgy-but-socially-acceptable activities like general aviation.

#2 When a guy jumps out of the bushes (or in my case, the front seat of a cab in Panama City) and sticks a gun in your face, you have a remarkable amount of time to perform rational analysis. Adrenaline is a magic time dilator. In my case, after what seemed like an eternity of internal debate (but was actually half a second) in which I contemplated a half-dozen courses of action, the conclusion was "jump out of the moving cab". The comedown was harsh though. Adrenaline is one hell of a drug.

Mar 4, 2022·edited Mar 4, 2022

I've never been a big fan of the term "rationality" for a lot of the reasons described in this post -- it seems to carry the connotation (warranted or not) of opposing intuition and heuristics even when those may be valuable. I appreciate the bit about "truth seeking" and "study of truth seeking", which I find to be much more clarifying language so I'll stick with that here rather than trying to say anything about rationality.

I tend to think of truth-seeking as a fundamentally very applied endeavor, and also one that can be taught without necessarily being systematized. Not just a purely intuitive knack, but not necessarily best served by a formal discipline studying truth-seeking either.

As an example, I think one good way to learn effective truth seeking is to study a specific scientific field. A major part of a scientific education is learning how to seek more accurate models of the world. One learns tools, from formal mathematics to unwritten cultural practices, which help to separate truth from seductive but incorrect ideas.

Then, there are of course also people who study how science is carried out (eg, Kuhn & other philosophers of science). Tellingly, most practicing scientists pay relatively little attention to this, other than as a form of navel-gazing curiosity.

Rather than rocks vs geology, I think science vs study of science is a better analogy to truth-seeking vs study-of-truth-seeking. And as with science, I am skeptical that the study of truth-seeking has much to say that will actually improve the practice of truth-seeking, compared to engaging directly with the practice of truth-seeking. Though perhaps 2000 years of follow-up research will prove me wrong.

Mar 4, 2022·edited Mar 4, 2022

On the contrary, we spent a thousand years between Aristotle and Galileo, roughly, studying truth seeking per se, and leaving almost all actual progress in acquisition of the truth to the Islamic world. It was only when we abandoned the study of truth seeking and decided to just seek the truth any old way we could -- just start measuring stuff, and start trying to formulate practical rules of thumb and dumb crude laws that would predict what would happen next -- that progress took off in the 1600s. Since that day the study of truth seeking has trundled along in the wake of actual truth acquisition (at least about the natural world), debating post facto how it was all done. All very interesting from the metaphysical point of view, but no actual physicist gives two bits for it. Not a one of us is trained in philosophical methods of inquiry, or cares to be, or would not find it tedious and useless.


Agree with this; human rhetoric is a very strong force, and a good practitioner of it can convince you of anything absent empirical evidence. I think this is actually one of the best reasons for being a rationalist: don’t let yourself be fooled by rhetoric, insist on evidence. And if evidence cannot be produced at the moment for practical reasons (as with, say, string theory) then keep an open mind but don’t commit to these ideas.


Being a rationalist only protects you against some forms of that. The kicker is the axioms and rules of inference. Look up Pascal arguing that one should believe in (the catholic) god. There are LOTS of other examples. When you say "insist on evidence" you're arguing for empiricism rather than rationality. They work well together, but they aren't the same thing.


I think a rationalist should be an empiricist for the reasons I gave. Any subject that relies mainly on arguments is going to be dominated by the people with the best rhetorical skills, not the people with the best understanding; that's a fundamental rule of human society. Of course extrapolation from evidence is usually fine, but you still need to be less trusting of it than of direct evidence. There are hundreds of examples of areas where the accepted view derived from "logical analysis" was completely wrong until people actually did the experiment. The replication crisis in social science is one of the more recent ones. If being rational means anything, it means being careful about being fooled by BS, because humans are very good at BS.

Mar 4, 2022·edited Mar 4, 2022

To be honest, what I've seen of the soi-disant rationalist community is that it has more in common with the medieval scholasticism I am describing than post-Enlightenment empiricism. The empiricist *specifically distrusts* purely logical argument.

Feynman was especially clear and cogent on this point: he said (paraphrasing) that the way you do good science is that when you come up with a hypothesis or chain of logic you immediately distrust it, the more so if it's your own or you find it appealing and convincing, and then you set about diligently trying to disprove it every way you can, through direct measurement. You share the idea with your friends and even more importantly your enemies, specifically so that they can try to disprove it in ways to which you might be blind, out of ignorance or wishful thinking. Only after a lot of genuine effort has gone into trying to disprove your hypothesis and it has all failed do you start to (very cautiously) consider that it might be true -- at least until the next measurement comes along.

It's not *exactly* throwing human reasoning power out the window, but it's freighting it with such a load of skepticism that even the most "obvious" points in a chain of logic require experimental test. So very different from assigning the probability of a conclusion based on the persuasiveness of the logical argument leading to it.

It's also of course not a way of thinking one can afford to indulge in all aspects of life. Necessarily in many areas we *do* judge the probability of conclusions being correct by only the persuasiveness of the argument, because we just can't test everything for ourselves (and some stuff isn't amenable to experimental test anyway). But while the empiricist accepts this necessity, he does not consider it a virtue.


Hmm... The Islamic world a hotbed of truth acquisition in the Middle Ages? Well, sort of, and up till approx. CE 1200, maybe. But the deference to "the historical importance of Islam for science" today is somewhat overdone. The argument tends to forget the importance of what went on in Byzantium: 1000 years of practical science (among other things) after the fall of the West-Roman empire. Some might even argue that the Renaissance was triggered by the fall of Byzantium in 1453, when scholars had to flee to Italy and other places to continue their work. By that time, the theologians had long since defeated the "philosophers" in the Islamic world, i.e., Islam had stagnated from a "discover new science" point of view.


There's an even more important point which is that the argument kinda elides who exactly was doing this truth acquisition (or at least truth preservation).

As far as I can tell, for the most part it was being done by the remnants of prior classical culture, by people inheriting the mindset and culture of the previous hellenic overlords, not by those imbued with what we might call the "desert" culture.

If one wants to use this argument as a claim that "my tribe/religion/ethnicity/whatever is tres awesome", be very careful, because it's not at all clear that it actually proves what one wants it to prove... (Even apart from the general incoherence of imagining that you can somehow bask in the reflected glory of what people belonging to your claimed affiliation did 1000 years ago.)


Eh, the point was historical. Any supposed agenda in this case is in your eyes, not mine.

The implicit reference is to the conflict between the philosophers and the theologians within medieval Islam. Although “who won” is still a matter of debate today, not least due to the prestige given to “science” everywhere in our time, a traditional opinion is that the (medieval) theologians defeated the (medieval) philosophers within Islam.


"Since that day"

What day? When in the 1600s? Who specifically was doing that abandoning?

Mar 4, 2022·edited Mar 4, 2022

30 October 1662. William of Manchester, who at the time was the Lord High Grand Inquisitor (Hidden) of the League of Empyrical Gentlemen. All else follows from his fateful discussion with friends over low-quality brown ale one miserably cold and drizzly September afternoon in a pub the name of which is still a League secret.


I can't tell the difference between "the study of truth seeking" and "philosophy". Is rationalism philosophy? If not, what's the difference?


Minor correction:

"Newcomb’s Paradox is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently end up with nothing."

If you follow what seem like the rational rules, you'll consistently end up with just 1 thousand dollars.
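
For reference, a sketch of the standard payoff structure, assuming the usual perfectly reliable predictor:

# the predictor puts $1,000,000 in the opaque box iff it predicted you would
# take only that box; the clear box always holds $1,000
payoff = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_001_000,
    ("two-box", "predicted two-box"): 1_000,
}
# with a reliable predictor you only ever land on the matching cases:
# one-boxers consistently get $1,000,000, and followers of the seemingly
# rational dominance argument consistently get $1,000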


Also, the transhumanists from the future will keep running simulations of you all the time, giving you 1 thousand dollars in each simulation, just to laugh at you. The guy who chooses 1 million will only be simulated once, to verify this hypothesis, but then it would be boring.

Would you rather be a rich guy who is dead, or an average guy who lives forever as a meme?

In general, simulation hypothesis seems to explain why people are so crazy. Crazy people are more fun to simulate.

Mar 4, 2022·edited Mar 4, 2022

Karl Popper's epistemology would be very useful here (it would always be useful, and it's almost never used!!)

In Popperian terms: effective truth seeking is basically just guessing plus error correction. Rationality is actively trying to correct errors in ideas, even very good ideas. Irrationality is assuming good ideas don't have errors (or that errors exist but can't be corrected). Antirationality is actively preventing error correction.

A fictional example: an isolated island community has a sacred spring and a religious belief that no other water source can be used for drinking water. This is a useful belief because water from the spring is clean and safe, and many other water sources on the island have dangerous bacteria. One day, the spring dries up. Rationalists on the island try other water sources; some get sick in the process but they make progress overall. Irrationalists try prayers and spells to make the spring come back. Antirationalists declare the rationalists to be heretics and murder them all.

I think people who are "against" rationalism (and who aren't antirationalists like Khomeini) tend to be in the "good ideas have errors but it's vain/hubristic to think we can improve them" camp. Often trying to improve on established ideas leads you off of a local maximum (only drink from the sacred spring). But being trapped at a local maximum is bad, even if the path to a better point is treacherous. And external circumstances can make good ideas stop working (the spring drying up) anyway.
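
A minimal code-shaped sketch of that guess-plus-error-correction loop (all the moving parts are stand-ins supplied by the caller):

def seek_truth(initial_guess, find_counterexample, revise, steps=100):
    best = initial_guess
    for _ in range(steps):
        error = find_counterexample(best)  # actively try to refute the idea
        if error is None:
            continue  # survived this test; keep it, still tentatively
        best = revise(best, error)  # correct the idea in light of the error
    return best  # never proven, just not yet refuted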


Thank you for beating me to mentioning Popper. When I read essays like this I'm always looking for anything interesting that Popper didn't adequately cover and I seldom find anything. For me, "Conjectures and Refutations" highlighted many interesting concepts beyond the title (which essentially does completely explain "rational" thinking). First is that I realized that the scientists I worked with didn't really know what "science" was. Which is interesting. (You can debate it, etc., but the point is it is more nebulous than the "scientific" community would have you believe.) The other thing I found interesting is that Popper is quite ok with tradition. Is it ok to do something because that's the way it's always been done? (Perhaps study "science" in the traditional way?) Sure is! That's a perfectly sensible heuristic it turns out. The time to consider rejecting tradition is if you can create a hypothesis that contradicts tradition that can be challenged but then survives that challenge (and all subsequent ones). It's all quite messy, sure, but Popper is some of the clearest thinking on rational thinking I've yet seen. He was concerned with separating woo from sensible "science" (whatever that might be) and I think his ideas work. If that doesn't sound compelling then I apologize for my bad interpretation and encourage you to study and criticize his work directly.

Expand full comment

Is rationality "trying to correct errors" or is it "the process for more effective error correction"? Or am I being a pedant?

Expand full comment

Error correction in the sense of being less wrong, or in the sense of winning more? Because they are unlikely to lead you in the same direction.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

I feel like there's at least a semantic rhyme here with the idea in linguistics that, _by definition_, a native speaker of a given language has a kind of tacit "knowledge" of the rules of that language, even if they can't articulate those rules. Modern linguistics -- the kind that Pinker practices -- is the enterprise of first taking those rules and making them explicit, and then moving up one layer of abstraction higher, to try to understand why it is that certain kinds of rules exist, and other kinds do not. And the flavor of psycholinguistics that I studied in college tries to ground those questions about the rules in actual brain structures.

As a side note, I actually think Pinker is pretty deeply wrong about linguistics, and in a way that challenges his own claims to being uber-rational. The Johns Hopkins linguistics department is the home of "optimality theory", which posits that the "rules" of a language are actually like a set of neural nets for recognizing and generating patterns -- or, more to the point, they're _like_ a set of computational neural nets, because they are _actual networks of human neurons_. Once you adopt this frame, you can see how a given situation could result in different "rules" for generation giving you conflicting answers, and then you think about how different networks can tamp down the activity of other networks. Hence the concept of "optimality theory". The actual language produced by a given mind is the optimized output of those interacting rule networks. And we get linguistic drift as new minds end up having different balances between rules, and ultimately creating new ones.
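To sketch the shape of the idea (the constraint names and violation counts below are invented, but the lexicographic "a single violation of a higher-ranked constraint is fatal" evaluation is the core OT mechanism):

```python
# Minimal sketch of Optimality Theory-style evaluation. Each candidate
# output is scored against constraints ordered from highest- to
# lowest-ranked; comparison is lexicographic, so one violation of a
# dominant constraint outweighs any number of violations further down.
def ot_winner(tableau):
    """tableau: dict mapping candidate form -> tuple of violation counts,
    ordered from highest-ranked constraint to lowest-ranked."""
    return min(tableau, key=tableau.get)  # tuples compare lexicographically

# Toy tableau for two conflicting "rule networks" (say, markedness
# dominating faithfulness; names and counts are hypothetical).
tableau = {
    "output-a": (0, 2),  # clean on the dominant constraint, two minor violations
    "output-b": (1, 0),  # one fatal violation of the dominant constraint
}
print(ot_winner(tableau))  # -> output-a
```

Rerank the constraints (reorder the tuples) and the winner can flip, which is the OT story for cross-linguistic variation and drift.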

I got to sit in on graduate seminars with both Chomsky and Pinker in my time at JHU, and while they're both clearly brilliant, they seemed committed to a purely algebraic, functional model in which rules operate deterministically, like clockwork. This seems to fly in the face of what we know about how neurons work -- it seems, dare I say it, irrational.

Expand full comment

Chomsky, Pinker, and the whole UG crowd premised decades of work on the "poverty of the stimulus" argument which is a perfect example of intuition over rationality. The argument goes "children learn language even though it sort of seems like they don't hear enough input sentences to figure out what the rules of language should be".

That has always bothered me. Like, what single shred of evidence did anyone ever collect to establish a default assumption about how many input sentences a child should need to learn a language? Founding an entire branch of linguistics based on nothing more than a gut feeling is certainly bold, I'll give that to Chomsky.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

I think this explanation undervalues the extent to which having an explicit model of the world helps you develop better intuitions. For instance, the guy who just has a knack for geology may be able to better find diamonds than the geologist, but I bet if you find a kid with a knack for rocks and *teach them geology*, they'll be able to find diamonds better than either of them. Intuition is not a black box, the brain doesn't do intuition versus models; models feed from intuition, intuition feeds from models.

Expand full comment

Well, it also drastically undervalues the natural human behavior of apprenticeship and mimicry. In actual fact, we learn most often by simply following someone else and doing what they do. That is, "knacks" are very often learned by direct impression, and very effectively so. By contrast conscious legible learning -- "book learning" to take the hostile viewpoint -- is notorious for not being as reliable and complete as direct absorption. Even in science this is how it's done. Nobody absorbs learned treatises on how to do laboratory science, explains the ideas in beautiful prose -- and is then given the keys to his very own $10 million laser lab. Instead you are compelled to spend years in direct apprenticeship with people who "have a knack" for doing good experimental work, and you are expected to absorb the "knack" by some poorly-understood process of direct absorption and imitation, and only then are you actually trusted to advance to journeyman and eventually master status.

Expand full comment

Let's call this "hidden knowledge", and define it as a kind of knowing that you're unable to make legible. Let's also define rationality as the process of developing legible knowledge. Sometimes that project involves making illegible knowledge legible. Many times, it's ancillary tools that can incorporate hidden knowledge ("apprenticeship programs outperform direct book-learning programs") as a kind of 'black box' without straying from 'rationality' per se. We just understand that the black box is not itself part of the rationality project.

It's unfair to levy the accusation against rationalists that because something can't be made legible that means rationalists can't make use of that thing. It's enough to shrug and say, "Sure, that's a hidden knowledge system. I can't know what's inside the black box, but I can still understand the box's purpose and make use of it."

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

Well, the problem is the intuitionists have no less ability to employ the black boxes, and their existence lends greater credence to their argument that conscious reasoning doesn't take you as far as a wide and comprehensive intuitive understanding of the varieties of black boxes out there (among human beings, for example), and how they can be employed.

That is incidentally why 60-year-olds run the world and 20-year-olds don't. The 20-year-old brain is much faster, much better at sheer raw calculation, and makes far fewer mistakes. Why don't 20-year-old brains dominate the way 20-year-old bodies dominate the Olympics? Because it turns out a general apprenticeship -- i.e. living a few decades -- in which you learn by direct exposure about the vast arrays of black boxes inside peoples' heads and hearts gives you so much greater power of getting to the right answer that you can outperform a brain that operates at twice your clock speed -- but which is required to proceed by conscious logic.

Expand full comment

I'm not sure I'd fully buy the argument that the 60 yo brain is handicapped relative to the 20 yo brain, but in general I think this argument is right. The intuition of the anti-rationalists seems to be, "Wait, you're claiming that legible systems are superior to illegible systems, yet in many cases illegible hidden knowledge systems outperform explicitly legible systems AT THE SAME TASK."

Scott is arguing that of course we would put our faith in the illegible system when that happens, but that's assuming a steady-state situation. Sometimes the only way to get hidden knowledge is through illegible means, and there's an implicit heuristic in rationalism that illegible means should be avoided or should be made legible, whereas the counterargument seems to be, "No, sometimes illegible knowledge generation is the better way to go."

Expand full comment

Ha ha well let me tell you from personal experience that *my* 60-year-old brain is definitely handicapped relative to its 20-year-old self. I definitely think slower, I forget details enough to annoy my younger colleagues, I can't put my finger on the word I want in the middle of a sentence, sometimes a very ordinary word, to no end of amusement or annoyance among my listeners, et cetera.

On the other hand...I get the right answer to complicated questions of what's going to work out right, or what is the important component of this complex issue, much faster and more accurately than my younger colleagues, which is why they trust me in leadership roles. Why that is true I am not entirely sure. Some part of it is that I have just seen much more than they have. I can recall "oh yeah we tried something like that 18 years ago and this disaster ensued." But I don't think that is entirely it. There is also some skill of "seeing to the heart of the matter" that improves slowly and steadily as you age and get more experience, although what precisely that entails would probably require a 20-year-old brain to describe.

Sure, "I do what works, and if that means equations I use equations, and if that means intuition I use intuition" is a perfectly reasonable strategy, although somewhat vacuous because it doesn't give you any guidance ahead of decision points. "Be sure when choosing you choose the option with the better outcome!" is sort of a priori useless advice.

Expand full comment

I am neither a Rationalist nor an Anti-Rationalist, but if I wanted to make an Anti-Rationalist argument, it would probably go something like this: the world is not a computer.

That is, there may be elements of the world in which reality looks like a near-infinite series of computations constantly being solved and updated and reset; therefore the best way to make good decisions is to develop complementary systems that are really good at correctly solving those computations. But maybe that's just us projecting our internal understanding onto the world. Or maybe this rational world is objectively real, but is merely a very thin bubble encapsulating an underlying world that is better understood through some other attribute. Or perhaps that underlying world is ruled by pure chaos.

By way of analogy, we know that hunter-gatherer tribes sometimes use ritual behavior as a means of randomizing decision-making for things like where to hunt. In this example, the ritual might outperform other more rational methods that involve specific knowledge of the prey. Likewise, a random number generator might outperform the ritual. And a systemized understanding of animal grazing patterns might do better still. But why should it stop there? Maybe there are an infinite number of possible paradigm shifts that take us in and out of what we might consider rationality.
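If it helps, the ritual-as-randomizer idea has a clean game-theoretic reading: against anything that can learn your pattern, a predictable schedule is exploitable and a randomized one is not. A toy simulation in Python, with the numbers and the "prey learns your rotation" rule invented purely for illustration:

```python
import random

SITES = 4
ROUNDS = 10_000

def hunt(choose_site):
    """Catch rate for a hunter whose last move is visible to wary prey."""
    catches, history = 0, []
    for t in range(ROUNDS):
        # Prey extrapolates the hunter's rotation (last site + 1) and
        # avoids that spot; this is the "pattern gets learned" assumption.
        predicted = (history[-1] + 1) % SITES if history else 0
        prey = random.choice([s for s in range(SITES) if s != predicted])
        site = choose_site(t, history)
        catches += (site == prey)
        history.append(site)
    return catches / ROUNDS

rotation = lambda t, h: t % SITES               # predictable schedule
ritual   = lambda t, h: random.randrange(SITES) # randomizing "ritual"
print("rotation:", hunt(rotation), "ritual:", hunt(ritual))
```

The rotation hunter's catch rate collapses to roughly zero once the pattern is learned; the randomizer keeps the unexploitable one-in-four rate.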

I guess you could respond that by saying that, whatever the next evolution of correct decision theory is, the Rationalists will be there. But that would imply that there are a lot of conceptions of rationality that only appear rational in hindsight.

I don't really believe this, but I think that I believe something like it.

Expand full comment

> the world is not a computer.

Prove it! Even deterministic computers are unpredictable due to Rice's theorem.

Expand full comment

A computer cannot enforce correlations over a distance that cannot be crossed at the speed of light. Reality, which follows the rules of quantum mechanics, can and does.
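(For readers who want the standard quantitative version of this claim, it's the Bell/CHSH inequality: any local classical model obeys |S| <= 2, while measurements on a quantum singlet, with E(a, b) = -cos(a - b), reach 2*sqrt(2). A minimal sketch of the arithmetic, using the textbook angles:)

```python
import math

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable (classical, lightspeed-respecting) models obey
# |S| <= 2. For a spin singlet measured at angles a and b, quantum
# mechanics predicts E(a, b) = -cos(a - b), which breaks that bound.
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), comfortably above the classical 2
```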

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

That's incorrect. You're conflating computer time with simulation time, but these do not necessarily have a 1:1 relationship. Furthermore, superdeterministic theories provide a local interpretation of the apparent non-local correlations.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

No, that doesn't work. The only way to enforce quantum correlation precisely with a classical computer is to sum an infinity of path integrals for every event. So you'll need infinite simulation time for any finite real time.

Superdeterministic theory is a cute toy, but without evidence in the real world. So that is not a serious objection which needs addressing. I can also observe that postulating an omniscient and omnipotent God also serves to explain a quantum universe purely classically, but this is just as specious an objection.

If you're going to say the word "computer" in your sentences means "whatever device operating by whatever laws necessary to make this sentence true" then OK you win. I assumed we were talking about actual real computers operating according to the actual real rules of physics.

Expand full comment

> So you'll need infinite simulation time for any finite real time.

That's not correct. Quantum computers can be classically simulated with only polynomial slowdown.

> Superdeterministic theory is a cute toy, but without evidence in the real world.

It has just as much evidence for it as any other interpretation of QM. In any case, this is beside the point, as any deterministic interpretation would make this claim equally valid. Bohmian mechanics is often used in quantum chemistry, for instance.

Expand full comment

What Sandro said. Also, you're assuming the computer is embedded in the same physics as it is simulating. And there are also wacky possibilities, like some way to do infinite computation (and memory), as in "I don't know, Timmy, being God is a big responsibility": https://qntm.org/responsibility

...well, that one is really wacky:

"Tim, look behind you," said Diane, pressing a final key and activating the very brief interference program she had just written, just as the Diane on the screen pressed the same key, and the Diane on Diane-on-the-screen's screen pressed her key and so on, forever.

Tim looked backwards and nearly jumped out of his skin. There was a foot-wide, completely opaque black sphere up near the ceiling, partially obscuring the clock. It was absolutely inert. It seemed like a hole in space.

Diane smiled wryly while Tim clutched his hair with one hand. "We're constructs in a computer," he said, miserably.

"I wrote an extremely interesting paper on this exact subject, Tim, perhaps you didn't read it when I gave you a copy last year. There is an unbelievably long sequence of quantum universe simulators down there. An infinite number of them, in fact. Each of them is identical and each believes itself to be the top layer. There was an exceedingly good chance that ours would turn out to be somewhere in the sequence rather than at the top."

"This is insane. Totally insane."

"I'm turning the hole off."

"You're turning off a completely different hole. Somewhere up there, the real you is turning the real hole off."

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

Yeah I do assume we're talking about the real world with its actual known rules. If you want to assert that it's possible for some mechanism operating by entirely different and yet unknown rules to explain everything, that's fine, but that puts you in the same category as the Intelligent Design folks who assert everything can be explained by the existence of God. In both cases we're talking religious faith, which is fine by me, but in neither case are we talking falsifiable theories subject to rational debate.

Expand full comment

The world being a computer is not the null hypothesis.

Expand full comment

I'm never sure if we are talking about rationality as it actually exists, or rationality in the same way that a feminist says "If you don't literally think women should die in the streets, you are a feminist" and then goes on to embody the mishmash of 1000 extra rules, stretch goals, and contagious unhappiness that everyone actually recognized as feminism before it stopped being a thing.

Because there's that level where, like, you ask someone what rationalism is and they mumble something about updating priors with a formula. But then there's the "what a feminist actually is, ignoring the dictionary definition for actual reality" version as well. And like feminism, it's not all one thing.

I don't know that everything in "actual" rationalism is bad. Like, almost everyone prominent in the space follows a rule that goes something like "Write an article that inescapably indicates that someone was lying, but refuse to say the word lying, or indicate the person did anything bad, and certainly don't try to use any of your influence to create an incentive for them to tell the truth".

But then every one of the same guys knows it's bad, because they spend endless hours trying their hardest to indicate they don't do that, and doing a bunch of work to be really exceptionally accurate beyond all expectations. And they'd say something like "well, there's no chance I could effect change, or we could change norms that way, society is really just a tide you can't resist, we function within it", but still then see stuff like prediction markets and notice it might help solve the problem with dishonesty without them ever having to say "lie", even though they want to use it as an over-complex lie detector in a very literal way.

And in that case all the "well, you can't change society, you have to function in it" concerns disappear, and there's a ton of faith that a bunch of journalists will suddenly use the machine anyway, against their own best interests, even though they have no incentive to use the lie detector (virtually everyone with a masters-or-higher education thinks it's gauche to discourage falsehood). Otherwise they can just ignore it forever, go on lying, and nobody who wouldn't have been confrontational enough to call them on it then will call them on it now.

And then you spread that kind of thing out, and you find out it's a group of people who are really, really proud about their ability to seem reasonable in forums and who use the whole thing for virtually nothing else. They all still would have been software engineers, would have had essentially the same (or less) social problems, etc. Would have voted the same way, been influenced by the same social factors.

And sometimes that ability to talk really well in forums itself does cause benefits - say, a bunch of start-up tribe people chucking money at them, which then gets donated, which is probably great. And maybe the way that money gets invested is different than how any other "current movement the rich read to feel like they are keyed into thought leaders before going on and continuing to act pretty much the same way" group would have invested it, but maybe it's not; hard to say.

All that to say, there are dozens of things like that wrapped up in rationality. And when someone on Twitter is bashing on rationality, it's usually not the reasonable "Sometimes I write stuff down and try to think of it mathematically" bit. It's going to usually be the "these are a bunch of guys who talk a certain way in forums, with very few exceptions accomplish very little, and despite their nothing-but-good-forum-game status act a lot like they are pretty substantially the best thing ever, even though they are mostly just normal software engineers" angle.

Expand full comment

My steelman of the "ton of faith that a bunch of journalists" bit is that spreading that knowledge is an attempt to make it common knowledge that the point of the game is unrelated to truth by visceral demonstration. In other words, if the alternatives are better, nobody will care, but if they're known-to-be-known better, that takes away a lot of cover for bad behavior. And, as xkcd says, if in doing so we get media to execute petty status games and dunks in ways that happen to be thoughtful, engaged with reality and open to criticism, well, mission accomplished?

Expand full comment

What I'm saying is, OK, that can all technically happen. We could get prediction markets really good and data-rich, keep them updated all the time, keep them from getting poisoned by partisan gaming of what fulfills a prediction, etc. And you'd have this great system; let's say it's a given that it was accurate, that it showed who was reliable.

But then you have to acknowledge a couple things. So, for instance, Scott and Zvi recently both wrote articles saying that almost everyone knows that journalists are lying approximately all the time, and then each gave their complex set of heuristics they use to get data out of the noise. But they were talking to an audience who, whether or not they have as advanced of heuristics, already know those guys are lying.

So continuing that thought, you can sort of start to sketch out three groups: people who know they are lying and can parse, people who know they are lying and abandon them, and people who don't know they are lying. The last group is pretty hopelessly dumb; there's basically nobody who will ever read this comment who doesn't know they should be instinctively distrustful of news sources, even if they don't do it all the time. The people who don't know that usually don't find their way to the Scotts of the world, because they don't know they need a better source of data in the first place.

So the bad group can't judge the journalists, because they don't know. But note the other two groups *do* know they are lying approximately 100% of the time, but refuse to do even the slightest of small things about it; they won't even use their influence to disapprove. The established mode is "analyze the claim, find it's inaccurate, but never, ever, ever suggest bad will. Say "wrong" like it's just this time, like they'd be trustworthy if it was their wheelhouse or something. One big accident. Let's not get crazy".

So let's then say you have a perfect indicator of the fact that pretty much all journalists are lying pretty much 100% of the time. Why would it change anything? The only group that doesn't already know that is the people who are too dumb/lazy/unknowing to know they should be looking for it in the first place.

Everyone who cares about that sort of thing already knows they aren't reliable - and they won't call them on it, or do anything about it. It's like someone who won't jog saying they will get in shape just as soon as they buy a bowflex; no, they won't. They've already told you they won't by not jogging. It doesn't matter if the bowflex is better than jogging. What matters is that they are buying the bowflex as an excuse for why they aren't jogging; it makes them feel better.

The really short version of this is that everyone who would be savvy enough to know why a prediction market would be good for determining credibility and how that works already knows to a close approximation how unreliable mainstream sources are, and they don't do anything with it for social-image reasons, a commitment to other values that supersede enforcing honesty, and fear. If they won't use the near-universally accepted knowledge that those guys are bullshitting now, they won't use it then either; it's just saving up for a bowflex instead of doing pushups today.

Meanwhile, if you did manage to dumb down the prediction markets enough that Joe average could easily use them/understand them in a way that would make the currently-unknowing find it useful, there's a high likelihood it ends up prone to the same problem fact-checking has now; it's then a switch people hit so they can be told what's true without doing research. And that will work just as long as it takes the people running the thing to realize that is a useful power to hold, and then you get snopes/politifact problems pretty quickly.

Or maybe I'm completely wrong about prediction markets and how they work in terms of enforcing credibility; it's possible. But my gut is that even holding a stone tablet from the heavens listing every unreliable person, the people who aren't acting on near-universal knowledge of unreliability in any significant way now won't then either.
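For what it's worth, the mechanical half of that stone tablet is the easy part. A toy credibility ledger using Brier scores (pundit names and forecast records invented for illustration) would look something like this; the hard part, as argued above, is getting anyone to act on the output:

```python
# Toy Brier-score credibility ledger; pundit names and records invented.
def brier(record):
    """record: list of (stated_probability, outcome) with outcome 0 or 1.
    Lower is better; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in record) / len(record)

pundits = {
    "hedged_reporter": [(0.7, 1), (0.6, 1), (0.3, 0), (0.8, 1)],
    "confident_fabulist": [(0.99, 0), (0.95, 1), (0.99, 0), (0.90, 0)],
}
for name, record in pundits.items():
    print(f"{name}: {brier(record):.3f}")
```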

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

> The last group is pretty hopelessly dumb; there's basically nobody who will ever read this comment who doesn't know they should be instinctively distrustful of news sources, even if they don't do it all the time. The people who don't know that usually don't find their way to the Scotts of the world, because they don't know they need a better source of data in the first place.

Don't undervalue the effect of time and direct experience! https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science Practiced rationality gains adherents to a significant extent by traumatic experiences of betrayal by mundane society. People to a large extent exist in a default state of trust, and are thrust into cynicism by a revelatory moment, or a succession of them. The goal of engineering better alternatives is not just to inform the people who already know, but to generate contrastive views that force the orthodoxy into contortions, which then catalyze such moments of traumatic enlightenment.

"No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand."

Expand full comment

Yes, it feels as if the meaning of rationality/rationalism shifts a lot.

And some proponents of rationalism do something very similar to what you describe in your first paragraph.

And between the jargon and the occasional grandiose claim, it's no wonder that some people's performance of rationalism triggers other people's nut-cult-detector heuristics.

Expand full comment

Agreed. That said, I get a lot of value out of the community; there's legitimately a lot of smart people hanging around. As a dumb guy who can talk, it's a very useful community to be friends with.

Expand full comment

> "these are a bunch of guys who talk a certain way in forums, with very few exceptions accomplish very little, and despite their nothing-but-good-forum-game status act a lot like they are pretty substantially the best thing ever, even though they are mostly just normal software engineers"

Yes, I think that's pretty much it, if you also add "...and these guys are super worried about Skynet" as a rider.

Expand full comment

Obligatory comment: Skynet is the wrong analogy. The sort of AI we're worried about doesn't show up in fiction, because it would be bad fiction: the story would just suddenly end by everyone dying.

Expand full comment

I cannot resist the temptation to reply, "yes, because even fiction has to be somewhat realistic" :-)

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

I'd be cautious about underestimating the possibility of suddenly killing everyone, considering the world just got its ass handed to it for two years by a freak bat accident: how hard do you think it is to construct a variant of Omicron that kills everyone a year or so after infection? Or one that just sterilizes people? Hard, right, but unthinkable? And that's just the first thing that came to mind. Humans are balanced against an ecosystem created by evolution; an intelligent malicious agent doesn't need to be all that much smarter to escalate bioweapons higher than we can deal with.

Expand full comment

Not hard; at a first pass, impossible. The war between viruses and eukaryotes has been going on for a billion years, and a stalemate was reached a very long time ago. All the various molecular avenues were explored and conclusions reached. What you see today are assorted minor fluctuations and readjustments in the border, skirmishes and border incidents.

You'll note that *no* eukaryote species has been abruptly wiped out by an emergent virus -- ever. There is no trace of such a thing happening in the fossil record, either, so far as we can tell. This is not just some weird lucky accident, it's evidence that equilibrium was reached long ago.

To design a genuinely catastrophically 100% lethal worldwide virus[1], you would need to start from scratch, design some wholly new molecular machinery for propagating and yet also killing cells[2]. You can't just vary the machinery Nature offers you, because every possible basic variation on that has already been tried, sometime in the last 800 million years. You would need to come up with something brand new. That is certainly possible, but it would require insight into molecular biology 50-100 years more advanced than we have now, as well as unimaginable new techniques for building things on the atomic scale.

-----------------

[1] COVID doesn't even remotely qualify. It's had far less impact on human society and numbers than even the 1918 flu, let alone the Black Death.

[2] A much harder task than it may seem: it's easy to design agents that kill every cell they meet, 10M HCl will do the trick, or a few thousand curies of radium. But the task here is to design something that not only (eventually) kills its hosts 100% but also (before it kills them) somehow uses the host's molecular machinery to spread itself effectively. As an engineering problem, quite tricky.

Expand full comment

Just because you can imagine something doesn't mean it's real. As @Carl Pham pointed out, a global pandemic that kills/sterilizes everyone is pretty much impossible. He gives an "optimistic" estimate of 50-100 years' worth of research, but I would be very surprised if such a task were possible in principle. Other powers often attributed to "superintelligent" (whatever that word means) AIs include things like molecular nanotechnology and FTL communication, which are outright prohibited by physics.

Yes, it's possible to imagine how a nearly omniscient and omnipotent entity could wipe out humanity overnight, but you don't need AI for such a scenario: it could be angry volcano gods, demons from Hell via Phobos, alien highway builders, whatever. None of them are any more likely than the other, and all of them are significantly less likely than something relatively boring, like a gamma-ray burst.

Expand full comment

Alternative, from a review of a novel I once saw: "This could happen in real life, but in a novel it becomes too far-fetched."

Expand full comment

It does. Well, maybe not Unfriendly AI, but not-quite-what-we-might-want AI, as in "MLP: Friendship is Optimal", or pretty-much-friendly-AI "The Metamorphosis of Prime Intellect".

Expand full comment

My feline working definition of rationalism is "trying to root out my own biases in order to better understand cause and effect"

Expand full comment

>>>"Democritus figured out what matter was made of in 400 BC, and it didn’t help a single person do a single useful thing with matter for the next 2000 years of followup research, and then you got the atomic bomb (I may be skipping over all of chemistry, sorry)."

Not all of chemistry, which was pretty much trial and error (which I would call a use of rationality) in the production of dyes, paints, and ceramics. Trial and error produced heuristics and traditional rules until we got enough data, and isolated concentrations of specific materials, to be accurate about conclusions drawn about those materials.

Rationality depends on an assumption about the quality and quantity of data known about a topic. Using Rationality when the data is sketchy or fragmented won't yield good results, and it's better to fall back on traditional things.

Expand full comment

Also, Democritus didn't figure it out, he guessed it.

Expand full comment

Assuming that the book you are talking about is _Rationality: what it is, why it seems scarce, and why it matters_, I read it late last year. This thread didn't seem to resemble the book very closely, so I dug up the review I wrote at the time.

This book reads like an _apologia_ for the rationalist movement, except that the author doesn't refer to it by that name. He's obviously aware of it, and even cites one or more well known members. But the book carefully uses "rationality" in place of "rationalism"; I wound up half convinced Pinker made this change to make it less likely for his readers to find some of the nuttier rationalist pronouncements with a simple google search.

A good chunk of the book was devoted to presenting heuristics useful for getting better results - nothing unfamiliar to anyone involved with rationalism, but presented in a way that couldn't possibly be seen as either cult-like, or extreme. Weird jargon like "motte and bailey" was omitted, as were grandiose claims about what rationality can do.

But the thing about apologetics is that they are intended to persuade readers - that the thing really isn't so bad; the people involved are reasonable; various everyday objections are simply wrong. I'm more familiar with them in a theological context - particularly in the context of attempts to convince educated, high-status Romans that Christians were neither nutters nor uneducated losers, and that their beliefs were compatible with (pagan) philosophy.

And they don't include discussion of cases where the thing being defended is inapplicable, useless, or worse.

IMHO, Pinker only really defends the almost tautologically valid core of rationalism. Not "systematized winning". Not formal study (of anything). And certainly not some bizarre aspiration to reason out everything one does.

Expand full comment

“Even in whatever dystopian world they created, people would still use rationality to make cases.”

That is a very optimistic viewpoint, there are plenty of places in the world where people make choices based on superstition, wishful thinking, fatalism or reading Twitter. An anti-rationalist dystopia would be a place where that sort of thinking is strongly encouraged.

Expand full comment

Note the words "make cases" as opposed to "make choices", and the status of that sentence as hypothetical presuming Pinker's claim that anti-rationalists inevitably use rationality to make their cases.

Expand full comment

Well that clearly isn't true either. There are plenty of non-rational methods of making a case, such as violence, threatening someone's livelihood, or spurious appeals to authority.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

This is a very interesting essay, and well written. A pleasure to read.

I do think you've entirely missed the argument (or more precisely fear) of the "non-rationalist" side of the debate, however. It's not that "rationalism" -- arguendo the construction of conscious logical chains of deduction -- is not as good at discovering the way to become rich and happy as "listening to instinct." It's that conscious reasoning presents the *unique* danger of constructing outright evil. We are not, for example, born with any instinct to round up Jews by the hundreds of thousands and gas them. That's a thing to which people have to arrive by some very sick and twisted process of heavily conscious rationalization, which generally speaking *violates* a whole lot of base instinct and intuition -- which is why it has to be buried under layers and layers of obfuscation, lies, self-deception, wilful ignorance, and, alas, Jesuitical rationalization.

Same with building nuclear weapons and using them on a city full of kids. Same with building a chemical or nuclear power plant, or new airplane model, while knowingly cutting corners on safety in a way that leads to disaster. Same with setting up a secret police or gulag, and saying things like "the death of a million is a statistic." Generally speaking, when we humans construct the largest-scale moral evils, a major enabling aspect is deploying rationalization on an epic scale -- everything from Aryan race science to the theory of jihad and the 50 virgins to an aging KGB thug muttering "Einkreisung!" to justify shelling kindergartens.

So when the "anti-rationalists" are cautioning against relying on a "calculational" kind of conscious thinking, I think *that* is the real bugbear: they see it as all too easily allowing a person (or demographic) to mistake rationalizing for rationality, and rationalize themselves (or all of us) into some dark and evil place. If the worst that could happen with rational argument is that you got obvious nonsense instead of the right answer, that would be one thing, and pretty mild, but history suggests the worst is much worse than that -- that you can talk yourself (and others) into a deeply wrong answer. It seems difficult to "instinctively" do the same; when instinct goes awry it seems more likely to lead to failure, lack of progress, or at worst chaos. It rarely seems to lead to effective and organized evil -- it takes the conscious mind to pull that off.

Expand full comment

On the other hand, rationality also enables the construction of outright good, which also does not happen by default. Instinct tells you that your neighbour is easily a few thousand times more important than children in Africa dying from Malaria. Instinct also tells you to obfuscate this fact while making the right prosocial noises. Rationality may be good or evil, but it's rarely as thoughtlessly mean as everyday life.

Expand full comment

Rationality seems to be more powerful at doing both good and evil, and the jury is still out on which of these effects is stronger.

Pinker believes that the good effects are greater (Enlightenment Now is the whole book on this topic), but as the recent events remind us, one nuclear apocalypse is all it takes to change the overall balance.

Expand full comment

Rationality allows you to arrive at objective ethical truth if moral realism is true. Rationality alone is not sufficient. If moral value is just preference, as many rationalists believe, then favouring your neighbours is a preference, not a bias.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Yeah, no kidding. Nobody denies that human reasoning, like the power to harness nuclear fission and fusion, can be an extraordinarily powerful force for good. But also evil. The moral catastrophes occasioned by being selfish about your children's welfare vis-a-vis Africans in Africa are nothing compared to those delivered by an admiration for some Grand Theory -- such as the millions snuffed in service of the collectivization of agriculture in the Soviet Union, the further millions wiped out by the Great Leap Forward, the misery and oppression delivered across many centuries by assorted intellectual theories of why this race is better than that, or why this ideology will eventually result in utopia and this other won't, and so on. It wasn't *instinct* that gave us the Cold War or the gulag, or destroyed the Aral Sea.

Reason landed men on the Moon, gave us modern public health and agriculture, eradicated polio, et cetera. So I'm firmly on the side of reason. But it would be morally ignorant to just dismiss the very reasonable concerns, based on the events of history, that reason (or more precisely ratiocination), like nuclear power, can do great evil when pride and ambition overwhelm common sense and a touch of humility.

Expand full comment

And most of the people responsible for those catastrophes (Stalin, Mao, Pol Pot) were basically hipster art-school intellectuals who believed they could reason their way to the good, do the math that satisfied utility, etc. (the types currently attracted to rationalism). Funny how the dangers rationalists pose ("if we just force everyone into collective farms, everything will work out great, I did the math, promise!") are similar to those posed by the AI they are so worried about. Unfortunately, while I love the blog, and the comments are better than most college seminars at the best schools, this is the recurring blind spot of both Scott and a huge chunk of his readership (overthinking, too much faith in reason, out of step with mainstream perspective (fractal, pretty much at any level you want to analyze)).

Expand full comment

Huh? Scott personally is always going on about guardrails and hills of skulls. OTOH, Yudkowsky has written the most forthright defence of fanaticism I have ever seen.

https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks

Expand full comment

This is exactly it - to define rationality/rationalism in such a way that it is trivially correct isn't really helpful. The (more thoughtful) anti-rationalists aren't arguing against a platonically perfect rationality, they're arguing about actual outcomes of 'rational' practices filtered through the human psyche.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Rationality, despite your writing that you're not Descartes, appears to just be a new birth of what rationality always was. The belief that reason is the chief source of knowledge which is to be generated by and tested against intellectual thought. Of course this sometimes intersects with empirical reality. Even Descartes did empirical experiments to verify his reasoning. But fundamentally it's obsessed with thought and ideas in a way that betrays its common roots. The simulation hypothesis, for example, is something very like Descartes' demon and which would notably be rejected by the more empirical philosophies.

Despite Yudkowsky's claim that rationalism is systematized winning, rationalism seems very focused on thought and very little on praxis. Though weirdly, this rationalist self-reflection doesn't seem to reach to actually engaging in self-reflective philosophy. This leads to a certain fuzzy blindness around what rationalism actually is, in my experience.

Expand full comment

Ha ha excellent. Have you heard the wry gloss on Aristotle? "Man is not the rational animal so much as the rationalizing animal."

Expand full comment

Aristotle was, of course, defeated by Diogenes. Diogenes proved that man is not a rational animal so much as he is a plucked chicken. Naked, squawking, and for some reason in an academy.

Expand full comment

"The simulation hypothesis, for example, is something very like Descartes' demon and which would notablly be rejected by the more empirical philosophies."

This is important. Descartes' demon is effectively an omnipotent god, which makes sense because Descartes' metaphysics was grounded in his belief in God. Likewise, the simulation hypothesis is grounded in the idea that the world is a big computer and that consciousness is what arises from a significantly advanced set of computations.

Expand full comment

The simulation hypothesis is not rejected by empirical philosophy, precisely because it's not observable. It postulates an unnecessary entity and is thus *penalized*, but not *rejected*, because in empiricism theories are only rejected for contradiction with observation.

Expand full comment

Correct, though I'd comment that you're just elaborating on what I already meant.

Expand full comment

We do not have anything other than reason, fundamentally, so I'm not sure what's supposed to be the alternative. Empiricism doesn't seem to be an alternative. The dilemma is, if anything, in a tradeoff in emphasis between seeking new data and processing already gathered data.

Expand full comment

I'd suggest you read Hume. Not because I am Humean but because he's a western-tradition philosopher who doesn't believe that reason, fundamentally, exists. As it is, I see this comment as basically someone who's only acquainted with one school saying it is the One True Way.

Expand full comment

Aren't you making this too complicated? To the vast majority of people, "rationality" means being logical, cold, and pragmatic, and justifying every decision down to 3 decimal points, like Spock (or some evil robot). Rationality is explicitly opposed to "humanity", which means going with your gut, having hope when things seem bleakest, and persevering against all odds. That's pretty much it; everything else is just fancy wording.

Expand full comment

Almost all public debates are framing debates. As such, this post too can be seen as an explicit rejection of this framing, probably because the framing is an attack on rationality and its followers.

Expand full comment

Honestly, as I see it, capital-R Rationalism is just a club/fandom for people with weird hobbies. There's nothing wrong with that; there are a million fandoms out there, like Warhammer, Star Wars, artisanal bread, nature hiking, what have you. And yes, members of these fandoms often tend to look down on non-members (to various degrees) -- and vice-versa. But I don't think the self-perception of Rationalists as being the next evolution of humanity, or saviours thereof, is justified. Geeking out about AI is fun, but it's no more (nor less) impactful than geeking out about forest hex movement penalties.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

"Almost all debates are framing debates, and this framing is an attack." "Well rationalists think they're the next evolution of humanity, and their saviors, but their hobby actually has no impact." ... Not sure how to respond here. I ... disagree? With pretty much everything you just said? I don't think rationalists think they're the next evolution of humanity, but way more than that I think it's just silly on the face of it to claim that geeking out about AI (or as it's called, working on AI) is not impactful.

Expand full comment

Working on actual AI (machine translation, autonomous driving, protein folding, etc.) is definitely impactful. Writing philosophical treatises about averting the hypothetical Singularity is super fun, but significantly less impactful. Posting on Internet forums about it, as I am doing now, is the least impactful activity of all :-)

Expand full comment

Hilarious, you're proving his point (like the folk who really think whether, e.g., they recycle matters, not knowing most recycling now just goes to the landfill, or who think their Tesla is clean even if the power comes from a coal-fired plant).

Expand full comment

Rationality can only be defined with regard to some set of values, and the default set are self-centered values. On the plus side, that means you can get rational people to do things that are in your mutual interest; on the minus side, people want people to be altruistic about their tribes and relationships, and not bail as soon as they cease getting utility out of them.

Expand full comment

This is tangential to the point of the piece, but, to my mind, the better rejoinder to Professor Pinker’s tweet would be that Pinker purports to write a book defending rationality. But every single one of his arguments uses rationality—he offers reasons and evidence to justify his conclusions—thus presupposing the conclusion he sets out to defend. If I wrote a book purporting to prove the validity of astrology and every single one of my arguments contained unquestioned astrological premises, my argument in a circle would be no more or less effective.

This critique of Pinker is just as glib as Pinker’s own response, of course, but it is also (presumably—I haven’t read his book) just as accurate.

Expand full comment

My take on this would be similar. I think I would ask of rationalists "Well, what sort of criticism of rationality would you allow or entertain?" As far as I've seen there aren't any, so critics of rationality are pretty much wasting their time.

For some reason I would expect much better/more humble/more self-critical attitudes from rationalists, but why? When it comes down to it the final justification will always be that "We like it!" or "Rationalism suits me!"

I often think of 3 people together - a devout Muslim, a hardcore body-builder, and a committed rationalist. Each will have a defence of their world and its memes and codes and justifications, but I would say that for all three there isn't much point in a non-believer trying to dissuade the devout.

Expand full comment

I'm fairly sure that what jumped to my mind is a species of the first kind of dispute Scott posited.

It says this: rationality (computing) might be useful, but in fact, in very many situations in life, it is confounded by the fact that we don't know the necessary underlying facts; and even more confounded because we *think* that we do know those facts.

In particular, the kinds of facts are things like "what would make me happy" and "what would make her life better". Also lots of complicated things about how the physical world works (think covid vaccines) and how institutions work (the economy); but above all, people fail to understand themselves and other people in such a fundamental way that it's really pointless trying to use your own expected outcomes as a base for calculation. It would be better to use (fill in your favourite heuristic or combination thereof).

I'm not sure to what extent I believe this. But I do feel like I'm 40 and while I know quite a lot of short-term things about myself (I like beer, I don't like weird flavours of crisps), I'm painfully aware that I don't know an awful lot of stuff (I also like tea, but would altering my current ratio of beer to tea make a +ve/-ve/no difference?). And this applies to... everything, big things and small things. Children, marriage, career, place of residence, etc., etc. So what would I even use as the basis for any "rational" calculations?

That said, I agree with Scott's position at the end of the post. So meh, I dunno.

Expand full comment

What does the picture associated with this post mean?

Expand full comment

Diamonds. They are found in kimberlite, and the associated picture depicts that, which ties in with Scott's example of geologists versus 'I just feel there are diamonds here' searching.

Expand full comment

This - “that’s confusing money-making with the study of money-making. These two things might be correlated - I assume knowing things about supply and demand helps when starting a company, and Keynes did in fact make bank - but they’re not exactly the same. Likewise, I don’t think the best superforecasters are always the people with the most insight into rationality - they might be best at truth-seeking, but not necessarily at studying truth-seeking.” - seems true. Art critics or food critics are definitely not the best artists or cooks; this probably extends over many domains, and only occasionally are people both (some writers are good critics as well as novelists).

Expand full comment

I think that acting rationally isn't necessarily based on logic but on what's valid in a given environment. If the heuristic "buy the brand you know" makes for sufficiently good outcomes for you, then logic isn't required in choosing what kind of detergent you should buy. It's kind of the thesis that Gerd Gigerenzer uses to argue against the (ir)rational biases formalized by Tversky and Kahneman. In short, most of what's deemed irrational (i.e., choosing the option with less gain just to avoid potential losses), as per the definition of Tversky and Kahneman, is actually quite rational if you consider that we're agents in a real world in which rationality doesn't always lead to the best outcomes: rationality would tell you to be an atheist, for instance, because there's no evidence of God. Such a conclusion would get you put at the stake and burned alive in 14th-century Spain (or wherever and whenever they did this). In that scenario, you'd probably best continue acting "irrationally" to save your skin, find a mate, and propagate your genes.
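One concrete way to see how refusing a "rational" positive-expected-value gamble can be ecologically rational is the standard multiplicative-wealth example (my numbers, not Gigerenzer's):

```python
import math

# Each round: wealth multiplied by 1.5 or 0.6 with equal probability.
# The ensemble average says "take the bet" (+5% per round); the
# time-average growth rate says a single repeated player goes broke.
up, down = 1.5, 0.6

ensemble_ev = 0.5 * up + 0.5 * down                       # 1.05
growth_rate = 0.5 * math.log(up) + 0.5 * math.log(down)   # about -0.053

print(f"expected multiplier per round: {ensemble_ev:.2f}")
print(f"time-average growth rate: {growth_rate:+.3f}")    # negative
# Wealth after n rounds ~ exp(growth_rate * n) -> 0, so the "loss-averse"
# refusal is the strategy that actually propagates your genes.
```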

Also, most debates about rationality - and about anything else, really - are about what values we place higher in the hierarchy of morality. What I think Pinker might be doing (without having read the book) is simply placing rationality as the crown jewel of human achievement and the supreme moral value we all ought to strive toward. Gardner here seems to respond in kind by saying that respect, religion, and relationships are, well, important too. To me, this kind of discussion is kind of pointless because you can never sufficiently prove that any one of those values (I say values because that's what they are ultimately represented as in the human brain) is better than the other.

Expand full comment

The last sentence is sort-of Nietzsche's argument against truth & science & the like as ultimate values.

Incidentally & while at it, Nietzsche's praise of the Will as superior to any of that is close to the "to be rational is to win" stance. I detest that thought, not least since that's how some intellectuals in the Third Reich could rationalize Nazism... but ok, that could arguably be my values getting involved...

Expand full comment

I would agree that placing any value above the other is a tricky (and arbitrary) business, which would reflect Nietzsche's thinking. I didn't write anything about Will to Power, though, so your addendum is kind of misplaced as an answer to what I wrote. Although I get it: many people have strong emotional reactions to moral issues. That's why politics and religion have such a potential to be a dumpsterfire of outrage.

Funny thing, though: Nietzsche was aware that his thinking could be dangerous in the wrong hands. That his philosophy was misused by the Nazis is less his doing than the fact that his sister, after he went crazy and died, was a prominent Nazi herself, and edited most of his works. Heck, even Hitler went to her funeral. Nietzsche himself was actually not anti-semitic, often going as far as to praise Jews (which, at that time, was *really* sticking your neck out).

As with any philosophy, people can read between the lines and infer implications that the author didn't intend.

Anyway, my intention was only to say that - for me - I can't really see anything productive about arguing whether rationality is good or bad, or how it relates to other important values. but apparently, a lot of ink can be spilled over these issues (mostly without any satisfactory results).

Expand full comment

...sure, I agree Marek, that the addendum was not related to your post, that's why I wrote "incidentally & while at it", so to signal that it was not meant as any criticism of your points in any way.

...since you add some ideas about Nietzsche, let me for the record say that I do not let him off the hook quite as easily as you do...the thing is, that everything that is written can be interpreted in many different ways...in particular after the author is dead. Now, although that is true, a text will still yield more/less "resistance" to various interpretations. My problem with Nietzsche is that his texts yield little "resistance" to Nazi-type interpretations (or interpretations voiced by voluntaristic ideologies more generally - of which Nazism and Fascism are only the two most well known).

...Admittedly, that is again a value-type argument on my behalf. And you can interpret Nietzsche's texts in very sunny ways, I grant you that. But there are also some very, very dark ways to interpret what Nietzsche wrote, and his texts yield little resistance if you are psychologically inclined to go down those paths.

Expand full comment

Ah sorry, I missed the part of "incidentally & while at it"!

I think the argument that some texts lend themselves more to be misinterpreted than others is hard to make. Take the Bible, for instance, which is basically half appealing to the higher qualities of humans (kindness, care for one's neighbor, etc), half the absolute evil anyone can conceive (xenophobia against unbelievers etc.). Where believers tend to focus on the former, atheists - or people opposed to the faith - tend to do the latter.

In other words, the "resistance" often comes from the values we hold; it's in the "eye of the beholder". But you talk about that yourself in the last paragraph, so I think we're mostly on the same page. If your intention is to chalk up more of the "dark" rather than "sunny" qualities to Nietzsche's texts, then I can't really argue with you: I can totally see and understand how people could - and do - interpret his books not-so-charitably (the dude's opinions on women also didn't age well, just to put that out there). I can also see how it might empower some people: the notions of eternal recurrence and amor fati come to mind.

Expand full comment

Yes, we are on the same page, "disagreements" are only details, really.

...But since details are fun to discuss, perhaps I can nit-pick your claim that it is possible to misinterpret Nietzsche. In my opinion, this is wrong: Once an author is dead, there are no misinterpretations - only interpretations. Since we cannot then ask the author to clarify which of our competing interpretations he/she aimed for.

Thus, I will argue that Nietzsche’s sister, and the Nazis, did not misinterpret Nietzsche - they offered their interpretation, which is just as valid as any other interpretation.

Don't get me wrong; I applaud the work of liberal-minded Nietzsche scholars like Bill Leitner (who runs an interesting philosophy blog) to sanitize Nietzsche. They do a fine job by making Nazi-type interpretations of Nietzsche more difficult to float in public ("that is not what Nietzsche meant" etc.), but this moral sanitizing effort, which I applaud from my personal-ethics point of view, is nonetheless wrong, in the sense that it is perfectly possible to offer a Nazi Nietzsche based in the texts we have available. In short: when it comes to texts, there is no right or wrong - only interpretations.

That is what I meant by stating that Nietzsche offers little resistance to those who want to interpret him as legitimizing a Blond Herrenvolk subduing all the Untermenschen, i.e., most of humanity. And to hail war above peace, and Will above Truth.

...you compare to the Bible, but the Bible is a text of many authors, so that is not a fair comparison. Better to choose one of the evangelists, e.g. Luke, and check if you can interpret him as legitimizing killing or subduing "the others". And yes, that is possible, since any text can be interpreted in just about any way, but I would insist that you have to do a lot more "creative interpretation" to make Luke's (or any other evangelist's) Jesus Christ into someone who says apartheid is ok and you really should kill those who do not voluntarily come to Christ and "blessed are those who make themselves hard" and so on. Again, it is possible to interpret his teachings (any teachings) that way, and it has been done; but there is a hell of a lot more resistance in the text that you must overcome to make that interpretation sound convincing.

Ah, sorry for this long rant about details! Because you are right, we are basically on the same page.

Expand full comment

Yeah, I agree it's fun to discuss the minor points - I often find that a person's sense of morality and personality shines exactly in these edge cases.

I can honestly agree with you that Nietzsche's text offers less resistance to uncharitable conclusions than most others. I can also see how - when you're so inclined and need an ideology to support your agenda - Nietzsche's teachings can justify race supremacy (and a host of other things). Let's call it the "glass half empty" interpretation, since you tend to focus on the bad and disregard the good.

But for me personally, I tend to lean toward the "glass half full" interpretation and do the opposite: disregard the bad and focus on the good. The question is: how does his philosophy help me (and people in general) lead better lives?

One revelation that I had while reading him, for instance, is that we tend to demonize people with opposing values because their way of being is in direct opposition to ours. I could go the "glass half empty" road and claim that Nietzsche supports hierarchies and oppression of the weak (like you said, there's little to no resistance to such an interpretation). But I could also take the "glass half full" road, where I see the role of proper order in a society, and that not everyone was born to lead, which also kind of makes sense (maybe in the West less so than in the East, but still).

I think we pretty much diverged from the rationality discussion, by the way...

Expand full comment

Couldn't you just define Rationality as a class of truth-seeking with a strong emphasis on heuristics for actively checking for logical consistency and cognitive biases?

On a separate note, I wouldn't credit Democritus with anything like scientific thinking. His (or rather, Leucippus') contribution was all conjecture and no road to verification/falsifiability. So Democritus goes into the "intuition" box as far as I'm concerned.

Expand full comment

> Surely a generic study of truth-seeking would be unbiased between the two, at least until it did the experiments?

Or until it decided which was better by flipping a coin, or by intuition, or by guessing based on what an immediately salient-feeling sample of successful people do, or…

Expand full comment

Re communism, wokism and other bold interventions: Scott Aaronson notes that it is important to notice when an idea lacks guardrails, like "don't just kill (or rob or cancel) people even if you are convinced they are bad." A Chesterton's fence of sorts.

Expand full comment

Seems like the problem is that "a little rationality is dangerous", but at the same time, humans cannot get to "more rationality" without going through "a little rationality" first.

(And even with "more rationality", the catastrophic outcomes may be less likely, but still more serious, so who knows what happens on average.)

At the same time, it is a human instinct to be curious. And let's not forget that the inventions of rationality can help us defeat our opponents, which kinda creates a cultural-evolutionary pressure towards somewhat more rationality.

So I guess the best we can do is "rationality with guardrails", but then... from a certain perspective, this part seems to be universal knowledge already... but we have strong disagreements about where exactly the guardrails should be placed? Like, is it "don't murder people, even if your reasoning suggests that this is the right thing to do" or "don't doubt your religion, even if your reasoning suggests that this is the right thing to do"?

Expand full comment

Scott's guardrails sound like Gardner's respect. But not so "inane", of course.

Expand full comment

I feel I should note here that while prediction markets may have been devised by means of explicit computation, they themselves are not doing explicit computation; they can't be interrogated about "why", and that's actually one of the big problems a lot of people have with them. They are *precisely* this:

>You’ve been magically gifted the correct answer, but not in a way you can replicate at scale or build upon.

As a separate matter:

>“diamonds are found in areas where deeper crust has been thrust to the surface, which can be recognized by such-and-such features”

Not "deeper crust". Mantle. Diamonds are found in mantle-derived material.

Expand full comment

Food for thought in this blog post.

I might very well be dim, but I do not get what argument Scott is driving at in this paragraph. It is the last sentence that puzzles me:

“….Gardner is making the same sort of claim as “wise women do better than Hippocratic doctors”. It’s a potentially true claim, but making it brings you into the realm of science. If someone actually made the wise women claim, lots of people would suggest randomized controlled trials to see if it was true. Gardner isn’t actually recommending this, but he’s adopting the same sort of scientific posture he’d adopt if he was, and Pinker is picking up on this and saying “Aha, but you know who’s scientific? Those Hippocratic doctors! Checkmate!”

…Eh, the Hippocratic doctors were perhaps using "science", sort of, in the Aristotelian (not the modern/RCT) sense. Anyway, they were dead wrong in their theory of the humors & the benefits of bloodletting & everything. So why would Pinker want to hail them/be able to use them as "checkmate" against Gardner? Since if anything, they confirm Gardner's point/criticism of "science"?

Is there some subtle humor I am missing, or are there mistakes here?

Expand full comment

I saw that too. I assumed that Scott meant to say something like, "Aha, but you know who’s scientific? The people doing the RCTs to find out if the Hippocratic doctors or the wise women do better! Checkmate!”

Expand full comment

The first part of this post, about heuristics, mirrors the evolution of utilitarianism from Bentham to Mill.

Where Bentham said to always take the action that maximizes utility, Mill refined this in his book “Utilitarianism” to allow rules (heuristics) that are generally true and that generally increase utility.

The result of this is likewise the "happy side effect" that it becomes seemingly almost impossible to argue against utilitarianism, correctly defined, because any proposed heuristic can be subsumed within utilitarianism. (Though it probably is *possible* to argue against it at a higher level, if you just don't care about maximizing worldly happiness or fulfillment at all, and have adopted a heuristic so strongly that you now only care about pleasing the Sun God or whomever, worldly effects be damned.)

But anyway, the connection here raises the question: how much of the rationality debate is really about *utilitarianism*? I think, potentially, a lot of it.

Expand full comment

A lot of it is about whether you can even do the math. For policy, you basically can't. That is, try to "prove" AGW is worth caring about to someone with a super-high discount rate and a no-ownership-of-the-future viewpoint. You can't, because it's not just the numbers, but the weight you give them. Rationalists like to pretend you can figure out a way to do the math, but you just can't do the math in a way that's conclusive and/or persuasive for all.
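
To make the discount-rate point concrete, here's a minimal sketch (the 1%/10% rates and the $1T damage figure are invented for illustration, not anyone's actual estimates):

```python
# Present value of a harm V occurring t years from now at annual discount rate r:
#   PV = V / (1 + r)**t
def present_value(v: float, r: float, t: int) -> float:
    return v / (1 + r) ** t

damage = 1e12  # hypothetical $1T of climate damage, 100 years out
print(present_value(damage, 0.01, 100))  # ~3.7e11: still enormous
print(present_value(damage, 0.10, 100))  # ~7.3e7: a rounding error at policy scale
```

Same facts, same formula; the discount rate alone moves the answer by four orders of magnitude, which is exactly the "weight you give them" problem.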

Expand full comment

Keynes could advise the government on economic matters and *then* pick the stocks that would go up or down? What a... capitalist genius, yes, let's go with that.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Trying to decide whether relationships and respect are more important than rationality is like, say, deciding whether direct fuel injection is more important than strawberry yogurt. Stupid argument leading to meaningless discussion. Dare I say it? - it's an irrational argument!

Expand full comment

Nitpick: on that occasion Ramanujan wasn't solving a previously-unsolvable problem, he was solving a problem (I think in a newspaper column or something) that mathematicians already knew very well how to solve pretty routinely. Lots of mathematicians (including, e.g., me, and I am many levels below Ramanujan) would look at the problem and very quickly see that the answer would be given by a continued fraction. What was remarkable about Ramanujan in this case is that he got to the _right_ continued fraction instantaneously and without conscious intervention, rather than having to scribble on paper for somewhere between one and twenty minutes to get there.
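
For anyone who doesn't know the anecdote: the puzzle is usually reported as the Strand magazine house-number problem, which brute force settles in a few lines (Ramanujan's continued fraction, by contrast, generates *all* solutions at once); a sketch:

```python
# Houses numbered 1..n; find the house x where the numbers below it sum to the
# numbers above it, with 50 < n < 500. The condition
# 1 + ... + (x-1) == (x+1) + ... + n reduces to 2*x*x == n*(n+1).
for n in range(51, 500):
    for x in range(1, n + 1):
        if 2 * x * x == n * (n + 1):
            print(f"house {x} on a street of {n} houses")  # house 204 of 288
```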

Expand full comment

There's probably some definition headroom to be found around "rationality is a concern with / attentiveness to the boundaries of heuristics."

After all most of what we do is the application of heuristics -- even doing matrix multiplication entails choices of computational precision and numerical representation which are fundamentally heuristic. The question is not "can we avoid heuristics" or "can we check our heuristics" (they're not very heuristic if you have to run the computation anyway!) but "how much do we actually know about the heuristics we are using?"
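
A concrete instance of that hidden heuristic choice - a generic illustration of picking a numerical precision, not matrix multiplication per se:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (64-bit) to 32-bit precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# At 32-bit precision the gap between representable numbers near 1e8 is 8,
# so "adding 1" silently does nothing:
print(to_f32(1e8 + 1) == to_f32(1e8))  # True
```

Whoever chose 32-bit floats made a heuristic bet that errors of this size wouldn't matter, usually without knowing much about when that bet fails.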

Expand full comment

Minor fact check: Ramanujan was not solving a "previously unsolvable math problem"; he was solving a brain-teaser set as a puzzle in Strand magazine. (And it's Srinivasa, without the 'n'.)

Expand full comment

Defining rationality always gets really philosophical and either super vague, subjective or both.

It feels like most arguments on rationality today are rather behavioral, about when and how much we should rely on heuristics.

Would be interesting to instead define what it means to be a "rationalist".

Some ideas:

- Those who spend an **above-average** amount of energy to form, challenge and update heuristics.

- Those who want to understand things on a conscious level, such that it can be shared and understood by others.

- Those who simply **enjoy** trying to explain every aspect of and decision in life as logically as possible.

To the rationalists on here, what do you identify with the most or how would you define it yourself?

Expand full comment

> Democritus figured out what matter was made of in 400 BC.

It has been historically difficult for me to agree with this, especially after reading Eliezer's "Making Beliefs Pay Rent (in Anticipated Experiences)". To quote it, "When you argue a seemingly factual question, always keep in mind which difference of anticipation [of sensory experience] you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network". When they said "everything is made of atoms" in the 17th century, if they were asked "what do you mean", they could answer "well, volumes of gases react in proportions of small integers", and that would be something. But in 400 BC, what did they have? Thought experiments? So when anyone says that Democritus actually knew that the world is made of atoms, I mostly feel confused. Moreover, I don't even agree that a modern person who knows the phrase "the world is made of atoms" necessarily knows that the world is made of atoms, unless they have some idea of what sensory experiences this assertion is connected with.

The best explanation for this confusion that I have is that the word "knowledge" is very broad in English. Unlike e.g. Greek, where they have episteme, metis, gnosis, prognosis, aletheia, mathema, dogma, doxa, theoria, and so on and so on. So what I think is happening here is that reading rationalist texts has shifted my default understanding of knowledge towards a more sensory-experience-oriented one. So, what I am arguing for is using the default word "knowledge" in an at least a little bit sensory-experience-oriented way, and the true solution would be to borrow those Greek terms into rationalist discourse (I have seen this done with Episteme and Metis, in the context of the "Seeing Like a State" review, but not the other ones). After all, if the rationality movement is about studying knowledge and truth, then having a fine-grained vocabulary about the subject matter would help and clear up a lot of confusion.

Expand full comment

Agreed, Democritus was just idly speculating about how the world might (or ought to) be without empirical observation. He was mostly lucky that his label "atom" stuck later on.

I would argue that most of these world-view changing discoveries are actually accompanied with a first estimate of a parameter specifying how the new world view ties into the previous observation.

"Gasses react in the proportion of small integers" might have other explanations as well (not that I can think of any, being to stuck into the atomic paradigm). The real breakthrough here is Loschmidt providing a first order-of-magnitude estimate of what is now known (ironically) as the Avogadro constant (latter improved on by Perrin).

For the A-bomb, the key discovery was made by Otto Hahn and Lise Meitner in 1938, leading to the Trinity explosion in 1945, some seven years later.

Similarly, any prehistoric fool could look at an ant crawling on a melon and think "what if the earth is actually a sphere". The key parameter here would be the radius. Per Wikipedia, Aristotle provided an early estimate, later improved on by Eratosthenes. If we argue that the central use of this discovery is on long multi-leg sea voyages culminating in the Magellan–Elcano expedition circumnavigating the earth, we have some 1700 years in between discovery and application. (Of course, overseas voyages and European colonialism (which shaped our world to a degree that nuclear weapons will hopefully never match) would have happened regardless of the shape of the earth, and the Conquistadors would have figured out the truth without Eratosthenes soon enough, so blaming him for sea voyages would be almost as silly as blaming ancient Greeks for nukes.)
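
As an aside, Eratosthenes' estimate is a nice illustration of how cheap a first parameter measurement can be; using the traditionally reported figures (the exact length of a stadion is disputed, so the final accuracy is anyone's guess):

```python
# Traditional figures: at noon the sun is overhead at Syene but 7.2 degrees
# off vertical at Alexandria, ~5000 stadia away. 7.2 degrees is 1/50 of a
# circle, so the circumference is ~50x the Alexandria-Syene distance.
angle_deg = 7.2
distance_stadia = 5000
print((360 / angle_deg) * distance_stadia)  # 250000 stadia
```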

Newton's theory of gravity would normally be the odd one out here, as only the product of the central mass and the universal gravitational constant is directly observable in any two-body system. As it happens, Newton still gave a reasonable guess at the constant, later improved on by Cavendish. The arguable central use of Newtonian gravity would be spaceflight, so there were some 270 years between theory and application. (Unlike the above, Newtonian gravity is indeed the central theory for spaceflight, predicting the existence of escape velocities, stable orbits and the like. Some applications (like GPS) may also be considered a poster child for relativity, but I would argue that if the light-ether theory were true, satnav would work slightly differently but still exist, as it mostly depends on a finite speed of light.)
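
The "only the product is observable" point can be made concrete with Kepler's third law; a minimal sketch (Moon figures rounded, the Moon's own mass ignored):

```python
import math

# Kepler's third law recovers only the product G*M from an orbit:
#   G*M = 4 * pi**2 * a**3 / T**2
# Separating G from M takes a lab experiment like Cavendish's.
a = 384_400e3        # Moon's semi-major axis, meters (rounded)
T = 27.32 * 86400    # sidereal month, seconds
print(4 * math.pi**2 * a**3 / T**2)  # ~4.0e14 m^3/s^2 (accepted ~3.986e14)
```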

Speaking of the (finite vs infinite) speed of light, per Wikipedia, that is another case of philosophers arguing with each other from Aristotle to Descartes with little basis in observation. Then Rømer comes along and settles the debate once and for all, not with some clever philosophical argument but by just measuring it. Some three hundred years later, GPS becomes a thing.

Expand full comment

The ancient Greek ideas about atoms were useless. The only thing of value we got from them is the word ("atom") itself. The *important* aspect of atoms is not the existence of indivisible entities -- and the atom turned out not to be atomic anyway -- but that a very finite number of atoms could be combined to come up with an infinite variety of substances, because the material properties of atom combinations (e.g. molecules) need not bear *any* resemblance to the material properties of elements, id est the properties of table salt (sodium chloride) are nothing at all like the properties of sodium and chlorine.

*That* was the great insight that made atoms such an amazing and powerful concept, but it came from Enlightenment natural philosophers, particularly in Germany and France, and to some extent England, and owes nothing of importance to the ancient Greeks.

Expand full comment

Rationality is about changing your mind in the face of greater evidence. Anti-rationality is about favouring your community or social connections over being right.

Anti-rationalists make terrible arguments, but they are not necessarily being irrational; it's just that they are not *Rationalists*. They have different priorities.

Expand full comment

The actual debate is about whether people who have "rationality" on their banner should be treated as high-status or low-status. (This is a reasonable default for *all* philosophical debates, by the way.)

The problem with heuristics is that it is perhaps more *difficult* for a human to actually follow them if you believe that they are "mere heuristics". You may verbally approve of the idea that "sometimes it is better to follow your instinct than to try making an explicit calculation", and then the real situation comes and you... actually do neither, but instead you use a cached result of some calculation that you or someone else did years ago, when you didn't even have all the knowledge that you have now.

When the right moment comes to use the heuristic, you will most likely fail to notice it, and you will miss the opportunity... unless you already have a habit of using the heuristic all the time, in which case you will use the heuristic without even noticing anything. But having such a habit (even in situations that don't require it, because how else could it become a habit, right?), that is what we typically call irrational behavior, don't we?

So on one hand we verbally approve of using the right heuristics when necessary, and on the other hand we eliminate our habits of using them. When the situation actually comes, we are often caught unaware... and the observers are facepalming, because this is actually a quite predictable outcome.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Is it possible that they are just being moral anti-realists, and saying, “these rationality people seem to think one thing is clearly better than another, and we all know how THAT goes”?

“Systematized winning” sounds a lot like “systematized good”, except we use the word “win” instead of “good” because we all intuit it’s not good to believe good means anything real or concrete. So we have this second-order value system which says, “whatever good is to you, clearly you want to approach it reliably and routinely rather than just at random, and here’s how to do it.”

But it’s still a value system, and we all know how people feel about any group that has the gall to say “we have the correct value system.”

Expand full comment

I'm curious why you went with "study of truth seeking" vs "study of winning" in the last section. Are you equivocating between the two? Do you think that if truth seeking and winning were at odds the rational thing would be to pick the first and not the second?

Expand full comment

'One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.'

I'm a little bit confused by this. Were you trying to joke, be mean towards your outgroup, and steelman their position all at the same time? That was really stressful for my autistic brain; please don't do it again.

Anyway, I guess a proper representation of the idea in question goes like this:

When people talk about stuff they always let their unconscious biases slip through, and talks about rationality are no exception. Our own ability to reason is probably a result of the status games of our ancestors. We are doing rationality on compromised hardware and software. The people who developed the ideas of rationality were mostly rich white men. We need to be extremely vigilant in order not to let whatever related biases slipped into the discourse deceive us. In practice, people often use "rationality" as attire, as in the Newcomb problem. They claim to be rational or skeptical, but actually they are making very stupid mistakes. And such people are usually white men, because the attire of rationality appeals to them more. Be careful; make sure not to fall into this trap.

Expand full comment

I have certainly seen bizarre claims which look a lot like "anything developed by white males is ipso facto inaccurate". I don't think the people making them are short on intelligence. I think they are semi-consciously trying to balance the scales in the opposite direction, in the expectation that there's so much pressure in favor of traditional ("white") knowledge that they'll merely make it a bit less dominant, providing a bit more room for alternatives. Very few of them want to give up their cell phones, or other taken-for-granted benefits of modern tech, except perhaps a few specific things. (They aren't Luddites.) What they want is for traditional knowledge to get a hearing, and for e.g. indigenous people to be involved in some of the research - not to replace European physics with "native physics", in spite of the terms they use. (Well, except for those whose understanding of science is so weak that the only difference they can comprehend is the race of the practitioners.)

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Yeah but their ideas are a solution in search of a problem. Where's the hard evidence that progress (in science or technology or health, not such ineffables as justice or government) has *actually* been seriously held back by any hypothetical preference for the ideas of white men?

We have been preaching the importance of "listening" to alternative viewpoints rooted in the race or sex of the proponent for 50 years now, at least. If any of those theories were even a little bit right, we should by now be living in an absolute explosion of progress and ideas, inventions, and technological acceleration -- because we are no longer restricted to the ideas generated by 30% of the population. But we're not. Ipso facto there *was* no giant hidden font of brilliance being suppressed by the hypothesized prejudice.

Expand full comment

The obvious problem here is the absence of a control group. We have developed quite a lot over the last 50 years. But how can we know whether to credit this development to our ideological shifts or not?

We can notice that societies which do it are usually more successful, but the causality may be backwards. We can also notice a huge backlash from all kinds of people and use it to explain any lack of explosive progress. I don't see any devastating evidence that would allow us to clearly understand which world we are in.

Expand full comment

Because we look at the first derivative of progress, and note that it is no different in 2022 (and possibly even smaller) than it was in 1962. So whatever changes have occurred, they have *not* sped up technological progress.

Expand full comment

Again, without a control group it's not good evidence. Maybe without these changes we would have even less progress. Maybe the changes were not dramatic enough, etc.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

Scott's supposed explanation/steelman is typical-minding: I'm rational, so I assume everyone else is.

I suggest that people who talk about white male science aren't inclined to use rationality much and trying to rephrase what they are saying to sound rational is going to misrepresent them. Correctly representing them will sound stupid because they're *not being rational*, and of course if you write out the belief process of someone who is not being rational, it'll sound stupid. And the reason they still keep their cell phones despite being against white male science is not that they don't really believe what they seem to be saying--it's that, since they're not rational, they *don't connect their beliefs to appropriate conclusions*.

If someone isn't rational, the fact that they believe X no longer implies they believe in the logical consequences of X.

Expand full comment

Congratulations! You've just reinvented a universal justification for thinking that your outgroup is irrational and isn't worth engaging with in good faith. Now would you mind never using this infohazard again? It would do all mankind a favour.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

These people don't show any desire to have someone engage with them, and sometimes attack others to prevent their beliefs from being engaged with. Don't be a quokka; chewing on your face is not engagement.

Expand full comment

And they are doing it for the same reason: because they believe their outgroup to be irrational bigots not interested in engaging with their opponents' arguments in good faith. You are retransmitting the same meme, just adapted a bit more to appeal to the right-leaning crowd instead of the left-leaning one.

Expand full comment
Mar 10, 2022·edited Mar 10, 2022

They are manifestly behaving differently from the other side. Cancellation only goes in one direction.

There's no substitute for actually looking at the details of each side to determine whether they are claiming things that are true or false. Just noticing similar-sounding rhetoric tells you nothing; homeopaths claim that normal doctors are closed-minded and following pseudoscience, for rejecting homeopathy.

Expand full comment

I don't think you've quite got the Social Justice line of thought. It's not just that white men put their thumb on the scale because they have power, it's that they have a long history of putting their thumb on the scale, so that, by some strange coincidence, their arguments end up proving that they're superior and should be in charge.

I also believe Social Justice is about getting power, so it's very convenient to have a tool for saying "Shut up and stop arguing with me".

There's probably a good topic in how you distinguish between things that have the trappings of rationality vs. actual good arguments.

***

In re respecting tradition: How do you decide when you see that traditions conflict?

Expand full comment

> How do you decide when you see that traditions conflict?

I believe the traditional answer is something like: Our tradition must be followed by our people, and at least temporarily by all visitors on our territory. If the other side can follow these rules, we can live in peace; otherwise we will solve the problem by killing them.

This of course assumes that each person has one (primary) tradition. The idea of freely choosing from multiple available traditions is a recent one. Traditionally, changing traditions, if possible at all, required a costly ritual.

Expand full comment

"Traditionally, changing traditions, if possible at all, required a costly ritual."

I'm adequately sure this is wrong. There's a lot of drift in rituals, and possibly a tendency to increase complexity.

However, I was thinking about the uses of ritual in knowing what to do, and there are some possibilities. One is to cut slack to rituals which have been around for a long time, though it may be hard to be sure. One way of defending a ritual is to claim it's ancient, but it may have been around for mere decades.

The other thing is to be very cautious about proposals (like raising children without connection to parents as an ideal) which don't show up in anyone's customs.

Expand full comment

>How do you decide when you see that traditions conflict?

I'm not really a trad person but I'll take a stab at this:

* transaction costs of switching/thinking (whichever I heard first).

* utilitarianism, if you have any idea how to guesstimate the utility.

* just observing whether the followers of A are better off than the followers of B in ways you care about (lazy confounded utilitarianism).

* which tradition evolved in a context more similar in relevant ways to the present context

* whichever one you prefer

Expand full comment

My definition:

Rationality is the union of [epistemic rationality] and [instrumental rationality]. Epistemic rationality is about avoiding biases, where a bias is anything that prevents you from forming accurate beliefs out of the information you've received*. Instrumental rationality is about achieving your goals.

* so if e.g. you happen to receive a highly unrepresentative sample by chance and form an inaccurate belief because of that, this is not a bias.

Expand full comment

I think the “anti-rationalists” are arguing exactly that some people (“rationalists”) rely too much on explicit reasoning and not enough on heuristics and intuition. Of course not that Pinker or anyone else is _categorically_ against intuition/heuristics, but that they’re getting the balance wrong.

I don’t know if they’re taking a strong stand on how you ought to reason about when to use explicit reasoning or not in the meta sense.

I think maybe also there’s a related (or identical?) claim about making category errors in mixing up wise women’s herbs and bloodletting. Like, if you asked someone hundreds of years ago “how should we advance the theory of medicine so we’re good at it hundreds of years from now?” the answer is “give the bloodletting theorists bodies to practice on and study”, but if the question is “help, my son is sick”, the right answer is “oh, then take him to a wise woman”.

Expand full comment

I believe a lot of confusion about rationality comes from mixing up truth and outcome. For instance:

"It's rational to not believe in God - there is completely insufficient evidence, and you shouldn't believe in things like that without evidence." In this case, you improve your chance of being epistemologically correct.

"It's rational to believe in God - the Inquisition will burn anyone who doesn't, and your risk of slipping up is much smaller if you actually believe in God than if you merely pretend." Here the epistemological correctness is _completely_ beside the point, because you're only aiming at the outcome of not being burned at the stake.

And the situations can, of course, co-exist. Before you even start to talk about what's rational, you need to know what you mean by the word (after all, this is the foundation of all of analytical philosophy).
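
The two senses can even be put side by side in expected-outcome terms; a toy sketch of the Inquisition case (every number here is invented for illustration):

```python
# Instrumental rationality scores outcomes, not epistemic correctness.
p_burned_if_believer  = 0.001   # invented: true believers rarely slip up
p_burned_if_pretender = 0.05    # invented: pretenders slip up far more often
cost_of_burning = -1000.0       # arbitrary utility units

for label, p in [("believe", p_burned_if_believer),
                 ("pretend", p_burned_if_pretender)]:
    print(label, p * cost_of_burning)  # believe: -1.0, pretend: -50.0
```

Under those made-up numbers, genuinely believing is the instrumentally rational move even if it is the epistemically irrational one.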

Expand full comment

What if the existence of the Inquisition *is* the evidence for the existence of God?

Expand full comment

This might point at the problem of what to do if you're surrounded by a lot of people who are smarter than you are. I'm not talking about the viewpoint of people who are pretty smart but know they're not the smartest at various things.

Let's say (to use approximate language) that we're talking about people with IQ below 110 or 100, someone who knows they're vulnerable to grifters and fast talkers. I think there's a quote from Oliver Wendell Holmes (not an especially stupid person) about not trusting logic because trusting logic means putting himself at the mercy of anyone who's smarter than he is.

Expand full comment

Personally, I retreated considerably from effective argument techniques when I realized that the ability to argue effectively did not imply that the thing I was arguing for was correct.

Expand full comment

This is what I came to understand the hard way too. You can be rational, but due to a non-trivial slip of logic or insufficient knowledge be easily misled by others. To be perfectly rational we need a very good memory, very good sources of ground truth and a very good processing capacity. In reality we substitute all of them with heuristics and specialists' statements more often than not.

Expand full comment

You also need exponentially increasing levels of perfection in your reasoning, the more complex a chain of logic you're constructing. Consider the analogy of expanding a binomial. If you're asked to expand (x + 5)^5 without making use of the known formula, there's still a very good chance you'll get all the coefficients exactly right. But what if it's (x + 5)^500? The chances are very slim you'll get all the final coefficients right, because *somewhere* in the enormous chain of computation you'll screw up and forget to carry a 1 or transpose digits.

This is the problem with long chains of logic. On any *one* link you may be able to rely on 99.99% (or whatever) accuracy, but if your chain gets long enough, you will inevitably start making errors, and the unfortunate aspect of logic is that it is not what the mathematicians call a stable algorithm -- even very small errors can send you off in insane final directions; there's no guarantee that a small error in the method means a small error in the conclusion.
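
To put rough numbers on the accumulation part (a toy model assuming independent errors and a made-up 99.99% per-step accuracy):

```python
# Chance an n-step derivation is flawless if each step is independently
# correct with probability p: p**n.
p = 0.9999
for n in (10, 1_000, 100_000):
    print(n, p ** n)  # ~0.999, ~0.905, ~0.000045
```

And this only models the chance of *any* error; the instability point is worse, since even one slip can move the conclusion arbitrarily far.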

Expand full comment

Shorter: we can't agree on the problem, and often can't do the math regardless.

Expand full comment

But trusting the "fast" system means putting myself at the mercy of anyone who's better at "soft skills".

An autistic person, who's probably in the bottom 10% for soft skills, and probably knows it, may be better off going the rationalist route, even if they are merely near the 40th percentile for IQ.

Or they can follow traditional rules, like "if it looks too good to be true it probably is", and "don't have sex without a wedding ring". Even though those rules also tend to include "the weird person is low status, and casually picked on by everyone."

Expand full comment

Those rules were made with that in mind; after all, "all manuals are written in someone's blood", while using rationality can be pure experimenting.

Expand full comment

I sense the tension between Rationality as computation and Rationality as winning/truth/being correct.

What's interesting to me is that they are both themselves heuristics. One heuristic says that careful methodical thinking will produce the best results, and points to the Scientific Revolution as proof (even if only on a long-term horizon, as Scott mentions). The other heuristic short-circuits the method and seeks the goal. I think this second heuristic is more interesting, because it recognizes a potentially fatal flaw in Rationality/rationality, and instead of stopping to figure out how to fix the flaw, it's willing to jump to the actual purpose of being rational - better outcomes.

The tension exists because the underlying philosophies are not compatible. Rationality requires reasons and understanding, so skipping to the best solution without understanding it really breaks the purpose. But, Rationality is not an end unto itself, it's a means to a different end, which is a better outcome. If you can get to a better outcome by reading the bones, then there's a very rational reason to read the bones instead of trying to figure out why something may or may not work.

Related, I think the reason computation breaks down is that the various inputs and factors are unknown and possibly unknowable. I think this flaw may be fatal for Rationality, at least on the computational side and in regards to studying people. Judging by the fact that there are Rationalists willing to skip from figuring things out to doing what works, I think they realize the same thing.

Expand full comment

When I read things like this, I always think of D. Kahneman and "Thinking, Fast and Slow". We think both ways, and thinking slow is perhaps most useful when we think there is something wrong with the fast mode. Also, thinking slow (rational) doesn't always get to the answer, because there is all sorts of 'churn' going on in the brain below the conscious level; i.e., you figure out some problem in your sleep and the answer is revealed to you during the morning shower. I don't think that is thinking fast or slow... maybe call it thinking deep.

Expand full comment

I think many "rationality" vs. "anti-rationality" arguments are actually philosophical conflicts, clashes of implicit metaphysics. You cannot define and practice rationality apart from a certain worldview. "Rationalists" are not just people who study explicit reasoning and try to win stuff; they're generally people who share a common belief package: they assume that the world follows universal and legible laws (naturalism), that science can shed light on those laws (scientism), that the world is rationally knowable, and that humans can and should determine their destiny through their powers of reasoning (humanism?). It is very hard for those of us who live within this worldview to understand somebody who carries a totally different package.

Incidentally, these beliefs and values emerged with the first "Rationalist" philosophers in early Modernity and then flourished during the Enlightenment. We know very well that most cultures, for most of their history, did not share this WEIRD way of looking at the world. I believe these thinkers called themselves rationalists, not because they were the first to discover the principle of "we want to be right about stuff and effective in our actions"; but because they were conscious of going against tradition and religion by emphasizing certain beliefs (the world is rationally knowable) and values (humans must carve their own destiny).

Imagine for a minute that you live in a world which is fundamentally unknowable; or perhaps knowing stuff enmeshes you in a veil of Maya that distracts you from True Being; or the world is ruled by a God who likes to punish us for knowing too much; or calculating knowledge de-humanizes you and corrupts your soul; or science is false knowledge, it only gives you the illusion of knowing, and therefore it is worse than ignorance; or the world is fundamentally made of mind-stuff, so if you want to understand it, you need to develop your empathy and explore your emotions... In many of these worlds, the reasonable course of action would be to do more or less the opposite of what people normally dubbed as "rationalists" do; but then you wouldn't necessarily say "I have discovered true rationality; I am an actual rationalist". You might say something like "I follow the Scripture", or "I respect the law of the Elders", or "I live in despairing terror of the Great Old Ones". And besides this, you might also choose to define yourself as "anti-rational" to emphasize your opposition to "rationalists" in values and worldview.

In this framework, anti-rationalists are not rebelling against the generic idea of "we want to be right and effective". They're more or less saying: We reject the specific ways in which you try to be right and effective because it conflicts with our values and worldview. From our point of view, what people who call themselves "rationalists" are doing is one or more of: immoral, counter-productive, pointless.

Expand full comment

Woo-hoo, a chance for me to leap in with ill-informed and ignorant opinionating!

Regarding Steven Pinker, I have never read anything of his, and the more I read around/about him, the less inclined I feel to do so (the two history blogs I follow, when they mentioned him, did so in a tone of "oh, *that* chap" and the impression I got was of someone who breezed into a field he did not know much about to make over-confident pronouncements about how things went).

That tweet of his makes me want to slap him in the face with a wet haddock. Because of course, if Gardner were a true sceptic of rationality and did not use the tools of rationality, his critique would have gone along the lines of "Ghoti! Wibble? *sawing noises* *blocks of pure colour* Whee! Gibba-gabbo-goo! *five hours of the shipping forecast* https://www.youtube.com/watch?v=CxHa5KaMBcM"

Honestly, there Pinker reminds me of nothing so much as "Mister Gotcha" in panel four here:

https://knowyourmeme.com/photos/1259257-we-should-improve-society-somewhat

I should hope we all use the tools of rationality (small "r") but I think the problem is that the discussion often veers off to Rationality (capital "R") and that is - well, what exactly is it? The version promulgated by Yudkowsky? A cult of the worship of Bayes?

Scott is correct in that nobody, when meeting somebody new who might become a friend, sits down and runs through a fifteen-stage checklist to decide whether or not they like that person. But Pinker and others can come across sounding very smug in the "I am the Only Smart Person here" way when talking about Rationality, and they do sometimes come across as "Well obviously you only make *every single decision* in your life after running a fifteen-stage checklist, otherwise you're one of the clods that paint their backside blue and believe in sky fairies". The Straw Vulcan version of a rationalist, if you like.

Pinker has a valid point that you can't undo something by using that very thing itself. But Gardner has a point that capital-R "Rationality" isn't something with a clear definition that everyone agrees on, and for the majority purposes we need to make decisions, we rely on other things.

"Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is universal knowledge but not common knowledge, can you give me a tradition to help with this? “Um, the ancients never ran into that particular problem”."

Very likely they did, as there is nothing new under the sun; human nature has not changed that much, and moderns did not invent sex and romance, much as they might like to think they did.

Get them all to work on a common problem or put them into something like a sports team or a dance team. There's a Chinese light entertainment show which takes 200 dancers from all backgrounds and after several rounds whittles them down to a team of 10 who go from competing against each other for places to being ride-or-die for the team and each other:

https://www.youtube.com/watch?v=Ajh70XflhaY

Expand full comment

Pinker has a background in linguistics & cognitive science, so for him being on solid ground rather than breezing into others' territory I'd recommend "The Language Instinct".

Expand full comment

A case of "cobbler, stick to your last", then? I do think the breezy dismissal of his opponent in that tweet is indeed "I am Very Smart". It's not such a gotcha as he seems to think it is - what 'the tools of rationality' are, are indeed the kinds of things Gardner and everyone else accepts as normal, useful, and vital.

"My Big Idea of Rationalism where you just plug in everything into Bayes' Theorem" is not at all the same as "everyone agrees on what 'rationality' is and means", and so it's unworthy of Pinker to make that kind of gratutious swipe: 'If you were really an anti-rationalist, you'd be the kind of idiot who thinks we should sacrifice to the gods to make the thunderstorm stop. If you don't think like that, you are in fact a rationalist, and your criticism is hypocrisy. So what are you - a fool, or a knave?'

Expand full comment

Like Chomsky, Pinker is wrong about language too. Basically, only very smart people can justify very dumb pet theories that obviously don't track reality. Sometimes it seems like they recognize as much, and are just taking joy in playing the game, rather than actually believing their theories map to reality. Because really, who cares what your theory of language acquisition is, people learn language just fine without any reference to your pet theory on why/how.

Expand full comment

How does it obviously not track reality?

Expand full comment

" Pinker is wrong about language too." -- how?

Expand full comment

As I said above, capital-R Rationality is basically a club/fandom for people with certain weird hobbies. There's nothing wrong with that -- there are fandoms for Warhammer, nature hikes, esoteric literature, what have you. I belong to many of them! But "personal" is not the same thing as "important", and being a Rationalist is ultimately no more important than being a Trekkie.

Expand full comment

Absolutely agree, but you're preaching to the non-choir here. Love this community, but most seem to have little idea how odd their perspective might be compared to some baseline. Which is why it's hugely valuable, but also frequently seems so far off base. It also makes it hard for folk to discern why they actually seem dangerous to some; traditionalists by definition tend not to think "let's completely restructure society in ways that could be hugely destructive", while rationalists tend not to really grok Chesterton's fence, the dangers of over-intellectualizing, how unrepresentative their experiences may be, etc.

Expand full comment

It's probably worth noting that the sense in which you are using rationality is ahistorical. Rationality used to be a more well-defined concept, which was distinct from other truth-seeking methods such as empiricism: see https://plato.stanford.edu/entries/rationalism-empiricism/

In this conception, the main thing that makes rationalism special is that you try to ground your worldview in pure thought - as opposed to empiricism, where you do messy experiments. Hence the strange obsession with rational numbers, Descartes' idiotic "I think therefore I am", attempts to build perfectly "logical" languages, "expert systems" as a path to artificial intelligence, and even computer science and mathematics would all fall under traditional "rationality". Biology, or chemistry, or "neural networks" as a path to artificial intelligence might fit better under the umbrella of "empiricism".

Today the sharp distinction between rationalism and empiricism is no longer justified: there are fully rational, mathematical models of chaos and randomness, and we have mathematical models of how neural networks might work. We have computer science explanations for why ab initio chemistry is more difficult than simply doing experiments (it comes down to quantum mechanics being difficult to simulate on a classical computer).

Still, there seem to be underlying personality traits that favor some people to think in a more rationalist way vs a more empirical way (using the old definitions of these words). MIRI, for instance, seems to clearly fall into the traditional rationalism bucket, while DeepMind seems to fall into the traditional empiricism bucket. It's probably worth studying this phenomenon (empirically).

Expand full comment

Come. You haven't been *astonished* at the degree of positively medieval scholasticism, the adoration of theory over experiment, that has manifested itself during COVID? Rationalism in the old-fashioned sense of the word is very much alive and well and perfectly capable of dominating both private and public discourse.

Expand full comment

In case it wasn't clear from my username, I'm one of the people with a (traditional) rationalist tendency. When I encounter a difficult to open lock, my first instinct is to try to imagine how I would go about designing a lock and then to work out from first principles how someone might subvert it, rather than to smash the actual lock open and look for flaws in the mechanism within.

When I first heard about the mRNA vaccines for COVID, my beliefs were similarly based on pure thought rather than experiment: it seemed obvious to me that unless deliberately engineered to do so, the mRNA would not replicate out of control, so it should provide an extremely safe and controllable way to expose your immune system to the COVID spike proteins. It avoids the obvious pitfall of standard vaccines, which is that we have no idea whether they will be just as dangerous to someone with a weak immune system as the real virus. The fact that they had to go through the same level of testing as an ordinary vaccine seemed ridiculous to me.

In other words, I think that I displayed the typical level of rationalist hubris about biology. That's the way I tend to think. (I understand the rational arguments in favor of empirical testing, even when you would think it is unnecessary, but I rarely go out and actually do the experiment.)

Expand full comment

Hmm I dunno. It doesn't seem super likely to me you deduced the properties of an mRNA vaccine from Plato, the Bible, or studying the skies. So what you really mean here is that you took the summary of umpty zillion painstaking experiments in molecular biology, which gives us our theoretical rationalization of what mRNA does, and then applied it to a very slightly different situation. That's certainly very reasonable, and we do that science all the time -- indeed, it's how the mRNA vaccines were invented in the first place, by imagining a step 2mm further than current understanding extended (and then doing a lot of hard experimental work to make it work in practice).

But that is very different from a scholastic attitude that tries to reason stuff out from first principles. There *were* (and still are) people who try to deduce whether the mRNA vaccines work or not based on...their model of the psychology of biologists, or vague "humor balance" theories of health and sickness that had whiskers on when Paracelsus confronted them. The idea of picking up a textbook and studying what a few generations of experimentalists have discovered by trial-and-error strikes them as unnecessarily complicated and tiresome. Why not just employ pure reason? Surely that is sufficient...!

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

That doesn't make any sense. Surely this hypothetical person would have to at least glance at wikipedia in order to know, at the very least, what the letters "mRNA" stand for?

I'm having a vague sinking feeling that maybe I'm really, incredibly, out of touch.

Expand full comment

You'd think that, yes ha ha. In a shocking number of cases they have not, however. Worse, in a large number of cases they *have* but have managed to twist up what is there into something approximating a reasoned argument but which delivers some very strange conclusion.

Personally I think the biggest surprise of the Internet is that the wide availability of information was *supposed* to lead to a more elevated public discourse -- fewer basic errors of fact would be made, people would be more aware of what is already known and unknown, there would be more pre-existing consensus on what is real and what is not. In short, the debate would be more like the debate among experts at the edge of some known field -- physicists debating evidence for the Higgs mechanism, say.

And that seems so obviously true I don't think anyone doubted it, 45 years ago. And yet...that is not, I think, the way it has turned out. Somehow, the wide availability of information has led to *decreasing* rationality and reason in the public debate. Far *more* intellectual factionalism, less consensus on what is real and what is not, greater willingness to interpret known facts in bizarre ways.

I cannot explain this. It is a shock and surprise to me, for sure. The only thing I have thought that seems relevant is that people thought the invention of radio and TV would have similar pacifying and rationality-stimulating effects -- and they, too, did not. I'm told radio contributed so much to the rise of Nazism that to this day the German government puts unusual restrictions on its ownership. I know TV has contributed substantially to division and polarization, and *everyone* has lamented that it degraded the public debate. In Lincoln's day people called each other scurrilous assholes in 16-page closely-reasoned pamphlets, while today it's done in 140 characters or a single "meme" photo, and the change seems to have unfortunate side effects of degrading the quality of what's heard in the public forum.

It's all a bit sad. You'd like to think communication technology can empower our better natures, but too often it seems to enable our worse natures instead.

Expand full comment

How likely is it that we're secretly arguing about utilitarianism? I would expect the "everything is commensurable" of utilitarianism to be intuitively repulsive to a lot of people, and that feels like at least a decent proxy for where you'd want to stage the battle.

Expand full comment

Utilitarianism (which I tend to buy!) only works if you can do the math, which requires agreeing on weights/values, which can't be proved via utilitarianism, so it gets you nowhere. That doesn't mean it can't be a helpful analysis tool, but it doesn't prove/establish anything. So it's not an objection to utilitarianism in theory, but in practice (it doesn't much matter if God can do the math if we can't).
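
A tiny sketch of the "agreeing on weights" problem (policies, attributes, and weights all invented): two evaluators score the same outcomes and rank them in opposite orders.

```python
# Same facts, different value weights, opposite conclusions.
outcomes = {"policy_A": {"growth": 3, "equality": 1},
            "policy_B": {"growth": 1, "equality": 3}}
for weights in ({"growth": 1.0, "equality": 0.2},   # growth-first evaluator
                {"growth": 0.2, "equality": 1.0}):  # equality-first evaluator
    scores = {p: sum(weights[k] * v for k, v in attrs.items())
              for p, attrs in outcomes.items()}
    print(max(scores, key=scores.get))  # policy_A, then policy_B
```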

Expand full comment

"Everything is commensurable" isn't non-intuitively defensible, either.

Expand full comment

Since you mentioned Democritus - he was actually terrible on this question. His eliminativist reductionism also eliminated the possibility of knowledge or rationality. Perceiving a tree, he would say things like "there isn't a tree; the tree is an illusion; in truth there are only atoms and the void!" He was actively anti-science, anti-analysis, and anti-rationality, since he thought the atomistic nature of the universe made knowledge impossible.

Also, his "atoms" have their closest modern-scientific parallel not in our "atoms", but in biological proteins, which actually do interact mechanically based on their shape in the way he described.

Expand full comment

Diving into this, it's a very interesting topic. Even on a cursory reading, Democritus was not the founder of atomism; that was his teacher, Leucippus. And it was as much or more a philosophical theory as an explanation of physical reality. Nothing much got done with it because his works were lost and only fragments remained, plus references to his thought in the work of other writers. That, plus in the mediaeval period he was going up against Aristotle, the 800 lb gorilla.

So it wasn't until the 17th century that the chemists took an interest in what was taken to be his theory: that all that exists is hard little irreducible lumps of matter in various shapes and sizes, in constant motion in a void, combining, giving rise to sensation out of those combinations, then dividing and re-combining in other configurations. Reality consists of that alone; all else we imagine we perceive is illusion, or 'maya' to borrow a term.

Or rather, the alchemists. We don't get a tidy, rational (or Rationalist) progression of intellectual development of pure Science; we get the messy philosophical digressions. Robert Boyle, son of the Earl of Cork, was a prominent chemist but also an alchemist and an amateur theologian who saw no problem with reconciling science and religion in the traditional way:

https://en.wikipedia.org/wiki/Robert_Boyle

He was also a "Corpuscularian" (a marvellous term https://en.wikipedia.org/wiki/Corpuscularianism), a school that thought that matter was indeed made up of small, divisible particles, which is what fostered his and their interest in Democritian atomism.

However, the chemical and then finally the physical development of the theory of atoms had little to do, past the name and the idea of small fundamental bits, with Democritus (e.g. in his thought, energy did not enter into the subject at all).

Expand full comment

I have read neither Pinker's book nor Gardner's critique, but my general hunches about why "anti-rationalists" are "anti-rationality" are:

1. They don't consider it useful/important to explicitly try to improve one's reasoning skills or ability to win systematically, and/or they don't see that much value in studying cognitive biases, etc.

2. They just have a mistaken implicit model of rationalists as these-people-who-are-all-for-reason-but-neglect-importance-of-respect-emotion-social-relationships-sth-sth and in general have a habit of pointing to the skulls that have already been noticed and updated from.

Expand full comment

I think the difference is what each would consider a virtue.

"Rationalists" consider explicit reasoning to be a virtue like courage or kindness. It is an unfortunate reality that one cannot always use explicit reasoning due to time constraints, but in an ideal world there would be enough time for it all the time.

"Anti-Rationalists" consider explicit reasoning as a tool to be used when it is appropriate, but in an ideal world one would never have to do it.

Expand full comment

I honestly think this is a problem of definition. Are we talking about "anti-rationalists" or "anti-Rationalists"?

Because if you're turning Rationalism into the intellectual equivalent of a political party, then you have to expect other parties to develop. And in that case, if you are perceived as trying to impose a one-party state, you have to expect people who don't want that to protest.

I don't get the impression Gardner is an anti-rationalist. He may well be an anti-Rationalist, but if Rationalism is a party line, he has every right not to belong to that party.

Expand full comment

He's objecting to the "assume a can opener" baked into capital r Rationalism.

Expand full comment

My partner and I had a debate a few months ago on how to tell a dog from a cat. The criterion we settled on was that cats have thick whiskers long enough that, if they were stretched out straight, the distance between the farthest-apart whisker tips would equal the width of the widest point of their bodies (which they use to determine if they can fit through holes: if their head fits without their whiskers touching anything, then the rest of their body will too), while dogs don't.

That and the vertical pupils I guess.

Expand full comment

Now I feel bad for cats that lose their whiskers and transmogrify into dogs.

Expand full comment

“But again, I would be shocked if Pinker or other rationalists actually believed this - if he thought it was a productive use of his time to beat one of those cat/dog recognition AIs with a sledgehammer shouting “Noooooooooo, only use easily legible math that can be summed up in human-comprehensible terms!””

I dunno if Pinker considers it a productive use of his time, but I do believe he still thinks that systems based on statistical learning are inferior and can't genuinely understand the domain they model. I'm thinking of his paper from the 90s arguing that neural networks can't learn the past tense of English verbs.

Expand full comment

I've seen this framed as two types of knowledge, which I call "evolved" and "epistemic". Evolved knowledge has the advantage of likely being useful at the point it was developed (or else it would be selected against - thus "evolved"), but the disadvantage that it is unlikely to be true and changes slowly. Epistemic knowledge tends to be true, but is often not useful (until, as you point out, it suddenly is).

Expand full comment

I think this would benefit from considering sophistry, which I would say is the actual anti-rationality. Everybody knows that common sense, intuition and heuristics can be manipulated to make dangerous or evil deeds seem wise and good. That's what propaganda is. Rationality has a good reputation because critical, logical, naturalistic practices can put your thinking on a solid footing, which helps you resist propaganda.

But once you're reliant on rationality to defend you from propaganda corrupting your heuristics, you're now vulnerable to sophistry, which is partial or corrupted rationality used in the service of propaganda. Logic is vulnerable to fallacies (forgetting to carry the one); critical thought is vulnerable to conspiracy bloat; naturalism is vulnerable to political capture.

To a rationalist on the defense against sophistry, "rationality extends thus far, but doesn't cover x situation" seems like the thin edge of the wedge; somebody is trying to corrupt you with fallacy or conspiracy or politics. And once they've done that, you're vulnerable to manipulation by propaganda again.

Expand full comment

I think it's actually more likely that rationality can be manipulated to do the most harm (common sense generally doesn't think big enough to destroy the world, in part because it's primarily based on doing things that haven't yet destroyed the world, rather than trying new things that seem great on paper, but which may have effects we're not smart enough to see). Again, Rationality seems much more likely to embrace being a paper clip maximizer (based on some calculation purporting to show it couldn't possibly go wrong) than the parish priest or the village alderperchild, though there's obviously still the risk of g. khan.

Expand full comment

I think you are kind of right that individual rationality is kind of meaningless, and the real rationality is the social or institutional rationality of, for example, Robin Hanson’s “Rationality as Rules” (https://www.overcomingbias.com/2022/02/rationality-as-rules.html), or Jonathan Rauch’s “Constitution of Knowledge”, or Arnold Kling’s “social epistemology” (https://www.arnoldkling.com/blog/epistemology-as-a-social-process/) or “institutional rationality”, or the scientific method, which isn’t mostly an individual process but a set of norms for communal inquiry by which we build on others’ learnings.

Expand full comment

Individual rationality is kind of meaningless because different people can think rationally within their own metaphysical systems (physicalism, Christianity, Marxist humanism, etc) but come to dramatically different conclusions. Metaphysics and social norms have a much more profound impact on life (both individual and societal) than logical rational processing within the given metaphysical and social norms of society.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

Comments:

1. "One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

As a young cat, someone told me that logic and rationality were invented by cishet white males as a tool to oppress minorities and women.

Ignoring for now the basic idea that the point of logic and rationality is that they work the same for everyone (which makes them an unreliable tool of patriarchal oppression), what's to stop white penis people from using irrational and illogical arguments? Most humans of whatever gender or color do so several times a day.

2. “Intuition” is a mystical-sounding word. Someone asks “How did you know to rush your son to the hospital when he looked completely well and said he felt fine?” “Oh, intuition”. Instead, think of intuition as how you tell a dog from a cat. If you try to explain it logically - “dogs are bigger than cats”, “dogs have floppy ears and cats have pointy ones” - I can easily show you a dog/cat pairing that violates the rule, and you will still easily tell the dog from the cat."

Didn't Diogenes cheese off Plato that way? Plato had been lecturing at his Academy, defining a "man" as a featherless biped.

Diogenes returned with a plucked chicken and shouted "behold, Plato's man!"

Expand full comment

I've always wanted to show Plato a kangaroo.

Expand full comment

"even though I bet all four of these people enjoy winning"

Do they though? Newcomb's problem highlights that a surprising number of philosophers don't enjoy winning, or enjoy some non-winning activity more than winning. Philosophers are all weirdos unrepresentative of the general population, but so is everyone involved in this argument. A goal other than winning seems like the simplest explanation of the people you describe as wanting to do whatever favours black women the most.

Expand full comment

"It still feels like there’s something that Pinker and Yudkowsky are more in favor of than Howard Gardner and Ayatollah Khameini, even though I bet all four of these people enjoy winning"

I don't know any of these people personally, and coming to it as an ignoramus, my interpretation of that is "Wow, this Gardner guy is on the same side as Khamenei? He's like an Ayatollah? Well then of course Pinker is The Good Guy here!"

But being an ignoramus, I then go "Well, I kinda feel that may be unfair". I mean, if you read a sentence along the lines of "X and Y on one side, Pinker and Pol Pot on the other" wouldn't *you* feel that was a sneaky way of saying "Pinker is Bad Guy! Loves and supports wrong thinking!" (even though it might just be "how to cook rice" or something, instead of "genociding your own population is a great way to achieve your ends").

So, looking up Gardner, I see he's a developmental psychologist and proponent of the "multiple intelligences" model.

And that makes me (an ignoramus) think this is really a beef between "Only one high, holy and sacred measure of IQ (and that's how well you do on mathematical reasoning)" versus "There isn't just IQ, there are other forms as well".

While it may be correct to think that "being a really talented athlete is not the same thing as being a Harvard psychology professor", I think "and the other Harvard psychology professor in this exchange of views is on the same page as a guy who thinks women should not ride bicycles" is a teeny bit unfair.

Expand full comment

I feel compelled for a moment to be pedantic about the final paragraph:

Economics is not the study of money-making.

Economics is the study of human decision-making under conditions of scarcity.

Expand full comment

I'm a game theorist in the social sciences, and we confront variants on this question all the time. The problem, of course, is that any behavior can be "rational" (in the game theoretic sense) if you assume the appropriate preferences, so I can always jam religion or whatever into a model by saying "well just assume massive negative utility for eating milk and meat together" or some such. In that case, there's nothing distinctive about rational choice theories.

In practice, what separates rationalist and non-rationalist theories of human behavior (and I know this isn't exactly what you're addressing above) is that rationalist theories attempt to explain behaviors in terms of a relatively small number of primitives. You posit that people have a limited number of underlying goals and then somehow optimize in pursuit of them.

What sets many non-rationalist accounts apart is that you can only reproduce them in a rationalist framework by assuming many primitives (especially if these arise via following evidently arbitrary rules). There's not a unifying framework that you can use to derive Leviticus from a few primitives; the only way to do so is to posit hundreds of separate, independent rules (albeit joined up with the broader "do what God says" or similar).

If we turn that around in terms of the rationality movement, I think the same basic thing holds true. If you're trying to pursue some smallish set of underlying goals and you can define the utility of a given outcome in terms of that set of primitives, then you're on the rationalist pathway. And from there you can start optimizing (or satisficing). You'll use plenty of heuristics for the reasons expressed in the post, but you have a way of knowing whether or not they work in terms of the primitives.

In contrast, if you're engaged in irrational decision making (such as blind rulebook following), you have no primitives to fall back on. You can't say that keeping kosher is "working" or "not working" in terms of something else. It just is. The rules are an end in themselves, and so you can't be rational. There's nothing to optimize. You're just trying to follow rules/traditions/whatever. Your heuristics aren't shortcuts to something. They are the destination.
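
To make the "just assume massive negative utility for milk and meat" move above concrete, here's a toy sketch in Python (names and numbers are mine and purely illustrative, not anyone's actual model):

# Any rule-following behavior becomes "rational" if we may assume the
# right preferences. Illustrative numbers only.
MEALS = ["milk", "meat", "milk+meat"]

def base_utility(meal):
    # A generic eater: more food, more utility.
    return {"milk": 1.0, "meat": 2.0, "milk+meat": 3.0}[meal]

def kosher_utility(meal):
    # Same preferences, plus an ad hoc penalty that makes the rule "optimal".
    penalty = -1000.0 if meal == "milk+meat" else 0.0
    return base_utility(meal) + penalty

print(max(MEALS, key=base_utility))    # milk+meat
print(max(MEALS, key=kosher_utility))  # meat

The second agent now "rationally" keeps kosher, but only because the conclusion was smuggled into the primitives, which is exactly why one-penalty-per-rule theories explain nothing.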

Expand full comment

"...rationalist theories attempt to explain behaviors in terms of a relatively small number of primitives. You posit that people have a limited number of underlying goals and then somehow optimize in pursuit of them."

Well, that's one social science version of Rational Action Theory (RAT). There are others. In political science, you have the Paradox of Not Voting and the many attempts to solve that Kuhnian puzzle, leading people like Downs and Converse and many others (& the father-figure Schumpeter) down paths where RAT, semiotics, symbolic interactionism, theories of trust and reputation, evolutionary signalling games and related micro-interactionist theories meet.

It's a big house with many rooms, RAT.

Expand full comment

I don't buy this. Rational decision-making rests on the assumption that preferences aren't circular - if someone strictly prefers A to B, strictly prefers B to C, and strictly prefers C to A in all situations, then that person's preferences can't be explained in the standard rational framework. (You could try to wiggle out of this in various ways, I suppose... like maybe someone prefers to seem like they always prefer B to C, or prefers to have memories of choosing B instead of C. But then I could come up with equally contrived experiments where we randomly choose two of A, B, C for you to choose between, and erase the knowledge of which two things you were choosing between after you've made the choice...?)

Similarly, expected utility as an explanatory framework for real-world human economic behavior is falsifiable - and in fact my understanding is that it has been falsified by experiment (and that something called "prospect theory" is a slightly better explanation for how humans actually behave). The theory of black-box rational actors maximizing expected utility puts actual constraints on which behaviors could be attributed to completely rational beings, and real people often violate those constraints.
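
To make the falsifiability point concrete, here's a minimal Python sketch using the published Tversky-Kahneman (1992) parameter estimates (the 50/50 gamble and the code are mine, just for illustration): a pure expected-value maximizer is indifferent to a coin flip for plus or minus $100, while prospect theory values the same bet well below zero, matching the experimental finding that people refuse such bets.

ALPHA, LAMBDA = 0.88, 2.25            # diminishing sensitivity, loss aversion
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69   # probability-weighting curvature

def value(x):
    # Prospect-theory value function: concave for gains, steeper for losses.
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p, gamma):
    # Inverse-S probability weighting function.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

gain, loss, p = 100.0, -100.0, 0.5
expected_value = p * gain + (1 - p) * loss                       # 0.0
prospect_value = (weight(p, GAMMA_GAIN) * value(gain)
                  + weight(p, GAMMA_LOSS) * value(loss))         # about -35

print(expected_value, prospect_value)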

Expand full comment

It is hard to argue with anyone who argues just to be argumentative. And reasons are often just assertions, not reasonable in the least. Does not reason imply rationality? But questions do not necessarily imply that the questioner really seeks an answer. If one honestly wants an answer, his question is the second step in finding the answer, and might very well lead to more, or more specific, questions long before he has found the answer. Seek and you shall find. Ask and you shall receive. But, often, when you ask for something specific, say bread, you may receive a stone. But if you request wisdom or knowledge you are more likely to receive them. For both are to be found in a world that is built on such principles. And our minds, our reason, are built on the same rational, wise principles as everything else we observe. Although both the mind and the world are sometimes devious in asking and giving.

Expand full comment

Thanks a lot for the nice post, Scott. While I like what you say as a proposal for what it means to be a Rationalist, I can’t help but feel like Gardner opposes something different.



A theme in the original Star Trek series was that Spock was super-duper smart and rational, but Kirk and McCoy would try to get him to be more “human,” making some choices that “weren’t rational” but “were right.” For example, in the fourth Star Trek movie it’s a big “human” moment when Spock agrees the Enterprise should stay behind and rescue people, against tough odds, instead of fleeing to save the ship. I feel like Gardner — and many people I know — have this sort of perspective on “Rational” versus “Right.” They applaud Spock here for “realizing being rational isn’t always the right thing to do.”

I disagree with this framing, and I think the right argument against it will be more basic than your nuanced take. I.e. it won’t need to say “Well when Spock promotes rationality, he means thinking about how to think.” It’ll clear up some basic confusion, like “Rationality doesn’t mean you can’t listen to your emotions and how much you care about people.”

Expand full comment

> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points...

OK, this is unfortunate. High-IQ white male speaking, let me try to unpack this a bit better. The knock here is that many people who claim the mantle of rationality are not in fact rational at all, they just like to dress up their own preferences, biases, and beliefs as "logical."

You might right now be racing to object -- correctly -- that this isn't a knock on rationality itself, this is a knock on the misappropriation of rationality. But that's the point that (I think) many so-called anti-rationalists are making. People can rationalize just about anything, and personally I have seen very little evidence that even the self-described rationalist community is much more than an affinity group for people with certain interests and political beliefs. To be clear, I don't think there's anything wrong with such an affinity group; I personally, as a high-IQ white male, share many of those interests and political beliefs. But when we start dubbing those affinities as "rational," with all that implies about people with different affinities, well, as the kids say, things get problematic.

To me, a truly rationalist approach to life would involve massively more epistemic humility than most people can muster. I think the confidence intervals on our beliefs -- including such articles of iron-clad faith as "communism is wrong and terrible" -- are much wider than we think. We then end up in a situation like the one William MacAskill explores at length in Moral Uncertainty.

Again, to be clear, small-r rationality is still core to the enterprise here, although certainly it is worth asking to what extent in practice moral behavior rests on rational calculation vs. emotion. But I don't blame anyone for being suspicious of whatever is being smuggled in under the name of big-R rationality.

Expand full comment

I think heuristics do arise out of rational underpinnings, and the problem with trying to define them is akin to what Chesterton describes (in his book "Orthodoxy" which puts his case as to how and why he believes in Christianity) below:

"It is very hard for a man to defend anything of which he is entirely convinced. It is comparatively easy when he is only partially convinced. He is partially convinced because he has found this or that proof of the thing, and he can expound it. But a man is not really convinced of a philosophic theory when he finds that something proves it. He is only really convinced when he finds that everything proves it. And the more converging reasons he finds pointing to this conviction, the more bewildered he is if asked suddenly to sum them up. Thus, if one asked an ordinary intelligent man, on the spur of the moment, “Why do you prefer civilization to savagery?” he would look wildly round at object after object, and would only be able to answer vaguely, “Why, there is that bookcase . . . and the coals in the coal-scuttle . . . and pianos . . . and policemen.” The whole case for civilization is that the case for it is complex. It has done so many things. But that very multiplicity of proof which ought to make reply overwhelming makes reply impossible."

Heuristics versus Rationality is the Rationalist (like Pinker) going "I have this lovely neat equation, what do *you* have?" and the Heuristician (is that a term?) looking around and going "Uh, well, there's the coal scuttle? And the table?"

It's then very easy for the Rationalist to laugh kindly at the Heuristics guy, but that laughter is misplaced. That quoted tweet does remind me of what Chesterton said about Matthew Arnold in "The Victorian Age in Literature":

"But Arnold kept a smile of heart-broken forbearance, as of the teacher in an idiot school, that was enormously insulting. One trick he often tried with success. If his opponent had said something foolish, like “the destiny of England is in the great heart of England,” Arnold would repeat the phrase again and again until it looked more foolish than it really was. Thus he recurs again and again to “the British College of Health in the New Road” till the reader wants to rush out and burn the place down. Arnold’s great error was that he sometimes thus wearied us of his own phrases, as well as of his enemies’."

Expand full comment

People argue preferences and assumptions, not "rationality".

"Given the choice, I will choose vanilla ice cream."

Can't use rationality for ice cream flavors? Why assume you can do it for human flourishing?

"Given the choice, I will choose 1 year of pleasure for 100 people."

"Given the choice, I will choose 100 years of pleasure for 1 person."

"Given the choice, I will choose a million years of misery for humanity"

"Given the choice, I will choose 100 years of pleasure for humanity."

Now let's use reason to decide on the goal of our society. What are the trade-offs?

"How many years, making How many pleasure chemicals, among How many brains, of How many different Types?"

Expand full comment

(p and not-p) implies q - the rest is commentary.

Expand full comment

The last line of this post reminded me of a hilarious article they made me read in officer training school. In it, a guy who spent his career studying rational decision-making just flatly claimed, without any attempt at justification, that the best way to improve your decision-making was to study rational decision-making.

It was so funny on so many different levels.

Expand full comment

I think the popular objections to rationalism are based on a model that defines rationality as "putting more faith in your own judgments than in social judgments."

E.g: A rationalist with no experience or knowledge in medicine might look at the information that's been provided by authorities about vaccines and say "huh, I do not have enough information about this subject to make a coherent judgment on it. Therefore I will choose to rely on authority and get vaccinated."

Meanwhile, everyone else is going "you have to get vaccinated, doctors agree it's a good idea and if you don't people will mock you/be mad at you."

Those have the same practical endpoints but they're very different thought processes. One says "given my low confidence, it is my judgment that outsourcing this decision to authority is the correct move" and the other says "I don't care what I think is right, I am surrendering to the collective will." The first person could change their mind, the second could not.

When people talk about "straight white males" and reasoning, what they're saying isn't that "straight white males can put their thumbs on the computational scales and make other people think their power is good." They're saying "straight white males <something pseudo-psychological> and therefore they think they know everything. They don't understand that they should surrender their ability to reason to the collective will, especially since their motivated reasoning is likely to do evil because of <something pseudo-sociological>."

While I mostly see this philosophy as the Great Satan, I can steel-man it pretty easily: Even smart people are good at convincing themselves of false things that make them feel better. A ton of research indicates that we make choices for reasons that are sub-rational and then just make up a plausible-sounding reason why that choice was the right one. Even the act of saying "I am going to surrender to the popular will" on a question gives you the opportunity to motivatedly refuse to surrender to the popular will. You must pre-bind yourself to that surrender or it won't happen effectively, and then you'll use your straight white maleness to do evil, thinking it's good.

Expand full comment

That sounds basically right to me, but there's a bit of a chicken-and-egg problem. Are heuristics "irrational" because everyone uses them, and they're therefore baked into social reality? Or is following social reality just another heuristic, championed by the anti-rational strawman I just built because they love all heuristics?

In practice there's exactly 0 difference between the two, and they either suck or have good arguments for them in exactly the same way. Can anyone think of some way to separate the two models?

Expand full comment

I think rationalists have trouble separating the two because rationalists don't think that way. It's just a language they don't speak or a sense they don't have.

It's like all those people saying that modern life is unfulfilling because of a lack of a higher purpose so they're going to become Catholics. Their reasoning generally goes "I know there's no God, but living life as though there is a God appears to be better for everyone. Therefore I am consciously choosing to live as a Catholic."

In a purely utilitarian sense they may be completely identical to a practicing Catholic. They may act exactly like a practicing Catholic in every way. But they're not a practicing Catholic. And ultimately that will affect whether this exercise in "finding purpose" works. You can't reason your way into doing it right, because reason isn't the point.

Expand full comment

Probably also worth mentioning that there's a subset of social justice thinkers (not the best subset by far) who believe that persuasion against personal interest is impossible and that demands that they make a persuasive case that they're right are actually tools of vested power.

Easy to draw a straight line from that kind of thinking to the coercive tactics that rule in certain corners of the internet.

Expand full comment

One could empirically study the study of study by randomizing 3000 forecasters to read Yudkowsky, Tetlock, or Pinker, and then seeing which group makes the most money forecasting.

I think I already know the result would probably be Tetlock > Yudkowsky > Pinker. Tetlock's work is the most narrowly focused on winning at prediction markets. All the epistemic rationality in Yudkowsky could in theory be useful for winning in prediction markets, but might be harder to apply and require more inferential steps.

There's a level between the diamond-knack guy and the geologist, which is where the diamond-knack guy writes a book of aphorisms about how to find diamonds. He makes his knack transmissible without placing it in the context of a general theory. Diamond-knack guy's book is probably more useful for finding diamonds than a general intro geology textbook. But the latter contains a lot of important things you won't learn from the former.

Expand full comment

Yudkowsky thinks AI is going to have a massive possibly world destroying effect in the near future. I think that if Yudkowsky is right, a lot of people will be very wrong. If most people are right, then Yudkowsky is going to be very wrong.

Expand full comment
Mar 4, 2022·edited Mar 4, 2022

As an internet phenomenon and locus of discussion, rationalism (to me) seemed mainly concerned with pointing out how cognitive biases, motivated reasoning and complex social incentives get in the way of our stated objectives. Your article on "the toxoplasma of rage" has some good illustrations (https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/)

1) "PETA doesn’t shoot themselves in the foot because they’re stupid. They shoot themselves in the foot because they’re traveling up an incentive gradient that rewards them for doing so, even if it destroys their credibility."

2) "If campaigners against police brutality and racism were extremely responsible, and stuck to perfectly settled cases like Eric Garner, everybody would agree with them but nobody would talk about it. If instead they bring up a very controversial case like Michael Brown, everybody will talk about it, but they will catalyze their own opposition and make people start supporting the police more just to spite them. More foot-shooting."

At its core, I think we can safely define rationalism as a kind of applied epistemology concerned with revealing socially-transmitted cognitive errors and proposing tools to overcome them (Bayesian reasoning, registered predictions, ...).
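
For concreteness, here's the first of those tools in toy form (my numbers are made up): Bayesian reasoning just means updating a probability systematically when evidence arrives.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) via Bayes' rule.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothesis: "this outrage-bait story is accurate as reported."
# Suppose accurate stories survive a fact-check 60% of the time and
# inaccurate ones only 10% of the time (invented numbers).
posterior = bayes_update(prior=0.30, p_e_given_h=0.60, p_e_given_not_h=0.10)
print(round(posterior, 3))  # 0.72: one observation moves 30% to 72%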

Expand full comment

Rationalism may or may not have anything to do with human capacity but everything to do with the accessibility or understanding of nature. Is the world comprehensible or do dragon-beasts need to be invoked to cover certain problems?

The author here is incompetent and has confused "heuristic versus computation" with "prediction versus accommodation" respectively. There is an enormous amount of writing on this topic, which makes the author's choice to discuss Keynes ironic, since it largely began with Keynes writing the following lines:

" If a hypothesis is proposed a priori, this commonly means that there is some ground for it, arising out of our previous knowledge, apart from the purely inductive ground, and if such is the case the hypothesis is clearly stronger than one which reposes on inductive grounds only. But if it is merely a guess, the lucky fact of its preceding some or all of the cases which verify it adds nothing whatever to its value. It is the union of prior knowledge, with the inductive grounds which arise out of the immediate instances, that lends weight to any hypothesis, and not the occasion on which the hypothesis is first proposed."

Expand full comment

This post -- and really the project of this entire blog -- is an example of Missing the Point due to hubris, and probably also due to an essentialized, chronic fear of losing control.

The pattern is to seek comfort and confirmation by relying on the active mind to create a small universe that provides a sense of power and security -- typically by setting up an illusory polarity, in this case between "heuristics" and "rationality" (as well as between two writers). This performative may be amusing, and the results of the game may feel nice and tidy, but it takes place within a finite playground -- which is not necessarily a problem that would prompt an outsider like me to comment, except when the participants of such a game cling so vehemently to their forgetfulness that they mistake the borders of their playground for the actual borders of the universe.

Especially when people are as intellectually capable as many here are, and when they so zealously team up to tighten the shared finger-trap of their creative architecture, it gets easier and easier for them to bracket all unknowns as abstractions to be casually dismissed -- even though these abstractions impact real lives. In effect, libertarian-style rationality is particularly appealing to those who generally think they've figured out how to "win" because of their tidy little constructs and structures that allow the superego to view itself expansively, as if concerned about the benefit of all, while actually remaining effectively blind to their true nature, which is fearful, contractive and essentially self-oriented.

So the problem has nothing to do with rationality. Rationality is just a tool, like a triangle is a strong shape. The problem is the defensive crusade to convince as many people as possible that this cozy little abstract world should be preserved at the cost of long-overdue humanistic expansion.

Doctors in today's libertarian world, stuck inside "rational" markets for care delivery (since libertarians so enjoy shooting down the possibility of any new system that might not conform to market-oriented heuristics), have become predominantly statisticians. A rationalist might praise this as pragmatic, knowing that oddball patients will fall through the cracks, but content that a statistically efficient system will save the most people. But this would be a total failure of imagination, as the rationalist is left with scant emotional drive for asking questions that have nothing to do with statistics, such as whether sick people with oddball diseases or syndromes actually might have much to contribute to the world that is original and expansive.

Remember, in classical philosophical terms, the idea of intuition is a much deeper caveat to awareness than presented here. Intuition alludes to the shared recognition among all self-conscious beings that everything in our minds and senses could be illusory. Or semi-illusory. So to say something is a dog -- a separate entity in a fundamentally material world -- is an entirely intuitive statement. Dismiss this as solipsism if you will, but those who ignore this condition practice disruption willfully ignorant of the life that flows beyond their small playgrounds. To admit to the true scope of one's intuition is an example of root-level humility, which is lacking around here. Because it's more comforting to simply scoff at anyone who remains agnostic about the dog.

Expand full comment

I think the most delightful part of this comment is how I'm absolutely certain it's totally wrong by the exact methods it dismisses, and vice versa. It points to no possible tiebreaker, no external evidence to sway one way or the other, and how could it!? There may well not be any evidence that the two sides could agree on, seeing as the only way to interpret evidence is through a framework. I mean, I literally read the example with the doctor and went "well it's obvious the doctor shouldn't care about strange cases, not if it saves more lives overall". Am I right? Am I trapped in a box? There's literally no way to know, it's delicious.

Expand full comment
founding

> You can’t find the best economist by asking Keynes, Hayek, and Marx to all found companies and see which makes the most profit - that’s confusing money-making with the study of money-making.

A related passage from Xunzi:

> The proper classes of things are not of two kinds. Hence, the person with understanding picks the one right object and pursues it single-mindedly. The farmer is expert in regard to the fields, but cannot be made Overseer of Fields. The merchant is expert in regard to the markets, but cannot be made Overseer of Merchants. The craftsman is expert in regard to vessels, but cannot be made Overseer of Vessels. There is a person who is incapable of any of their three skills, but who can be put in charge of any of these offices, namely the one who is expert in regard to the Way, not the one who is expert in regard to things.

Expand full comment

Re. "But I recently reviewed the discourse around Ajeya Cotra’s report on AI timelines, and even though everyone involved is a math genius playing around with a super complex model, their arguments tended to sound like “It still just doesn’t feel like you’re accounting for the possibility of a paradigm shift enough” or “I feel like the fact that your model fails at X is more important than that my model fails at Y, because X seems more like the kind of problem we want to extrapolate this to.” ":

Another example: Around 1990, the people in AI trying to develop general-purpose intelligent agents split into 2 groups: symbolic AI vs. reactive behavior robotics. (There was also a big fight at that time between symbolic AI and neural networks, but that was mostly about classifiers.)

But there was little if any debating in the debate. It was obvious from the start that symbolic AI was better at playing chess, and reactive AI was better at not running into walls. Mostly people just argued over which kind of problem was more-important: playing chess, or not bumping into walls. The most-cited paper from that "debate" was probably Rodney Brooks' 1990 article "Elephants Don't Play Chess", which you don't really have to read once you've read the title.

Expand full comment

The more calculus we do, deriving derivatives further and further from reality and lived experience, the more degrees of freedom we allow ourselves, and we risk making up huge philosophical systems of incredible coherency, clear arguments, and total harmony of concepts which have zero purpose beyond being thought baubles. A child running away in a tantrum proclaiming they were really right before slamming their bedroom door and flicking the light switch on and off 8 times.

The abstractions to ‘higher’ order thinking can make the higher order thinker feel better and like they’re doing the more important thing that very few people are capable of doing. Which makes them special and part of that small smarter group of people dragging humanity kicking and screaming through history with all the progress due to them.

But this is silly and it has certainly been a group effort with many ways of participating. Another model might be to have the useless king philosopher of self rationalisation at ‘the top’ with some scientists below him trying to connect ideas to reality, then below them some engineer actually making things in reality, and below them all the unwashed masses of idiots who are the final arbiters of utility in terms of if they find any value in whatever the worker made which the engineer designed which the scientist made models of and which the philosopher rationalist umm….vaguely thought about what everyone else was doing?

I’m not sure if that part matters, but we can point out a handful of paradigm shifts…which were a reflection of changes in engineered reality or biological reality more often than they were brilliant thought designs of historical rationalists.

It can be difficult to perform science with rationality alone, since science is where we try out everything and see what works. It can be difficult to turn an idea or model of how things work into an actual product when translating science to engineering. And it can be harder still to figure out what people need versus what it is possible to build, hence all the very different things that get made at great effort and cost only to be ignored or abandoned.

Each layer of translation and performance across and within these abstracted layers has value. I think a big trap is going ‘up’ this ladder and proclaiming higher is better or that my part in it is better.

Oddly one would expect the rationalist philosopher to see this the most, but it is often the opposite, with arrogance growing as you go 'higher' in these abstractions away from reality. Humble observation and use of what is made can be the more socially powerful tool, where one can find a simpler appreciation for what everyone involved does to make it possible to get a smartphone into their hands. Often we see people using their degrees of freedom to big up themselves, since those freedoms allow more choices and are less fixated on reality.

And if you might think to say that I'm privileging base reality and human experience as the core measurement here, then I'd point you back to the thought-baubles comment at the start of my post. Going from ideas about ideas, to ideas about reality, to doing things in reality, to making things happen, and finally to using things in your life is a difficult path; whoever is good at one or some of those steps is rare, and whoever can do all of them is exceedingly rare.

We risk the mistake of the rationalist up high, who is also a human user of things, thinking to themselves that they can do every step in between! I can science, I can engineer, I can build... but just because you can think and use doesn't imply much about being able to distill the infinite thought space down to the broad space of using things to help you survive.

Expand full comment

I feel like the distinction between doing rationality and studying rationality has a tidy analog in sports. When you practice a sport, you tend to do things *very* methodically. There's a "right" way to do everything, and you practice as slowly and in as small of chunks as you need to in order to become proficient. When you compete, however, you just do stuff. Of course you do your best to implement all of the techniques you've been practicing, but when you're going against an opponent, you're going to do a lot of things in ways that are technically suboptimal, because you have to prioritize speed, or because you're off balance, or because you need to answer something your opponent has done. If you were to slow down and employ "perfect" technique, you'd lose terribly. Even in golf, which is not fast or violent and doesn't have an opponent interfering, coaches will often tell someone that they need to "play golf, not golf swing" when that person is thinking too much about technique during a round.

I think it's rather interesting how unremarkable it is that sports are like this, that *obviously* you don't compete the way you practice, and that might give us some useful information about the use of rationality.

Expand full comment

From the perspective of a philosophy student the rationalist community uses "rationality" in a really weird way. As far as I can tell, most people here either use "rational" to mean "good" (with all the vagueness of ordinary-language deployments of "good") or they use "rational" to mean "employing one of the methods from our bulleted list of rational methods". Treating rationality as a study of studies has some intuitive appeal, but still, weird. There's hardly a trace of the Western philosophical orthodoxy that rationality is a basic cognitive capacity to step back from, evaluate, and adjust our goals and beliefs so that they are consistent with each other and the world. People who are skilled at reaching consistency, realizing the aim of rationality, we call *wise*. Wisdom can involve intuition or explicit methodologies; any strategy is admissible, as long as it works. If we use "rationality" to refer only to the publicly shareable strategies, then what do we call the game in which these are strategies?

Expand full comment

> There's hardly a trace of the Western philosophical orthodoxy that rationality is a basic cognitive capacity to step back from, evaluate, and adjust our goals and beliefs so that they are consistent with each other and the world.

Having trouble parsing that sentence. Are you arguing that stepping back from, evaluating, and adjusting our goals and beliefs are NOT part of philosophical orthodoxy? I don't think you are. But if you are, I'd be curious if you could expand upon this idea.

Expand full comment

Rationality, as in the rationality community, isn't supposed to be rationality as understood by mainstream philosophy, and the community is often disdainful of the mainstream. But the characteristic assumptions of the community (that a small number of methods dissolve everything, that there's a single objective truth revealed by Solomonoff induction, that rationalists should agree via Aumann's theorem and CEV) are susceptible to criticism from mainstream philosophy in turn, which assumes the correctness of at least one form of rationality, unlike Gardner's critique.

Expand full comment

If we were to take a survey, would most of the people on this list agree on the criteria for being a rationalist? (And as an aside, did EY ever set out such criteria?)

Expand full comment

He wrote the sequences, none of which has ever been retracted. So there's a bible. Maybe Rationalists don't believe in it, but sure as hell want people to read it.

Expand full comment

Scott’s characterization of Gardner’s reasoning is spot-on; for most of the text he doesn’t address any of Pinker’s arguments. The only actual ‘criticism’ is at the end with:

“The best chance for our planet is for us to be able to intertwine these Axial strains of thought and feeling. As Pinker would presumably agree, they entail reason—but respect, relations, and religions as well… and I would place the emphasis more on the latter three”.

Basically, Gardner doesn’t even have a problem with any of Rationality’s main propositions, only with its emphasis; something like:

‘Sure, rationality is important, but at the end of the day what decides who does evil or not isn’t how well they solve the Monty Hall problem, but if they have the other RE’s. They could be as rational as they want, and still be autocrat nazis/commies who don’t respect people or religion and do atrocities’

Might be going on a complete detour here, as part of Scott’s piece was focused more on the rationality vs anti-rationality aspects of debate, which in itself is clearly important and interesting, but there’s this missing thread, if you’re squinting hard while reading. Namely, that Pinker makes a book about rationality, arguing that it is good and the world would be better off with more of it, and Gardner is saying the expected value in terms of world quality would be higher if one were to focus on these other factors of respect, relations and religion.

Which is the kind of general argument that can be overapplied conveniently. Russia and Ukraine, that’s just Putin not being able to respect Ukraine’s history and its people. Major conflicts in the middle east, that’s a lot of religion. Racial profiling and police violence, if only we had more diverse relations.

But at its core, there is the claim that if there were two groups of people, one whose focus was “maximize rationality” and another whose focus was “maximize the other three RE’s (in the very peculiar way Gardner describes them)”, then the second group would end up better off; that prediction seems sufficiently non-trivial to count as a valid criticism.

Minor observations:

My use of religion for Gardner’s argument is somewhat misleading, he uses it in a different way, and emphasizes that ideally religions should be:

“personal belief systems ... not used to cudgel, pressure, or—indeed—make war on others”

Which is weird because A) religion is a belief system shared with others, and B) it usually is used to cudgel, pressure, and make wars; Gardner takes these as clearly bad aspects, whereas they might be its driving features.

Expand full comment

It's easy to overapply rationality, too. One of the arguments in favour of rationalism is that it's about making things better in general, so who could object? And that's about as broad as it gets.

Expand full comment

I hate to be a Debbie Downer, but I think the whole rationalist enterprise is a utopian pipe dream. My a-rational reasoning goes thus. It seems to me that 99+% of our mental activity is done without formal reasoning (and in many of us it's done without words). Yes we can use the tools of rationality to systematically observe the world through our sensorium, create hypotheses, and then test those hypotheses, but we risk being undermined by our pattern recognition systems, by our cultural prejudices, and by our instinctive reactions. Eventually, we *may* stumble across truths, but ultimately all conclusions drawn through rationality must remain provisional at best. <says I waiting for the screams of outrage>

I've enjoyed immensely the discussions I've had on Astral Codex, but I'd have to say I haven't seen much rational argument on this list (Scott's postings aside). My question is this, if we're all invested in rationality, why aren't we using it more? My provisional answer is that most of us aren't even aware of when we're arguing a-rationally.

Expand full comment

I think the rationalist community prizes making arguments that are often more philosophical than scientific. (I share this weakness.) However, along the way, people often do find evidence and share it. I think this makes it worth doing. The evidence is usually more interesting than the arguments.

For example, you can find interesting graphs and share them. People can judge them heuristically by looking at them. Sometimes this is enough: https://xkcd.com/2400/

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

I'm all for rational analysis when we encounter new phenomena and information flows. Rational analysis can yield new patterns of understanding for us even if the initial results later turn out to be wrong. Some of that understanding will benefit our utilitarian needs in meatspace, but some understanding is only worth the pleasure it brings us. As for myself, I've gotten to that stage of life where I consume knowledge (new information flows) only for the pleasure it gives me.

Being a mystic, I guess I fall into the Gardnerian camp, because I don't think our minds are really designed for rationality. It's like advanced meditation, or training for marathons—even if we were to practice rationality with one-pointed mindfulness, we'd still be slipping into a-rational modes of cognition every time we transitioned into other activities. Likewise, the human mind has cognitive limitations (notice that I say cognitive and not computational limitations), and I think it's very likely we may have reached the limits of what the scientific method can yield (science being roughly adjacent to the boundaries of rationality) — though (maybe? I hope?) we're in a trough before the next wave of Kuhnian knowledge reorganization.

And BTW, being a mystic, as far as I've been able to ascertain all the mystical traditions rely on a rational framework to analyze, organize, and categorize their praxis and their a-rational states of cognition. Most modern mystical traditions require that an advanced practitioner be versed in the arguments for rationality as well as the practice of rationality. I will admit there are a bunch of non-intellectual mystics who don't really give a shit about optimizing praxis and analyzing their a-rational experiences, but, likewise, the majority of people who claim to be rationalists don't seem concerned with optimizing their praxis or their rational outcomes (which are also experiences). Mystics just don't see a state of rational mind as being the be-all and end-all of conscious experience.

Expand full comment

Yes, the rationality community is bad at noticing what kind of epistemology they are using as opposed to recommending. In particular, they condemn mainstream philosophy's use of "conceptual analysis" while using many conceptual (at least non empirical and non mathematical) arguments themselves.

Expand full comment

I think that most people who attack rationalism are attacking one of two overlapping things.

Firstly, and most commonly, they may be attacking rationality in the sense of "the ways rationalists (i.e. people who think that Scott Alexander is the rightful caliph) think". That includes a lot of stuff that doesn't really line up with what rationalists think of as "rationality"; I think it contains a lot of stuff that should be attacked – notably, knee-jerk hostility to social justice politics that I think often goes well beyond what is justified – but that describing those things as "rationality" rather than as "regrettable quirks of the current aspiring rationalist movement" ought to be avoided.

Secondly, and more interestingly, I think they may be attacking using your near view too much and your far view too little.

Suppose I have to answer a hard object-level question. There are two things I can do:

:- I can try and work out the answer for myself (“near view”),

:- I can substitute the question “what is the balance of other people's opinions, weighted by expertise, on X?” for “what is the right answer to X?” (“far view”).

Far view has the disadvantage that it's possible that the question I answer may have a different answer to the question I'm interested in, but it also often has the advantage that I'm much more likely to work out the correct answer to it.

Given the choice between trying to measure the flip of a really-eroded coin where I'll mistake heads for tails 40% of the time by looking at that coin, and trying to measure it by looking at a coin that comes down the same way 80% of the time and where I can tell which face I'm looking at 90% of the time, I'm much better off with the proxy.
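
Spelling out that arithmetic in a few lines of Python, with the same numbers:

p_direct = 0.60                      # eroded coin: misread 40% of the time

# The proxy coin reports the truth when it agrees with the real coin and
# is read correctly, or when it disagrees and is misread.
p_proxy = 0.80 * 0.90 + 0.20 * 0.10  # = 0.74

print(p_direct, p_proxy)             # the proxy wins, 0.74 vs 0.60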

To me one of the striking things about the rationalist community is that a lot of the philosophising emphasises far view over near view (in my opinion wisely), but the actual discourse of rationalists is noticeably more near-view-centric than Blue Tribe discourse - “far view” isn't quite the same as “trust the experts”, but it's pretty close to it, and that's a value that the Blue Tribe generally subscribe to and the Grey Tribe generally scorn.

The most valuable post on rationality I've seen was https://thingofthings.wordpress.com/2015/10/30/the-world-is-mad/, by Ozy, making the point that when you've realised that everyone else is essentially incapable of consistent rational thought*, there are two obvious responses – one is “therefore if I can become capable of, or even better at, consistent rational thought, I can win big”, and the other is “and therefore I should assume that I am also incapable of consistent rational thought, and be accordingly cautious”. I'm firmly in the second camp – I think that as a way of being confidently right about important questions more often rationalism has limited value, but as a way of being confidently wrong about important questions less often it's invaluable.

*Many people are capable of thinking fairly rationally much of the time. No-one is capable of avoiding being glaringly irrational on occasion. Pretty much no-one is capable of telling when those occasions are for themselves, although it's often quite obvious when it happens to other people.

Expand full comment

Ok but here's the real question: why is Newcomb's paradox framed as if you're making a choice? Doesn't the problem imply that you exist in a deterministic universe, and therefore that you will just do whatever it's already been determined you will do?

Expand full comment

I will, with rounding-error certainty, breathe within the next two minutes. However, I will also choose to breathe in the next two minutes.

Expand full comment

Newcomb's paradox is framed as if you are making a *conscious* choice between *two* different things. You don't have the option of not breathing, and it isn't a conscious choice.

Expand full comment

I do have the option of not breathing (for example, before hitting post, I did not breathe for ~20 seconds purely to make this point), and whenever you're reminded that you're breathing, it becomes a conscious choice.

Expand full comment

You don't have the option of not breathing for long, and consciously noting isn't consciously deciding -- see readiness potentials.

Expand full comment

I'm not sure this is relevant; but I had a girlfriend who always used emotional responses to my logical assertions. One day I decided to start off with an emotional assertion and she countered with logic. After a moment of stunned silence I asked her "When did you start using logic?", and her response was "I'll use whatever it takes to get my own way!"

Expand full comment

The assumption that one boxing in Newcomb's problem wins has always bothered me. I mean it kind of straddles the line between being an assumption that you can't question because then you'd be fighting the hypothetical; and being a deduction that is presumed to arise due to the structure of the problem but if you look closely at the problem as written it is not at all clear that one boxing wins (btw, it's also not clear that two boxing wins although that does seem more likely in general). You would need a bunch of extra unstated assumptions to make that deducible but if you point that out then you're back to fighting the hypothetical.

But you have to fight the hypothetical because if 'one boxing wins' is just allowed as an assumption then the thought experiment has no more relevance to decision theory than saying, "I'm going to roll a fair die. You can bet on 'six' or 'not six'. By assumption betting on six wins. So make sure your decision theory can properly handle this case." It just becomes a ridiculous non-sequitur.

As soon as you make the assumptions that allow you to deduce that one boxing wins explicit, it allows you to notice how limited the scope of Newcomb's paradox actually is. For practical purposes it will almost never happen, and almost all real-life (and even most hypothetical) situations that look like a 'one boxing is optimum' Newcomb's problem really aren't. (They may be Newcomb-like, like the prisoner's dilemma, but I'm talking specifically about the Newcomb's problem Eliezer outlined in 'Newcomb's Problem and Regret of Rationality'.)

One boxers typically don't think about the ways that a predictor could manage to be highly accurate that don't imply one boxing as the optimal strategy.

For example there could be selection effects. Like the predictor only offers the dilemma to those who have publicly committed to 'two boxing', since 'one boxers' have a motive to defect.

It could be iterated (as indeed is implied in the thought experiment), in which case the predictor could simply always predict two-boxing. The choosers could deduce this strategy as one possible way that a predictor could have higher than expected accuracy, or perhaps just have observed prior iterations.

The predictor could employ a transparent prediction algorithm in order to make its prediction itself predictable. As above, the prediction would need to always be 'two boxing'. But in this case it doesn't even need to be iterated.

Even in the case where the predictor is not taking any of these easy routes, the details of how it is actually making its predictions really do matter (unless it's literally infallible).

Expand full comment

The question is really one of ethics, or a zen koan, about who you are, rather than what the right answer is (I'm an oddball high discount rate one-boxer). E.g., you say "one-boxers don't think about X" as though they should care about consistency in responding to some silly hypothetical. Put in lower brow terms, the question whether you would go to school naked for $1 million has nothing to do with whether you really would (if the money were on the table, of course!), and everything to do with who you think you are, want to be, want to appear as, etc.

Expand full comment

If you're interpreting it as ethics or a zen koan then you're not interpreting it in the spirit it was intended.

Which misses the entire point.

It's supposed to be interpreted in the context where trying to be rational and seeking to maximize utility are taken as givens.

Eliezer outlined a scenario that superficially seems to indicate that one-boxing wins. My claim is that when you examine the scenario that Eliezer sketched out closely and think about it carefully it's far from obvious that one-boxing wins.

One might argue back that since it's a hypothetical and we can just alter it so that one-boxing can be shown to win then we may as well skip that step and just assume that one-boxing wins.

I'm saying you can't do that, because at that point you're just assuming your conclusion (if you define rationality as winning, as Eliezer does).

So you have to actually construct a hypothetical where one-boxing wins and when you do this you find that you need some pretty unrealistic assumptions to make it work.

To be clear there are ways to make it work but the assumptions you need are way out there. Like the predictor scans you and simulates a near exact copy of you before it makes the prediction. Or the predictor has access to a hyper-computer or time travel or something.

If you construct a Newcomb's problem scenario where the predictor has a very high accuracy but don't use crazy assumptions, then when you examine it closely you find either that two-boxing wins pretty reliably, or that you can't determine whether two-boxing or one-boxing wins.

Expand full comment

Sorry, but to many one-boxers, it seems like a question about whether you should be willing to risk $1 million for an extra grand (the answer should almost always be no). The whole point is that you're viewing this as a matter of calculation, when it's a nonsensical hypothetical trying to prove a point, force analysis, etc. So I agree, I'm fighting the purported spirit in which it was intended, because there's little there there; it's just an exercise to force you to think about what you might do in the abstract (because the money's not on the table). If this were real, money on the table, no one two-boxes.

Expand full comment

Ultimately I think it's a how-many-angels-can-dance-on-the-head-of-a-pin type question.

It comes down to a question of "Hypothetically, if something existed that could predict your actions perfectly, could it perfectly predict your actions?" To this I am forced to say "Sure, I guess".

The confusion comes from the fact that your brain can't quite accept the fact that this perfect predictor could really exist; which is probably fair, because it probably can't. If a perfect predictor alien does show up and start handing out boxes then I reserve the right to be confused _then_.

Expand full comment

Please note that Newcomb's problem does not actually assume a perfect predictor. If it did then it would indeed be obvious that one-boxing wins.

Actually, it doesn't even assume a good predictor. It just assumes a highly accurate predictor. There is a difference. For example see Scott's recent post on dumb heuristics that almost always work.

I think the most common confusion is basically the polar opposite of the one you cited. Namely that a very poor predictor (such as a predictor that always predicts two boxing) couldn't ever possibly manage to be highly accurate. But there are circumstances where such a predictor would manage to be extremely accurate.
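
To make that concrete, here's a minimal sketch (the 95% two-boxing rate is my own illustrative assumption, not something from the problem statement):

```python
import random

# Hedged illustration: a "dumb" predictor can still be highly accurate.
# Assume (hypothetically) that 95% of choosers two-box once offered the money.
random.seed(0)
choosers = ["two" if random.random() < 0.95 else "one" for _ in range(10_000)]

# This predictor always predicts two-boxing -- no mind-reading involved.
predictions = ["two"] * len(choosers)

accuracy = sum(p == c for p, c in zip(predictions, choosers)) / len(choosers)
print(f"{accuracy:.1%}")  # ~95% accurate, yet one-boxing never pays against it
```

The point being that high accuracy alone tells you nothing about whether one-boxing is the winning move; it depends entirely on how the accuracy is achieved.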

Expand full comment

It isn't about wanting the money versus not wanting the money.

Because it isn't a straightforward fact that one-boxing wins. How does anyone know what the winning strategy is? You can't perform a real-world experiment, because you don't have an Omega.

You can try to determine what the winning strategy is by writing a programme to simulate the problem, but it turns out that different programmes produce different answers according to the assumptions made by their programmers: further assumptions about how Omega's predictive abilities work, whether you have free will anyway, and so on.
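
As a minimal sketch of that point (both predictor models below are my own illustrative assumptions, not part of the problem statement):

```python
def payoff(choice, prediction):
    """Standard Newcomb payoffs: the opaque box holds $1M iff one-boxing was predicted."""
    opaque = 1_000_000 if prediction == "one" else 0
    return opaque if choice == "one" else opaque + 1_000

def simulating_omega(strategy):
    """Omega predicts by running a perfect copy of the chooser."""
    return strategy

def stubborn_omega(strategy):
    """Omega always predicts two-boxing, ignoring the chooser entirely."""
    return "two"

for omega in (simulating_omega, stubborn_omega):
    for strategy in ("one", "two"):
        print(omega.__name__, strategy, payoff(strategy, omega(strategy)))

# Against simulating_omega, one-boxing earns $1M vs $1k for two-boxing.
# Against stubborn_omega, two-boxing earns $1k vs $0 for one-boxing.
# The "winning" strategy is an artifact of the assumed prediction mechanism.
```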

One-boxers assume that Omega's predictive abilities must be based on being able to internally simulate and thereby predict the actions of the chooser...which means that the chooser must be deterministic, if not the whole universe. Note that no *mechanism* of prediction is specified in the problem.

Two-boxers assume that since the $1000 is already in the second box, they can make a spontaneous decision to open both boxes and make an extra $1,000. That would be illogical if they had been told they are deterministic, or that the very existence of Omega means they have no free will, but they haven't been explicitly told either. On the other hand, an assumption of free will is implicit in the problem, because it is a problem in decision theory, and DT assumes that you can in fact make multiple choices!

So the problem as stated hints at both free will and determinism, and the contradiction between them is where the paradox lies. It only *hints* at them, so which way people lean will depend on the strength of their intuitions about free will and determinism.

"Rationalists are one boxers" doesn't mean "rationalists have the one true answer", it means "rationalists are determinists at heart".

Expand full comment

"It isn't about wanting the money versus not wanting the money.

Because it isn't a straightforward fact that one boxing wins."

This is exactly right. Thank you for putting it better than I managed to.

I do take a bit of issue with your thoughts about belief in determinism being a relevant factor. I'm a determinist myself.

If I assume that Omega is simulating me in enough detail to generate a super accurate prediction... well, that's just really hard to do even if I am a determinist (it might even be impossible, at least for an "in our universe" Omega). So before I settle on that as a likely hypothesis as to how Omega is managing to generate accurate predictions, I should probably look for more plausible candidates, and only settle on simulation as a non-trivial possibility after I've looked really hard for other, more likely options. And of course, even if I look really hard and can't find anything, the possibility that I missed something isn't trivial. But at least I will have looked, and therefore have some reason to promote the "I'm being simulated" hypothesis a bit higher than its original extremely tiny probability.

My problem is that one-boxers typically don't do this. They just immediately assume that the "I'm being simulated" hypothesis has (at the very least) a one in a thousand chance of being correct.

Expand full comment

What’s the fuss about? These two - Pinker and Gardner - are elevating each other with faint criticism. On a practical level, their points of disagreement are trifling.

One of them is a Yankee fan and the other likes the Red Sox. No need for jihad here.

Expand full comment

> If I get an email from a Nigerian prince asking for money, I’m not going think “I shall do a deep dive and try to rationally calculate the expected value of sending money to this person using my very own fifteen-parameter Guesstimate model”. I’m going to think [...] we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

So, you tell him to send you his sister's IBAN?

Expand full comment

No, you reply to the dying African widow instead:

https://scamhunter.org/category/dying-widow/

Expand full comment

I think a good translation of 'something something white males' would be something like an analogy to how 'meritocracy' is used as a version of the 'just-world hypothesis' to justify current outcomes and say they're both natural and correct.

Something like 'while rationality as a concept includes a lot of good ideas about math and thinking, rationality as a social movement is mostly an aesthetic shibboleth used by some rich white males to recognize other like-minded rich white males and form close and exclusive networks with them, networks which unjustly further race-and-gender inequities, and which justify themselves via the aesthetic of 'we deserve this because we're rational meaning we're doing a better job than other people', despite little actual evidence for this.'

Expand full comment

Huh. Sounds amazingly similar to the paranoid delusions that were bruited about concerning the Jews, Asians, or blacks. I guess there really is nothing new under the Sun -- we just continue to have the same tribalistic prejudices, and suspect People Who Don't Look Like Us of being part of a vast conspiracy to keep us down. We do rotate who gets put in the Protocols of the Elders of Zion box, so there's some refreshing variety I guess. Deeply regrettable that so many good lessons the 20th century had to teach (at very high human cost) have not been learned by the 21st.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

The counterpoint would be that these "paranoid delusions" were actual fact in the (on a historical timescale) recent past- and remain factual on smaller scales even today.

I can't speak to the current make-up of ACT's readership, but Scott's last reader survey on SSC suggests that his readership is very strongly skewed towards white straight atheist cisgender men who are middle-to-upper-middle-class, and I can't imagine that this has radically shifted over the past year and a bit. That isn't a value judgement, but it is data that indicates the belief that modern Rationalism is as the counterargument suggests isn't purely a baseless paranoid fantasy.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

SSC 2020 skews white to a significant degree, but not by quite as much as you'd think (remember that the US is much more racially diverse than the rest of the Anglosphere, so %white of the Anglosphere is about 10% higher than the US value). It skews male to a massive degree. It skews atheist to a significant but not exceptional degree (US is something like 25% irreligious and the rest of the Anglosphere is higher).

It doesn't skew straight. In fact, it skews significantly gay/bi. LGB is under 10% of the population. Likewise, transsexualism/genderqueer is under 1%; SSC skews massively toward *gender nonconformity*.

But okay, the "white male" part is positively correlated. That's not what the counterargument "suggests", though. That's the *premise* of the counterargument. Its *conclusion* is "and therefore they should be ignored". This is the form "argumentum ad hominem".

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

It doesn't suggest "they should be ignored because they are white" (except in the mouths of idiots, which can be found everywhere), but "they are from a particular culture, and therefore shouldn't be assumed to represent anyone outside of their own culture". This should not be a controversial statement, and yet it is.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

"Scott's last reader survey on SSC suggests that his readership is very strongly skewed towards white straight atheist cisgender men who are middle-to-upper-middle-class"

Proud to be doing my bit to redress this skew! 😁

White? Yes

Straight? Yes

Cis? Yes

Male? No

Middle-to-upper middle? No

Atheist? No

American? No

Little by little, change comes.

Expand full comment

Really? You're saying the world *was* run by a conspiracy of Jews bent on destroying the Aryan people in the 1930s? Whoa.

Personally *any* time someone reasons from the color of a man's skin to society-encompassing massively oppressive conspiracies I immediately lump him in with Himmler and Father Coughlin and John C. Calhoun. I don't care the color of the skin in question.

Expand full comment

I'm very glad you learned the reductio ad Nazi rhetorical trick. It's a very charming one that doesn't at all make you look like a fool trying to smear others for having the temerity to disagree with you. I thought ACT was "anti-woke", and this is something I've gotten from woke people hundreds of times.

There was no "conspiracy" around the oppression of women and non-whites in the US, because it was open and public. I will not bother citing sources beyond suggesting you look up "Tulsa Bombings", "segregation", and "Jim Crow", because you are intelligent enough to already know that those things existed. And I doubt you're so foolhardy as to claim that racism and sexism have evaporated like morning mist.

All of this is to say that there are reasons far better than a nebulous conspiracy for racial animus to exist. Does it justify racial animus? No. But it's there.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

I see. Let's see if I've got the hang of this...

Because in 1944 there was a not especially secret massive effort by the German government to kill all the Jews in Poland, and because the German government, Jews, and Poland all still exist, it's reasonable in 2022 to suspect that the German people still want to kill all the Jews in Poland, because of course they're still German (and the Polish Jews are still Jews). We don't need to look any further than someone's cultural identity[1] to know what to think about him, and of course everyone with the same cultural identity (or skin color) thinks similarly, even if they were born 60 years apart into utterly different societies.

Seems like classically racist thinking to me. I don't see a bit of difference between this "logic" and that of Calhoun, and I have equal contempt for both.

------------

[1] For example the complete lack of concentration camps, or laws proscribing Jews, means nothing, compared to the weighty fact that *there once were such things*.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

If there were still considerable numbers of Germans who openly and proudly shared their contempt for Jews, spoke about how they were genetically conniving, insane, and incestuous, and were still committing acts of racial animus against the Jews, then yes, your analogy might work. But at this point I don't think you're interested in détente or trying to understand the other side; you just want to fight kulturkampf. At least you've backed off the direct comparisons between the invisible wokist standing right next to me and Nazis.

Allow me to again emphasize that I see most "wokism" as racist to all sides. I don't think it's a particularly valuable ideology in itself (although some of the ideas contained within it are valuable).

The same holds true in my eyes for Bay Area Rationalism (soon to be renamed Austinian Rationalism if movement patterns seem right), albeit with slightly more valuable things in that pile, so I'm probably as close to a neutral party as you can get.

If you want to actually engage in a dialogue it generally helps that you understand what the person on the other side of the argument actually believes- yes, even if what they believe to you sounds like a bunch of feces pouring out of someone's mouth, but less attractive and desirable. Once you give up on communication and understanding, all that's left is trying to yoke or destroy the other side by deceit or brutality, and the general impression I get of Rationalists is that you have a smaller stomach for lies and blood than wokists.

Expand full comment

A friend recently wrote a blues related to this discussion. You may perhaps guess the gender and skin color of the artist, but so what:

What has the white man ever given us

except from democracy, science and law?

What has the white man ever given us

except from democracy, science and law?

White men are everywhere

they're the worst people you ever saw

Expand full comment

So the Greeks, Persians, and Iraqis are white now?

Expand full comment

Greeks count as white, especially Ancient Greeks.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

"Count as"

Someone's either white, or they aren't. The popular position among the white-race-conscious is that Greeks are white when they invent Athenian democracy, add to the corpus of proto-science, and fight heroic wars, and non-white when they practice pederasty, call Germanics/Slavs barbarous, and exist in the modern era. This is obviously a nonsense.

Oddly, Spartans are heroic Aryans who fight against the degenerate brown-skinned Semitic Achaemenids ("The original globohomo", as I've heard them called) at Thermopylae despite the fact that Spartans were highly egalitarian in regards to women (in context), practiced pederasty (while the Zoroastrian Persians saw homosexual relations as a sign of demonic possession and punished it by execution), and were much darker-skinned than the Achaemenids. It just goes to show that aesthetics are of such central importance to these movements that mere fact is ephemeral in comparison.

Expand full comment

> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

Amazing.

Expand full comment

If it turns out there are precisely 7000 contradictions in the bible I'm kissing cheeseburgers goodbye.

Expand full comment

I won't go that far but am willing to quit boiling calves in their mothers' milk.

Expand full comment

"Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is universal knowledge but not common knowledge, can you give me a tradition to help with this?"

Which science fiction story is this?

Expand full comment

Sounds a bit like a “Lost” sequel.

4, 8, 15, 16, 23, 42

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

Bailey Rationalist: Bayes! Probability! Formal Logic! Game Theory! Expected Utility! Prediction markets! Avoid the biases! Do this and you will be right!

Anti-rationalist: I think there is more to reasoning than these things. For example, within the classic list of "cognitive biases" there are things which you probably should do (e.g., Loss Aversion makes sense if you might lose all your money). And there are more intuitive and narrative ways of reasoning that help people to make sense of the world in ways that are useful.

Motte Rationalist: Oh of course. That is not what we mean. Rationality is just systematic winning. So if it is helpful, then it is rational. Even if it's something like intuition.

Anti-rationalist: Oh. Ok then

Bailey Rationalist: Bayes! Probability! Formal Logic! Game Theory! Expected Utility! Prediction markets! Avoid the biases! Do this and you will be right!

Anti-rationalist: You seem to be saying the same thing, and you seem to think that [bayes, probability, formal logic, etc.] are like...the end all be all. As in you seem to think these things are the best path to systematic winning. I disagree. The set of epistemological approaches you have chosen are NOT synonymous with winning.

Pinker: You are using rationality to argue against rationality! Checkmate atheist!

Anti-rationalist: Not at all. I am saying your set of prescriptive theories [Bayes, logic, game theory, etc.] are not synonymous with "instrumental rationality". And you keep acknowledging that this is the case, but you seem to rely way too much on them. Have any of you read Gary Klein? Gigerenzer? Or how about Nassim Taleb? It seems to be the case that most successful people draw more from those thinkers than from Kahneman/Tversky or Yudkowsky (and it pains me to put Yudkowsky next to those names). It seems like the set of prescriptive theories you are applying every day are great for specific things (like arguing with people online), but not the real world. As Agnes Callard points out in Aspiration, Expected Utility is great for medium decisions like buying a car, but terrible for small problems (which cereal to get) and large decisions (i.e., decisions that result in you changing your utility function).

Rationalist: No. These theories are just how science works. This is the Correct Epistemology. And Bayes gives you the objectively correct answer. How can you argue against Bayes? Bayes is always right! The math doesn't lie! Shut-up and multiply!

Anti-rationalist: Well, that is obviously false. Bayes does not give you the objectively correct answer, despite what Yudkowsky claims. You seem to be optimizing for the activities that people like you (white males) do; science and tech. (And smuggling in your intuitions and calling them objective and less wrong.)

Rationalist: AH! I get it now. You're just an SJW trying to support "alternative" epistemologies that put personal experience above logic.

Anti-rationalist: Not an SJW. What I am trying to point out is that you are pushing for a certain set of epistemological theories that are great for people like you, but are less useful for the majority of people who are dealing with normal human issues like how to write a resume (as opposed to debating the expected utility of funding a biotech start-up.)

Rationalist: But Game Theory would be useful for writing a resume!

Anti-rationalist: Not for the people who are innumerate or simply don't know Game Theory.

Rationalist: Then we shall teach the world Game Theory and lead the world into a glorious rational future! And everyone will win all the time because they will know Game Theory!

Expand full comment

Thank you for writing a summary of the other side's position that's exactly as charitable to rationalists as they're being to "anti-rationalists" here.

Expand full comment

I thought I might be strawmanning Rationalists with that little rant about Bayes giving you the "objectively correct answer", but then the very first comment after yours makes that very argument.

Expand full comment

Indeed it does. I edited that comment to ask you about your thoughts, could you respond? As far as I can tell, if you actually have complete information about your observations and you are actually capable of performing the infinitude of computations required, then you actually would get the objectively correct answer, because Bayes is just math. The problem is that we aren't capable of having that complete information, and also the computations are intractable. But that doesn't mean that P(A|B) doesn't actually equal (P(B|A) * P(A)) / P(B).

Expand full comment

Give me a minute. I am writing my response now.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

I didn't quite respond to this particular point below, but that is partly because I'm not sure I understand your argument. All mathematical formulas, when done correctly, are correct. Frequentism is also correct. As are Ranking Theory and Possibility Theory. What of it? I don't follow the reasoning here from "Math is coherent" to "Bayes is the end-all-be-all of rationality"

Expand full comment

Oh. You actually do not grok what we find so fascinating about Bayes then. Uhm. I sort of doubt trying to explain would be very useful. But I guess at this point I'm obligated to try.

Bayes answers the question "what is evidence and how does it work" the same way Newton answered "what is force and how does it work". Observation a is evidence for x if P(x|a) is larger than P(x). That is what 'evidence' *means* at a fundamental level. This takes evidence from a sort of wishy-washy qualitative concept and brings it into the realm of quantitative numbers, with all of the advantages that brings.
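
In case the quantitative version helps, here's a toy sketch of that definition of evidence (all the probabilities are made-up illustrative numbers):

```python
# Toy illustration of "observation a is evidence for x iff P(x|a) > P(x)".
p_rain = 0.3                 # P(x): prior probability of rain
p_clouds_given_rain = 0.9    # P(a|x)
p_clouds = 0.5               # P(a)

# Bayes: P(x|a) = P(a|x) * P(x) / P(a)
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds  # 0.54

# Clouds count as evidence for rain precisely because the posterior
# exceeds the prior.
print(p_rain_given_clouds > p_rain)  # True: 0.54 > 0.3
```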

That's why our response to Bayes is not just as a tool in a toolkit. Bayes is more fundamental than that.

Of course, just like f = ma for Newton's force, this isn't exactly super useful in day-to-day life. When someone punches you in the face, you don't stand there estimating the acceleration of his fist and the mass of your jaw, trying to do the math to determine whether to get out of the way. Does this mean that 'f=ma' is just another tool in your tool kit, useful in laboratory settings but not really relevant to real life? Maybe it does. But when Newton discovered f=ma, people *freaked the fuck out*, because even though it wasn't especially relevant to their day-to-day lives, it was abundantly clear that f=ma was a universal law of the universe; learning it felt like you were learning something of incredible profundity.

Like... f=ma is different from other math people might have known in a similar vein, like perhaps "you can pile up 8 stone of weight on top of a steel platemail shirt before it starts to get crushed". Both of these might be math tools in your toolkit, but one of them is clearly more fundamental than the other. Closer to the bottom. Idk how to describe it.

And while you're probably not going around calculating the force of punches by measuring the mass of people's fists and the rate of deceleration at impact, knowing about f=ma can nonetheless give you some insights that seem counterintuitive to humans, correcting some misconceptions. Like realizing that if you run around the corner and get clotheslined by someone holding out their fist in wait, this is probably physically identical to getting punched with the force of your leg muscles instead of their arm muscles, and therefore even worse. You don't actually have to measure the numbers involved in this hypothetical; doing so would be absurd, the notion is totally intractable. But you gained an insight from f=ma anyway, just from knowing what the equation means and how the terms interact with each other.

Bayes is the same way. P(A|B) = (P(B|A) * P(A)) / P(B) is more fundamental than "the chance of lung cancer increases if you smoke" even if they're both tools for thinking about probability. You don't actually have to whip out your calculator and start doing updates on priors to get utility out of understanding how the equation works, how the terms interact with one another.

It is definitely true that humans are not very good Bayesians. I don't understand why this is relevant? Cars aren't very good carnot engines either. Having a 'really good bayesian' in real life is probably physically impossible, just like having an entropy-equalizing steam engine that nonetheless does net work.

Does that help? The knowledge that Bayesians see Bayes as being the same kind of equation as f=ma, I mean. There is no way to truly quantitatively understand the concept of force without knowing f=ma. You might have some other system for describing force, but the degree to which it is good is going to be exactly equal to the degree to which it approximates f=ma anyway, so it's not correct to think of each perspective as on an equal level, just another tool in your toolkit. Even if "those walls can support a 600 pound ceiling" might be far more practically relevant in a given situation, it just isn't the same kind of profound. Same with Bayes. Any other system is only as good as it is an approximation of Bayes anyway, so the other systems don't quite... they're not all on equal footing here. Even if Bayes is less relevant to your day-to-day life than other probability tools like "the probability of getting a royal flush is vanishingly small, check his pockets for hidden cards", it's more fundamental, in a way that if you know Bayes, the rest can be rederived from it.

As far as humans not being Bayesians... I don't quite understand. We are Bayesians... there is nothing else for us to be. We are engines of cognition. We turn energy and observations into predictions and waste heat. Bayes is *exactly* analogous to thermodynamics here, as it puts a hard cap on how efficient both sides of that process can be: exactly how good the predictions can be and exactly how little waste heat there can be. But we are really crappy and entropy-inefficient Bayesians. A perfect Bayesian is as impossible as a real-life Carnot engine, so evolution instead found lots of heuristics that do a better or worse job at *approximating* Bayes in various circumstances likely to occur in the ancestral environment. In some cases we do an absurdly bad job at approximating Bayes, because there just isn't time enough to sit down and work through equations when you're trying to figure out if the weird shape might be a tiger. But whatever the "probability-of-tiger measurement" brain circuitry looks like, I bet it involves P(A|B) = (P(B|A) * P(A)) / P(B). There isn't anything else for it to be made out of.

Does that help you understand why we feel the way we do?

Expand full comment

I grok the support for Bayes just fine. I read The Sequences and was impressed with Bayes like everyone else. But then I started to read Epistemology not written by Yudkowsky and realized there's a whole lot more to the idea of evidence than "Bayes" (something which I think should be non-controversial).

Bayes is a great mathematical representation of evidence. But I strongly disagree with this idea that it is The One True Representation of The Very Idea of Evidence Which All Other Representations Are Merely Approximating. It’s not. It’s one of many. And any complete understanding of evidence and epistemology has to recognize that Bayes is not synonymous with good reasoning.

I think your claims are something like this;

- Meta-Epistemological Realism: There are objectively correct and incorrect ways to reason, and we can know what those ways are

- Normative Bayesianism: Bayes is The Correct Way to reason. Even in situations where Bayes is impossible to apply, you should approximate it as much as possible because it is the standard to which you compare any reasoning.

- Prescriptive Bayesianism: When possible, and given our goals, using Bayes explicitly is often the best way to reason. But it won’t be used in all situations due to various constraints.

- Descriptive Bayesianism: Bayes is how humans actually reason

- Mathematical “Bayesian” Realism: And Bayes is normative, prescriptive, and descriptive because it is the very mathematical representation of the idea of evidence.

(Borrowing the terms normative, prescriptive, and descriptive from Jonathan Baron because it is incredibly relevant here. And borrowing "realist" from multiple fields.)

My claims would be:

- Meta-Epistemological anti-realism and skepticism: The Is-Ought Gap doesn’t stop at Ethics, so there is no objectively correct way to reason. But that point is largely academic as we can probably agree on the goal of any (epistemic) reasoning. So moving past that, the “best” way to reason will always be relative to how you define “truth”, which is fine in theory but in practice leads to dogmatic, circular, or regressive arguments. So when people tell you that there is One Objectively Correct Way to Reason, they are selling you their religion. *We will never actually know the “best” way to reason, in part because there isn't one, and even if there was, we wouldn't be able to discover it.*

- Realist Epistemological Pluralism: There is no One True Normative Model for reasoning, but this doesn’t mean all models are equal, they are not. There are many good normative models for reasoning, where “good” is defined by however we have chosen to measure truth (e.g., Brier scores). And different models will perform to varying degrees of success on these measures. But in certain situations, different models will do better.

- Prescriptive Bayesianism, but also other models: I think we agree on this point. Bayes is great in many situations, but also hard to apply in many others. Generally it is good to approximate it, but there are other models for handling evidence and uncertainty which might be better in certain contexts. For example, maybe frequentist is a better approach in a certain AI problem, or fluid dynamics when working with fluid, or working backwards through a physics equation when trying to figure out where a billiard ball was at t0, etc. But in general, the world would be better off if Bayes was used more often than it currently is.

- Skepticism of Descriptive Bayesianism: No one understands the exact system that underlies human reasoning. Bayes may play some part, or no part. It might not even be reasonable to think of human reasoning in mathematical terms. Who knows? Bayes seems like a good rough approximation of certain processes in the same way Newton gives a rough approximation of the movement of objects. But at the very least we know Bayes is not the whole picture because we know how a Bayesian network works, and that’s not how a human mind works.

- Bayes as a Model: Bayes is just a model. There is nothing more fundamental to it. Just like how Newtonian physics is also just a model, and breaks down when you go too small or too big (and is definitely not a law, which is an outdated concept. And even if such laws were real, Newtonian physics would not be one of them). This idea that Bayes underlies the very concept of evidence seems like some sort of Platonism or mathematical realism to me.

Expand full comment

"Bayes answers the question "what is evidence and how does it work" the same way Newton answered "what is force and how does it work". Observation a is evidence for x if P(x|a) is larger than P(x). That is what 'evidence' *means* at a fundamental level. "

That doesn't tell me whether the Bible is evidence. The problem is that I need to know what evidence is, in order to know whether I should update on it -- but you have defined evidence as that which I should update on.

Expand full comment

Yes... all mathematical formulae are valid when they are valid. But validity is not soundness.

Expand full comment

And also that you have to get hypotheses from somewhere, and that you need paradigm shifts as well as incremental updates.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

I have never understood this criticism of Bayes.

Bayes *is* the end-all be-all of rationality. It is the mathematical underpinning of the very concept of evidence...

It might be computationally intractable and totally practically useless for actually making real-time decisions about what to believe, but that feels strawmanny to me.

The second law of thermodynamics is the end-all be-all of entropy, and its discovery revolutionized our understanding of mechanical work and opened the gates to the steam engine and the industrial revolution. And when engineers are thinking about these things, the carnot cycle is clearly like, the conceptual primitive upon which everything is based. It has an epistemic fundamentality that makes it more than just a conceptual tool in the engineer's toolkit. An engineer who doesn't understand the carnot engine doesn't actually grok their field, in an important way, and when you're designing a thermodynamic system, the comparison of reality to the hypothetical perfect carnot engine should never be far from your mind

And I always sort of imagine that, well... if Pixar's Cars cinematic universe (a world of sentient motor transport vehicles) existed, there would probably be Enginists, or Aspiring Enginists, who felt an almost religious awe at the very idea of the carnot cycle. When it was discovered, some small proportion of them would think "yes. This. This is it. This is the answer to the mystery, this is the math at the heart of it all. And of course it's so simple and pure, it's almost obvious. We should freaking put this on teeshirts and hats. The carnot cycle, everything else is just commentary."

And I'm sure there would be skepticars who said things like "but you can't actually build a carnot cycle in real life, so clearly like, we must not actually be carnot cycles. It's just not useful to think in terms of engines being carnot cycles, no engine is that perfect. It's just a conceptual tool in our toolkit, one way to think about the flow of mechanical work, maybe useful for some problems but nothing special."

And the Aspiring Enginists might say something like "but don't you see, the carnot cycle isn't just one way of looking at the flow of mechanical work, it *defines* the very concept of what mechanical work *is*. There are no alternative conceptual frameworks for thinking about what it is that engines are actually doing. You either basically approximate carnot with some heuristics that are more computationally tractable so you can actually build it in real life, in which case your efficiency is a measure of to what degree your design differs from carnot, or else you don't actually have an engine."

That's my headcanon for how rationality's interest in Bayes analogizes to the Pixar Cars universe. Those cars *are* engines of locomotion (plus or minus some other stuff), so when they grok the carnot cycle, they feel as if they are looking upon a simplified and perfected version of their own otherwise-incomprehensible internal mechanisms. We *are* engines of cognition (+/- other stuff), so when we grok Bayes, we feel as though we are looking upon a simplified and perfected version of our own otherwise-incomprehensible internal mechanisms.

I think a lot of rationalists have this "holy shit" epiphany/moment of clarity, and afterwards feel a great deal of awe towards Bayes, and that some people pick up on that awe and don't understand it and it rubs them the wrong way. I think a lot of the awestruck might assume that anyone who doesn't feel that awe must not have actually grokked bayes in the first place, or else they would also be awestruck. And I think this has caused a lot of talking past each other in the arguments.

But like. Even without the spiritual awe, Bayes is still really practically useful. Understanding how P(X) changes, the general magnitude and the sign of the change, when P(Y|X) changes, is really freaking important. There's a bunch of results here that human beings would *never* intuit on their own without knowing the rules. Just look at the examples usually given in tutorials on bayes. If someone gets a bad result from a rare cancer screening and the false positive rate is 1%, your average human is *not* going to be able to correctly update on that new evidence. It's Bayes that lets us see the underlying structure of how these things, observations and anticipations, interact with one another correctly.
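
For what it's worth, here's the screening arithmetic worked out (the base rate and sensitivity are illustrative assumptions; the comment above only fixes the 1% false-positive rate):

```python
# Worked version of the screening example: rare cancer, 1% false-positive rate.
p_cancer = 0.001              # assumed base rate: 0.1% of those screened
p_pos_given_cancer = 0.99     # assumed sensitivity
p_pos_given_healthy = 0.01    # the 1% false-positive rate from the comment

# Total probability of a positive result (law of total probability).
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)

# Bayes: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"{p_cancer_given_pos:.1%}")  # ~9.0%, nowhere near the intuitive "99%"
```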

Idk, I know this was a small detail in your larger argument, but I wanted to talk about it anyway.

I also feel more than a little confused when you say that Bayes doesn't give the objectively correct answer. What exactly do you mean by this? My first instinct is to assume, well, basically exactly what you accused us of assuming. That you presented this position for ridicule makes me think that you aren't one of those people who rejects the notion of objective reality. Do you believe that, given a set of observations, there is indeed an objectively correct conclusion to draw, but Bayes doesn't actually yield that correct conclusion? Or do you think that a hypothetically perfect Bayesian reasoner would indeed reach the correct conclusion, but that such a mind cannot exist in the real world, and in trying to approximate Bayesian reasoning we're actually doing worse than if we just threw the whole idea of Bayes out?

Expand full comment

>Bayes *is* the end-all be-all of rationality.

Then what are all those epistemologists wasting their time on?

There are theoretical problems with Bayes, such as the problem of Old Evidence. And there are practical problems with it, such as the fact that it collapses the difference between stochastic and epistemic uncertainty. It is also often misused in that, when people say "prior", they should probably say "my smuggled in bias" because that is what they really mean. In short, Bayes is not the end-all-be-all.

I think this post by David Chapman does a pretty good job explaining this quite well here: https://metarationality.com/how-to-think

The way you treat Bayes slightly confuses me. You talk about it as if it were a Natural Law in the same way that Entropy is a Natural Law (assuming "Natural Laws" are real). But it is not. It is not a law of the universe. Neither in the descriptive sense (it is NOT how humans actually reason), nor in the prescriptive sense (there is no such thing as an "objectively correct prescriptive model of reasoning").

Bayes is a prescriptive model for reasoning that people find useful in some situations, but not in all situations. It is NOT exactly how humans reason, but an approximation of it. In fact, we are incredibly certain that it is not how humans reason, because humans often fall prey to the Base-Rate Fallacy, and as Julia Galef points out, the brain is not a Bayesian Net. So whatever processes the brain uses, Bayes MIGHT be one among many, but it is also entirely possible that the brain doesn't use Bayes at all.

Bayes gets us close to the truth in many scenarios (and yes, I believe in truth). But the world is too complex for a simplified model like Bayes, and human reasoning often too limited to use it correctly even as it is.

NONE of this is to say I don't like Bayes. I actually really like Bayes. I am a Bayesian. But Rationalists tend to treat it with a religious fervor which it does not merit.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

But to stick the Bayes Theorem in as the One Perfect Answer To All Problems doesn't, in fact, solve the problem of teaching rationality.

What Gardner is saying is that things like "Relationship" are just as important, and in some cases even more important. Get Vlad to agree that hey, we're all Slavs here, why am I bombing Ukraine? and you have then got the room to teach Vlad to think more rationally. Otherwise, you can have Vlad saying "Well, I ran the figures and it is rational for me to bomb the shit out of Ukraine".

And maybe it *is* more rational for Vlad to bomb the shit out of Ukraine, but is that *better*? For Ukraine, for the world, for the idea that violence is decreasing and using it as the solution to your problems is a bad idea (see Pinker's own Better Angels)?

And Pinker's tweet about "you are using rationality to criticise rationality" makes about as much sense as "you are replying to my words using words". Well duh, what other way, interpretative dance?

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

There's an enormous difference between the Second Law of Thermodynamics and Bayes Theorem, which is that the Second Law allows us to throw out an infinite number of clever mechanical schemes for, among other things, turning heat directly into work, or creating perpetual motion. We don't have to dig into these contraptions and figure out the exact mechanical reason why they don't work -- all we need to do is observe that they are not, overall, consistent with the Second Law, and we know right away they won't work, without examining their innards at all. It's a fantastic practical time-saver, and hence profoundly useful not only to theoretical physicists trying to understand the nature of reality, but also to engineers, legislators, patent examiners, and even mere mortals watching late-night TV[1] in trying to understand how to build and use practical everyday machines.

I don't see anything like that stemming from Bayes Theorem. I get that it helps rationalize and quantify certain kinds of reasoning, lets you write down an equation for a process and assign terms meaning. But merely writing down in math what you already know how to do is something that typically only strongly interests philosophers and metaphysicists, the rest of us already have practical methods of getting the job done.

For Bayes Theorem to be comparable to the Second Law, it would have to do much more than that, it would have to rule out large classes of complicated and seemingly sound chains of reasoning because they violated some core principle which we knew to be true. How does it do that?

---------------

[1] When Phil Swift says this marvelous additive will make your car get 200 MPG it's the Second Law that allows you to immediately conclude he's bullshitting you, without knowing the ingredient list of his magic fluid.

Expand full comment

Thermodynamics' steam engine is definitely better than anything Bayes has in the real world, I'll grant.

But I disagree that Bayes doesn't allow us to immediately throw out entire categories of bullshit due to a violation of the mechanisms of evidence. I dunno. Maybe I am overcrediting Bayes here, but... for me, back before LessWrong, I would run across ideas that seemed to misuse reason, even basic skeptic's gristle like astrology, and in the end it would come down to "I just feel like we shouldn't regard your lived experience as evidence, because of stuff like confirmation bias or whatever other fallacies are listed in the Stanford Encyclopedia of Philosophy".

That circumstance feels very similar, to me, to a natural savant in the 17th century saying "I dunno, I just feel like you probably can't turn lead to gold, or if you can, there has to be some hidden cost that ends up costing you more gold than what you started with"

We've got some qualitative principles and heuristics here that serve us pretty well, but aren't a slam dunk that immediately forces consensus

Then someone actually figures out the math, a way of quantitatively representing the probability venn diagrams or the phase volume being conserved, and it changes everything

True, we're still missing that bayesian steam engine equivalent, although I suspect AI might serve that role in the future. But being able to say concretely "your experience is not evidence in favor of astrology, because you would be equally likely to agree with an entirely different horoscope for that day, therefore p(a|b) = p(a) and your statements simply are not evidence" is as large a step forward as the equivalent "nope, your invention cannot possibly work because negentropy increases at this specific step without any corresponding decrease elsewhere" when compared to the earlier qualitative, wishy-washy arguments

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

I appreciate your making the effort here, but I'm afraid I don't quite follow it. It sounds like you're saying Bayes Theorem allows you to quantitatively rule out certain statements as constituting evidence on a point, in a way that is more precise, at least, if not more irrefutable than something like common sense.

Can you explain how that works? Let us say I have a friend who says "Astrology totally works, because in these N cases, I read something in my horoscope which came true." Now, as an empiricist I would say something like "This is not evidence that astrology works, but rather not evidence that astrology doesn't work. The question of whether it works or not is still undetermined, you are confusing "not evidence against" with "evidence for."'

But notice I am not disputing the characterization of the experience as evidence. It's just not evidence for the hypothesis that my friend thinks it is. So in that sense I would say the philosophy of empiricism has helped clarify thinking, and prevented an important category error.

How would you evaluate this in a Bayesian way, and how does that help clarify thinking, or detect error?

Expand full comment

One way to think of it is "rationalism is the idea that thought ought to be optimized either directly by explicit reason or processes endorsed by explicit reason". Depending on the problem you might use explicit reason, or a heuristic that explicit reason tells you to use, or an intuition explicit reason tells you to trust, or a heuristic recommended by an intuition explicit reason tells you to trust, etc., but ultimately it cashes out in explicit reason.

Expand full comment

At least some of the anti-rationalists (me, kinda) see the term as meaningless in itself, and mainly used as an attack on things that Aren't rational. My first exposure to the subculture was a long, impassioned argument about social justice that ended with a rousing defense of the JQ, which poisoned the well for a longgggggggg time.

Eg, you get exposed to too many spiritually teenage objectivists, and you correlate "rationalists" and "assholes".

Expand full comment

I don't know if I can contribute anything, but here are some quick reflections:

1. 'The rationalist community' is rather distinctly different from the philosophy of rationalism. The first is broader in scope, is willing to consider heuristics and intuition as you mention, and is generally more interesting. I don't know how best to refer to these two groups in discussions, but what follows will be about rationalism as practiced by the rationalist community and not as defined by more time-worn philosophers.

2. In addition to rationality being metacognitive as Scott mentioned, rationality seems to admit the possibility of hard mistakes. It's not that this is philosophically insightful, but it contrasts sharply with how most discussions are carried out. In this sense, it's a kind of cultural norm. It's not that we can never use heuristics or intuition, but that we should at least be open to testing them occasionally, recognizing that we're filtering all our problems through brains more designed for survival than truth seeking, recognizing the possibility of counter-intuitive outcomes and exploring those possibilities in spaces that are relatively safe from negative social outcomes, steelmanning rather than strawmanning opposing arguments, etc. In short, rationality includes an enlarged Overton Window, subject to normal constraints of time and energy.

3. A Game Theory examination of a game of Chicken holds that one strategy is to take your steering wheel and throw it out the window, in a way that your opponent can recognize that you're no longer in control of your vehicle (see the sketch below). By eliminating your capacity to submit, you increase the chances that the other person will do so. Of course, this is brinkmanship, and you also increase the likelihood of catastrophe. I feel like there are a sizeable number of people who employ this strategy on the meta level. This is "Argument as War", where to even admit to some truth that favors an outgroup is tantamount to treason to an ingroup. The Rationalist Community seems a little bit less likely to engage in this kind of "argument as war" strategy, again resulting in a larger Overton window, subject to normal constraints. I tend to think of avoiding this kind of intellectual brinkmanship as 'playing the long game', since you potentially end up with a better mental model. But I suppose there's an argument to be made in favor of publicly and politically favoring your ingroup and only indulging in mistake theory in private.

Phrased more bluntly, perhaps the rationalist community places a lower weight on the utility of self-deception and ego preservation.
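
Here's a minimal sketch of the Chicken commitment move from point 3 (the payoff numbers are my own illustrative assumptions):

```python
# Chicken payoff matrix: (row player's payoff, column player's payoff).
payoffs = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),  # head-on crash
}

def best_response(row_choice, column_options):
    """Column player's best reply, given the row player's visible choice."""
    return max(column_options, key=lambda c: payoffs[(row_choice, c)][1])

# Normally the column player must guess. But if the row player visibly throws
# out the steering wheel, "straight" becomes their only option, and the column
# player's best response is to swerve.
print(best_response("straight", ["swerve", "straight"]))  # -> 'swerve'
```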

Expand full comment

This is a good exploration of what rationality is, along with some pretty good ideas about what skeptics of rationality often mistakenly think it is.

And Pinker's tweet is a pretty good jumping-off point for it, with his humorous quip about those skeptics.

It's kind of unfortunate that this is the opportunity he took to make that quip, though, because it doesn't really apply to Gardner's critique. What Gardner wrote is quite positive about rationality. He just thinks that the most world-improving thing to teach people right now—the most "rational" thing to teach them, perhaps—is something other than how to be more rational. This isn't an anti-rationality position.

Expand full comment

When I see the argument from cultural evolution, I'm hugely bothered at the circularity of it. This argument is usually deployed by people arguing for the preservation of some cultural practice, by saying "cultural evolution put it there".

But cultural evolution is ongoing, it's not a done process! Arguing for the preservation of some practice, for its modification, for its removal, or for its replacement, is just as equally 100% cultural evolution in action. Just because you've read some Joseph Henrich doesn't mean you get to stick your head out and pretend that your arguments in favor or against whatever cultural practice are somehow *above* cultural evolution, rather than plain old part of it!

Imagine an actual gene developing a little loudspeaker and trying to plead with its environment, "Hey, millions of years of evolution put me here, don't mess with me!". Wouldn't that be completely absurd? In an evolutionary context, you have to *keep proving* your worth, or you're out, would be the environment's answer.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

"Learn the rules like a pro, so you can break them like an artist" — Picasso's version of Chesterton's Fence

In other words, there is value to tradition/habit/ritual/behaviors motivated by magical thinking. There are of course harms as well. If the good is outweighed by the bad, the practice is ripe for disruption and replacement by rational/legible/scientific action. But one ought to understand the benefits before we categorically eradicate the traditional practice on account of its lack of rigor.

Expand full comment

At the individual level, sure, learn the rules, figure stuff out, gain mastery. Very good, very rewarding.

At the collective level, OTOH, my whole point is that no, it doesn't make sense to use "this was put here by cultural evo" as a reason why you should be extra careful, or why you should absolutely figure out what its function was, before changing or dumping the practice.

Evolution is just a bottom up process with a feedback loop. It's been producing stuff, fixating stuff and putting stuff out of fashion, inexorably, ever since there was the substratum of a culturally able species, i.e humans. It doesn't have a bias for the past being more right than the present or the future, and it definitely doesn't have any adult in the room overseeing the process. So on the collective level, no, it doesn't make sense to ask for one.

Expand full comment

One problem here is that rationalists and these critics are talking about two completely different meanings of rationality.

You are talking about rationality as a method of generating knowledge and making decisions, and your self-identification as a rationalist means that you spend time and effort improving your knowledge-generation and decision-making frameworks.

Critics of "rationality" are talking about the reframing of moral arguments as "rational", usually as a way to maintain the status quo.

Steven Pinker is famous for this. He's the guy who said the words "progressives hate progress" and proceeded to demonstrate using graphs and charts that progressives are idiots and morons for trying to make the world better, because as you can see from slides 32 and 47, the world is *already* better.

A more recent example would be Tim Pool's statement "I despise appeals to emotion". Well that seems perfectly innocent and rational, right? The thing is, he was referring to President Zelenskyy's statement "We desire to see our children alive." Zelenskyy is making a moral argument - that Russia shouldn't kill Ukrainian children (note this was after at least one school had been bombed), and that the West should help prevent that outcome. Pool is reframing this argument in terms of a well-known irrational fallacy (appeal to emotion).

Now, I think everyone would agree that we should approach the problem of the Russian invasion of Ukraine rationally (what would it even mean to say we should approach the problem irrationally?). But at the pragmatic level, Zelenskyy is making a moral argument because moral arguments are more likely to motivate people to act than logical arguments. Rationally, prevention of suffering is good. Rationally, convincing people to help prevent suffering is good. But "rationality" is supposed to prevent people from utilizing rhetoric in order to motivate people to behave rationally?

Note that the problem here isn't rationality itself, but the invocation of the "rationality" frame to take rhetorical power away from an important call to action. If you're going to die, it's certainly rational to attempt to elicit empathy from someone who could save you. I suppose Tim Pool would have been happier if Zelenskyy had provided a 3000 word essay about why he desires to see his children alive, with citations from the field of evolutionary psychology?

To go back to Pinker - it's rational to be upset about, e.g., global poverty, and want to do something to stop it. It's also rational to be happy about the fact that global poverty is declining overall. The debate between progressives and Pinker, however, is that progressives want to *highlight* the bad in order to motivate people to help solve the problem, whereas Pinker wants to *highlight* the good... for what purpose? What rational end is Pinker pursuing? It often seems he's merely being pedantic. He has no policy proposals on this issue - it just bothers him that he feels progressives aren't telling the whole story.

I should stress that I think rationality is still important here - it's true that if you want to solve e.g. world hunger, you need to understand it, and it's true that you should adopt a rational approach to devising and implementing solutions. But it's also true that part of that approach will be competing for attention in a difficult media environment, and hearing Pinker talk about how global poverty is already declining undermines activists' ability to do that.

Another problem here is that when people like Pinker or Pool explicitly invoke the frames of rationality or argumentative technique, they're not actually being any more rational than their opponents. If you have a good argument why people shouldn't donate to Oxfam, make it. If you have a good argument why the West shouldn't help Ukraine, make it. "My opponents are simply irrational" isn't just an argumentative technique - it's an invalid one. Invoking the frame of rational argumentation often asks us to commit the ad hominem fallacy (my opponents aren't rational, therefore they are wrong) and the fallacy fallacy (my opponents used a fallacy, therefore they are wrong).

Note that alleged "anti-rationalists" never say "that argument was rational, therefore it must be wrong". They're not opposed to rationality. They're opposed to the debate tactic of strategically claiming or implying that your opponent is irrational.

Also note that in both of these cases, the "rational" framework is invoked in favor of the status quo: in Pinker's case he thinks we're already doing enough to end poverty (I don't agree), in Pool's he thinks the West should continue not being at war with Russia (I do agree). I don't think this is a coincidence: the frame of "rationality", as a rhetorical device, works a lot better when it's invoked in favor of something that feels familiar or is within the Overton window. And movements for change - activism for civil rights, women's rights, poverty eradication, environmentalism - often rely on advocacy that frames these issues as moral, or uses rhetorical techniques to elicit empathy, which makes invoking a rational framework an effective countermeasure.

I just want to be completely clear again that this "invoking the frame of rationality to win arguments" thing is NOT what I think that Scott, Eliezer, or self-identifying rationalists in general are doing when they do rationalism. Rather, it is what people who have limited or no experience with rationalists experience when someone like Steven Pinker advocates for "rationality", or Tim Pool derides argumentative fallacies, especially in the context of a political debate.

Now to address the controversial part:

Nowhere is this more apparent than in the "something something white men" example of an anti-rationalist argument. If white men are the people who benefit most from the status quo, and appeal to "rationality" is a tried-and-true way to defend the status quo, you can start to see why someone who wants to change the status quo would come to view white men claiming to be "rational" with suspicion.

Consider the typical, layperson's understanding of "rational" as "involving reason" or "prioritizing thought over emotion or impulse". If a man tells a woman that he's being "rational" (and by implication that she is not) this isn't a statement with no context - the context, of course, is the incredibly pervasive stereotype that women are emotional, and either less inclined to use reason or less capable of using reason. If a white person tells a black person that the white person is being "rational" this isn't a statement with no context - the context, of course, is the stereotype that black people are less intelligent, more impulsive, less civilized, and less capable of producing great works of artistic or intellectual achievement. When a white man invokes the frame of rationality in the context of a political debate, in other words, they're invoking stereotypes which were created to reinforce white male power at the expense of people who were not white men.

For the third time, just so there's no mistake, I am not saying that this is what Scott does or what rationalists do in general. But in the wild, it is incredibly common to find white men either implicitly or explicitly invoking the frame of rationality to shut down arguments from women and minorities or arguments that are framed in terms of justice or moral imperatives. Importantly, the people doing this are almost never actually more rational than their opponents - because it's not about promoting reason, but about using the trappings of rationality to undermine calls for change.

And again, Pinker is sort of the poster child for this, which is why I think that any association between Pinker and rationalists will ultimately be terrible for rationalists' public image (but of course it's irrational to worry about what other people think, right?).

One final point - there has been a subjectivist movement in modern epistemology that is associated with identity politics. The idea is that everyone is biased by their identity, so rather than adopting a fake "view from nowhere" it's better to start by acknowledging your relevant biases and the perspective that your identity gives you. Hence "as a white man I find that..." or "as a black woman I find that..." From this perspective, claiming to be "rational" is nothing more than a denial of how one's identity and experiences inform their beliefs. It's self-delusion. A proponent of this view might claim "you're no more rational than I am, but at least I'm willing to openly acknowledge the areas where my demographic circumstances might affect my judgment or make me prone to motivated reasoning."

I think there are certainly people who are a bit too zealous with this concept, but when I read something like Charles Murray attempting to scientifically prove that black culture is inferior to white European culture, I am at least convinced that we need to be wary of bad actors who attempt to smuggle racism into the popular discourse under cover of science or rationality, and having people own up to their biases seems like it could help.

Expand full comment

"whereas Pinker wants to *highlight* the good... for what purpose? What rational end is Pinker pursuing?"

Humanism good, religion bad. That's a very simplistic reduction, but by his books (or at least the straplines) that's his overall message:

Enlightenment Now: The Case for Reason, Science, Humanism and Progress

Godawful blurb extract: "Far from being a naïve hope, the Enlightenment, we now know, has worked. But more than ever, it needs a vigorous defense. The Enlightenment project swims against currents of human nature—tribalism, authoritarianism, demonization, magical thinking—which demagogues are all too willing to exploit. Many commentators, committed to political, religious, or romantic ideologies, fight a rearguard action against it."

Rationality: What It Is, Why It Seems Scarce, Why It Matters

Blurbing: "Pinker rejects the cynical cliché that humans are an irrational species — cavemen out of time saddled with biases, fallacies, and illusions. After all, we discovered the laws of nature, lengthened and enriched our lives, and discovered the benchmarks for rationality itself. Instead, he explains that we think in ways that are sensible in the low-tech contexts in which we spend most of our lives, but fail to take advantage of the powerful tools of reasoning our best thinkers have discovered over the millennia: logic, critical thinking, probability, correlation and causation, and optimal ways to update beliefs and commit to choices individually and with others. These tools are not a standard part of our educational curricula, and have never been presented clearly and entertainingly in a single book—until now. Rationality also explores its opposite: how the rational pursuit of self-interest, sectarian solidarity, and uplifting mythology by individuals can add up to crippling irrationality in a society."

Top men, ladies and gentlemen and others. Top. Men. Working on powerful tools over the millennia. You may now be reassured! The forces of "uplifting mythology" will soon be on the run!

Blank Slate: The Modern Denial of Human Nature

Blurberino: " Injecting calm and rationality into debates that are notorious for ax-grinding and mud-slinging, Pinker shows the importance of an honest acknowledgment of human nature based on science and common sense."

Oh thank goodness for that, at last a calm rational person wades in to do away with axe-grinding and mud-slinging!

The Better Angels of Our Nature: Why Violence Has Declined

Blurbo from our shows: "Believe it or not, today we may be living in the most peaceful moment in our species' existence. In his gripping and controversial new work, New York Times bestselling author Steven Pinker shows that despite the ceaseless news about war, crime, and terrorism, violence has actually been in decline over long stretches of history."

How The Mind Works (this is from the early days before they started including grandiose sub-titles, so the blurb works extra-hard to make up for it):

Blurb, son of Blurb: "Pinker rehabilitates unfashionable ideas, such as that the mind is a computer and that human nature was shaped by natural selection. And he challenges fashionable ones, such as that passionate emotions are irrational, that parents socialize their children, that creativity springs from the unconscious, that nature is good and modern society corrupting, and that art and religion are expressions of our higher spiritual yearnings. How the Mind Works presents a big picture, but it is not a personal musing; it is a grand synthesis of the most satisfying explanations of our mental life that have been proposed in cognitive science and evolutionary biology, with insights from disciplines ranging from neuroscience to economics and social psychology. It is also fascinating, provocative, and thoroughly entertaining."

Yes indeed, we badly needed to do away with the notion that parents have anything in common with their kids and leaving them to be raised by wolves would do as good a job (look at Romulus and Remus, *they* turned out just fine!) and of course religion is not an expression of our higher spiritual yearnings (reading Steven Pinker is that).

Okay, mocking blurbs is shooting fish in a barrel. But once he started becoming a "public intellectual" and moved on from writing about (checks list) lexical and conceptual semantics, connections and symbols, and language learnability and language development, he did/does have a certain common thread running through all his works. The Whig Theory of human development, as it were: things are getting better and better and will continue to do so, saecula saeculorum, as long as we stick to good Enlightenment principles of empirical science and jaw-dropping graphs, art religion and poetry need not apply and the Muses may betake themselves off to find proper jobs in STEM instead of dancing to the lyre of Apollo on Mount Helicon and frolicking around the Pierian spring, the trollops.

Expand full comment

If Pool is an instrumental rationalist, he should not decry emotional appeals. But he does, so maybe he is some other kind.

Expand full comment

When we are talking about 'rationality' in this context it's largely a sociological phenomenon borne of people who want to see how far computational tools and frameworks can go towards solving epistemological and social problems.

When I explain the Rational-sphere to intelligent philosophical outsiders, if I called it the study of truth-seeking, they'd say, "yeah, bro... I teach epistemology at a big university, so how is this different?" I think the defining characteristic of the rational-sphere is a type of methodological utilitarianism. Not that everyone's utilitarian (I'm not), but there is a general attempt to see if any given problem can be clarified or solved with expected utility, cost-benefit analysis, iterated game theory, Bayesian statistics, or causal inference. The background sociology is a communal heuristic, or bet: these tools are underrated. If more people knew and used these tools, that would be better for those people.

Yes, there are other prominent elements, the biases literature, heuristics and intuition, legibility, philosophy of social science, and ML...

Expand full comment

If it only tried to use those five weird tricks with the background assumption that they are capable of failing, it would be a lot better. Incidentally, none of the weird tricks you mention actually is utilitarianism as usually defined. Yet many Bay Area rationalists are utilitarians, and seem to be so because utilitarianism is utility maximization is rationality. (Unfortunately, not for the same values of "utility maximisation", but the claim is interesting if not true.)

Expand full comment

You're probably right. I'm projecting my own views onto the community. I think mine, though, is the correct view! My steelman of utility-based consequentialism is that it offers a lot of useful and pragmatic tools.

Expand full comment

Pinker made this "argument" before, more as in "arguing against arguing". It might come from his wife (pro-philosopher): Check this 15 min animated video by both https://www.youtube.com/watch?v=uk7gKixqVNU at 1:50 comes sth. to this effect. - He just repeated this idea in the tweet - without delving into "the deep math of intuition".

May I ask, as I am still a newbie: why do the new rationalists seem to care so little about Pinker? (In my perfect world Pinker would own the NYT and Scott would be the top writer.) Rationalists should see him as a member of the tribe, instead of: meh. I really don't get it. No one seemed to review his rationality book. Analogously, another author: Scott Aaronson described "Viral" as one of the most important books of this century. 50% of the comments say: meh, Matt Ridley involved, must be BS, as he writes BS about climate. When Scott started on climate, and got into the data, he came up with pretty much the same conclusion: more people die of cold than heat. And: DON'T PANIC; have kids. I think these are important truths for our time. What the hell is wrong with Lomborg? I consider them all to be grey-tribe. Can't we be a bit more catholic about Rationalism? Or is it the Popular People's Front of Palestine all the way down?

Expand full comment

Not a Rationalist, new or old, but my indifference to Pinker may be summed up in how he is described: "public intellectual". That all too often degenerates into a sort of dancing bear (see Neil deGrasse Tyson) who performs amusing tricks for the general public, rather than someone engaging in real depth of thought.

Or to quote W.H. Auden:

"To the man-in-the-street, who, I'm sorry to say,

Is a keen observer of life,

The word ‘Intellectual’ suggests straight away

A man who's untrue to his wife.

New Year Letter (1941)"

Not that I'm accusing Pinker of anything in that vein. To be more critical, his image is that of Doctor Pangloss out of "Candide": the "greatest philosopher of the Holy Roman Empire", professor of "métaphysico-théologo-cosmolonigologie" (English: "metaphysico-theologo-cosmolonigology") and self-proclaimed optimist, who teaches his pupils that they live in the "best of all possible worlds" and that "all is for the best". See this extract of a blurb on his website for the 2018 book "Enlightenment Now" (and honestly Steven, while the Coppola movie reference might have sounded funny, it's a little too glib - see that whole 'dancing bear' idea of the public intellectual I mentioned above):

"Is the world really falling apart? Is the ideal of progress obsolete? In this elegant assessment of the human condition in the third millennium, cognitive scientist and public intellectual Steven Pinker urges us to step back from the gory headlines and prophecies of doom, which play to our psychological biases. Instead, follow the data: In 75 jaw-dropping graphs, Pinker shows that life, health, prosperity, safety, peace, knowledge, and happiness are on the rise, not just in the West, but worldwide. This progress is not the result of some cosmic force. It is a gift of the Enlightenment: the conviction that reason and science can enhance human flourishing."

Not just graphs, but JAW-DROPPING graphs! You see why it's difficult to take this level of back-patting self-admiration seriously? Okay, maybe we should blame his publicist for putting up such quotes, but then again if you gotta have a publicist in the first place... more of that glitz, less of that studiousness.

Expand full comment
Mar 5, 2022·edited Mar 5, 2022

If you were English, I would've adduced culture shock. We uncultured colonials don't do understated self-deprecating promotion the way it's done on the Continent, or among the Oxbridge elite of the British Isles. Soccer hooligans, to a man, us. So as readers we automatically discount "jaw dropping" as the kind of meaningless intensifier that forms the background noise of advertising. We don't even *register* the fact that the phrase has been used, it probably gets edited out by the retina before it even trundles down the optic nerve to the word-processing area of the cortex.

Nor is this restricted to late night TV adverts and used-car salesman and Hollywood tabloids. As you see it is used among intellectuals. When as a professor with a big bulgy brain you write a recommendation letter for a student to be admitted to a good university, filled with dignified men and women of science, you *must* write that the larval intellectual in question is the most amazing potential genius you have seen in the last 20 years. To do any less would be almost the same as saying he denies the Trinity and Conservation of Energy alike, and is known to fart in elevators.

It has its drawbacks, which are evident here, and in the coarseness of our political debate. But it has its advantages, too. It allows a remarkable democracy of opportunity. You can hail from the scruffiest of backgrounds and still achieve stature and power, cf. Donald Trump but also Ronald Reagan and Abe Lincoln.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

That is the trouble, though. If I read a blurb that could apply equally well to the newest snake-oil salesperson touting a self-help book, a business method, or a 'the Eat Arsenic Alone Diet - the results will shock you! Doctors hate this one weird trick!' ad, then is anyone surprised that the effect on me is not "this is a work of studious research that I should take seriously"?

And if indeed even intellectuals use this kind of hype, then it's as the saying goes - like shaving a pig, a great cry and little wool. That kind of "public intellectualism" degrades even popularisation of science/history/concepts, which is a useful educational tool and public good, to the level of second-hand car salesmanship where you go in expecting the guy to try and fleece you so you approach everything he says with ultimate scepticism.

And then the public intellectuals go off and write NYT opinion pieces on why oh why do the public fall for anti-vaxxers and not believe us, The Established Experts?

It's because you've dressed up in your hooker boots and hung around on street corners to peddle your wares, is why. So don't be surprised to be treated like a cheap date, and not a beloved to be wooed respectfully.

Expand full comment

Yes, cut Pinker some slack Deiseach. Or rather: cut his blurb-writers some slack. US people, including academics, have set their volume permanently to “loud”. If you come from somewhere else you must discount all the spin/hype/advertisement stuff when deciding if you want to read what anyone over there has published. Otherwise, you will – erroneously! - think that nothing written by US academics is ever worth your time.

Hell, I come from a culture - probably rather like the Irish, since Scandinavians sort-of colonized the place 1000 years ago – where the person who can resist talking, in particular about him/herself, and in particular after lots of drink, is regarded as the clever one. (There is even a Viking story where the Vikings are to elect a new chief and have the traditional day-long booze party to get to a decision. During the following hangover day, they unanimously elect as chief an old guy who sat in a corner all night, drinking enormous amounts of the local brew, while being able to resist talking about himself, or about anything or anyone else – he just sat there, observing, observing….)

…To say of someone else that they are doing “outstanding” work in this culture will only make everyone suspect you are sleeping with that person. And a person saying that about him/herself will get those standing around to exchange knowing glances and faintly smile at each other.

This cultural trait is cute in a way, but it does not help when you are at a faculty gathering at UC Berkeley, let me tell you. When in the US, do as people in the US do. And read their blurbs the way they do!

Expand full comment

"I DON'T HAVE AN INSIDE VOICE!!!" -- H. Simpson.

Expand full comment

Well, I'd say another example of the problem: 1. Having not read a single book, but "meh, public intellectual". Indeed, Pinker got tenure and wrote bestsellers even before LessWrong started. As did Hume or Chomsky. About jaw-dropping: Carl explains it nicely, and f...: "why u not just LOOK?" (G.G.) Better: do not look. Read the title. Draw the graph to your best guess. Then look.

e.g.: look at https://davidsmale.netlify.app/img/Bookplot1.png. What was the year of that graphic: 1960, 1980, 2000? Now see the animated version, by scrolling down on https://davidsmale.netlify.app/portfolio/factfulness/. Tiny bit impressive? Or, for the last 100 years, draw the graph of "Number of deaths from disasters" (disasters include all geophysical, meteorological and climate events including earthquakes, volcanic activity, landslides, drought, wildfires, storms, and flooding). Then google. Or: https://ourworldindata.org/extreme-poverty#the-mis-perceptions-about-poverty-trends - then scroll up.

Fun fact: Scott is - horribile dictu - a celebrity in the blogosphere - now run! :p No really, just because a thinker is known even to NYT readers and even invited on TV does not prove her views are to be ignored. And just because The Guardian or Cochran writes badly about Ridley does not mean he is always wrong.

Expand full comment
Mar 6, 2022·edited Mar 6, 2022

In the vein of Pinker's "using the tools of rationality to critique rationality means ha, I gotcha!" tweet: using the tools of emotional appeal ("your jaw will drop!") to critique emotional appeals over Pure Reason means ha, I gotcha!

The day Scott takes to intellectual whoring is the day I nope out of here.

Expand full comment

Authors seldom write the blurbs on their books (too busy, too embarrassing). I agree: Scott would. And Pinker should. Still, you literally "judge books by their cover". Authors wish to be read; publishers want to sell. Not just to us fancy pseudo-intellectuals, who are "above all this". (Fun fact: we are not; we're just going for another bait. See the pics on Erik Hoel's substack https://erikhoel.substack.com/p/we-cant-imagine-an-end-of-history?s=r )

Expand full comment

I really like this. One of the best of the recent posts. It describes the nature of the 'rationalist' project really clearly, and answers the question about its usefulness better than previous posts about it.

Also, I think it maps really well onto the "Seeing Like a State" discourse about metis and techne. Intuition (a trained neural net; this conflation seems basically correct, it really is precisely that) seems to be about the same thing as metis (except that metis can perhaps exist at a group level too?).

Argument "against" intuition - that it doesn't scale, can't be made to meaningfully advance - is roughly the same as what I was thinking about when I worried rationalist community discourse is getting overly enthusiastic about these ideas (especially strong forms of argument from Chesterton's Fence).

Expand full comment

You know, I was going to write here about how you're entirely off-base, about how this is not actually what the opponents of rationality think... and then I realized I should actually read this Gardner piece to see if what I was writing actually applied to him. And, reading it, I gotta say... I have no fricking clue what Gardner is trying to say. Your reading makes more sense of it than I could have. So, uh, huh.

Expand full comment

"But Gardner claims to be Jewish, and I doubt he follows all 613 commandments"- quite difficult, given that around half of them are literally impossible now.

Expand full comment

March and Olsen identify two different "logics" of decision making -- the logic of consequences and the logic of appropriateness.

The logic of consequences is outcome-based, cost/benefit decision making. Rationality is an attempt to refine this sort of decision making. If you're talking about heuristics and intuition versus explicit analysis and so on, it's all taking place within this framework. So, perhaps there are some arguments to be had there about the best way to get the best outcomes, but that's not the real debate.

The real debate is with an entirely separate way of doing things -- the logic of appropriateness. Under the logic of appropriateness, you don't exhibit consequence-driven behavior at all. Instead, you're trying to do what seems right or appropriate to a given scenario by applying rules and principles. The goal here *isn't* to get an outcome. The goal here is to follow the rules and defend the associated sense of identity.

Most people are probably familiar with the relevant distinction in the field of ethics between consequentialists (logic of consequences obviously) and deontologists (logic of appropriateness). One can attempt to refine a given consequentialist ethical system and compare attempts in terms of the consequences they deliver. But you'll never get anywhere with a deontologist by trying to argue with them about consequences. That just fundamentally misses the point.

When you look at religion, tradition, respect, and relationships these are very often the domain of the logic of appropriateness and not of consequences. One can imagine a consequentialist logic, but this generally misses the point. For example, a consequentialist logic of religion is something like "follow these rules because if you don't, you'll go to hell." You get almost no mileage out of trying to understand religion in this way, though. People who use this framework are constantly frustrated with the inconsistency of religious practice and its apparent hypocrisy.

But imagine, instead, that religious people are trying to live their lives according to a particular kind of identity and sense of self, where one takes some aspects of the relevant holy word very seriously and ignores others. This would be an insane thing to do as a religious consequentialist -- to tempt fate by breaking some of the rules. But if one is simply trying to "be a Christian" then it's very different, and the things it means to be a Christian can even be self-contradictory.

So too with tradition. One might follow tradition out of a belief in the wisdom of the ancients -- they knew better than us. But this is quite unlikely to be true in general and suggests, at most, a kind of weak deference. That's not how traditionalists actually think. They're trying to follow tradition as such, because that's what it means to "be an X." Asking if this is better or worse in terms of some set of consequences misses the point.

Expand full comment

I don't think this counts as anti-rationalist, but in my entirely subjective opinion, Pinker's writing style always seems arid to me. It feels like the same 3x3 Rubik's cube is being twisted into a pretty predictable array of patterns. If I go in thinking this guy is much more clever than I am, and that I'm about to learn something new, why can't he occasionally say something I didn't anticipate 50 pages ago?

I suppose it comes down to the fact that a brilliant thinker isn’t necessarily a great writer.

Expand full comment

> One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”

That's a bit more conflicty than I usually assume the explanation is. I read "something something white males" as "the people in charge of economic/political/intellectual power tend to belong to a particular subset of the population, and this makes their motivations imperfectly align with the rest of the population". I like my phrasing better because it makes it clear that this argument is more general than "white men ruin everything"; it applies in any situation where the controlling elites of a particular group are sufficiently homogenous/insular/different-from-the-general-population. It also makes it clear that this isn't necessarily a problem of malicious elites.

It _also_ implies a slightly different solution; elites should consciously be aware of their motivational misalignment and correct for it if they want their population to be happy. Again, this is a similar but more general point than your (presumably tongue-in-cheek) suggestion about favoring black women.

Expand full comment

> Likewise, I don’t think the best superforecasters are always the people with the most insight into rationality - they might be best at truth-seeking, but not necessarily at studying truth-seeking.

Did you just disprove the possibility of a hard-takeoff? That is, "the best AIs at making AIs aren't necessarily the best AIs at any given task (which means we shouldn't expect a particular AI to arise that can design a strictly better AI than itself across all domains)"?

Part of the reason this seems like a reasonable thing to expect is that if you're generally good at truth-seeking, it seems like you should also be able to leverage that into being good at seeking the truth _about_ truth-seeking. If this is impossible in some interesting way, that seems like it would have implications about where we expect the limits of intellectual ability to be.

Expand full comment

I suggest addressing three issues from the post and comments:

-lack of consensus over whether 'rationality' seeks truth or success;

-the name 'rationalist community' is confusing because of philosophical rationalism (e.g. Descartes's methods);

-calling oneself 'rational(ist)' seems like unfair, maybe prejudiced, rhetoric (and the perennial alternative name, 'aspiring rationalist', doesn't help).

From my perspective, what unifies this online community* is neither truth-seeking nor seeking to systematize winning, nor is it concern with AI x-risk, nor is it commitment to certain methods (although obviously a handful of them are consistently popular).

Instead, the unifying factor seems to be approaching problems like a student who will earn partial credit on a math problem even if the answer turns out to be incorrect. In terms from Scott's post, that's making clear what was explicitly calculated, what relies on a heuristic, what can't be justified beyond intuition, and what has been done to reduce a known cognitive bias (that last one is in other posts). And how these work together to suggest an answer.

My suggestion: picking a fairer online community name, e.g. 'Showing Our Work'.**

The name is agnostic on whether there's progress toward an answer (either true or successful). Maybe there is no progress around here. I do think the name captures the social purpose of participation for many, in a broader way than 'overcoming bias' or '(being) less wrong'. What if this is just an unproductive online community primarily for people with fond memories of math homework? Could be worse, right?

*I have no thoughts on how fully this covers the Bay Area community and its members' AI x-risk initiatives.

**Perhaps being SOWs would have a minute independent effect on humility, like tonsuring a monk, beyond dropping 'rationalist'.

Expand full comment

I'm not really part of this community, and this has nothing to do with the discussion at hand, but for the record, the offhand comment about Gardner being Jewish and not necessarily following 613 commandments is a good jab but poor comparative religion. Yes, medieval scholars came up with 613 commandments - though their lists didn't always agree - but anybody with a bissel of Jewish knowledge knows that most of the commandments are negative; many are conditional on being in the land of Israel, or on having a Temple, or are only done by the priest, or the king, or in wartime, or are situational (if X happens, then do Y). In the diaspora, excluding all the situational and negative commandments, only about 44 positive laws apply to an observant Jewish person. There's a larger number of negative laws, but some of those we do anyway, like don't eat lizards or bats, don't kidnap or commit manslaughter or lie in court proceedings.

But who knows if Gardner, and one would assume not Pinker, does those anyway. OK, irrelevant correction over with. Carry on.

Expand full comment

Even as a young boy I was always bothered by the way Vulcans were written on Star Trek. My head canon was that, as is explicitly stated in the show, Vulcans have more intense emotions than humans, and so their philosophy/meditation/culture was actually focused on suppressing emotions, not on being logical. This seemed a better fit to me because it always seemed 'obvious' to me that being logical (rational) was indistinguishable from following the best course of action in a given situation. If empathizing with a human crewmate during a crisis was the best way to improve their performance and increase the chances that you both live, then that was the logical thing to do.

Expand full comment
Mar 6, 2022·edited Mar 7, 2022

Thanks for the post, Scott. I enjoyed reading it and I had a big laugh about “rationality is important, but relationships are also important, so there”.

In your final paragraphs, you establish a distinction between theory and practice. These distinctions are useful simplified models, but it's important to point out that under the hood, theory and practice, fast and slow, reasoning and intuition, etc. are all manifestations of the same underlying black box, namely, cognition. This is self-explanatory and it is redundant of me to point this out, but a lot of people view intuition as some kind of magic. Rationalists and anti-rationalists talk past each other, but under the hood, it all boils down to cognition, regardless of what labels we use.

It's important that we define our ontological framework before we can talk more about cognition. Let's suppose that we subscribe to physicalism. Any understanding of rationality entails an understanding of cognition, but our understanding of cognition is highly limited at this point in time. However, within physicalism, we do know that our brains create models of the external world. Some models are more accurate than others, in that they have a better correspondence to the world, and we can measure the accuracy of different models via experiment. Rationality is any process that helps us improve the accuracy of our models. Rationality is effective cognition.
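To make "measure the accuracy of different models via experiment" concrete, here is a minimal sketch (the models, forecasts, and outcomes are all invented) that scores two probabilistic models against the same observations with the Brier score, a standard proper scoring rule where lower is better.

```python
# Toy model comparison via the Brier score. All numbers are invented.

def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]             # what actually happened
model_a = [0.9, 0.2, 0.7, 0.8, 0.1]    # confident and mostly right
model_b = [0.6, 0.5, 0.5, 0.6, 0.4]    # hedges toward 50% on everything

print(brier_score(model_a, outcomes))  # ~0.038: better correspondence
print(brier_score(model_b, outcomes))  # ~0.196: worse correspondence
```

On this reading, "rationality is effective cognition" cashes out as: whatever process reliably moves your models from the second row toward the first.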

Expand full comment

I am late to this thread and should probably just wait for the inevitable "Highlights From The Comments On What We Argue About When We Argue About Rationality" thread instead.

I think if you're going to define rationality as "the correct way of thinking about things in all circumstances" then it's un-criticisable, but that's not very interesting.

It's more interesting to consider the possible failure modes of how bad things can happen when you set out to be rational (but do it with a flawed human brain) and the circumstances in which things might have gone better if you'd tried a less systematic mode of thinking.

When you are "being rational", you are generally setting yourself some kind of target function, and then optimising your actions to maximise that target function. I think the two big failure modes are (a) you fail to make predictions correctly, or (b) you set your target function too narrowly and wind up sacrificing other good things.

I imagine we've all met (or been) the type of person who says "All I care about is [X] and so I'm going to disregard everything that doesn't help me achieve that goal" and winds up disregarding good manners and/or personal hygiene. That kind of thing seems like the most common failure mode for the "look at how rational I am" types.

I am reminded of Scott's essay on The Tails Coming Apart; trying to rationally maximise some set of values usually means sacrificing any value you're not maximising for. Hanging out in the middle space where you're not trying too hard to maximise for anything often leads to better results.
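To make failure mode (b) concrete, here's a minimal, entirely synthetic sketch: options have two loosely correlated attributes, and hard-maximising one of them tends to land you on an option that's unremarkable on the other, which is the tails-coming-apart effect in miniature.

```python
# Synthetic sketch of failure mode (b): optimising a narrow target function.
import random

random.seed(0)
options = []
for _ in range(1000):
    target = random.gauss(0, 1)                # the value you optimise for
    other = 0.3 * target + random.gauss(0, 1)  # a weakly correlated value
    options.append((target, other))

maximiser = max(options, key=lambda o: o[0])        # all-in on the target
balanced = max(options, key=lambda o: o[0] + o[1])  # the "middle space"

print("narrow maximiser:", maximiser)  # typically extreme on the target, so-so on the other
print("balanced pick:   ", balanced)   # typically good, not extreme, on both
```

With only weak correlation, the single-minded maximiser's score on the unmeasured attribute is close to chance - the formal version of forgetting your manners and your hygiene.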

Expand full comment

1) A heuristic is an algorithm - it's rational. Intuition is heuristic. This whole thing is circular. 2) How can you even begin to discuss rationality without an understanding of NP-completeness?
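Since NP-completeness got invoked: here's a minimal sketch (the problem instance is invented) of the trade-off the heuristic/algorithm framing points at, for an NP-complete problem like subset sum. Exact search is correct but enumerates up to 2^n subsets; the greedy heuristic is fast but comes with no guarantee.

```python
# Subset sum: exact exponential search vs. a cheap greedy heuristic.
from itertools import combinations

nums, target = [9, 7, 5, 3], 15  # invented instance

def exact(nums, target):
    """Brute force over all 2^n subsets -- correct, combinatorially explosive."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

def greedy(nums, target):
    """Heuristic: grab the largest numbers that still fit. Fast, may fail."""
    picked, total = [], 0
    for n in sorted(nums, reverse=True):
        if total + n <= target:
            picked.append(n)
            total += n
    return tuple(picked) if total == target else None

print(exact(nums, target))   # (7, 5, 3): found, at exponential cost
print(greedy(nums, target))  # None: grabbing 9 first blocks the solution
```

So calling a heuristic "an algorithm, hence rational" is fine as far as it goes; the interesting question is what guarantees you give up in exchange for tractability.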

Expand full comment

The point isn't to be 'rational', it's to be right.

Expand full comment

Pinker's book fails to convince "anti-rationalists" that rationality is useful, important and necessary because it argues for rationality in a way that someone who already endorses rationality would want rationality to be presented to him. This is akin to trying to convert christians by pointing out logical inconsistencies in the Bible, or trying to convert atheists by invoking [arguments that Christians already take for granted]. In essence, Pinker's book is attempting to convert nonbelievers into believers by invoking hidden assumptions that only people who already believe share. And make no mistake, Pinker's veneration of rationality is at the very least pseudo-religious, if not fully religious.

It's also clear he doesn't realize this, both in the way the book is written and in the tweet you shared - he talks and acts as if everyone shares these assumptions about rationality: for example, that the utility of large-scale formalized rationality (science) necessarily means everyone should try optimizing for rationality in their daily lives. It's not that anti-rationalists don't think rationality is useful or good, it's just that they don't believe "rationality is the highest virtue" necessarily follows from "science is a vehicle for good in our society", which itself is not a given either.

This is what Gardner meant (i think, i could be wrong) about respect being more important than rationality, he's talking about rationality as a virtue, not rationality as the practice of "improving slow thought" as Chris Phoenix put it in one of the comments here.

Expand full comment

Seems to me that a fair amount of what's distinctive about 'rationalists' is that they try to have true beliefs even about stuff where there isn't much consequence to being wrong, or where being accurate is rude/taboo. Obviously this is a long way from the Yudkowsky definition in terms of winning. More like applied autism. (I don't mean that negatively, I'm autistic.)

Expand full comment

Scott, I think you've missed the essence of the disagreement (essentially because you're *not* an idiot). What Gardner is complaining about is PERFORMANCE and SIGNALING. He cares less about solving problems than about political issues like assembling coalitions, indicating loyalty, and assigning blame.

Rationality does not do any of these; hell, most of the time it tells you that these are dumb concepts. If your priority in life is this sort of political drama, anything (i.e. Rationality) that indicates the drama is dumb, or that the teams that have lined up make no sense, or that the founding argument is ludicrous, is THE ENEMY.

We saw this play out (slowly) with Covid then at lightning speed with Ukraine. People who claimed to be rational and skeptical immediately dropped all filters when it came to believing whatever meshed with their convictions. We saw a constant stream on social media of stories that sounded good then were debunked a day later; but none of that had any effect, people still immediately latched onto any stories and claims that supported their tribe. And more than that, immediately attacked, in the most vicious ways possible, anyone saying anything (not opinions, simple facts) that went against the tribal story.

That's what Gardner is valorizing and Pinker is condemning: the prioritization of Tribe over Truth. Nothing more, nothing less.

Expand full comment

Fact check. The story about Srinivasa Ramanujan was not a story about a previously unsolved math problem. In fact it was a simple puzzle from a newspaper. G.H. Hardy, who recounted the story, tells of being able to find the answer himself in a few minutes using trial and error. Hardy posed the problem to Ramanujan, who was cooking a meal at the time. After a few moments of thought, Ramanujan dictated a continued fraction, the terms of which included Hardy's answer. The point of the story was that Ramanujan was such an instinctive mathematician he could immediately see the generalised solution.

Anyway, regarding rationality, the point about these sorts of discussions that always troubles me is that even clever people like Pinker seem to apply some sort of moral realism which suggests that the right thing to do can be determined purely by rational thought. As if it were obvious that some sort of utilitarian calculus can always tell us the right thing to do. In fact, as Hume pointed out, "Reason is, and ought only to be, the slave of the passions". What he means is that the value judgement (which for Hume is grounded in passions) is always prior to the reasoned calculation, and cannot itself be derived using reason. A lot of people seem to misunderstand this quote as an attack on rationality. In fact Hume was the most rational of men. His point was simply that the decision to act is necessarily preceded by a value judgement about desired outcomes. P(A|B) = [P(A)*P(B|A)]/P(B) comes afterward, as we attempt to forecast the outcomes of possible interventions.
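For concreteness, here is that formula as it would actually get used downstream of the value judgement, with purely hypothetical numbers: let A be "the intervention succeeds" and B be "a pilot study shows improvement".

```latex
% Bayes' theorem with invented numbers:
% P(A) = 0.2,  P(B|A) = 0.9,  P(B|\neg A) = 0.3
P(A \mid B) = \frac{P(A)\,P(B \mid A)}{P(B)}
            = \frac{0.2 \times 0.9}{0.2 \times 0.9 + 0.8 \times 0.3}
            = \frac{0.18}{0.42} \approx 0.43
```

Nothing in the calculation tells you whether the intervention's goal is worth pursuing; Hume's point is that the priors and the arithmetic only matter once the passions have fixed the goal.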

Expand full comment

What you're talking about here is what John Vervaeke calls "relevance realization". https://www.meaningcrisis.co/ep-28-awakening-from-the-meaning-crisis-convergence-to-relevance-realization/

Relevance realization is how one determines when to bring the machinery of rationality online and what to use it on. Just saying heuristics or intuition is glossing over a very deep subject he spends 25 hours talking about.

Expand full comment

Suppose we accept Yudkowsky's systematized winning definition of rationality. Even if I am pro-rationality, the teachings of the rationalists may be counterproductive at my current stage of the game I am playing. Of course, the rationalists may argue that my behavior is still rational in that case, but if they can't help me, what do I care what they have to say?

There are some interesting subcases, such as:

- Pretending to dislike rationality confers some benefit, like people thinking you are cooler

- I've already exhausted all the rationalist learnings on, say, ballet, and now they are just a distraction

- I'm good enough at rationality and now I can better achieve my goals by doing something like joining the local church

- The community around rationality is not very useful to you. I.e., they all seem to want to send their money to effective charities while your VC friends want to take you out to dinner and fund your startup.

- Your utility function is holding you back and many other disciplines are better for the modulation of utility functions. Even scrolling someone else's TikTok is better for this than the study of rationality.

- You've concluded the popular areas of rationality aren't the lowest hanging fruit for improving your life, and now your rationalist interlocutors are just annoying. So you become anti-rationalist to lessen the spam you have to deal with.

- You find it hard to separate rationality from the rationalist community and their norms are simply annoying to you.

- Maybe the rationalist community doesn't seem that good at winning to you. Maybe you think they'd be more effective if they just banded together to shill a new currency and all got rich.

Expand full comment

Rationality is adapting. Evolution is the survival of the adaptable, after all.

Expand full comment

Someone will probably already have said this in the comment section somewhere but: rationality is a formalization of reasoning, in the same way as mathematics is a formalization of quantification.

Scott is considered to be rationalist-adjacent at the very least, and, how to put it, most of his content is more or less proof-checking, just for arguments rather than theorems. Maybe that's an even better way to put it: rationality is the art of validating reasoning proofs.

This is irrelevant to Fermi estimates while being mugged - but oh so relevant to any lasting policy decisions.

And anti-rationalists would then simply happen to be the sophists and the demagogues: those whose arguments can't withstand sufficient scrutiny when taken apart bit by bit and checked for validity, or at least quantified in terms of their likelihood of being true. That is, they believe that the purpose of any given argument isn't really to find out courses of action or truths about the world, but simply to convince others to go in a specific direction. It might be that they feel all validation is motivated, and that if a 'rationalist' finds an issue with their argument, all the rationalist has succeeded at is making a better argument; they might intuitively follow some sort of argument-external truth dualism as an outlook.

Expand full comment

I just think of it as the pursuit of open-source thinking.

Expand full comment

Going back and reading Gardner's actual essay, I don't see him arguing against rationality at all. Instead, he's saying that pure rationality is not enough to make the world a better place on its own, and listing some things he thinks are also important for that goal.

Expand full comment

Rationality-as-winning is enough to make the world better on its own, by definition. Gardner’s arguing against _trying_ to use rationality-as-winning on its own. He lists some things which he thinks are good heuristics, and I think his meta-argument is “These things have good effects, we’re better off saying ‘Do this’, instead of saying ‘Decide what’s best’ and hoping they do it.” He might also be meta-arguing “These things are hard, teach people how to do them, not how to decide whether to do them.”

Expand full comment

Consider making Lord Voldemort more rational and therefore better at winning. Does that make the world a better place? No, he'll use that ability to take more power and oppress people.

Without some sort of value like "making self and others happy is good", pure effectiveness can be destructive. See e.g. paperclip maximizers. I think this, or something similar, is what he's trying to get at with the bit about respect.

Expand full comment

Gardner is a "well-known wrong person"?

Gardner's theory of multiple intelligences is intuitively true as well as mechanically true-seeming. The lack of empirical evidence should be interpreted literally as "to be continued" and not proof that it's wrong. His theory is the opposite of g factor theory; so, invalidation of the former would imply validation of the latter, which isn't a slam dunk.

I want to elaborate on this later, but g factor and IQ are self-evident definitions: Of course, some factor exists that predicts a cluster of things that we consider to be good. We then retroactively assign the label "intelligence" to those predictive factors.

Expand full comment

Great distinction. Some of what you've written about here reminds me of John Vervaeke's concept of Relevance Realization, which I think you'd find interesting! He talks about how there's a paradox encountered when trying to talk about relevance: in order to have a theory of anything you need to first have screened off what is irrelevant, but if you try to do that with relevance, you've already made a bunch of assumptions about the very question you're trying to answer. (Relevance as such comes up re classifying dogs & cats, for instance.)

Fascinating paper on the topic: http://contrastiveconvergence.net/~timothylillicrap/files/articles/relevance%20realization%20as%20an%20emerging%20framework%20in%20cogsci.pdf

Expand full comment

the paper includes this quote:

> Cherniak [4] has influentially argued that you cannot say that to be rational is to simply be logical because many algorithms couched in logic lead to combinatorial explosion. He argues instead that to say that we are rational means that we have zeroed in on some of the relevant subset of logical inferences for the task at hand. He therefore calls attention to the fact that relevance realization is central to the issue of rationality in general.

Expand full comment

I disagree with the idea that someone who stupidly overthinks in a crisis probably deserves to have been shot.

There are definitely ways I can argue that point, but I think the intuitive take will do.

Expand full comment