What Are We Arguing About When We Argue About Rationality?
Let’s talk about this tweet:
The backstory: Steven Pinker wrote a book about rationality. The book concludes that rationality is good. People should learn how to be more rational, and then we will have fewer problems.
Howard Gardner, well-known wrong person, sort of criticized the book. The criticism was facile, a bunch of stuff like “rationality is important, but relationships are also important, so there”.
Pinker’s counterargument is dubious: Gardner’s essay actually avoids appealing to rationality pretty carefully. But even aside from that, it feels like Pinker is cheating, or missing the point, or being annoying. Gardner can’t be arguing that rationality is completely useless in 100% of situations. And if there’s any situation at all where you’re allowed to use rationality, surely it would be in annoying Internet arguments with Steven Pinker.
We could turn Pinker’s argument back on him: he frames his book as a stirring defense of rationality against anti-rationalists. But why does he identify these people as anti-rationalists? Sure, they themselves identify as anti-rationalist. But why should he believe them? After all, they use rationality to make their case. If they won, what bad thing would happen? Even in whatever dystopian world they created, people would still use rationality to make cases.
I feel like what I’m missing is an idea of what anti-rationalism means. What’s at stake here? What are we arguing about when we argue about rationality?
Rationality As Full Computation Opposed To Heuristics?
I think Howard Gardner sort of believes this. He has an inane paragraph about how respect is more important than rationality. When I try to make sense of it, I get an argument kind of like: the Communists trusted their reason, reasoned their way into believing Communism was true, and oppressed people because their version of Communism said it was okay. But they should have trusted a heuristic saying that every human being is worthy of respect instead.
Elsewhere in the essay, he compares rationality unfavorably to religion or tradition. One is tempted to use the maneuver from Pinker’s tweet here: “Is there anything good about religion or tradition?” If no, why prefer them to rationality? If yes, wouldn’t a rational person rationally choose to believe / follow them?
Again, this makes the most sense as an argument about heuristics. It’s the old argument from cultural evolution: tradition is the repository of what worked for past generations. Perhaps you are very smart and can beat past generations. Or perhaps you are an idiot, you think “I can do lots of cocaine-fueled orgies, because I will just calculate the pros and cons of each line of cocaine / potential sex partner as I encounter them, and reject the ones that come out negative”, and then one time you forget to carry the one and end up in a bathtub minus a kidney. This was basically how Communism went too.
One of the most common arguments against rationality is “something something white males”. I have never been able to entirely make sense of it, but I imagine if you gave the people who say it 50 extra IQ points, they might rephrase it to something like “because white males have a lot of power, it’s easy for them to put their finger on the scales when people are trying to do complicated explicit computations; we would probably do a better job building a just world if policy-makers retreated to a heuristic of ‘choose whichever policy favors black women the most.’”
So what are pro-rationality and anti-rationality people arguing about? In this model, Pinker and his supporters believe you should explicitly calculate the pros and cons of everything you do, whereas Gardner and his supporters believe you should often retreat to heuristics like “don’t do anything that violates human rights” or “live a holy and god-fearing life” or “don’t do drugs” or “try to favor black women”.
But I am pretty much 100% sure that Pinker and his supporters don’t believe the stupid explicit computation thing. I count myself among his supporters and I definitely don’t believe it. Obviously heuristics are important and good. This is true not just for big important moral things, but also for everyday occurrences and determining truth. If I get an email from a Nigerian prince asking for money, I’m not going to think “I shall do a deep dive and try to rationally calculate the expected value of sending money to this person using my very own fifteen-parameter Guesstimate model”. I’m going to think “nah, that kind of thing is always a scam”. Not only will this prevent me from forgetting to carry the one and sending my life savings to a scammer, but it also saves me the hours and hours it would take to create an explicit model and estimate a probability.
Then maybe the difference between rationalists and anti-rationalists is that rationalists use heuristics sparingly and are willing to question them, and anti-rationalists follow heuristics religiously (or even slavishly)? But Gardner claims to be Jewish, and I doubt he follows all 613 commandments; I imagine he’s even raised his voice a few times when respect didn’t seem to be working. I think everybody follows some combination strategy of mostly depending on heuristics, but using explicit computation to decide what heuristics to have, or what to do when heuristics conflict, or whether a certain heuristic should apply in some novel situation.
Rationality As Explicit Computation Opposed To Intuition?
“Intuition” is a mystical-sounding word. Someone asks “How did you know to rush your son to the hospital when he looked completely well and said he felt fine?” “Oh, intuition”. Instead, think of intuition as how you tell a dog from a cat. If you try to explain it logically - “dogs are bigger than cats”, “dogs have floppy ears and cats have pointy ones” - I can easily show you a dog/cat pairing that violates the rule, and you will still easily tell the dog from the cat.
Intuition can be trained. Good doctors have great intuition, and are constantly saying things like “this feels infectious to me”. If you ask them to explain, they’ll give you fifteen different reasons it seems infectious, but also admit there are ten different reasons it might be iatrogenic and forty reasons it might be autoimmune, but the infectious reasons seem more compelling to them. A newbie intern might be able to generate the same list of 15 vs. 10 vs. 40 reasons and be totally paralyzed by indecision about which ones are most important.
This last decade has been good for intuition, because we’ve finally been able to teach it to computers. There are now AIs that can tell dogs from cats, previously an impossible task for a machine. There are style transfer AIs that can make a painting feel more like a Van Gogh, or “more cheerful”, or various other intuitive things. Even text generation programs like the GPTs are conquering intuition - Strunk & White aside, there’s no ruleset for how to write, just better or worse judgment on what word should come next. Since these AIs are just giant matrix multiplication machines, “intuition” now has a firm grounding in math - just much bigger, more complicated math than the usual kind that we call “logical”.
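To make the “just giant matrix multiplication machines” point concrete, here is a toy sketch (the feature names and weights are invented for illustration; a real classifier would learn its weights from thousands of images, not three hand-picked numbers). The “intuition” is literally two matrix multiplies with a nonlinearity between them, with no legible rule anywhere:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hand-picked weights for the sketch (a real network would learn these).
# Row 1 of W1 responds to "doggy" features, row 2 to "catty" ones.
W1 = np.array([[-2.0,  1.0,  2.0],
               [ 2.0, -1.0, -2.0]])
W2 = np.array([1.0, -1.0])

def feels_like_a_dog(features):
    hidden = relu(W1 @ features)   # first matrix multiply + nonlinearity
    score = W2 @ hidden            # second matrix multiply
    return score > 0               # no articulable rule, just arithmetic

# Made-up feature vectors: [ear pointiness, size, how much it barks]
cat = np.array([0.9, 0.3, 0.0])
dog = np.array([0.1, 0.8, 0.9])
```

Asked *why* `dog` comes out as a dog, the most honest answer the model can give is “the arithmetic came out positive” - which is about as legible as “it just feels like a dog to me”.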
So in another conception of the debate, the Pinkerian rationalists want to explicitly compute everything through formal arguments or equations, but the Gardnerian anti-rationalists just want to get a gestalt impression and make an intuitive decision. This maps onto stereotypes about atheism vs. religion: the atheist saying “here are 7,000 Biblical contradictions, QED” vs. the believer saying “but it just feels true to me”.
But again, I would be shocked if Pinker or other rationalists actually believed this - if he thought it was a productive use of his time to beat one of those cat/dog recognition AIs with a sledgehammer shouting “Noooooooooo, only use easily legible math that can be summed up in human-comprehensible terms!” Again, it would be impossible to live your life this way. A guy with a gun would jump out from behind the bushes, and you’d be thinking “well intuitively this seems like a robbery, but I can’t be sure until I Fermi estimate the base rates for robberies in this area and then adjust for the time of day, the…” and then the robber has shot you and you probably deserved it.
Even this doesn’t go far enough - it suggests that intuition might only be useful under pressure, and when you have enough time you should do the math. But I recently reviewed the discourse around Ajeya Cotra’s report on AI timelines, and even though everyone involved is a math genius playing around with a super complex model, their arguments tended to sound like “It still just doesn’t feel like you’re accounting for the possibility of a paradigm shift enough” or “I feel like the fact that your model fails at X is more important than that my model fails at Y, because X seems more like the kind of problem we want to extrapolate this to.” The model itself is explicit, but every decision about how to make the model or how to use the model is intuitive and debated on intuitive grounds.
Yudkowsky: Rationality Is Systematized Winning?
This is Eliezer Yudkowsky’s standing-on-one-foot definition of rationality.
The idea has a history behind it. Newcomb’s Paradox is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently only get a token amount. Philosophers are divided about what to do in this situation, but (at least in Yudkowsky’s understanding) some of them say things like “well, it’s important to be rational, so you should do it even if you lose the money”.
This is what Eliezer’s arguing against. If the “rules of rationality” say you need to do something that makes you lose money for no reason, they weren’t the real rules. The real rules are the ones that leave you rich and happy and successful and make the world a better place. If someone whines “yeah, following these rules makes me poor and sad and unable to help others, but at least they earn me the title of ‘rational person’”, stop letting them use the title!
This definition has its issues, but one thing I like is that it makes it very clear that following heuristics or using intuitions is fine.
If you have some difficult problem, should you consult your intuitions or your long chain of explicit reasoning? What would a rational person do? The most rational answer I can think of here is “run the experiment, try it both ways a few times, and use whichever one produces better results”.
Should you rely on heuristics, or calculate everything out each time? I would be surprised if people who explicitly calculated the value of responding to each spam email ended up happier and richer and psychologically healthier and doing more good in the world than people who click the “delete” button as a spinal reflex - in which case, a real rationalist should choose the reflex.
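You could even run that experiment as a simulation. This is a toy version with every number invented for illustration (the scam rate, payoffs, and time costs are assumptions, not data):

```python
import random

random.seed(42)  # fixed seed so the "experiment" is reproducible

# All of these parameters are made up for illustration:
LEGIT_RATE = 0.001        # 1 in 1,000 such emails is a real opportunity
LEGIT_PAYOFF = 500        # dollar value of the rare legitimate offer
HOURS_PER_ANALYSIS = 3    # time to build an explicit model per email
HOURLY_VALUE = 50         # dollar value of an hour of your time

def heuristic_strategy(emails):
    """'That kind of thing is always a scam': delete on reflex, pay nothing."""
    return 0

def explicit_strategy(emails):
    """Model every email individually; catch the rare real one, pay in time."""
    payoff = sum(LEGIT_PAYOFF for e in emails if e == "legit")
    cost = len(emails) * HOURS_PER_ANALYSIS * HOURLY_VALUE
    return payoff - cost

emails = ["legit" if random.random() < LEGIT_RATE else "scam"
          for _ in range(10_000)]
```

With these made-up parameters the reflex wins by over a million dollars’ worth of time; the interesting rationalist question is which parameter values would flip the result.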
This has the happy side effect that it’s impossible to be against rationality. But it also has the more concerning implication that it’s vacuous to be in favor of it. If rationalists were people who really liked explicit chains of computation, we could print out cool “TEAM EXPLICIT CHAIN OF COMPUTATION” t-shirts and play nasty pranks on the people who like heuristics and intuition. But if it’s just about preferring good things to bad things, it doesn’t really seem like a method, or a community, or an ideology, or even necessarily worth writing books about.
It still feels like there’s something that Pinker and Yudkowsky are more in favor of than Howard Gardner and Ayatollah Khamenei, even though I bet all four of these people enjoy winning.
Rationality As The Study Of Study?
Maybe rationality is what we’re doing right now - trying to figure out the proper role of explicit computation vs. intuition vs. heuristics. In this sense, it would be the study of how to best find truth.
This matches a throwaway line I made above - that the most rational answer to the “explicit computation vs. heuristics” question is to try both and see which works better.
But then how come pretty much everybody identifies “rationality” more with the explicit calculation side of things, and less with the intuitive side? Surely a generic study of truth-seeking would be unbiased between the two, at least until it did the experiments?
Geology is the study of rocks. It’s hard to confuse the object-level with the meta-level; rocks are a different kind of object than studying. If you’re debating whether a certain sample is schist or shale, you’re debating the rocks. If you’re debating whether argon-argon dating is more appropriate than potassium-argon dating, you’re debating the study. In order to do good science, you want your studying to conform to certain rules, but nobody expects the rocks themselves to conform to those rules.
Rationality is the study of truth-seeking, ie the study of study. It’s very easy to confuse the object-level with the meta-level; are we talking about the first or second use of “study” in the sentence?
Science ought to be legible, not because legibility is always better at finding truth, but because that’s part of the “rules” of science. You don’t get to say you’ve scientifically explained something until you’ve put it into a form that other people can understand. This is a good rule - once something is comprehensible, you can spread it and other people can build on it. Also, you’re more likely to be able to take it in new directions.
If some prospector has a really amazing knack for figuring out where diamonds are buried, which he can’t explain - “This just feels like a diamond-having kind of area to me” - then he’s good at rocks but not good at geology. He’s not a geologist until he’s able to frame it in the form of laws and explanations - “diamonds are found in areas where deeper crust has been thrust to the surface, which can be recognized by such-and-such features”.
If you’re a mining company, then by all means hire the guy with the mysterious knack; employing him sounds really profitable. But a hundred years later, most of the progress in diamond-acquisition is going to come from the scientists (…is a hypothesis you could assert; I think Taleb would partly disagree). Not only can they share their findings in a way that Knack Guy can’t share his knack, but they can ask questions and build upon them - might there be other signs that indicate deeper crust thrust to the surface? Can we just dig down to the deep parts of the crust directly? Can we replicate the conditions of the deep crust in a lab, and avoid having to mine at all? These are the kinds of questions that a knack for finding diamonds doesn’t help with; you need the deep theory.
Likewise, supposing that some tradition is good, following the tradition will give you the right answer. But you can’t study it (unless you study the process by which traditions form, which isn’t itself “relying on tradition”). You’ve been magically gifted the correct answer, but not in a way you can replicate at scale or build upon. “Following the Sabbath is good because it helps you relax and take time to contemplate, the ancients were very wise to prescribe it”. Fine, but I need fifteen people to bond super-quickly in the midst of very high stress while also maintaining good mental health, also five of them are dating each other and yes I know that’s an odd number it’s a long story, and one of them is secretly a traitor which is mutual knowledge but not common knowledge, can you give me a tradition to help with this? “Um, the ancients never ran into that particular problem”.
Sometimes theories lag way behind practice. For most of medical history, theorists believed bloodletting and the four humors, whereas people with knacks (wise women, village healers, etc) generally did reasonable things with herbs that presaged modern medicine. Still, even though ancient doctors got the contents of their theories wrong, the part where they had theories was legitimately a real advance; without it, I don’t think we would have gotten to modern medicine, which does outperform the wise women most of the time.
If you’re seeking truth, you’re absolutely allowed to do what Srinivasa Ramanujan did when he discovered how to simplify a certain kind of previously unsolvable math problem:
It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. ‘Which continued fraction?’ I asked myself. Then the answer came to my mind.
If we define rationality as “the study of truth-seeking”, this is good at the “truth-seeking” part, but bad at the “study” part. He got the right answer. The truth was successfully sought, the diamond was found. But he can’t explain to anyone else how he did it - he just has a good knack for this kind of thing.
Here’s one scenario which I think is unlikely but theoretically possible: the formal study of rationality will end up having zero advantages over well-practiced intuitive truth-seeking, except insofar as it allowed Robin Hanson to design prediction markets, which someday take over the world. This would be a common pattern for sciences: much worse at everyday tasks than people who do them intuitively, until it generates some surprising and powerful new technology. Democritus figured out what matter was made of in 400 BC, and it didn’t help a single person do a single useful thing with matter for the next 2000 years of followup research, and then you got the atomic bomb (I may be skipping over all of chemistry, sorry).
I’m not actually that pessimistic. I think there are plenty of times when a formal understanding of rationality can correct whatever vague knacks people are otherwise using - this is the biases and heuristics research, which I would argue hasn’t been literally zero useful.
This theory would help explain how Pinker’s beef with Gardner developed. Gardner is making the same sort of claim as “wise women do better than Hippocratic doctors”. It’s a potentially true claim, but making it brings you into the realm of science. If someone actually made the wise women claim, lots of people would suggest randomized controlled trials to see if it was true. Gardner isn’t actually recommending this, but he’s adopting the same sort of scientific posture he’d adopt if he was, and Pinker is picking up on this and saying “Aha, but you know who’s scientific? Those Hippocratic doctors! Checkmate!”
A few weeks ago, when I posted my predictions for 2022, a commenter mentioned that various “rationalist” “celebrities” - Eliezer Yudkowsky, Julia Galef, maybe even Steven Pinker - should join in, and then we would find out who is most rational of all. I hope this post explains why I don’t think this would work. You can’t find the best economist by asking Keynes, Hayek, and Marx to all found companies and see which makes the most profit - that’s confusing money-making with the study of money-making. These two things might be correlated - I assume knowing things about supply and demand helps when starting a company, and Keynes did in fact make bank - but they’re not exactly the same. Likewise, I don’t think the best superforecasters are always the people with the most insight into rationality - they might be best at truth-seeking, but not necessarily at studying truth-seeking.