806 Comments
Comment deleted
Expand full comment

I feel like this strategy is more meant to apply to things you can do. Like, if you don't volunteer at a homeless shelter, but you would if most people did, that's an indication you should reevaluate 'not volunteering at a homeless shelter'. Not necessarily change your mind (you might have a perfectly valid reason not to), but at least check in on whether there might be bias there. I don't know if this is a great method for testing huge policy changes for bias, just individual thought processes and actions.

Expand full comment
Comment deleted
Expand full comment

I am unironically in favour of deporting poor people to poor countries.

Expand full comment
Comment deleted
Expand full comment

Yeah well I wouldn't really do it for arbitrary reasons. I'd just replace the existing tax system with an annual flat "membership fee" which for the US right now would need to be about $24K. If you can't pay the fee you don't get to live there.

If it's good enough for your country club, it's good enough for your country.

Expand full comment

That’s EXACTLY the sort of “how can I get away with ignoring her” that she is talking about.

Perfect example: imagine a world where germophobe Trump panicked at the first news of Covid and demanded federal laws for masks, working at home, etc. In that world are you still supporting all those things? Or are you complaining about yet another insane Republican power grab that is the first step on a slippery slope, a scheme that, conveniently, works out great for the jobs of rich white men but reduces quality of life for everyone else?

THAT is what it is about. Consider arguments in a different world…

Expand full comment
Comment deleted
Expand full comment
Comment deleted
Expand full comment

Yes, and...

It's nice to hang out in spaces where at least a significant portion of people accept that "most people are fundamentally incapable of being rational most of the time, when they are rational it's mostly an accident, and that even most people who are more rational than average still mostly engage in irrational motivated reasoning".

Expand full comment

You might consider the possibility that you are trying to think about things, and that the reason you think the things you think is not a cosmic accident. After all, it's possible you're wrong on that point and it would make a big difference if you are.

Expand full comment
Comment deleted
Expand full comment
Comment deleted
Expand full comment

How is it irrational?

Expand full comment
Comment deleted
Expand full comment

I'm not sure how that conclusion is supposed to follow. Even if we think that it's impossible for most of us to be mostly rational most of the time, if we think that some sorts of efforts can help us be more rational some of the time, without making us less rational other times, and we can show the benefits of that increment of rationality, then it seems like we should pursue those efforts.

Just like I can still think it's worthwhile to do things to cut carbon emissions somewhat, even if I think it's unlikely or even impossible to get the world to cut enough to avoid 4 degrees C of warming.

Expand full comment

Isn't that just parallel to "most people are fundamentally incapable of being altruistic most of the time, when they are altruistic it's mostly an accident, and that even most people who are more altruistic than average still mostly engage in selfish behavior." That doesn't lead me to conclude that altruism is pointless; it leads me to conclude that altruism is difficult.

Expand full comment

Don't you need to make a distinction between "irrational" and "unpredictable"? I would agree most people are incapable of being rational, for a certain definition of that word. But they are often 100% predictable, so in *that* sense they *are* "rational" = "following a system of logically consistent rules." The rule may very well be "John is in love with Mary and so Mary can do no wrong, so even if Mary carelessly drives through a red light and wrecks the car John will argue fiercely that it's the other guy's fault." So John is certainly irrational by one definition of "rationality" but he is "rational" by another -- he follows a set of rules in a predictable way.

If people are predictable, as they kind of are, I would say, then there's no reason you need to abandon all hope of working with them and do your own thing, you just need to fold the rules by which they actually operate into *your* rational decision-making process. I see it as pretty similar to how one approaches natural phenomena. You wouldn't argue that a tornado is the atmosphere behaving "irrationally," you would just fold the possibility of a tornado into your own decision-making.

Expand full comment
Comment deleted
Expand full comment

I agree with the last point, but I'm confused by what you mean in the first paragraph. Let's say I was a horse breeder. Nobody would suspect horses of having complex mental states or a nuanced grasp of Bayes' Theorem, and they're clearly emotional animals. But it turns out their behavior is quite "rational" in the sense that it follows from a set of deducible rules, and they're pretty reliable about following those rules, so one can manage them and work with them pretty well.

Why not think the same way about people? They may very well have silly internal mental states and do things that are logically inconsistent with their stated aims and beliefs, but...why not just discount their stated aims and beliefs as epiphenomenal, and treat them like...well, horses? They do appear to follow a set of logically consistent rules, which granted aren't the same as the rule they *say* they are following, but which are still deducible and pretty reliable.

Also, I kind of beg to differ on human knowledge being cumulative. It's only cumulative in certain areas, such as fire and antibiotics and so forth. But in other areas, it doesn't seem to be. One generation is scarred by getting into an expensive war without clear aims, and by gum, in a generation and a half people will be doing it again. "This time it's different!" Nobody ever thinks "perpetual motion has never worked only because it hasn't been tried the right way, so why not give it another spin?"

Expand full comment
Comment deleted
Expand full comment

Based on Scott's review, I don't think this book is an endorsement of uncritical acceptance of the social consensus of the last several years.

While not seemingly the point of the book, it seems like adopting a "Scout Mindset" would cause one to have more skepticism, not less.

Expand full comment
Comment deleted
Expand full comment

I'm even more confused. The socially approved consensus of ... meeting new people? Making friends? Falling in love?

Expand full comment
Comment deleted
Expand full comment

I mean, sometimes the societal consensus happens to be the correct thing. Do you think Luke changed his mind because of societal consensus or because of good arguments?

Expand full comment
Comment deleted
Expand full comment

Social consensus. I say that because he was very careful to explain how it was a good argument. If it was obvious, either to himself or others, that it was a good argument and not the build-up of social pressure, he wouldn't feel compelled to defend his pride by arguing it totally was his own decision, no not giving in to the crowd at all, nosireebob.

Expand full comment

Agreed. Likewise on the Jerry Taylor example. Getting with the climate consensus line isn't heroic. If anything it makes me think he was just taking his anti-climate-change line because it was popular with some group; when he claims "Oh, I read the science and I guess it is better now", well, the science hasn't changed much at all, hardly enough to justify a 180 turn in my opinion.

Then I asked myself "Why does that name seem vaguely familiar?" Oh, right, he is the head of the Niskanen Center. Jerry Taylor is something of a weather vane, one that has in recent years spun left. He's a political salesman, not a serious thinker.

Expand full comment

I haven't read the book. Are there any examples where the rational change was away from the current social consensus? Anyone who went from "Climate is an existential threat and if we don't do something serious about it in the next ten years we are finished" to "climate change may present some problems over the next century?"

Expand full comment

He's no longer head of Niskanen, as was revealed today. He's also someone people might want to associate with less, although an aspiring rationalist like Julia might still say that the specific incident of him changing his mind is worth emulating.

Expand full comment

1. The global warming skeptic who moves towards accepting global warming

2. The Republican party supporter who moves towards believing that a Republican Party politician involved in a scandal is actually guilty

3. Another global warming skeptic, this one named, who moves towards accepting global warming

4. An anti-dating pastor who moves towards accepting dating.

On the flip side we do have a feminist who retracted an anti-man article after realising it was wrong. But the other examples are all oriented in one direction.

Expand full comment

> including the one that ends in landing a hot rationalist wife

Holy crap, your take is weird and dehumanizing.

Like... what is the author supposed to do here? Not talk about a personal experience that defined key parts of her life while being directly related to the subject of the book? Be a gay man so that she can't be accused of selling herself as a trophy that lonely men could get if they buy her book?

Expand full comment

> Like... what is the author supposed to do here? Not talk about a personal experience that defined key parts of her life while being directly related to the subject of the book?

Probably? If my wife were writing a book about rationality I'd prefer she didn't illustrate it with examples of "Here's times that my husband was wrong and was successfully badgered into admitting it".

Expand full comment

Why not, if they were true? I have repeatedly told the story of a time when I was wrong — about the implication for me of Covid early in the pandemic — and was persuaded by my son to cancel the last two talks of my European trip and fly home. And about why I was wrong — using crystalized instead of fluid intelligence, possibly due to age.

I have no objection if my son wants to tell the story.

Expand full comment

I feel like disagreements between husband and wife are different to disagreements between father and son. A husband and wife should act (in public, or in front of their children) as a single unit, in a way that a father and son aren't expected to.

Expand full comment

I didn't take it that he was badgered. The story is that despite everyone trying to badger him into "it was wrong", he stuck to his guns, but later changed his mind because someone convinced him (not bullied him), and then he equally stuck to his guns when other people were trying to badger him about *that*.

I can see why she would find it admirable: social bullying didn't work, persuasive evidence did!

Expand full comment

It could equally be taken as "hey you too can land yourself a hot rationalist husband", for people interested in acquiring a husband 😀

What's interesting to me here is that I really, *really* dislike the Obama anecdote, and poking at my brain as to *why* it is making me itchy is turning up a lot of things that are not really rational as such, but possibly the way most of us react instinctively when we dislike stuff.

As with Trump, I never believed the hype around Obama - either that he was the Starchild Indigo Child Lightbearer (and people unabashedly saying this on social media made me go "Hmmmm????" very hard) or that he was a secret Muslim going to overthrow the dear old US of A. I considered him a standard politician who had made a career through a bit of adroitness and a willingness to push the "First African-American Ever" angle hard (no blame to him, you do what you can with what you have to achieve your goals).

But the amount of - let's say "boot licking" instead of my first phrasing of this - annoyed the heck out of me. The tingle down the leg wasn't the half of it:

https://www.huffpost.com/entry/chris-matthews-i-felt-thi_n_86449

Now we were supposed to admire Obama's diet, for one (and the point here about Hannity attacking Obama for putting mustard on his burger reads very ironically when you consider Trump's steak and ketchup):

https://www.mashed.com/114046/barack-obama-really-eats/

And in general, the idolising just got to the point that I started wanting to kick the cat whenever I saw yet another "Most fantastic person ever in the history of the world does another fantastic thing" story. So anecdotes about "Obama behaved like a control-freak boss" push several of my buttons (I don't like mind games of this kind, if a boss wants to know my genuine opinion let them ask me, not play cat-and-mouse games of this kind to see if they can trip me up; also, anybody who gets that far up the ladder already knows that your underlings are going to "yes sir no sir" anything you say, because the guy who points out that the Emperor is wearing no pants generally doesn't get showered with praise and rewards, see the history of whistleblowing).

Expand full comment

I would guess you dislike the Obama anecdote simply because it shows him at his patronizing better-than-you worst. "You people are just children, helpless in the face of my Jedi mind tricks to keep you on your toes. Who's your daddy, bitches?"

What may be causing you trouble is why it doesn't bug *other* people the same way. And that is an interesting question. Put that anecdote in the mouth of Trump and a whole lot of those same people would be foaming at the mouth about his arrogance, narcissistic manipulativeness, inability to trust or be trusted. Why the difference? It isn't *just* politics, but what it is is not easy to pin down.

Expand full comment

Reader, I married him

Expand full comment

Well, in fairness a book advising you on how to think more skeptically is obviously in a tricky place. The message has to be carefully tailored "Be more skeptical! Er...except not about what *I'm* saying...you can totally take that to the bank. But the next guy over, in the book on the best-seller shelf -- you definitely want to take him with a grain of salt. Not me, though. Him."

Expand full comment
Comment deleted
Expand full comment

Not the famed jazz pianist Dick Hyman? Composer of the fingerbuster?

Expand full comment

"(sometimes this is be fine: I don’t like having a boring WASPy name like “Scott”, but I don’t bother changing it. If I had a cool ethnically-appropriate name like “Menachem”, would I change it to “Scott”? No. But “the transaction costs for changing are too high so I’m not going to do it” is a totally reasonable justification for status quo bias)"

Hmm yes, transaction costs of changing a name are high. After all, it's not like you had been writing under a pseudonym for many years that could have been anything you wanted, Menachem Alexander

Expand full comment

I mean, the transaction costs for switching pseudonyms are still pretty high. Setup costs on any name you fancy are cheap, but a reputation builds on the name, etc.

Expand full comment

A fair point, except that Scott was originally writing under "Yvain" at LessWrong, and I believe started to use "Scott Alexander" once SlateStarCodex was started. So he accepted the costs then, but failed to call himself Menachem Awesomeman.

Although I'm sure there are probably a bunch of good reasons for the original SA pseudonym which we aren't privy to

Expand full comment

I felt like his pseudonym of "Moldbug" worked pretty well

Expand full comment

And the "Nikole Hannah Jones" character is a brilliant touch.

Expand full comment

Is it charitable to make jokes saying that someone you disagree with is simply an invention of someone else? As if they're a character or parody? This joke seems very "soldier mindset".

Expand full comment

The joke, of course, is not on Nikole Hannah Jones, but rather depends on the obvious social and political and identity differences between Scott and Jones, and is itself simply playing on Presto's joke of saying that Curtis Yarvin is a creation of Scott's. Now I am well and truly out of patience with you, so please find someone else to haunt.

Expand full comment

This comment has finally made it clear to me that you, marxbro1917, are completely uncharitable and only here to be a nuisance. It is quite clearly a harmless and lighthearted (and also funny, imo) joke. I gave you the benefit of the doubt for too long. Thanks for making it so obvious with this one, buddy.

Expand full comment

Tip: You made a good point here, but you overly belaboured it with your followups. You don't have to respond to everyone that replies to you; if they don't raise any points you haven't already addressed, there's no point repeating yourself.

Expand full comment

>bunch of good reasons for the original SA pseudonym which we aren't privy to

I believe he said he didn't want to use "Scott Alexander (Actual-Last-Name)" to protect his private practice. As you may know he had a serious tiff with the NYT over the use of his last name.

Since he could have published SSC under the name Menachem Awesomeman I read this as a strong inclination within Scott to tell the truth. He says his name is "Scott Alexander" which is true. It's not the whole truth, but the whole truth could be hurtful to innocent parties. So he looks for a middle way with some success and some difficulty.

Expand full comment

I think it was actually "Yarvin".

Expand full comment

I wasn't around back then, but the LessWrong wiki has a page for "Yvain" that redirects to "Scott Alexander", so I feel pretty confident that "Yvain" is correct.

https://wiki.lesswrong.com/index.php?title=Yvain&redirect=no

Expand full comment

You may be confusing 'Yvain' with the real last name of Mencius Moldbug, 'Curtis Yarvin'. (I keep misremembering it as "Yavin" -- I hope the man himself would appreciate the irony that I get him confused with a "Star Wars" battle which culminated in the death of an Emperor and the restoration of a democracy.)

Expand full comment

I do respect Scott Alexander, but I might follow Menachem Awesomeman to the ends of the Earth. Food for thought.

:-)

Expand full comment

He could change his name to “Scout”.

Expand full comment

And that concludes the discussion. Good night everybody!

Expand full comment

I agree. I'm skeptical that the transaction costs of a name change are really that high, although I agree they are not zero. Women in the United States routinely change their last name when they marry, and then revert to their "maiden" name upon divorce; some marry more than once or twice. The transaction cost here is really that institutions make you jump through hoops to update documents, but socially and professionally everyone gets it and just adapts, like they would if you changed your pronouns. Ah, but what about famous people? It seems, if anything, easier for them; after all, a famous person has the benefit of people being interested in what they are doing and wanting to keep up with the news about them. Re-branding in the consumer product/business world happens too, sometimes due to company mergers and sometimes to escape bad associations with the prior name. Anyway, just stream-of-consciousness thinking, but in support of the idea that, imho, the transaction costs of changing one's name, while not zero, are not prohibitive: likely just high enough that one wouldn't do it without a compelling reason (e.g. marriage, disassociating from a hated family name, etc.), but not so high that you couldn't do it on a whim.

Expand full comment

One complicating factor is that so much of American life is set up around the assumption that a married couple have the same last name (and that both parents have the same last name as their kids) that bearing the costs of violating this assumption sometimes outweighs the transaction costs (particularly since the law actively lowers the transaction costs for name changes associated with changes in a woman's marital status).

Expand full comment

That's true, but it can get sticky in regards to the career aspect where the social convention rule doesn't apply (e.g. when the name on your driver's license and diploma don't match, or you publish a paper in an academic journal and then change your name, and so on). And then what if you do it twice (because let's face it, second marriages are common). And yet both of those things happen all the time too with seemingly little negative effects.

Expand full comment

Right, but those effects are so small because you're still following a common pattern; everyone instantly understands when you say that was your maiden name, and everything is set up to deal with it.

Changing your name in opposition to cultural norms would be a lot harder than changing it in compliance with them.

Expand full comment

Hmm, I think the effects are rather large when it comes to one's career in the sense that you do have to manage it to ensure that people know. But at the same time, I don't see how the cost is higher if the reason you're doing it is not due to marriage/divorce. The pathway is the same regardless.

Expand full comment

so just use maiden name for work publications and use family name for family stuff, tbh

Expand full comment

Most married academic women I know kept their maiden names. It really does muck up your CV, and it’s so common in the community that it doesn’t surprise anyone when a married couple doesn’t share surnames.

Expand full comment

I for one am offended by the idea that "Scott" is an Anglo-Saxon name. Scott is a Scottish name that pre-dates the arrival of Angles and Saxons in Britain.

Expand full comment

Hardly. It's an English word originally applied to Irish speakers derived from a Latin ethnonym, which became a surname (note the Scotts in Scotland were mostly found in the English-speaking southeast) and then a forename, I presume through the normal process for this (e.g. Stanley, Baker) of naming a son with the surname of another branch of his ancestry.

All of which is to say it is an English name and an Anglo-Saxon word at least. And that I'm somewhat pedantic about names.

Expand full comment

Scout Mindset is, accidentally, a really great general self-help and CBT book that doesn't talk down to you

Expand full comment

Agree. It is a less psycho-jargony, more real world version of "Would you rather be right or would you rather be effective?".

Expand full comment

Haven't read it yet, but much of what Scott talked about made me think "that just sounds like CBT".

Expand full comment

Typo thread: "sometimes this is be fine"

Expand full comment

> there is no such thing as telepathy, p < -10^10.

p < -10^10 is p < -10,000,000,000, a negative number; I think that was supposed to be p < 10^-10

Expand full comment

I thought the negative probability was funny

Expand full comment

Agreed. I already think "what would Julia Galef do."

Expand full comment

I feel compelled to plug my friends' and my fan project, mindsetscouts.com. Everyone likes earning cool merit badges! One of the merit badges even has a link to a calibration spreadsheet template, if that is a thing you've always wanted but never wanted to bother making.
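If you'd rather script it than keep a spreadsheet, here's a rough sketch of the same idea in Python; the predictions below are made up purely for illustration (it's not the template from the badge), but it shows what "calibration" is actually measuring:

```python
from collections import defaultdict

# Each prediction: (stated confidence, whether it came true).
# Hypothetical entries purely for illustration.
predictions = [
    (0.6, True), (0.6, False), (0.7, True), (0.7, True),
    (0.8, True), (0.8, False), (0.9, True), (0.9, True),
]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

# Well-calibrated means: of the things you call "70%", about 70% happen.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Said {confidence:.0%}: {hit_rate:.0%} came true ({len(outcomes)} predictions)")
```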

Expand full comment

This excellent review convinced me to buy the book. One area of life where it is fine to have a pure soldier mindset is being a sports fan. You start rooting for a team when you are young because of where you live, your family, whatever. You never consider changing your allegiance (although if, like me, you're a Mets fan, you sometimes wish you could). If we all have a certain amount of "soldier" tendency in us, then sports fandom is a good and healthy way to exercise and partially exhaust that tendency.

Expand full comment

I take a "scout mindset" to my sports fandom.

Oh, not really. But one of the interesting places where that soldier mindset comes out in fandom is when there's a controversial play; people for one side are more likely to see that the _other_ team, for example, committed pass interference or touched the ball last before it went out of bounds.

So I use situations like that to remind myself to stay in scout mindset (not literally--I've been doing this for years, and now have a framework for it).

Expand full comment

I tend also to be a scout when it comes to calls. But I am 100% soldier when it comes to my partisanship. Long ago, I should have, rationally, switched from being a Mets fan to being a Yankees fan. But I grew up pro-Mets, hating the Yankees, so that's that.

Expand full comment

Ha! Indeed.

But as a Cubs fan, let me assure you that when the breakthrough happens, the payoff in joy is likely 200x better than the ersatz happiness of cheering for a team you only picked to win titles. ;)

(That said, my grandfather was born a Cubs fan, lived 87 years a Cubs fan, and died a Cubs fan without ever seeing them win a Series. There are risks to the approach.)

Expand full comment

Your grandfather's tale reminded me of this cartoon. Even my black and stony heart was moved by it. Thirty years of waiting:

https://www.youtube.com/watch?v=LWJ5D16hX3U

Expand full comment

Thanks for that.

I was able to watch something like that with my local "Little League" 15U team. Our league isn't very competitive and usually gets crushed when it comes time to play outside the league (which happens at the end of each season). But one year we won *one* game before being eliminated. The next year we won district. The kids were happy, but some of the adults who had been working with our little league organization were ecstatic! They had been supporting our teams on the losing end for well over a decade.

Expand full comment

My team just lost a really big game, and then video of the 3 best players sniffing a white powder (after the game) surfaced, and at first I was totally convinced that either the other team provided the white powder, or made the video! But then I thought, nah!

Expand full comment

I try to take a scout mindset with regard to sports fandom when I set my expectations around how good I think the team will be in a season or how they will perform in a game.

Every preseason the talk is always all sunshine and rainbows about how great our team will be this year. Then the season happens and everyone is upset that the preseason expectations are not met.

Luckily it is very easy to find an outside view for your sports teams, because the folks in Las Vegas put a number out there that you can bet on for total wins in the season and odds of winning each game. There are also computer models that estimate those odds. So you can say "lots of people think we will be 6-5 this year; why might they think that?"
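For what it's worth, here's a rough sketch of how you'd turn a Vegas moneyline into that outside-view probability; it's just the standard implied-odds arithmetic, ignoring the bookmaker's margin, and the example lines are invented:

```python
def implied_probability(moneyline: int) -> float:
    """Convert American moneyline odds to the win probability they imply
    (ignoring the bookmaker's margin, which inflates these slightly)."""
    if moneyline < 0:                    # favorite, e.g. -150
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)       # underdog, e.g. +160

# Hypothetical lines for your team this week.
print(f"Outside view: {implied_probability(+160):.0%} chance of winning")  # ~38%
print(f"Outside view: {implied_probability(-150):.0%} chance of winning")  # ~60%
```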

Expand full comment

I am a Nationals fan but I was incensed this summer when they traded away Kyle Schwarber at the trade deadline. Trading veteran Max Scherzer I understood, and saying goodbye to Trea Turner was a necessary part of that deal. Other trades I didn't mind. But Schwarber hit 16 home runs in 18 days in June and pulled the entire team into contention until he injured his hamstring.

I was furious with the team. It seemed unfair to Kyle and more unfair to the fans. Kyle is doing great things for the Red Sox. Having him and Juan Soto both would have won a lot more games for the Nats in the second half.

It seems my loyalty to some abstract concept of fairness is greater than my loyalty to my team.

Expand full comment

Unless you bet money on games

Expand full comment

I'd never bet on my own team. Occasionally I will bet against my team as a form of emotional hedging though. "I really want them to win, but if they don't then at least I get ten bucks"

Expand full comment

I've been tempted by that hedge, but wonder if I'd feel that the money was tainted if I won. Or can you compartmentalize?

Expand full comment

Not being a sports fan, I find the soldier mindset in relation to sports a bit confusing.

If you root for a team, you generally admire and like your team's best players, and work up a fine froth about the best players on the opposing teams. But what if the players are traded? How does the process of changing your personal allegiances work? What is the sequence of thoughts and ideas one goes through in that situation?

Expand full comment

that's why it's a soldier mentality. You follow the uniform. It lacks all rationality and can be incredibly engrossing/entertaining/sometimes enraging, but it's harmless.

Expand full comment

Can I suggest a friendly edit? Instead of being in a froth about the "best" players, people get in a froth about "cheap" or "unsportsmanlike" players on the other team. (When people dislike the other team's best players, it's usually a sort of comradely dislike where it's understood that the only reason one dislikes them is the color of their jersey.)

But when an unsportsmanlike player joins one's team, the cognitive dissonance gets acute but goes away quickly. Usually, I find, after an uncomfortable transition of about a quarter season, fans revel in the new player's "gamesmanship."

What's hilarious is that, when confronted with their past views on this topic, the fan will easily, even gleefully, acknowledge that their views are purely because of their team allegiances. In other words, there's a real self-understanding in sports fandom that just doesn't exist in political fandom. People know they're being ridiculous, and don't care.

Expand full comment

Funny example of that. My aunt was a huge University of South Carolina football fan and for years and years they were beaten (often badly) by the University of Florida and their coach Steve Spurrier who in addition to being a great coach was also very arrogant and quite abrasive. My Aunt hated him.

Fast forward several years later and he became the coach for the University of South Carolina and my aunt loved him. "He's an asshole but he is our asshole."

Expand full comment

"Not being a sports fan, I find the soldier mindset in relation to sports a bit confusing.

If you root for a team, you generally admire and like your team's best players, and work up a fine froth about the best players on the opposing teams. But what if the players are traded? How does the process of changing your personal allegiances work? What is the sequence of thoughts and ideas one goes through in that situation?"

A number of sports fans are well aware that their support for "their" team is highly irrational, but go with it because it is fun.

So ... I am an SF Giants fan and passionately hate the Dodgers. I hope their players do poorly (but do NOT hope their players are injured ... let's keep a sense of perspective). If one of their players gets traded to the SF Giants then he gets promoted to "one of our guys" (with a few exceptions).

I like to think that my disdain for the Dodgers lets me be part of a long historical tradition dating back to the late 1800s. If I had grown up in the LA area I would, of course, have been a Dodgers fan. But I didn't, so SF Giants it is.

And note that there is no need to *admire* your team's best players. Some of them can be jerks. It is not disloyal to acknowledge this. It is also not disloyal to acknowledge greatness in players on "the other team." We just hope they play poorly in the future (unless they get traded to our team, of course). Very few SF Giants fans would claim that Clayton Kershaw was either a bad player or a bad person. We just want him to lose because he is a Dodger. Nothing complicated here.

If one of "our" players gets traded it is fine to continue to wish them well. Just not when they are playing "us." There is the concept of a "Forever Giant" (which likely exists for other teams, too) that signals that even though the player isn't on "our" team we still consider him "one of us."

The important point to realize is that many fans DO KNOW that this is borderline nuts. It is still fun to embrace, however, so many of us do.

Expand full comment

It helps a lot if there's a decent amount of continuity. I'm not sure I could maintain emotional investment in a sport where 'my team' was just a set of jumpers to be filled arbitrarily from year to year or week to week. But when each year's team is last year's team with, say, 20% turnover, it's usually not too hard. Once you see the new guy not only wearing the colours but bonding with the players you already know and like, contributing to a win or sharing in the pain of a hard-fought loss, etc., it's not too hard to see him as part of the in-group and let your natural biases do their thing.

Expand full comment

"It helps a lot if there's a decent amount of continuity. I'm not sure I could maintain emotional investment in a sport where 'my team' was just a set of jumpers to be filled arbitrarily from year to year or week to week."

Agreed.

Watching how the minor league teams (where year-to-year you might have 10% of the team returning) handle this is interesting. And minor league baseball is fun! The teams manage it, and fans cheer for "their" team and "hate" on their rivals (who are also seeing 90% turnover year-on-year) ...

Expand full comment

I guess that's where you'd really need to lean into the sense of community with fellow fans. The team might be a mercenary army, but they're fighting for us!

Expand full comment

I think it's important to consider WHY you're a sports fan. Is your goal to always support the winner? Or are you supporting the team that helps you connect with neighbors/clients/coworkers?

I've been left unsatisfied just supporting the team most likely to win. Just a boring experience since you really don't have anything invested in it. If you're supporting your underdog home team, you've got a cheering section at the bar, you get a high five from strangers for wearing their logo, and if they win... it's really a unique satisfaction. Even if they lose, you've still got all the other good stuff that comes with supporting that team. (I realize that doesn't apply as well in two-team cities.)

A more useful application for soldier/scout mindset in sports, IMHO, is analyzing what happens with your team. Your team get caught cheating? Key player caught doping? Star QB caught molesting 20 women? When the Astros got caught, I was immediately on the "all the other teams are doing it" bandwagon. After reflecting on it, I realized that was in direct conflict with my principles. So, I chose to shift my energy to supporting the local minor league team. Turns out that was way more fun in the end.

Expand full comment

I moved from the SF Bay Area to North Carolina. I have always been a fan of the S.J. Sharks (NHL). Since moving here I decided that I was better served by being a Carolina Hurricane fan since very few Sharks games are televised here. And being a fan of a team I never see provides little or no joy.

This worked out really well since the Hurricanes are actually really good (both in terms of winning and in quality of gameplay). I went to a Sharks vs. Canes game. I rooted for the Canes (with a side of "I won't be too unhappy if S.J. wins").

Expand full comment

I know many fans in NYC who will root for both teams. I cannot, because of endless debates with my friends when I was a kid as to which team was "better."

Expand full comment

I've often wondered what would happen in that situation, but fortunately, no team I've ever rooted for has done anything the least bit tawdry, so I don't have to decide.

Expand full comment

You can hate the sin/sinner but still support the team. You just have to say, "Yes, that was terrible and he has brought shame to my team. We will move on without him."

Expand full comment

"I've been left unsatisfied just supporting the team most likely to win." You better. I have some compassion for the poor people needing a top team to identify with while having no root connection. The small doses of happiness with all the achievements of regular winners one roots for never amount to what happens when Werder Bremen wins the Cup and the Championship in one season. Happening to some people, at least.

Still, professional Sports is a business and sometimes teams more or less vanish, when the economic background in their hometown changes, others develop weirdly and gain a whole new fanbase. I've also noticed developments between sports. Towns that don't have the potential for a top team in the most popular game (like, in these parts, association football aka soccer) sometimes get to have very good hockey, handball or basketball teams and enthusiastic fans to fill their venues. That's a bit tricky when most supporters have played the most popular game themselves as kids but not the game they are supposed to identify with.

Expand full comment

Depends how awful your favorite sports team/star is.

Expand full comment

I think a bronze age mindset reference would really round out that first paragraph.

Expand full comment

Lol, but on the short line between “erudite, provocative right wing intellectual” and “just a Larpy nazi”, that cuts it a bit close to the right, while Moldy’s firmly on the left of it

Expand full comment

Bronze Age Pervert is one of those rare intellectuals who doesn't need to be extraordinarily verbose, or even use sensible grammar, in order to let you know how intellectual he is. Wat mean?

Expand full comment

I mean that he literally is a nazi, and in a sense where moldbug is not at all a nazi, he is still a nazi

probably not worth continuing this convo though given the location!

Expand full comment

so I would not advise referencing him at all in something connected to your irl name, whatever your leanings are

Expand full comment

I've read (well, skimmed) his book and while he's certainly a white supremacist I didn't see any support for any form of Socialism, National or otherwise.

Expand full comment

yeah his book was toned down a lot, he regularly speaks about the inferiority / subversive nature / global power of / need to remove from power and influence of da jooz, although is somewhat careful to hide it from the least inquisitive of eyes

my bigger criticism is he’s just kinda making stuff up, say what you will about moldy but he writes seriously about history and power. brap just says stuff that sounds good if you dont look into it

Expand full comment

oh you meant if he’s a “nazi”. nazi colloquially means Jew hate / white nationalist, just like “communist” means whatever it does in the net. But he also has (critical support style) endorsed the NSDAP many times

Expand full comment

On the political scandals thing, one noticeable trend is that people who support party X have to weigh the cost of having a politician who has done something scandalous against the risk of getting a politician from party Y, which is (for supporters of party X) intrinsically scandalous.

So it's not unreasonable to have a higher standard for your opponents (whose replacement is costless) than for your own party.

The other factor is about who will replace them, which is why I favour political systems that make it easy to replace a politician with another of the same party (and make that a separate process from the electoral process where voters choose between parties). Note that Al Franken was replaced as Senator easily - but his replacement, Tina Smith, is of the same party. Ralph Northam was also in a scandal of comparable magnitude, but remained as Governor of Virginia because his two same-party successors were involved in the same scandal and the third in line was of the other party. You can see the same process with the recent California recall; Gavin Newsom was able to ensure that the only possible replacement was a Republican and was able to run against that Republican. From the perspective of the Democratic majority in California, however bad Newsom was, a Republican would be worse.

The only case I can think of in recent years when a politician has been replaced by one of the opposite party as a result of a scandal is the Alabama Senate election when Roy Moore was defeated by Doug Jones.

Expand full comment

Agreed with this analysis. Interesting case study in what could be seen as an application of this principle: Germany's electoral system, where everyone casts *two* votes: one for their direct representative for their region, and a "second vote" for a party. The direct representatives all make it into the Bundestag. But then, on the second vote, seats are added to the Bundestag to align its composition with the results of the party vote. Those second-vote seats are filled from a list each party prepares in advance of the election.

So if you like a party, but not their candidate in your district; or if you think a candidate from an opposing party is unusually good and moderate, but still dislike their party's principles; you can cast your direct vote for the individual you prefer, then your party vote for a party other than theirs, where only the party vote is really important for the eventual balance of power.

In practice, this still likely won't help you get rid of odious incumbents, at least if they're well-connected. Direct candidates are often high-up on their party's "list"; so even if they lose their local election, they'll make it in on the second vote. So it's not a perfect mechanism for removing Moores, Northams and Cuomos from office. (The latter two are/were executives, but you know what I mean). Nonetheless, I still like the scheme, since it allows for some decoupling of character judgment from political persuasion.
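A toy sketch of that top-up logic, heavily simplified: it ignores the electoral threshold, overhang and leveling rules, state-level lists, and the real apportionment method, and the example numbers are invented. It just shows the basic idea that list seats are added until each party's total roughly matches its second-vote share:

```python
import math

def top_up_seats(direct_seats: dict, second_vote_share: dict) -> dict:
    """Toy mixed-member-proportional allocation: grow the chamber until no
    party holds more direct seats than its proportional entitlement, then
    top each party up from its list. Simplified arithmetic; totals may be
    off by a seat or two due to rounding."""
    size = max(
        math.ceil(direct_seats[p] / second_vote_share[p])
        for p in direct_seats if second_vote_share[p] > 0
    )
    totals = {p: max(direct_seats[p], round(second_vote_share[p] * size))
              for p in direct_seats}
    # List ("top-up") seats per party.
    return {p: totals[p] - direct_seats[p] for p in direct_seats}

# Hypothetical example: Party A sweeps the district races but wins only 40% of party votes.
direct = {"A": 60, "B": 30, "C": 10}
share = {"A": 0.40, "B": 0.35, "C": 0.25}
print(top_up_seats(direct, share))  # A gets no list seats; B and C are topped up
```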

Expand full comment

The Irish system, which creates competition between candidates of a party as well as between parties (the exact details are rather long for a comment, and can easily be found on Wikipedia), has a similar effect, perhaps better.

The Maltese system, which is normally described as being the same (they are both versions of "Single Transferable Vote") has a rule that makes it superior for this purpose, which is a rule prohibiting undernomination (again, I'll omit the details, they're easy to find).

In the US, the nature of the primary system means that STV with a rule against undernomination would be easy to implement, and this would very much sharpen intraparty competition - if you have to have five candidates of your party to be allowed to have any, and you're (even in a very strong area) not going to get more than four elected, then there is always an intraparty contest as well as an interparty contest.

The German system would work better if it was combined with some form of "open list" where voters can vote to influence who gets elected from the list; there are several versions of this, some of which entirely remove the influence of the party in terms of ordering the list, and others require a voter revolt on a huge scale to move a single candidate up a single place.

Expand full comment

For people who already understand electoral systems, what I'm suggesting is STV in multi-member constituencies, with party primaries also conducted by STV, where the number of winners of each primary is the same as that in the general election, and where a shortage of nominees in the party primary would eliminate that party from contention in the general election.
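For the curious, a toy sketch of an STV count (Droop quota, fractional surplus transfers, lowest-candidate elimination; it skips tie-breaking and most real-world rules, and the ballots are invented), just to show how candidates of the same party end up competing against each other as well as against the other party:

```python
from fractions import Fraction

def stv(ballots, seats):
    """Toy single transferable vote count. Not a faithful implementation of
    any country's rules; invented ballots, simplified transfers."""
    weights = [Fraction(1)] * len(ballots)        # each ballot starts with full value
    quota = len(ballots) // (seats + 1) + 1       # Droop quota
    continuing = {c for b in ballots for c in b}  # candidates still in the running
    elected = []

    def first_choice(ballot):
        return next((c for c in ballot if c in continuing), None)

    def tally():
        counts = {c: Fraction(0) for c in continuing}
        for ballot, w in zip(ballots, weights):
            c = first_choice(ballot)
            if c is not None:
                counts[c] += w
        return counts

    while len(elected) < seats and continuing:
        counts = tally()
        reached = [c for c in counts if counts[c] >= quota]
        if reached:
            winner = max(reached, key=lambda c: counts[c])
            ratio = (counts[winner] - quota) / counts[winner]
            for i, ballot in enumerate(ballots):
                if first_choice(ballot) == winner:
                    weights[i] *= ratio           # keep only the surplus value
            elected.append(winner)
            continuing.remove(winner)
        elif len(continuing) + len(elected) <= seats:
            elected.extend(continuing)            # remaining candidates fill the last seats
            continuing.clear()
        else:
            loser = min(counts, key=counts.get)   # nobody reached quota: drop last place
            continuing.remove(loser)
    return elected

# Invented ballots: 100 voters, 3 seats, two candidates from party A, two from party B.
ballots = ([["A1", "A2", "B1"]] * 40 +   # A voters who prefer A1
           [["A2", "A1", "B1"]] * 25 +   # A voters who prefer A2
           [["B1", "B2"]] * 35)          # B voters
print(stv(ballots, seats=3))             # ['A1', 'A2', 'B1']
```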

Expand full comment

The crazy thing about Roy Moore is he was kicked off the Alabama Supreme Court *twice* before he ran for Senate. Both times he was elected to that position. So he definitely represented the will of the people, and surely would have won again if not for the Washington Post reporting.

Expand full comment

I was generally in favor of Franken resigning because I felt we Democrats needed to set a good example. I regret that outlook now, both because it was an unnecessary rush to judgement and because good examples mean nothing to Republicans.

I supported Northam because I felt that wearing blackface in a school skit decades earlier was trivial.

I wanted Chris Cuomo to resign because there was extensive evidence and the offenses were not trivial.

Expand full comment

I think you mean Andrew Cuomo, but it does seem like Chris should be resigning from his job too over his role in the whole thing.

Expand full comment

Correct. I had them switched.

Expand full comment

Frankly, if Al Franken hadn't resigned I still wouldn't take Democrats seriously on sexual harassment. That sacrifice was absolutely worth it, because it set a markedly different standard than the tribal defense of Bill Clinton a couple decades prior.

Generally, the only effective signals across tribal lines are costly ones.

Expand full comment

Your perspective is useful. In which case I am sorry that I supported Franken's resignation because the evidence that he was a sexist asshole was vanishingly small.

Expand full comment

Would you say that the sexual assault allegations against Joe Biden were handled in the Al Franken manner or the Bill Clinton manner?

Expand full comment

I would say they were handled accurately. I read a long account of the primary accuser and in sum she was very unconvincing.

Expand full comment

I think that Biden's allegations were more similar to Kavanaugh than either Clinton or Franken; a lot of noise with little chance of being true.

Expand full comment

Agreed, but the issue Republicans have is that Democrats raked Kavanaugh over the coals on the allegations, while they brushed off Biden's.

I strongly agree that the only signals effective across party lines are costly ones. I'll add that hypocrisy negates any positive signals.

Expand full comment

> So it's not unreasonable to have a higher standard for your opponents

It's not unreasonable to consider other factors (like policy positions and/or overall competence) in addition to scandals and personal qualities when making political choices. But it still seems healthier to try to be objective about the magnitude / reality of various scandals, since there isn't really a conflict between acknowledging a scandal and still voting based on other things you consider more important (at least on a personal level, maybe public advocacy has more soldier vs scout mindset considerations).

Expand full comment

OK, I didn't realise when I wrote that sentence that it could be interpreted that way, but it clearly can.

A higher standard for when you will call for a resignation / sign a recall petition / support an impeachment / vote for the opposite party / whatever other political mechanism for removing people, not a higher standard in terms of what you consider constitutes a scandal.

Expand full comment

That makes sense, I thought it might have been a counter-point against the scout mindset in the OP (which as I understand it is just about trying to be accurate about strengths and weaknesses, not necessarily switching sides). I think that way of dealing with scandals is totally reasonable.

Expand full comment

Kind of depends on your position and your purpose. If your party is in a position of power, or can expect to achieve it in the near future almost certainly, by a turn of the wheel if nothing else, then this makes sense.

But if you're out of power, and you hope to replace the guys at the top, then there's a good chance you're better off (politically) holding your own guys to higher standards, because your ambition is to be persuasive to a lot more people than now owe allegiance to your party. The best way to acquire much additional support beyond your core is to present candidates who look (and may even be) a cut above what the party in power is offering.

Expand full comment

tl;dr version: "power corrupts."

Expand full comment

Sure, but it's still useful to burst the bubble of pretence that you're attacking the politician based on some intrinsically unacceptable failing, rather than partisan realpolitik.

Expand full comment

One under-appreciated angle on the "scandals by the opposing party are inherently worse" bias is that it's ostensibly symmetric.

Imagine we wanted a panel of judges who would dole out punishment whenever a politician had a scandal. A handful of people are permanently appointed, and every scandal they get final say on the punishment.

It would be great to fill the panel with nonpartisans, but making it half democrats and half republicans achieves the same purpose. Think of it like a bargain, "I know I'm bad at being angry at Democrats, and I know you're bad at being angry at Republicans. So whenever it's a Democrat in the scandal, I'll argue for leniency because that's my relative advantage. If it's a Republican, I'll argue for harsh punishment, and you do the opposite."

This setup sounds wrong, but it's 1) maybe more rational than one might naively assume and 2) much easier to set up in practice.

I think the implications of this need greater consideration.

Expand full comment

Very interesting and a compelling endorsement. This review is a good prompt to think about my own relationship to the whole rationalist project, and I need to read this book. I am much more sympathetic to rationalism than I was say 5 years ago and think on balance it's a force for good. I also think it's a giant motte and bailey, which is frequently discussed in grand and outsized terms regarding its goals, but when challenged its members tend to say things like "oh nobody's trying to achieve real rationality, we know that's impossible, we're just trying to get people to be a bit more rational." But I think what I will do is read and review this book and use it as a lens to think through the movement and its evolution.

Expand full comment

How is it a "force for good" and how did you measure that? Most "Rationalism" seems like the exact same kind of naive liberalism that Marxists like yourself should be debunking.

Expand full comment

If rationalism was a political project, I might agree. But more to the point, I think you misunderstand what Marxism is. Marxism is a form of rationalism. It is the science of history; its central belief is the emancipatory potential of reason. Marxism is also not anti-Enlightenment; it's the culmination of Enlightenment thinking, its logical endpoint. It's called dialectical materialism for a reason.

I realize that this does not comport with the popular (mis)understanding of Marxism, but that's not my problem.

Expand full comment

"Rationalism" as it currently exists is definitely a political project and almost none of them are Marxists. Why don't you try to correct Scott Alexander's misunderstandings of Marx, for example?

Expand full comment

In what way? Through the comments here? Sure: hey Scott, Marxism is a theory of history that seeks to arrive at a scientific understanding of the successive progress of political and economic systems by examining their root power structures through materialist principles. It also articulates a theory of how we could escape the cycle of exploitation through the establishment of a communal economic system based on shared ownership of the productive apparatus of society. (Which does not and has never entailed the abolishment of private property.) The philosophical and analytical lens is called Marxism; the political project of Marxism is called communism. You can read about it on the internet.

Here's hoping.

Expand full comment

Now, do you expect Scott to actually read through that and debate it in any way? Or do you expect him to ignore you and continue being wrong about Marxism?

And, if you expect him to ignore you, then how is "Rationalism" a "force for good"?

Expand full comment

I don't expect anything of him; it would be rude to come into his space and expect things of him. I think Scott writes a lot of interesting things, much of which I agree with, some of which I don't. I am not shy about telling him when it's the latter. If I want to engage deeply with one of his pieces, including critically, I'll do so in the form of my own newsletter. Do I expect such a thing will change his mind? Of course not. But my job is not to try and change his mind. My job is to tell the truth. It's fine if you don't see the purpose of my doing so here, but then I didn't ask for your opinion or your permission, did I?

Expand full comment

"the establishment of a communal economic system based on shared ownership of the productive apparatus of society."

With respect to natural resources, I'm a full on Georgist, supportive of sharing ownership and resulting revenue streams (acknowledging challenges with respect to communal governance as well as challenges with respect to which individuals should have a right to shares in which natural resource revenue streams).

But with respect to "productive apparatus" that is based on innovation, do you envision 100% of innovation being financed by the state? Or some other communal apparatus? How does society decide which ideas and individuals to invest in? What is the best sketch of a communal approach to an innovative society without private capital markets? Which Marxist texts provide sound insight on this issue?

Expand full comment

Hello there Michael. Communism doesn't wish to have a 'state' with which to enforce things like this. Creation is supposed to be praxeological. I recommend reading the Grundrisse by Marx for free on marxists.org to get a base for the economic side of things, otherwise you're just going to hear the same one liners you've heard before. My personal opinion is that its much easier to innovate when you're cooperating rather than competing. But again, one liners.

Expand full comment

> The philosophical and analytical lens is called Marxism

> You can read about it on the internet.

Hey, I know that was probably tongue-in-cheek, but you seem reasonable and I've been on the look-out for works that will challenge my (current) views from the left, and I was looking for some specific reading pointers. I'm asking for your help here because I'm willing to read a book or two but not like 20, so someone more familiar with Marxist literature might be able to point me in the right direction for what I'm looking for.

In particular, I am much, much less interested in critiques of the current system(s) (which I often already agree with), but instead authors presenting fleshed-out alternatives. Since critiques are more common I'm not sure where to start looking for the latter.

By fleshed-out I mean addressing the fundamental political questions, like how do you prevent people with power (like decision-making discretion) from using that power to entrench themselves, and how do you hold decision-makers accountable for results when information is imperfect and, in political processes, the cause-effect relationships between things are often contentious.

This isn't intended to be criticism, I'm hoping there are answers out there!

Expand full comment

You might be interested in something like "Towards A New Socialism" by Cockshott and Cottrell. I don't agree with some of it, but I think it might be of interest to 'Rationalist' types (Cockshott puts a lot of emphasis on computing, for example)

here's a direct link to the pdf:

http://ricardo.ecn.wfu.edu/~cottrell/socialism_book/new_socialism.pdf

Expand full comment

I think this is a pretty fair summary of (one of the many kinds of) Marxism. Yes, I disagree with it, but at least there's a coherent argument there to disagree with. However, there are many other kinds of Marxism out there -- just as there are many kinds of Feminism or Christianity or whatnot -- and most of them are less coherent. At this point, I don't think that anyone could credibly claim that Marxism X is the One True Marxism, and all the rest are impostors. I think you guys need to develop some sort of coherent nomenclature to identify which Marxism you believe in, just as the Christians have done with the myriad offshoots of their faith, or just as the Feminists are doing now with the various flavors of their movement.

Expand full comment

I identify most strongly with Marxism-Leninism, although I have an open mind and often note that Maoists make very good critiques.

Expand full comment

Freddie deBoer has, through one comment, explained more about Marxism than marxbro has in, what, several hundred?

Expand full comment

Agreed, although admittedly marxbro's brand of Marxism is different from Freddie deBoer's.

Expand full comment

Every comment thread with your interlocutor: http://wondermark.com/1k62/

This was a surprisingly good exchange by the usual standards.

Expand full comment

He's the very definition of a sea lion. And, like trolls, the way to handle sea lions is not to throw them fish. Even joke fish. Just ignore him.

Expand full comment

I usually find most accusations of "Sealioning" to be "I want to be able to say something wrong without being called out on it."

But holy buckets, THIS guy.

Expand full comment

Do you disagree with something I said?

Expand full comment

I don't know. The more I look into this, the more I realise rationalism as a community is pretty 'reactionary', and by that I mean it exists solely in reaction to something it perceives as bad. Any ideology that does this is treading on thin ice. I read some of Julia Galef's thoughts and I just can't help but get the feeling she wrote her entire book as a reaction to some of the things she's seen on twitter. Hell, Scott couldn't help but use twitter as an analogy in this blog post, I wonder why. People can't seem to escape the bad interactions they have in everyday politics, to the point they form entire personalities and ideologies in rejection of these behaviours, but none of it seems to stand for anything truly new. Even the effective altruism thing is just a sort of smartass repackaging of the concept of charity, not realising that charity itself is not a very revolutionary or helpful method of changing the world.

Of course, a rationalist would never accept a full criticism of the 'movement' in a single paragraph. This ties into the motte-and-bailey style of defence against criticism you see pervading the community. Whatever I've just said above is not how the rationalist community actually is. I guess. Sure.

I'm just here because Scott writes well, and about interesting things. I don't care for the rationalist stuff. Luckily, he doesn't really describe his blog as a rationalist blog.

Expand full comment

> The more I look into this, the more I realise rationalism as a community is pretty 'reactionary', and by that I mean it exists solely in reaction to something it perceives as bad. Any ideology that does this is treading on thin ice.

Why's that? Off the top of my head, the slavery abolition movement existed in reaction to slavery (the abolitionists thought slavery was bad and reacted accordingly). Another example could be charity: if you think people dying for want of a vaccine is bad and you start a movement to make vaccinations available, then that's reactionary by your definition. I agree that the rationalism community views irrationality as bad, but what's so bad about that?

Expand full comment

It's not that it's wrong to do, it's just that this has caused them to simply reuse the tools they've known since childhood, instead of questioning them, because the actual enemy is not necessarily irrationality but irrational people on twitter/their personal life that they don't like. It's why I think we keep seeing all these articles about how rationalists don't really accomplish much, if you're comfortable with a general claim like that.

I think Galef thinks she is questioning her own mind's tools, as do many rationalists, but they're not actually doing a disciplined critique of themselves, because that's hard.

Expand full comment

Pretty sure the most famous Rationalist discussions on Less Wrong and Overcoming Bias and the community formed from them predated Twitter -- or at least the mass popularity of Twitter -- by a few years.

Expand full comment

What would you say is a good critique of one's own tools?

Expand full comment

"The more I look into this, the more I realise rationalism as a community is pretty 'reactionary', and by that I mean it exists solely in reaction to something it perceives as bad."

That isn't what "reactionary" means. Etymonline explains, '1831, "of or pertaining to political reaction, tending to revert from a more to a less advanced policy," on model of French réactionnaire (19c.), from réaction (see reaction). In Marxist use by 1858 as "tending toward reversing existing tendencies," opposed to revolutionary and used opprobriously in reference to opponents of communism.'

That is to say, a reactionary is someone who is opposing progress and specifically wants to undo progress that has been achieved already rather than just prevent more progress. Organizing a movement against something you perceive as bad, such as capitalism (communism), sin (Christianity), suffering (Buddhism), war (pacifism), racism (anti-racism), monarchy (republicanism), being colonized by an empire (nationalism), fascism (antifa), smallpox (the smallpox eradication campaign), polio (the March of Dimes), slavery (abolitionism), the gold standard (bimetallists), the Glorious Revolution (the Jacobites), COVID-19 (the worldwide pandemic-stopping effort), etc., self-evidently doesn't make it a reactionary movement unless the thing you perceive as bad is a sort of progress.

My laundry list above of ideologies defined in opposition to some kind of perceived evil should make clear the absurdity of your statement, "Any ideology that does this [exists solely in reaction to something it perceives as bad] is treading on thin ice."

Invariably the reactionaries themselves don't consider the thing they are opposing to be progress (actual reactionaries stereotypically deny the possibility of progress, which is not very far from the viewpoint you are expressing) so "reactionary" is a derogatory exonym; but I think it would be very difficult to find someone who honestly considers human irrationality to be "progress".

Expand full comment

I specifically said "and by that I mean..." to show I was using my own definition of reactionary. That being said, your definition is a scarily accurate way of describing rationalists as well, who hold onto ideals that extend beyond being just a community of people against irrationality.

Expand full comment

"motte and bailey" castle/fallacy.

I learned at least one thing today.

Expand full comment

AFAIK the entire Rationalist movement started as a motte-and-bailey for the AI Risk movement; the basic premise behind Less Wrong was to "raise the sanity waterline", so that all the benighted non-Rationalists would wake up and realize the dangers of AI. It is debatable whether the movement is still structured along those lines, however.

Expand full comment

I reckon there's some interesting work to be done in relating rationalist ideas about cognitive bias to the question of how economic classes form coherent ideological blocs.

I'm often a little confused by rationalist writing about bias, because it seems to overlook the social processes by which most biases are actually produced and enforced. But I do think it's interesting how they try to study the actual mechanisms of the mind which creates them.

We can say that, for instance, the bourgeoisie is drawn to believe that liberalism is good because it benefits the material interests of the bourgeoisie, but this opens up the question what exactly happens in the mind of any particular bourgeois. We could then start thinking about how to relate the broader social processes that Marx describes to tendencies within individual psychology. For instance, if we know there's something in the human brain that makes people naturally more likely to believe things that materially benefit them, it would explain a lot about why liberal hegemony shakes out the way it does.

One of the things I like about your work is that it's a form of Marxism that takes seriously the idea that the human brain is a material object with material limits, something that a lot of the modern left is uncomfortable with. Bringing Marxism into contact with rationalism is a good opportunity to cross some of these streams and hopefully produce some new ideas.

(I don't know if I'm expressing this very clearly, it's something I've been thinking about for a while but haven't had much opportunity to write down. Hopefully you get the gist of it.)

Expand full comment

"They founded a group called the Center For Applied Rationality (aka “CFAR”, yes, it’s a pun) to try to figure out how to actually make people more rational in the real world."

As if Scientology wasn't enough...

Expand full comment

What would Scott’s review be if he wasn’t personal friends with Luke and Julia? (Probably similar but longer?)

Expand full comment

Nobody's complained about p < -10^10 yet, which, depending on where you put your parentheses, is either impossible or certain :^)

Expand full comment

I came to this comment section to point that out. Otherwise, awesome review of an awesome book! :-)

Expand full comment

Definitely an awesome review. Luckily my library has 14 (!) copies, so I'm going to check the book out right away.

Expand full comment

I was very confused by that myself. p is a probability, which means it should have a value between 0 and 1, inclusive. -10^10, no matter where you put the parentheses, is not between 0 and 1. Right? Did Scott mean to write 10^-10?

Expand full comment

> -10^10, no matter where you put the parentheses, is not between 0 and 1. Right?

Right. Ridiculously large in absolute value in either case.

> Did Scott mean to write 10^-10?

That's my guess. The post he refers to has a few "p < X", for X = 1.2 * -10^10 (likely the same typo); X = 0.002 (refers to something else); and X = 1.2 * 10^-10 (a reasonable (in the sense of "in the correct domain") p value).

Expand full comment

I wanted to complain, but you beat me to it :-(

Expand full comment

I think I might buy the book, then.

I feel like there's a kind of deeper level of reasoning that often goes into people being unwilling to change their mind, or unwilling to adopt a 'scout mindset'. In the real world, military scouts tend to get killed a lot. They go into enemy territory surrounded by soldiers where they're at a disadvantage. Soldiers at least get to fight in groups. The epistemic equivalent of a scout being "killed" would, I think, be being convinced or pressured to change your mind based on non-rationalist tactics. If this happens a few times then your Bayesian prior on "I am going to be misled/pressured/BSd into changing my mind" starts going up and it stops making sense to be a scout. It starts looking smarter to be a soldier.

In the past I've changed my mind about lots of things - I can think of a few examples from both politics and my job right now. But I sort of feel like this has happened to me with everything about COVID. In the beginning I adopted the default position of "the scientists have got this" and believed all their predictions. Then I read an article that gave me lots of new information about the history of epidemiology and the unreliability of Ferguson's modelling, and that caused me to go off and do lots of research of my own that backed it up, so I changed and adopted a position of "this is all wrong and my god scientists don't seem to care which is scandalous". But I tried to keep a scout mindset (not calling it that of course) and would still read articles and talk to people who disagreed, in fact, I ended up debating a scientist I was vaguely acquainted with in the street for about 20 minutes. We met, said hello, not seen you for a while, got talking and ended up in a massive debate about PCR testing, modelling and other things. He was very much still where I started so it was a vigorous debate.

The problem is, a very frequent experience was reading or hearing something that was causing me to update my beliefs somewhat to be closer to the "mainstream" (i.e. media/government) narrative. But then a little demon on my shoulder would whisper, "shouldn't you double check that? after all, these guys were misleading you before" and then I'd go check and sure enough, there'd be some massive problem with whatever I'd just read. In many cases it was just flatly deceptive.

After this happens a bunch of times your prior on "these people who disagree with me are probably bullshitting me" goes high enough that it's really hard to maintain a scout mindset. After all, clever bullshitters do it because sometimes they succeed. If I find myself becoming less certain and updating my beliefs in the direction of "I was wrong", how do I know that this is a correct decision, given all the previous times when it wouldn't have been? This feels like a systematic problem with attempts to do scout-y Bayesian reasoning in a world where people are actively trying to recruit you to their side via foul means as well as fair. I suspect this explains a lot of politics, although it doesn't mean the perception of being misled is true, of course.
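
A minimal Python sketch of that updating dynamic, with made-up numbers (the 0.6 and 0.05 lie rates and the update_trust helper are purely illustrative assumptions, not anything from the book or the post):

def update_trust(p_honest, caught_lying,
                 p_lie_if_dishonest=0.6, p_lie_if_honest=0.05):
    # Bayes' rule on "this source argues in good faith" after checking one claim.
    like_honest = p_lie_if_honest if caught_lying else 1 - p_lie_if_honest
    like_dishonest = p_lie_if_dishonest if caught_lying else 1 - p_lie_if_dishonest
    numer = like_honest * p_honest
    return numer / (numer + like_dishonest * (1 - p_honest))

trust = 0.8  # start out fairly charitable
for deceptive in [True, True, False, True]:
    trust = update_trust(trust, deceptive)
    print(round(trust, 3))  # roughly 0.25, 0.027, 0.062, 0.005

Once trust collapses like that, even an apparently sound argument from the same source rationally moves you very little, which is exactly the point where it stops making sense to be a scout.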

Having written all that, I have no solutions. The scout mindset would work best in an environment where people are genuinely playing fair. If they aren't, then maybe it's one of those times the book alludes to when the soldier mindset actually works best.

Expand full comment

I had a recent personal experience with being too much of a Scout. In dealing with a younger person, I was accepting things told to me as honest-unless-proven-otherwise, which is how I engage with just about everyone. Then this person very clearly lied to me, and I slightly updated, and then they lied again, and I slightly updated. I was still treating things said as true, but feeling skeptical. Eventually the skepticism became too large to maintain a Scout mindset, because I realized there were more lies than truth. Trying to be a Scout took a lot of additional time, and created numerous situations where the truth of the matter was being blocked by trying to review whether the specific claims were or were not accurate. By switching to a Soldier mentality, I was able to defeat false claims more easily and not update in a false direction.

You can't be a Scout when the other side is constantly defecting. You can only be a Scout when there can be some assurance that your willingness to update isn't going to be abused. Otherwise you are signaling that the opponent has an opportunity to Win using Soldier tactics, which will encourage them to be more of a Soldier. Being a Scout in that situation means you will lose or be forced to disengage.

Expand full comment

I think you’re equating being a scout with believing people at their word?

I think the problem is believing some people and not others due to confirmation bias (ie my in-group said it, so I won’t double-check). Though, double checking has a cost; it takes time and signals you don’t trust the other 100% (or you might embarrass them if they’re wrong!).

Maybe you could double check based on 1) how important the original fact is for decision making and 2) how likely you think that fact is.

Say little Sue comes to you, saying Bob hit her. Say it’s important whether it’s true (I’m a parent or a teacher), and that it’s unlikely because Bob is usually a sweetheart. Then I’ll go check. Even if Bob isn’t a sweetheart, it’s important to go check so Bob isn’t punished unfairly.

With COVID, I tend to double check everything regardless of the source (news or Lesswrong) because it’s important to get it right!

Expand full comment

Yeah this is the nub of it.

If you phrase it as "in-group bias" then it sounds bad. If you phrase it as "not believing people who turned out to be repeatedly wrong in the past" then it sounds rational. So which is it?

Expand full comment

Isn’t that the opposite of what he’s saying? “It’s in my ingroup, so it’s probably true” rather than simply checking it?

Whereas if you didn’t take them at their word from the start you would be fine.

Trust-building, either in people or institutions, is a really interesting aspect here.

Expand full comment

I suppose whether there's a difference between "my in-group is probably right" and "the out-group is probably wrong" depends on whether the question has a binary answer?

Expand full comment

I'm thinking about when is double-checking someone's word good to do? I can either learn the facts myself or take them at their word.

You could take a math theorem at its word and use it to prove other things, or you could prove it yourself, come up with examples and counterexamples, building your own understanding of why it's right (or wrong!). How important is this understanding as compared to the cost of gaining it?

This is extremely situational, but the problem comes up when you don't even consider that trade-off.

Expand full comment

It's hard to find out who was repeatedly wrong in the past without you yourself having some sort of local expertise. This can range from the obvious (e.g. little Sue lied about eating the chips because I SAW her do it) to the less obvious (a news article said "Study says X", but I read the study and it didn't).

People learn to distrust news articles on studies after looking up the studies themselves (or being an expert beforehand).

But I did want to make the point that it's not always important to double-check which can waste time, and can come across as annoying if done socially wrong.

Expand full comment

Being a Scout requires being open to the possibility that they are right (and also that you are wrong). If someone is willing to defect within the conversation - say, by claiming a study proves X when it doesn't - then there is added burden in checking what they say. If they do that a lot, or at least enough that the burden of checking all of what they say becomes untenable, then moving to a Soldier mentality is necessary in order to avoid updating towards false conclusions or simply disengaging.

Incidentally, I think that's what happened in the early days of the internet. Pre-internet discussions were more often with family and friends, or at least coworkers that you had to deal with after the conversation. Based on that, there was a lot of built-in goodwill, and I think people were more likely to act like Scouts and potentially get convinced by discussions. On the internet, people could defect over and over again in conversations with limited repercussions. Scott has written several times about the massive argue-fests about religion, feminism, and other topics that used to be common and are now far less common. Atheism versus religion is now rare to see argued online. I think as people engaged more and more with these conversations they found more and more people defecting, using shoddy sources, logic, or whatever. They therefore had to either become Soldiers for their point of view, or drop out of the conversation. Eventually topics become Soldier versus Soldier and are non-productive, and everyone complains about confirmation bias.

Expand full comment

Incidentally, this is why I posted on SSC and now ACX. A lot more Scouts in the population and a specific desire to speak with other Scouts. The conversations can be productive instead of an argue-fest of attrition about who can provide the most convincing sources and are willing to keep coming back to the discussion with more data (which the other side will not really believe, but will appropriately respond to with their own data that you will not believe).

Expand full comment

Scouts can be wrong about what a study proves and soldiers can be right. Overall, I double check when it's important regardless of the Scout-ness or trustworthiness of the other person.

But I do agree that more productive conversations can happen between scout and scout. Scouts are okay with me double-checking sources and they're willing to explain more and add caveats of their own.

Expand full comment

I do not believe the dichotomy, but really what you want to happen is to know exactly why they’re wrong, say why it is whenever they bring it up so as to convince any observers, and hope that after that happens a few hundred times they start to feel a bit nauseous.

Expand full comment

So both “unconditionally believe” and “unconditionally reject” are not good.

Expand full comment

I agree, especially in a discussion forum such as this one. About 15 years ago I recall a very heated online discussion where clearly no one was going to back down and nobody was going to "win" the debate. Someone made the very astute observation that they were not arguing to win against their opponents, but to present the best evidence for non-participants who might be reading to make up their own mind.

Expand full comment

The idea that I'm not writing to convince my dead-wrong comment-conversation-partner but rather for the benefit of some unnamed third parties that might see this wrong argument stand unchallenged is definitely a major factor in my own tendency to do this: https://xkcd.com/386/

It's hard for me to judge when this thought is right, and when it's just a habit.

Expand full comment

I'm sure I'm not the first one to draw the connection, but "soldier vs scout" = "conflict vs mistake" to a first approximation, right? Because this was my reaction to that article, too. Performing for an audience is, I think, the single most common reason "conflict theory" occurs.

Conflict theory isn't some unique feature of Marxism; the Marxists I know do plenty of mistake theory when they're not in the public eye, so I'm disinclined to think it's a feature of the viewpoint. Rather, if you feel like someone's not talking to you charitably or not talking to you with proper "mistake theory" in mind, it's probably because they're not actually talking *to you in particular* at all.

Expand full comment

Marxism is a "conflict theory" in that it aims to explain why there is conflict in the world. But we're well aware that many people have mistaken beliefs.

In Marxist analysis, capitalists are both mistaken and they conflict with the working class.

The dichotomy of "mistake" and "conflict" doesn't really fit too well when discussing Marxism in my opinion.

Expand full comment

Faction X has lied about a whole host of things in the past, and in general faction Y is far more truthful. Nevertheless, faction Y also frequently lies, and faction X frequently tells the truth. There's also lots of cases where both factions are lying. Even though I want faction Y to win, I still feel like I'm better off digging into the details of specific issues and incidents. I'm especially worried that if faction Y gets in the habit of believing its own bullshit without getting called out, it might degrade until it's little better than faction X.

Expand full comment

Also, the perception that your opponents are mendacious liars is an easy trap to fall into, one which makes it impossible to update actually false beliefs.

Expand full comment

I think the problem with the military metaphor here is that, while scouts certainly want to seek out and bring back true information to their side, the difficulty is: what if, while scouting, the scout comes to the conclusion that the 'other side' is in the right?

If you decide to go over to the other side, then you are no longer a scout, you are a deserter - or a defector. If you go back to your side and stay scouting for them, then you have picked your side and are in the same quandary as a soldier who is fighting for 'our side'.

Armies do indeed need scouts. But what does a scout do, if they are convinced "I shouldn't be doing this"?

Expand full comment

Ah, the Little Big Man Conundrum!

Expand full comment

> what if, while scouting, the scout comes to the conclusion that the 'other side' is in the right?

An Open Letter To Open Minded Progr - er - Soldiers

Expand full comment

I don't really understand how this relates to the mindset analogies. A scout who "converts" is still a scout; perhaps even moreso.

Expand full comment

The metaphor in that case works better for "explorer" than "scout". A scout for the Brown Army who goes out exploring the terrain and reporting back on troop movements of the Green Army, and who comes back saying "Guys, we should all desert and join the Green Army", is no longer a scout for the Brown Army. He's no longer a scout at all.

An explorer who isn't working for anyone but himself can change his mind about "is it a good idea to try and settle this empty land?" and not be deemed false to his patrons or his principles. An explorer working for patrons who want to know "is it a good idea to try and settle here or not?" can honestly report back 'yes' or 'no' as the case may be. An explorer working under the patronage of the Diamond Court reporting back "yes, this land should be settled, but we should let the Ruby Dynasty do it" is false to his patrons, whatever about his principles (is he getting paid or otherwise influenced by the Rubies to report this?)

Expand full comment

I think the metaphor is supposed to be more confined, in the sense that the question the scout and soldier both need to decide is something like: "Is General Fnord's plan to rout the enemy by sneaking up on him through this obscure and little-known pass going to work?"

The soldier being told to move out for the march to the pass might decide based on his fervent wish to believe that General Fnord is a brilliant leader who never makes tactical mistakes, because his (the soldier's) life now depends on that. The scout, on the other hand, is presumably charged with deciding whether Fnord's plan is actually sound before it's executed, e.g. whether the pass really is unknown to the enemy. He doesn't have the same emotional weight on one side of the decision, because his life doesn't depend on Fnord being right -- if the General is wrong, he can report that fact and HQ will change the battle plan.

If your criticism is centered around the fact that we rarely have the *choice* about whether we are a soldier or a scout, meaning those positions are rather forced on us by circumstances beyond our control, I would agree with that. The areas of our lives in which we have the luxury of choice, of deciding all on our own whether to be a soldier or scout, are pretty dang limited.

Expand full comment

If the question is "Can the Brown Army general's plan work?" then that's what the scout is out there to do. But the scout is working for the Brown Army general, just as much as the soldier is.

What Ms. Galef appears to be recommending is, if the scout sees the Green Army is in a much better position, not alone should the scout bring that news back to General Fnord, he should also try convincing General Fnord to go over to the Green Army side.

What I'm saying is, that's not how armies work. The Soldier versus Scout metaphor is cute in a surface manner, it's certainly one that lends itself easily to slick marketing of the "Chicken Soup for the Soul" sort. But it's not really helpful if we push past the surface.

The mindset she wishes us to adopt is "if the Green Army is in the right, go over to the Green Army". That's not a scout, that's an independent enquirer. Henry Stanley going on an expedition to find David Livingstone couldn't stop half-way through and decide to go off and look for the source of the Nile instead, because he was being funded by the New York Herald to give them a story.

To be true scouts, we have to detach ourselves from any armies.

Expand full comment

Well, you can't push past the surface of *any* metaphor because...they're metaphors. Not the real thing, by definition a skin suit.

I think I understand what you're saying, but I think it's along a different axis than what the author intends (although I haven't read the book, so I'm just guessing based on the excerpts). You're saying: the big question here is, what do I do with some uncertainty in my existential convictions? *Should* I change my allegiance, and how much evidence is required for that, and what are the pros and cons, on a practical level, of the level of evidence I require for that, and other related questions. And a scout/soldier metaphor for that issue is dumb, because a traitor is a traitor, and pondering going AWOL has zip to do with whether you're a dumb grunt or a foxy scout. Makes sense.

But my impression is that she's addressing a much more humble and previous stage, which is "How do I make sure I'm getting the best and most reliable information?" Information which, yes, may ultimately end up challenging your allegiances, but that's a subsequent issue -- the prior issue is making sure you aren't making decisions (about allegiance among other things) based on canalized and distorted information. How do you maximize the quality of your data, so to speak -- leaving aside, as a subsequent problem, what you *do* with your high-quality data now that you have it?

I realize the questions are not cleanly separable at some level. For example, the hoary thought experiment of whether you'd like to be told the precise length of your life. In principle, it's just information, and one can freely choose to make use of it or not, and how, and so in principle it ought to always be advantageous to possess. But it's not, and we know that. There is not only the principle of rational ignorance (information costs more to gather than it's worth) but also the principle of rational delusion, so to speak -- there are things it really is better not to know, or not to question, in order to function well.

I'm also not in disagreement if you're just saying you don't like the smug tone, the assurance that if you buy this workbook and dutifully fill out Exercises in Better Thinking #1-33 hey presto all your existential problems will vanish like a soap bubble because your mind is going to start working at 200% efficiency, like McCoy operating on Spock after using The Teacher.

Expand full comment

The military metaphor doesn't seem to carry so far here. Right and wrong in argument translates to tactical advantages and disadvantages. A scout recognising no chance for victory should truthfully report this to those who then decide about alternatives like retreat, surrender and sacrifice. Back in real life one may be more akin to the general, with scouts and soldiers under command. When the scouts bring really bad news, consider surrender, not defecting.

Expand full comment

I’d suggest you give an example of the mainstream being wrong. In general, Vox + NYT != CDCFDA != mainstream science != smart scientists Scott makes friends with, and at any link in the chain one side may be right while another isn’t.

Expand full comment

Wouldn't that just turn into a debate about whether the "mainstream" was indeed right or wrong?

Expand full comment

No. It’d maybe become a debate about how and why the mainstream was or was not wrong in that instance, which would help us understand the broader issue.

Expand full comment

Well, perhaps take the "lab leak theory = conspiracy theory" one. I actually didn't get deep into that at any point but it seems like the best example, because now even the people who originally started that meme are backpedalling pretty hard, and especially because I think all four groups you named supported Daszak et al when they were busy exiling those people to the outermost fringes of society.

Note: for the mainstream to have been wrong here doesn't require there to have been a lab leak. It only requires that the theory wasn't a lunatic fringe idea with no credibility, as they depicted it. And it's clearly not such an idea, as there's lots of evidence for it and even people who are very clearly biased against it - like the Lancet - are now being forced to admit it has merit, even if they aren't quite at the point of admitting it's what really happened.

Expand full comment

Yeah they were wrong there, it’s not clear either way afaict. And yes, many of them have fully pulled back from that it seems. What I don’t get is why that requires anything beyond active distrust - when the NYT reports that congressmen poop has a new bill, you’ll still believe it lol, lots of people have “”””bias”””” (not a useful concept IMO) and you still pay attention

Expand full comment

I think the implication is that you should distrust the NYT when it is saying something its readers and editors want people to believe, trust it when saying something they don't want people to believe. "Distrust" doesn't mean "be certain it is false," only "put little confidence in it being true."

Similarly for Fauci, or Biden, or Trump, or ...

Expand full comment

Not necessarily - because each of the four groups listed here has sometimes been right while the others were wrong. I think that when they've been wrong, the CDCFDA has stayed wrong longest while the others have either flitted around quickly enough to never be wrong long, or have been willing to change their minds based on evidence (depending on how charitable we want to be about each of them). But I think a lot of this is also a function of the different roles they are trying to play.

Now I'm thinking that it might be a fun exercise to think, for each pair of these entities, of a time and a fact where one was right and the other wrong. I claim I should be able to find 12 such pairs but I haven't gone through it yet.

Expand full comment

rationalists > all others: early Covid, nutrition science, replication crisis, like 200 separate Scott articles

CDCFDA > others: presumably there were some awesome-looking drugs that got rejected for quirky reasons that the FDA ended up right on. Don’t have any off the top of my head, but Derek Lowe’s blog probably has some. US health orgs also sponsor and run global anti-disease programs that go very well. These aren’t great but idk what better fits

mainstream science > NYT: every stupid pop science article

mainstream science > FDA: aduhelm, dozens of stupid fda decisions

mainstream > rationalists: thinking the whole bayes thing and Yud’s more extreme woo were a bit overdone. Simulation hypothesis? MIRI early alignment stuff?

NYT: struggling here tbh.

NYT > cdcfda: by now, they’ve come around on a lot of the corona stuff before the government. Also lots of good reporting on their failings in the past

NYT > mainstream science, rationalist: uh idk lol

Expand full comment

Is simulationism a rationalist thing?

Expand full comment

I think it’s at least rationalist adjacent. A quick google suggests a number of LW users took it seriously and some agreed with it. Wikipedia also suggests Elon musk agrees with it, of which I’m somewhat skeptical

Expand full comment

My memory is that, early on, the mainstream was pretty confident that a vaccine couldn't be produced in less than a few years, Trump was mocked for saying it could be done by election day, and he was right, although the fact was concealed until too late to affect the election.

Going back fifty years, the mainstream view was that population growth was an imminent problem, and if not prevented would make poor countries much poorer. Ehrlich's prediction of unstoppable mass famines in the 1970's was at the high end of those predictions, but he was taken seriously. What actually happened since was the precise opposite — per capita calories going up in poor countries, extreme poverty sharply down.

Expand full comment

And possibly China getting into real trouble by trying to restrict its population.

Expand full comment

"My memory is that, early on, the mainstream was pretty confident that a vaccine couldn't be produced in less than a few years"

I'm in Australia -- where our media diet is only 90% US-dominated -- so maybe I encountered a different mainstream than you, but I don't think this is accurate. Certainly by mid-2020 we had articles about AstraZeneca preparing to deliver hundreds of millions of doses in 2020-21. If you mean very early on, my memory is that the media (once they got past the phase of ignoring the virus or telling us we were racist for worrying about it) expressed justified uncertainty about when and if a safe effective vaccine would arrive.

(I wouldn't be surprised if Trump expressed high confidence before the mainstream did, but that obviously has more to do with his willingness to express high confidence in any proposition that suits him (including optimistic 'predictions' about the course of the pandemic that were obviously unjustified and turned out to be completely false) than with anything that could make him a useful source of true beliefs.)

I searched Google News for the words 'coronavirus vaccine', and although the sort-by-date feature seems a bit screwy, I'm not seeing a lot of vaccine defeatism even early on. The closest I've found to your claim is this quote paraphrasing Fauci --

> In early March, Dr. Anthony Fauci, the infectious-disease expert on the White House’s coronavirus task force, said it could take at least 18 months to develop a vaccine.

-- but not only is 18 months significantly less than a few years, that paragraph appears in an article from late April that is overall quite bullish about the prospects for emergency approval of various vaccines by Fall 2020 and early 2021.

Expand full comment

I saw the 18 month figure bandied about quite a lot. I have no idea where "a few years" could come from.

Expand full comment

Looking back to the early days - the retraction of the Lancet article on HCQ was a huge blow to expert credibility. It became politicized, and that was enough for a DEEPLY faulty article to get published in a journal of mainstream science, either because the journal had truly abysmal standards or because it compromised them so it could publish something agreeable to the Vox/NYT side of the politicized issue.

As far as I know, we still haven't demonstrated real efficacy from HCQ, so you could technically argue they weren't "wrong" per se. And the fact that the issues got pointed out and the article got retracted helps. Regardless, I think only soldier mindset could keep one from calling that sordid episode a failing of mainstream science.

And perhaps the most dangerous implication of the above is that it weakens your "!=" equation above. One institution can follow the other, and it doesn't just flow from the right side of the equation to the left. If scientists and journals realize that they can get attention from CDCFDA or Vox/NYT by publishing something agreeable... they'll find a way to publish something agreeable.

And we saw this so many times. Trump closed the borders, and it was called unnecessary (or xenophobic) based on the low number of US cases at the time. While nobody *directly lied* about masks, I'm sure we all remember the initial coordinated effort to discourage masking by sowing FUD and widespread use of the phrase "no evidence". The wholesale dismissal of the lab leak hypothesis speaks for itself. In every one of these controversies, "mainstream science" was happy to supply fodder and dakka for the liberal side, and the opposite view only became acceptable to publicly endorse (even for scientists!) once liberal politicians and journalists said so.

When you base power on science, that will amount to power informing science *at least* as often as science informing power.

Expand full comment

The Lancet article wasn't retracted because it was of low quality. It was retracted because one of the authors, the one who provided the data, straight-up faked the entire database: https://www.theguardian.com/world/2020/jun/03/covid-19-surgisphere-who-world-health-organization-hydroxychloroquine

Maybe the Lancet's peer reviewers should have caught the fraud, but even the fraudster's co-authors didn't catch it, so I'd give them a pass. I do think the Lancet lowered their standards during the pandemic because of the urgency of the pandemic, and furthermore, I think it was the right thing to do. How many times has Scott gone after the FDA for killing people by slow-walking approvals of vaccines, rapid tests, and other pandemic-related products? If there's ever a time to cut corners, this is it.

"While nobody *directly lied* about masks, I'm sure we all remember the initial coordinated effort to discourage masking by sowing FUD and widespread use of the phrase "no evidence"."

As Scott pointed out early in the pandemic, the CDC didn't recommend against masks in a conspiracy to have enough masks for the doctors. Their website recommended against masks as early as the 2009 flu pandemic.

Expand full comment

That data was the core of the paper. The analysis wasn't novel or deep, the paper's unique contribution was that it constituted the first big dataset of results on HCQ. And that signature contribution was fraudulent. Everyone involved should be ashamed, from authors to publishers, even if their fall guy was genuinely most at fault.

As for justifying corner-cutting: yes, I'm in favor of lowering barriers so that doctors can prescribe possibly-valuable treatments to their patients. But this had the opposite result - the drive was to stigmatize and suppress the use of a possibly-valuable treatment, even if a doctor thought it was worth a try. Seems to me that the Lancet article was closer to the thing that Scott went after, than the thing he wants.

As for masks, let's look at Scott's conclusion about it more closely. He didn't fully get why the CDC was recommending against masks. He noted that they were carefully splitting hairs between "not effective" and "not proven effective", and speculated this overcaution was because of a hangover from medicine's "century-long war with quackery."

But with the benefit of more data than Scott had then, let's take another look. The CDC and FDA turned on a dime to promote masks, and mask *mandates*, which strikes against the "they were just being very cautious" theory. Taken together with things like the organizations' treatment of the lab leak hypothesis and Walensky coordinating with the AFT to draft school reopening guidelines, I think a more cynical conclusion is called for. I think it's fair to conclude that as political entities, the CDC and FDA hew more closely to political incentives than scientific results.

Expand full comment

Thank you for writing this.

Expand full comment

In general, scientists trust each other not to completely invent data out of thin air. They're not completely naive--they expect p-hacking, spinning, publication bias--but science just doesn't work if everyone's faking data all the time. Because of this, peer review is not an exhaustive forensic investigation, but a basic review of the methods and of the plausibility of the conclusions. I think this is a reasonable use of scientist time. The real review happens after publication, when the entire scientific community looks over the paper, and when other teams try replicating the results. It's unrealistic to expect every paper, even in a top journal, to be "right". It's more realistic to expect faulty or fraudulent studies to be caught quickly, which is what happened here.

In this particular case, the fraudulent study halted studies of HCQ, but there's no reason the fraudulent studies you get by lowering standards would always slander treatments instead of shilling for them. In fact, the same Desai from the same Surgisphere provided fake data for another influential study which "proved" that ivermectin was a miracle drug, which (AFAIK) is the genesis of folk belief in ivermectin in some circles today: https://www.the-scientist.com/news-opinion/surgisphere-sows-confusion-about-another-unproven-covid19-drug-67635

"The CDC and FDA turned on a dime to promote masks, and mask *mandates*, which strikes against the "they were just being very cautious" theory."

Perhaps they started by repeating their common wisdom from 2009, before reviewing the scientific literature again and talking to health officials from other countries and deciding they were wrong. Why is "they changed their mind" not plausible? I'm not doubting that the CDC is political--it's hard to argue that the eviction moratorium isn't a political decision, for example--but I see no political reason to first recommend against masks and then recommend them.

Expand full comment

Trump never "closed the borders" at all. He put some minor restrictions on travel from China (only) and declared the problem solved once and for all. The arguments against the supposed "ban" look pretty good in hindsight.

Expand full comment

This is similar to Scott's idea of Epistemic Learned Helplessness

https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/

Expand full comment

Yeah some mix of that and trapped priors, except I'm not sure my priors are exactly "trapped" in this case.

Expand full comment

"But I sort of feel like this has happened to me with everything about COVID. In the beginning I adopted the default position of "the scientists have got this" and believed all their predictions."

Which predictions of theirs did you believe which didn't end up being true? Everything I read in the early pandemic (March 2020) emphasized how hard it is to predict the evolution of an exponential process where human behavior and herd immunity can drastically modify the scaling factor inside the exponent. The one important scientific result from January-March 2020 was the infection fatality rate, which they said was 1-2%, increasing dramatically with age, higher for males than females, and possibly slightly overestimated because of undercounting. 2 years later, we know the infection fatality rate is about 0.7%, increasing exponentially with age, and higher for males than females. In other words, the scientists got pretty close just based on the Chinese data from January-February.

Expand full comment

E.g. the need for massively scaled up ventilator production. For a month or so I was doing research into ways to engineer quasi-ventilators super fast and cheap, formed a small working group with some like minded people etc. Then it became clear that actually, no extra ventilator demand existed, and probably not a single rapidly engineered vent would ever be put into production.

I'm curious what you read that projected such levels of uncertainty. Not saying you didn't see it, but I recall wall-to-wall consensus from epidemiologists that there'd be endless exponential growth until the entire population had been infected. One single massive wave of a size so large that emergency hospitals filling stadiums were required. Zero uncertainty anywhere. That turned out to be false and moreover, entirely implausible given even the most rudimentary study of past epidemics.

But I'm not sure re-litigating all these specific things is the right way to go here. The topic is actually the meta-questions about rationality. I feel like I scouted many times and each time, was nearly "killed" by arguments that sounded reasonable but turned out to be deeply flawed on close inspection, and often, flawed in ways that looked deliberate. Regardless of the underlying issues, does it make sense to keep being a scout in that situation? I'm not sure it does.

Expand full comment

So why was it that no extra ventilator demand existed? The news articles I read suggest a combination of factors: the curve was flattened quickly enough, production was boosted quickly enough, and doctors started using ventilators less often because they found it was leading to worse outcomes. I don't know which factor dominated, but considering that there were countries that did have a ventilator shortage (e.g. Italy, India), it wasn't irrational to order a bunch of ventilators in March 2020.

"Not saying you didn't see it, but I recall wall-to-wall consensus from epidemiologists that there'd be endless exponential growth until the entire population had been infected. One single massive wave of a size so large that emergency hospitals filling stadiums were required. Zero uncertainty anywhere. "

The mantra of March 2020 was "flatten the curve". Search "flatten the curve" on Google Images and you'll see thousands of versions of the same plot: a "no intervention" case, a "protective measures" case, and a horizontal line representing hospital capacity, showing that "protective measures" would keep case rates below hospital capacity. If epidemiologists all agreed there was "zero uncertainty anywhere" and that human behavior can't decrease the spread rate, what's the point of flattening the curve? What's the point of cancelling events, staying at home, social distancing, or anything else? It's not plausible to me that epidemiologists thought human behavior had no effect on the course of the pandemic, or even that most people misunderstood them to have said this.

Nate Silver surveyed experts on March 30-31, 2020, and wrote an article called "Best-Case And Worst-Case Coronavirus Forecasts Are Very Far Apart": https://fivethirtyeight.com/features/best-case-and-worst-case-coronavirus-forecasts-are-very-far-apart/

"Building a model to forecast the COVID-19 outbreak is really freaking hard. That’s one reason we’ve been following a weekly survey of infectious disease researchers from institutions around the United States.

This week’s survey, taken on March 30 and 31, shows that experts expect an average of 263,000 COVID-19-related deaths in 2020, but anywhere between 71,000 and 1.7 million deaths is a reasonable estimate."

In actual fact, there were 350,000 deaths--almost exactly the geometric mean of the lower and upper estimates.
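
(Checking that arithmetic: the geometric mean of the two bounds is sqrt(71,000 × 1,700,000) ≈ 347,000, so 350,000 is indeed within about 1% of it.)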

"But I'm not sure re-litigating all these specific things is the right way to go here. The topic is actually the meta-questions about rationality. I feel like I scouted many times and each time, was nearly "killed" by arguments that sounded reasonable but turned out to be deeply flawed on close inspection, and often, flawed in ways that looked deliberate."

I think it's important to come to a conclusion about these specific things before drawing any general conclusions about rationality. If you found out that the epidemiologists were right about COVID-19, or at least made logical statements that made sense given the information available at the time, wouldn't that be an argument for the scout mentality? At the very least it wouldn't make sense to say their arguments were flawed in ways that looked deliberate, even if, in hindsight, they were flawed.

Expand full comment

I think this is what I was trying to avoid. This discussion should be about the mentalities in general rather than specific cases. I used COVID as a personal illustration of cases where I feel one side has repeatedly lied, to the extent that scouting is just an invitation for being manipulated. It wasn't really a "hey let's debate it all from scratch all over again". For example, I think you're wrong about absolutely all the above, as were all the "experts", but we're both soldiers here and this soldier is getting tired. Been there, done that. For any specific example I'm pretty sure you'll find ways to convince yourself that the "experts" were right. We're living in different universes and it seems pointless to try and cross them anymore. Sorry.

Expand full comment

"E.g. the need for massively scaled up ventilator production. For a month or so I was doing research into ways to engineer quasi-ventilators super fast and cheap, formed a small working group with some like minded people etc. Then it became clear that actually, no extra ventilator demand existed, and probably not a single rapidly engineered vent would ever be put into production."

I wouldn't be so hard on yourself — or, for that matter, so hard on those who at first thought we would need extra ventilators.

Ultimately, medicine is a social science, and can't constrain itself to optimizing the physical accuracy of its diagnosis and physical efficacy of its treatments. It also relies on protocols that are good enough for now... until they aren't. Like protocol calling diseases either "airborne" or not, officially leaving no room for Mr In-Between — until something like COVID makes the protocol's flaws impossible to ignore. Or protocol saying get ready to bust out the ventilators when oxygen saturation drops below this number — until something like COVID comes along to revise that, too.

When medical providers are toughing out the first wave of a crisis with the protocol they have, it's normal and human — perhaps even rational! — for the rest of us to react by trying to get these providers what *they say* they need, and for these providers to rely on protocol to *tell us* what they need.

Considering that medicine is a social science, and public health is therefore "doubly social", how much, really, were the flaws in protocol that COVID exposed the result of BS artists gaming the system? And how much were the flaws due to the nature of medicine itself — that it can't just stick to the science, but inevitably does make judgments about who's "worthy" and who's "unworthy", who ought to be reassured they aren't too badly to blame for their suffering versus who ought to be shamed into better habits?

Even here, at ACX, when Scott dives into Long Covid, you'll get extremely rationalist comments going something like, "So yeah, Long Covid could be real. But is it wise to publicize it's real, since it's also the perfect 'fashionable malingering' diagnosis for these times?' That is, even if Long Covid is real, *can we afford* to treat it as if it is?

ACX comments might tend individualist, libertarian, and rationalist, but even here, what might be factually true for individuals can still seem worth sacrificing on the altar of wider social control.

And is it realistic to expect otherwise? Medical providers play the odds when they diagnose — *all* the odds, including the odds that they'll attract punishment if their care isn't "standard", and odds that their patients fit a social type whose testimony should be distrusted. (If we're one of the patients who really and truly do buck those odds, can we even blame medical providers for not believing us?)

Scott is justifiably loved for giving advice on how to navigate the psychiatric gatekeepers who must play these odds. Much of Scott's advice boils down to seeming like "the right type of patient" to get the care you might need — even if you aren't. Like, don't answer, "Why yes I hear voices," just because it happens to be true, since you're an audio engineer who relies on mentally hearing voices to do his job. No, that is the *wrong* answer, even if it's factually right.

If speculations about lab leaks kept people too busy blaming perfidious foreign power to accept and address the community spread that was already here, so might "lab leak" have been the wrong answer, too, even if it's factually right: If it's possible we can't afford to take Long COVID seriously even if it's real; if it's necessary to literally lie about the voices one hears in order to give an answer conforming to the gist of what very intelligent people *think* they're asking when they pose that question; then it's also possible we couldn't afford to take lab-leak theorists seriously even if they're later proven correct: we can't really have this kind of logic both ways.

And, for all the absurdities of mainstream gatekeeping, it's possible the socially-possible alternatives to mainstream gatekeeping are even more absurd:

https://modelcitizen.substack.com/p/q-trust-and-you

(Wilkinson seems perfectly happy to revel in the kind of twerpy, smug defense of mainstream institutions that's likely to irritate many of us here, including me. But... is he less wrong than I wish he were?...)

Expand full comment

> I ended up debating a scientist I was vaguely acquainted with in the street for about 20 minutes. We met, said hello, not seen you for a while, got talking and ended up in a massive debate about PCR testing, modelling and other things. He was very much still where I started so it was a vigorous debate.

The older I get, the more I think that being right is more trouble than it's worth. If you go to the enormous intellectual effort of actually trying to be right about an issue, then two things can happen:

1. You figure out that the people around you happened to be right, in which case all that effort was wasted.

2. You figure out that the people around you happened to be wrong, in which case you wind up in conflict with all the people around you. Either you're constantly getting in arguments and getting socially shunned, or you're constantly biting your tongue to stop yourself from correcting them.

I'm already in situation 2 on quite enough issues, I have limited desire to add any new ones. I have successfully avoided getting too caught up in being either a scout or a soldier on any covid-related topic, and I'm happier for it.

Expand full comment

Well, sometimes there's also:

3. You figure out a way to capitalize on other people's wrongness and end up a successful entrepreneur.

Hence the Peter Thiel-ian interview question of, "What do you believe is true that nobody else believes?". But this option doesn't exist for questions of public policy, of course. Which in my view is one reason public policy is so often wrong. People need an incentive to discover the truth in the face of groupthink, and in government/academia that incentive is missing. Especially in epidemiology, where the history of the field is littered with people who were right about things (e.g. the way cholera spreads), were socially shunned their whole life by everyone else who thought they were wrong, and who only became celebrated as heroes after they were dead. Not a nice way to live a life.

Expand full comment

You're quite right. There are some areas where it's strongly worthwhile to be right while everyone else is wrong, and some areas where it's totally un-worthwhile.

The best strategy is to try to concentrate your intellectual efforts in those areas where they will pay off, and to try to ignore the areas where they won't. Use your brain power on "I wonder if there are any new innovations in materials science that could be applied to another field" rather than "I wonder if the Holocaust actually happened".

Expand full comment

"The older I get, the more I think that being right is more trouble than it's worth. If you go to the enormous intellectual effort of actually trying to be right about an issue, then two things can happen..."

An economics lecturer (from the Great Courses folks ...) my son watched pointed out that the *cost* of knowing the correct answer for something is often not worth it. "Rational ignorance" is the phrase I (hopefully correctly) remember.

My take is that rational ignorance is fine. But you need to be clear that you don't know. Being ignorant and thinking you know is a bad place to be.

Note for your #2: You don't have to TELL people that they are wrong. I've made it clear to my son that just because you know something doesn't mean you have to share. Choosing to share is a second, independent, step after the effort of knowing.

Expand full comment

"The older I get, the more I think that..." standing up for the bullied kid in high school everyone derisively called the kitten murderer was "... more trouble than it's worth".

Whether he did actually kill kittens in the town's graveyard I could never decisively disprove but "either [way, I was] constantly getting in arguments and getting socially shunned." I already had "enough issues", like a case of acne and no date to the prom.

Of course this sentiment doesn't strike me as "the older I get" wisdom. Instead it seems like "the more cynical/Machiavellian I get". My twist to your story certainly changes the advice, but why? It's useful to explore why examples of social grace and transcendence-of-partisanship bleed so seamlessly into cowardice and tolerance for viciousness.

1. There is some dignity in personal ethics (e.g. how you treat the beggar in front of you) which supersedes your social policy insights and commitments (e.g. covid lockdowns negative effect on economically-at-risk families)

2. Social harmony uber alles: it's not important for a State to get the actual policy prescription 100% correct - which is impossible anyway - but that the public buy into the action ("get the benefits") and stop arguing ("avoid the costs").

3. Combining the above two..."In matters of principle, stand like a rock; in matters of taste, swim with the current." This can be easily seen by changing "standing up for a bullied kid" -> "doing Dr Who cosplay all the time"...I mean c'mon it's a great show but read the social cues man.

I think there's a great confusion in the West currently about what role the average citizen should expect to have in discussion of, consent to, and dissent from the state's social science (which is ipso facto the state's social policy), and this combines with increasingly partisan social norms. One thing we can all agree on (!) is that this is uhh...not good.

My warning to you, old man, is that this exact dynamic was foretold. That the people will squabble, then ostracize, then fight about anything (and if you want to look at the causes of religious schism - I mean anything!) until a system sovereign from the people is imposed on them. If you want to maintain anything like a voice in self-determination, now is not the time to shut up.

(Unless it's about Dr Who, save that for the message boards)

Expand full comment

> After this happens a bunch of times your prior on "these people who disagree with me are probably bullshitting me" goes high enough that it's really hard to maintain a scout mindset. After all, clever bullshitters do it because sometimes they succeed.

> Having written all that, I have no solutions.

I've been very happy dealing with this by focusing at an individual author level (rather than by viewpoint). If too high a % of an author's output is low quality (being disingenuous is especially bad), they get removed from the reading rotation. This is balanced out by what % of their good arguments are non-obvious and hold up to scrutiny.

In order to evaluate an author's argument quality I also like to go directly to where they're published and get a random sample, since generally any prolific author will have some articles/arguments that are weaker than others, and their opponents will try to single out the weakest ones in order to dismiss the author as generally non-credible.

Expand full comment

I think this comes down to having to do the work in advance. Had you not had a high confidence in the initial "the scientists have got this," you would not have had an issue with retractions or repositioning. You would merely have adjusted your confidence levels accordingly. I am not saying that this is an easy task. Personally, it starts with being skeptical of my positions.

Expand full comment

Hi Mike H, I just wanted to write a note to say how much I appreciated reading this comment. I also have had a difficult time scouting during the pandemic, doing my best to help people around me understand biological stuff that I have studied as it is relevant and to work out what can actually be believed and it has been super hard. And I have to say that I really appreciate the soldier mentality amongst my community of the heart. It's the way I am that I don't mind going out and fighting with big questions about how I would know if x or y was truer and I love that there are people who have my back and love me (so far!) whatever answers I have found. I think it is emotionally super hard to scout particularly in this pandemic and I wanted to send out a bit of respect and 'shine' your way.

Expand full comment

I find myself in disagreement with this use of the word "probability", but I realize it's because I am a soldier for frequentism

Expand full comment

Brave of you to venture this far into enemy territory, then, as a scout would.

Expand full comment

Alas, nowadays it seems it's all enemy territory. The "Japanese-in-the-jungle mindset".

Expand full comment

Can you give a concise explanation of the difference? I've honestly never been able to find more than a semantic distinction. It seems to me both can capture the same ideas and conclusions, even if they term those conclusions differently. But since people obviously care a great deal about the distinction, I'm probably wrong about that.

Can you provide a case where you think a Bayesian and frequentist would come to different conclusions as a direct product of the difference between the two viewpoints?

Expand full comment

A frequentist says that there is no such thing as the probability of a general hypothesis - either the hypothesis is true or it is false. The frequentist has all sorts of rules about when one should endorse or reject a hypothesis, or leave it open, but nothing that adds up to a continuous modification of degrees of confidence in the hypothesis.

Expand full comment

I might be misunderstanding, but I think I agree with frequentists there - at least, on a strong interpretation of what it would mean for the probability of an idea to "be a thing". I don't think those probabilities are "out there" in some Platonic sense, for sure. It seems uncontroversial that a (meaningful, well-formed) hypothesis is either true or false, independent of our opinion about it or the data available to us.

But this isn't exactly what I was hoping for. I was hoping for a case where, based on a particular dataset of past observations, a frequentist would assign one probability and a Bayesian would assign another, with each motivating their conclusion with reference to their philosophical viewpoint. But going by your comment, we might not be able to find such a case, since they're answering different questions. The frequentist would assign a probability, the Bayesian would assign a credence; on a proper understanding of each, their results might not be in conflict, even if the numbers were to be different.

In case the above isn't clear, maybe another way to get at what I'm trying to say: is it possible to be both a frequentist about events and concrete predictions, and also a Bayesian about hypotheses? Where would that composite view break down?

Expand full comment

My interpretation (as a pretty hard core Bayesian) is that every probability that the frequentist talks about is also one that the Bayesian is happy to use. But the frequentist also has a bunch of *rules* for accepting or rejecting hypotheses, that the Bayesian has no use for, because the Bayesian says to *never* accept or reject hypotheses, and instead maintain a credence.

Expand full comment

Interesting. I sincerely appreciate the perspective, I'm pretty sure I disagree, but I'll try to read more deeply into it before formulating a clear reason why. (I'm sure there will be plenty of opportunities to discuss this topic on this blog in the future.)

Expand full comment

I doubt anyone has probabilities for “the sun will come up in the morning” or “space has, in the weakest sense possible, three dimensions”, and if they did, would they be 1e-10, 1e-50, 1e-500, or 1e-5000?

Expand full comment

I think there's a possible confusion here.

It's certainly possible to be a frequentist _probabilist_ about events in the sense that (e.g.) they only make sense when long-run frequencies do. So you might be happy to assign a probability to 'fair coin comes up heads' but - unlike a Bayesian probabilist - not to a single unique situation ('Trump wins in 2024'). The strict frequentist would simply be willing to say less than a Bayesian - not anything differently, just less (though he would claim that it's right to be more circumspect, because not everything that can be said is even meaningful).

People can differ on this but it would be a rather niche philosophic controversy if that's all that it was.

But that's not why B vs F is a _well known_ controversy. Rather, it is because of its implications for the field of _statistics_ (not plain probability theory), and so much of that is about hypotheses. Convince a frequentist statistician that he can assign probabilities to hypotheses, and - given that - he will then do a lot just like a Bayesian (it's just probability theory at that point), but 70%+ of his textbooks just become irrelevant. One of frequentist statistics' distinctive concerns is: "If it is improper to say that hypotheses have probabilities - since they are either true or false - what types of inferences and assertions can we nevertheless make, and how?". Take that away, and you'd just have one unified field of statistics.

It is because frequentists won't give probabilities to hypotheses that they and Bayesian statisticians can come to extremely different conclusions in concrete cases (and yes, often because, though it can be obscured, they are answering different questions). With deliberate effort you can often make them end up in similar places in common cases (e.g. a Bayesian can try to use 'uninformative priors', even though that's unnatural in real problems of import, where you almost always know stuff beforehand). Some people seem to think that this shows the differences are practically unimportant (though, spoiler: IMO that's a silly argument).
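To make the "different questions" point concrete, here is a minimal sketch in Python. Everything in it is an assumption chosen just for the toy: 60 heads in 100 flips as the data, and a deliberately crude two-hypothesis Bayesian model (fair coin vs. a coin biased to 0.6) so that no special libraries are needed.

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

heads, flips = 60, 100  # made-up data for the toy example

# --- Frequentist: no probability is assigned to the hypothesis itself. ---
# One-sided p-value: how often a *fair* coin would give at least this many heads.
p_value = sum(binom_pmf(k, flips, 0.5) for k in range(heads, flips + 1))
print(f"p-value under the fair-coin null: {p_value:.4f}")  # ~0.028
# The frequentist then applies a decision rule, e.g. reject the null if p < 0.05.

# --- Bayesian: a credence is assigned directly to each hypothesis. ---
# Toy model: the coin is either fair (p = 0.5) or biased (p = 0.6), prior 50/50.
prior_fair, prior_biased = 0.5, 0.5
like_fair = binom_pmf(heads, flips, 0.5)
like_biased = binom_pmf(heads, flips, 0.6)
posterior_fair = (like_fair * prior_fair) / (
    like_fair * prior_fair + like_biased * prior_biased
)
print(f"posterior credence that the coin is fair: {posterior_fair:.3f}")  # ~0.12
```

On this toy data the frequentist rejects the fair-coin null at the usual 5% level, while the Bayesian reports roughly 12% credence that the coin is fair. The numbers aren't in conflict; they're answers to two different questions.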

Expand full comment

How would they differ regarding Newtonian mechanics? Frequentist says it is wrong, Bayesian gives a percent confidence depending on the evidence they have?

The answer to this question has always evaded me too, but I guess I have survived without it.

Expand full comment

The Bayesian gives it a credence of 0.00000000000001 or so. Though Bayesians like me don't advocate always working with explicit numbers, and when the numbers are this extreme, our intuition for numbers tends to give out before our intuition for when it's reasonable for evidence to bring such a hypothesis back up to salience.

Expand full comment

That doesn't really address my brain fart. We seem to be saying that both approaches agree in saying Newtonian mechanics is bullshit, when what we really want to say is that it works fine as a first approximation under terrestrial conditions. True/false and X% both unhelpful. So I guess I am just changing the context, and neither is intended for use in this one?

Expand full comment

In philosophy of science this topic is discussed under the term "verisimilitude" and is considered a really difficult one to say a lot about, but is really central to a lot of issues.

I was thinking of saying something about how both the frequentist and the Bayesian would say we have a lot of evidence that something very close to Newtonian mechanics is correct, but neither of them addresses the question of how to formulate this hypothesis, and particularly the "very close" part of it.

Expand full comment

Would you disagree with the concept of "degrees of belief" or is it a more semantic argument?

Expand full comment

I'm curious about the theory of psychotherapy mentioned toward the end. Is there a name for this theory? I'd like to read more about it.

Expand full comment

So what I'm hearing is, "the best way to be a good person is to be like me and my personal friends. The community I, personally, am involved in is the main one to avoid the scourge of confirmation bias -- the bias in which people think they are never wrong". Got it.

Expand full comment

In fairness, I would hope that most people believe that being like them and their friends is a path towards being a good person -- and if they don't, that's certainly not an endorsement of their community!

Expand full comment

Definitely not the best way to be a good person.

Besides that, your ironic statement is a fully general argument against any solution to correcting confirmation bias. A specific example on why it’s wrong would be more convincing.

Expand full comment

OK, try this: can either Scott or Julia give me a criticism of the rationalist community? Something they think the rationalist community gets importantly wrong, more wrong than the rest of society. I've never really seen this from them, which leads me to suspect bias.

I personally have quite a lot of gripes with the rationalist community. Perhaps the most infuriating is when they declare themselves to be so skilled, so singularly important, that donating money to themselves is "effective altruism". This happened a couple of times, for example when a "long-termism" EA grant gave money to a community member for the goal of "working to prevent burnout and boost productivity", including such measurable outputs as "gaining the ability to ride a bike". This EA application was by a (former?) instructor at CFAR, by the way, and she received $20,000 in effective-altruism money.

So look, I don't object to combatting confirmation bias in the general case, but maybe clean up your house a little first, OK?

Expand full comment

Could you provide a link to the grant?

I do think the rationalist and related EA community are especially critical of themselves as compared to most communities I’ve been a part of. For example, a really popular, recent EA post is one that’s critical of the EA movement (link: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism)

Expand full comment

Here is the link:

https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions

(search that page for "bike")

It also includes other charming highlights, such as the decision to print physical copies of Harry Potter fan fiction using effective altruism funds.

It is good to hear that EA accepts criticisms, but I mean, it's really weak sauce compared to the public repudiation that the long-termism grant I linked to deserves; you don't get to say "we are unbiased and great because we published a self-critical blog post once" while funnelling thousands of dollars of charity money to community members so they can self-actualize and ride bikes (while proclaiming to hold a genuine belief those community members are so wonderful and productive that this is in the best interest of the long-term future of the world, no less!)

Anyway, that's what's in the back of my mind when I read these self-congratulatory rationalist takes about confirmation bias (by Julia and Scott): that same community gives money to itself because it thinks that community members riding a bike is in the best interest of the long-term future of the world, and Scott/Julia/etc. never repudiated this or said anything negative about the community, yet they lecture me about confirmation bias?

And to be clear, Julia and Scott don't need to repudiate every negative thing any rationalist ever does; but if they specifically recommend the EA/rationalist community as a paragon of virtue, it does stand to reason that we should check whether these places are even remotely virtuous, and I say they are not. Certainly I would never claim that my friend riding a bike is a world-saving charity cause (to say nothing of printing dead-tree copies of Harry Potter fan fiction), yet the rationalist mentality appears to have led many people to that exact conclusion. Being part of the rationalist community may well make you less rational!

Expand full comment

Consider the fact that the grants and reasoning are published so that you can criticize them. Nobody is asking you to agree with the reasoning behind funding some of the personal requests of individuals in the community.

Also, giving talented Russian kids copies of fan fiction by the foundational thinker behind the rationality community explicitly written as a gateway to the community seems like a highly defensible way to spend some money.

Bed nets are not going to save the world.

Expand full comment

Thanks for giving exactly the kind of example of confirmation/in-group bias I've been saying is prevalent in the rationalist community.

Like, it's supposed to be commendable that a charity which fund-raises from the community discloses its spending and grant allocations? That's completely standard. Stop self-congratulating please, that's hardly the way to convince me you are free of bias.

>Also, giving talented Russian kids copies of fan fiction by the foundational thinker behind the rationality community explicitly written as a gateway to the community seems like a highly defensible way to spend some money.

This is basically self-parody, but just to make things extra simple for you: every time you print Harry Potter fan fiction while claiming to save the world, it makes normies like me want to spite you as hard as possible, almost surely negating whatever recruitment value those HPMOR books could possibly have (and the value wasn't high in the first place; the HPMOR books are in English and most IMO contestants don't necessarily read English well enough to decide to start reading such a book.)

>Bed nets are not going to save the world.

You claim Harry Potter fan fiction and riding bikes are going to save the world? @Elriggs, this is the stuff I was talking about when I said the rationalist community aren't particularly free of ingroup bias.

Expand full comment

"Also, giving talented Russian kids copies of fan fiction by the foundational thinker behind the rationality community explicitly written as a gateway to the community seems like a highly defensible way to spend some money."

Going by my admittedly limited exposure to Russian views on fanfiction, the talented kids are more likely to write their own version in which Voldemort is right and Harry Potter should be executed as a traitor to the state (see the discussion we had on here before about Russian fanfiction re-working of LOTR where Sauron is the Good Guy).

If Rationalist Harry gets a good kicking from smart Russians, I admit that *would* be a good use of the money 😁

Expand full comment

"It also includes other charming highlights, such as the decision to print physical copies of Harry Potter fan fiction using effective altruism funds."

My own personal opinion is that nothing is better calculated to have an ordinary person run screaming into the night away from 'the rationalist community' than asking them to read this.

Expand full comment

I believe Julia has once criticized the tendency to trust models over common sense. I.e., someone figures something out on a theoretical level, but it leads to a strange conclusion. Instead of taking that as a signal that the model is wrong, they just accept the conclusion.

Which is funny because I had the exact opposite criticism, i.e., people just assuming the model is wrong if the conclusion is unintuitive. (What's the point of models if you only accept conclusions you already believe?)

But in general, she has a very favorable view of the community, and said so herself. I don't think you'll find any harsh criticism from her.

Expand full comment

well any good thing many people believe has a community, so

idk how one avoids that. But there is a small contradiction there - one would probably do better to critique “confirmation bias” rather than accept it to get around it

Expand full comment

Don’t you think that joining the How Do We Figure Out What’s Actually Right Club is more likely to help you figure out what’s actually right than joining any one of the millions of We Are Already Right and You Can Leave If You Disagree Clubs?

Expand full comment

Anyone can call themselves the "How Do We Figure Out What’s Actually Right Club". I'm in academia, we literally all think this about ourselves and most of us are wrong about our own motivations.

Every group views themselves as virtuous. My interactions with rationalists do not inspire confidence, though I concede they are better than /r/changemyview (which somehow was also mentioned as a model community, WTF). The latter once banned me for refusing to change my view (i.e. to award a "delta") after someone tried to gotcha me on a technicality. That shows you how far declaring yourself to be the "How Do We Figure Out What’s Actually Right Club" gets you: not far at all.

Expand full comment

Is your contention that Academia has done no better than, for example, the Denver Broncos, at figuring out what's true and what isn't?

We're not talking about virtue. We're talking about which program tends toward factual correctness, in the long run. I'd put my money with your average chemistry department over Boy Scouts, when it comes to giving me accurate rate constants for reactions, and I'd put my money with the rationalists over the local Parent-Teacher Association, when it comes to accurate forecasts about whether a novel coronavirus is going to become a pandemic.

(To be clear, I expect the Denver Broncos to be better than the rationalists at football.)

Expand full comment

I disagree that the rationalist community tends towards factual correctness. This is an extremely self-serving thing to say, and I'm not particularly impressed with the single example of the pandemic.

By the way, for the record, I was worried about the pandemic back in January 2020 (primarily because I have many Asian friends, who were all concerned). I bought an N95 mask and everything. At some point, perhaps February, I checked Eliezer's twitter to see what the rationalist community thought about it. He was retweeting some people who were saying to stockpile a month's worth of food. Panicked, I went and did so -- a big waste of my time, in hindsight. Others in the rationalist community were saying that the stock market was being irrational and that they were selling/shorting. I decided to ignore that advice, luckily.

Has there been any post-mortem on those failed predictions? I haven't seen any; I only see self-congratulation, even though, again, the rationalists swayed me *away* from the right stance when it comes to the pandemic.

Expand full comment

It would be interesting to hear you describe a sort of hypothetical scenario where there was a group of Rationalists who actually deserved the label, and your description of how those people would behave.

I think part of your problem may be that you think "self-described rationalists who spend all day on twitter" is a reliable sampling of rationalists-in-general, when in fact it's a sampling of the least personally effective ones. The ones who can afford to spend all day on twitter.

This is an impasse that I often get into when discussing this general topic online. I am active in a meatspace rationality community. Rationalists in real life don't behave anything like the twitter caricature of rationalists. They are just smart, normal people who are unusually willing to engage in abstraction, and about 100x more likely to casually admit they were wrong than your average smart engineer-type. And now we're reduced to arguing about mutually incompatible sets of experiences that we are using to derive our sense of what the label "rationalist" refers to, and neither of us can be "right" even in principle.

If you are going to define your sense of rationalists as "rationalist twitter" then I suppose I agree with 100% of what you've said in this thread.

Expand full comment

>It would be interesting to hear you describe a sort of hypothetical scenario where there was a group of Rationalists who actually deserved the label, and your description of how those people would behave.

I don't have the answers. There are some real-life friends I consider careful and rational, but I don't know the secret sauce. Perhaps making a community out of it is a doomed prospect as communities always tend towards group-think and bias. Or maybe not; I'm no expert. I'm just saying, the parts of the rationalist community that are visible to me -- they ain't it.

>If you are going to define your sense of rationalists as "rationalist twitter" then I suppose I agree with 100% of what you've said in this thread.

Good, then we are in agreement!

But please do realize that when people (online) talk about the rationalist community, like I did and like Scott did in his post, this is always understood to be the online rationalist community. I don't know your personal circle of rationalists and cannot criticize them. You don't know other non-online circles of rationalists, and so you cannot honestly vouch for them. When people talk about the community at large, they really do mean the online stuff; constantly equivocating between them becomes motte-and-bailey if done too much.

Just to back up my point: a year or so ago, Scott Aaronson had a post saying the rationalists beat out other media outlets in warning about COVID early. He backed this up by... mentioning twitter rationalists (I disagree with his assessment of them). Then Scott Alexander featured Aaronson's post on his blog, once again saying that the rationalists did well with COVID prediction (equating "rationalists" with "twitter rationalists"). Also, up above in this thread, there's a guy who's trying to tell me that printing fan fiction counts as effective altruism since that fan fiction is by "the foundational thinker behind the rationality community" (AKA Eliezer, the twitter rationalist).

When Scott and Julia say to join the rationalist community, most people naturally assume they mean the twitter version, since that's the most visible part.

Expand full comment

I think stockpiling a month's worth of food in February 2020 was a very reasonable thing to do. It just turned out to be unnecessary. Do you feel like putting on your seatbelt was a big waste of time every time you get out of your car without having crashed?

Expand full comment

We know that sometimes people are saved by seatbelts (there's pretty convincing statistics on this). Care to name a time when someone in a first-world country was saved by stockpiling a month of food?

It *was* pretty silly in hindsight: for one, I can survive some weeks without food anyway. Second, if the food situation becomes so dire that I must stockpile food (no emergency camps available, for example), then where am I going to get the electricity to cook this food? Also, I don't know if I can survive a month without home heating -- that depends on the time of year. I also didn't do other prepper things I should have probably done earlier, like (e.g.) stockpiling gasoline so as to more easily evacuate.

You can't just recommend a random sample of useless prepper stuff (but not other, more useful prepper stuff) and then defend it because "don't you wear seatbelts".

Expand full comment

I also once looked into rationalist claims to have predicted the pandemic better than the media and found them to be giant lies. The problem with running a public blog is that you can actually see what people did or didn't write in the past.

Expand full comment

Google 'ssc we have noticed the skulls' and get back to us.

Expand full comment

I have noticed the "we have noticed the skulls" skull. There were many previous, doomed attempts to criticize the rationalist community, and they have failed, but I saw the pile of skulls and went "huh, better do the opposite of what those guys did". So now I have a criticism of the rationalist community that does not claim to be perfect, but please judge it according to what it is, not according to long-outdated stereotypes of what critics of the rationalists usually say.

Expand full comment

The only criticism you seem to be expressing here is 'you think your community is good', which... yeah, everyone thinks that of their own community, that's why they're in that community.

Expand full comment

No, the criticism I'm expressing is that I can plainly see a *huge* amount of in-group/confirmation bias in the rationalist community. That's my criticism.

I've expanded on this in other replies (scroll up), but one example I gave was the tendency of EA grants to give money to people in the rationalist community for "working to prevent burnout and boost productivity" (including measurable outcomes like "learning to ride a bike") and literally claiming this is saving the world.

If you think that the people in your community are uniquely equipped to save the world, to the point where they should get thousands of dollars just to have fun with (because them having fun might make them a tiny bit more productive at saving the world), then perhaps this community should not lecture me about confirmation bias, OK?

Expand full comment

Does the book touch on the problem of Going Public? It's easier to change your mind when your opinions are private. At least that's been my experience.

I see increasing polarization as partly an effect of more opinions being public because of social media.

Expand full comment

This is a really good point. I think we're seeing that among the stop the steal types and covid deniers. They staked out a public position and now they're invested. I'm thinking particularly about the talk radio hosts who died from covid after posting nonstop nonsense on social media and their radio shows.

Expand full comment

“It’s hard to be rational without also being a good person” looks like more evidence against the orthogonality thesis. If you’re intensely intelligent but don’t see “being mean to other people” as an error, you’re much more likely to dismiss them when they have true knowledge that conflicts with your priors.

And if rationality is just as hard as being a good person, doesn’t this suggest that an unaligned AI is likely to have biases which inhibit its abilities as well?

Expand full comment

There are two components to confirmation bias here: the first involves recursively weighing evidence more strongly when it supports priors, which is a straightforward flaw in the epistemic process. It gets addressed by things like counterfactual tests, and while it's fascinating from a mathematical perspective I'm not sure it has much to say about orthogonality.

The second is overtuning to deal with social dynamics. Embarrassment in the face of being publicly repudiated or conformity to an authority's opinions are sensitive to personal pressures beyond mere evidence. They serve an obvious purpose, but there's a clear tradeoff against strict rationality and it takes a particular set of virtues to overcome the discrepancy in a productive way. That's a feature of human cognition, and I'd be extremely skeptical of extrapolating from there to minds in general.

Expand full comment

> the first involves recursively weighing evidence more strongly when it supports priors, which is a straightforward flaw in the epistemic process.

Is that really a flaw, or a tradeoff in computation? In the polar bear situation, you discount that evidence because the polar bear is much less likely than "your friend was mistaken."

Now _ideally_ you'd remember all such claims, and if hundreds of people all claimed to see a polar bear, then you might connect all these dots and conclude that maybe there was one. But doing this requires remembering every single claim you've ever heard. If you assume that:

- you have limited storage space to work with

- input signals are often noisy

Then confirmation bias starts to make sense as a 'cognitive-space-preserving' mechanism. You either need to remember every single 'surprising' claim you've encountered, or else selectively discard sufficiently surprising incidents you encounter.

> The second is overtuning to deal with social dynamics

Agree with this assessment

> That's a feature of human cognition

This is only true if you really buy into the 'AI go foom' argument. If you expect AIs to have to reason with other minds, and make decisions about which other minds they can trust, then I think you should expect the same dynamics to play out.

Insisting that AI will go FOOM, and rapidly augment its own intelligence so fast that it dominates the earth to such a degree that nobody can stop it, obliterates any concerns around social dynamics. I don't think this 'foom' argument is really feasible, any more than the idea that you could fit a world-dominating intelligence into something the size of a hydrogen atom: space and time definitely pose real constraints on intelligent systems. If it takes a certain amount of spatial volume (and, say, energy) to outwit literally the entire human race, then I think it's reasonable to surmise that a time bound likely applies as well.

Expand full comment

> Is that really a flaw, or a tradeoff in computation?

Often the second, but always the first. The perverse case is where for a given set of evidence, one's posteriors are determined by the *order* in which the evidence is presented. That's a clear failure of rationality, and an exploitable one at that.

Discarding evidence for bandwidth reasons is a pragmatic necessity, sure, but surprisal alone is a poor metric - that's where you find the strongest evidence! Instead, filter according to whether a piece of evidence *or its negation* would cause an appreciable shift in your beliefs. The trick then, is acknowledging your own level of uncertainty. Thus, calibration! (There's a Bayesian formalization for most of this, but I doubt a string of letters showing an unbalanced equation would be helpful.)

>>That's a feature of human cognition

>This is only true if you really buy into the 'AI go foom' argument.

No. This is a feature of human cognition regardless of whatever other intelligences might exist. That's important, because the position that it is *also* a feature of all other intelligences requires not merely rejecting fast takeoff but the positive assertion that all (meaningful?) intelligences will be similarly moulded by social pressures over the course of a slow takeoff. That not only will the same solutions be reached, but that they will be reached by the creation of similar heuristics. An intelligence that plays nicely while it's at parity with humans and then does a Clippy impression once potent enough is a failure, regardless of whether that overlap is seconds or centuries.

(That similar heuristics result in alignment is a necessary follow-up requirement, but distinct enough to set aside. IMO, extant humanity is enough to disprove it.)

Expand full comment

Being someone who can admit they're wrong and who seeks to understand others is *instrumentally* useful for rationality and for being a good person, but it's also useful for a wide range of other goals, like manipulating others.

Expand full comment

Sure, you _could_ manipulate others into serving your interests against their will.

But if you view other people as 'useful epistemic tools', this could backfire on you. It might be like using a powerful microprocessor to only solve really simple problems, while you continually micromanage it.

If your _only goal_ is instrumental rationality, this suggests to me you should _want_ other people to be as intelligent, wise, and virtuous as possible, so that you can get the most use out of their brains!

Expand full comment

Orthogonality thesis is about final goals (utility functions) being orthogonal to intelligence. I think "wanting other people to be as intelligent, wise, and virtuous as possible" is (possibly) a convergent instrumental subgoal, not a final goal.

Expand full comment

My current steelman is this:

The orthogonality thesis is usually used to argue that AGI will not, by default, optimize for human values (because it's optimizing for its utility function).

But, if human values end up being instrumentally convergent, AND the AGI heuristically learns them instead of its original utility function (for example, evolution is humanity's "original utility function" but we care about beauty, sentience, etc. as opposed to just propagating genes), then AGI may, on average, tend to optimize for human values.

I don't think this is probable. Even if we choose the only goal as instrumental rationality (how would you encode that?), I can't imagine the optimal policy valuing what I value. Emulating human brains and taking out the unimportant bits for rationality purposes seems more efficient.

Expand full comment

Was about to post something similar. It seems obvious that having good techniques for gathering accurate information is a convergent instrumental subgoal, i.e. it's useful for many possible ultimate goals. This is not at odds with (my understanding of) the orthogonality thesis.

I do think that there's probably *some* overlap between "convergent instrumental subgoals" and "human morality". I think morality can be viewed as being (at least partially) a decentralized system for promoting positive-sum cooperation, and to the extent that it succeeds, one might expect an AI to converge on it.

But I think there's some really huge caveats to that.

For one, notice that humans don't typically cooperate with ants. I think cooperative strategies mostly work when there are a lot of other agents approximately as powerful as you are, and so an AI that is sufficiently smarter than us might not see much instrumental reason to cooperate with us.

For another, I think human-style moral systems are leaning pretty heavily on the fact that false signals regarding your goals and emotions are costly for humans; e.g. it takes a lot of effort to look convincingly happy when you're actually sad, and vice-versa. There are certainly still instances of humans being "two-faced", where they act as allies when you can see them and then immediately betray you when you can't, but I think this happens much less than one would expect for an agent whose external signals are entirely voluntary. If an AI's internal state is less "transparent" to humans than human states are, then it might secretly defect a lot more often.

(Although, the AI also might see advantage in self-modifying to become more transparent, in order to create more opportunities for cooperation. But the increased transparency would need to come in some hard-to-fake form, like publishing its source code, not simply in the form of an internal decision to be honest.)

Expand full comment

> “It’s hard to be rational without also being a good person” looks like more evidence against the orthogonality thesis.

Not really. The orthogonality thesis is about minds in general. That two things happen to be correlated in humans, due to their particular mind-design, doesn't imply that they're related in the space of all possible minds.

Expand full comment

I would suggest that being (relatively) rational decreases the number of possible flavours of jerk that you can be, without entirely preventing you from being a jerk.

Expand full comment

I've seen a lot of press for this book, but reading this review was the first time I realized that "scout" in the title refers to scouting something out, and not glorifying the boy scouts. I think it was the cover image of the binoculars looking out over a landscape right out of a national park.

Expand full comment

The Boy Scouts were originally modeled on / inspired by military scouts, so really the chain of connection goes full circle there.

Expand full comment

Me too

Expand full comment

Does Galef, in a spirit of fair play, mention Mercier and Sperber's theory of argumentative reason?

Expand full comment

I think that you're raising an issue that I too have been wondering about for a little while.

The rationalist community seems to accept the dual-process model. System 2 has a chance of correcting System 1's errors, as discussed in Daniel Kahneman's *Thinking, Fast and Slow*. This is taken to apply in any normal adult. I think CFAR's goal is making us all better at this.

Mercier and Sperber's argumentative theory of rationality describes reasoning differently (see their 12-page summary here: http://www.dan.sperber.fr/wp-content/uploads/2020_mercier_sperber_bounded-rationality-in-a-social-world.pdf). They argue we are a lot better at reasoning through dialogue than we are at reasoning on our own. They are dubious about solo reasoning in part because they don't think it is innate. They think that training can work, but isn't really portable to new contexts, and thus its use is domain-specific.

There's a LW post on their work from 2010 (https://www.lesswrong.com/posts/zNawPJRktcJGWrtt9/reasoning-isn-t-about-logic-it-s-about-arguing). An important point:

> Becoming individually stronger at sound reasoning is possible, Mercier and Sperber point out, but rare. The best achievements of reasoning, in science or morality, are collective.

For improving our solo reasoning, what are the *practical* implications of Sperber and Mercier's model replacing the dual-process model? Other than pessimism?

Galef and Sperber discuss this on pages 14-18 of her podcast transcript (http://rationallyspeakingpodcast.org/wp-content/uploads/2020/11/rs141transcript.pdf), but not to a satisfying depth. Scott has mentioned Sperber and Mercier a few times over the years, but only in passing.

Does anyone know of a good, third-party comparison of their work with e.g. Kahneman's? I'd appreciate reading that first, as context for better tackling their 2017 book, *The Enigma of Reason*.

Expand full comment

I hate it, so it's probably good advice.

Well, to say "hate" is too strong. But the thing is, scouts are just as much part of the army as soldiers. Scouts are still "on a particular side" and are working against the scouts from the opposing army. What if you don't want to fight in any war at all, you just want to find out things? Is there such a thing as The Nature Rambler Mindset?

Second, the Obama anecdote strikes me not as "wow, strong BS detector, how admirable!" but as "what a jerk". (And yes, I did the "imagine it's a guy you like" part to check out my reactions).

I'd hate a boss like that, who was constantly whip-sawing between "I love it/I hate it" in order to 'catch me out'. You could never be sure what his real opinion was, when he had genuinely changed his mind, and when it was "he always loved/hated it, he was just pretending the opposite". Plus, if the people working with him have any brains at all, they will figure out the strategy after he does it a few times, then they will always have two sets of opinions ready to go at all times - if Obama says "I love this thing!", be ready to go with "Eh, I'm not so sure"; if he says "I hate it!", be ready to go with "Oh, there are good parts". That way you can always turn on a sixpence when he goes "Ha, fooled you, I hate/love it!"

It'll trip everybody up when he goes "No, honestly, I really do love this" "Yeah, sure, Mr. President" *wink* "I know how this goes, you don't want yes-men!" "No, seriously, I do believe this" "Ha ha, can't catch *me* out, I know this means you hate it!" but at least Mr. Big Guy can flatter himself on his sterling bullshit detector.

You're correct that this will require a lot of change and a lot of work to improve yourself.

Expand full comment

I read the book but maybe I wasn't paying enough attention because I was surprised by "the scout is also at war"; in my mind the archetypal scout is exploring unknown territory for their band, and the only war necessary to fight is the one against entropy.

Expand full comment

There are two kinds of scouts in society: the explorers, which are what you mention, and the military kind, fast-moving, lightweight, low-drag operators who move ahead of the main contingent (the soldiers) to find out what lies ahead. The "scouts at war" have significantly more incentive to get purely accurate information than the explorers do; lives depend on it, after all. The explorers mostly care about having something interesting to report, which likely explains academia quite nicely.

Expand full comment

Well, since the metaphor pits Soldiers versus Scouts, I am taking that the military sense of scouts is what is meant.

Expand full comment

I was offering an alternative way to think about the word "scout" so you don't feel obliged to throw the baby out with the bathwater because you find that the metaphor breaks down when scouts want to desert.

Expand full comment

Before I read the book I assumed it was about boy scouts, being helpful, camping in the woods etc. It's a good and clearly good-hearted book, but like Deiseach, I'd rather go on a nature ramble, and if I were conscripted, would probably have more of a 'Yossarian mindset'.

Expand full comment

In practice, you’d only pull this on people you suspect are yes-men, which would avoid the “boy who cried wolf” effect you mentioned.

If I had someone I trusted to have their own thought out opinion, then this is unnecessary. They’d argue that I was wrong or ask why I believed in X when Y was true or… it’d be a much more useful conversation than “yes, that’s a good idea”.

Expand full comment

Depending on how it's implemented, yes. This is only a second or third hand account of an anecdote, so it's hard to judge (though it does seem to fit with the general media fawning over Obama over So Smart! So Superior to Last Guy!).

But the brute fact is, if you're the president and you really like the idea of "We should institute a National Broccoli Monday where every household in the country has to include broccoli as a main dish for a meal", then you're going to get your way whether your advisors say "great idea, sir" or "maybe re-think that, sir". Seasoned political operatives will realise this and adjust how they respond to you: is this a really dumb idea that will die of its own accord, is this something we need to worry about, or is this something that we can just go "yes sir no sir three bags full sir" about? (Let's leave Trump and La Résistance out of it for the moment, with the papers publishing stories by Brave Anonymous Civil Servant bragging how he deliberately did not carry out the president's orders.)

And we don't know that Obama only pulled this on people he suspected of being yes-men, or if he tried it all the time. Looking it up, the original quote comes from an address at "a tech conference in Las Vegas hosted by the identity-security company Okta":

"Mr Obama says a lot of big decisions were made in the Situation Room.

"If you were in the Situation Room, the way it would work is you've got some big kahunas sitting around the table," he said. This may include the secretaries of state and defence, the CIA director, the national security adviser, and "a bunch of generals" he said "look tough and important".

Naturally, they all gave their two cents, and they "red teamed" it, he described, meaning the group tested all the assumptions.

"Invariably, in the outer ring of the room, there'd be a whole bunch of people, often younger but not always, and they had the big binders and they're doing stuff and taking notes -- and those are the people who are actually doing the work," he quipped, and the audience rewarded him with a laugh.

"I'd point to someone in the back and say, you, what do you think? And they'd be shocked that I called on them," he said. And because the person hadn't prepped a response, Mr Obama said, the person would answer honestly.

"Part of the way I was able to ensure people were not telling me just what I want to hear, was to deliberately reach outside the bubble of the obvious decision makers," he said.

5. Test your B.S. detector.

"Every leader has strengths and weakness, and one of my strengths is a good B.S. detector," Mr Obama said. But he still tested that people were doing their part and not just agreeing with what he said or telling him what he wanted to hear. If someone agreed with his idea, he said, he would tell them that he had changed his mind and then grill the person on why the idea was still the best option."

So it sounds as if he tried it on everyone, and I still think it leads to "okay, we've danced this dance before, we know if he says X and we say 'okay, X is fine' then he'll suddenly go 'no, I want Y' and so we need to either have a spiel on why X is great or why Y is bad" which doesn't lead to mutual trust, it leads to trying to guess "does he really want X or not?"

Expand full comment

Thanks for looking up the source!

I'm unsure if it would lead to that dynamic or not. Thinking who I would want help with decision making... people who can play devil's advocate or pre-mortem (predict how the plan fails before it fails).

If I'm stuck with a specific group of people and want to incentivize that behavior, well, if I was the president, I could just tell people to play devil's advocate or to do a pre-mortem.

Expand full comment

I'm guessing the resolution here isn't mutual trust, but rather "I'm the president so I can get a better advisor if I decide this one's not trustworthy." But yeah, we don't know much about the actual context.

Expand full comment

I mean, if he's actually doing it right after they said they agreed with him, then there's no subterfuge. It's just him playing devil's advocate so they explain to him the reasons for a position that they both know Obama (and nominally the other person) believes.

Asking random advisors to let you play devil's advocate against them isn't really a problem for me at least.

Expand full comment

It sounds to me like he is being flippant and bragging a bit. I am guessing however that where he stands (in-group/out-group) is not the same between you and I.

Expand full comment

> someone who says misleading things to catch you on not getting it

Isn’t this just joking or messing with someone? Seems a bit normal

Expand full comment

In highschool, we would say a nonsense joke with everyone in on it except one person to see if they'd laugh. The cousin-favorite-song story happened similarly as well.

Expand full comment

Half of internet “personal stories” are like this - fiction for laughs

Expand full comment

I do believe that people make up personal stories for laughs, but how does that relate? My own story is true and I do believe Galef's story is actually true.

Expand full comment

It relates in that the guy on the other end writing the story is doing the same nonsense joke thing, except the entire subreddit is the butt of the joke

Expand full comment

I feel we're talking about two different things (an example reddit post may help!).

Saying a joke that doesn't make sense is a common conformity experiment, but I'm unsure of reddit posts that do that. There are lots of fake/staged reddit posts and videos that some sincerely believe. Are you talking about the latter?

Expand full comment

Yeah, but the Obama anecdote isn't "us guys just messing around in the Oval Office to lighten the mood", it's presented as Good Man (or I suppose nowadays I should say Person) Management practice.

There are enough management theory fads and fashions that come and go, I'd hate the Obama Yes-man Weeding Out Method to become widespread practice. Granted, it would only last six months until the next overcoat, cheese or rhino became the fashion of the day, but it would be a pain in the backside never knowing if the boss really does want you to go ahead with X, dump X for Y, or is just 'testing' you to make sure you're not a yes-man.

Expand full comment

The story as presented seems pretty bad. It may also be that we are getting a partial story, and he did this once (or very infrequently) in a specific situation where it made sense, but not as a general practice.

Better management practice would be to ask the opinions of others before voicing your own, asking detailed questions to drive to the heart of the matter, and then formulating an opinion. That he would come out in favor of something and then ask his advisors their opinion seems like a management mistake in the first place.

Expand full comment

I'd like to know more about Obama's method before judging it. If I'm a staffer and he does that to me all the time, then yeah that sucks. If he just does it once to me personally in order to make the point that I should always be honest, it makes sense as a learning/signaling mechanism.

Expand full comment

I am guessing this was something he did situationally with someone he did not previously know well, but needed to be able to depend on. Something I have done twice in my (55-year) life, but only after having my hackles raised, is to leave something available for theft to weigh an individual's integrity. It would be pretty antisocial to do this on a regular basis. But in these particular instances it was a quick way to recalibrate my trust level. (Strongly south in both cases.)

Expand full comment

But everybody has a side, the one which tries to ensure that they can continue to be a "nature rambler" or whoever they wish, at the very least. Life, at its core, is competing interests, "war of all against all", everybody must either participate or perish.

Expand full comment

"I talk a good talk about free speech, and “don’t cancel other people for discussing policies you don’t like, they have a right to their opinion and you should debate it instead”."

But I've noticed that you've banned plenty of people from the comments for politics you don't like. Is there really a "scout mindset" within Rationalism?

Expand full comment

who

Expand full comment

I've been banned from this blog previously for arguing strongly in favour of Marx. On the opposite end of the spectrum I've seen "race war now" types banned.

You can argue that this is good or bad but there are certainly some limitations as to what you can post here.

Expand full comment

you were banned for arguing like you’re in a debate club graded by GPT2

Expand full comment

This appears to be an insulting joke. You asked who was banned and I gave you an answer. Do you disagree that there are limits on what you can post here?

Expand full comment

jokes can be true. You were banned for arguing in a bad way, not for content.

Here: “the bourgeoise should be overthrown and bound with the chains they use to shackle the workers”

doubt I’ll get banned for that

I would get banned, however, if I accused you of being an evil Marxist shill who bla bla bla bla bla for ten comments in a row (which you aren’t, obviously, you’re just some guy who likes talking on the internet like the rest of us)

Expand full comment

There was never any evidence I was arguing "in a bad way".

Expand full comment

Was this an original burn, or did you hear it somewhere else? Because this is 10/10 quality burn

Expand full comment

original lol, after fifty comments worth of srs disc horse I had to get something in

Expand full comment

Then congrats! Funniest thing I've read all week

Expand full comment

My sides are in orbit, thanks

Expand full comment

Not even GPT-3... Man, that's harsh :-)

Expand full comment

How many people can you name that were banned for things that had nothing to do with their ideology, and everything to do with their commenting style?

Expand full comment

I don't know, I tend to forget names.

Expand full comment

Do you think you have a good grasp of what the non-ideological components of the banning policy are? Can you read other people's comments and make accurate predictions about whether or not they'll be sanctioned?

Expand full comment

The banning policy is entirely ideological.

Expand full comment

FWIW, I've argued against banning you, and I still do, but you're really not making it easy for me.

Expand full comment

You are currently commenting on the very blog you claim to have been banned from! I think it would be more accurate to call that a "time out" then. Or perhaps a "suspension".

Expand full comment

Sure, either way it was a restriction on what I'm allowed to say.

Expand full comment

The fact that *you* haven't been banned, despite ample provocation, is more than enough evidence for me that Scott is not generally overusing his ban powers.

(Please, spare me the "who me, I'm just making good points and the fact that everyone finds me insufferable, even people ostensibly on the same side as me is just a weird coincidence" routine. Pull the other finger.)

Expand full comment

What "ample provocation" have I provided?

Expand full comment

Ah, sorry, can you not read parentheticals? You must have missed it, I said:

> "Please, spare me the "who me, I'm just making good points and the fact that everyone finds me insufferable, even people ostensibly on the same side as me is just a weird coincidence" routine.

Thanks for the attempt to amuse and divert, but I'm not interested in continuing this today.

Expand full comment

People who are ostensibly on the "same side as me" can be wrong too. Just look at Freddie DeBoer. The fact that I argue with them just as often as I argue with non-Marxists is proof of my Scout Mindset and good intentions. I'm not one to avoid an argument with someone just because they're a member of the same "tribe".

Expand full comment

> The fact that I argue with them just as often as I argue with non-Marxists is proof of my Scout Mindset and good intentions.

That is not proof of either of those things.

Expand full comment

Yes it is. I'm open and prepared to debate and discuss with anyone.

Expand full comment

Interesting book review. Could you have someone read it through and correct the mistakes? Driving me nuts reading it and correcting them in my mind so that I can understand your points, so carefully made!

Expand full comment

This is perfect: "the damned could leave Hell for Heaven at any time, but mostly didn’t, because it would require them to admit that they had been wrong."

Expand full comment

I love that Scott has read The Great Divorce. Not even many serious Christians I know have read it. It's tied for my favorite Lewis book with Till We Have Faces or Perelandra, and I very much recommend it. Though I think it helps to be Christian to really enjoy them, I think there's value for anyone.

(Just as I think any Christian ought to read and appreciate Camus.)

Expand full comment

I loved The Screwtape Letters even when I was in my more militant atheism phase at the height of New Atheism circa 2010 (now I'm still an atheist but more passive about it)

Expand full comment

That last story about Luke really struck a chord with me, because I also sometimes alienate both sides by not changing my mind UNTIL I get a good argument, and insisting that the bad arguments remain bad.

Two examples:

1) most of the arguments I heard against “Intelligent Design” and for “Undirected Evolution” were crappy; the ID people had some good points and were unfairly maligned. But then I figured out what was wrong with the “irreducible complexity” argument in a way that I had never seen properly explained, and with that insight, was able to go back and see that SOME of the IDers were bullshitters (while others still made some valid points, and while I still find many of the evolutionists to be bullshitters, the evidence definitely points to evolution for now although I can think of several ways in which my mind could be changed by new investigation).

2) The opposition to the usefulness of the IQ measure, and to the degree of heritability that it seems to exhibit, always struck me as hysterical bad faith, which refused to engage in any non-sophistical way with the data and arguments put forward by the IQ proponents. But then I saw Nassim Taleb’s arguments against IQ, and THEY were mathematically valid and more penetrating than what the other side was saying, and most of the IQ proponents failed to engage with them or argued badly against them. I think Taleb, for polemical reasons, goes too far in his opposition, and that heritable individual and group differences need to be reckoned with, but the IQ number itself, and the way in which it is applied, are definitely messed up and a lot of the people (but not all!) who support it can now be seen as bad scientists or “motivated reasoners”.

Expand full comment

Modern genetics pretty powerfully refuted ID - you can follow each gene through species

Taleb claims IQ>100 has no predictive validity - yet when the core rationalists take the test they tend to get 130-140

Expand full comment

Straw man. Modern genetics indeed powerfully refutes the claim that Darwinian evolution does not occur, but IDers (as opposed to fundamentalists and creationists with whom their opponents improperly conflate them) do not claim that. The claim is that Darwinian evolution is not SUFFICIENT, and although the evidence from modern genetics makes me doubt that claim, it is certainly not conclusive enough yet to “refute” that claim.

I already said I thought that Taleb went too far, there is no doubt some predictive validity in higher IQs, but perhaps you don’t understand his arguments in full. Academic success is not a good indicator because of the circularity involved, and when you look at real-world indicators of success like income or wealth or patents or political power or job type the correlations are very largely explained by a threshold model (you need a certain minimum IQ score to be able to become a doctor but among the population of doctors the correlation between IQ and income is MUCH smaller etc).

The reason we know this a priori is that IQ scores are CONSTRUCTED to follow a Gaussian distribution and real world measures have “fat tails” and are decidedly non-Gaussian but instead follow power laws. No Gaussian score will be able to predict them very well. Instead, in order to make raw scores be distributed in a Gaussian way, IQ tests overly rely on a particular type of question which can be made arbitrarily difficult. Without such a category of question, you can’t make raw scores come out Gaussian, but that category (logical puzzles of a familiar type) selects for nerds more than a practically valuable measure should and will still only identify as exceptional a far smaller “right tail” than will excel in real-world situations.

Expand full comment

(I’m not speaking from ignorance here, this is an “admission against interest” from someone who qualified for much more exclusive high-IQ societies than the scores you attributed to “core rationalists” would suffice for: I found them full of unpleasant and litigious underachievers and quit.)

Expand full comment

I’m not aware of any ID proponents who claim that “macro evolution happened but god guided it and it couldn’t have happened without it”, afaik most claim “macro evolution didn’t happen”. Modern genetics refutes the second. The first claim is ... questionable, but whatever, not commenting there

> when you look at real-world indicators of success like income or wealth or patents or political power or job type the correlations are very largely explained by a threshold model (you need a certain minimum IQ score to be able to become a doctor but among the population of doctors the correlation between IQ and income is MUCH smaller etc).

yeah this isn’t true. Consider quantitative hedge fund managers, or employees and founders of software companies. “Among a population of doctors correlation between IQ and income is smaller”: yeah, that’s what a threshold effect does, Berkson’s paradox. Doesn’t suggest against the overall trend.

I agree the Gaussian thing is probably bad, but IQ seems to have ridiculously good validity in comparison to anything else I’ve looked at. And again, I don’t think about IQ much outside of arguments over whether it’s valid; I don’t think it’s that load-bearing. You can just think about capability in general, but capabilities in all areas are ridiculously closely related.

Expand full comment

I’m not arguing “against the overall trend”, nor against the overall relationship between abilities, but the IQ number is unhelpful in lots of ways. The point of the “threshold” effect is that it explains almost all of the “correlation”, so focusing on, say, 130 vs 140 is a waste of time. Of course super outliers like heads of successful tech companies will have high IQs, but they won’t be close to the 180 IQ score corresponding to the percentile of their income!

Programming is one of the professions most correlated to measured IQ because IQ tests emphasize logical puzzles so much, but apart from “are they high IQ enough to do this job at all” IQ as currently measured isn’t all that great a success predictor, and still less so in other kinds of jobs.

Not saying there couldn’t be a better measure, just that this one is significantly flawed.

Expand full comment

Also, remember I am talking about IQ as a generally useful tool. Even if it were valid at predicting extreme outliers, that’s a tiny fraction of people.

Expand full comment

Okay, but if there’s a threshold at 100 for some things, at 110 for others, at 120 for others, and at 130 for others, then it’s effectively predictive up to the top 1% of the population. Maybe it doesn’t have any usefulness at 140 to 160 beyond 130, idk, I think it does, but don’t have proof on me. But “IQ isn’t super predictive beyond 135” is not something I have ever seen anyone else claim, and is way less important

Expand full comment

“Fully predictive” exaggerates. Consider the simplest case, where everyone with IQ below 100 has performance 0 and everyone with IQ above 100 has performance 1. You get a correlation of 0.8 already, which is a lot higher than is seen in real studies. So even if IQ carries some value beyond a binary filter, whatever is left over isn’t a very strong statistical signal... of little use other than to screen out those who can’t do the job at all.
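A minimal simulation sketch of that arithmetic, assuming the usual mean-100, SD-15 norming of IQ scores and the pure threshold model described in the comment above (the numbers are illustrative, not drawn from any study):

```python
# Sketch: how strong does the IQ-performance correlation look if "performance"
# is literally a 0/1 step function of IQ at a threshold of 100?
import numpy as np

rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 1_000_000)       # IQ is normed to mean 100, SD 15
performance = (iq > 100).astype(float)    # pure threshold model: 0 below 100, 1 above

r = np.corrcoef(iq, performance)[0, 1]
print(round(r, 3))                        # ~0.80; the analytic value is 2*phi(0) ≈ 0.798
```

(phi here is the standard normal density; the point is only that a bare threshold already produces a correlation around 0.8.)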

Expand full comment

What did you expect from IQ before you read those arguments? Did you expect a strong correlation with the above?

I would have expected a strong correlation with progress through difficult academic fields. But otherwise that's it.

Expand full comment

Intelligent Design is perfectly compatible with evolution by natural selection.

God gets to choose the initial starting conditions of the universe, and he can correctly predict the effect of any particular set of starting conditions, so if life exists and God exists then he must have intelligently designed it.

Expand full comment

That’s a trivial reinterpretation and it’s not what the IDers mean. They’re not creationists, but they think there is evidence of processes other than natural selection: not necessarily by God, but likely involving some kind of genomic intervention.

Expand full comment

The scientists inserting “gain of function” codons into Coronavirus RNA may have been designers but I’m not sure they should count as “intelligent”...however the reasoning used by some scientists to conclude that COVID-19 originated because of human research is similar in kind to the reasoning the IDers sometimes attempt to use. That doesn’t mean they’re correct, but some of them are trying to be scientific about it.

Expand full comment

I've never heard or read an ID proponent who actually knows what he means. Every time you ask for a precise definition of what "God" is, and how even in broad outlines He might "intervene" or "create" via supernatural means anything at all, there is no concrete answer -- all they can do is wave their hands and make superficial analogies to what human beings do.

Well, the difference between me "creating" a birthday cake and someone "creating" a species or the entire Universe is so profound the analogy is useless, aside from establishing some similarity in motivation or mindset (and even that is dubious, since my mindset when needing to take a dump in public is quite different from my dog's, even though we are creating the same thing, a turd). It loads far too much meaning into my use of the same verb, "create," and assumes because the word has some well-defined meaning when used in the kitchen it has an equally well-defined, or definable, meaning when used to explain the existence of everything. Well, it doesn't.

I mean, the ID argument has always struck me as similar to the Drake Equation. You build a very nice logical framework, in which *if you grant the inputs* the output makes all kinds of plain sense -- but then you hide all the really big unknowns in your definition of terms. This really isn't epistemological progress, and the fact that their opponents take the opposite approach -- they use simple definitions but erect a towering edifice of dubious ratiocination on top of it -- is not any kind of recommendation for either camp.

Expand full comment

Link to the latter?

Expand full comment

I worry about the opening lines, fearing I have also ended up behind the curve, since I have worked on and off for the last six years on a labor-of-love project on similar issues. (The book should be finished this year; unfortunately I say that every year – a blatant example of Kahneman’s planning fallacy.)

Over the years I have become gradually more sceptical about the Bayesian approach to rational decision-making (which seems to be the underlying approach of the book, as well as Scott's review); not least since it does not correspond with how I form opinions and make decisions, if I introspect & try to be rational about it.

Instead, I have become rather enthusiastic about Paul Thagard’s idea of “inference to the best explanation”, which (again using introspection as a method) is closer to how I actually make & change my opinions, and decide on things.

If someone who reads this blog has opinions on the “inference to the best explanation” approach, including how it fits (or not) with interpretations of Bayesian reasoning, I would be very interested in your thoughts – as well as tips on literature.

For those who want a short version of Thagard, a nutshell version is in the open-access journal Philosophies (2021, 6, 52), titled “Naturalizing logic: How knowledge of mechanisms enhances inductive inference”. The core idea is that we form opinions based on the perceived plausibility of “mechanisms”, and that deciding the plausibility of various “mechanisms” is a question of continuous inference-to-the-best-explanation.

Here are snippets from the article, including some of Thagard’s critique of the Bayesian approach. Again, I would be interested in reactions to his critique & line of reasoning, by people who have opinions on these things:

“Should probabilities be construed as frequencies, degrees of belief, logical relations, or propensities? …

Bayesians assume that probabilities are degrees of belief but face problems about how such subjective beliefs can objectively describe the world and run up against experimental findings that people’s thinking often mangles probabilities. …

I think the most plausible interpretation of probability is the propensity theory, which says that probabilities are tendencies of… situations to generate long-term relative frequencies….

What does it mean to say that glass has a disposition to break when struck? Fragility is not just a matter of logical relations such as “If the glass is struck, it breaks” or counterfactuals such as “If the glass had been struck, it would have broken.” Rather, we can look to the mechanisms by which glass is formed to explain its fragility, including how poorly ordered molecules generate microscopic cracks, scratches, or impurities that become weak points that break when glass is struck or dropped. Similarly, the mechanisms of viral infection, contagion, vaccination, and immunity explain the disposition for people to be protected by vaccines.

Mechanisms flesh out the propensity interpretation of probability and point toward a new mechanistic interpretation of probability. … propensities point to unobservable dispositional properties of the… world.

…The second problem with Bayesian approaches to inductive inference is that the relevant probabilities are often unavailable, no matter whether they are construed as frequencies, degrees of belief, or propensities…. Paying attention to mechanisms helps to constrain identification of the probabilities that matter in a particular inferential context. For example, understanding the mechanisms for infection, contagion, vaccination, and immunity makes it clear that many extraneous factors can be ignored, such as demonic possession.

…To sum up the result of this assessment, we can judge a mechanism to be strong, weak, defective, or harmful. A strong mechanism is one with good evidence that its parts, connections, and interactions really do produce the result to be explained. A weak mechanism is one that is not superficial but is missing important details about the proposed parts, connections, interactions, and their effectiveness in producing the result to be explained. Weak mechanisms are not to be dismissed, because they might be the best that can be done currently, as in the early days of investigations of connections between smoking and cancer and between willow bark and pain relief.

…The social significance of the role of mechanisms to inductive inference comes from the need to differentiate misinformation from information.... Separating information from misinformation requires identifying good patterns of inductive inference that lead to the information and defective patterns that lead to the misinformation. Noting the contribution of mechanisms to justifiable induction is one of the contributions to this separation.

Expand full comment

I like IBE, but perhaps only because I've never been able to pin down exactly what it means. Perhaps that's its strength? A vagueness about what constitutes "explanation", and the possibility of being satisfied with that vagueness when most of the available precisifications of the idea are obviously flawed?

I certainly also have a vague discomfort with Bayesianism in principle. It's a fine epistemicized conception of probability, but are we really supposed to think that propensities and mechanisms, resulting in limits of relative frequencies, don't enter into the matter? That might be fine for the limited purpose of generating predictions, but it seems like without any other tools, you'd be prevented from attributing cause to effect (rather than simply predicting event A following event B with arbitrarily high confidence.)

But as I expressed upthread, I still don't see where these views really "come apart", except as a semantic matter of what they term "probability" to be. So discount my opinion to whatever degree you're confident I'm wrong about that.

Expand full comment

Thanks for your comment Crank.

Following up on your point that there is a frustrating vagueness in what constitutes "explanation", the same is even more so the case with "mechanism", which is muddy and sounds suspiciously like a metaphor. If one likes one's theory-of-science clean and neat, there is something unaesthetic about both concepts. But so be it.

Further, using such concepts implies one can hardly avoid holding the (naive) belief that causality exists out there in the world; it is not just something we believe out of habit (Hume) or because our mind is constructed that way (Kant). Which, until recently, was a sign that you belonged to the Great Unwashed philosophy-wise.

But belief in causality (and therefore mechanisms) as something real out there that we with a varying extent of probability can learn to know, is gradually becoming a houseclean point of view again - read e.g. the entertaining interview with Clark Glymour at Marshall's excellent 3.16 blog. Quoting very much from memory here, Glymour is asked if we really can know if causality is not only correlation, and answers something like: Try to turn a doorknob assuming that turning the doorknob opens the door, versus assuming the door opening makes you turn the doorknob. If you believe the latter, or are agnostic as to which is which, you are unlikely to survive long enough to propagate your genes. We are the descendants of those who believe the first causes the second (via the mechanism connecting the handle & the hinges to the door being "openable"). Those who believe otherwise might be philosophically on safe ground, but they are no longer with us, or they will be gone soon.

Ok, I made up the last part...

Expand full comment

True, "mechanisms" and "propensities" are as vague as any concept can be. I'm still not entirely convinced that's a weakness at the level of abstraction this conversation has to happen at, but I can see why it would cause discomfort. As for causality - count me as unwashed! I like Hume well enough, but I cling to the idea that we should make a Cartesian leap of faith to avoid his final conclusions.

As for Glymour's point - I haven't read it and will need to later; but if your summary is apt, I find some to like and some to object to. I like it insofar as it's making a Moorean kind of argument - "your doubts are academic, while the doorknob and my immediate intuitions about it are not". But to bring evolution into it seems circular to me. Beliefs against causation can only self-annihilate if complex causal chains are already supposed, which is the question under discussion!

Expand full comment

The doorknob example was Glymour's. The spin-off into evolution was my add-on.

I concede you have a good point. Bringing in evolution in a debate about the "ontological fundamentals" is arguably an illegitimate philosophical move, since this presupposes acceptance of a mechanism (natural selection), the existence of such items being the issue under debate.

There might still be second-line defences of the existence of causality as an external phenomenon, using "Glymour's doorknob" as a springboard for the argument...how about this: People seriously being agnostic about whether the door opens when they turn the doorknob or the other way around, may be seen as performatively self-contradictory when they move their bodies around in their everyday lives. At least, one should expect them to end up catatonic more often than they do….hmmm…

I add this because I balk at having to do any type of “leap of faith”, Cartesian or otherwise, to get there, as you suggest is necessary.

I enjoy reading Hume as well, but try to remind myself that it is a young man’s book – the Treatise was published when he was in his late 20s, which means he must have written his 500+ page tome in his mid-20s. He was a very clever young man who, as very clever young men often do, was out to demonstrate his own cleverness, including its shock value. Hume is certainly right that only correlations are observable, causality is not. However, the same is the case with “intentionality”, and “consciousness” itself – no mental phenomenon is directly observable.

..I admittedly have some admiration for old-style positivists who then draw the conclusion that all of this stuff should be categorized as “metaphysics” and be discarded, along with belief in non-observable angels and demons. B.F. Skinner tends in this direction in his most hard-nosed behaviourist days. But if not willing to go so far as to doubt the existence of your own consciousness, you should not have to doubt the existence of causality as well. Perhaps that is what you mean by “Cartesian leap of faith”? If so it is then a very small leap; perhaps a “Cartesian minuscule jump” is more accurate…?

Anyway, the above is sort-of a distraction into the dark cave of ontology (I make a living as a social scientist and only venture into this shadowy world when my writing reluctantly takes me there). My beef was really with the Bayesians and how they arrive at, and adjust, their subjective probabilities. I have a suspicion that when they do adjust, what they are (really) doing is inferring to a better possible explanation, based on some new more-or-less clear hunch that the mechanism they thought was important, was not so important after all; or that it was countered by some other mechanism(s) they were unaware of the first time, which they have now become aware of, which has changed their (subjective) probability estimates.

Expand full comment

Loved this & I'm fascinated by the connection between not being an intellectually obstinate prick & increased mental wellness. As a recovering intellectually obstinate prick I've noticed that calibrating one's beliefs somewhere around the mid-values of a certainty scale makes me less hateful & contemptuous of people I think are probably mistaken. Consequently I've decided that a policy of radical uncertainty may be quite rewarding.

Expand full comment

"Consequently I've decided that a policy of radical uncertainty may be quite rewarding."

A very important point, I think. Too much of modern thinking seems to be underpinned by the assumptions that 1) We CAN know the answer, 2) We DO know the answer, and 3) I personally MUST know the answer to be a good person. It is a lot easier to get along with other people when you can just shrug and say "Eh, their choices are their business. Who am I to tell them what to do?", but that requires admitting that at best 1 is a maybe. I can't help but think the hubris around our knowledge has put us where we are today.

Expand full comment

I also seem to recall caring much less about *knowing* everything or *having the right take* before having all these platforms on which to air my views & push back on the views of others. It feels like my essential dickishness is jacked by the current information sharing infrastructure 🙄

Expand full comment

I expect that is a big part of it. Remember back in the day when you weren't supposed to talk about politics or religion at social functions? I will admit, at this point it is hard to imagine what people talked about; now if you can't spout the "correct" policy position you are pretty much a pariah. Especially now that politics and religion seem to be practically the same thing. :(

Expand full comment

I appreciate Julia's perspectives.

Expand full comment

> Like - a big part of why so many people [...] moved on was because just learning that biases existed didn’t really seem to help much.

I've made this point a few times. I brought it up just a few days back. As much as rationality bills itself as systematized winning, there doesn't seem to be a lot of focus on helping people win. Everyone has a different definition of winning, but I'd argue the broadest, most normal definition is something like: have work you enjoy that pays you a lot of money which you manage into financial independence while improving your health and having lots of good friends and a loving spouse.

I've read precisely zero advice on how to achieve any of that from the rationality community. Or anything like it. Or how to achieve any alternate version of winning. What advice exists seems incidental to the larger project of bias spotting and correct thinking. I enjoy this and I enjoy the debate. But I honestly feel like it's a hobby rather than anything that improves my life.

That's fine as far as it goes. But when people point me to EA or CFAR it just looks like more consumption and hobbyism rather than self-sustaining change or self-improvement. Frankly, this strikes me as a problem for the rationality movement. Yes, it's nice to be rigorous and correct. But at the end of the day most people respect success above all else. Doomed intellectual victors still are doomed. The most successful ideologies whether religious or secular have traditionally promised people real benefits in this life or the next. Rationality just kind of skips this. Which is weird to me: wouldn't it be rational to seek a way to win and propagate those methods?

Freddie posted a bit below. Freddie is an honest, no BS communist (or does he identify as a socialist? some kind of far leftist). He would have no problem pointing out the specific material benefits of his ideology both on the individual and societal level. You may disagree whether his ideology achieves them but you can't disagree that he claims they are the goal or that his movement will try to create them for you. Decommodification, the end of anomie, improved terms on your specific rental agreement, better working conditions, etc. Christianity would have no problem pointing out a similar list of benefits. What's rationalism's equivalent?

Expand full comment

"have work you enjoy that pays you a lot of money which you manage into financial independence while improving your health and having lots of good friends and a loving spouse."

I think the difficulty there is that most of the rationalist online people are *starting* from that baseline, so they no more think about "how do I manage this" than you think about "how do I tie my shoelaces?"

They tend to be smart, they tend to be in good jobs that pay well and that they enjoy, they tend to have a circle of friends either IRL or online that they are close to. I'm not going to touch the polyamory angle because we've already done that one to death.

But any group or movement which can unironically recommend a site like 80,000 Hours as "this is a great way to put your wishes about being an effective altruist into action!" is not aimed at the likes of "okay, so you want to move on from your job stacking shelves in the local supermarket to something more rewarding, like maybe a secretarial job in a small business" people who need advice about "how do I get a high-paying job doing work I enjoy and make friends and have a successful romantic life leading to marriage?"

Expand full comment

My experience is that's a minority of the community albeit one that the community likes to trumpet as "typical" for various reasons. In particular the financial management and social/romantic parts of their lives often seem unsatisfactory. A few do have very high incomes but even then those other parts often fall down.

And, to be honest, I've met a lot of people who comment here with average or below average incomes. I actually have given advice to one woman who was a stockperson looking to move up in life. (Last I heard from them they were an admin assistant making twice what they were before, though not six figures.)

At any rate, if you're looking to do good, imparting the knowledge of how to tie shoelaces should be a priority. At least in a world where most people can't even do that.

Expand full comment

“For evil men account those things alone evil which do not make men evil; neither do they blush to praise good things, and yet to remain evil among the good things they praise. It grieves them more to own a bad house than a bad life, as if it were man's greatest good to have everything good but himself.”

-Augustine of Hippo, City of God, Book III

I see rationality as helping people win by becoming better people: not by getting better material goods. Or at least, that's the pitch.

Expand full comment

I've been told elsewhere that rationality is specifically not old-style Christian rationality, where rationality was seen as an ipso facto good. If you believe in the purity of a rational mind, the teleological argument about man as a reasoning animal, that's certainly a counterargument. Albeit it transforms rationality from a pragmatic into a spiritualist movement.

At any rate, I specifically mentioned several elements of a good life, not just money or houses.

Expand full comment

Eh, I just love that Augustine quote and it's been a dog's age since I had a setup that seemed appropriate to use it.

If I'm being honest, I'm not much of a believer of capital R Rationality=winning either.

Expand full comment

Counterargument: https://putanumonit.com/category/love/

Expand full comment

I acknowledge this is an example of the sort of thing I'm talking about in one specific realm (romance). Or rather it's the beginning: presumably a real commitment would involve actual specific advice and self-improvement with measured outcomes and all that.

Expand full comment

I'm repelled by those who use the term "winning" when referring to life in general. Reminds me of Charlie Sheen during his manic episode and obnoxious jerks in general. I realize that you are simply echoing the term.

Rationality seems like one useful tool to keep well-sharpened in the toolbox, but living a satisfying, well-rounded life likely requires many more tools than that, as well as some luck.

The rationalist community, something I've only read about, creates value if it brings likeminded people together who enjoy each others' company. Similarly, the LA surfing community also creates value by bringing likeminded people together socially. It's only a hobby to some, but it can also be a lifestyle if you spend enough time doing it.

If the rationalist community merely brings people together and improves lives due to that, isn't that enough? Merely because it is a network, it seems likely to improve the odds of good romantic, friendship and career outcomes for its members. Unless, of course, it's got some dark pyramid-scheme-like social dynamics such as those of an actual cult. (I don't believe that is the case.)

Expand full comment

yes, Jack gets it.

Expand full comment

Yeah, I wouldn't have chosen "rationality is systematized winning" as a claim. I'd have chosen something like "rationality is a way to think more and think better." But as you say, I didn't make the initial claim. Yudkowsky did and it's been echoed by other rationalists.

I've repeatedly pointed out that rationality is somewhat Cartesian in its worldview and its relationship to teleological ideas of the purpose of humanity. "Man as a reasoning animal" etc. This has gotten a lot of pushback.

As I said, I enjoy the community because I like interesting ideas, debating, etc. That is what I think rationality is. I don't think it's what rationality claims to be. I think there's a specific motte and bailey going on here wrt rationality's benefits. When rationality helps someone become very successful that's touted as being due to their rational mindset. When it doesn't consistently do that then it's just a way of thinking.

Expand full comment

Every heuristic eventually turns into an ideology. This is BAD.

And yet you are saying: I want to use rationality as an ideology, not just as a heuristic.

You might want to reconsider that desire.

Expand full comment

Firstly, I'm not a self-identified rationalist. (And you should discount my opinion accordingly if you care about being a rational soldier.) So I don't have a strong desire either way. I'm judging a specific claim here: that rationality is systematized winning.

Could you define ideology vs heuristic?

Expand full comment

A heuristic is a rule of thumb, a pattern recognition that, most of the time, things happen according to this pattern.

An ideology is what happens when you forget that this is pattern recognition and insist that the heuristic represents total reality.

The transition from one to the other happens ALL the time. Most commonly, as far as I can tell, as a tool to demonize the other.

In other words I, a vaguely rational human being, implicitly use a heuristic that the Democrat political program more closely matches my beliefs and desires than the Republican program. You, an ideologue, convert that into "BY DEFINITION if the Republicans support it, it must be wrong and evil". Or

I use a heuristic that I tend to like Apple products. You convert that into "anyone who uses Windows is a moron with no taste".

I use a heuristic of utilitarianism to make some decisions. You convert that into a claim about all ethical decisions everywhere of all types, and then either promote various types of behavior or insist that everyone who refuse to condemn utilitarianism wants to engage in that behavior.

If you study intellectual history, you will see this as one of the three great patterns, along with

- tactical decisions today (we don't really care about X, but saying that we do gives us a temporary advantage over our enemies) are treated as gospel by the next generation (who have heard about the importance of X every day of their lives). And so whether it's evolution or abortion, some lightweight idiotic choice becomes a shibboleth.

Evolution: Dealing with Darwin and Adam's Ancestors

Abortion: The well known story of taxing bible colleges.

- the narcissism of small differences. Leaders emphasize differences precisely to create a tribe they control, then the previous mechanism kicks in. The difference from the previous case is whether there is a pre-existing split, and some item in the air is exploited as yet another tribal marker, vs the creation of a new tribe (Hillary supporters vs Bernie supporters, Eagles vs Rattlers) out of nothing.

The difference of both these two vs the first is that the latter two are mostly driven deliberately by "moral entrepreneurs"; the first appears to be organic, to happen anywhere and everywhere.

Early Christian history is a great place to look for these behavior patterns because very few people now care about Arianism vs Nestorianism or homoousios vs homoiousios. For ~1700 years people were willing to die and kill for these words, and yet early church history shows how contingent every one of these decisions was, mostly falling into the patterns above.

Expand full comment

The important difference (heuristic vs ideology) is what do you do when you encounter a case that contradicts.

A heuristic user says, "yeah well, what do you expect?" and recalibrates from "90% things work out like so" to "85% of the time things work out like so".

An ideologue starts with "how can I use this claim about reality to control, beatify, and demonize those around me" and when faced with a contradiction insists on rewriting the world so as to eliminate the contradiction (or, usually easier, eliminate the people pointing out the contradiction).

Expand full comment

Just a basic tip: if you're going to contrast you and the other person in a hypothetical ("I X but you Y") then making yourself the villain sounds less harsh than making the other person the villain.

Anyway, yes, frozen in place judgments are certainly a problem. I understand that but I don't understand what it has to do with my point that rationality seems more concerned with general investigations than coming to practical life tools. It doesn't necessarily have to be dogmatic: it could be tools for analysis or a constant pressure to compare what you've intended to what you've achieved. But that doesn't seem to be the focus.

Expand full comment

I didn't attempt to answer Thales' question "how can I use rationality to corner the olive oil market" (I do that in a different comment,

https://astralcodexten.substack.com/p/book-review-the-scout-mindset/comments#comment-3089450 )

I attempted to answer (with background) the question "Could you define ideology vs heuristic?"

Expand full comment

I was always surprised by how many people found rationality useless in their personal life. It was just obviously useful to me.

Being able to sit down and say "What are the main roadblocks in my life and how do I lift them in a rational manner" is... Really useful.

Like, try having irrational dogmas around health (physical and mental) and see where that gets you.

Expand full comment

I think you're giving rationality too broad a grant there. Sitting down and looking at roadblocks and thinking how to remove them is standard self-help advice and not advice I've gotten out of reading LW. But this is exactly the kind of thing I'm saying I'd expect to see more of out of a Systematized Winning movement.

Expand full comment

Maybe for some. But not for me.

If you come to rationality from a strong background in science based medicine, science based therapy, science based nutrition and so on, then sure, it's probably not going to add a lot to your life.

But if you come from believing in aromatherapy, Freudian theory, and anything organic is healthy then by god you'd be surprised where it'll get you.

The above certainly exaggerates my experience, but rationality helped me a lot in reevaluating a lot of beliefs that were neither reasonable nor helpful.

Expand full comment

Fair enough. I think we agree on the best thing rationality can do for a person. I guess I'd say I don't see a focus on it though. Like, I haven't seen a rationality based takedown on Kangen Water, have you? Or am I missing something?

Expand full comment

Definitely not a focus on it no. I find it a bit odd that open threads are awash with various long-form theories of human status, but bereft of conversations about how to improve immune function or physical fitness.

But rationality was the top level thing that helped me evaluate experts and advice in different areas. Helped me change my mind when the evidence was in. So that brought me to people like Andrew Huberman and Joel Jamieson.

Haven't heard of Kangen Water until now, but the science based medicine community is normally good for stuff like that. Stephen Novella, Harriet Hall etc.

Expand full comment

Come on, let's not pretend, we all know the real subtext here which is: Rationality did not help me pick up babes.

And while this is true, in the sense that the complainer tried to approach the babe-picking-up problem "rationally" he didn't do the work! It's easy to watch a few Hollywood teen comedies, or read a pickup artist web site, and believe you have now engaged in a comprehensive study of the female babe and are ready to work your magic.

But true rationality (ah, good old no true scotsman!) would look at this at a meta-level and ask: why did I believe in the first place that the world experts in babe-picking-up were Hollywood and night club sleaze bags? Is it just possible that their agendas skew to something like "selling tickets" and "selling courses" rather than to "making me happy"?

At which point you can reconceptualize the problem, look for other sources of advice, and do the real hard work. Tip -- the single most important thing most women (yes yes there are always exceptions, do we have to make this about feminism in the middle of something else?) want from a man is to feel safe and secure every moment of every day. If you can provide that, you are already in the top decile.

But that means you PROVIDE that safety and security every day! If you say you will pick up food, you pick up food.

If you say you will meet someone at the airport, you meet them at the airport. And yes, traffic happens and things are unpredictable -- we all know that, which is why you leave early and you hang around the airport -- YOU engage in a little inconvenience so that SHE does not feel scared and abandoned when you fail to arrive because you couldn't be bothered to leave 30 min early and were "surprised" by traffic.

You don't get drunk or use drugs or do other things that turn you into a violent loathsome creature. You hold your temper and your tongue. If she irritates you, you swallow it and respond reasonably -- not sarcastically, not passive-aggressively, reasonably. You're a rationalist FFS. Rationally, you want this relationship, and 2 min of irritation is nothing compared to the relationship. So ACT rationally; don't act like a petulant child assuming the world will care about your rant about how she is being unreasonable because XYZ.

*meta*-rationality will get you all you want. But you have to put in the work to be meta-rational -- which, as Scott says, is close to identical to the work of being a decent human being, a decent adult, a decent citizen, and a decent man. And part of meta-rationality is not having a tantrum every time (oh what a surprise, you were not aware of this?) most people around you behave in ways you consider irrational.

Or, to put it a third way:

- is your goal to win a one-time game (if necessary by cheating)? And never be invited to play again.

-or is your goal to keep playing the game, being a person other people want to play with, day after day, year after year; which means often losing so they can win?

Expand full comment

I'm not a rationalist as I've said a few times now. And I don't have a hard time "picking up babes." I'm doing fine on the dating market. I've never bought a course or gone to a guru about dating either. So no, that is not the subtext here. And less of this please.

Expand full comment

You do realize this is a public forum, not a private conversation, right?

I am writing for the general public, not for you as an individual.

Half the comments that you seem to feel I directed at you weren't even in reply to your comments!

The threading software is not great in terms of how it handles this (especially how it mails out "comment notifications"), but a good heuristic going through life is to remember that no-one is the star of any movie except their own.

(Or go watch _The Map of Tiny Perfect Things_ which is a truly wonderful movie that, the only time I have ever seen this done well, makes that point in a lovely way along with also being a great movie along so many other dimensions.)

Expand full comment

There is big value in having true(-ish) facts and theories that you can pick from for making predictions and reaching your goals (that is the whole point of science), instead of factoids (whose truth is unknown and not especially relevant) which are only ammunition to influence others to reach some goal. The first is a kind of common good and should only increase in amount and quality with time, especially because you can do inference without having to spend too much time constantly cross-checking premises. The second is reusable only within the group sharing the same goal, and has no reason to improve. But only the second is of immediate personal value, so it seems modern science suffers from the tragedy of the commons :-)

Expand full comment

It could well be. I think about this a lot: honesty might not always pay as well as coalitional dishonesty. But regardless does rationality go out of its way to supply such facts about your day to day life? It seems more like it's a method of thought purification rather than, say, an attempt to create answers on how to get a date while cutting through all the BS advice. (Which I bring up because there's at least one blog that did try to do that.)

My point is the ratio of ephemera to praxis seems heavily skewed towards ephemera. Which is fine! But again, I expect people to adopt the mindset and then say, "This is interesting but not pragmatically helpful." As this review started with.

Expand full comment

The opening paragraph literally made me laugh out loud. A+ introduction.

Expand full comment

"Scout Mindset"

The first image that came to my mind was a Boy Scout helping an old woman across the street. Not to dismiss the urge to be helpful, but I see now the subject is broader: the search for truth.

I was asked a few days ago why I write. I responded:

>>My motive in writing is to seek the truth. Writing things down helps me to clarify my thinking. Over time I learn more information and I edit my previous work. Sometimes I get feedback which is helpful and I edit some more.

When I'm talking/writing with someone and it touches on stuff I've already written, I can just give the link rather than trying to express it again.

I do hope my point of view sort of osmoses into the cloud, but I don't expect this.<<

I started following Scott in a casual way because he said some insightful things about basic income (one of my main interests).

https://slatestarcodex.com/2018/05/16/basic-income-not-basic-jobs-against-hijacking-utopia/

Slowly I became aware that Scott is a proponent of something called "rationality" and perhaps a member of a “rationalist community”.

Rationality does sound like a good idea and being a scout a better goal than being a soldier.

Expand full comment

Scott, America has been on the metric system for nearly two centuries! The American Customary System, the law of the land, is a metric-based transitional units system that defines legacy English units and encourages, but doesn’t require, people to switch to metric. This law passed in the early 1800s! The people who have switched are people who had a high motivation and low cost to switch (science, with lots of custom tooling) and the people who didn’t had a high cost to switch (highways, heavy industry with all its existing tooling).

On the topic of the post, at one point in the book, Galef advocates presenting points of view in such a way that people don’t know if you’re for or against it. This is good! You can take it even a step further when there are multiple positions - say, in an argument - and you present them such that nobody can tell which one(s) you support. That delivery shows your respect for the person you’re debating, even if you strongly disagree with their ideas.

That sort of clinical explanation also lends itself well to deadpan humor, if you’re in to that sort of thing.

P.S. my calibration numbers (for buckets 55%, 65%, 75%, 85%, and 95%) were 67%, 71%, 80%, 86%, and 100%. Uniformly under-confident. But I also go around saying “most tautologies are true” so maybe I should have expected it.

Expand full comment

I saw a fun infographic going around showing the set of things that British people use the metric system for (long distances, lengths of running races, weights of non-humans, volumes of liquids other than beer or milk, weather) and things they use the imperial system for (speeds, heights of people and short distances, weights of humans, volumes of beer and milk) and realized it's basically the same as the United States, except for speeds and temperatures.

Expand full comment

You can take an automated version of the calibration quiz in the book (the same questions) here https://calibration-practice.neocities.org/

Expand full comment

Huh. I'm underconfident. Thanks for the link.

Expand full comment

So much humblebrag potential!

Expand full comment

55% 8/13 (62%)

65% 5/3 (63%)

75% 7/7 (100%)

85% 2/2 (100%)

95% 10/10 (100%)

It appears I have fewer buckets than I think.

Expand full comment

> Someone who updates from 90% to 70% is no more or less wrong or embarrassing than someone who updates from 60% to 40%.

The size of a Bayesian update can be quantified by the difference between the values of the logit function, ln(p/(1-p)), for the prior and posterior probabilities. Going from 90% to 70% is a bigger update than 60% to 40%, and going from 100% to 80% is an infinitely large update. If embarrassment is quantified by "how wrong was I before I made this update?", then 90% to 70% is indeed more embarrassing than 60% to 40%.
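A minimal sketch of that arithmetic, for anyone who wants the numbers; the `logit` helper below is just the ln(p/(1-p)) formula from the comment above:

```python
# Compare update sizes in log-odds (logit) space.
from math import log

def logit(p):
    return log(p / (1 - p))

print(logit(0.9) - logit(0.7))   # ≈ 1.35 nats for the 90% -> 70% update
print(logit(0.6) - logit(0.4))   # ≈ 0.81 nats for the 60% -> 40% update (smaller)
# logit(1.0) would divide by zero: in this sense a 100% -> 80% update is infinitely large.
```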

Expand full comment

I came here to say this.

Expand full comment

Another victory for log odds!

Expand full comment

But if embarrassment is quantified by "how much will this affect the expected values of the acts or policies I'm considering?" it turns out that difference in probability is in fact the relevant metric. (I'm usually a big defender of log odds as a more useful representation than probability, but for decision theory, probability really is more useful for this reason.)

Expand full comment

Someone who updates from 90% to 70% is no more or less wrong than someone who updates from 60% to 28%. And in a perfect world neither are embarrassing.
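A quick check of that equivalence, reusing the same log-odds arithmetic as in the comment above:

```python
# Both updates change the log-odds by exactly ln(27/7) ≈ 1.35 nats.
from math import log

def logit(p):
    return log(p / (1 - p))

print(logit(0.9) - logit(0.7))    # ≈ 1.3499
print(logit(0.6) - logit(0.28))   # ≈ 1.3499, the same size of update
```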

Expand full comment

A message to all you dudes who tried to employ scout mindset and didn't achieve perfect rationality:

You all failed because you were not true scouts, men.

Expand full comment

I worry that too little attention is given to the acknowledged point that it is perfectly appropriate to treat new evidence differently based on your prior assessment of the totality of the evidence in the aggregate. If I learn some new datum, but I come in with a holistic worldview under which my belief in this area is strongly held and commonsensical then it isn’t necessarily confirmation bias to treat the datum fairly dismissively if it contradicts that much larger, integrated understanding of reality. This strikes me as a much more fundamental obstacle to changing minds than is presented in the review, not least because it may be wholly rational. It seems really hard to tease this apart from situations where cognitive biases may be at work.

Expand full comment

Strongly agreed. It's fine to say "apply Bayes' Rule to the available evidence", but only once you've decided what the evidence *is*. If evidence can be legitimately discounted or disregarded based on significant enough disagreement with your priors, that kinda gives up the game.

The conclusions one can draw from "The Control Group is Out of Control" don't stop at "you shouldn't have updated so strongly based on this telepathy study." That conclusion rests on and implies the following, in increasing strength: "The certainty numbers scientists attach to their studies are manipulable and don't determine how much you should update based on a study"; "Your judgment can be better than a study, even if that study is an ostensibly correctly performed meta-analysis"; and "Much of science reflects individual opinion and institutional incentives, regardless of the truth". And of course, the idea that this principle is strictly limited to science is tenuous.

Expand full comment

The change of view in Julia's book, from "teach people methods of belief updating" to "teach people to be less defensive about their beliefs", is in line with Stanovich's data about myside bias. If I understand correctly, he indicates that the political divide results mainly from expressive rationality: people want to signal affiliation and group identity, not change beliefs.

Reading the book I found the emotional part more helpful, but I wonder how effective the tests are at countering partisan tendencies.

Expand full comment

I agree, this all reminds me a lot of Bryan Caplan's "Myth of the Rational Voter" and the point that most people hold beliefs for reasons other than "These are true". I think you can go farther and say that many people hold beliefs more strongly the less able they are to actually determine if they are true or not.

Expand full comment

Well yeah. The whole point of a belief is to come to a conclusion in the face of insufficient evidence, for personal or social psychological reasons. If there was enough evidence, you wouldn't need to form a belief at all. I don't need to form a belief about whether there's enough gas in the car to drive to work, I can just check the gauge and know.

Presumably the function of a belief is to come to the right conclusions "prematurely," before there is enough evidence to prove a fact, so that when enough evidence does roll in, you will not have to change your mind, and any decisions you made will have turned out to be correct.

But evidence is almost never only in one direction. It's "noisy", sometimes it points towards your belief, sometimes away. The less evidence you start off with, the more likely the oncoming "noise" will be equal or greater in magnitude than the seed of your belief. So if you intend to hold the belief, you will need greater faith in it the smaller the amount of seed evidence you've got. The strongest beliefs will necessarily be those based on no evidence at all, because otherwise they would fall victim to almost any trivia that comes along and your ability to hold them until they are proved right would be zip.

The whole problem is dodged by the Bayesian position of holding options on beliefs, instead of beliefs themselves. "I have an option on Belief X that pays out at the ratio of 8:1 if it works out to be true." "Really? I bought an option that pays out at the ratio of 6:1. I guess I'm more confident in X than you."

This would all make human psychological sense if we could actually think this way, hold options on beliefs instead of, you know, actually hold beliefs.

Expand full comment

I think we might be using the word "belief" differently. I would say in your gas in the car example that you do have a belief about the level of gas, specifically that you have X amount, where X amount is what the gas gauge shows. Or that you have a belief that the gas gauge is accurate enough that you don't need to double check with another method.

You say that the point of a belief is to come to a conclusion in the face of insufficient evidence, which... I agree with, but would consider that trivial, because we are always working with insufficient evidence. There isn't really anything we can know 100%. I am not sure what you would call the alternative to belief, something you know with sufficient evidence such that you don't form a belief.

In my phrasing, I would say that everything we know is a belief, and some we hold more weakly or strongly than others. Weakly held beliefs we are willing to go either way on, changing our beliefs (or surety of that belief) with less evidence required. Strongly held beliefs take a lot of evidence to shift us on. Optimally, strongly held beliefs would have a lot of prior evidence in their favor, while weaker ones would be things we don't know much about or have much experience and evidence around. My point above was that many people seem to hold most strongly those beliefs that they have the least ability to directly test instead of the most ability, which is exactly backwards. We should be less attached to beliefs that we have no real ability to confirm as true and have conflicting and poor evidence for.

Expand full comment

Yeah, OK, but in my world you're expanding the definition of "belief" so far that it stops being very useful. It's fine, but now we need to introduce some adjectives to distinguish beliefy beliefs from facty beliefs, and we can do that, but I prefer to just let the nouns carry the weight of the distinction.

And I would ask you to reconsider the *utility* of beliefs (or the more beliefy beliefs). There's a reason we have them (a belief I hold due to my belief in natural selection and evolution). We would not act that way unless it was adaptive, at least in many situations. And it *is* adaptive to be right before the evidence is all in, because that allows you, say, to buy AAPL at $20 and sell it at $2,000. Big win.

So then the question is: what strength of belief do you need for it to fulfill its adaptive purpose, given the already existing "seed" evidence, and the probable level of noise you'll encounter in the future? And the answer is: the less seed evidence you have, the stronger the belief needs to be to serve its (adaptive) purpose of enabling you to weather the storm of doubt that will come with future noise.

So that explains why it's natural and sensible for the strength of beliefs to be inversely proportional to the amount of seed evidence for them.

Or put it another way: your kid brings home a D on an algebra test. How much strength do you need in your beliefs that (1) he'll figure it out on his own and do just fine on the next test, and (2) he'll figure out how to succeed in school on his own, eventually, and not end up a drop-out doing drugs under a bridge? The D provides much *less* evidence for (2), but for that very reason that needs to be a stronger belief to serve its adaptive purpose (which is you not abandoning your child in despair).

Expand full comment

I think you are making an epistemological mistake differentiating between beliefs and facts the way you seem to be doing. The relevant distinction seems to be between beliefs that are more accurate or less accurate, perhaps more or less true. Facts are not necessarily more or less true than beliefs; the useful dichotomy seems to be between facts and opinions, not so much facts and beliefs. (Not that I think fact/opinion is terribly useful a distinction at root, but that's another topic.) You don't go into detail, so I am not sure if you mean facts in the sense that they are objectively and independently arrived at by different people, or something that can be measured, or what.

When we are talking utility of beliefs, we have to consider that we use beliefs for at least two different things: making predictions about the world for our gain, and telling stories about the world to feel better about ourselves.

For the former there is a direct correlation between the accuracy of the beliefs and their utility. A belief that encourages you to buy AAPL at $20 so you can sell it at $2,000 is only good if the price actually goes up. A belief that drives you to buy Clean Energy at $120 because you think it is going to $2,000 isn't so hot.

So ideally for that sort of belief, the predict-the-real-world-well sort, you want to have evidence to match your certainty. Low evidence/certainty suggests that maybe you spend $100 on a stock, while high evidence/certainty suggests mortgaging your house.

In other words, doubt is good when you really don't have good reason to trust your predictive model. If your predictive model of the world doesn't do a good job accounting for the "noise" you should wonder if you are right.

As to the kid example, if you really have so little prior evidence that your kid is able to succeed at school that a D on a single algebra test is throwing you off towards abandoning them in despair, you probably should seriously consider getting them remedial help. It might be exactly the right thing to do, although I would wonder why it took that long to get there. Maybe gather more evidence, by talking with the kid and seeing what's up? Going over their work to see if they can do it or if there is some deeper issue they can't get past? It is probably worth keeping the possibility that he needs extra help alongside the possibility that the school is a problem alongside the possibility that he just had a crappy test alongside the possibility that academics isn't his thing but he'd be a great mechanic alongside etc.

Expand full comment

Calibration score

Mean confidence: 75.92%

Actual percent correct: 69.39%

You want your mean confidence and actual score to be as close as possible.

Mean confidence on correct answers: 79.71%

Mean confidence on incorrect answers: 67.33%

You want your mean confidence to be low for incorrect answers and high for correct answers.

I think I was doing this wrong. When I had little idea I chose 50%. I was thinking "50-50 chance of being right". I think now I should have been choosing something closer to zero.
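For the curious, here is a minimal sketch of how a summary like the one above could be computed, assuming each answer is recorded as a (confidence, was_correct) pair; this is not the quiz's actual code, and the data is illustrative only:

```python
# Minimal calibration summary from a list of (confidence %, was_correct) answers.
# The example answers below are made up.

answers = [(90, True), (70, True), (60, False), (80, True), (50, False)]

def mean(xs):
    return sum(xs) / len(xs)

mean_confidence = mean([conf for conf, _ in answers])
percent_correct = 100 * mean([1 if ok else 0 for _, ok in answers])
conf_when_right = mean([conf for conf, ok in answers if ok])
conf_when_wrong = mean([conf for conf, ok in answers if not ok])

print(f"Mean confidence: {mean_confidence:.2f}%")         # want this close to...
print(f"Actual percent correct: {percent_correct:.2f}%")  # ...this
print(f"Mean confidence on correct answers: {conf_when_right:.2f}%")    # want high
print(f"Mean confidence on incorrect answers: {conf_when_wrong:.2f}%")  # want low
```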

Expand full comment

I didn't see an option for anything lower than 50%.

Expand full comment

Yes, I went back to see if I could take it again with different questions, and sure enough 50% is the lowest. My 50-50 interpretation was correct in the first place.

I was sort of proud that my confidence was only six points higher than my performance.

Expand full comment

50% is the minimum, and that's the correct choice for "I have no idea". (If you were to predict less than 50%, you're saying you expect yourself to be wrong and you should instead reverse your guess and predict the inverse)

Expand full comment

Yes, of course you are correct. After I left the quiz, I remembered the percent question as something like "How certain are you that your answer is correct."

Expand full comment

Can I put in another desperate request for our host to demand an ignore-user function from Substack if he won’t just ban a certain commenter outright? You’ve written a wonderful review of what sounds like a wonderful book and the comment thread was shaping up to be great until a certain individual started with the usual argumentative spam. Possible Chrome plug-ins are of no use given that not everyone does all their reading in Chrome on a desktop.

Expand full comment
Comment deleted
Expand full comment

A weird AI, or Weird Al himself? (Wow, the typeface here is not helping the joke.)

Expand full comment

It’s odd how his behavior has gone from annoying to slightly endearing to a few of us. Like “oh, there goes marxbro, at it again. Classic marxbro”.

I’m not against a ban though!

Expand full comment

I'm not sure if Polynices was talking about me.

Expand full comment

Is there another user who gets a comparable number of complaints?

Expand full comment

Great write-up Scott. I liked how you pointed out Julia's coping mechanism section as a way to differentiate the book. The section reminded me of the Second virtue of rationality, the "If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool." Easier to say than to actually feel.

Additionally, I would like to add a new internet law.

Godwin's Second Law: The longer the ACX comment thread, the more likely marxbro is involved and derailing the discussion.

Expand full comment

The status quo / conformity bias test is a terrible tool; I found myself agreeing with all sorts of majority positions just because they'd be contrarian in the counterfactual world.

I assume if we summed our biases with a regular person we'd be a rational oracle.

Expand full comment

How would Julia Galef want us to think about Marxbro's activities in ACX comment threads?

Expand full comment

I assume she would be supportive. I'd be happy to talk to Galef about Marxism at any time.

Expand full comment

The degree of self-awareness…

Expand full comment

> in case this is starting to sound too touchy-feely, Julia interrupts this section for a while to mercilessly debunk various studies claiming to show that "self-deluded people are happier"

Intuitively I'm very convinced that self-deluded people are happier, but I cannot wait to change my mind!! Anything one can read about that on the internet? Is she pointing to scientific papers?

Expand full comment

Note that 'most studies showing X are terrible' doesn't mean X is false. Julia has said several times that she had to take many of the studies she'd linked as evidence for her points out of the book because, on closer look, they were also bad.

Expand full comment

(To be clear, that was before the book was published, i.e., she's taken them out of a prior version. (Why can't you edit posts anymore?))

Expand full comment

Perhaps the book's analysis could benefit from a thorough examination of its own premise by first answering the question: "What is a 'Mindset,' and why do you need one anyway?"

It sounds like the book's premise is that everyone's ideal "mindset" is to be a relentlessly objective truth-seeker. That's fine and well in a domain that potentially has objectively correct answers to questions (e.g., what is the relationship between increasing CO2 levels and the temperature of the atmosphere?) But what's the correct "mindset" when addressing squishy value-laden controversies that don't have objectively right or wrong answers? (e.g., is it morally wrong for humans to change the Earth's climate and ecosystems when the economic benefits of doing so outweigh the economic costs?)

Is one's "mindset" solely concerned with one's personal beliefs? And is there any reason one's personal "mindset" must always be aligned with the positions the person advocates publicly or attempts to persuade others to believe. For example, is it wrong to confidently argue for a position -- like a lawyer advocating for a client -- that you don't 100% believe yourself. After all, if the argument is persuasive to others, why does it matter that you don't find it persuasive yourself. Everyone is entitled to their opinion.

And what about those situations where one's (apparent) "mindset" can, by itself, alter reality in the manner of a self-fulfilling prophecy? To use a recent example, was it a good mindset of "radical honesty" for U.S. leaders to publicly acknowledge that the Taliban would probably take over in a few months after we left -- which caused the Afghans to conclude that they should just throw in the towel early to get a better deal from the Taliban? Wouldn't it have been a more proper mindset to over-project confidence in order to rally the troops to the cause? And what if the only way to effectively project self-fulfilling optimism is to employ "confirmation bias" to convince yourself?

I guess the main definitional questions about "mindset" are: (a) When (and why) is it important to be intellectually honest with yourself? and (b) When (and why) do you have to fully disclose your personal beliefs to others?

Expand full comment

Yes, this seems to me like a fundamental difficulty for much of the rationalist movement and similar movements. I see lots of discussion by Nate Silver and Scott Alexander and Julia Galef and Eliezer Yudkowsky about how to be a fox rather than a hedgehog, and get more calibrated credences, and be more accurate, and better rationally respect the evidence. I even see all sorts of reasons why these things are good much of the time.

But I *also* see that an academic discussion (or a book review, or a comments section, or whatever) is often more valuable when it contains a bunch of distinctive hedgehogs, who are each miscalibrated in different ways, and who overweight different pieces of evidence. This idea comes up in William James's paper, "The Will to Believe". (I find this an incredibly disturbing but valuable paper, and I annotated it on Genius.com for a class I was teaching: https://genius.com/16641273 ) He points out what I think is going on in the discussion of whether you have to be overly optimistic to start a company - except in his case the claim is that you have to be overly credulous of a scientific theory in order to actually persevere through doing the experiments that end up convincing others.

These people that are violating rationalist canons are likely going to be worse off for it in some way. But we are better off that they exist, and help us make our better informed judgments. We might even have a duty to *be* such people, some of the time - to not just *play* the Devil's Advocate, but to *be* it. As long as we also ensure that these people aren't making the decisions for the rest of us while they do it.

Expand full comment

I think a simpler mechanism is involved in both Galef's advice and your suggestion about competing hedgehogs: The key to avoiding bias in an individual or social process is to harness the power of motivated reasoning via dialectical methods. "Disinterested is uninterested" might be the motto underlying this method.

Put yourself in the position of having to argue a position with which you disagree -- where losing the argument might hurt (your ego at least) -- and you will turn up evidence and lines of reasoning that you would otherwise miss. (I believe the "rationalists" refer to "steelmanning" an opposing argument, which is a form of what I'm describing if one becomes emotionally invested in building a good steelman.)

That's a psychological trick to use on oneself, but of course a collection of opposing advocates is likely to seek out and surface most of the evidence and arguments for and against various beliefs and positions if turned loose to do so. Then the problem is how to observe the arguments and make sense of one's own beliefs, and that may also be difficult without "trying on" the different perspectives offered as if they were your own.

Expand full comment

Yes. One's attempt to steelman an opponent is usually not quite as good as that opponent's attempt to give the best version of their view. So I wouldn't want everyone to move to the more cautious skepticism the rationalists endorse - it's good for me that there are devoted advocates of many views out there, so that I can benefit from them. The main question I have is whether I have the social epistemic obligation to be that devoted advocate of something as well, even if I try to be rationalist about most things.

Expand full comment

> Is one's "mindset" solely concerned with one's personal beliefs?

Yes. The basic idea is, having accurate beliefs would help you act more efficiently, regardless of your goals (which are often value-based and thus outside of the scope of the "mindset" under discussion).

> And is there any reason one's personal "mindset" must always be aligned with the positions the person advocates publicly or attempts to persuade others to believe.

Depends on your goals, but in general, no.

Expand full comment

I'd just like to say that I'm proud of my confidence calibration - I was within 1% of my confidence-calibrated expected correct rate on the exercise linked in the post, and within the expected range for each confidence interval. But now the meta-confidence question: can I be too confident in my confidence calibration?

Expand full comment

I just started Stanovich's book, _The Bias That Divides Us_, and it looks like it deals with the same problems, although maybe in a more formal/rigorous way.

Is it worth also reading Galef's book?

(There is also Pinker's latest that I had pre-ordered and just dropped into my Kindle library; that's a lot of book about rationality in just a few weeks.)

Expand full comment

Wow, I'm on page 94 of 241 in Stanovich's book (I like it, BTW) and came across this sentence: "Writer David French (2018) paraphrases an imaginary encounter in Scott Alexander’s blog Slate Star Codex whereby ..."

Expand full comment

Nice try getting out of voice-in-my-head responsibility Scott, maybe next time.

Expand full comment

lol!

Expand full comment

Reminiscent of Jane Jacobs's Guardian and Commercial "Syndromes" from her Systems of Survival.

Expand full comment

Journalists ought to have a Scout mindset, but often, they have a Soldier mindset on behalf of a political ideology.

Expand full comment

I have a preliminary scout report for my fellow global warming skeptics (dozens of us?)

This is how much we can't win on basic narrative grounds:

I've heard that in the era of "The Coming Ice Age" cover pages, 90% of published papers in fact predicted future warming from CO2. Media narratives warp reality like a massive particle traveling at 99% of the speed of light - and the narrative is now cannons blazing on the side of those 90%-plus warming predictions.

Here's the extremely good news:

The climate economists are on our side, big time. First redefine our side as "doesn't want a trillion dollars a year to be wasted on useless policies". In principle that includes almost everybody. It's time to go big tent and to do it on policy grounds

How much are the climate economists on our side? The Nobel-winning Nordhaus calculated the economically ideal rise in temperature to be 4.5 degrees in terms of warming costs vs the costs of an ideal carbon tax. I suspect these numbers are getting skewed lower now with motivated adjustments, but suffice to say they are not proposing short-term net zero or holding temperature to no more than 1.5 degrees above where it was at the end of the Little Ice Age.

So what do we do? I think our bread and butter goal is to commission policy recommendations from climate economists. How do you think alarmists will fare arguing in favour of "don't consult the climate economists"?

They will say mid to long term a global carbon tax (but not very high very soon), and if we ask short term they may recommend direct investment into green energy technology. Subsidies will not be in the mix. Short-term hard reductions of emissions will not be in the mix (i.e., the Paris Accord).

A potential additional strategy would be to advocate tests of marine cloud whitening in equatorial countries where warming is already likely a net negative along with a program guaranteeing crop insurance to allay fears of precipitation effects. Remember the prime reason the UN is advocating a 1.5 or 2 degree cap despite the economists projecting massive overcosts is out of a sense of guilt and obligation to the hardest hit. This is admirable, and can be cut off at the pass by a cloud whitening project that helps those people immediately at a price in the range of 35,000 times more cost effective than directly scrubbing carbon from the air

How does this sound global warming skeptics?

Or are there any double agents with comments on how the general strategy and specific ideas sound to you?

Expand full comment

Nordhaus also estimated the cost of waiting fifty years to do anything vs following the optimal policy immediately to be trivially low, a reduction in average world GNP over the next century or so of about .06 percent.

Although that isn't how he put it.

http://daviddfriedman.blogspot.com/2014/03/contra-nordhaus.html

Expand full comment

This one should be in your arsenal for ready deployment:

"Sea level rise will cause spatial shifts in economic activity over the

next 200 years. Using a spatially disaggregated, dynamic model of

the world economy, this paper estimates the consequences of probabilistic projections of local sea level changes. Under an intermediate

scenario of greenhouse gas emissions, permanent flooding is projected to reduce global real GDP by 0.19 percent in present value

terms. By the year 2200, a projected 1.46 percent of the population

will be displaced. Losses in coastal localities are much larger. When

ignoring the dynamic response of investment and migration, the loss

in real GDP in 2200 increases from 0.11 percent to 4.5 percent."

And they assumed no dikes would be built.

https://www.princeton.edu/~erossi/EECCF.pdf

Expand full comment

I think sea rise displacement is the AAA example of the narrative power of catastrophe

I think a different study was reported as saying 187 million people will be displaced by sea level rise from warming.

Their true likely scenario was that, at a cost of a small fraction of a percent of GDP, 186.8 million of them would not have to move, much like the current 110 million people "underwater" who are almost all secured by sufficient engineering.

It's probably worth fighting the catastrophic narrative but clear that only waving the correct policy capes can turn this raging bull away from ramming into the wall

Expand full comment

The thing is that everyone else adjacent to climate economists disagrees. Anyone else working on climate disagrees, and even other economists disagree.

Expand full comment

Certainly not all. There are plenty of engineers and geologists who find the topic overblown and the proposed remedies grotesquely disproportionate and unsuitable.

But the ignorance of even the more-robust parts of economics, and its application to data, is certainly painfully widespread. Most specialists are not mentally prepared to accept the necessity for tradeoffs that might mean the thing they specialize in is not the only thing to optimize on. Economists are unusual in thinking about tradeoffs across seemingly incommensurate realms.

Expand full comment

I think there are some things about climate economic modelling that are counterintuitive to an outsider. Part of the result is due to there being a strong negative feedback in the tropics, where warming increases humidity and cloud cover, which reflects more sunlight. So much of the warming is at the poles, which, particularly in Russia and Canada, has more benefits than costs up to a point.

In this picture warming costs are generally much less than you would expect in a null case. This does introduce a skew where the benefits and lesser costs to large economies in the north sort of drown out costs to smaller economies in the tropics. It's quite reasonable to therefore prefer a lower target temperature than a straight economic model gives you by considering relative costs. However, this has a limit on its own terms, since there is also a higher relative cost paid by developing countries based on how much climate policies reduce world economic growth rates. I find the focus on climate costs systematically causes outsiders to climate economics to underrate the economic costs of lower growth.

I would say there are two more things that count pretty strongly in favour of the climate economics models being correct to allow higher temperature goals. For instance, Nordhaus's 4.5 degree result was with an ideal carbon tax. In the real world the carbon tax is reasonably likely to be imperfectly applied globally, which would allow high-carbon sectors to shift to lower-carbon-tax regions. Historically it seems the average climate policy has cost double what an ideal carbon tax would. So if you were to model the ideal temperature goal vs the real-world cost of climate policies, that would adjust upwards from what climate economists are modelling.

The other thing that is also quite significant is that I don't believe very many (if any) of the models account for a geoengineering backstop. In the real world if it started to happen that a major ice sheet was melting at a high risk rate or if northern warming was releasing an exponentially rising amount of methane from the tundra it is fairly certain that a geoengineering effort would kick in at some point. It sounds like there is a straightforward application where you can simulate the cooling effect of a volcano eruption by simply firing shells of sulphates into the stratosphere at a quite cost effective price. And marine cloud whitening might do the same at a vastly lower price with the advantage of locality, where you could specifically increase cloud albedo in the arctic to halt melting of ice or tundra. So where climate models are increasingly adding large worst case cost scenarios to their warming costs side of the calculation the real cost of an unlikely worst case is likely capped by geoengineering backstops

Expand full comment

I wouldn't deny that climate modelling is hugely complex and has all sorts of negative feedbacks. The question is how do the effects of those compare to the positive feedbacks and drivers combined?

There's also the question of the effects of geoengineering projects on the biosphere. If you start tossing sulfates into the air to increase cloud cover but that ends up acidifying the ocean by orders of magnitude greater than what would happen with merely carbon dioxide, I would hardly consider that a good solution. Now that isn't the only geoengineering idea (at least, I hope not), but given your focus on it I thought I'd address that.

Expand full comment

That's a good point. It seems that, since it would be placed directly into the stratosphere, the amount of sulphur dioxide would be quite small compared to existing emissions, according to a critique of geoengineering ideas. However, I had thought of it as simply affecting heat, but of course it does that by reducing the amount of light coming through, which is a direct and global cost to plant growth and solar panel efficiency.

I imagine that cost would still provide a cutoff point for theorized catastrophic warming events while it sounds like marine cloud whitening is far better and more tested than I thought. (I think the wiki article mentions an estimated .25 degree current cooling effect from ship emissions affecting clouds, while the geoengineering proposal is to simply shoot a sea water aerosol up with the salt having a similar effect)

Expand full comment

I'm 21 and live a relatively healthy lifestyle. What odds would you give me of living 200+ years?

Expand full comment

I wouldn’t hazard a guess here but I sure wish you good luck.

Expand full comment

20%. But that's from a college dropout. However, my calibration score was pretty good; I closed the window, but it was like 77% right, 76% confidence =).

I think there's a good chance of a breakthrough that would cure aging, so then you'd only have accidental death or murder to worry about. But I don't know if that breakthrough will come in time for you.

Expand full comment

To the nearest whole number? Zero percent.

Expand full comment

Strange that all the examples in this genre of how to change minds are always climate skeptics becoming climate believers. Maybe there is something minor to be learned here.

I imagine this was just a throwaway example, but climate change is also a bad choice because this isn't an argument over a specific isolated provable fact; it is an entire encyclopedia of claims that range from the validity of the temperature record (the known), to the certainty of catastrophic extinction-level outcomes (the highly speculative and unsupported), to the effectiveness of proposed solutions (risk/cost/benefit analysis). Yet one must either believe or not believe in the entirety of the claims. There is a near instantaneous judgment on what group you are in based on any initial statement. My lead sentence would trigger most environmental activists. I'm not sure what the "believe it all or be excommunicated" behavior is called but there should be a name for it.

Expand full comment

What I'd love to see is a poll where you ask "How strongly do you agree with the scientific consensus on climate change?" and "What percentage of world GDP will be lost to warming costs in 2100?"

Make a graph where the x-axis is strength of agreement and the y-axis is how many percentage points they differ from the UN's estimate of 2100 GDP warming costs

I predict the graph starts about 5 points off, drops down to zero at low levels of agreement and rockets upward to 50 points plus at maximum claimed agreement

Expand full comment

The question "How strongly do you agree with the scientific consensus on climate change?" Is not useful unless you at the same time ask the people being surveyed what they personally think the "scientific consensus" *is*. Arguments for and against the reality of "global warming" or "climate change" are like arguments for and against the reality of "god" - people can have entirely different internal definitions of what they think they're talking about while using the same words, so they talk past one another.

Expand full comment

Yes, well put. My implication is that the reality imagined by the average person who would say they support the 97% consensus, or the scientific consensus, is likely to be far more inaccurate in absolute numbers than that of actual deniers who would project 0% GDP costs of warming by 2100.

Expand full comment

Here's one data point, from me: 1) very strongly, 2) No clue. The question makes no sense, and even if it did, it would be impossible to measure.

What's the UN's estimate? I couldn't find it.

Expand full comment

It took me a while to find but I got this referencing a 2018 report. I believe the UN has 'outsourced' the scientific work to the IPCC: "the 1.5°C IPCC report finds the cost of unmitigated warming by 2100 to be 2.6% of GDP" - Chapter 3 of https://www.sciencedirect.com/science/article/pii/S0040162520304157

I had thought it was 5% for the most likely level of mitigation and 15% for none so my data point would be 1) moderately (~5/10) and 2) 5% or 2.5 points incorrect but substantially overestimating based on the scenario despite low agreement

Looking at a recent Pew poll 70% of people consider warming to be a "major threat". Threat definitely implies more than single digit percentage costs and presumably there would be high to unanimous reporting of very strong agreement with the scientific consensus among them. Not sure you'd get an average estimate of 52.6% GDP cost from that group however, but surely an error of multiple tens of percentage points

Expand full comment

I'd love to see that first question asked of people who know something of the actual data and theory underlying that so-called "consensus" *. For example, I know essentially all the underlying physics, have a passing familiarity with the detailed methods, and have glanced at some of the data. So in principle I'm in a much better position than most people to decide whether I agree or not. And yet, my personal position is deeply agnostic. I think it is a very, very complex question, and the probability is quite high that *everyone* is wrong. There could be way more human-derived change than even the worst alarmists fear, there could be none whatsoever -- we could easily be overinterpreting the data, like people who diagnose themselves with pancreatic cancer after googling "pain in the abdomen and loss of appetite for 3 days" -- or it could be anything in between. I feel like I'm a lot *less* sure of what's actually true than either (some) people whose careers revolve around the point, or those who have almost no background that gives them an independent ability to judge.

----------------

* I put that word in quotes because the very concept of a "scientific" consensus does violence to the definition of "science." If it's to be decided by a majority vote, it's not science, pretty much by definition.

Expand full comment

And even the temperature record isn't very "known". There just aren't enough ground stations, and that's before we even get into siting, calibration, etc. Proxies are dubious as well. How much can a few bristlecone pines or ice cores really tell us about global climate conditions?

Now, I'm not THAT skeptical about whether the world is getting warmer. I think the satellite record is very strong, and the CO2 model is at least reasonable. But I'm with you. The tendency to boil down a very complex issue into good vs. evil is scary.

I coined a phrase I'm proud of but I don't get to use that often: "It's important to know the difference between climatology and eschatology."

Expand full comment

Yes, I was referring specifically to the instrumental temperature record since around 1850. Anything before that using proxies gets increasingly uncertain (tree rings aren't good thermometers). Early instrumental data is spotty in the southern hemisphere, and the areas of most warming (the poles) have the least amount of data. That being said, there is enough data to reasonably show real warming over the last 150 years of about 1C. Causation from CO2 is likely, but some portion may be naturally occurring from other sources, and it's a struggle to isolate the temperature sensitivity to CO2 with precision.

Expand full comment

This all sounds very reasonable and likely to be true, it's the inferential leap from "and therefore we're all DOOMED within decades" that's the hard one for me to make.

For some reason it's very hard to find people willing to go on the record as saying "global warming is real but probably not that big a deal, we are already on track to reduce CO2 emissions anyway, so we're probably just going to get a bit more warming (which has downsides but also upsides) so I honestly wouldn't worry too much about it", which seems to me like a more reasonable attitude.

Expand full comment

The argument is low probability but very high impact. I don't think they have done a very good job of demonstrating the high-impact part. Specifically, a long line of unwise predictions made for political expediency in the past several decades have not come to pass, and this has undermined their credibility. Media-based extreme weather and sea level rise claims are dubious when examined closely. There is no accountability for alarmism, and most people have simply tuned it out, so they respond by amping up the alarmism even more.

Expand full comment

Seeing Siberia catch fire this year made me much more worried about positive feedback loops and extreme events than I was before.

Expand full comment

The Dutch East India Company has been collecting temperature data since the 1600s. Not using tree stumps. You give humanity far too little credit for its obsession with accuracy.

Anyway have you read the full IPCC report? (not the one for policymakers). I think you'll find there's a lot more to it than just temperature records.

Expand full comment

What makes you say causation from CO2 is likely? The biggest reservoir of CO2 on the planet is the ocean, and the solubility of gases in water decreases with rising temperature, so if the temperature were rising *for other reasons* we would naturally expect CO2 levels to rise for that reason alone. That is, CO2 would rise with temperature *whether or not* the CO2 was itself causing the temperature rise. So the question of causality is much harder than "is there a correlation?"

The only way you can prove the causality is by doing an experiment: on one planet you raise CO2 levels and on another identical planet you don't, and you see on which the temperature rises. That is essentially what the climate studies people are trying to do, only the "planets" they use are necessarily model planets in computers, since we don't have another actual planet handy. But this is the source of the uncertainty: how accurate is your model planet? If one has experience in modeling other complex systems -- trying to predict what one's wife will say in response to a certain decision one has just made, trying to predict whether the value of your stock in Tesla will rise or fall, trying to predict whether the crops will succeed or fail this year -- that ought to be reason for significant concern.

Expand full comment

I'm not a big fan of climate models but they have made predictions that rising CO2 levels will be followed by temperature increases. This has arguably happened over the past 50 years. There is also natural variability (first half of the 20th century) that hasn't been explained very well but they didn't have the detailed instrumentation in place so that explanation will likely be lost forever. Unwinding these two is pretty difficult so it is possible that the CO2 portion of measured warming may be more or less what they currently believe (climate sensitivity). There is more uncertainty here than the modelers and media like to admit, but CO2 is the most likely cause I think. It's more certain now than it was 20 years ago, and it will be more certain one way or the other 20 years from now. The only realistic option to increase certainty is to continue to observe and try to improve models over the next few decades.

Expand full comment

Sure, CO2 is a plausible explanation, but a plausible explanation and $3 will get you a cup of coffee at Starbucks. Well...maybe $5 if you want something fancy. Anyway, it doesn't mean squat in science. So that's a problem. It's more of a problem (in my view) that we also have a plausible explanation for why CO2 rise could be *caused by* temperature rise, rather than the other way around. So we ought to be damn sure the causality arrow is running in the direction we think it is.

Personally, I think the climate folks put too much emphasis on their models. They really ought to be thinking of new kinds of empirical measurements that might start to confirm the causality link. Like, what *else* would CO2 rise do that is completely independent of temperature rise, so we could somehow measure something and confirm it would *not* be happening if the temperature were rising for reasons other than CO2 rise. I can't think of anything off the top of my head, but that doesn't mean someone who is immersed in the field and thinks hard about it can't. It's hard to believe there's *no* point in the mechanisms where we could measure something that would finger CO2 as the culprit *independently* of saying, yeah, temperature rise is seen.

The problem with temperature as a proxy for CO2 levels is that we already know the temperature of the Earth (long term) varies for all kinds of complicated, nothing-to-do-with-CO2 reasons. There's the Milankovitch cycles, variations in solar insolation, variations in cloud cover, variations in how the continents and oceans are arranged (through continental drift), variations in ocean current topology and strength, variations in ocean chemistry, activity of the plant biosphere (including the vast amounts of plant life in the upper 3 meters of the ocean), volcanic activity -- there's an enormous number of possible causes. To use temperature as a proxy for CO2 you have to have a model that can successfully incorporate *all* of these and say, yep, we've ruled out causes A-B, D-Z, and only CO2 is left.

Maybe it can't be done, but...I'm skeptical. I'd like to see people making a bit more of a college try at finding more direct evidence that CO2 is changing *because* of human combustion and temperature is rising because CO2 is rising, and there are no other causal arrows running amok.

Expand full comment

If you're looking for a place to practice the Scout mindset and sharpen your rationalist skills, come to the Guild of the ROSE! (guildoftherose.org)

Expand full comment

> Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias...

PutANumOnIt makes a similar point:

https://putanumonit.com/2021/01/23/confirmation-bias-in-action/

Opening paragraph:

> The heart of Rationality is learning how to actually change your mind, and the biggest obstacle to changing your mind is confirmation bias. Half the biases on the laundry list are either special cases of confirmation bias or generally helpful heuristics that become delusional when *combined* with confirmation bias.

Expand full comment

CMV: /r/changemyview is bad.

I think it's easy to portray it as good. Look at these noble redditors, humbly submitting that their deeply held beliefs are misguided. It takes a great deal of introspection to try and spot the flaws in your own thinking and a great deal of courage to read numerous detailed arguments from your opponents.

And the idea is definitely good. I just think there are some big problems with the execution.

1. It's unidirectional. Submitting a post requires one to be willing to change their view, but commenting on a post does not have this requirement. Granted, it's possible for commenters themselves to be convinced and award deltas, but this is pretty rare, and the sub is clearly built around the idea of the OP changing their opinion. It comes across as "let's hold a debate, but we're going to make it so one side wins" from the outset. I think requiring that commenters be willing to change their views as well would make it a much better place.

2. It can lead to groupthink. I think the people most likely to submit posts on CMV are those with unpopular opinions. Then a horde of redditors holding the popular reddit opinion respond and argue the OP into submission. This seems more like the OP caving to peer pressure than practicing rationality. In fact, I'd go as far as to say that I think a lot of OPs WANT to have their views changed so that they can hold the "correct" opinion, and CMV allows them to do this gracefully.

3. It incentivizes winning arguments over arriving at the truth. In the Phoenix Wright games, they like to introduce the antagonist prosecutors as having "never lost a case." How intimidating! This doesn't mean that they're good at arriving at the truth, though; it just means that they're clever and good at making logical arguments (because how likely is it that the defendant was truly guilty in every single case they prosecuted?). I feel like having a monthly deltaboard incentivizes that kind of behavior--"I don't need to help the OP defeat their bias, I just need to win the argument." And then, given that a clever person might be able to convince you of either side's correctness in a debate, you end up in an epistemic learned helplessness situation.

4. This is a flimsy argument, but it just feels like people give up too easily on that subreddit. A lot of the time I'll really agree with the original opinion expressed by the OP, but they give in to the first obvious counterargument, and then it's over. I think because the OP is SO willing to change their view, debates end too quickly and a lot of questions are left unanswered.

In the spirit of this post, I'm definitely willing to change my position, but I can't guarantee that I will.

Expand full comment

Compared to the rest of reddit, /r/changemyview is relatively good, in that you can actually read proper arguments for or against popular or unpopular positions without one side being downvoted to heck.

It's not some platonic ideal of rational debate, and the "Delta" schtick has all the problems that you suggest, but keeping a large community engaged in vaguely sensible and occasionally high-quality discussion is very very hard, and /r/changemyview does a not-too-bad job of that, considering.

Expand full comment

For comparison to /r/changemyview, there's /r/unpopularopinion, which curiously doesn't actually seem to have any opinions that are unpopular (at least not with Reddit's demographic). Both subreddits are somewhat similar in the type of posts they get, but r/changemyview is far superior for rational debate in the comments.

Expand full comment

> In the Phoenix Wright games, they like to introduce the antagonist prosecutors as having "never lost a case." How intimidating! This doesn't mean that they're good at arriving at the truth, though; it just means that they're clever and good at making logical arguments (because how likely is it that the defendant was truly guilty in every single case they prosecuted?).

Phoenix Wright is, in part, a parody of the Japanese legal system, where there is a 99% conviction rate. It is quite possible for a prosecutor to have a perfect win record in Japan. There's a number of possible reasons for this - one might be that the state doesn't bring criminal charges unless it is sure it can win, another is that innocent people might be put away at a surprisingly high rate in Japan.

There's a reason why the fourth Ace Attorney game "got political" and argued in favor of a jury system as a better path to justice. Japan's legal system is a bit of a mess.

> It can lead to groupthink. I think the people most likely to submit posts on CMV are those with unpopular opinions.

This is true, and CMV tends to attract a small handful of arguments over and over again as a result. However, to try to combat this, they have Fresh Topic Friday, where users can only post "novel" opinions to be changed. It doesn't solve the issue, but it is a step in the right direction.

> This is a flimsy argument, but it just feels like people give up too easily on that subreddit.

This is fair, but I think it's sadly true to life. In real life, one of my friends told me about a guy who flirted with her at the gym. At some point, he brought up some conservative talking point or another (I think it was abortion), and she raised the simplest pro-choice 101 counter argument, and he supposedly said "I've never thought about it that way."

Sometimes, people aren't super attached to their opinion and haven't researched it, so it is no surprise that they fold like origami at the first brush with a halfway decent counter-argument.

Expand full comment

He was trying to get into her knickers, not have a serious debate. Flip it the other way round - he brought up some progressive talking point on abortion, she countered with simple pro-life 101 argument, he goes 'never thought of it that way' - and it would be for the same reason: he was feeling her out based on an impression, she showed where she really stood on the position, since it wasn't something he really cared about he goes for the non-committal route which shuts down argument and hopefully leaves him a chance to sweet-talk her into going out with him.

"I never thought about it that way" is the default way of not having an argument where you don't want to get into it with someone, maybe because they're family and you don't want to start a feud, or they're your boss and you want to keep the atmosphere at work clear. It's not admitting "You're right and I'm wrong" and it doesn't leave an opening for "But what about..." further debate.

Expand full comment

I completely forgot about the Japanese legal system actually being like that. Even so, I think the analogy holds true: a lot of CMV users take the soldier/prosecutor-who-tries-to-win-every-case approach rather than the scout approach because you acquire prestige by winning arguments.

Expand full comment

Well I can add that my wife would be in favor of a change to ‘Menachem’ so I don’t have to go through the whole “You remember that Scott Alexander guy I was telling you about…” bit when I want to show her an interesting thread.

Menachem she would remember.

Expand full comment

Maybe you could call him Yvain? (Unless Scott would rather we didn't — feel free to delete this comment if that's the case! The matter of old handles, like 'Yvain' and the LiveJournal username, sort of got lost in the muck of the "real name" affair…)

Expand full comment

I don't know how "Yvain" was supposed to be pronounced. "Yevain"?

Expand full comment

I'd say Ee-vain, with the second syllable stressed. (Although in the original French the 'vain' would be pronounced rather differently.)

Expand full comment

Bronze Age Mindset next

Expand full comment

Bronze Age grindset (spending six hours a day grinding wheat with rocks)

Expand full comment

Am I the only person who noticed that she's just lifting Myers Briggs P vs J and then declaring P superior?

"A Soldier’s goal is to win the argument, much as real soldiers want to win the war. ....Scout Mindset is the opposite. Even though a Scout is also at war, they want to figure out what’s true. "

This is extremely similar to the defining test for P vs J ("Js think a meeting is productive if everyone knows what to do, Ps think a meeting is productive if every issue has been discussed and understood.")

It's fricking hilarious, of course, because in the real world Js run roughshod over Ps and that's what would happen in real life as well with "Scouts" and "Soldiers".

The real irony, to me, is that the author is doing exactly the opposite of a real P, in declaring a superior approach. (She says she's not, but that's a big ol lie.) But that's pretty typical of many in the rationalist community--they are overly fond of pretending they are logical (hyper T) but not really as flexible and decision resistant as they like to imagine themselves (that is, they're a lot more J).

I'm deliberately using MB typology to mock her because I mean, Jesus, Scout and Soldier is so fucking high school. But seriously, very little of this is original. Every personality model has this dichotomy in it. Again, they don't declare winners.

Thank god we aren't living in a world run by people like her.

Expand full comment

I found myself arriving here as well. None of this feels new, and it seems like rationalists are not as interested in changing their deeply rooted thoughts as they claim to be, which explains *why* the book doesn't feel like it's saying anything new.

Expand full comment

INTJs are vastly overrepresented in this community, which Scott has covered.

She doesn’t claim to have achieved massive original insights. She’s popularizing an approach at critical thinking. Being open to new evidence, focused on factual accuracy, and carefully considering opposing arguments does not mean one cannot take strong positions. The whole damn point is to have the superior approach at holding correct views.

Do you actually think the world would be worse if run by people like Julia and Scott instead of the current offerings? I’d love to run that experiment.

Expand full comment

Oh, be serious. This is an absurd comment. Of course she's claiming new insights, and her book is treated as having a new model.

And Julia (but not Scott) is the very thing she's telling everyone else not to be. Girl's a soldier, not a scout.

Expand full comment

I’m not the only one to read the book and go “this is a great packaging of rationalist ideals I already know.” But I’m in her camp already.

She’s using new labels and her own explanations of the rationality approach to target a popular audience. She’s citing research of others.

Have you never seen Scott argue in favor of rationality or Bayesianism before?

Scott is more of a soldier than Julia is across their bodies of work, and I bet Scott would agree with that.

Expand full comment

Everyone is more of a soldier at everything than Julia Galef, imo. This is only a slight exaggeration.

Expand full comment

I wonder how much of a scout Julia is about things she actually dislikes - animal sacrifice, the nobility of war and conquest, slavery, the divine hierarchy of some over others, the <censored> of gays, transgenders, untouchables, and other tribes/nations and women, and the disfigured, honor killings and duels, the divine right of the privileged to have laws not apply to them vs the unprivileged, and the necessity of fighting and insults to keep the strong on top and the weak on the bottom

not endorsing any of that, but uh, probably a majority of all humans that ever lived believed more than one or many of those, and they are now so anathema as to not even be considered. Also like 99% of animals. Not very scout-ish!

Someone like moldbug (or Scott! reactionaries in a nutshell being a great example of distilling the most palatable or compelling fractions into a nice, pleasant smelling phial) would be much more scout-ish from that perspective

Expand full comment

Being a scout does not imply one has to entertain for more than a split second things that are obviously false or immoral according to already carefully considered judgments.

Julia (presumably) doesn't like the kind of things you're proposing not out of simple aesthetic preferences, but out of carefully considered epistemological and moral frameworks.

The trick, of course, is being properly calibrated and having a strong baseline of justified beliefs. But you can waste a lot of time going down strange rabbit holes (e.g., continental philosophy, Marxism) if you don't have some simple rules like "by default, causing suffering to appease the gods is a bad thing" or "markets work." There's being open minded and then there's being credulous.

Also, no one individual needs to consider every idea ever conceived. Or debate every issue. Communities and networks and extended trust are a thing.

Expand full comment

Amen. She makes Scott or Eliezer look like Attila the Hun.

I think the meanest thing I've ever heard from her (on her podcast) is something like, "That doesn't sound right." Or the dreaded, "That seems obviously wrong, so that can't be what they think, is it?"

She's practically the patron saint of steelmanning.

Expand full comment

“Not being mean” isn’t what rationality is about. Many of Scott’s best posts are extremely mean on the merits. The facts often demand meanness such that kindness is impossible without lying. “Untitled” was mean, as were the ones exposing academic fraud and replication crises - no word is as insulting or harmful as a post that ends a thousand careers.

Expand full comment

Every new business management book does the same thing. This is a pop science book done for rationality, to get people not already familiar with the whole background interested in the subject.

She's using 'scout and soldier' the way people used cheese or overcoats or rhinos or black swans - as an attention-grabbing motif that is easily remembered.

Expand full comment

I don’t like pop science books partly for this reason - like a hundred thousand people now believe that scout and soldier is a real and important thing!

Expand full comment

I've been in the community for a while, and I also definitely think the book is light on new insights. It's certainly not a new model, but very much a branding of a bundle of ideas. The value is not in novelty but in how clean and thoughtful the arguments are.

I wouldn't say there are zero new insights there; some of the proposed thought experiments are at least new to me.

Expand full comment

Longtime fan of Scott. This "review", however, reads like an endorsement rather than the very critical reviews that Scott has written in the past.

I wish Scott had considered the counterfactual "would I have been so complimentary towards the book if I was not already a Rationalist?"

Expand full comment

“Scott is a rationalist” is basically “the pope is catholic”, or “the ocean is wet”. If Scott weren’t a rationalist, there wouldn’t be nearly as many other rationalists. Critically reviewing yourself is, if worthwhile, a challenge.

Expand full comment
Comment deleted
Expand full comment

I mean he may also have just really liked the book lol

Expand full comment

Does seem suspicious though. He has reviewed really famous books in the past, and has always had critical comments to make about those. He hardly has any here, and he's also friends with the author and her husband.

I mean, Scott is entitled to write whatever he wants. But maybe he should recognize the conflict of interest in reviewing a book written by a friend.

Expand full comment

But he already likes these ideas. Really likes them! So of course he would find them good and really like them. This is why I do not like the book, btw - an in-group bias is not distinguishable from "both you and the other person are correct or agree". He's not biased, he just agrees!

Expand full comment

I don't think "Scott is a rationalist" is equivalent to the other statements you make. "The pope is catholic" is equivalent to the fact that the pope is a soldier for Catholicism. He's the guy responsible for evangelizing it to the world. Scott didn't create Rationality. Neither is he the poster-child for it. He can completely be a scout about it, and say "these are the things I don't care about as much."

Expand full comment

But on a meta level, it's almost perfect. Scout mindset is hard, and here's Scott soldiering for a friend.

Expand full comment

How exactly are you differentiating bias from the explanation that the book is just really really really good?

Expand full comment

> You’ve probably heard the probabilistic (aka Bayesian) side of things before. Instead of thinking “I’m sure global warming is fake!”, try to think in terms of probabilities (“I think there’s a 90% chance global warming is fake

things don’t work like this though. What’s the probability that the Langlands conjectures are true? Does that probability really mean anything or help you solve it? No, lots of intuitions and heuristics can help you understand your directions and reasons to investigate, but there’s no probability there. What’s the probability that true love conquers all? What’s the probability you should be a programmer vs a doctor? What’s the probability of AI X-risk? The last one is a totally useless number because it pretends to reduce the actual technical difficulties and complexities that we don’t yet know about AI to a probability whose meaning we can’t know and can’t calculate. Same frankly for “probability of global warming being fake”: no, the problem is what global warming is, what its effects will be, who is investigating it and how, and what one can do to intervene and change it, none of which a probability is relevant to. It’s a distraction and a meme

Expand full comment

Confidence levels and forecasting, how does it work?

Expand full comment

I’m arguing Bayes and probabilities are a terrible metaphor or claimed method for thinking in general, as essentially none of the tough parts involve that at all

Expand full comment

This book and the entire post are literally about evaluating ideas on their truth, not on whether everyone agrees with them. C'mon man. Why don't you defend it?

Expand full comment

I am defending it by sharing the writings of an author I know you like to read making the arguments better than I could? If he can't convince you, what hope do I have?

See also:

https://www.lesswrong.com/tag/bayesianism

https://www.lesswrong.com/tag/aumann-s-agreement-theorem

I definitely don't want you to feel like the kinds of objections you're making haven't been considered (and considered legitimate challenges to overcome). You don't have to find the responses sufficient, but efforts have been made.

Expand full comment

Sure, I’ve read most of those before though, and I don’t think my individual concerns (the important parts of thought aren’t related to probability, so it isn’t an epistemology as much as a flawed minor tool) were addressed

I’m not arguing that understanding statistics is bad, for now - just that it’s not an epistemology, it’s just “understanding statistics and applying it occasionally”

Expand full comment

> Afghan soldiers

While the Americans ... well ... American soldiers absolutely did question the value of the war ... as did the Vietnam soldiers, see the origin of “fragging” - blowing up commanding officers with grenades ... uh anyway, while the Americans thought a little bit about the value of war, the Afghan soldiers thought a lot about it, such that they regularly changed sides depending on local conditions, as they had for many decades. And when America left, this contributed to the immediate collapse - elders simply became Taliban.

I’m not sure “soldier mindset” is a real thing tbh

Expand full comment

> So for example, if a Republican politician is stuck in some scandal, a Republican partisan might stand by him because “there’s no indisputable evidence” or “everyone in politics does stuff like that” or “just because someone did one thing wrong doesn’t mean we should fire them”. But before feeling too sure, the partisan should imagine how they would feel if a Democrat committed exactly the same scandal. If they notice they’d feel outraged, then their pro-Republican bias is influencing their decision-making. If they’d let the Democrat off too, then they might be working off consistent principles

But Republican and Democrat are still brothers.

Let’s say your wife steals from a store. Let’s say the guy who assaulted your child steals from a store. Or even better - let’s say France breaks a nuclear treaty. What about Russia? China? North Korea? The expert doctor treating your cancer makes a rude comment to your wife - vs a random nurse - different situations! Even in the case of politicians, a generally honest and faithful Republican making a slip-up in the men’s bathhouse may look different from a Democrat abortionist - or - a deceitful, conniving Republican taking money from an oil company may be different from AOC, champion of progress, doing so. Or just a friend vs a non-friend doing the same thing can be very different for good reasons. They legitimately are different circumstances, and demand different responses! And I’m not sure that simply calling to abstract over it with a few simple ideas captures either why people do these things or explains why they may sometimes be worthwhile? Hypocrisy is bad because it means a mistake is being made somewhere, not because it’s hypocrisy - “soldiering” is bad when it’s dumb, not when it’s driven by strong beliefs in a thing

Expand full comment

Here's where I differ: yes, you may be very sympathetic to your wife, the kleptomaniac. There may be very good reasons why she should be excused. But that does not change the fact that she stole. And it does not change the fact that she can't just go on stealing from stores. And that does not also change the fact that if the guy who assaulted a family member steals from stores, he too may have a very good reason for it.

I'm human, I'd be glad to see the assailant thrown in jail and getting the full measure of hard treatment. But we have a system of law, and if I expect fairness for my spouse, then I have to accept fairness for my enemy. Otherwise, we go back to "I shoot you because I don't like your face".

In other instances, we are rightly outraged when people put their mistresses on the public purse, or the founder of a charity treats the donations as his private piggy-bank, or politicians are caught out in sleazy dealings, or your boss promotes his brother-in-law who gets all the money while you do all the unrecompensed work.

If we excuse 'grab 'em by the pussy' because it's Our Guy, but then 'grab 'em by the pussy' is the demonstration of how Their Guy is the most wicked man in the world, that's not principle, that's not 'different circumstances demand different responses'.

If you were one of the "I'd gladly strap on my kneepads and give the president a blowjob" partisans because that particular guy did policy things you liked, you have no leg to stand on at all to be outraged that a different guy made vulgar remarks about treating women as sex objects.

Expand full comment

I don't believe that Intel story is true: they were working on microprocessors at the same time as developing an update to their memory chips, which were extremely competitive. They pivoted to microprocessors because they saw immensely greater potential, which had little or nothing to do with the Japanese.

Expand full comment

https://en.m.wikipedia.org/wiki/Intel#History suggests their first microprocessor, the Intel 4004 in 1971, came a whole decade before increased Japanese memory competition made the memory market less profitable, and that then the “growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success.”

Expand full comment

The story is basically true, and it was famously recounted by Andy Grove in his memoirs. The Japanese firms were killing Intel in DRAMs (they were much better at manufacturing process improvement, and cared more about it, leading them to beat Intel to market with next-generation DRAMs repeatedly). The part of the story that is less often retailed is that Intel's middle managers had already been slowly reallocating production capacity and investment toward the microprocessor business because of its superior profitability.

Expand full comment

Andy Grove and Gordon Moore were talking in 1985 about where to direct Intel's resources. Memories were still a big part of Intel's business, but competition from the Japanese was tough and Intel was struggling. They thought Intel might not survive.

So Andy Grove asked Gordon Moore: "What would happen if somebody took us over, got rid of us — what would the new guy do?"

"Get out of the memory business," Gordon Moore answered.

Andy Grove agreed. And he suggested that they be the ones to get Intel out of the memory business. That was major surgery. Intel laid off more than 7,000 employees, almost a third of its workforce, and shut down many of its plants.

Expand full comment

There were further adjustments, but the groundwork had been laid:

"Fortunately for Intel, however, its middle managers had been dealing "with allocations and numbers in an objective world," so by the time the senior managers made the decision to exit the memory business, its production had already been substantially redirected to microprocessors. 'The Intel production schedulers shifted capacity from memories to microprocessors because the latter were more profitable," says Grove. "Meanwhile we senior managers were trapped by the inertia of our previous success."

https://www.google.com/books/edition/What_Management_Is/ew1RfZlB2MAC?hl=en&gbpv=1&dq=intel+had+already+been+shifting+from+drams+when+Grove+made+decision&pg=PT140&printsec=frontcover

Expand full comment

Okay, In principal I’m completely down with rationalism. In endless chart and stat citing, truth table displaying, hair splitting practice, it takes on an aspect of brow beating IMO and makes me a bit weary at times.

Expand full comment

Now if someone asks me to cite weariness inducing examples, that will be an example.

Expand full comment

principle, although it may have been autocorrect

And stats and charts can be useful, and important in understanding stuff (although not in psychology - Julia was correct to not cite any of it, although the fact that all the rats didn’t notice the incoherence of the psychological principles underlying those studies suggests other stuff)

Expand full comment

Yep you’re right about the spelling. Thanks, I missed it.

Expand full comment

Now that we've read the review, do you think the actual book is useful for people who are in say the top 20% or so of effectiveness among rationalists? My friend said it was more useful for normies than the in-group.

Expand full comment

I agree it's more useful for normies (although probably not useless for the set of people you've asked about).

However, note that you can get the audio book (narrated by Julia herself), at which point it doesn't compete with reading but with listening in your off time or while working out or whatever.

Expand full comment

It's fascinating how scholastic this all is, though -- the enormous attention paid to the quality of one's reasoning, the use of dialectic; really, this could all be straight out of a closely-argued 14th century monastic treatise on philosophy, or on how to divine the Will of God.

Over here in the shaky landfill soil of Empiricist Land, sinking steadily into the Sea of Stuff About Which Nobody Cares by 10cm every day, alas, we would instead hammer away at evidence, evidence, evidence. Learn not to care about the quality of the argument, either yours or your opponent's, and instead search obsessively for the objectively measurable fact, the cold hard numbers, photographs, or dead bodies to throw onto the scales, and then take and stick to the difficult inhuman decision that an ounce of ugly measurement outweighs a megaton of beautiful theory.

But it makes sense. Empiricism is deeply unnatural to the human spirit. We are forced to it only when we work repeatedly with real systems, and our lives depend on deducing what natural systems that we did not design will do -- whether the weather will be fine or rainy, whether the steam engine will stutter to life or explode, whether the airplane will rise or crash into pieces, killing us all. Under those circumstances, we turn to empiricism in dread and despair to save us from our ability to self-bullshit.

But in the modern world, few of us work and live like that. Our success or failure derives far more from social forces -- do people like us or not, applaud or hiss, buy our service or not, vote us in or out of office? Under those circumstances, yeah, the quality of the argument and the conformity to enduring social myths is way more important. It's a little like living in the Church circa 1350, where whether you are sold into slavery or become wealthy and powerful has everything to do with the good will of all the other bishops and nothing much to do with whether a stone arch you build stands up or falls down. Strange, that we should be retracing our intellectual evolution this way.

Expand full comment

You can pile up a mountain of measurements, yes. Then what do you do with them? You need them to say "this theory of how to put up a stone arch works" or doesn't work. Having a mountain of facts does nothing in itself, it's like a junk yard full of scrapped cars rusting away.

We collect facts, as you say, when we are forced to it - and that is when we need confirmation or confutation of a theory. Theory alone can be in error, but facts alone get nothing more done than "here is a pile of measurements of how much water can fit in a jug this size". Both are needed. "I took a survey of all the villagers who turned blue and died, and it seems every single one of them ate berries off this bush". "So you're saying the bush is poisonous?" "Hold on there, friend, I'm an empiricist, I don't venture into wild theories! Just cold hard measurements!"

Unless you put those measurements into a theory of "eating these berries makes you turn blue and die", you are doing nothing useful.

Expand full comment

You’re right - *but* - that still makes Carl’s point about this book correct - it’s much more about feelings and “the status of various speakers” than it is, say, propositional logic or the mathematics of computation or the organization of science, which are needed with and part of empiricism and such. Recognizing and accounting for cognitive bias (which is fake but whatev) absolutely is unrelated to, say, set theory, which would be something your argument applies to

Expand full comment

I think in the first place you're confining the definition of "measurement" much more than I intend with that word. If every one of the people who ate berries from a bush died, *that* is a measurement, and it succeeds very well in supporting the hypothesis that the bush is poisonous. (Although you would also need the measurement that all the people that *didn't* eat from the bush *didn't* die, to account for the possibility that, say, a plague was going through the village and killing everybody, blue bush berry eaters or no.)

And I most certainly did *not* say that hypotheses and theories are unneeded. What I said is that in the process of deciding which hypothesis is more likely to be true, the inherent purely-internal persuasiveness of the hypothesis -- how simple it is, how elegant, how logical, how pleasing -- is essentially useless.* The only useful guide to how correct a hypothesis is, is what amount of empirical data support or conflict with it.

---------------

* It might be useful to a different species than H. sapiens, a species that actually *could* think about things coldly rationally, but we are not that species, and our ability to bullshit ourselves is so high that we are collectively pretty much incapable of being coldly rational about anything.

Expand full comment

I guess another way to put this, using your scenario, is this: I am a villager attempting to decide whether the hypothesis "berries from the blue bush are poisonous" is true or false. Here's a list of observations an empiricist considers useful in deciding that question:

1. Of the people who ate from the bush, X% died promptly.

2. Of the people who did not eat from the bush, Y% died promptly.

And here's a list of observations an empiricist considers not useful:

1. I like/don't like the guy who came up with the hypothesis.

2. 97% of people surveyed in the village agree with the hypothesis, including the head man and the shaman.

3. The guy proposing the hypothesis is an inveterate Chicken Little, always freaking out about something that turns out to be nothing.

4. The guy proposing the hypothesis is a sober, level-headed professional studier of bushes and poisons, and has never been wrong before.

5. The hypothesis comes with a detailed and very plausible description of the mechanism by which the bush berries kill -- enzymes are named, the detailed chemistry is laid out.

6. The hypothesis comes with an incredible and complicated description of the mechanism involved, which involves invisible demons and hypothetical substances of which no one has ever heard, and is internally logically self-contradictory.

7. If the hypothesis were true, we would have to revise a whole lot of other hypotheses that up until now we have considered to be well-founded.

8. If the hypothesis were true, it would fit in very well with what we already think we know.

9. This would be the first bush ever that was found to be poisonous, nobody has ever experienced such a thing before.

10. This would be the 500th bush in the neighborhood that was found to be poisonous, it happens all the time.

That's not to deny that some or all of these things might be *personally* persuasive to any random person (including me of course). But none of them are, from a strictly empirical viewpoint, relevant to the question of whether this particular hypothesis is true or false, because none of them are direct measurements that test the hypothesis, and the empiricist accepts nothing other than direct measurement.

It's also not to say that being strictly empirical is not essentially an impossible ideal for all situations. I cannot directly measure the hypothesis that mothers in France love their children as much as those I know personally, so I probably take that on faith (or not), and that's perfectly normal human behavior, quite reasonable. But on the other hand, if I had to bet a billion dollars or the life of an innocent child on the truth of that hypothesis, I would suddenly become a lot more empirical about it.
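A minimal sketch of how observations 1 and 2 from the first list above reduce to a single comparison, with made-up counts (illustrative only, not anything from the comment itself):

```python
# Made-up counts for the village example (illustrative only).
ate_and_died, ate_and_lived = 18, 2               # observation 1: X% of eaters died
abstained_and_died, abstained_and_lived = 3, 77   # observation 2: Y% of non-eaters died

rate_eaters = ate_and_died / (ate_and_died + ate_and_lived)
rate_abstainers = abstained_and_died / (abstained_and_died + abstained_and_lived)

# The evidential work is done entirely by comparing the two rates;
# none of the "not useful" observations (who proposed the hypothesis,
# how elegant the mechanism is, etc.) appear anywhere in the calculation.
print(f"Death rate among berry eaters: {rate_eaters:.0%}")
print(f"Death rate among abstainers:   {rate_abstainers:.0%}")
print(f"Relative risk of eating:       {rate_eaters / rate_abstainers:.1f}x")
```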

Expand full comment

"Galef is [CFAR's] co-founder and former president, and Scout Mindset is an attempt to write down what she learned."

This seems not quite accurate to me. I believe that when Galef left CFAR it was because its techniques didn't seem to work that well and she was disillusioned with the approach. CFAR's curriculum has changed a lot since Galef was there, and avoiding confirmation bias has never been more than a small part of the curriculum.

My understanding is that CFAR still doesn't have very good evidence that their techniques work.

I've only read reviews of The Scout Mindset so take this with a grain of salt, but my impression is that it does a good job establishing that there are indeed two mindsets of the kind she describes and it'd be good if more people were in scout mindset more of the time, but that there's not much evidence yet that the suggestions in the book actually work.

Expand full comment

> My understanding is that CFAR still doesn't have very good evidence that their techniques work.

Same here, but I'm willing (and, in fact, hoping) to be persuaded otherwise. Has CFAR ever run any studies to evaluate the effectiveness of their techniques?

Expand full comment

Not sure what they have more recent than this:

https://www.rationality.org/studies/2015-longitudinal-study#life-satisfaction

They try a lot harder at evaluation than universities do… (particularly per dollar)

Expand full comment

That is not entirely fair, because universities have a long-standing performance record as evidenced by scientific research and/or employment, as performed by their graduates. CFAR doesn't have that; admittedly, one could argue that they haven't been around long enough yet.

Expand full comment

Oh it's more than fair. I've been to a number of universities and attended CFAR a few years ago and CFAR was WAY more concerned about figuring out what it was we came away with. It's particularly lopsided if you consider the relative sizes/funding of the institutions. I'm personally a skeptic of how much impact any educational treatment for complex subjects can really have in a week. CFAR does try to help ingrain the techniques after you leave, but I'm not sure how effective it was for me personally (though I came in already familiar with a lot of the underlying material). The experiment of how to teach rationality in a crash course remains worth running in my view. In my experience, CFAR in general, like Julia herself, is painfully aware of the challenges of what they're trying to do there.

Universities, as this very blog has pointed out quite a few times, do not actually have a great track record at accomplishing their stated goals. Sure, the credential is valued on the job market, but that doesn't mean the students necessarily learned much, particularly compared to available alternatives.

See: https://www.amazon.com/dp/B07T3QRNLC/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1

See also: https://slatestarcodex.com/2015/11/30/college-and-critical-thinking/

https://astralcodexten.substack.com/p/kids-can-recover-from-missing-even

Expand full comment

I would argue that most students don't learn much of anything, regardless of the venue -- unless the venue enforces draconian selection criteria to begin with.

I note that most scientific research happens in universities, and is mostly done by university graduates (usually after they get their Ph.D.). I also note that, while most university graduates are incompetent, the proportion of competent ones is still higher than among non-graduates -- though this could be different for other professions (besides e.g. engineering).

The university promise is, "if you are smart, and you put in a lot of time, and do a lot of work, you will be able to earn more money, to perform useful research, and to generally become better at educating yourself". AFAICT, this promise does bear out in the real world (at least, for certain professions).

The CFAR promise is, "if you are smart, and you attend our workshop, and you put in a lot of work, then you will become a lot smarter". This promise is much more vague while being more ambitious at the same time, and thus far I have not seen enough credible evidence to believe it.

Expand full comment

Is there anything specific CFAR could do better here? If their graduates have a job, given that most of them *also* attended some university, people would automatically attribute that to the university.

It would be more legible if the graduates had some job after university and before CFAR, and then a better paying job after CFAR. But even then... looking at myself, was a Java developer before CFAR workshop, am a Java developer now, my salary has increased by 50%... should I attribute it to my advanced rationality (actually, I barely remember anything from the workshop), or simply to inflation and more years of experience?

The proper way would be to have a randomly selected control group... but let's admit that universities do not actually go this far. They use "people without university education" as a control group, but that's not fair -- if you have entry exams that check for some skills or traits, how do you know whether the success should be attributed to your lessons, or to those skills and traits?

Expand full comment

These are all fair points. I'm not sure what CFAR can do better; but if their results are (at present) indistinguishable from the university, then I'd rather attend a university -- at least they have a proven track record, and will give me a credential when I graduate.

I understand that the comparison is unfair, because CFAR is a relatively cheap (all things considered) workshop, whereas the university is an expensive 4..8 year commitment. But still, the burden of proof is on CFAR to demonstrate that the time and money I invest into them would be worth more than investing in, say, some online class or something.

As I said above, one problem with CFAR is that their claims are rather vague. They promise to make you "more rational" or perhaps "smarter", but it's hard for me to judge how that translates into real-world gains. Again, the burden of proof is on them, not me -- and my prior going in is low, since there are lots of self-improvement programs that all make lots of vague claims, and, empirically, few of them ever bear fruit.

Expand full comment

Walk a mile in someone else's shoes...sure this is very old and kindergarten level advice. And?

I also think this is silly and has limited to no utility for most people who don't need to engage in this level of neurotic navel gazing.

If we are taking other people's views and lives into consideration....it is good to process things through their lens too....otherwise we're just forcing our ideas into our made up version of other people in our heads to have endless looping arguments. Did I mention this was neurotic already? And we want to spill this out into the rest of the world for everyone?

People like groupthink, they like not thinking about anything at all, they like knowing what to do.

They don't care about being right in the end, they want to be accepted by the people around them.

Sure, this has led to many atrocities, but it is also the grease to keep day to day life moving along most of the time without everyone sending each other long winded arguments and apologies on twitter in some libertarian fantasy of fully awakened and sovereign individuals...which has failed to gain mass appeal thus far.

People want to be lazy! Not negotiate every little detail in their life like making sure the fire department comes or their bank isn't shafting them or their food and drugs are safe to eat and take. Why can't we all just be ever-vigilant, expert-in-all-things adjudicators for everything all the time? Why isn't this idea more popular? Hmmm...(I've got my flak jacket on for the blowback on that line!)

This can be seen as taking academic and ivory tower neurotic mentalities and norms of arguing everything to death into the real world where people are busy going to work, hosting BBQs, and just getting by until they can get back to their Netflix series. What possible reason do people have to engage with this when they are landless peasants held in debt quasi-slavery?

Did they not bow properly after the herald announced them or wear the right colours of the right material upon entering the court in the Spring period? These ideas about confirmation bias sound like courtesan etiquette nonsense to the average person.

A few higher-ranked TED 'thought leader' barnacles on the sides of the yachts of the billionaire neo-aristocrat oligarch class will yammer away making detailed rational arguments (now with emotional appeal! wink wink) which will probably get ignored for political purposes to empower or enrich someone who is already powerful and rich.....and regular folks don't need to worry about this stuff.

They'll just get by, people will accept or reject them, and you just suck it up and move on! Your friend was rude...maybe he was hungry when he said a mean thing...but the friendship is good....if he keeps being shitty, maybe I'll hang out with him less or not at all. Problem solved! No need for 50-comment-long, long-winded arguments on Twitter!

Expand full comment

Shorter version: "I do not care about this thing, therefore it must *really* be about this other thing (billionaires and their yachts etc.) that I do care about."

Expand full comment

My results on http://confidence.success-equation.com/ (for the sake of comparison with others):

Calibration score

Mean confidence: 65.60%

Actual percent correct: 62.00%

You want your mean confidence and actual score to be as close as possible.

Mean confidence on correct answers: 69.35%

Mean confidence on incorrect answers: 59.47%

You want your mean confidence to be low for incorrect answers and high for correct answers.

Quiz score

31 correct out of 50 questions answered (62.00%)

15 correct out of 30 questions answered with low (50 or 60%) confidence (50.00%)

10 correct out of 14 questions answered with medium (70% or 80%) confidence (71.43%)

6 correct out of 6 questions answered with high (90 or 100%) confidence (100.00%)
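For anyone who wants to compute the same kind of summary from their own answers, a minimal sketch of the arithmetic (toy data; the quiz site presumably does something equivalent):

```python
# Each answer is a (stated confidence, was_correct) pair, e.g. (0.6, True).
# Toy data; substitute the 50 answers from your own quiz attempt.
answers = [(0.5, False), (0.6, True), (0.7, True), (0.8, False), (0.9, True)]

mean_confidence = sum(c for c, _ in answers) / len(answers)
percent_correct = sum(1 for _, ok in answers if ok) / len(answers)
print(f"Mean confidence:        {mean_confidence:.2%}")
print(f"Actual percent correct: {percent_correct:.2%}")

# Confidence split by whether the answer was right or wrong.
right = [c for c, ok in answers if ok]
wrong = [c for c, ok in answers if not ok]
print(f"Mean confidence on correct answers:   {sum(right) / len(right):.2%}")
print(f"Mean confidence on incorrect answers: {sum(wrong) / len(wrong):.2%}")

# Bucketed the way the quiz reports it: low / medium / high confidence.
buckets = {"low (50-60%)": (0.5, 0.6), "medium (70-80%)": (0.7, 0.8), "high (90-100%)": (0.9, 1.0)}
for label, (lo, hi) in buckets.items():
    hits = [ok for c, ok in answers if lo <= c <= hi]
    print(f"{label}: {sum(hits)} correct out of {len(hits)} ({sum(hits) / len(hits):.2%})")
```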

Expand full comment

Are the grammatical mistakes in the excerpts a transcription error, or is my heuristic "anyone who cannot manage to write mostly-correctly in a professional setting is probably not too smart" itself in error?

I mean, there *are* exceptions, especially in ESL cases. Still, I'm kind of surprised if it turns out these are present in the original. "All of which may be our beliefs are 'undermined'"? *Less* substituted for *lest*?!

Expand full comment

I've looked it up, and the errors are from Scott. The quoted passage is not actually a direct quote from the book; it bounces between quoting stuff verbatim and rephrasing it, presumably to make it shorter.

Here's one of the paragraphs verbatim (without the italics):

> Arguments are either forms of attack or forms of defense. If we're not careful, someone might poke holes in our logic or shoot down our ideas. We might encounter a knock-down argument against something we believe. Our position might get challenged, destroyed, undermined, or weakened. So we look for evidence to support, bolster, or buttress our positions. Over time, our views become reinforced, fortified, and cemented. And we become entrenched in our beliefs, like soldiers holed up in a trench, safe from the enemy's volleys.

Expand full comment

“Like - a big part of why so many people - the kind of people who would have read Predictably Irrational in 2008 or commented on Overcoming Bias in 2010 - moved on was because just learning that biases existed didn’t really seem to help much.”

Well that’s the sort of generous interpretation we all expect from Scott.

It’s not the interpretation I, much less generous than Scott, put on how this played out.

Remember all those theories about which would win in a conflict between class loyalties and national loyalties before WW1? How did that play out?

Yeah, now run the tape again substituting rationalism for class loyalty, and identitarianism for national loyalty…

Expand full comment

What's "identitarianism" and how does it explain why people stopped reading Predictably Irrational and Overcoming Bias?

Expand full comment

Most of those people were not part of "rationalism" because they had any sort of innate Scout Mindset (ie prioritization of truth over other values); they were there for the bashing of others (most obviously bashing Christians). Once Identity Politics took off, there was a much larger bashing game they could join, BUT joining that game required abandoning any pretense at fairness, objectivity, and a prioritization of truth.

We all know which value system prevailed in most people.

Politics is about the definition of the world into friend and foe. Anything other than this feels unnatural to 99% of humans. The belief that we could make large swathes of the population more rational just by telling them how to be so was always a fool's errand, in the same league as those claims that a common language, or a common postal system, or a common communication system, will end war and bring everyone together in an orgy of happy understanding.

Good luck to Julia in her latest version of this attempt, but honestly I think the best she can do is create a few common knowledge phrases (like Scout Mindset) that will allow us to communicate slightly more clearly. But she won't move the dial much on that 1%. Just as always, there'll be created a pool of wannabes who all insist that they are Scouts, always considering things from every angle -- right up until that claim becomes inconvenient compared to a larger tribal claim.

Expand full comment

Makes sense. I don't expect rationalists to be immune to the turn against free speech and free discussion that's been happening in the US due to identity politics. Nor do I expect them to be rational about their core tribal values. I don't think any of that means that one can't be *more* rational, or that it's not worth trying.

Expand full comment

How many of these comment threads have you read through?

I don't see much attempt being made at "being more rational".

Most of the arguments I see are tribal arguments, conducted in tribal fashion. I almost never see "Yeah, you're right, I've changed my mind" or something similar.

Ask yourself "when was the last time I changed my mind about something important (ie something 'tribal')?"

Or the even stronger "how often do I change my mind about something important?"

Or alternatively, the Peter Thiel question "what is a heretical view you have?" (ie what's something you believe that most people would disagree with, or even better what most people in your affiliate group would disagree with).

If people claim to be rational, but the only time they ever change their minds is as part of the tribe as a whole switching from hating Eastasia to hating Eurasia, well...

Expand full comment

Are you comparing these comment threads against the median Internet comment thread, or against the Platonic ideal of rationality? If the former, I see these comment threads as less tribal, with more mind-changing and more heretical views, than the median Internet forum. If against the latter, I agree that ACX falls way short.

Expand full comment

Maybe I missed it but I expected to see some acknowledgement that people can be scouts on some topics and soldiers on others. No one is immune to this.

Expand full comment

I like the idea of the scout mindset; it helps a lot when trying to seek the truth and not be trapped in your own views just by the shame of being shown wrong. However, it misses one important point (like most discussion about cognitive bias): many times, the real value of "facts" or argument points is not their truth, it's their usefulness in creating an effect on your listeners (making them change their behavior). Even if a fact is false, it can pay to believe it if that makes you more effective at influencing others.

Let's call that the Machiavellian bias, and I think it trumps confirmation bias by a lot. The scout mindset really helps with confirmation bias, but as soon as facts are used to influence, it's how much you like the direction of influence that will make you believe the fact or not. Hence the famous Upton Sinclair quote: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

The best way to raise the chance of really considering the truth of a fact (scout mindset) is to remove the link with the desired/feared policy change, either because its relevance is trumped by other facts, or because it becomes compatible with a very different policy change... Both of those reduce the usefulness of the fact for supporting the controversial goal, which relatively improves its truth value. But then you will see how interested most people are in facts that do not help support a policy - very little :-)

Expand full comment

Maybe I should create a new version of the Upton Sinclair quote, one that better reflects how "scientific facts" tend to be used nowadays: "It's hard not to try to convince a man of something when his believing it will help you, regardless of whether that something is true or false". That's Machiavellian bias :-)

Expand full comment

I think this is a great approach when paired with properly identifying what information ecosystem you’re pulling data from, and whether that ecosystem is stale, fresh, etc. I'm convinced the latter is the most important practice for critical thinking and rationality. This gets discussed via highlighting “one’s bubble,” but there’s more value and content to info ecosystems than that. It’s more about applying tools to systematically parse one’s information flows, and identify their limits, sources, and so on. This practice as a professional tool shows up in intelligence circles and I think it provides a solution that most closely mirrors why it’s hard to think and act rationally now - information throughput is too high to intuitively parse, and the primary data source (the Internet) works in loops for most users who try to navigate it - news page to social media to Wikipedia to news page to…

Scouting mindset paired with information source awareness and management seems powerful.

Expand full comment

“This is the bias that explains why your political opponents continue to be your political opponents, instead of converting to your obviously superior beliefs. And so on to religion, pseudoscience, and all the other scourges of the intellectual world.”

Is it really fair to say religion is all confirmation bias? Or that it is a scourge to the intellectual world? At bare minimum, religious tradition has carried forward a tremendous number of socially useful norms and distilled wisdom in its scripture.

This is not an unscientific position to hold. Joseph Henrich’s research characterizes religion as very prosocial for example.

Expand full comment

I'm kind of let down that the review was not more critical of the book, just because I usually enjoy the contrarian takes on books most (when it comes to SSC)

But as someone who never bothered to read it, I'm glad to know it's generally speaking a "rational" take on the movement that doesn't get too extreme.

Expand full comment

I think the "there's no such thing as telepathy" study may not be the best way to catch someone being biased.

As someone who does NOT believe in telepathy, my first reaction to that was "How do you even begin to test this?", followed by doubts about whether testing if there is "such a thing as telepathy" is really what happened, and curiosity about what they tested and why it rules out all possibility of telepathy.

I think the core of the problem for me is that I can't really conceive of a way to test if there is such a thing as telepathy, so I have doubts about that study even though I'm biased against telepathy existing.

Expand full comment

The cheesy story made me smile. Adults needing ten years to get to the point of "engage to marry"? Not very rational in my book (no rational objection to "not-engaging and not-marrying at all" - but with a life-span of less than 350 years, why do decades of delay?!). Who wants to write the book: "I kissed good-bye to go for kids till shortly before menopause - and other cruel facts of life"? - Sounds like I will actually buy Julia's book, thanks for the review!

Expand full comment

I won't defend most Imperial units, but Celsius is worse than Fahrenheit for US weather. The 0-100 range of Fahrenheit closely matches the range of outdoor temperatures in the United States (http://lethalletham.com/posts/fahrenheit.html), and Celsius compresses that range to about -17 to 37 degrees.

(Trivia: Did you know that one degree on a Fahrenheit scale corresponds to the temperature change that causes liquid mercury to expand by 1 part in 10000?)
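As a quick check of that range claim, here is the standard conversion arithmetic (my own sketch, not from the linked post):

```python
def fahrenheit_to_celsius(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(0))    # about -17.8 C: the bottom of the 0-100 F range
print(fahrenheit_to_celsius(100))  # about  37.8 C: the top of that range
```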

Expand full comment

Daniel Fahrenheit was a skilled instrument maker and chemist, while Anders Celsius was an astronomer, which maybe suggests the practicality of Fahrenheit's viewpoint. In any event, his temperature scale was way more practical at the time, for a number of reasons:

* 0F was the lowest temperature you could easily achieve in a highly reproducible manner in an era before refrigeration, the temperature of a saturated solution of ice, water, and salt. There is only one such temperature, and it has the advantage over the freezing point of pure water (0C) in that the latter is perturbed by any dissolved salt.

* The upper end, 100F, was originally defined to be body temperature, which has the advantage over the boiling point of water in that it does not vary significantly with barometric pressure or altitude.

* He then changed body temperature to 96F after noticing that water froze at 32F, so that there were 32 (= 2^5) divisions between the ice-water-salt temperature and the freezing point of (pure) water, and 64 (= 2^6) divisions between the pure-water-freezing point and body temperature.

These things make it easy for anybody to construct an accurate thermometer: you make your thermometer and put it in a bucket of ice, water, and salt at equilibrium, marking the point. You put it in a bucket of ice and water and mark a second point. You put it under your arm and mark a third. Then you take a string the length of the distance between the first two marks and fold it in half, make a mark, fold it in half, make another mark, and so on until you've got 30 new marks. Do the same with a string the length of the distance between the upper two marks.

Presto! You've got a reasonably accurate thermometer, without using any precision instruments at all, and its readings will match up very well (by standards of the time) with anyone using a similar protocol to make his thermometer. It's ingenious and practical, and the fact that it doesn't have a magic power of 10 in it is the kind of thing about which only a Montagnard would complain.
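A minimal sketch of the interpolation arithmetic behind that protocol, with made-up mark positions in millimetres (an illustration of the successive-halving idea, not Fahrenheit's actual procedure):

```python
# Made-up positions (mm along the tube) of the three calibration marks.
pos_ice_salt, pos_ice_water, pos_body = 10.0, 42.0, 106.0   # 0 F, 32 F, 96 F

def halve_into_marks(lo, hi, halvings):
    """Repeatedly halve the interval [lo, hi], like folding the string,
    until it is split into 2**halvings equal parts; return all mark positions."""
    marks = [lo, hi]
    for _ in range(halvings):
        midpoints = [(a + b) / 2 for a, b in zip(marks, marks[1:])]
        marks = sorted(marks + midpoints)
    return marks

lower = halve_into_marks(pos_ice_salt, pos_ice_water, 5)   # 2**5 = 32 one-degree divisions
upper = halve_into_marks(pos_ice_water, pos_body, 6)       # 2**6 = 64 one-degree divisions
print(len(lower) - 1, len(upper) - 1)                      # -> 32 64
```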

Expand full comment

Okay, I'm not arguing that the Réaumur scale has any special advantages, but the fact that it's referenced in the works of Thomas Mann, Dostoyevsky, Flaubert, Tolstoy, and Nabokov gives it a bit of nostalgic historical heft.

And then there is this beauty of graphical information presentation:

https://en.wikipedia.org/wiki/Charles_Joseph_Minard#/media/File:Redrawing_of_Minard's_Napoleon_map.svg

Expand full comment

There is an idea floating around in my mind about books/writings like this which makes me think that they generally have the wrong idea.

Suppose you are trying to determine if something is true. In order to even reason at all, we need to have some ground rules for what counts as good arguments, which assumptions are valid, and how to reason given valid assumptions.

In something like physics, this doesn't really matter, because there are generally accepted ground rules for what counts as good physics. Nobody is going to start telling you about how things move the way they do because of their telos, and if you show someone a good mathematical argument backed up by a solid experiment, almost anyone will accept it as true.

Social reasoning, which is what this book seems to be mostly pointing at, does not work this way. There is endless debate about what counts as good methodology, and even afterwards we have things like the Sokal affair and the replication crisis, and we don't seem any closer to a universal set of ideas about what constitutes good social reasoning than we were many years ago.

In light of this, your ideas about what counts as good social knowledge are far more influenced by your assumptions than specific arguments that you hear. This explains confirmation bias; if you have certain assumptions about the world, and want to extend them to their logical conclusion, it doesn't make sense to read things written from someone with too many different assumptions. In your own framework, their reasoning is wrong! This also explains why we sort into two very distinct ideological groups, where some seemingly unrelated ideas can have highly correlated beliefs. If assumption A implies 1 2 and 3, and assumption B implies 4 5 and 6, then 1 2 and 3 will be very correlated, and so will 4 5 and 6, even though they may a priori appear unrelated.

In light of this, consider the idea that it is virtuous to want to change your mind. In this context, something like that is not sufficient. If someone with assumption A reads a rock solid argument for believing 4 instead of 3, and changes their mind accordingly, they may still have very little in common with someone with assumption B. What really matters is if you are able to change your mind about assumption A.

This is kind of like the post on trapped priors; we have direct sensory data, and use our understanding of the world to interpret it. The closer something is to sensory data, the easier it is to make clear arguments about it, and the more likely there is that something can be demonstrated very precisely, effectively, and convincingly. In this case, changing your mind from the correct perspective is kind of silly. When there is a lot of disagreement, or in other words where this book is claiming to be useful, is where we have layers of interpretation on that sensory data. The high layers of interpretation are what inevitably will determine your conclusion, even though at the surface level they are not even the thing being debated.

Because of this, changing your mind about an isolated idea isn't really important. If we really want to understand our intellectual opponents, it is more important to figure out what assumptions they are making than it is to understand particular ideas that they profess.

This is not to go postmodernist on you though. There are some assumptions which are worse than others, and once you understand assumptions it is still possible to demonstrate things about them. It's just that I worry this book has too narrow of an understanding of what it means to change your mind. Whether the Golgi apparatus exists seems to be more of an object level fact than an implicit assumption, or whether climate change is real, or how you interpret studies about telepathy, etc.

Expand full comment

Thanks for the excellent review. It deepened my appreciation of the book. I am often frustrated by JG on the podcast - I found her print voice (in the sense of authorial voice, not tonally) to be more convincing. On the pod she tends to work things out in her head in real time, which can leave the guest waiting, waiting…. That said, I have a bias against overly wordy interlocutors (that’s a good name for a band).

Also, who cares about Scott’s throwaway line about changing his name. Zoinks!

Expand full comment

One thing about behaving like a soldier is that, for a lot of people, me included, it's really fun. I like trying to destroy my interlocutor and force them to surrender! And this motivation can at least sometimes make you better. It's similar to how it's more fun (and often results in better performance) if you try to smash the other team even during your neighborhood pick-up game of basketball on the weekend, instead of playing with the "let's bask in the social-cohesion-building aspect of sport" attitude.

Of course, you have to be able to switch mindsets, so that you all have a laugh and a sandwich together afterwards instead of plotting to Tonya Harding the other players later that night, but it seems that at least debating like a soldier shouldn't be dispensed with entirely...maybe?

Expand full comment

The D&D / LARP version of Gresham's Law is, "Bad roleplayers drive out good roleplayers." Which means that if you form a roleplaying group in which some of them want to have the kind of fun specific to roleplaying, and some of them want to win, you'll eventually have a roleplaying group composed only of people who want to win.

(Winning can be fun, but it's a generic sort of fun which doesn't vary much from game to game. Winning at chess feels a lot like winning at baseball.)

So perhaps Gresham's Law of Debate and Politics should be, "Soldiers drive out scouts."

Expand full comment

"You tried Carol Dweck’s Growth Mindset, but the replication crisis crushed your faith. You tried Mike Cernovich’s Gorilla Mindset, but your neighbors all took out restraining orders against you. And yet, without a mindset, what separates you from the beasts? "

I tried Bronze Age Mindset, and I got 30 years to life.

Expand full comment

What's so bad with confirmation bias? Without it there would be no stable identity, no bonding or binding ties- no civilization, no laws, no Science, nothing.

It may be that about ten percent of the population has an excess of the thing- but this isn't a problem provided the rest of us get mechanism design right. Another ten percent may have too little- e.g. me thinking I can twerk like Beyonce and that it would be a good idea to sell up and head for Hollywood.

Expand full comment

I think you're using "confirmation bias" differently than "ignoring evidence that doesn't match your expectation". Either that, or you're extremely cynical about everything.

Expand full comment

We all ignore evidence that doesn't match our expectation- the thing is called 'robustness'- so as to maintain a stable identity and subscribe to a Gentzen type sequent calculus which reflects 'uncorrelated asymmetries' giving rise to 'bourgeois strategies'. I am very cynical about shite like this because it is shite. I'm not cynical at all about stuff that makes my life- everybody's life- better.

Scott writes well and doesn't commit howlers to do with eigenvalues etc. But that really isn't saying very much.

Expand full comment

Well, I mean, for instance, saying that we'd have no bonding or binding ties without confirmation bias implies that, if we ever saw our friends and families clearly, we'd leave them. Saying we'd have no science implies that we already have no science; we just have a social convention, and fail to notice that it doesn't work. Saying we'd have no laws implies, I think, that our laws don't actually work, don't serve any purpose, and we'd be just as well-off with anarchy.

Expand full comment

Unfortunately we can't see clearly. We have limited information. Thankfully, because of confirmation bias, we make snap judgments and stick to them. Since this is 'common knowledge' others have faith that we have faith that they have faith etc in us. So this keeps the show on the road. Some people, it is true, lack confirmation bias or let others manipulate them by presenting fake 'evidence'. Macbeth & Othello needed to tell the witches and Iago respectively to get lost. Both were in relationships - that of King and subject in one case, that of man and wife in the other - where they should only have accepted evidence that confirmed their view that the King or wife was good and would do well by them. I think we do have a lot of Science because scientists and mathematicians have a confirmation bias that what they are doing is very useful. True, some scientists (maybe string theorists?) are barking up the wrong tree. But the amazing inventions made in our life-time have raised the prestige of STEM subjects. We tend to admire people who have a conviction which causes them to keep going though they cheerfully admit that the cause they are fighting for seems hopeless. We may smile pityingly as they latch on to some tenuous evidence to keep their hopes up but then we also wish we had a like faith and will to persevere. In the case of a justice system which is very dysfunctional, we do see 'disintermediation' and jurisdiction shopping. In other words, people find some other mechanism to enforce contracts and resolve disputes. Anarchy would actually be very difficult to achieve because informal mechanisms replace formal mechanisms very quickly.

Expand full comment

We don't have limited information; we have noisy information. In many cases, we can gather information until we have reached any desired degree of certainty short of complete certainty.

I used to be a cryptographer for the NSA. My opponents' job was to take a message and obscure it, to reduce it to as close to zero information as they could. They obscured the information in messages orders of magnitude more than could any of the sources of confusion pointed to by post-modernists. But, just as it's impossible to obtain completely certain information, it seems to be impossible to eliminate all information from a message. There's always a little "bulge"--a non-randomness in the observations. And you can measure the bulge, and compute how much ciphertext you'll need to gather before you can break the code and uncover the message. So even when messages are far more obscure and uncertain than anything encountered in ordinary life, the information content of their distorted form can be measured, and often the original message can then be discovered.
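For the curious, here is a minimal sketch of one classical way to measure that kind of "bulge", the index of coincidence (my own illustration; the comment doesn't name a specific statistic):

```python
from collections import Counter

def index_of_coincidence(text):
    """Chance that two letters drawn at random from the text are the same.
    Roughly 0.038 for uniformly random A-Z, roughly 0.066 for ordinary English."""
    letters = [ch for ch in text.upper() if ch.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

print(index_of_coincidence("ATTACK AT DAWN ON THE EASTERN RIDGE"))
```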

Similarly, there's no need to invoke bias to explain why scientists work on projects that have a low probability of success. They have some chance of success, and they're getting paid. Would they rather stock shelves at Walmart? You might need to invoke bias to explain why a scientist would leave a good job to work at home, alone, for years, at his or her own expense, on a project with little chance of success. But we don't see many of those.

Your observation that we've made amazing inventions is evidence that science is doing better than random. That's because the physical world has its own bulge, its non-randomness; and science is whatever methodologies best measure and exploit that bulge, to reveal the mechanisms of nature to any resolution we desire. We can't know that we know truth; but we can know with any desired degree of certainty whether we've arrived at something so close to truth that the human body and mind are incapable of distinguishing the difference from the truth.

I disagree with your claim that a strategy of making snap judgements and sticking to them is better for society, or more reliable, than a strategy of making sound judgements. Being able to have faith that someone will behave predictably seems less useful than being able to have faith that they'll behave reasonably.

I agree there's a problem with /rationalist/ judgement, as rationalist thought has a fractured, discontinuous landscape, where tweaking a single variable can cause a sudden about-face (which is what "conversion" means literally). A friend who's a rationalist is likely to suddenly cut you off completely for some trivial cause. This has happened to me many times in the past few years. But this is precisely because they have confirmation bias. They don't count observations or attach probabilities to their beliefs, and hence are unable to see an observation which correlates with some conclusion as not always proving this conclusion. In other words, they have the maximum possible confirmation bias.

(No, Bayesians are not fully rational using the traditional and ancient definition of rationalism, which frowns on empirical observation, numeric measurements, probabilities, irrational numbers, and practical applications. The features which discriminate Bayesians from rational logicians are all empiricist heresies. Their use of symbolic logic rather than distributed representations is still a prominent rationalist feature of Bayesianism, and still causes problems in their reasoning. But humans aren't really capable of using distributed representations except either natively, in human machine language, below the level of consciousness; or by using our symbolic conscious language to build interpreters which can execute code that implements distributed representations, as we do when we run a linear regression, perform a statistical test, or program a neural network.)

You still seem to be asserting that we stay with our friends and lovers only because we deceive ourselves about them. I deny that. I can even deny that while being completely cynical, for even if everyone is awful, some will still be more or less awful than others, and it seems unreasonable to claim that it's impossible for people to do better than random at distinguishing more-awful from less-awful people. And even if that were the case, most of us would be better off staying with some randomly-selected friends than going it alone; and unbiased reason would perceive this even more clearly.

I do think our science is dysfunctional, but this has rather to do with system for funding science and choosing scientists, which makes it doubtful whether anyone is doing their scientific work motivated primarily by the desire to discover something, or whether those scientists are chosen for their motivation or ability. The most classist, discriminatory, anti-diversity, intolerant, and power-obsessed sorting systems of all are those in place at Harvard, Yale, and those other bastions of idealistic thought which now shout the loudest against all those things. It's an ancient rhetorical strategy for concealing one's own intentions. Most of those institutions, with the exception of the German universities, were originally built with the sole purpose of educating preachers to indoctrinate the population with those specific religious beliefs that would allow the funders of them to maintain political control. The beliefs have changed, but the culture of domination, power, and fanaticism has not.

The ideas you're expressing which allege hopeless uncertainty about the world come from those same universities, were first used effectively by the Nazis (via Heidegger, I think?) in order to deconstruct Germany's existing moral order, and are now being used in the same way in America. They've been selected and propagated for their political utility. Their wrongness and lack of experimental justification--trace their origins and tell me if you find any--isn't an unfortunate bug, but a feature. Ideologies should be incorrect and unreasonable to be useful as a means to totalitarian power, for otherwise the organization one builds in order to seize power would be riddled with thoughtful, moral, or independent-minded people, and hence unreliable.

I like your observation on the difficulty of anarchy.

Expand full comment

I think I shouldn't have said "moral" above when I wrote, "The ideas ... which allege hopeless uncertainty about the world ... were first used effectively by the Nazis... to deconstruct Germany's existing moral order".

Deconstructionism is a tool not against traditional morality, which can't be deconstructed since it isn't based on logic; but against Enlightenment-style social structures built using observation, honest debate, reason, and mistake theory. Typically a deconstructionist wants to replace an epistemology of reason with one of myth, and mistake theory with conflict theory. It was used by Plato against mistake-theory-based Athenian democracy; by Ibn Taymiyyah, the foundational philosopher for radical Islam, against Aristotle (he made the basic deconstructionist arguments in the 14th century); by the Nazis against the Weimar Republic; and now by post-modernists and some activists against science and mistake theory.

Expand full comment

Re. "They act as good Soldiers for Team “we’re definitely going to make a billion dollars”, and that certainty rubs off on employees, investors, etc and inspires confidence in the company": My own anecdotes are counter-examples.

- Doug Lenat tried to recruit me for the Cyc project way back in the 1990s, offering, I think, $35,000 / year plus stock options. I was excited about it until I asked how much outstanding stock there was, did some math, and told him, "The company would need a $1 billion IPO just for this to be decent pay." He said something like, "Of COURSE the company's going to have a $1 billion IPO!" He was so confident in the company that people who worked for him ended up screwed financially.

- I went to another startup instead, where the same sort of thing happened. The founder had been psyching us up with talk of getting rich. When we discovered he'd been so confident in the company's future that he'd allocated just 1% of the company's stock to be divided up as options among all the first-year non-management employees, it devastated morale.
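(In both cases the deflating part was simple arithmetic. The numbers below are hypothetical, except for the $35K salary and the 1% option pool, but the shape of the calculations was roughly this:)

    # Anecdote 1: what valuation makes the options worth the pay cut?
    # Hypothetical figures except the $35K salary.
    salary = 35_000                   # offered salary per year
    market_salary = 85_000            # assumed pay at a comparable job
    years_to_vest = 4
    pay_cut = (market_salary - salary) * years_to_vest    # $200,000 forgone

    options = 10_000                  # assumed option grant
    shares_outstanding = 50_000_000   # assumed total shares (grant = 0.02%)
    needed_valuation = pay_cut * shares_outstanding / options
    print(f"${needed_valuation:,.0f}")    # $1,000,000,000 on these assumptions

    # Anecdote 2: splitting a 1% option pool among the first-year employees.
    # Hypothetical figures except the 1% pool.
    option_pool = 0.01                # 1% of the company for all of us
    first_year_employees = 25         # assumed headcount sharing the pool
    exit_valuation = 100_000_000      # assumed optimistic $100M exit
    per_person = exit_valuation * option_pool / first_year_employees
    print(f"${per_person:,.0f}")      # $40,000 each, before dilution and taxes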

Expand full comment

Working for a startup is like getting paid in lottery tickets.

Expand full comment

Yeah, kind of. My point was that these managers turned potential employees away, or demotivated their existing employees, as a result of being too confident. Counterexamples to the idea that great enthusiasm is good because it motivates employees.

Expand full comment

Maybe there would be fewer "soldiers" if there were fewer "wars"?

The more people want to run the lives of others via politics, the more things get controlled via politics. Politics is war. When you insist on forcing your way on others, you need soldiers on your side, and the other side's soldiers get mobilized. Big government creates soldiers.

Expand full comment

I think the Scout-vs-Soldier (or Conflict-vs-Mistake) framing obscures a simpler, more fundamental description.

What's described as Soldier (or Conflict) is really just "treating instrumental goals / beliefs as core values". Presumably there's a whole lot more agreement on the actual core values, regardless of politics. And if there isn't, a Scout (or Mistake) mindset isn't much help in reaching agreement anyway.

On that reading, the Soldier / Conflict mindset isn't just an alternative to the Scout / Mistake mindset; it's an error. Still, it has some advantages - it decreases complexity and increases cohesion within the tribe (since everyone agrees at a higher level than if they merely agreed on core values and each member tried to compute valid beliefs from scratch).

Expand full comment

Can we then call conflict theory the error of caching your chains of thought too aggressively?
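(In programming terms, a toy sketch of the failure mode: a memoized conclusion that's never re-derived, even after the evidence that justified it has changed.)

    # Toy illustration: the cache key ignores the evidence, so the first
    # conclusion reached about a topic sticks forever.
    _cache = {}

    def conclusion(topic, evidence):
        if topic not in _cache:
            _cache[topic] = "support" if evidence > 0 else "oppose"
        return _cache[topic]

    print(conclusion("policy X", evidence=+3))   # "support"
    print(conclusion("policy X", evidence=-5))   # still "support": stale cache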

Expand full comment

"For example, I sometimes feel tempted to defend American measurements - the inch, the mile, Fahrenheit, etc. But if America was already metric, and somebody proposed we should go to inches and miles, everyone would think they were crazy. So my attraction to US measurements is probably just because I’m used to them, not because they’re actually better.

(sometimes this is fine: I don’t like having a boring WASPy name like “Scott”, but I don’t bother changing it. If I had a cool ethnically-appropriate name like “Menachem”, would I change it to “Scott”? No. But “the transaction costs for changing are too high so I’m not going to do it” is a totally reasonable justification for status quo bias)"

Wait - what defense could you have for the Imperial measurement system *other* than transaction costs? I think that's a good defense, btw, but I can't imagine any other.

Expand full comment

On a somewhat related topic, I would love it if there were a link to the Substack comment formatting rules someplace prominent when commenting...

Expand full comment

I'll defend the Fahrenheit scale on the grounds that it's objectively better for measuring the weather, since 0 is roughly "as cold as it gets in winter" and 100 is roughly "as hot as it gets in summer", at least for the major world population centers.
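(For reference, the conversion F to C is (F - 32) * 5/9; a quick check of where that 0-100 range sits:)

    # Fahrenheit to Celsius, to locate the 0-100 weather range.
    def f_to_c(f):
        return (f - 32) * 5 / 9

    print(round(f_to_c(0), 1))     # -17.8 C: about as cold as winter gets
    print(round(f_to_c(100), 1))   #  37.8 C: about as hot as summer gets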

Expand full comment

Also, with temperature you never need to multiply by powers of 10 the way you might when converting meters to kilometers, so the usual arguments for metric are worthless here.

Expand full comment

I think one of the valuable things about arguing politics on DSL is seeing the same kinds of arguments and opinions you have made, but with the polarity reversed. It really helps provide a sense of perspective and makes you wonder how much of what you believe is actually general principle versus what would help your side.

Expand full comment

Does anyone own the Kindle edition? Trying to decide which edition to buy. Graphics tend to suck on a Paperwhite.

Expand full comment

That is all well and good, but how many reconnaissance units does an army need? Someone has to do the heavy lifting.

Expand full comment