312 Comments
Vlaakith Outrance:

Scott Alexander breaks down how impossible it is to discuss anything remotely controversial with my boomer parents, episode 2.

Mark:

Boomer+father+son here. Are your parents the people to discuss controversial stuff with? We had peers. You have reddit (and stuff).

Vlaakith Outrance:

My parents bring up controversial topics all the time, because their media diet is 24/7 news channels and Twitter, and their bullshit meter is drastically underdeveloped. I'm glad you had peers. I also talk to my peers about stuff, daily. I don't post on reddit.

MLHVM:

My parents did the same thing and always tried to act like it was my filial duty to agree with them. I stopped talking to my mother five years before she died. It was a great five years. Don't miss her. She'd be a psycho in the current age of sludge media.

Mark:

I see. My pre-boomer parents do/did the local paper and the 15-minute news at 8pm. Always vote(d) conservative. All-day news is not helpful for staying sane, indeed. Maybe a decent book next birthday? Abundance or The Blank Slate? Otherwise, it's hard to say: pretend to be too dumb, or just nod along. I know that is easier said than done, and I often fail at it. See it as a reminder that people can believe BS and NOT be totally evil. ;)

neuretic:

>boomer policing the idea of talking to one's parents

lol

Anonymous:

"We had peers. You have reddit"

Brutal!

Doug S.:

Oh, my boomer dad, who is a retired engineering professor, can be equally insufferable and will happily argue anyone's ass off for as long as they let him, but 1) he usually knows what he's talking about (and will cite non-crazy sources to back himself up if necessary) and 2) I usually don't have strong disagreements with him about the things that we discuss, so I don't often end up on the bad end of his lectures. So I generally enjoy letting him talk at me about stuff.

Eremolalos:

Jeez, just to keep the sample representative I'd like to say that my parents had their faults but they actually did not do the "but" thing when I pointed out that they were wrong about something factual.

Red Seventyfive:

Some bright-eyed young progressives aren't that much better. People don't change much generation to generation.

NoRandomWalk:

I believe X for 100 reasons that are collectively sufficient. I expect 10 of them to be wrong. And so long as 50 of them are true, I stand by my general claim.

I usually start my argument by listing 10 of them. When you point out which of the 10 is wrong, I am ready to supply the next one on the list of 90.

What is the right way to express this general strategy, in a way that comes across as good faith?
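
To make the arithmetic concrete, here is a toy sketch (the likelihood ratios are made-up numbers, not a model of any real argument): treat each reason as independent evidence multiplying the odds of X, so losing a handful of weak reasons barely moves the total.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent pieces of evidence by multiplying odds ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# 100 weak, independent reasons, each multiplying the odds of X by 1.1
reasons = [1.1] * 100
full = posterior_odds(1.0, reasons)          # all 100 stand
pruned = posterior_odds(1.0, reasons[:90])   # 10 are refuted outright

print(full, pruned)  # roughly 13780 vs 5313: both still strongly favor X
```

The caveat, of course, is the independence assumption: if the refuted reasons share a source with the surviving ones, the surviving odds ratios shrink too.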

Alex:

(I don't believe that you believe it for 100 reasons)

moonshadow:

(OK, you're right and I'm wrong about the exact number, BUT...)

ZumBeispiel:

"One hundred proofs the Earth is not a globe", William Carpenter 1885:

https://www.gutenberg.org/files/55387/55387-h/55387-h.htm

Egg Syntax:

That is absolutely charming, thanks for sharing it. Some of them are obviously silly, but others require some thought to find the error. It'd be a fun thing to give to kids in a science class and ask them to find, say, 10 errors.

Jonathan Vander Lugt:

If you go into an argument knowing that 10% of the things you could say to support your claim are not correct, then, on premise, I don’t think this style of argument can be called good faith. At best it’s only going to come across as weird and annoying and at worst as legitimately bad faith whataboutism.

The way you could come across as good faith is by making a good faith attempt to verify claims before you make them.

Nate Scheidler:

I think he's just expressing the idea that he knows some of his arguments will be wrong, not that he knows which ones they are. Only the very conceited would assume that every argument they had was certain to be correct.

Scott Patton:

If that is the case, then it seems the value of "some" is unknowable. It could be 90% wrong instead of 10%.

The good faith way to proceed (assuming multiple arguments) is to pick the best one. That one argument should be as certain as possible considering the context (non-expert etc.). If that best argument gets shut down, gracefully concede or at least withdraw, then come back later with a better argument, if there is one.

Radar:

It seems to me equally fine to just admit when we've got something wrong, period. An overall attitude of defensiveness in a conversation is counterproductive, even though it is very widespread. Getting comfortable acknowledging mistakes and making apologies along the way, and having that not be a big deal, is a really useful thing.

I agree it seems bad faith to me also for a person to know they are wrong sometimes (which is normal and human and pretty universal) but to converse by throwing an arsenal at their interlocutor, and holding an overall stance of "I'm right" in any given moment. More humility, more learning.

AnthonyCV:

Could, yes. And it's not any particular person's responsibility to counter 100 arguments on a given question for a given person. But 'unknowable' is not the point, and not a useful response. There's an inside view - "To the best of my ability to discern thus far, I believe these 100 things, and will happily change my mind in response to new information on each one, and I'll change my mind on the conclusion they all seem to point to when I've changed my mind on enough of the supporting arguments." And there's an outside view - "In the past, in situations where I've held what seems to me to be similarly persuasive evidence, I've been wrong about 10% of the time."

In other words, the person is making a claim about their level of calibration, which is absolutely a skill that can be trained and that some people are better at than others. If I'm well-calibrated, then the probability for any given question that I'm off by ~100x on the odds ratio (10:1 vs 1:10) for the validity of almost all of 100 arguments is extremely low, unless I'm making some underlying mistake that is damaging my thinking in this context relative to my general calibration level.
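
A rough sketch of that calibration claim, treating each argument's correctness as an independent coin flip (a simplifying assumption; correlated errors are exactly the failure mode to worry about):

```python
from math import comb

def prob_at_least_k_wrong(n, k, p_wrong):
    """P(at least k of n arguments are wrong), errors i.i.d. Bernoulli(p_wrong)."""
    return sum(comb(n, j) * p_wrong**j * (1 - p_wrong)**(n - j)
               for j in range(k, n + 1))

# A calibrated 10% error rate makes "half of my 100 reasons are wrong"
# astronomically unlikely -- unless the errors are correlated.
p = prob_at_least_k_wrong(100, 50, 0.10)
print(p)  # far below 1e-20
```

So the interesting disagreement is rarely "which 10 are wrong" but whether some shared underlying mistake is correlating the errors.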

darwin:

An equivalent way of saying this is that it's disingenuous to make a statement you are only 90% sure of, which seems like a disablingly high standard for casual conversations among non-experts.

Famously, most science papers are only held to 95% certainty for their conclusions. Saying you are being disingenuous unless you are within 5% of the accuracy of an expert scientist seems unfair.

Of course, the technical solution here would be putting a caveat before literally every sentence you say: 'I'm pretty sure X, I'm pretty sure Y, I'm pretty sure Z; one of those might be wrong, but I don't think so, and I don't know which one'.

Everyone could talk like that for literally everything they say, if we wanted to acknowledge Bayesian uncertainty and that all beliefs are probabilistic.

BUT, if we're already acknowledging that, can't we just acknowledge that *all speech* already works this way, no belief is ever 100% probable and it's *always* different levels of 'I think'? Can't we just take those caveats as assumed, every time anyone speaks?
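
For what it's worth, the hedges do compound if the claims are independent; a one-liner makes the point:

```python
# If each of ten independent claims is 95% likely, the chance that
# *all ten* hold is only about 60% -- per-sentence confidence compounds.
p_each = 0.95
p_all = p_each ** 10
print(round(p_all, 3))  # 0.599
```

Which is arguably a point in favor of taking the caveats as assumed: stated or not, a ten-claim argument is never as certain as its strongest single claim.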

Jonathan Vander Lugt:

Sure, but what you're outlining IMO is meaningfully different from what OP mentioned. In practice arguing with OP is just going to seem like OP believes a thing and when they're pointed out as being wrong, instead of acknowledging and reconsidering, they're going to say "wait but actually that's in my confidence interval of error and here's an alternate reason why I'm right, please disregard the old one." I do not see a situation where that is able to be interpreted in good faith.

Other commenters have highlighted this next point more succinctly but it also doesn't factor in the chance that maybe the 1 or 2 things they're wrong about are substantially more important than the other 98. Or even some smaller but still significant number. If you're not willing to update your priors on new evidence but instead cycle to some new point like a rolodex it's not going to come off well imo.

Alexander Corwin:

> Famously, most science papers are only held to 95% certainty for their conclusions.

yes, and look where that's gotten us

Eremolalos:

I don't think the 95% certainty thing is the main problem, though. Do you really think it is?

Alexander Corwin:

Certainly there are multiple problems, but I think starting from "it's okay if 5% of everything our field produces is just false" is a pretty bad baseline.

Philosophy bear:

Yes! I am so passionate about this, P=0.05 as a cutoff is far too generous. A 5% probability of getting a positive result if the null hypothesis is true is WAY too high. Even lowering it to 1% would put us in a much better place.

Radar:

I'd like us to move towards a world where more people feel comfortable trying out saying things they're not sure about while also feeling more able to openly self-correct along the way... and for all of us not to feel we have to work so hard to defend our egos. And to discover that nothing falls apart when we don't work so hard to defend our egos. And that everything gets more interesting from that place.

A conversation where one person is just performing their rightness seems singularly uninteresting to me.

Dweomite:

> Famously, most science papers are only held to 95% certainty for their conclusions.

That's a common misconception. The standard for statistical significance is <5% chance that the null hypothesis would generate data as extreme as yours. This is NOT the same as a 95% chance that your alternative hypothesis is correct, but many people (including many scientists!) often confuse the two.
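
The difference is easy to see with Bayes' rule. A minimal sketch, with purely illustrative numbers for the prior and statistical power:

```python
def posterior_true_given_significant(prior, power, alpha):
    """P(hypothesis true | result significant at level alpha), via Bayes' rule."""
    true_pos = prior * power          # true hypotheses that reach significance
    false_pos = (1 - prior) * alpha   # false hypotheses that reach it by chance
    return true_pos / (true_pos + false_pos)

# Illustrative: 10% of tested hypotheses are true, studies have 80% power.
# A "significant" result then means only a 64% chance the effect is real.
print(posterior_true_given_significant(prior=0.1, power=0.8, alpha=0.05))  # 0.64
```

The gap between "p < 0.05" and "95% certain" is entirely driven by the prior and the power, which the p-value alone says nothing about.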

AnthonyCV:

True, a P<.05 standard is *much* less strict than '95% certainty for their conclusions.' I.e., @darwin was understating (intentionally or not) the strength of this argument.

Paul Goodman:

To be fair, 90% confidence permits twice the error rate of 95% confidence - a much bigger difference than "within 5%" suggests. Still agree with your overall point though.

quiet_NaN:

I think that it is fine if you are upfront about it. "Here are ten arguments, from past experience I expect that I will have to retract one of them" is fine. Scott kind of has a similar disclaimer on his links posts, IIRC.

Of course, publishing a ten-step proof in a mathematical journal when you expect an average of one step to be terminally unsound is probably a bad idea, but by the standards of internet arguments, 10% factually wrong but in good faith is not that bad a standard: certainly worse than Scott but better than his commenters, I would guess.

I mean, as far as I recall I have in the past made statements about stuff which I was less than 95% sure about -- and turned out wrong. Typically, I hedge a bit by saying "as far as I recall" or something. If you frame it as "this is the ironclad, mathematically rigorous proof that all my opponents are wrong and stupid", then your argument being unsound is a bit more embarrassing, though.

Tracy Wilkinson:

I think this depends on the type of argument.

Some arguments depend on a chain of logic where if a single link is wrong, then the end outcome can't be relied on.

Some depend on a thicket of knowledge, where one error doesn't upend the whole thing.

Let's say that you're arguing that Alice is Bob's mum (and you don't have access to DNA testing). You might amass various evidence: that Alice and Bob both have red hair and the same unusually shaped nose, that Alice is listed as Bob's mum on his school's contact list, that ten years ago Alice shared a photo of herself in a hospital bed holding a newborn baby with a caption saying "We are so happy to welcome our little Bob!" and that Bob is now ten years old, etc.

Even if it turns out that Alice and Bob both had red hair only because Alice dyes her hair, it still might be entirely reasonable to conclude that it looks very likely that Alice is Bob's mum.

Paul Brinkley:

The unstated assumption I always see is that we aren't able to check the argument logically; we're forced to take someone's word for it, until some later date when the truth is unmistakable. Therefore, we can judge their reliability on future claims based on their reliability on past claims where the truth was eventually a matter of record.

We can of course inspect their argument for dependencies, and if they say Q -> R -> S -> T, we can agree that T is less likely than if (Q or R or S) -> T. OTOH, we often can't even establish dependencies, and the default assumption is something like P(T) = P(Q) + P(R) + P(S), or even P(T) = 1 - (1 - P(Q))(1 - P(R))(1 - P(S)), and the more Qs, Rs, and Ss we can find are true, the truer we're expected to find T to be.
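
Those two default combination rules behave quite differently: naive addition isn't even a probability, while the independent-supports ("noisy-OR") version saturates below 1. A quick sketch with made-up numbers:

```python
def naive_sum(ps):
    """Adding support probabilities -- overshoots, and can exceed 1."""
    return sum(ps)

def noisy_or(ps):
    """P(T) if T holds whenever at least one independent support holds."""
    p_none = 1.0
    for p in ps:
        p_none *= (1 - p)
    return 1 - p_none

supports = [0.5, 0.5, 0.5]
print(naive_sum(supports))  # 1.5 -- not a valid probability
print(noisy_or(supports))   # 0.875
```

Either rule rewards piling up Qs, Rs, and Ss, which is why a long list of supports feels persuasive even when the dependencies were never established.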

Boris Bartlog:

I would have a hard time coming up with an example of this where the '100 reasons' couldn't be somehow summarized statistically, or grouped in some reasonable way. In principle of course there might be 100 ... really different and somehow incommensurable reasons? But is there an example of this where the argument can't be distilled or summarized in some useful way? Honestly it feels rather like an attempt to create a flimsy intellectual justification for something like a Gish Gallop.

NoRandomWalk:

Well, the issue is that there's nothing to argue with if you summarize them statistically.

You have a controversial view. Something like, 'brinksmanship is more effective for negotiation than offering halfway between your position and your best understanding of your counterparty's weighted by leverage' and it's based on a number of salient examples.

To make your point, you list the most salient ones. Then, you are corrected on important facts that clarify the history in such a way that removes one of the example's support for your position. Any particular example isn't highly loadbearing. It's likely that your interlocutor started with the least-true example they are familiar with, rather than investigated one at random, or evaluated all of your points for truth (even you didn't have time to do that).

If an example chosen at random was incorrect, then sure. But it's nice to be able to list many examples to explain your position, and be able to say 'the fraction of examples that would need to be false to change my mind is quite high actually, and higher than the time you have to factcheck.'

I do agree with Scott that 'yes, but' is hygienic to maintain shared good faith.

Thomas K:

>and it's based on a number of salient examples.

I would argue that your belief isn't necessarily based on just the examples themselves, but rather on some overarching principle that is supported by these examples, maybe something like, "I believe people will often back down in the face of brinkmanship because [risk aversion or something]". Your strategy should be to extract the broader argument that is tied together by examples or sub-arguments and defend that, rather than trying to litigate each example separately.

To generalize, if you're proffering an argument in defense of your position, it should be substantial enough that a refutation results in a meaningful update to that position. If you cannot conceive of a way to pack a meaningful chunk of your view on a given issue into a cohesive argument, then I think you should at least specify at the outset: "hey, just so you know, I'm using play money here, this isn't going to change my mind."

NoRandomWalk:

Good points, thank you

awenonian:

If you present 10, and they disprove 1, and you don't change your mind, that's fine, and they'd be weird to expect you to. If you present 10, and they disprove all 10, and you say "well, I actually have 90 other ones, so I'm not gonna change my mind" it starts to bring up questions. Like, if the 10 you presented were all wrong, are you still confident on the next 90? Did you mean to present the 10 weakest ones?

If this does still seem appropriate despite those questions, I'd say the way to express this is "I acknowledge that the 10 I presented are wrong. They were a small fraction of my reasons for this position, here are the next 10, if you find these wrong too, then I will have to reconsider my position."

If the next 10 are not ones that would make you reconsider your position, pick different ones.

If this is a case where you believe something for 100 complex reasons that are independent such that disproof of one offers no change in credence for any other, and you present 1 at a time, it's understandable that disproof of that 1 won't change your mind, but it's also likely this is an extremely deeply held worldview, and should be dealt with by a conversation specifically structured for that, and not like, a couple back and forth internet comments.

NoRandomWalk:

Very well put, thank you.

DanielLC:

One example would be that the earth is round. Sure I could group this into things like:

* Our observations are consistent with Earth being round

* Our observations are not consistent with Earth being flat

* A giant conspiracy would not be able to stay secret

But if you want to argue that Earth is round, you can't just say that observations are consistent with that. You have to actually say those observations.

Paul Brinkley:

"I would have a hard time coming up with an example of this where the '100 reasons' couldn't be somehow summarized statistically, or grouped in some reasonable way."

The readiest example that leaps to mind is some disposition I might have toward a political candidate. Such people have often done many things in their career; it's possible for some of the more senior ones to have racked up over a hundred things I might care about if I'm following closely. And they could be essentially separate things, not just "he pardoned 20 people, signed 30 EOs, passed 40 bills, etc."

A hundred seems like hyperbole even so, but "dozens" seems plausible.

Melvin:

> I would have a hard time coming up with an example of this where the '100 reasons' couldn't be somehow summarized statistically, or grouped in some reasonable way

I think the original "Trump Bad" would be a typical example. But I'm bored of discussions about Trump, so let's talk about Warren G. Harding. I know nothing about Warren G. Harding but I'm going to read his wikipedia article and pretend I do.

Someone who wants to argue that Warren G. Harding is bad might quite easily be able to come up with a hundred bad things about him, which could probably be classified into a few categories -- he was corrupt, he told lies, he had extramarital affairs, he failed to prevent Japanese expansionism, he was a racist, he laid the groundwork for the Great Depression, and so forth. In an argument between a casual Harding hater and a well-informed Harding fan, I imagine you'll see various specific complaints trotted out, some of which will be more accurate than others.

Ideally both parties will go away acknowledging that some of the things they thought they knew about Harding were wrong, and although neither will have fundamentally changed their opinion of the man they will hopefully acknowledge that his legacy is complicated.

Tracy Wilkinson:

So one thing in the field of economics is that it looks like achieving good economic growth requires a lot of things to go right - rule of law, low rates of government corruption, avoiding hyperinflation, avoiding government fiscal crises, having a reasonably effective tax system, etc.

There's a lot of historical examples of countries that had multiple things going right but still got one thing really wrong and did badly economically. But these aren't exactly easy to summarise; I mean, I'd be jumping between the USA in the 1930s, Japan in the 1950s, pre-Revolutionary France, etc.

Paul Brinkley:

At a tangent: my working hypothesis these days is that good economic growth needs only one basic thing: high public trust. And by that, I mean that any one person can look at any other one person in their society and think "because that person is in my society, I can trust them with a trade". "The apple I buy won't be rotten inside; the check won't bounce; the shipment of steel will arrive on time".

In terms of your examples: people can trust each other to abide laws without requiring a judge or team of lawyers looking over their shoulder; they can trust government officials to try to deliver on the government's stated promises, and to agree on what those promises are; they can trust others to not run prices up either out of greed for more of the pie, or fear that whatever pie they get will soon shrink; government officials can trust the people to report risks and hardships honestly, so that they can allocate aid correctly and avoid crises; and they can trust the government to levy only as much tax as it truly needs to function, and for fellow taxpayers to treat that levy as same, and pay it without complaint or attempt to evade.

The idea here is that trust reduces economic friction. If all economics is exchange between individuals, then the more individuals can trust in the quality of the thing they're trying to acquire, the less wealth has to be expended in checking. (The catch is that the less checking happens, the more tempting it is for someone to exploit that and get away with a fortune, especially if society is large and they can throw away their reputation in one part of it and start over elsewhere.)

The bold claim here is that *all* economic exchange comes down to that. On the planet, granted. This breaks down if there's literally not enough raw material in the system to nourish everyone or something, like in an isolated space station or a lifeboat. And we could explore levels of trust affecting this, too - we might trust each other to not sell bad fruit, but not to renege on a million-dollar contract, and an economy could chug along at some rate dependent on that level.

Tom:

I think this is just a bad way to think about things and will not generally cause your beliefs to converge with reality on non-trivial questions

Mark Roulo:

"What is the right way to express this general strategy, in a way that comes across as good faith?"

I think it would be honest for you to mention that you have 100 reasons, are only presenting 10 of them (hopefully the 10 you are most confident in) and that you will continue to provide new reasons as your presented reasons are refuted.

This lets the people you are engaging realize that for all practical purposes you will not change your position no matter how the discussion goes and no matter how many of your reasons are revealed to be false/incorrect.

Cjw:

OK, well we've encountered this in AI debates a lot lately. For example Yud has a list of 30-40 ways in which AI alignment would be nearly impossible to achieve. If somebody refutes one of those, or perhaps some AI dev actually overcomes one of those, what's the proper response? I suppose he would acknowledge the refutation if accurate, and then examine the list of his other reasons to see if the refutation of the 1 point makes any of the others less likely. Almost certainly if you have 100 reasons to believe X, those 100 propositions are not fully statistically independent of each other, so if you lose one it's reasonable to re-examine your others.

Now it's annoying when some tech bro says "actually it turns out your 2010 prediction of AI interpretability problems came out completely backwards" and then adds the unwarranted coda "so you're totally discredited". When you have 50 other arguments still justifying your belief. But it's annoying AS a dunk in social media. If shaping public opinion were irrelevant, then the only person being harmed by doing that dunk is the dunker, who is deluding himself with flawed reasoning. You, as the better reasoner, shouldn't have a "nitpicking heuristic" that just ignores this situation, you probably DO want to at least reflect on how much of your certainty depended on this and whether any other justifications you had are weakened.

TGGP:

Are there predictions that came true, along with ones that were falsified?

Jiro:

This is why the idea of Gish Gallops is real.

>Now it's annoying when some tech bro says "actually it turns out your 2010 prediction of AI interpretability problems came out completely backwards" and then adds the unwarranted coda "so you're totally discredited".

If he has 51 arguments, most people who can refute them can only refute a few. If you *don't* admit that losing one or two discredits them, there is nothing that anyone could reasonably say against you because nobody's gathering that much expertise in one place. (Unless someone makes a group-written FAQ refuting everything, as happens with actual-Gish).

I am skeptical that anyone's actually going to have 51 good, independent arguments for something.

Cjw:

I think "discredit" is used too aggressively in the type of discourse I'm talking about. A reasonable person finding one prediction failing for an unexpected reason would make an honest assessment of the type you're hinting at in the end, you would look for interdependencies. Both parties would do that if they were rational. However it may be the case that the refuted claim was doing very little of the work in justifying the broader conclusion, in which case neither the conclusion nor the person arguing it have been "discredited" to a degree that you should dock points from all of their other claims without such an analysis.

The issue of narrowly focused refutations being given outsized import is one I see a lot in arguments with conspiracy theorists. Let's say my confidence that Oswald acted alone to kill JFK is 99.99%, based on a whole bunch of facts I have learned over the years about the murder. The conspiracy-minded researchers will zero in on certain details and spend 100 pages of effort to undermine one fact. Supposing that my belief about Oswald's acquisition of the rifle was supported in small part by comments his wife made to one of the Russian emigres in Dallas, and a conspiracy researcher with a niche interest is able to show that this statement was mistranslated in the Warren Report. They will treat this like a gotcha that discredits the entire conventional account of the assassination. Then the skeptics will memorize a bunch of these and do the "gish gallops" of their own. But in reality I really do have 50+ really good and largely independent reasons to believe Oswald acted alone, and 1000s of mostly or fully-independent facts pointing in that direction.

If you want to say somebody's theory is discredited by one or two conclusions being wrong among many, you have to show that.

Jiro:

I hope you can see the catch-22 though. If you have 50+ independent reasons for believing something, and someone wants to refute you, exactly what are they supposed to do? They can't go through all 50+ and refute them, they don't have the time or the expertise. Is the only option to give up?

Dustin:

I think so, yes. If someone has 50 arguments and you're not willing to engage them all, you can't say the person is wrong.

You can say your priors are against them being right, but you have to acknowledge that they may have an argument you haven't considered. That's the best you can do.

Michael Watts:

No, if I look at three of your arguments and all three of them are wrong, I'm going to conclude that you're an idiot and it's pointless to look at more of your arguments.

REF:

Except, I don't need for you to change your mind. If we are discussing in good faith, then I merely need to convince you to reevaluate 45 of them on your own. If, after having 5 disproven (or severely weakened), you are unable (or unwilling) to reexamine them, then there is little point in my taking the time to refute them myself.

Cjw:

I guess it depends on why you want or need to refute them. Good chance that I have 50+ independent reasons because I'm correct, in which case you should just be persuaded by them. If you are convinced I am wrong without going through the reasons, then you probably have some intuition that there's a hidden premise nearly all of them rely upon, so figure that out and attack it instead. Or you're just engaging in motivated reasoning, like the JFK conspiracy nuts who needed to sell books. Or maybe what you really want to oppose is the call to action that could follow from my conclusion, so you should pivot to that step (e.g. maybe it's easier to argue against carbon mitigation policy steps than to argue against every single argument or observation climate alarmists have based their conclusions upon.)

On rare occasion, I suppose you might be actually obligated to argue against something that seems true and is supported by a variety of independent evidence. For me, that was only the case when I was a criminal defense lawyer, and thankfully in that specific context you are given a legal standard which allows you instead to pick out the weakest load-bearing evidence and just go after that, the state has to prove A+B+C for a conviction so you only need to prove ~A or ~B or ~C, your pick.

Jiro:

>I guess it depends on why you want or need to refute them.

Suppose it's creationism after all, and there's a list of 101 reasons to think the Earth is young (such as https://creation.com/en-us/articles/age-of-the-earth ). Only, it's enough years ago that nobody's compiled a FAQ listing the flaws in the arguments.

Notice how this scenario has actually happened, yet it fails all your points. The 101 reasons aren't correct. There's no hidden premise they all rely on (some do, but not enough that I can dismiss the whole thing). I'm not engaged in motivated reasoning, and while the call to action isn't great, that isn't my main objection. It's just a list of 101 (mostly) independent reasons, every single one of which is wrong.

What am I supposed to do when faced with this, if I can't just refute the handful I know about and say "screw the rest"?

Anatoly Vorobey:

One way is to have an independently trusted third-party come up with 5 random numbers below 50, then refute those particular arguments. This should give you (the bearer of the 50 arguments) very strong ground to revisit your grounds for believing in all 50.

In practice this usually doesn't work because the person who thinks they have 50 independent high-quality arguments that are really bullshit will not be persuaded by a random-check refutation. But it *could* work. Maybe you're an intelligent and fair-minded person who is mind-killed by propaganda on some political question (e.g. Israel vs Palestine) and you genuinely think you have 50 good reasons, but you lack enough context to see that they're really based on a handful of incredibly wrong beliefs about the basis of the conflict. Refuting 5 surface-level arguments will be enough to get you to examine the other 45 more deeply and understand that they're really not independent after all.

Basically you want to credibly state that you didn't just painstakingly find 5 false arguments after going through 40 correct ones or something like that. Randomness is the best way to achieve it, but it may just be too difficult to refute some arguments (e.g. they're ill-defined or weaselly). I used to encounter this problem regularly when I engaged with a blogger who posted long lists of, say, 86 recent news-or-commentary items that conclusively prove that [one political party] is the devil and its supporters are the worst of humanity. I would read through it and catch two or three items that both seemed a little suspicious (too good to be true) and seemingly easy to verify. I'd check them, see they were lies or significant distortions, and comment with details. So not a random selection but an "easy to verify or refute" selection. Mostly I'd get back comments from the OP's supporters that went like "oh, so out of 86 comments you only found something to complain about in 3? Sounds pretty good to me!" Then I'd sigh and move on.

Expand full comment
Taleuntum's avatar

I think this is a good idea, and it's worth trying at least a few times.

You might know already, but in case you or others don't: there are already trusted public randomness beacons! See: https://blog.cloudflare.com/league-of-entropy/

Therefore, you can greatly simplify the process of actually getting those random numbers: you don't have to coordinate with the person you are about to randomly fact check at all!

Expand full comment
Jiro's avatar
20hEdited

Or you can just refute 1 2 3 4 5. Same result, without having to explain to normies why you're bringing in random number generators. 1 2 3 4 5 is a Schelling point that means the other person can't accuse you of cherrypicking the weakest arguments.

(It also has the same problem that you described--it doesn't let you reject ones that are too ill defined to either verify or refute.)

Expand full comment
AnthonyCV's avatar

I would say this question only matters when someone is important enough in context that their views need a confirmed, public reputation for some purpose, and dogmatic enough not to re-evaluate their own beliefs in response to evidence. In which case, you can use somewhat more costly methods.

One traditional solution is to let 50 people each challenge the reasons they can challenge, and then have a 51st person collect them and write up a review weighing the sum of all of them against the original. That the original person themselves dismisses or tries to discredit each of the 50 should not matter.

Expand full comment
Jerry's avatar

One other important part: if you have some idea which sub-beliefs supporting the main belief are the most load-bearing, prioritize sharing those, even if their being wrong means it's time for a big update.

An example of this is street epistemology. A common strategy to guide conversations to a productive place is to ask for the person's belief, ask how confident they are that it's true, ask what the main reason they believe it is, and ask whether, if that reason were shown to their satisfaction to be false, they would no longer believe the thing being examined. If not, that reason is just a red herring that's not worth wasting time on, and it's better to find something that would at least have some impact on their confidence in the belief.

Expand full comment
Deadpan Troglodytes's avatar

Another way to look at this is that you've just come up with an honest description of most people's beliefs about most things.

Priors come from dozens and sometimes hundreds of "reasons": little empirical events, things they've observed directly or ingested via report, thoughts they've had. Some of them are sturdy, others are fragile, and some are obsolete. Sometimes, people haven't integrated all of them, or updated.

Expand full comment
Feral Finster's avatar

Many of those reasons amount to cognitive dissonance. Contrary to popular belief, the educated and intelligent are more prone to cognitive dissonance, because they are better at symbol manipulation to reach a desired outcome, and because they tend to wrap their sense of self in their beliefs.

Expand full comment
Deadpan Troglodytes's avatar

Agreed. That's part of what I was getting at when I mentioned that people haven't always integrated all of them. I could have been clearer and mentioned that those reasons can often be in conflict.

Expand full comment
Yug Gnirob's avatar

"You're right, that was a flawed example. Here's another one."

Expand full comment
Meefburger's avatar

I think this is generally not how justified belief and disagreement actually work. The truth values of your reasons are correlated (especially if you have 100 of them!), the amount of justification they provide is highly unequal, and the ways in which they turn out to be wrong matter a lot.

For example, I believe that electrons exist, and I have lots of reasons for that belief. One of those came from an experiment I ran, where (what I believe were) electrons came out and caused a patch of a detector to light up. If it turned out the detector was defective and the bright spots were something else, I would have very slightly less justification for believing in electrons, but it wouldn't really change my belief in electrons. But if some big result came out showing that every device we thought was responding to impact by tiny charged particles was actually responding to some other thing, this would matter a lot. I couldn't just go down to the next item on my list, because it would likely be affected by the new result.

Expand full comment
Patrick D. Farley's avatar

Present your strongest argument first and work backward from there.

(I also don't believe you have 100 reasons. You probably have like 3, and then 100 claims that derive from those 3)

Expand full comment
neuretic's avatar

there is no right way because interacting with you is exhausting by design. someone has to nullify 100 arguments before you'll even consider a new perspective. most people won't value this kind of discourse enough to engage with it after they realize what's happening.

Expand full comment
Marc's avatar

"I was mistaken about that point, but I don't think it defeats the overall argument, because..."

Expand full comment
John N-G's avatar

You have to be careful how you update your beliefs when someone points out one of the ten are wrong.

Did they examine all ten, and find nine were correct? If so, you're good, and it would be odd for your interlocutor to be arguing your point is wrong because nine out of ten examples support it!

Did they examine one at random, or have special knowledge about only one of the ten? If so, what are the odds that they happened to land on the wrong one, if only ten percent are wrong? This is evidence against your belief that only 10% are wrong, and if it happens more than once, it's strong evidence against your belief.

If I were your interlocutor, I would not be starting with the prior that only 10% are wrong, and finding that the first example I examine or know about is wrong would be evidence that a majority (possibly a vast majority) is wrong. I would not want to waste time examining another one.

To continue the conversation productively, you should apply similar knowledge or investigative time to two or three of your other ten examples and thereby provide evidence that your belief has strong support.
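The update being described can be made concrete with a toy Bayes calculation. All numbers here are illustrative: it compares a "mostly right" list (only 10% of examples wrong) against a "mostly wrong" one (50% wrong), after a randomly chosen example turns out to be wrong:

```python
def posterior_mostly_right(prior, p_bad_if_right=0.1, p_bad_if_wrong=0.5):
    """One Bayes update on the hypothesis 'only ~10% of the examples
    are wrong', given that a randomly chosen example turned out to be
    wrong. The two likelihoods are illustrative, not from the thread."""
    num = prior * p_bad_if_right
    return num / (num + (1 - prior) * p_bad_if_wrong)

p = posterior_mostly_right(0.5)   # first bad pick: 0.50 -> ~0.17
p = posterior_mostly_right(p)     # second bad pick: ~0.17 -> ~0.04
```

Two failed random checks in a row are enough to drive the "mostly right" hypothesis from a coin flip down to a few percent, which is the "strong evidence" point above in miniature.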

Expand full comment
walruss's avatar
1dEdited

"No matter what is said during this discussion I will not be changing my mind or thinking about your argument. I strongly believe I'm much smarter than you and that anything you say has no bearing on reality."

EDIT: This was a little flippant, so I'll add that my actual answer would be to narrow the scope of your discussion to only those 10 things, and if several of them are wrong, say "thank you, I'll have to think about that," remove it from your list, and have another conversation on a different day about another subset.

Expand full comment
quiet_NaN's avatar

I am doubtful that anyone believes anything for 100 independent reasons.

Even for a strongly held, common belief, like "the Earth is (roughly) a sphere", I do not think that I hold the belief because I know 100 completely independent arguments.

For example, suppose I said "the Earth is spherish because that is the one shape which roughly matches the flight times between airports", and a flat earther then miraculously managed to convince me that in fact the windows in airplanes are really displays, and airplanes actually move by creating warp portals rather than by traveling at a speed slightly below the speed of sound. I would not go on and say "but what about travel times for ships and railways?" or even "what about GPS satellites?", because if airplanes are a sham, that tremendously increases the probability that these arguments are also false.

Or take creationism. A biologist can likely name 100 different branches of the tree of life which are well explained by evolution, and some of these might well contain a few harmless errors which do not change the overall conclusion. But if the creationist managed to convince them that wolves are actually more closely related to giraffes than to dogs (despite the genetic distances), then the biologist could not go "okay, so you were correct about Canidae, but what about Felidae?", as any mechanism which would cause unrelated species to have similar genomes would have strong implications for all the 100 arguments.

For anything remotely controversial among intelligent people, like the lab leak hypothesis or p(doom), it is very unlikely that you have 100 independent arguments with an average odds ratio of 1:10 for a fixed side yielding an overall probability on the order of 1e-100 (or the inverse), and that if 50 of these arguments were disproven you would still have a probability of 1e-50. Anyone who claims that must either be the least convincing arguer in the world (because otherwise they would immediately convince honest people by their overwhelming evidence) or is (much more likely) terminally overconfident.
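The arithmetic behind that 1e-100 figure is easiest to see in log-odds. A quick check, using the hypothetical numbers from the comment above (100 independent arguments, each an odds ratio of 10:1 for one side):

```python
import math

# 100 independent arguments, each an odds ratio of 10:1 for one side.
# Independent evidence multiplies odds, so log-odds simply add.
n_args, odds_each = 100, 10
log10_odds = n_args * math.log10(odds_each)       # 100.0
p_disfavored = 1 / (1 + 10.0 ** log10_odds)       # ~1e-100

# Even after 50 of the arguments are disproven, 10**50 : 1 odds remain
log10_odds_remaining = (n_args - 50) * math.log10(odds_each)  # 50.0
```

Which is the point: a belief genuinely supported that way would be astronomically overdetermined, far beyond anything seen in actually contested questions.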

Expand full comment
Matthew McRedmond's avatar

Nobody is basically 100% wrong about anything because 0 and 1 are not probabilities and if you think you are 💯 in one respect you’re probably scoring very poorly in another

Expand full comment
Alex's avatar

The 0 and 1 thing does not apply in that case

Expand full comment
Alex's avatar

As that's a percentage, not a probability

Expand full comment
quiet_NaN's avatar

Well, the same weirdos who like to use percentages also like to express probabilities as percentages, in my experience.

Expand full comment
Mikael's avatar

For mutually exclusive events A and B, P(A ∩ B) = 0. Cromwell’s rule is a good reminder not to get stuck in never updating your beliefs, but I think at some point you get reasonable credence to simply state that the other person is just wrong in practice.

Expand full comment
The Unimpressive Malcontent's avatar

If you write down an equation and put the wrong number on one side of the equation, then yeah, that's 100% wrong.

Expand full comment
Taleuntum's avatar

I'm definitely sometimes guilty of this, but you are completely right!

Also, were there more ACX posts than usual or am I imagining this? Either way, I'm certainly not complaining, in fact, it came perfectly for me. Keep up the good work!

Expand full comment
NoRandomWalk's avatar

Scott just got back from a (fun, scary for ai fearers) conference, he has a lot of things on his mind and some free time for once

Expand full comment
Tatu Ahponen's avatar

I've frankly often thought that online forums should institute a rule allowing the mods, at least occasionally, to pinpoint a poster who has made a clearly wrong factual claim and then demand that poster admit directly and clearly that they were wrong or get banned. I mean, a lot of forums basically have a "mods can just do things" rule to cover this; perhaps more of a principle is what I had in mind.

Expand full comment
Edmund Bannockburn's avatar

Getting this *consistently* enforced would be hard, but I can certainly see the value in it. (I would worry about mods sticking the forced admissions to one "side" more than the other, even when both sides are guilty of factually false statements.)

Expand full comment
darwin's avatar

You basically need 2 mods with exactly opposed and equally strong ideological stances for this to not just devolve into isolated demands for rigor.

Expand full comment
Kenny Easwaran's avatar

That would only work for issues that are binary, which is very few of them outside of team sports.

Expand full comment
Arie's avatar

Get like 10 and if they agree unanimously you're out

Expand full comment
Tatu Ahponen's avatar

Most team sports issues I've seen discussed in sports forums are pretty non-binary as well, i.e. "Which guys playing for our team are dragging it down", "What would be the best tactic/formation for beating this particular opponent team", "How should the league system be changed if it should be changed", "How to deal with corruption in sports" etc etc

Expand full comment
James's avatar

I think part of the point Scott's making is that when people do this voluntarily, it functions as a signal of good faith and makes you more likely to want to continue engaging. It wouldn't function that way if it was extracted.

Expand full comment
Feral Finster's avatar

There is also the flipside - one can point out that Trump is probably not in fact directly culpable in the murder of Mickey Mouse, without being a Kool Aid chugging MAGA Trump cultist.

Expand full comment
Ross Andrews's avatar

You are correct - the murder of Mickey Mouse was committed by Ron De Santis

Expand full comment
TGGP's avatar

Another reason he'd make a better President.

Expand full comment
DangerouslyUnstable's avatar

Sort of relatedly, I try to always be able to find and recognize at least one non trivial positive/good thing about people and ideas I oppose, as an exercise in making sure I'm not overly blinded.

Expand full comment
Ross Andrews's avatar

This sounds like a good exercise. Do you have any examples of times you’ve done this?

Expand full comment
Sol Hando's avatar

Please create a list of all the 15 word acronym pet peeves you have. I feel like you could one-shot them all and create a nice glossary of tips for interaction.

Expand full comment
Michael Watts's avatar

Andrew Gelman created a list of problems he frequently encountered that he had to coin names for: https://statmodeling.stat.columbia.edu/2009/05/24/handy_statistic/

Expand full comment
Ben Zeigler's avatar

Learning how to say "Yes, and/but" in technical arguments was EXTREMELY important for my career as a programmer as I moved into a senior position. Everyone (literally everyone) likes it when you acknowledge that they are partially correct before disputing something. It also makes the transition a lot smoother so people can follow the conversation. Seriously, do this.

Expand full comment
Liskantope's avatar

It's a good social skill to have as a teacher/instructor as well.

Expand full comment
Sniffnoy's avatar

I believe the general term here is "lead with agreement". Useful in any argument!

Expand full comment
Shaked Koplewitz's avatar

I agree that this is a useful thing to know how to do in any argument, but I'm not sure that's a very catchy term for it. We should have a better one, like "walrusing" or something.

Expand full comment
Sniffnoy's avatar

I don't think there's a substantial difference in catchiness, or if anything it goes the other way -- I don't think that opaque jargon will spread more easily than a short plain language description (in addition to the other advantages of plain language over invented jargon).

Expand full comment
Shaked Koplewitz's avatar

I agree that there's a risk of opaque jargon making it harder to spread compared to short plain language, but fewer syllables and a clear box around the concept might help package it. Plus walruses are cute.

Expand full comment
Sniffnoy's avatar

There isn't really a connection between the term and the concept though, nothing to make it catchy and stick. (Contrast "shit sandwich", mentioned below -- it might not be entirely transparent, but it's not fully opaque either, and the image is pretty vivid.) If I try to ground it in things I know, I'm just like... is it a play on internet jargon "sealioning"? But the connection seems distant, and if that *is* the intention I think it has the problem that people won't remember which is which.

I think "lead with agreement" is short enough that the additional brevity of "walrus" isn't significant, and I think it's already a clear enough box around the idea?

Expand full comment
Shaked Koplewitz's avatar

I agree with you that there's a lack of connection and that this is a bad thing. Walrusing was kind of an out there idea - it was vaguely based off "sealioning", but a catchy name with an existing slightly-opaque connection would be better

Expand full comment
Michael Watts's avatar

I remain annoyed that online fashion has led people to constantly refer to "motte and bailey fallacy" where the phenomenon has the much clearer and more traditional name of "equivocation".

"Lead with agreement" is a phrase that says what it means. The only advantage to calling it "walrusing" would be that most people fail to understand you.

Expand full comment
Paul Brinkley's avatar

Motte and bailey is not the same as equivocation. Equivocation refers to an argument based on using words with ambiguous meanings. Examples include interpretation of the Second Amendment based on the modern meaning of "militia", or equivocating the right to have something with the right to be provided that thing at society's expense.

A motte and bailey, by contrast, is a shift between a weak but easily defensible claim and a powerful but more vulnerable claim, depending on the effort expended by the opposition. Equivocation can happen independently of opposition strength.

Expand full comment
Theodric's avatar

I see what you did there.

Expand full comment
Erica Rall's avatar

There's also the related "shit sandwich" technique, where you both open and close your statement with agreement, praise, or good news and then put the disagreeable things you want to say in the middle.

Expand full comment
Gunflint's avatar

This is basic Dale Carnegie stuff. If you want to influence you have to meet people on their own terms.

Expand full comment
Paul Brinkley's avatar

Fun fact: this is probably why LLMs are caught doing it so much. (In their case, it's also known as "glazing".)

Expand full comment
John's avatar
1dEdited

I think you want to be careful to separate what you might crudely call "social manipulation techniques" from "epistemic hygiene." I think Scott's point is not "'Yes, but' is an effective way to win arguments and convince others that you are right," it's "'Yes, but' is an effective way to *make sure your own beliefs are correctly grounded in truth*" -- like, for your own good.

I agree that "NOPE YOU ARE WRONG, I WIN, BYE BYE" is not going to win you a lot of friends, though, and there's a time and a place for tact (which is "usually, in most situations").

Expand full comment
SorenJ's avatar

Evolution must have somewhere along the line selected for people who never want to lose face and admit they are wrong. I don't know why that is the case, but it seems like it. Maybe the pendulum is swinging the other way and natural selection will pick out people who are comfortable admitting they are wrong.

Expand full comment
Matthias Görgens's avatar

You should have more kids with these people.

Expand full comment
Ross Andrews's avatar

I think many people see admitting fault as a drop in status and an increase in the opponent’s status. Holding firm even when wrong can be viewed as a show of strength and admitting fault can seem weak and submissive.

Expand full comment
darwin's avatar

It doesn't have to be evolution, the incentives for learning this behavior on social media (where a disturbing number of people live a disturbing portion of their life) are pretty straightforward.

Expand full comment
John N-G's avatar

Evolution may be moving in the other direction. My dog, for example, has never admitted that she is wrong.

Expand full comment
Nina Bloch's avatar

I mean, I think the evolutionary advantage is compelling. The fact is, proving someone wrong does require effort. Lying or being uninformed often does not. So if you are willing to bluff, and are able to do it convincingly, you will be able to maintain status to a degree that is disproportionate to your actual investment. Obviously this is a high-risk endeavor, and so not everybody is adapted well for it, but assuming you have the adrenaline-junkie gene and a fair sense of intuition, it’s pretty adaptive I think. (I mean, Trump is basically a really great witch doctor in a sense, right?) It’s probably particularly well adaptive to finding a (short-term) mate.

Unfortunately, it’s not clear that the selective pressure is turning. Ideally, in a high-information world, this kind of behavior would become more high-risk as the cost of disproving goes down. But the flood of information actually devalues any particular piece of truth, tipping the scales further to the side of confidence.

(This all feels excessively cynical. Please someone prove me wrong.)

Expand full comment
Shankar Sivarajan's avatar

> I don't know why that is the case,

What is your best guess?

Expand full comment
Walliserops's avatar

Coadaptation with other anti-social-defeat traits, probably.

Say you have the will-accept-faults geneset. You tell yourself "I am the sexiest man alive and will have 2 million kids". The first woman you meet says "No, you are three feet tall and 90% of your diet is your own boogers." You evaluate the data, say "Well, I guess you're right", and never attempt to breed again.

Meanwhile, your evil twin has the will-not-accept-faults geneset. The same thing happens to her. She says "Wow, that woman must be very stupid not to realize my greatness." and tries the next. Eventually she gets lucky and has 2 million kids that are all genetically incapable of admitting they are wrong.

Expand full comment
JQXVN's avatar

And if we run this example where the correction involves the location of a lion or a cliff, we come to the opposite conclusion.

The question is what is trading against the (fairly obvious?) utility of knowing what is the case about the world. Which as many have said is probably status-related.

Expand full comment
JungianTJ's avatar

I discussed this in 2022 as one of several exercises on social status. Admitting an error costs *both* dominance and prestige status. Evolution won’t have looked kindly on it. Quoting the relevant bit, with D a dominance-status person and P a prestige-status person:

„So [P] and D instead retreat to a quiet zone with W, to debate the integrated information theory of consciousness. They are clearly shown by W, who we assume to be not of higher status, that this theory, which they both previously publicly endorsed, is weak.

Exercise 4. Which one will happily concede the argument?

Neither. Certainly, D will be averse to being dominated by W in this way. But for P, there is, if anything, even more at stake! A reputation for erring means being compromised as a team member, and badly. [(Prestige status is about being a desirable team member.)] What weight is your voice going to carry in a team meeting if precedent is stuck in others’ minds of you having been wrong in the past?“

https://birdperspectives.substack.com/p/two-kinds-of-status

Expand full comment
JQXVN's avatar

You don't make much of the points others have brought up relating to the prestige value of epistemic humility in certain contexts?

I have a personal intuition that it can confer dominance status in the right context, though I haven't fleshed it out.

Expand full comment
JungianTJ's avatar

You mean „epistemic humility in the form of admitting mistakes can confer *prestige* status in the right context“? If so, I agree, it’s possible, but I think the status-determining „audience“ would have to be very unusually rational. Or in a trading or betting environment where those who admit mistakes and change course can demonstrate better results.

Expand full comment
MathWizard's avatar

Evolution doesn't occur in a vacuum, people co-evolve together. Obviously there is a positive advantage to "lie for selfish gain" if and only if it actually gains you things on average. Which creates a counter force in the people around you selecting for the trait "punish people who lie for selfish gain".

So both traits become pretty common. And then the former evolved into "lie for selfish gain if you expect you won't be punished for it, but stop if you do"

So then if the trait "admit you are wrong" causes people who weren't sure if you were lying to become sure and punish you more, then it's a bad trait. But if people punish liars who don't admit they are wrong but are obviously lying even harsher, then "admit you are wrong" becomes beneficial, because you get a small punishment instead of a large one. So society works well and people do admit they are wrong IF society can implement a policy of "punish obstinate liars a lot, punish liars who recant a little."

I don't think society does that anymore. I don't know if it's the fall of Christianity or the atomization of society or just the internet, but people who admit they are wrong tend to get punished more, not less. If you're only seeking to maximize epistemic honesty then you should admit when you're wrong, but if you're trying to maximize your welfare and minimize the chances of being cancelled or fired or publicly shamed, you should never apologize and never admit when you're wrong. Because that's what society rewards.

Expand full comment
Justin CS's avatar

I suspect the evolutionary reason may originate in looking strong to ward away violence. Showing any weakness invites theft and attack. This instinct may have carried over to verbal posturing too.

Expand full comment
ascend's avatar

I kind of think this community is weirdly, almost pathologically addicted to evolutionary explanations and complicated theories of nebulous "status" for simple psychological facts.

I don't see why we can't simply observe that on the most straightforward and conscious rational level, losing face reduces your power--because people are then less inclined to assume you know exactly what you're doing and automatically follow your lead--and power lets you get things you want, whatever those may be.

Expand full comment
Benjamin Clark's avatar

Yeah, "speaking into a void" and skepticism that the counterparty is open to a good faith discussion has caused me to walk away from a lot of online discussions after their first response. It's just not worth the time in such cases.

Expand full comment
DLR's avatar

And they not only didn't admit they made a mistake, they didn't update on the new information -- they are still willing to 'bet every dime' they have on the 70% probable outcome.

Expand full comment
Ransom Cozzillio's avatar

And even the good version you lay out will still be called “whataboutism.” Which I guess I’d also be curious for your thoughts on.

On one hand, “point jumping” as my dad would call it, is an annoying way to avoid admitting large scale error; even if you do it the good way, and admit the local error as you’ve suggested.

On the other, comparisons, and the utility of pointing out inconsistencies in an opposing position, should obviously be valid discussion/argument tools.

Expand full comment
Capybara's avatar

I don’t think that the good version is whataboutism. “Okay, argument 1 was invalid, but what about argument 2” is just pointing to another (currently not disproven) piece of evidence. So called whataboutism is changing the topic to something unrelated to the original problem, like saying that there’s racism in USA when someone pointed out that the Soviet Union is not democratic.

Expand full comment
quiet_NaN's avatar

Yes, what they described sounds more like a https://en.wikipedia.org/wiki/Gish_gallop .

Expand full comment
Michael Weissman's avatar

None of us are perfect at applying this rule, Scott in particular. See:

https://michaelweissman.substack.com/p/open-letter-to-scott-alexander for a response to which an "ok, I was mistaken but still...." might have been appropriate. None has been offered.

Expand full comment
Cal van Sant's avatar

If there has been no response, I don't think this rule applies. There isn't and can't be a rule that one must reply to every criticism of them; the volume is overwhelming for anyone worth criticizing. Even worse if they must respond within 3 weeks.

Calling Scott particularly bad at acknowledging his mistakes is very strange to me. Certainly he's not perfect, but he's got to be in the top 5%. Is that just a turn of phrase to link this otherwise unrelated post to your own, or do you really think Scott's in the lower 50% on this?

Expand full comment
Michael Weissman's avatar

Yes, he's clearly in the upper 50%. He had, however, made a particular point of not only presenting a lengthy detailed argument on covid origins but also devoting a section to dismissing my statistical paper: https://doi.org/10.1093/jrsssa/qnae021 (non-paywalled https://arxiv.org/abs/2401.08680). So it was a bit odd to choose that point to drop out.

Expand full comment
Écorché's avatar

I think Scott is done with public debate on this for a while: https://x.com/slatestarcodex/status/1925597463777616357

The tweet was a bit before you posted your correction I think. To his credit, he's putting significant money where his mouth is, but it's unusually awkward now for him to give your post the attention it deserves.

Expand full comment
erajad's avatar
2dEdited

"I don’t have as many specific arguments here as for the IIWYTTLIWMTTCI principle...."

But surely it's the `IIWYTTLIWMTTCY` principle! This feels relevant somehow.

Expand full comment
Logan's avatar

I have no idea what to do about this, but I've noticed a subtle and inconvenient thing where saying "you're right, I'll need to update" kind of mentally absolves me of my shame at being wrong, which is bad for the obvious reasons.

It's a thing your parents teach you, right? Doing the right thing is hard, but you'll feel better. So when you *shouldn't* feel better, because you need to maintain the motivation to improve, how do you still respect your interlocutor?

Expand full comment
Taleuntum's avatar

I don't think your shame and desire to improve should be contingent on any single conversation.

Instead, you should constantly feel a low grade shame and desire to be better (simultaneously with the pride of already being better than your past selves). Maybe meditating a bit on your persistent weaknesses would help? (It's a pretty curious situation though, usually people have trouble with the exact opposite in my experience.)

Expand full comment
Logan's avatar

I think the issue is more about prioritization/time management.

I'm examining and improving my mental models as often as is practical, I hope, but at any given moment I will prioritize those issues that are bothering me. After I have a discussion and get reamed, that topic 1) is suddenly of great interest to me, because I am motivated by injured pride to figure out a more robust model, and 2) has a bunch of new data that's fresh in my memory

By promising "I'll update on that," the experience is less of an injury to my pride, which makes it more likely that other topics will win out in the battle for my attention. Those may be the ones I didn't apologize for, which often means my incorrectness was less dramatic and/or less well explicated.

Expand full comment
Yug Gnirob's avatar

>which is bad for the obvious reasons.

What are those?

Expand full comment
Logan's avatar

When you're wrong, you should feel bad, so you'll try harder to stop being wrong.

Not universally true, sometimes feeling bad makes self-improvement harder, but "for the obvious reasons" was meant to imply that in this case I am referring to the kind of shame that's useful for self-improvement

Basically, the moment doesn't haunt me later on, I don't ruminate as much on what I wish I had said, so I don't fully integrate the new information into my world model, i.e. "update"

Expand full comment
Kenny Easwaran's avatar

I don’t think this is a helpful attitude! Shame is usually not a useful way of making people feel bad that helps them update in productive ways. We learned that with fat shaming, and we should learn it with falsehood shaming too. People already usually have plenty of internal reasons to want to not be fat and not be wrong - adding shame just makes them want to *hide* being fat or being wrong.

Expand full comment
Logan's avatar

I think I shouldn't have used the word "shame."

I was trying to be evocative, not psychologically precise.

What I'm trying to talk about is the drive to resolve cognitive dissonance, which is a motivator for introspection. In my experience, the cognitive dissonance is related to my self-image as a smart person who thinks deeply about stuff and has lots of cool insights. When I look foolish in a conversation, it damages my self image, but as a solution I want to think harder and be more correct in the future. I want to be the kind of person who isn't wrong twice. I want to be the kind of person who updates, so I do update.

Because I know that admitting I'm wrong is high-status in this community, I get dopamine/affirmation from saying it, and that reduces my motivation to chase the dopamine/self-affirmation of actually becoming more correct.

For your fat-shaming analogy, this is like a fat person who posts about joining a gym on Facebook but never actually goes to said gym, so they stop posting about it (to keep their dopamine from getting wasted on the joining, save it for when they actually form a routine), and then other people start talking about the importance of posting on social media when you start a new routine so we can all affirm each other. Withholding affirmation can be strategic!

Expand full comment
Yug Gnirob's avatar

I don't think shame is useful for self-improvement; quite the opposite. I find I learn much faster by exposing bad ideas to the air than by trying to kill them while they're still a cloud in my brain. That's a good way to end up thinking nothing at all.

Expand full comment
Logan's avatar

That's exactly the process I'm talking about, though!

There are two phases: introspection, where you form opinions and build models and come to believe them, and discussion, where you let other people stress-test your beliefs and provide new information. After a discussion, you still need to do another round of introspection to integrate the new information and actually change. Or at least I need that. Otherwise the new ideas don't have time to propagate through my worldview and they are easily forgotten.

Yes, shame can stop you from engaging in discussion; it can be bad. But when I say shame, I'm specifically referring to the motivation to engage in introspection after airing your ideas in discussion and being proven wrong. That is what I meant originally: admitting I'm wrong in the moment, during the discussion, can lessen my motivation to introspect afterwards.

Without that introspection, airing your ideas is pointless. You'll just air the same wrong ideas next time, and then again, never actually trying to become less wrong. (Again, it's possible no one else works this way, but this is certainly how my brain works)

Expand full comment
quiet_NaN's avatar

I think that both being right and being able to admit when you were wrong are good skills to have, and most humans will require both of them.

I also think that mostly, these skills are roughly orthogonal, just like being a good player and taking a defeat gracefully.

There is a trade-off between how certain you want to be to say something and how many helpful contributions to a discussion you can make. For example, if Scott suddenly decided that he wanted to lower his mistakes-to-posts ratio by a factor of ten, he would have to put even more research into each post, and that would mean that he would make fewer posts, which would be a terrible trade-off.

There is also a trade-off between being proven wrong and holding on to unvoiced false beliefs. Personally, I do not like having false beliefs, so the ideal I personally strive for is humble and curious. "Sorry, I am confused about" is a good way to start a sentence which will lead to a situation where at least one conversation partner will end up with a map which corresponds more closely to the territory.

Expand full comment
Chasing Ennui's avatar

Just had this interaction on Twitter:

Him: "None of the [founding fathers] had a college degree."

Me: [Points out that Adams, Jay and Madison all had degrees, and Jefferson and Hamilton both attended college]

Him: "You overlooked the fact that the universities they graduated from focused on theology and classical languages, not political science. Furthermore, you didn't deny what I said. Only nine of them have university degrees; the rest are either selftaught or the result of practical experience."

Expand full comment
Jean's avatar

Incredible.

Expand full comment
Ken Kovar's avatar

Anachronism ya gotta love it 😍

Expand full comment
Rainwoods's avatar

"If you are 50% right, then you are 100% wrong"

Expand full comment
Moose's avatar

It's pretty rare in online discourse for someone to acknowledge that they are listening and understanding, except when they are rebutting something. I think this is probably a bad norm.

Expand full comment
Gunflint's avatar

No, Trump wasn’t responsible for Watergate and no he isn’t building camps to house and kill immigrants and no he didn’t somehow rig the 2024 election.

I know that you aren’t saying people who believe, or at least say they believe, goofy stuff like this are representative of people who dislike Trump for real and obvious reasons. But some people seem to take it that way, and they too are wrong and making a goofy assertion.

Expand full comment
Mark's avatar

Is this also a prelude to: "Tyler was right about x+y+z, (but I am still mad about Musk/Trump/Vance)" - and IIWYTTLIWMTTCI (If it's worth your time ...) the first part? ;)

Expand full comment
Taymon A. Beal's avatar

There's already an entry on Scott's Mistakes page about errors in his analysis of USAID-grantee overhead ratios, which is at least an existence proof that he's not categorically unwilling to admit he was wrong within this topic. Obviously there are other disagreements that remain unresolved but that's a separate issue from what this post is about.

Expand full comment
Anna Rita's avatar

What was Tyler actually right about? He wrote a 150 word blog post that conflated "pocketed" and "channeled." He did no personal research for that particular post. The prompt that he gave to o3 was irrelevant to the central point he was claiming to fact-check. His blog post provided negative value above replacement versus pasting Marco Rubio's quote into o3 and adding "Please fact check this."

Expand full comment
Mark's avatar

see for Cowen about PEPFAR: https://www.bloomberg.com/opinion/articles/2023-09-29/pepfar-is-america-s-greatest-public-policy-success

see for Cowen on Scott: https://marginalrevolution.com/marginalrevolution/2025/05/scott-alexander-replies.html quote: "Instead, Scott has thrown the biggest fit I have ever seen him throw over a single sentence from me that was not clear enough (and I readily admit it was not clear enough in stand alone form), but made clear elsewhere."

Me personally, I misread that short Tyler statement, too. But the content was indeed: 1. Is USAID done by government or mainly "outsourced" to NGOs? - Mostly outsourced, it seemed. And 2. Does this seem problematic, and should it be discussed, too? Tyler thought it does and should.

As my life savings are all from my work at government-funded NGOs abroad, I guess government may not be less wasteful than NGOs. But most NGOs surely are, and their expenses are always immaculately documented and often highly wasteful, without anyone really interested in checking - outside medical aid. If those programs were run by government directly, there might be more pressure to ask: Do they (still) make sense? - That seems to be Cowen's impression after decades of work, too. https://marginalrevolution.com/marginalrevolution/2025/05/so-many-mistakes.html

"I have lived in this milieu for almost forty years, and sometimes worked in it, from various sides including contractor. A lot of people have the common sense to realize that these institutions are pretty wasteful (not closely tied to measured overhead btw), too oriented toward their own internal audiences, and also that the NGOs (as recipients, not donors) “capture” US AID to some extent. ..

I am all for keeping the very good public health programs, and yes I do know they involve NGO partners, and jettisoning a lot of the other accretions. That is the true humanitarian attitude, and it is time to recognize it as such. Better rhetoric, better thinking, and less anger are needed to get us there."

What was/is Tyler actually wrong about?

Expand full comment
Anna Rita's avatar

I can't access your first link, as it's paywalled.

>Instead, Scott has thrown the biggest fit I have ever seen him throw over a single sentence from me

I don't think this is a factual claim that one can be right or wrong about. It argues that what Scott wrote was impolite. It probably was impolite. But it doesn't make Scott wrong.

>that was not clear enough (and I readily admit it was not clear enough in stand alone form), but made clear elsewhere.

I think it is quite funny to say that it was made clear elsewhere when many readers who frequently read his posts say that it was unclear. It is one thing to complain about being taken out of context, but quite another when someone quotes your *entire blog post* before responding to it. If he felt that so many people misinterpreted his words, why not edit the post and make it clear?

>But the content was indeed: 1. Is USAID done by government or mainly "outsourced" to NGOs? - Mostly outsourced, it seemed. and 2. Does this seem problematic and should be discussed, too?

He could have used that phrasing, and talked about "outsourcing" a core government function, rather than whether NGOs were "pocketing" the money. That would have been more clear about what his actual concern was. He chose not to. I agree that if he had written the post like you suggest, he would have not received the criticism from Scott & co about it. But he didn't. I think it fair to criticize him for the post that he actually wrote.

>What was/is Tyler actually wrong about?

He is wrong about whether his main piece of evidence supports his main thesis. So, he's only wrong about one thing. But it's a short post.

Expand full comment
Michael Watts's avatar

> I can't access your first link, as it's paywalled.

You can access it fine. Here: https://archive.is/rvlaJ

Expand full comment
Mark's avatar

Greatly appreciated! For the purpose of showing: "Tyler Cowen is very much pro PEPFAR", the title sufficed ;)

Expand full comment
Chris Bowyer's avatar

"Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened."

I run into this more and more: the habit of breezing past mistakes because people have internalized, consciously or unconsciously, the idea that they must never show intellectual weakness. Even though that kind of brittle, never-bend posture is actually a *tell* for intellectual weakness to those who can see it for what it is.

The most important part of this post is probably this:

"And it’s good for your interlocutor to realize that they’re not just speaking into a void."

Let's get more specific: it lets your interlocutor know they're not speaking to a robot. Not in the literal sense (the AIs for public consumption are unceasingly self-flagellating about every mistake), but in the sense that human connection requires that each response absorb and incorporate what it's replying to into the next salvo. Without that, you create an Uncanny Valley of communication where what you input into the interaction seems to have little relationship to what comes back.

Expand full comment
Brian Moore's avatar

It's a good filter for determining which people are arguing for persuasion and which people are arguing for politics/propaganda/influence/audiences.

Expand full comment
Liface's avatar

This is good. Reminds me of "arguments are not soldiers". We cannot show any weakness in "battle".

I will add it to the Beginner's Guide to Arguing Constructively:

http://liamrosen.com/arguments

Expand full comment
Ari Wagen's avatar

I was confused what you were referring to with "IIWYTTLIWMTTCI." Before I unravelled the abbreviation, I realized that sequence of characters is a valid protein sequence (which is predicted to fold into a neat little helix: https://labs.rowansci.com/s/IIWYTTLIWMTTCI).

Expand full comment
Jake's avatar

TINACBNIEAC

Expand full comment
Gunflint's avatar

Whoa, I’m still trying to figure out Charli XCX.

Expand full comment
Ken Kovar's avatar

Last emperor of the western Roman Empire 😆

Expand full comment
Gunflint's avatar

Two things that some people just can’t seem to say are “I don’t know,” and “I made a mistake.”

Expand full comment
Radek's avatar
2dEdited

Adam: I have stance S1 and supported by arguments A, B and C.

Bob: C is actually wrong, for reasons...

Adam: I think not, because...

Bob: I still think it is wrong, because...

... this goes on for a while ...

Then, through some defect in the human mind, because of the time spent arguing against one of S1's arguments, Bob becomes associated with people who are against S1, even though he (perhaps autistically) just wanted to correct Adam about C.

This happens more often on hot button topics, where you really don't want to be associated with people against S1.

I try to reaffirm my stance on the whole issue at short intervals, but now I think I can also add Scott's "It’s fine to admit that one of your arguments was wrong, while continuing to believe the same thing as before."

Expand full comment
Yug Gnirob's avatar

I suggest a Rule of 3 approach. Once to present an argument, once to rephrase the argument, once to show you're not changing your position on the argument. Past that, it's just noise.

Expand full comment
Shankar Sivarajan's avatar

And along with that, Alice, who also has stance S1 and argues A and B but NOT C, still gets associated with argument C, because she can't disavow it without being lumped in with the anti-S1 Bobs.

And then Andrew comes up with an even worse argument D for S1, which Adam and Alice have to at least tacitly defend.

And eventually, the arguments for S1 are mostly bad, and Carol, Dan, and Eve decide that S2 is right by default.

Expand full comment
praxis22's avatar

I forget where I saw/heard this, but "but" and "therefore" are better (and more dynamic) than "and," narrative-storytelling-wise.

I would also agree with the random commenter that the built environment is unlikely to change much over the short/medium term. However, AGI/ASI is all but guaranteed over the medium/long term, IMO. I can discuss anything with my mother. It's her birthday today and we were discussing Jung and biosemiotics. :)

Expand full comment
Bean Sprugget (bean)'s avatar

I think it was advice from the creators of South Park https://youtube.com/watch?v=vGUNqq3jVLg

Expand full comment
Corsaren's avatar

I’d take this a step further and say that if your opponent makes a good point, it’s polite/good discourse policy to first acknowledge and repeat that point back at them.

“I think XYZ”

“Well I think XYZ is wrong because Points 1, 2, and 3”

“Okay, yes, point 1 and 2 are getting at something correct in the sense that I’d agree with 1modA and 2modB but I think point 3 is wrong and as such XYZ stands”

This is a much bigger demand, and a futile request in most internet arguments, but it honestly makes a world of difference when both parties engage this way because you slowly build up a shared consensus set.

Expand full comment
walruss's avatar

This is one post I can get behind 110%. Absolutely if you are talking to someone who disagrees with you, you make a point, and they just switch to a different argument for their point, you are wasting your time.

I'm going to go on a little tangent here, please come with me. I've seen a popular post going around about trolley problems, where people are furious at the idea of a "trolley problem" because obviously you could just derail the train or you should go back and find the person responsible for tying people to the tracks or whatever. You saw this also in your article about infinite drowning children. I think refusing to engage in hypotheticals is exactly the same problem as the one you are describing. People didn't just think your hypothetical was wrong, or unrealistic. They were *genuinely furious* that you attempted to use a hypothetical at all.

I think this is because hypotheticals are argument isolators. They are meant to strip away all the context besides one particular argument, and force your attention on whether that argument, absent of the other factors, holds. Some people will engage with the hypothetical, admit an obvious point, and then say "okay but that doesn't apply here because XYZ." Others react like a rat in a cage, doing anything they can to avoid being pinned to one single issue at a time. And that's deeply uncomfortable if you're not willing to admit you can ever be wrong, even a little bit, about anything.

Expand full comment
John N-G's avatar

I think it's often not a refusal to engage in a hypothetical, but rather an inability to engage in a hypothetical. They treat everything as though it's the real world. Which is not necessarily a bad thing in real life.

You: "Imagine you see a trolley coming down the tracks..."

Them: "Okay, I can imagine myself in that situation."

You: "You have the following two choices..."

Them: "Neither of those choices are acceptable. I would try to find a different choice."

You: "No, in my hypothetical you only have those two choices. Those are the rules."

Them: "Then the only morally acceptable choice is to break the rules."

Expand full comment
walruss's avatar

I'm not sure, because I've seen very smart people have this defect.

I think they may not know how to engage with a hypothetical, but as a consequence of a worldview that refuses to admit tradeoffs. I think "I don't want to be wrong or admit my worldview can have bad consequences even if it's overall correct" is prior to this problem. But I could easily be wrong.

Expand full comment
Shankar Sivarajan's avatar

> rather an inability to engage in a hypothetical. They treat everything as though it's the real world

An old 4chan greentext suggests this is a general intelligence issue.

'''

Did you know that most people (95% +) with less than 90 IQ can't understand conditional hypotheticals?

- For example:

- How would you have felt yesterday evening if you hadn't eaten breakfast or lunch?

- What do you mean? I did eat breakfast and lunch.

- Yes, but if you had not, how would you have felt?

- Why are you saying that I didn't eat breakfast? I just told you that I did.

- Imagine that you hadn't eaten it, though. How would you have felt?

- I don't understand the question

'''

This seems plausible.

Expand full comment
walruss's avatar

This really bugs me because there's no source for this claim and our bias should be that it's obviously false.

How many people do you know IRL who can't plan for contingencies like "What if it rains?" I would think very few, even dumb people. If something like this is true (and literally nobody has ever provided a shred of evidence for it), I suspect it's more of a "what are you trying to pull?" thing than a genuine failure to understand your question. I've actually had this reaction in conversation before, though I try to resist it. The urge to say "no, I don't understand" - not because I don't understand, but because I don't want to grant you the right to go down this conversational path unless I know where it's leading.

Expand full comment
Shankar Sivarajan's avatar

Yes, I agree most people can probably make plans about the future. From context and the example given, I think what the post is referring to are technically called counterfactual conditionals: https://en.wikipedia.org/wiki/Counterfactual_conditional.

Expand full comment
walruss's avatar

Yeah, but I still suspect dumb people can talk about counterfactuals, and it's baffling to me that people are so confident otherwise that they'd take a 4chan greentext at face value. I have conversations with dumb people constantly where they have dumb opinions about counterfactuals in contexts they care about: "If Michael Jordan were playing today he'd kick LeBron's ass" comes to mind.

I'd believe something weaker, like, "dumb people refuse to engage in counterfactual speculation about subjects in which they have little knowledge or interest." But 1) Only if given literally any reason to believe that's true, which nobody's given, 2) I don't think that's an interesting or surprising idea - I wouldn't be able to engage in counterfactual speculation about quantum computing, and 3) that's not even what the greentext says - they're talking about a simple example of understanding you would be hungry if you hadn't eaten.

I'm sorry to beat up on the idea, I'm sure it makes sense to you. But I've seen writers with large audiences taking this at face value for no reason I can find, skeptics who generally demand some level of proof for their beliefs, and I don't understand why?

Expand full comment
Doctor Hammer's avatar

That may well be true, but the other side is that often accepting the premise of the hypothetical is treated as accepting the hypothetical as being a proxy for reality. Often the hypothetical is so divorced from the reality of what one is talking about that it should be treated as an entirely different topic, yet if one plays along and accepts the premises and their conclusions it is considered gauche to then say “but that has nothing to do with the situation we are discussing”. It is akin to the false dichotomy, only what is smuggled in is the assumption that the hypothetical is sufficiently real to matter in the real world case.

Expand full comment
lyomante's avatar

do people not understand what a false dilemma is, and how it differs from a hypothetical?

seriously, people using logical fallacies as thought experiments is why no one likes the trolley problem. it's a classic false dilemma.

Expand full comment
walruss's avatar

It's a false dilemma in the way that solving a physics problem "ignoring air resistance" is a false dilemma.

Expand full comment
lyomante's avatar

not the same: these hypotheticals restrict options on the idea that, under constraint and duress, truths about morality will be revealed, but it's never the case. reductionist approaches to morality tend to become absurd and counter to actually discussing it.

it's used more as a gotcha than any real moral inquiry.

Expand full comment
walruss's avatar

That's a fair enough opinion but life is too complex for me to tell whether it's true. A shame there isn't a simplified example you could give to illustrate it.

Expand full comment
lyomante's avatar

not the same and you know it.

the moral truths aren't objective like physical phenomena, and simplification does not illuminate, it obscures. there also has to be a large lived or real-world aspect in practice, and that means simplification is ineffective.

morality is "how then should we live?" but it has to be lived. that is antithetical to simple reductionist hypotheticals.

edit: christianity is full of them: is it better to gouge an eye out to avoid sin and go to heaven, or keep it and descend to hell?

Expand full comment
Liskantope's avatar

Isn't this just a close variant of Bulverism?

Expand full comment
Taymon A. Beal's avatar

How so?

Expand full comment
Liskantope's avatar

Bulverism (term coined by C.S. Lewis) is the fallacy of attempting to rebut someone's claim by explaining what motivated them to come by that belief while failing to actually argue why their claim is wrong. At least in the pasted comment, that's more or less what the commenter seems to be engaging in (maybe it doesn't apply so well to Scott's first, more hypothetical, example).

Expand full comment
Taymon A. Beal's avatar

I suppose you can read the quoted comment that way, but it doesn't seem like the main point of the post.

Expand full comment
WoolyAI's avatar

"And it’s good for your interlocutor to realize that they’re not just speaking into a void, that you are capable of admitting fault, and that it’s a real discussion with potential win criteria instead of them getting bombarded again and again until they go away."

Kinda. I find this extremely frustrating.

If I'm talking directly to somebody or I'm working on a small team, "Yes, but" is extremely important and effective.

If there's an audience, especially a large and/or uninformed audience, then "But" is more effective. Never concede anything, never look the slightest bit dumb or wrong, never deviate from your primary talking point. That's what public affairs people do, that's what debaters do at the high school and collegiate level, and that's what every online argument/debate/conversation has evolved into. Everywhere I look in media, that's all I see, up to the most powerful people in the world.

Discursive tactics to arrive at the truth are significantly less persuasive than other methods and I hate it and it makes us all dumber.

Expand full comment
Jake's avatar

Nixon probably wasn’t responsible for the Watergate break-in. He was certainly culpable in its cover-up, but there’s no evidence he had knowledge of the break-in specifically ahead of time.

Expand full comment
Yug Gnirob's avatar

You're talking about 'directly involved', not 'responsible'. If he hired the people responsible, he's still responsible.

Expand full comment
Alex's avatar
2dEdited

A general principle of human interactions: almost everything anyone says in words to a stranger is, at some level, basically a lie. But those lies are usually true.

The lie is usually about what they are saying and why they are saying it. The person who does the Watergate/Jan 6th bait-and-switch claims to be giving a reason to be scared of / disgusted by Trump. But their reason is wrong, i.e. their reasoning is fake, i.e. a lie. Why then are they saying it--why is it "worth their time to lie"? Because their underlying emotion is true: they actually are scared of and disgusted by Trump. They tried to translate this emotion, which they have no other way to offload, into rationality, came up with something nonsensical, and then said it anyway, which is basically a skill issue.

But had they said it right, what difference would it have made? It's an emotional message, which hopes to be processed emotionally. If you react with "actually you're thinking of Jan 6th," it's exactly the same as if they had said Jan 6th and you had shrugged. If you react with "I think you mean Jan 6th, but yeah, it's pretty bad, huh?" then it's the same as if they had said Jan 6th and you had agreed with them. The rational content of the sentence is almost irrelevant to whether you turn out to be on their side and validating their emotions or not.

The stupid comment you quote is similar. They had no intention, ever, of debating AI singularity percentages. They were scared of people hyping up AI singularity risk, so they produced essentially bullshit to support their position probably because they're in a subculture that only accepts emotions that are transmitted rationally (which is awful, by the way), then when their bullshit was called bullshit, they produced other defensive bullshit to stake out their same stance, now with some antagonism and palpable bitterness. Not expertly done; skill issue. None of it matters, though. The words are all made up. The only thing going on here is one person trying to find a way to voice an emotion and other people accepting or rejecting it. The rational content of this conversation essentially doesn't exist.

There are other kinds of interactions which allow for emotional communication, but they don't feel like chucking arguments back and forth. The "meeting of the minds" that occurs when a person is actually learning from another person, for instance, is far healthier, and in that setting the words do exist: the student is building up a model and the teacher is empathetically mirroring the student's understanding and showing them how to extend it; the words are then carrying the shape of the model-extensions required to the student's brain, and the process works. In that case the words have been tailor-made for the student in order to meet them where they're at, which is why it works: it really is the exchange of information, not emotions. (related, when emotions get involved in learning, it doesn't work. E.g. when people have anxiety about their ability to learn or speak clearly or do math or whatever.)

Emotional conversation between two people who care about each other is another. A person comes home from work and complains about something; their partner ignores the text of the message and says "what happened?", understanding that the complaint is an off-gassing of frustration about something else.

Etc. I'm convinced that rational discourse online is largely a farce because it has wandered so far away from any setting where emotion is possible---yet people are (to my eye) absolutely desperate to get their emotions seen by somebody; desperate enough that they're trying to do it anonymously online, presumably because their real lives aren't getting them any emotional processing either.

Expand full comment
TheAnswerIsAWall's avatar

Instead of arguing with you about a nit pick, I will instead tell you that your comment left me with a feeling of impotence and more than a little existential dread. [How’d I do?]

Expand full comment
Alex's avatar
1dEdited

fair enough. sorry to hear that...

I suppose that the point of my take is that, maybe there's a completely unfamiliar alternative to what passes as discourse today which would make everyone feel a lot more heard and cared-for and productive... or something like that. Does that help? It would certainly make me feel better if other people agreed with that being a possibility.

Expand full comment
🎭‎ ‎ ‎'s avatar

Well, I don't agree that's a possibility, and I don't see why it's necessary either. If you can understand the truth behind another person's words even through their "lies", then that's valuable information on its own. It's information that lets you optimize your own actions, information that can be used to identify vulnerabilities in others, information that lets you manipulate situations to your benefit. In that sense, these "lies" can be far more revealing and valuable than the truth of whatever irrelevant topic is being discussed.

Expand full comment
Alex's avatar
1dEdited

My point is really that everyone can understand the truth behind others' "lies", but they have a muddled model of what's going on, and so they keep getting caught up arguing and debating and writing blog posts, like the OP's, about the lies ... and everyone would be better served giving up on that entire concept, that people's words reflect what they are trying to communicate, and placing way more stock in figuring out whatever they actually *are* trying to communicate.

In particular, there's some big trend in human civilization of thinking that we can rationalize everything, and we've applied that to communication and psychology and made it taboo, in many places, to say things that aren't some kind of rational, evidence-based point. Which means everyone who wants to communicate their emotions tries, and pretty much fails, to translate them into some kind of rational statement. Everyone is speaking this broken language, struggling to communicate, and since their audiences are also steeped in the broken language, they can't help but get involved in pointless arguments where nobody is capable of hearing or empathizing with anyone else.

This is especially prevalent among "rational" people, academically intelligent people, stubborn people, maybe some kinds of autistic people.

It turns out (I speculate/claim) that if you completely drop the concept of a statement being "correct" or not you can have much more productive conversations. (Among other things I think this is a large driver of political polarization; everyone needs to shut the fuck up about the things the other people say and figure out why they're saying them. not possible at a national discourse level though; maybe not possible interpersonally in many cases, but I can attest that it works a *lot* better than arguing).

obviously don't drop it in, like, situations where correctness matters, like medicine or war or something... but in interpersonal interactions, correctness is vastly less important than figuring out why they're saying it and responding to *that*. If nobody believes this, however, those interpersonal interactions all essentially fail at their purposes for every party involved (and leave everyone feeling alienated and lonely to boot).

🎭‎ ‎ ‎'s avatar

> My point is really that everyone can understand the truth behind others' "lies"

I would argue that it is precisely their inability to do this (and accurately model others' thoughts in general) that leaves people confused and stuck in these worthless debates... And without sufficient empathy to process the pain of others, or the existence of shared desire and purpose, your proposed heart-to-hearts would be just as unproductive as the alternative. ...Though, I guess it would waste less time beating around the bush.

I'll take what I can get from these debates, which is an understanding of others. It's folly to expect anything more than that...

Alex's avatar

My belief is that people's inability to do it is not some innate thing; it's that their schemas of what communication is supposed to be are interfering with their intuition, which is usually more accurate.

Which is to say, they are not getting an understanding of others, but they could perhaps learn to by unlearning some of their other beliefs.

Ghillie Dhu's avatar

>"…everyone would be better served giving up on that entire concept, that people's words reflect what they are trying to communicate, and place way more stock in figuring out whatever they actually *are* trying to communicate."

There are a single-digit number of people I care about enough to bother with this. For everyone else, it's on them to express themselves clearly; IDGAF about their feelings.

Alex's avatar

I mean, sure, that's your right. It is what I would describe as "what being a shitty person feels like on the inside"; probably other people in this world who you would describe as shitty would describe their attitude towards others in the same way.

Radar's avatar

Yes yes to all you say!

TenaciousK's avatar

I work in a psychotherapy practice and often use a Buddhist frame with my clients (positive motivations, unskillful action) when we're discussing difficult communication with others. With difficult feedback, if you validate positive motivations first, you preempt the defensive reaction that follows the double-bind you've put the other party in (I can either feel ok about myself/my worldview etc or I can believe what this person is telling me). It's pointing out a misstep rather than making someone "wrong," which makes it so much easier for someone to take accountability (for the impact of something they did, said or whatever) and simultaneously save face. Making people wrong leaves little room for this.

What you're saying here sounds familiar. I'd add, though, that the motivation for provocation is usually along the lines of undermining the other party's credibility. What people seek is the validation you're referring to, and when confronted with the opposite, their avenues for saving face are limited.

I wish we put more focus on rhetoric in public education.

Jay Vandermer's avatar

I can think of three people that I know IRL that do this, and one thing they have in common is that they're highly partisan. And they tend to only do it when arguing those highly partisan topics.

Alec's avatar

Very small nitpick: if the phrase is "If it's [. . .] Correct You," then the acronym should end with a Y, not an I.

Keese's avatar

It's also good from a strategic point, establishing yourself as a good faith interlocutor.

walruss's avatar

I strongly disagree with this, just from years of trying to do this.

In certain circumstances with certain people this is good strategy. But in most discussions with most people, the moment you admit that there are any downsides to your position whatsoever, they mentally declare victory, decide that you are a fool who doesn't know what he believes, or check out on the conversation because it's confusing.

You can see this really clearly in politicians. Those who accept nuance and complexity basically always lose. If you try to explain that you voted against a bill after you voted for it for complicated parliamentary reasons your political career will be shot into the sun instantly.

Radar's avatar

I think it'd be a shame for the norms of interpersonal communication to get imported from what people do who are perpetually asking for votes and money from a public they can't be honest with. There's not really an honest conversation to be had with salespeople because of the material circumstances they're in. I'd hope that our expectations could be higher for conversations where we're both trying to be genuine and to learn something from the genuineness.

If I sense someone is treating me like an object for their sales pitch, I lose interest really fast.

walruss's avatar

I understand that, because we want to think otherwise. And with people I speak with often, with whom I have a strong personal relationship, you're right.

But a law career has taught me that in discussion with strangers or professional peers, saying "this is what I think, here's why I think it, here's why I might be wrong," and then soliciting their input is a good way to undermine their confidence in you, to miss out on advancement, and to never have your position taken seriously. There's no inherent reason why people would prefer politicians with simplistic worldviews and then turn around and want nuance and complexity from these professional or personal conversations.

Don't get me wrong. I'm not gonna stop, because I can't, because that's my personality. But the idea that an argument becomes more impactful when you admit its weaknesses is definitely only true in a small handful of cases.

Radar's avatar

I hear that.

I wonder sometimes if this is a generational thing.

My internal picture of people conversing about a topic is two people, one-on-one.

I think it's possible that for generations younger than me, their picture is that conversing about a topic is largely performative, because it's happening at school/work or online.

And so I come away with the idea that more is possible than someone whose conversations are mostly happening in a performative space.

My view is further skewed because I'm a therapist and so most of my conversations happen in a much more private, gentle kind of space.

neuretic's avatar

i feel bad for the guy in the example image. he's clearly imprisoned by a fixed mindset and feels powerless to change anything about the world (you're either lucky and rich or unlucky and a barista). dude's living in a mental hell.

Kveldred's avatar

>you're either lucky and rich or unlucky and a barista

Boy, that hit home. I feel sort of like that—like I was born to be impoverished, and there's nothing I can fuckin' do about it:

→ get o-chem degree, mistakenly believing you can continue on to PhD

→ stepfather dies, mother becomes ill, drop out of grad school & move back home

→ no one cares if you have a BSc in such a field u cretin lmao

→ luck into an amazing, unrelated career anyway! promoted! things are lookin' up for ol' Kvel!

→ VP's son brought on & given your dept; you do all the work still though

→ VP's son doesn't like you. you don't like him

→ eventually quit, angrily, expecting that your years of experience & address-book full of contacts will easily land equivalent position elsewhere

→ well, no; you don't have any petroleum- or management- related degree, after all, and it turns out people don't remember good turns for very long once you can no longer do anything for them

→ "hmm. I've enjoyed programming, as a hobby, and I knew a lot of people back in the ol' LW days that didn't have a CS degree but managed to make a good living as programmers. perhaps I can retrain & enter the coding w–"

→ haha no bad timing it's no longer a field that doesn't care if you don't have a CS degree & there is a glut of programmers now

→ also, AI

→ wife leaves too

→ shoot yourself as your savings start reaching the "better take that barista position" level

mistakes were made—like, probably should have chosen a different degree to start with; and maybe I should have just sucked it up with Mr. Nepo-Son; the pay was still good, it just hurt my pride to do all the goddamn work while being treated like a sidelined errand-boy for a turd who didn't know the first goddamn thing about the industry but Daddy was the big cheese so aarrrghh—but at this point, what can you do? go back to school—with what money, and what will happen to me ma if I do that, and how many years before THAT pays off?

nah, 'stoo late, for me. I feel like I'm not dumb; I feel like I ought to have been able to climb out of the ghetto somehow; other people manage to have nice lives, many of 'em with less brainpower than I—but I fucked up somewhere, and now I'm just makin' time till no one relies on me & I can quit.

</self-pity & complaining>

neuretic's avatar

read dweck's book "mindset". it will help you gain awareness of the fixities in your thinking and dissolve them. strong chance a lot of the problems you're facing will fall away simultaneously.

Performative Bafflement's avatar

> nah, 'stoo late, for me.

As somebody who's also lived a high variance life, I'd urge you to consider your past record as Bayesian evidence that things can change significantly for the better, and do so more quickly than you'd think.

It's never the time to despair until you're actually in the ground - instead, I've found it helpful to try to increase your "luck surface," or the amount of your interactions with the parts of the world that tie into those high variance good outcomes. This boils down to trying to interact more with the people in the careers and lifestyles and milieus that you would like to be a part of again - whether digital or real life.

You should still absolutely follow whatever rational, thought-out steps you can think of to get there, too - but increasing that luck surface often pays off further and faster than the step-by-steps, in my own experience, because so much of the world is gated by person-to-person trust and relationships and vouching.

Kveldred's avatar

Probably pretty good advice—thanks. That is indeed exactly how I got the one position, the one that (I thought) would be a career for life.

Back then, with a new wife to provide for & my mother's medical bills piling up, I had taken the first job that called me back. Wasn't a lab-tech position or anything like that (labs mostly just laughed in my face, or a more polite variant of the same: "well, you know, we have people with graduate degrees applying for this opening; so... I mean, I'll take your résumé—just warning you"), but rather: delivery driver for an oilfield services outfit.

Got to know the people I delivered to pretty well; one day, the regional fleet manager (at a fairly large gas company—not ExxonMobil or anything, but still intra-national) was chatting with me as I unloaded the pumps & motors & so forth, and I made a little joke about how someday I was going to be able to buy the expensive, $5-a-loaf bread, and "see how it tasted."

He said something like "oh, they ain't payin' you well, eh?" & I rushed to backtrack a bit: "No, no, I'm not complaining!—it's just... well, we were making it before, but now the health insurance & all is coming out of each check, and we're struggling a bit, that's all. No worries!"

Next week, on my run to that particular gas co., as I was unloading more equipment & supplies & parts etc., he walked up again: "Kvel, son, I haven't been able to stop thinking about your wish."

I replied, intelligently, "huh?" He explained: "Your wish to one day be able to afford the /good/ bread at the grocery store!"

Turns out he really /hadn't/ been able to stop thinking about that half-joking, off-hand comment, heh. Really was a good man with a soft heart, as I'd come to know; he'd already gone & gotten the other local bigwig on board, and so offered me a job right there.

It was just as a field-hand, nothing fancy... but this being the oilfield, it paid more in a single week than my previous job had paid in a month anyway—hence: I jumped for it, of course.

--------------------------

I like to think I'm not hard to get along with, nor stupid; and—well, esp. for that kind of money, I REALLY, REALLY took pride in my work, you might say, heh. To my amazement, this was noticed, and I got promoted—fairly rapidly, even! (Felt & still feels unbelievable, to me: I hadn't thought that really happened—never saw anyone achieve a high position who hadn't started there. Us worms stay worms.)

So did my benefactor, who rose up to become "Fleet Director", with only the VP of Ops & the C-suite folks above him;¹ I was immediately below. The story—for the last two years I worked there—was that he'd retire & I'd take over; had been "groomed for it", so to speak... although that word has recently gained something of an unpleasant connotation–

I was real happy, man. For the first time in my life, since childhood, I didn't feel sick with dread every day at the prospect of unexpected expenses. For the first time, I enjoyed my work, and felt that my ability was perceived; I had the power, the opportunity, to make an actual difference (if I had a good idea, I could /try it/—instead of being told "huh? why is this peon talking to... look, buddy, just get back to sweeping alright", heh).

Hell, for the first time, I could afford something nice if I wanted it—a watch, an RC heli, a good PC... small-enough luxuries, but new to me.

I guess, in short: for the first time, I /was/ somebody, y'know? It was nice.

-------------------------

Probably shouldn't have ever left, but I suppose I'd let success get to my head & so convinced myself that it wasn't a terrible idea: "well, I was given a chance, and proved that I can be an excellent employee—a real asset—when given such an opportunity... the same thing can happen another place, right?"

Ha-ha! No, of course not, Kvel, you moron! You get given a chance if you are able to rub elbows with people who can offer one,² or if you have qualifications. You cannot do the former, being a nobody & all, and you fucked up the latter back when you were choosing a major!

("...so just stop struggling! Don't you want to... rest, Kvelly-boi? Give in, give in & rest!–")

Well... or perhaps that's, uh, a bit premature. Maybe I /can/ think of a way to "increase my luck-surface". After all, as you say: it worked the one time, right?

The idea sort of reminds me of Scott's "micro-marriages": the idea that exposing yourself to situations & people that /could/ lead to a marriage adds up in the same way that exposure to health risks adds up in the "micro-morts" concept (if I recall the term correctly). Perhaps I can expose myself to "micro-careers"...?
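
The arithmetic behind that "micro-" framing can be sketched in a few lines. This is purely a toy illustration; the per-event probability and the number of events are made-up numbers, not estimates of anything real:

```python
# Toy sketch of the "micro-marriages"/"micro-careers" idea: many small,
# independent chances compound into a substantial cumulative one.
# p (chance per exposure) and n (number of exposures) are invented.

def cumulative_chance(p: float, n: int) -> float:
    """Probability of at least one success in n independent exposures."""
    return 1 - (1 - p) ** n

# e.g. a hypothetical 1-in-500 shot per networking event,
# attended weekly for two years (104 events):
print(round(cumulative_chance(1 / 500, 104), 3))  # → 0.188
```

Nearly a one-in-five chance from individually negligible exposures, which is the whole point of increasing the luck surface.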

-------------------------

Ahem. Pardon for the lengthy & depressing rant-ramble, anyway. It just... it makes one feel better, to get the poison out–

.

--------------------------

[footnotes]

--------------------------

¹: There was no "President of Ops". No, I don't understand why—that being the case—there was a /Vice-President/ of Ops, either.

-----

²: Out of everyone at other outfits with whom I had—I thought—good working relationships, or maybe even friendships: only a single one came through, and offered a spot at his own company.³ The rest said they'd "be on the look-out"; never heard back from any, though. The guy I had worked most closely with for years, to whom I spoke nearly every day, /didn't even return my call./

Y'know, that fucker owed me, too; I had made the decision to go with his outfit over their #1 rival, and as we were immediately their largest client, this made the company pretty happy—and I told the leadership over there, over & over, that "you know, Scott [different Scott, of course] is probably the main reason we went with you guys... quick, knowledgeable, always goes above-&-beyond! give that man a promotion!" Not long after, he gets invited in to discuss taking over for his boss (who was himself getting moved to Houston to be a real big-shot).

Yessir, ol' Scott L. thanked me for that nearly once a week, for a year thereafter; invited me to his dam' wedding, even!...

....and /not even a call back?/ Not even an email saying "hey heard about what happened, gonna miss you"...? I got /those/ from dozens of people I didn't even remember!

Not gon' lie: that one hurt—I still don't quite understand it. Oh, well.

-----

³: I didn't take it, because it was a pretty large step down from the previous position... well, now I'm applying to be a goddamn field-hand again, so—whoops!...

Performative Bafflement's avatar

Yes, I think the “micro-marriages” is pretty much a 1:1 example - as you’ve seen, it’s sort of a general trend for things in life to go from “basically noise” to “significant life change,” like your offhand comment about the fancy bread.

Life in general is pretty chaotic in the literal sense, and that’s one of the hallmark dynamics of a chaotic system. So just like going to more in-person events with single people of the relevant gender leads to statistically more marriages, the same happens for careers, and friendships, and schools, and much else.

So increasing your luck surface is directly increasing those “basically noise” interactions, ideally among people with the potential to drive a big connection or uplift.

Then there are more prosaic techniques - it sounds like you were well regarded at your former career - if so, you interacted with a bunch of people, not just the guy you helped lift up who ghosted you. So you can look at them on LinkedIn or wherever and see where they are, and look for roles there, and see if they'd be fine recommending you, because referrals are generally a really significant buff in HR systems, and so on.

I wouldn’t really consider that “luck surface” stuff directly, that would be more reaching out and going out for a drink or hanging out with some of those people that you had good in-person rapport with directly. But still not a bad idea.

Hope you’re back on the upswing soon, friend!

lyomante's avatar

you can only live with the cards you've been dealt. There isn't "god's plan for your life," in that there is an ideal, perfect state every human can achieve. You just strive and do the best you can, and it sounds like you are doing that.

Be at peace some dude.

lyomante's avatar

you'd be surprised how much the meritocrats rely on daddy's money and connections, and how hard life is without it.

much easier to have a growth mindset living in flushing, ny instead of kansas, or having dad bankroll an MFA in creative writing and an internship in a publishing house.

lot of life is not fair

Marc's avatar

I haven't given this idea much thought, but it occurs to me that one of the fundamental differences between private and public conversation is the role of humility. Setting aside moral considerations, in one-on-one conversation, if one person lacks any humility and refuses to acknowledge even blindingly obvious times when they're wrong about something, the other person is eventually going to stop talking to them out of sheer frustration. Humility, at least to some degree, is a signal of good faith, and helps keep the conversation/relationship going.

In a conversation with an audience, there are far more people to whom you're signaling, and, absent a culture in which humility is held in high regard, the more confident and unrelenting person might be seen as "winning" even (especially?) if they drive their interlocutor mad. This is, of course, one of the basic critiques of our age, but I hadn't before recognized how different it is, on a signaling level, from private conversation.

Radar's avatar

Humility is a signal of good faith -- that's a keeper.

oxytocin-love's avatar

I've said this before, and every time I've been told that this is an unreasonable expectation to have of humans and if I want to have arguments I should get over it and deal with never having the satisfaction of being told "you were right, I was wrong" and just let people quietly update afterward despite never backing down during the interaction. :(

Taymon A. Beal's avatar

It's a good aspirational discourse norm but it's hard to hold people to in most contexts, and so if your interlocutor cares about something else more than they care about being a good discourse citizen then you're probably not going to get it.

🎭‎ ‎ ‎'s avatar

Well... yes, that is an entirely unreasonable expectation to have. Just be grateful that you now have information on their beliefs, and learn to work around that.

oxytocin-love's avatar

Well, that was surprising to me, because personally I find it pretty easy to acknowledge when someone has shown my claims to be factually incorrect.

Emphasis on *was* - I've now been told this so many times that I do now believe it about other people, I'm just sad about it.

And wanted to give Scott the same information I've been given.

I don't know why you felt the need to hammer home the thing that I already said I knew but was sad about.

I am not grateful to live in a world of unreasonable people. I doubt I ever will be.

Radar's avatar

Some people respond to other people's sadness by getting annoyed.

🎭‎ ‎ ‎'s avatar

I'm not saying you need to be grateful about the circumstances. I'm just saying to get what you can out of debating others. Which is... information. It's ultimately their loss if they're unable to take in valuable information, no? But now not only do you know that they are operating on false information, you can also make better guesses as to how they'll act in the future. That is information that you can work around.

Radar's avatar

Expectations versus aspirations. Some people can do this; a lot of people can't. Egos get in the way for a lot of people. So a reasonable expectation is that sometimes this is possible, but most of the time it's not. And if you want it to be more possible, then you aspire to find and give your time and energy to people who are kind, humble, and considerate. It means prioritizing those qualities more in your life, and it narrows the field, but in a way that's ultimately enriching. And when you notice yourself railing against people for not being better than they are (which is sometimes understandable), you can also encourage yourself to accept how people actually are and move on to focusing on the aspects you can do something about.

oxytocin-love's avatar

I don't know why we're lecturing me about this, when Scott's OP is the one saying this should be the norm, and I'm just saying I tried and have already given up hope.

Radar's avatar

Oh I’m sorry! I don’t want you to feel lectured at and I see how you might.

For my part, I think maybe I didn’t want you to lose hope that your life could have more satisfying interactions in it.

Sometimes if we recalibrate expectations by letting go of how we feel things should be, then it frees us up for what’s sometimes possible.

Not for me to say though about your feelings!

oxytocin-love's avatar

Thanks, I appreciate the apology.

Most days I'm pretty good at the Serenity Prayer, but I'm honestly having the kind of day that means I probably shouldn't be engaging with internet comment sections at all.

Ed Mirago & friends's avatar

I love this. Personally, I really appreciate (systemically! including intellectual, somatic, emotional, interconnectedness channels) when an interlocutor even acknowledges my point and am wildly relieved and sometimes delighted when my accuracy is noted. Feels generative and collaborative rather than like being smacked.

Max Morawski's avatar

Not to be too unkind to the commenter, but assuming technology will stop progressing now after a hundred years of increasingly dramatic advancement seems like a very strange thing to believe.

Anna Rita's avatar

I remember when I was young, and one of my classmates expressed disappointment that every important invention had been made already, and there was nothing new to invent.

quiet_NaN's avatar

I would not describe technology as progressing "increasingly dramatic" in the last century. Around 1900, Planck started what would become quantum mechanics. Five decades later, we had nuclear weapons and transistors, both of which use QM. Another decade later, hydrogen bombs and integrated circuits.

And then things just slowed down.

Take physics: the discovery rate of elementary particles has been on a steady decline.

Around 1900, we had the electron and the photon.

In 1932, the positron (previously predicted by Dirac).

In 1936, the muon.

In 1956, the electron neutrino (previously predicted by Pauli).

In 1962, the muon neutrino.

In 1968, we got direct evidence for the u-, d- and s-quarks (previously used by Gell-Mann et al to explain the hadron zoo discovered in the previous two decades).

Ca 1976, the charm quark (previously predicted, like all the following).

In 1977, the bottom quark.

Ca 1977, the tau.

Ca 1979, the gluons.

In 1983, the W- and Z-bosons.

In 1995, the top quark.

In 2000, the tau neutrino.

In 2012, the Higgs boson (predicted in 1964).

Since then, no further particles.

We certainly have made a lot of progress in some areas (applications of semiconductors, for example), but overall progress has been stagnating.

For another example, suppose you are a person in 1962. You remember the early passenger airplanes of your childhood, like the DC-3 (1936, 32 seats, 330km/h) and the recent progress like the Boeing 707-120B (1961, 137 passengers, ca. 897km/h). Extrapolating, you might anticipate the Concorde (1969, 120 passengers, 2179 km/h). God only knows what you might expect from 50 years more progress after that. Flying cars? Hypersonic airliners? Suborbital shuttles? Space travel, e.g. a Disneyland on the Moon?

Instead, you get the 787 (440 passengers, 903km/h), with the caveat that the most produced jetliners are still the 737 and A320 series (with about 230-240 passengers). While these surely have better fuel economy than the earlier jets, I think it is fair to say that jetliners have converged to proven designs in the last few decades.

Relevant reading: https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/

Shankar Sivarajan's avatar

I agree that progress in physics is now less dramatic than a century ago (hard to beat relativity, quantum mechanics, and nuclear physics all going from completely unknown to quite well understood), but look for instance at biology, gene-editing with CRISPR most obviously, and that still looks pretty impressive.

And yes, technological progress for the past few decades is most visible in "applications of semiconductors" instead of planes and spaceships, but come on, computing and communication tech is (at least) just as cool! And that's even if you entirely ignore the AI advances of the last few years.

Even if you're myopically focused on space, sure, we don't have moon colonies, and no man has set foot on Mars, but we've discovered thousands of exoplanets and know a lot about many of them, there's a Gravitational-Wave Observatory that's working well, and I think the latest space telescopes would be living up to the expectations of the people from ~fifty years ago.

Max Morawski's avatar

I think what you're describing is scientific progress, not technological. Think about what it would be like in every decade to be teleported 20 years into the future. The closer to now you get, the more insane the transition would be. You'd start off getting things like planes and electricity and phone lines, and then suddenly you have computers, phones, self-driving cars, gen-ai. I don't think this trend is going to slow down--I think in 20 years, the world will be unrecognizable to me due to tech. Whereas the commenter seems to think society will look more or less the same.

Stephen Pimentel's avatar

Scott is unambiguously correct about this one. Some people are kind of dicks, cognitively speaking. They just want to press their arguments, like toddlers grabbing at a toy. Low self-awareness, low impulse-control.

But also, one should ration how much energy one spends countering such people. Sometimes they are best just ignored.

Radar's avatar

yes yes!

Sniffnoy's avatar

A related point: Don't express agreement as disagreement! I see this all the time on the internet for some reason, where person A says something, and person B essentially *agrees* and wants to expand on A's point, but for some reason frames it as a *disagreement* between them.

Bad:

A: People should be aware that X.

B: Except that you're forgetting about Y. [Y doesn't contradict X, and in fact expands on it.]

Better:

A: People should be aware that X.

B: Indeed, but let's not forget Y as well.

Or I guess relatedly, if you overall agree with someone's point, but need to correct some details, don't express that as a fundamental disagreement either! Express it as correcting some details!

Bad:

A: People should be aware that X.

B: That's not true at all, it's actually Y. [Y is only slightly different from X.]

Better, by removing the combative framing:

A: People should be aware that X.

B: I should note that it's actually Y.

Better, by acknowledging fundamental agreement:

A: People should be aware that X.

B: Indeed, people should be aware, but I have to correct you and point out that it's Y, not X.

Really, of course, this is a special case of the more general principle of "lead with agreement", but "lead with agreement" is usually described as a tactic for getting people to listen to you when they disagree. For some reason, people fail to lead with agreement when they *essentially agree*!

Expand full comment
AlexTFish's avatar

What an excellent point! Thank you, I think I can be prone to this when "agreeing" with people.

Expand full comment
Trust Vectoring's avatar

LMAO was that Impassionata? That fricking straggot, I just can't.

Expand full comment
Big Worker's avatar

Yes, I've been trying to start a lot of my replies to people with "yes" or "yeah" lately to the extent possible. It's so easy to assume hostility and disagreement on the internet; it's worthwhile leaning against it.

Expand full comment
specifics's avatar

This is really great. I’m annoyed by this behavior in others, and yet I know I do it myself. Will try to take note of it and change.

Another good reason to do "Yes, but": Admitting you’re wrong is hard but necessary, and it’s good to practice when the stakes are low. It’s rare that a random person online is going to overturn my whole way of thinking about something, but it’s common that they’ll poke holes in a flimsy argument I’ve made off the cuff. Being able to acknowledge that is part of training myself to think more clearly and honestly.

Expand full comment
Isaac King's avatar

You shouldn't use blur effects to hide information like you've done with the commenter's name and profile picture; most such effects are reversible. (And even without reversing, you leave enough detail on the profile picture that someone familiar with that person could guess it was likely them from the overall coloration.)

Expand full comment
🎭‎ ‎ ‎'s avatar

I don't think he was trying very hard to hide his identity in the first place, especially given that it's... *very* obvious who that is if you've been around here awhile.

Expand full comment
Isaac King's avatar

Sure, in this particular case it doesn't matter much, because someone who wanted to find out can simply google the text of the comment to find it. But it seems like a bad habit to get into, because one might unthinkingly use the same method to hide information that one *does* really want hidden.

Expand full comment
Shankar Sivarajan's avatar

You can also easily google the text. It's more pro forma than an actual attempt at obfuscating identity, but I agree one should use solid colors (traditionally black) for redaction.

Expand full comment
Michael Watts's avatar

> You shouldn't use blur effects to hide information like you've done with the commenter's name and profile picture; most such effects are reversible.

Unless the image has been replaced, Scott hasn't used a blur effect. He's used an information-destroying method; you can clearly see the giant pixels that redact the name and avatar. It's not reversible.

(You can apply the same method to a list of candidates and see whether they produce matching output, but that's not reversing the transformation. You're complaining that not enough information has been destroyed, but you're *saying* that none has.)
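Michael Watts's distinction (block-pixelation destroys information, yet a list of candidates can still be checked against the published output) can be sketched in a few lines of Python. Everything here is toy, hypothetical data: the commenter names, pixel rows, and block size are invented purely for illustration.

```python
# Toy illustration: pixelation is lossy, but an attacker can still apply the
# same pixelation to each candidate and compare outputs. That's a dictionary
# attack on the redaction, not a reversal of the transformation.

def pixelate(pixels, block=4):
    """Downsample a 1-D pixel row by averaging non-overlapping blocks."""
    return [
        sum(pixels[i:i + block]) // block
        for i in range(0, len(pixels), block)
    ]

def match_candidates(redacted, candidates, block=4):
    """Return candidate names whose pixelated form equals the redacted row."""
    return [name for name, px in candidates.items()
            if pixelate(px, block) == redacted]

# Hypothetical "rendered name" pixel rows for three candidate commenters.
candidates = {
    "alice": [10, 12, 11, 9, 200, 210, 205, 199],
    "bob":   [50, 55, 53, 48, 120, 118, 121, 119],
    "carol": [10, 12, 11, 9, 120, 118, 121, 119],
}

redacted = pixelate(candidates["bob"])         # what got published
print(match_candidates(redacted, candidates))  # -> ['bob']
```

The point of the sketch: `pixelate` maps eight pixels down to two, so the original row cannot be recovered from `redacted` alone; it is only the small candidate pool that makes re-identification possible.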

Expand full comment
PsyXe's avatar

Bulverism spotted - also, it's really annoying when you're saying "guys there's a less than 50% but still significant chance that something really bad is going to happen so we should try to stop it" and people convert that to "actually I want the bad thing to happen"

Expand full comment
Peter Defeel's avatar

When I lose internet arguments I just slink away.

Expand full comment
The Unimpressive Malcontent's avatar

Tangentially related at best, but I remember when the New Atheist movement created the pejorative "I'm an Atheist But" to mock and dismiss any atheist who didn't take the position of full-blown unconditional anti-religion. I was not a fan. Incidentally I became an economist. We like buts. Big buts. And I cannot lie.

Expand full comment
Peter Defeel's avatar

The new atheist movement became so embarrassingly shrill and stupid that I never call myself an atheist when asked. Sceptic is all I’d admit to.

Expand full comment
lyomante's avatar

atheists often can be isolated and beleaguered, though. religion can often be incredibly oppressive to many kinds of people, and drive them away. I don't like it either, but i get it, and you have to be careful: sometimes it's out of real inner pain.

christianity in particular decided to completely ignore young men unless they were pastors, musicians, athletes, or fathers. created their own worst enemies.

Expand full comment
FractalCycle's avatar

Finally, some more classic rationalist writing in the form of "extremely basic errors made by people not being autistic enough!"

(I'm not being sarcastic or poking fun, I unironically think a high % of the world's irrationality is caused by most people not being autistic/anxious/high-conscientiousness/etc enough to deeply care about getting things right, *in any domain at all* (let alone thinking, talking, or arguing).)

Expand full comment
Andrew Marshall's avatar

"Yes I was completely wrong, but here is why I am basically right anyways..."

Expand full comment
beowulf888's avatar

I just wish more people would support their arguments with links and data. You'd think they'd have read or heard something somewhere to convince them that they had a superior understanding of the issues. But so few people take the time to offer them up.

My least-favorite excuses for not providing links are: (a) "I'm not going to do your homework for you," and/or (b) "the burden of proof is on you to prove me wrong."

Who knows? I may be wrong, but if you can't show in the data where I'm wrong, don't expect me to roll over and agree with you. (You in the general sense, not you personally, Scott.)

Expand full comment
David J Keown's avatar

A: I once dropped a turtle in a pond. It swam around happily. Therefore, I expect this tortoise to thrive as I drop it in the ocean.

B: But the ocean is much larger than a pond and its size gives it different properties--you shouldn't automatically expect an animal that can survive in a pond will survive in the ocean. Furthermore, the tortoise is different from the turtle precisely in that it lacks the turtle's adaptations for swimming.

A: Yeah, but if you replace the “t” in tortoise with a “p” it spells “porpoise”, and porpoises can swim in the ocean. That’s only one letter difference. Ergo, we should expect this tortoise to be fine.

Expand full comment
Emily's avatar

I like "Oh, no! Thank you for pointing that out!" I'm not sure why. Maybe it wouldn't work on the Internet.

Expand full comment
Radar's avatar

Agreed, it's got a very nice tone to it.

Expand full comment
Shaked Koplewitz's avatar

It's very hard to do in text because it has a completely different meaning based on tone

Expand full comment
Taleuntum's avatar

This sounds very pleasant, but too female-coded for male usage imo.

Expand full comment
Emily's avatar

I was wondering about that!! I can see that being the case.

Expand full comment
Jack's avatar

Love how I knew that comment was Freddie before I googled it.

Expand full comment
The Unimpressive Malcontent's avatar

His bizarre worldview certainly requires some unique and recognizable rationalizations, doesn't it. That weird kind of Marxism that has to deny the improvements in living standards technology has wrought because the whole edifice falls apart otherwise.

Expand full comment
Jack's avatar

I think it has less to do with his politics and more to do with temperament. He writes the same way about sports and movies that he doesn't like. It's just a particularly dismissive style that doesn't even attempt to steelman the other side or consider the possibility that there is a valid alternative to his position. "Agree to disagree" isn't part of his vocab.

Expand full comment
Haakon Williams's avatar

Yes, Scott. This is so important. The power of acknowledgement. The systemic reason I find this so important is: the absence of such acknowledgement drives cycles of increasing rivalry (and attendant existential risks). 'Oh you didn't acknowledge my correct point? Fine, I'm not going to acknowledge your correct point! Let's just yell at each other until we get so polarized and alienated that we make each other enemies (when really we could be collaborators).'

Expand full comment
WSLaFleur's avatar

It seems like a fraught assumption that real discussions ought to contain potential win criteria. Is this your real position?

Expand full comment
Simon Break's avatar

This drives me absolutely insane, way out of proportion with its actual moral seriousness. Correcting yourself is just so, so easy. You feel better, you get to look wise and humble, and it actually makes people take you *more* seriously. If I was a pure & total sociopath I would still apologize for mistakes as a matter of strategy.

Having said that (politics warning!), the Current Situation isn't exactly an advert for the strategic advantages of admitting error. Apologies for the mistake everyone.

Expand full comment
Eremolalos's avatar

I think the best thing about admitting you are wrong is that it makes the person you are talking with much more receptive to things you say in the immediate future, and the next while. They will see that you pay attention to their points and evaluate them fairly, and that you are more interested in establishing what's true than in winning. Having the listener see you that way at least doubles the impact of everything you say.

Expand full comment
Taleuntum's avatar

I admire your optimism! I would say this depends on your interlocutor, they could easily take your admission as confirmation for you being an idiot who is not ever worth listening to.

Expand full comment
Eremolalos's avatar

There isn’t much to lose. They already know you are wrong. They have pointed it out. So if they are going to think the mistake proves you’re an idiot, failing to address the fact doesn’t help. And stating calmly and bluntly that you were wrong shows confidence.

Expand full comment
Radar's avatar

I like that Scott is offering ideas about how not to be an a**hole in conversations.

The thing that strikes me about the comment he quotes is that it's dripping with snide condescension, and that's a harder thing to talk about than whether someone can acknowledge they made a mistake or not.

One could learn to say the words to acknowledge a math error before continuing on. But if all the rest of the communication is snide and condescending, then I'm not sure how much progress has been made.

Learning to treat other people with respect is a deeper kind of skill that requires genuine, not performative, humility. Otherwise, one's basic contempt and disdain for other people will bleed right through. And generally that's a flashing sign of one's own contempt and disdain for oneself. We tend to talk to other people the way we talk to ourselves in our own head, at least some of the time.

And so it speaks to the challenge of not being able to have very productive conversations with people who don't have a basic degree of self-esteem on board. Helping people improve their self-esteem seems like a fairly urgent task in the world in which we live. It can be done, but it can't depend only on each individual doing it by themselves, I think. Though that certainly is part of it.

Expand full comment
Nope's avatar

This is a common social issue. It's why one of the most common marriage tropes is the wife who wants to vent about her day but gets upset when her husband suggests a bunch of solutions. The solutions are fine, she just wants acknowledgement before he launches into them.

Expand full comment
David Wyman's avatar

Scott a really good sociopath would admit fault occasionally in order to increase people's trust in the long run. In fact, I am worried that some AI will figure this out and start advising people about this.

But it's the right thing to do, and if we are to all go down, we should go down with honor.

Expand full comment
Freddie deBoer's avatar

I genuinely think you're having a manic episode

Expand full comment
Jeremy R Cole's avatar

It's sort of funny how blurring the author's name does nothing to help because the writing style is so de-anonymizing. Or maybe I'm just an overfit LLM.

Expand full comment
avalancheGenesis's avatar

I try to keep some epistemic humility and not take shortcuts like "I precommit to no longer reading takes about AI, neoliberalism, or YIMBYs on this blog" (you never know when a Worthy Opponent might make a good counterpoint!), yet the track record at this point is pretty grim. Scott doesn't always win beefs, but not taking the L after that post referencing a famous astronomer got smacked down was...yeah. And that's pretty civil and almost productive, compared to the endless feud and gaslighting-about-the-feud with a certain political blogger. At least there it's ostensibly from high-level disagreement generators (cf. Varieties of Argumentative Experience) rather than pulling a Gary Marcus Gotcha and committing to a bit long past its expiration date. It's a shame because such arguments genuinely could be valid, insightful, even strong, if they were further refined for actual convincing rather than as soldiers. Arguably the earliest such batch of contrarian takes were just that, unlike the more recent ones that contort themselves into knots to include Against Corporate Power And Capitalism. Kind of gives away the game. Still good to read on other topics like mental illness and the decline of Media, Inc., but...a little charity and admission of error would go a long way towards heading off Gell-Mann Amnesia there too.

(It's weird to dance around author identities when it's clear how little effort went into "anonymizing" this one. Not that many orange checks floating around! Distinctive prose! Meta-sphinx-debating, I am not sure how I feel about this veneer of privacy versus just calling out the poster directly, or actually making the inspiration oblique enough without loss of generality. Feels mealy-mouthed.)

Expand full comment
Chris Willis's avatar

Sorry Scott, but in this case I’m with your interlocutor. The thrust of his original argument was watertight, if not his math. Your response was not your best, and contained some goalpost-shifting and other issues.

Instead of getting into the weeds, your interlocutor was - admittedly, rather bluntly, perhaps even rudely - trying to see if he could approach the wider problem from another angle to see if he could convince you that way.

Expand full comment
Dirichlet-to-Neumann's avatar

In the spirit of Scott's post, this issue is neither limited to boomers nor more prevalent with them than with anyone else.

Expand full comment
mike_hawke's avatar

I remember that post and that comment. I remember also thinking your reply was way more restrained than mine would have been.

(See also "Logical Rudeness" and "Is That Your True Objection?" by eliezer yudkowsky)

Expand full comment
Vetera Letters's avatar

Yes, I think people sometimes underestimate the importance of "the tone of voice" in a discussion.

It reminded me of the following quote: "We often refuse to accept an idea merely because the tone of voice in which it has been expressed is unsympathetic to us." - Friedrich Nietzsche

Expand full comment
Eric Rasmusen's avatar

A problem. What if I say, "I support X because Y = 1,234" and someone says, "No, Y = 1,233", and they're correct? Then I do want to say "Yes, Y does equal 1,233, but it doesn't matter to my conclusion."

Expand full comment
Victor's avatar

The appropriate response to someone who cannot admit they are wrong is pity, because they have closed themselves off to the most important source of self-growth. They will never achieve their potential as individuals.

Expand full comment
TenaciousK's avatar

I work in a psychotherapy practice and often use a Buddhist frame with my clients (positive motivations, unskillful action) when we're discussing difficult communication with others. With difficult feedback, if you validate positive motivations first, you preempt the defensive reaction that follows the double-bind you've put the other party in (I can either feel ok about myself/my worldview etc or I can believe what this person is telling me). It's pointing out a misstep rather than making someone "wrong," which makes it so much easier for someone to take accountability (for the impact of something they did, said or whatever) and simultaneously save face. Making people wrong leaves little room for this.

What you're saying here sounds familiar. I'd add, though, that the motivation for provocation is usually along the lines of undermining the other party's credibility. What people seek is the validation you're referring to, and when confronted with the opposite, their avenues for saving face are limited.

I wish we put more focus on rhetoric in public education.

Expand full comment
lyomante's avatar

to actually change your stance can be painful, and a lot of people argue stances not realizing this. Like there are atheists who never believed, atheists who did but are angry at the church, and those that did but are not mad at it; if anything they may very much wish they could go back, because truth can make a worse world if not better.

i think people don't realize how hard it can be to keep your soul together. Scott might see it as irrationality but the world is a flood that can happily overwhelm you and carry you off, and not everyone is a strong swimmer. those people argue to pass the time really, no skin off Scott's nose if AI happens.

some dudes argue to resist though, the poster there i can feel it. Their world and values are being washed away by a flood, and fuck the autistic niceties. the trans kid wants to survive and finally feel ok; the anti-trans side is watching fundamental human concepts get radically transformed and the world too. For either to give can be devastating.

so you get this sort of warfare debate. tbh the world's a bit too much with us now, we kind of need to wall off from others more to save our souls.

(for me i argue to prove i exist. at heart i've always been an outsider in a tiny mill town, and rationality or not, if i don't let my mind free i die inside)

Expand full comment