774 Comments

IQ is not a real thing. An IQ test is a vanilla measure of cognitive function that has been misunderstood to be measuring some fundamental underlying characteristic called "IQ" which does not exist. Cognitive function is important but you don't need an IQ test to measure it, and being slightly better at rotating shapes than the median does not make you the master race.

author

Thank you for proving my point.

(in particular, the fact that you are perfectly happy to use the term "cognitive function", but freak out about "IQ" because you associate it with being "the master race", is so exactly what I was complaining about that I couldn't have come up with a better example if I'd been trying.)

You'll notice many of your commenters believing that IQ does in fact relate to superior genetics, a belief system for which "master race" is shorthand. But if the shoe fits...

author

"...that IQ does relate to superior genetics"

I think you're still doing the thing. Given that IQ is mostly genetic, if you think having more IQ is better, you could call this "having superior genetics". But you would only do this if you were obsessed with figuring out ways to cast things in order to make people sound evil, which again, is my whole point. I think you are so incapable of looking at this issue through any lens other than how it can be used to make you gain or lose social status that you're talking about the particular status games you can play around it, while being convinced you're talking about the real world and somehow disagreeing with my point.

A higher level of intelligence (which is what IQ is attempting, imperfectly, to measure) is *useful*. This is true in as much as being "attractive" is useful. But being "better" is a value judgement and thus requires someone making an evaluation. It may give you greater opportunities for success in life, but it won't necessarily make you, e.g., happier.

Do you anticipate a post-scarcity utopia where being intelligent *isn't* instrumentally useful? Like even if no one has to be smart in order to earn a living, what kind of life are you imagining where it is never useful to be a bit smarter than you are now?

Well, define "useful". There could always be situations where being smarter would make you more efficient — games; in any true utopia all human activities would essentially be games — but as I discussed in this comment (https://astralcodexten.substack.com/p/heres-why-automaticity-is-real-actually/comment/39366842), in the long run I would expect arbitrarily smart people to have *less* fun. More broadly I don't think the possible gains would be sufficient to outweigh my general bias against mental self-modification. Like I said upthread, I like being me. And "me" is a gradient, not a binary — I'm willing to accept a certain amount of self-modification proportional to how instrumentally necessary it is. I would press a button that changed my favourite flavour of ice-cream in exchange for immortality; I would not press it in exchange for fractionally better parking opportunities in the Heaven parking lot. Making myself arbitrarily smarter seems a lot like pressing a button that may or may not change various aspects of my personality in unpredictable ways (because again, lots of things that I enjoy now might become boring to smarter-me; and not just movies or games, I might e.g. fall out of love with my girlfriend because I find her predictable now).

IQ is, in practice, defined as the ability to correctly answer a bunch of questions on a sheet of paper within a certain time limit.

I think that all other things being equal, yes, the ability to do things is always better than the inability to do them.

(Obvious objection: "But what if you're strong enough to lift a wardrobe over your head, and now all your friends want you to help them move?")

"It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question."

Having higher intelligence simply allows a level of understanding and experience that is not possible without it and that is valuable in and of itself.

Seems dubious to me. In the first place, I think there is a difference between intelligence and understanding/wisdom. I don't think you need superhuman (or even genius-level) memory and processing speed to understand and internalise a correct understanding of the world along all relevant axes, even if, perhaps, you would need them to derive it.

But that's a distraction, because the literal pig probably *won't* understand a lot of things, but I still feel like it's ethical to keep it as it is. I think it all comes down to personal choices and preference, in the end; I'm no hedonium-tiler who would turn every sad philosopher into a happy pig. Equally, however, I feel deeply that turning a happy fool into a sad Socrates against his will would be quite a cruel thing to do. Indeed, even with animals — given the power, would you really enlighten every puppy and every butterfly? I feel like the pithy quote is unfairly tarring the happy animal's existence by choosing a pig.

Perhaps part of the answer lies in reversibility; perhaps everyone should be — if not forced, then at least strongly encouraged — to experience a day (or a month, or a century) of genius, after which they'd get to choose whether to keep it or return to their original self (or indeed, to experiment until they found the right sweet-spot for them). I predict that Smart-Me *would* in fact want to switch back.

(I say switch back, but perhaps Smart-Me would want to continue to exist *but also* conclude that he was a different person from Dumb-Me in such a way that he was morally obligated to resurrect Dumb-Me; so the answer here would be duplication. You'd turn into Smart-You, and then Smart-You would decide whether to recreate a copy of Dumb-You to coexist with him. I've heard worse high-concept science-fiction premises.)

This is so interesting. My intelligence is one of the traits I'd be most eager to self-modify, even ignoring instrumental benefits. Thinking is fun, and it's more fun on days when my brain is working better*. Moreover, when I'm thinking better it's like my internal world is vaster, richer, more interesting.

What am I? A collection of thoughts weaving together in various patterns? If there can be more of those threads, if they can dance more intricately, it feels like 'I' am simply 'more'. Or conversely, losing 90% of my ability to think feels a long way toward dying.

Sometimes though I've wondered if other qualities are like this too, and I just don't have enough of those to see the appeal! It would be interesting to see the correlation between traits and self-modification preferences.

(*I'm sure there are some confounds here. Maybe good thinking days are really good sleep or low stress days, so mood and whatever else are also improved, coloring my impressions.)

I would certainly respect individual people's preference for self-modifying to be smarter, much as I would respect people's preference for self-modifying to be great athletes, or supernaturally beautiful! Good for you, genuinely. I only mean to question the assumption that everyone would or should choose this (or worse, that it should be chosen for them).

But for what it's worth, as I outlined in other comments, aside from concerns about any kind of mental self-modification, my worry is that at some threshold, becoming smarter just means that you run out of things to think about. Adding up 2 + 2 to make 4 isn't very fun even for people who think "maths" is fun; isn't there some level at which all math problems would seem just as trivial, and the "fun of thinking" would be snuffed out? I truly think there's a "sweet spot" for intelligence, past which it becomes a curse where *at best* you'll just be hunting for puzzles worth your time to solve, which will bring you no more gratification than regular puzzles brought regular-smart you. That limit may not be the current average human IQ, mind you. There's probably *some* room overhead. But not infinitely so, I'm quite sure, and (slightly less sure but still pretty confident) not a *lot* of room, I wouldn't think.

IQ isn't mostly genetic unless you intentionally construct a statistical artifice to make it appear that way, in which case I'm perfectly reasonable to wonder why you'd go to the trouble.

author

My impression is that a very strong scientific consensus disagrees with you. Is this your impression of the situation and you are deliberately defying that consensus, or do you disagree that this consensus exists? Also, how closely have you studied this issue?

User was indefinitely suspended for this comment.

With this, I am actually curious: what is a strong scientific consensus?

To me it means nothing. We could justifiably say that there is a strong scientific consensus that, for example, vaccines don't cause autism. That is just shorthand for saying that those who study this issue will find strong evidence that we cannot blame vaccines for autism, without actually going through that evidence again and again. Sometimes it is hard to argue with overly sceptical or bad-faith people. It could be an invitation to those people to open any textbook about vaccines, go as deep as one wishes, and report back the results. It is the extremely high confidence (>99.9%) that a good-faith participant will reach the same conclusion.

But apart from such basic and fundamental issues, I don't see references to scientific consensus at all. Instead, it is always about concrete evidence, different strengths of evidence, and, going a step higher, references to meta-studies, conclusions of regulators (EMA, FDA), reputable groups such as Cochrane or NICE, etc.

For example, what is the scientific consensus about masks? I wouldn't be surprised if most people, including healthcare workers, would say that there is a strong scientific consensus that masks help to limit the spread of respiratory illnesses. The consensus may appear real.

And yet, from the point of view of evidence-based medicine, no strong evidence of mask effectiveness exists. Reviewing all the available evidence is hard; no individual researcher is able to do it alone. We need more resources to make a decision. Accordingly, per the Cochrane group, no strong evidence exists that masks are effective at protecting against respiratory illnesses. If there is any consensus, it doesn't matter, because it may be wrong.

Is IQ mostly genetically inherited? I don't know, but in practical life this question seems to be less relevant than the question about masks. Considering how bad psychology studies are in general, I don't think that the evidence about IQ inheritance is as good as the evidence about masks (which is not good). It is a bummer that we cannot discuss IQ inheritance with a higher level of confidence, but acknowledging that would be the only honest thing to do.

Referring to consensus in this case seems to me an attempt to avoid acknowledging that our state of knowledge is very incomplete. I know it is boring to say that we need more high-quality evidence, but that's how I see it.

You’re being silly. Yes, cognitive function exists, and yes, genetics influences cognitive function. You don’t have to deny basic concepts or start dropping woke terms. Those basic concepts do sometimes come in a bundle with racism, but more often they don’t.

Nazis who thought of Aryans as a master race rejected the use of IQ testing. Adolf Hitler banned IQ testing for being "Jewish." The groups that test higher than Germans include East Asians and Ashkenazi Jews. Basically all the IQ-realists in Scott's comment section probably accept this.

Aug 31, 2023 (edited)

The ones in Scott's comment section, sure. This is nowhere near the right end of the spectrum.

You have plenty of white supremacists who think those are alien races that need to be wiped out. Hang around frogtwitter sometime.

Are you going to let the rants of a few ne'er-do-wells on Twitter force you into refusing to accept scientifically derived knowledge?

Oh, I accept it. I have since 1994 when that book with the rainbow Gaussian on it came out. What I do with that is another question.

Aug 31, 2023 (edited)

Define 'plenty'.

I would argue that those forces are more than counterbalanced by far-left blank-slatists who dogmatically assume that all differential outcomes are conclusive proof of systematic racism and who are determined to destroy all institutions that demonstrate it. Those forces are much more socially destructive and they're currently winning.

Okay, and "anti-racists" on twitter often call for acts of homicide against white people - do I get to automatically dismiss all "anti-racists"?

"Well, you know, it's part of a system of systemic racism, so it doesn't mean the same thing, marginalized racialized people who are subalterns under systems of oppression don't have the same power against people who are part of the hegemonic group"... (goes and shoots a white gas station owner)

I'm skeptical of anyone who puts that in their Twitter at this point, but I think you are right that we would probably draw the line between, say, Steven Pinker and Andrew Anglin at each end in different places.

Sep 1, 2023 (edited)

Everybody believes in the concept of 'superior genetics' to some degree.

For instance, do you believe that it is preferable to be born with sight than born blind? Do you believe that it is preferable to be born with 4 limbs instead of 3?

If you do, then you believe in 'superior genetics'.

Sep 7, 2023 (edited)

>Everybody believes in the concept of 'superior genetics' to some degree.

You'd think, but considering the response from some in the disability rights community to Mr. Beast paying to restore vision to a bunch of people, I'm sure you could find more than a couple people who'd try to argue that there's no such thing as superior genetics. There's a certain contingent on the left that worships so strongly at the temple of Equality that they're willing to take that position.

Isn’t the woke position that race is a social construct not genetic?

So if IQ is genetic then it’s not race-related, right? I thought that’d be compatible with the woke position

IQ is the score on an intelligence test. These scores do exist. "Cognitive ability," "intelligence" or "cognitive function" are the underlying trait that is measured by intelligence tests. The g factor is a construct that arises from factor analysis because all cognitively demanding tests positively correlate. Cognitive function is important, I agree. We would be wise to measure important things and try to influence them. If we do not have an objective measure of cognitive function, we cannot evaluate interventions.
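
The "positive manifold" behind the g factor can be made concrete with a minimal simulation. This is a toy sketch, not real psychometric data: the five "tests" and their loadings are invented for illustration. If a single latent factor drives every test score, then every pairwise correlation comes out positive, and the first principal component of the correlation matrix loads positively on all tests, which is roughly the pattern factor analysts summarize as g.

```python
# Toy illustration of the positive manifold (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                          # hypothetical latent factor
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])  # assumed factor loadings
noise = rng.normal(size=(n, 5)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise          # five simulated "tests"

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)         # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                            # component with largest eigenvalue
pc1 = pc1 * np.sign(pc1.sum())                  # fix the arbitrary sign

print((corr[np.triu_indices(5, k=1)] > 0).all())  # every pairwise correlation positive
print((pc1 > 0).all())                            # PC1 loads positively on all tests
```

Of course, the simulation only shows that one latent factor *produces* a positive manifold; the empirical finding runs the other way, inferring a common factor from the observed all-positive correlations.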

We would be wise to measure them. But people who measure often mistake the capacity to quantify for the capacity to devise good responses to what they find. I see this constantly among hereditarians, whose policy prescriptions often seem to me to be at about monkey level. So of course the anti-IQ contingent is perpetually freaked out and wants to deny the existence of differential intelligence altogether.

Which hereditarians are providing monkey-level policy prescriptions, in your view? I think widespread research and subsidy of genetic enhancement technology is the way to go.

Mostly people who follow well-known hereditarians and then extrapolate simplistic policy from whatever they learn. Or Emil Kirkegaard, who is interesting but seems to careen off into extreme determinism here and there (not that I've cataloged examples).

As for genetic enhancement technology, I'm all for research, but except in cases of intellectually debilitating diseases that can or might be ameliorated, that's still science fiction at this point. There is very little known about the genetic basis of intelligence, or for that matter, personality, which is also quite heritable and has a big impact on how people do in the world (and probably has links to intelligence for that matter).

And think about it: if personality traits and IQ are linked developmentally, what will it mean to improve intellect?

Aug 31, 2023 (edited)

The accuracy of polygenic scores is mostly a function of the size of the datasets available. The issue is increasing the size. Even with our current knowledge, selection is possible. It's not science fiction at all. It happens now.

IQ and good things go together. Improving IQ will likely improve personality.
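
For what it's worth, the "accuracy is mostly a function of dataset size" claim can be illustrated with a toy model (every number here is invented; this is not a real GWAS): treat a polygenic score as a weighted sum of allele counts, where the weights are noisy estimates of the true per-variant effects and the estimation noise shrinks roughly like one over the square root of the training sample.

```python
# Toy sketch: polygenic-score accuracy vs. training-set size (assumed model).
import numpy as np

rng = np.random.default_rng(1)
m = 500                                  # number of variants in the toy score
true_effects = rng.normal(0, 1 / np.sqrt(m), size=m)

def score_accuracy(train_n, test_n=2000):
    """Correlation between the toy polygenic score and the trait it predicts."""
    # Estimation noise on each per-variant effect shrinks like 1/sqrt(train_n).
    estimated = true_effects + rng.normal(0, 1 / np.sqrt(train_n), size=m)
    genotypes = rng.binomial(2, 0.5, size=(test_n, m)).astype(float)
    trait = genotypes @ true_effects + rng.normal(size=test_n)  # trait = genetics + noise
    pgs = genotypes @ estimated
    return np.corrcoef(pgs, trait)[0, 1]

small, large = score_accuracy(1_000), score_accuracy(100_000)
print(small < large)  # larger training set -> more accurate score
```

This is only the statistical point about sample size; it says nothing about whether selecting on such a score is wise, which is the part being disputed downthread.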

You and I have absolutely no idea that IQ and "good things" necessarily go together, or that improving IQ will likely "improve personality."

This is exactly the sort of overconfident, biologically untutored, hubristic nonsense I'm talking about.

It's exactly this slide from IQ to the value of an individual that makes people skittish. There are many high-IQ monsters. Also, you don't need to be very bright to be a good citizen. This attempt at moral engineering through IQ will either die as a meme or develop into atrocity. I too see an increasing number of g-nicks go right up to the conclusion that if we gassed all the stupid people we'd live in paradise, and then their sole caveat is, "but I would never support that, it's immoral... but it would work."

Excuse my random tangent, but this got me thinking about Nassim Taleb's vendetta against IQ as a useful measurement of intelligence [1]. It made me realize that even if we were able to nail down g, what we'd be measuring would be the capability of the "hardware".

I don't know about Taleb, but I would propose that what matters more than somebody's "hardware" (their raw intellectual capacity) is the "software" they're running: the overall set of heuristics, philosophies, priors, and Type 2 processes (using first principles, etc.) that bear on overall success in life. Specifically, systemized winning [2]. I would not limit "winning" to just things like success in your field or wealth, or billionaires would be considered the best amongst us (as some people do think, to be fair).

However, when I think of somebody who's won at life, I'm thinking of somebody who's figured out emotional maturity, has their priorities right (doesn't sacrifice health for money, has strong relationships that make them happy), has meaning in their life, etc. I don't want to broaden this definition too far, because the raw g plays a big part in being able to actually _understand_ life enough to be able to make the right decisions to get the kind of future you'd want to have. I think I'm talking about *wisdom* vs just *intelligence*.

I'm sure people have thought/written about this plenty, and I'd love to read any prior art on something like this!

[1] https://medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39

[2] https://www.lesswrong.com/tag/rationality-is-systematized-winning

The things you consider wins actually correlate with IQ (or more accurately the g-factor).

But maybe you would be interested in Shane Frederick's work on the CRT (Cognitive Reflection Test). He published some papers suggesting that rationality (as measured by his test) isn't totally collinear with IQ (g). He showed that certain behaviors that we normally consider "good" or "rational" line up more with the CRT than with IQ (in other words, some people with high IQ still act stupidly).

Keith Stanovich is also worth reading. He wrote an entire book on the Rationality Quotient, and his latest book, _The Bias That Divides Us_, is also excellent.

> in other words, some people with high IQ still act stupidly

This perfectly matches my real-world experience, so it instantly lends some credence to Shane Frederick's work. I'll also check out Keith Stanovich's books, they sound right up my alley, thanks!

"The g factor is a construct that arises from factor analysis because all cognitively demanding tests positively correlate."

This is a well substantiated empirical finding.

Does anyone else find it counterintuitive, almost downright weird? Naively, I'd expect that, given a fixed volume within a skull, a neuron that assists language processing would be at the expense of a neuron that assists geometrical visualization.

The analogy in a computer system is that, e.g., a wire track that is used for a register is not available for an adder, and vice versa.

Are there any low-level measurements, e.g. of things like axon propagation speeds, that correlate with g and would be consistent with some people's neurons just working _better_???

The g factor is found in some animals. This might give us a hint about how it evolved.

The g factor does have correlates in biological variables like brain glucose metabolism, brain size, and alpha brain waves. I know Jensen discusses some in chapter 6 of The g Factor (https://emilkirkegaard.dk/en/wp-content/uploads/The-g-factor-the-science-of-mental-ability-Arthur-R.-Jensen.pdf). But this is an older book (1998). Richard Haier has a book about the neuroscience of intelligence, but I have not read it.

Many Thanks!

Re: cryptocurrency, it's worth remembering that any success it has had in poor countries is a direct result of Know Your Customer laws and other restrictions on the banking system imposed by the western world. In a world where anyone could simply avail themselves of a numbered account with an email address, the appeal of cryptocurrency would have been limited to some ideological purists.

That's not to defend the role that crypto plays, just to observe that it's really a question of regulation more than technology.

author
Aug 31, 2023 (edited)

Really Paxlovid has only caught on because of COVID. If there was no COVID, it would just be a totally worthless bundle of chemicals, and nobody would want it.

True, and I don't mean to suggest that the regulatory arbitrage of crypto isn't a benefit (indeed I think it is), but I think it should influence how we think about it from a public policy POV. Rather than make it easier to do crypto, if you think what it does is good, relax the regs that prevent other kinds of financial instruments from doing the same.

But I admit this may be a bit off-topic.

author

I think this only works if there is a single "we" who is both making the crypto and the financial regulations. Otherwise it's like saying "Rather than send Ukraine weapons, we should just have Russia manufacture fewer weapons, since the only point of Ukraine's weapons is to counter Russia's".

(sorry, I guess I'm in a confrontational sarcastic analogy mood tonight)

Yes, but that's what we do when we have policy discussions. We idealize and talk about what some imagined actor could do -- usually the political entity that we are part of. Without this, it would just get too complex to translate between what each individual should do and the positions political coalitions or policy makers should take, so we idealize.

I'm not sure how we could have meaningful policy discussions otherwise. Admittedly, I agree that sometimes vagueness about the idealization in play can cause disagreement but I think it's usually better to deal with it when it comes up.

author
Aug 31, 2023 (edited)

I disagree - I think of crypto as a bunch of libertarians trying to confound bad governments (somewhat, though not centrally, including the US). Any idealization that considers both a group and its targeted enemy to be the same actor, and tries to simplify, is going to simplify out everything useful (hence the Russia/Ukraine analogy).

But that's not really your audience here; they are already convinced, and less concerned about the practical impact of crypto on developing economies than its libertarian anti-government effects. For the people you are likely to be usefully affecting by raising the point, I do think that's a reasonable idealization to make.

Or, to put the point another way, this won't make any difference one way or another to the crypto-libertarian audience, but might have an effect in policy circles more interested in the harms that limitations on banking pose to the developing world.

But sure, I agree that to the extent you are talking to them a different idealization should be made and clarification called for. And I suspect this discussion has effectuated such a clarification.

Two things can be true - it's possible (and likely!) that both (1) cryptocurrency assists in confounding bad governments and (2) when we notice #1, we could try to make governments less bad in that area so as to assist the people who don't use cryptocurrency and lower the transaction costs for those who currently do.

Peter is explicit that he believes both (1) and (2), but I assume Scott does as well.

I don't follow your argument here, Scott. Peter is arguing that in an ideal world western banks/governments would relax their regulation to allow people in the developing world to internet bank with them, removing the valid use case you've argued for crypto. I can see how you'd counter that with "those regulations are there for a reason; remove them and criminals can launder money through conventional banks without having to go to all the trouble of using crypto"*, but the counter "but western banks/governments are enemies of the developing world and want them all to stay poor so that instead of buying the stuff we make, they feel pressured to sneak across our borders" seems ... obviously false?

*Though perhaps not without having to hand in your libertarian badge.

Incredible - but some of the US/EU sanctions are targeting weapon (or support equipment) components, thus making Russia manufacture fewer weapons...

The difference being that we didn't create COVID (well, hopefully...) and even had we, we certainly couldn't make it go away just with a decision to do so. It's a persistent feature of the physical world, now. The same can not be said of banking regulations. If crypto has become a refuge for poorer countries despite its overall inefficiencies because regular banking gatekeeps them via contrived rules, the most efficient world would be one with better rules and no need for crypto's idiosyncrasies.

Sure, but I think it's relevant to ask whether something is a tech victory, a policy victory, or both.

Paxlovid was both a tech victory and a policy victory; it solved a contingent problem we happened to be having at the time, and the problem could only be solved by that specific technological advancement or something very like it.

If people are going to call crypto 'a huge tech success story,' it seems fair to ask whether the success was actually due to the technology, or whether it was just a policy success story (in this case, of skirting regulations) which didn't actually need that technology to accomplish all the good it did.

This seems like a relevant distinction if we are asking how to improve the world, and which problems can be solved by better policy, which by better tech, and which need both.

Too often it feels like we just accept bad policy because we are waiting for a tech wizard to save us with a new invention that does an end-run around the problem. Obviously tech is amazing at solving a lot of problems we can't solve with policy alone, but there's a danger of getting complacent and not fixing the policy we can if we get too used to that narrative.

Incidentally, in the UK medical system Paxlovid is considered an unnecessary drug in most cases and used only in rare severe cases. Maybe it works but is not cost-effective.

The same thing probably applies to crypto, even if it has some benefits, the cost is too high everywhere except in very special cases.

Good analogy in general, though in this particular case I don't know if I trust the UK medical system to know what's actually cost-effective. This country seems to have run most of its pandemic strategy on a "stiff upper lip" mindset, and in fact that's a worryingly general principle in its healthcare.

Fewer metaphors, please. The UK pandemic strategy was idiotic but very similar to most western countries'. Only Sweden managed to think in cost-benefit terms, and got the best results of all comparable countries.

I think it would be more precise to say that the UK ran its 2020 pandemic response on sheer guesswork, and made a lot of wrong guesses. Its 2021 response, once it understood the dynamics of covid and lockdowns, was a series of increasingly well-judged moves. While that distinction may be irrelevant for many things (a lot of the damage was already done by that point), it is relevant to whether the UK medical system's judgement now is any good.

Eh, I mean, it never really transitioned to a "sustained effort" plan though. I maintain that, past the emergency, COVID still blatantly imposes a high enough cost (on productivity if nothing else, but on healthcare too) that it justifies a bit more than the essentially nothing that is being done about it now: no genetic monitoring, no surveillance of spread, no vaccination program for most people, no effort to increase air quality or use any kind of protection even in extremely vulnerable settings like cancer wards, or extremely important transmission hubs like schools. There were lots of wild swings, and the "we should take it on the chin" mindset cost us, if anything, longer and harsher lockdowns; but as soon as possible we transitioned to the current approach of essentially doing nothing at all, preferring the hidden cost to the upfront one. Which is often coated in a rhetoric of "well really, getting sick is actually good for you" (with, for example, a very abused notion of "immunity debt": yes, of course if you don't get sick with one specific virus you don't develop antibodies to that specific virus, but that doesn't mean you then get generally weak to all pathogens ever, nor does it mean that being infected for the sake of developing antibodies is necessarily a net beneficial trade-off).

I live in and am a natural-born citizen of the USA, and the single biggest reason I care about cryptocurrency is so that I can exchange money with people without following American KYC laws.

Wouldn't it be great if you could avoid all of those and not have the downsides of crypto?

Never gonna happen in this political climate.

I mean anyone that wants that clearly just wants to fund terrorists, right?

Expand full comment

Sure, but the point is, if the value of crypto is just that it's a libertarian endrun around obstructionism and the tech itself doesn't matter, then give the credit to libertarian endruns instead of calling it a tech success story.

Expand full comment
Sep 2, 2023·edited Sep 2, 2023

Would the libertarian endrun be possible without the tech? I think you need a system that is both trustworthy to the participants, but beyond the reach of the governments that want to apply these kind of checks.

Expand full comment

North Korea has stolen billions of dollars of cryptocurrency in order to fund its nuclear program, so whether you want to or not, you're funding terrorists anyway.

Expand full comment

Are you able to say why you object to complying with KYC?

Are the regulations stopping you from doing things the government has decided they don't want you to do or are you an innocent bystander who wasn't supposed to be affected but still is?

Expand full comment

I have very little interaction with these KYC regs/laws/whatever, but I can say they are a massive pain in the ass. I just want to send money to someone to get a thing. I want to send MY money to someone to get THEIR thing.

But now suddenly I have to get tax statements and bank statements and paystubs and other shit. I'm meticulous, and for some dumb reason I have all this stuff handy, and it's still a major pain in the ass.

I can't imagine what it's like for 99% of humanity that doesn't store every single document carefully cross-tagged and in a format available on every electronic device they touch. It must be easier to just put your money into a Crypto system and use it to buy and sell things.

Expand full comment
Sep 2, 2023·edited Sep 2, 2023

"Buying drugs overseas" could reasonably be described as "sending your money to someone to get a thing," which is probably why they're asking if OP was the intended target of the regulations or just collateral.

Expand full comment

Even the intended targets are collateral damage of the moralizing busybodies behind KYC.

Expand full comment

I got Galen and Nagel in 6, but not the standard two. Strange...

Expand full comment

I got Galen also. Not sure why.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Old book review entry on Galen is my guess.

Expand full comment

n=1 here's what my sleepy brain did for the unscramble.

CHURCH, CHRISTINA, RELIGION, PRIEST, *ohhh they are religion words* JESUS CHRIST, [... whut... ALGEN? NAGLE? GELAN? AN LEG?... I'm not spending that much time on a stupid challenge], GOD, HEAVEN, HILL no HELL, [that is not the way you spell PURGATORY!], PEARLY GATES.

Expand full comment

I wonder why so many of us struggled with ‘angel’. Is it because the target word begins with a vowel, and we were presented with a consonant at the front? Is it the soft g?

Expand full comment

On further reflection, it's eerie, actually. Not only did people not find ANGEL and ANGLE, but, like, one person found GLEAN.

Expand full comment
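As a side note on why these collide: ANGEL, ANGLE, and GLEAN are exact anagrams of one another, so the same scrambled prompt has at least three valid solutions. A minimal sketch in Python (the `signature` helper here is my own illustration, not anything from the post):

```python
# ANGEL, ANGLE, and GLEAN are built from the same five letters,
# so an unscramble of one is equally an unscramble of the others.
def signature(word):
    """Canonical anagram key: the word's letters in sorted order."""
    return "".join(sorted(word.upper()))

words = ["ANGEL", "ANGLE", "GLEAN"]
keys = {signature(w) for w in words}
print(keys)  # all three collapse to the single key: {'AEGLN'}
```

Sorting the letters is the standard trick for grouping anagrams: any two words with the same sorted-letter signature are anagrams of each other.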

That's me. I'm the one who found Glean. I guess I'm all alone here.

But I agree that I just assumed Scott did a typo on purgatory and wasn't worth my time to find any other solution, and that the catch was actually coming after the break at the bottom of the page. Whoops.

Expand full comment

You're not alone, I also found glean! And I was like "oh, okay, I guess it must be, like, Ruth gleaning Boaz's field, seems legit as being religion-related," lol. But I also thought it was purgatory.

Though playing Wordle has shown me that I'm naturally less inclined to be able to find words that begin with vowels.

Expand full comment

Glean is mentioned 22 times in the King James Bible, so it is a perfectly cromulent 'God word'.

Expand full comment

I have real difficulty doing these tests when I know that there's supposed to be some twist coming. My brain is 10% devoted to the test and 90% devoted to trying to figure out what the big twist is and why the next paragraph is going to call me an idiot if I choose the answer that seems right.

Expand full comment

I kept trying to make it either Legal or Eagle. I finally got Angle due to my binging of Wrestling with Wregret's old WWF Pay-Per-View vids, the last one of which featured Kurt Angle throwing Shane McMahon through a glass pane, except the engineers replaced the sugarpane with heavier glass so Shane just bounced off and landed on his head. And then Angle threw him again until he went through. (Then I realized "Angel" was an option.)

Totally caught "perogatory" though (there's no third r, shut up)

Expand full comment

I agree. I am terrible at word scrambles, and once I realized the religion theme I just went through quickly matching to religious words. Totally missed that there was no U in the purgatory one, I'm just pattern matching. Yet I too stumbled on angel: while the other ones took me less than a second to match to a religious word, the angel one threw me completely off my groove. Took a good five seconds to match to angel. What's up with that?

Expand full comment

Me too. I also got Christina instead of Christian.

Expand full comment

I got Galen and glean.

Expand full comment

I got "glean". The way my brain explained that to me was, "these are obviously religious terms, so there will be one outlier somewhere down the list... ah, there it is". Next one was clearly "god"; without "glean" it might have become "dog". Seeing "prerogatory" later on was far more confusing than it should have been.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I also got "Glean" - took me a while. Upon finding it I considered it to be an unusually obscure reference to the book of Ruth in view of the obvious religious content of the other words (Gleaning features very significantly in Ruth)

On "Prerogatory" I noticed there was no U but (falling victim to the priming effect) assumed that Scott had simply messed up rearranging "purgatory" and moved on. In retrospect I'm not beating myself up over it as I think it's reasonably likely that this is the first context in which I have seen that word written in literally my entire life.

Expand full comment

haha, I just commented above that I found glean and was like "cool, it's religion-related because of Ruth, seems legit." I didn't catch purgatory/prerogatory, at all, though.

Expand full comment

For some reason, I didn't explicitly think "those are religious examples," although I probably thought that unconsciously. I had a very hard time unscrambling "angel" and "hell." I didn't have a hard time unscrambling "purgatory," even though that's the wrong answer.

Expand full comment

I consciously recognized they were religious terms, and think that informed my thinking. The ask of "as fast as you can" led me to assume they were all religious, so I also got "purgatory".

Also, is "prerogatory" actually a word? Google gives me a definition for "prerogative" instead.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Likewise, I cannot recall having ever heard the word "prerogatory" and could not find a definition for it.

Expand full comment

Meanwhile, my #4 was SPRITE (and #6 GLEAN, and #10 hurt my head so I didn't bother but it clearly wasn't PURGATORY)

Expand full comment

My #4 would have been "sprite" in normal circumstances, but my brain went "stop, this works in multiple ways". I specifically looked for the pattern, consciously decided "oh, this is about religion", went with "priest", and then with "god", "angel" and "purgatory", because of course.

Expand full comment

I got "stuck" on 4 at first until I came up with "SPRITE" before thinking "wait, that can't be what they wanted" as the theme was that obvious by then. My Protestant upbringing at work I guess.

Expand full comment

I also got Sprite, thought to myself hang on, that doesn't fit with the list, looked harder, got priest and moved on. Not sure that this says 'priming' is what's going on.

Expand full comment

For me, I eventually "got" angel, but that's only because I was explicitly rejecting the others as not being "religious enough."

And that seems to be a flaw in the examples of priming I see (in my layman's capacity). Humans are social. We communicate in patterns. We use structure, similarity and subtext. Are we being "primed," or are we obeying the rules of the road instead of being the asshole in XKCD 169?

Expand full comment

Part of me is hoping "the asshole in XKCD 169" is the guy who cuts someone's arm off on a whim.

Expand full comment

It's assholes all the way down.

Expand full comment

And it's ok, he grew his hand back next strip.

Expand full comment

Alternatively, that's what priming *is*, just ingrained into habit at a low enough level that it doesn't (necessarily) make it to conscious awareness.

"Priming" would apply in a couple of ways for Scott's examples here - to take all the commentary about the unscrambling, "oh, they fit into religious themes" would be the obvious priming, and, for longer-term readers, "Scott has a habit of subverting things a bit in these examples, what's off in this one?" would be a less obvious but still relevant bit of priming.

There's probably also a motte-and-bailey dynamic here, where the impetus to get out to the less-defensible bailey derives from toxoplasma dynamics ("PETA is aggressively wrong, but that's why you've heard of them") - or, more charitably, the ones Scott described.

Expand full comment

I thought that priming was a change in behavior in unrelated things.

Expand full comment

I got angle, not "angel". And "Christina" for number 2, and didn't get anything for 10 (thought of "purgatory" but saw there's no "u").

Also it's not just a question of "priming" - I thought of religious words not just because I was "primed" but because I figured "they're probably going for a religious theme here". I don't think that's necessarily the same as "priming".

Expand full comment

That one took me forever and I eventually came up with glean. Also prerogatory definitely isn't a word, right?

Expand full comment

Galen was my first thought, but I didn't think it was a word. It took me a while to see angel.

Expand full comment

> Cryptocurrency has become an important part of poor countries’ financial infrastructure, so much so that I think it should objectively be considered a huge tech success story.

I'm sorry to keep beating this horse, but I don't really think this is true. I think crypto proponents *claim* this is true, because it matches their ideal theoretical use case ("what if we didn't have any sort of reliable central bank system and had to rely on decentralized finance for everything"), while also conveniently being ideologically appealing ("look, we're helping the global poor") and happening far enough away that it's hard for anyone in a first-world country to verify. But I haven't seen any non-crypto people actually talk about crypto succeeding in third-world countries, which makes this look exactly how it would look if it were empty stories being sold by a bunch of determined grifters who'd thrown a lot of money at the problem and had the inevitable occasional thing that looked like a success story, or a broad trend that could be made to look like a success by careful selection of metrics.

Once we remember that these forces exist in strength around crypto and adjust for them, I don't think the case that it's been helpful in the third world holds water.

Expand full comment
author
Aug 31, 2023·edited Aug 31, 2023Author

"I haven't seen any non-crypto people actually talk about crypto succeeding in third world countries"

Have you seen Devon Zuegel's article (https://devonzuegel.com/post/inside-argentina-s-currency-exchange-black-markets.html)? Do you think she's a crypto person? (I hadn't heard this, but I guess she has some of the risk factors)

Expand full comment

I hadn't seen this. I don't know this person, but updating mildly away from previous position. Heavy caveats though:

1) At best, the article claims this is occasionally used by some people. This seems to fall significantly short of "an important part of poor countries' financial infrastructure".

2) I'm not actually sure who this person is (or how you ran into her?) I'll take your word that she's not generally a crypto person (the article's vibe mostly matches that), but if e.g. you ran into this article through crypto people sharing it that leaves some heavy selection bias (if I run into more similar articles organically, I'll update away further).

3) Milder objection that feels like moving the goalposts: The article says that the actual most-useful forms of crypto in Argentina are USD-pegged currencies that aren't really decentralized, which doesn't really match the crypto dream - they basically just need a way to have unregulated digital USD (which does carry the risk of being shut down or depegged - to some degree the innovation of crypto over the bank of argentina here is "we haven't been around long enough for you to have seen us fail yet"). I don't really like this objection - it feels a bit too different from my original argument, and a crypto proponent might say "sure, they're not relying on decentralization, but the fact that the option to decentralize exists is what prevents the government from shutting this down in the first place".

Expand full comment

I am very very anti crypto. I think KYC laws are a good use of OECD government resources.

But if you're unaware that governments in some countries routinely inflate away people's savings, steal their dollar-denominated savings by declaring ludicrously incorrect exchange rates, and do other things that lead to a completely dysfunctional financial system, you haven't been paying much attention to the developing world.

Incredibly large fractions of people in Argentina, specifically, use crypto via multiple layers of techie-assisted proxies to evade capital controls. This has been an established story in the mainstream press for several years.

Evading capital controls is the one trick cryptocurrency is really really good at. I happen to think that’s bad in a rich country context, but it seems undeniable.

Expand full comment

"I am very very anti crypto. I think KYC laws are a good use of OECD government resources."

Geez, you sound like an incredibly insufferable neo-liberal Karen. Did you also go through the rotating social media avatars, from BLM, to you wearing a mask, to the Ukrainian flag? Do you report local yard sales to the IRS to make sure that taxes are paid on the proceeds?

Expand full comment

Keep in mind that it's probably crucial for social stability that most people have a strong bias towards the status quo. I know it feels good to act all superior because you "know more than the common man", but come on, all the recent bashing of normal people as "NPCs" because they don't hold countercultural opinions is getting ridiculous.

Expand full comment

Most normal people don't even know what KYC rules are, let alone passionately support them. It takes a certain type of insufferability to know something about our managerial bureaucracies and passionately support them.

Expand full comment

KYC laws are aimed at large-scale fraud and corruption and mostly do good. Since you like tangential ad-hominem attacks, here's one for you: you sound like a 14-year-old Elon Musk fanboy who just learned the word "neoliberal".

Expand full comment

Large banks and firms love KYC laws. They prevent competition from smaller firms.

Expand full comment

New laws are sold by appealing to great-sounding causes.

Sometimes those causes are the real motivation, but that's a coincidence.

Expand full comment

Wouldn't a neoliberal be against KYC laws?

Expand full comment

I think in this case, "neoliberal" has completely meaningless valence and is essentially a stand-in for "status quo-supporting member of the borg." It's almost more ad hominem than not.

Expand full comment

On this topic, I have been researching digital banking in Pakistan (the third-ranked country on the Chainalysis list). My impression is that it is not accurate to characterize crypto as an important part of Pakistan's financial infrastructure.

TLDR: Rural people are unaware of crypto; those who know about it don't trust it; even if they did know about it and trust it, they probably could not use it without a bank account; crypto is associated with scams and speculation; primary users seem to be young middle-class men; there are possible benefits in terms of protection against inflation or evading regulations, but I did not find any evidence that this type of use is widespread.

I ran a survey with ~1000 household heads in rural Punjab. Only 3% had heard of crypto (I asked about Binance or Bitcoin) and those who had rated it as less trustworthy than mobile banking platforms (the two largest in Pakistan are JazzCash and EasyPaisa). It would also be much more convenient for Pakistani households to use mobile banking since >90% of people have a mobile banking agent within 30 minutes of their household. Depositing/withdrawing for crypto would likely be much less convenient unless you already have a digital bank account (so not really banking the unbanked)

In qualitative surveys, I heard from one person who used Binance (or a scam version of Binance?). He had deposited 100-200 US dollars (a lot of money for people in this context) after being told by a friend about an unknown pair of "Americans" (quotes because I don't know if they are actually Americans) who knew about crypto. These "Americans" contacted him over WhatsApp and encouraged him to put money into an app that looked like Binance. Later, he was unable to withdraw money from the app. I am not sure if this was just a scam or a result of the State Bank of Pakistan releasing official guidance that the use of cryptocurrencies was unauthorized in Pakistan.

The other time I heard about crypto was from an executive at a fintech company in Pakistan. His company focused on middle-class Pakistanis, and he was frustrated that the government was not cracking down harder on crypto companies. In his telling, crypto companies (in particular Binance and OctaFX) market through TikTok influencers (such as Waqar Zaka) and podcasts that convince young middle-class men that they can make a lot of money through crypto (this makes me think that Pakistani users are actually similar to the stereotypical US user in terms of being very-online tech bros). He was particularly scathing about OctaFX, which he viewed as basically a scam rather than a legitimate service.

The last time I heard about crypto was talking to a well-off individual after the financial sanctions against Russia. He was interested in whether crypto could be used to get his money out of Pakistan if the US ever sanctioned Pakistan. This still seems like an important use case, but more as an emergency option than an important part of the financial infrastructure.

My last note is that inflation is a significant problem in Pakistan and many Pakistanis want to save in dollars. There are probably some Pakistanis (maybe the podcast/TikTok listeners) for whom it would be easier to open a crypto account to save in dollars than to open a foreign-currency savings account with a traditional bank (which requires a lot of documentation of your income etc.). I don't have first-hand experience with this, but I think the most optimistic case for crypto is that all of these middle-class young men are using it for this. Looking for safe investments is not really the vibe you get from people like Waqar Zaka, though (https://www.instagram.com/waqarzaka/?hl=en)

Expand full comment

This comment is really useful, thank you for posting it. Updating moderately away from the position Scott described in his post.

Expand full comment

Bitcoin may fluctuate wildly, but it may seem pretty stable compared to triple digit inflation. Under those circumstances, even if the price of Bitcoin drops in half while you own it you could still be able to buy a lot more stuff with it than if you had kept local currency.

Expand full comment

You can also just use USD, which is strictly better in most ways (or, in the extreme situation where you can't legally use it, you don't want to use cash, and you don't mind the technical risks and instabilities that come with crypto, you can use a stablecoin on a centralized exchange).

Expand full comment

Using USD as a unit of account seems to me to be far superior, but as a medium of exchange Bitcoin might have enough advantages to overcome its unit of account deficiencies. Mailing dollar bills has obvious latency and theft drawbacks while using a bank as an intermediary runs into KYC laws.

Expand full comment

From Scott's link above, it seems like the main use cases involve USDT on centralized exchanges (though it's unclear whether this is actually common or just something a few black-market people in Argentina advertise but nobody uses much). Which is basically a hack to be an unregulated bank, in practice.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

There's one third-world country where cryptocurrency has been very "successful" - cryptocurrency hacks are a major source of funding for North Korea's nuclear program. It's sad to see so many western libertarians accidentally donate their money to the worst of the worst.

Expand full comment

I am so glad you wrote this response. The Banana post was very bad and yet was so positively received.

It's easy to forget nowadays how good Kahneman (and Tversky!) really were as scientists. If you look at the Many Labs replication project results, you see not only that their research stands out by replicating, but that it often replicates with stronger effect sizes than the originals. Remember, they used to do evaluations for the Israeli army at a time when it was constantly engaged in wars against seemingly horrible odds - there was a lot of skin in the game...

And precisely because the effect sizes are so large, you can often replicate them in classroom contexts. Anchoring, for example (one of my favorites), is something you can see play out in basically any sort of negotiation.

Expand full comment

I sometimes suspect the very impressive studies by K&T may have in a sense doomed the social-priming researchers, or at least seduced them onto dangerous paths, by being so strong.

Upcoming researchers knew how strong the K&T results were and thus thought that anything they did that gave them similarly strong results (from unnoticed mistakes to p value fishing to outright fraud) must be moving them closer to the truth.

But explaining the replication crisis by showing that its underlying anthropology is false is unconvincing anyway. Other fields also have their own replication crises - maybe they all have false anthropologies too? It's more plausible that the incentives in academia are simply bad: those bad incentives apply to all of these fields and, combined with the lack of the strong truth-coupling that is mostly limited to the hard sciences, lead to bad research regardless of the anthropology.

I want to point to psychophysics as another field whose results stand strong - simply because they’re reasonably hard, with strong effect sizes that are fast to replicate.

Expand full comment

What bothers me most about the original post is that Kahneman proposes that we don't need to be automatic - that we have System 1 and System 2 thought, and that when we need to we can switch to System 2 and not be automatic, but it takes more energy so we tend not to. It's like Banana is tilting at windmills; not so much a strawman as seeing demons where they don't exist.

And I include anchoring strategy in all my negotiation trainings, because it's SO REAL.

(I also pulled off an endowment effect/loss aversion for a law school presentation where I gave half the class cookies and asked what they would sell them for, and what the other half would buy them for, and it replicated beautifully that people who had the cookies valued them more than the people who did not.)

Expand full comment
founding

When I was a Cog Psych undergrad at the end of the last century (in Tversky's dept, but not in his group), the value of these puzzles was in helping to test models of how the brain worked. That color illusion tells us that the visual cortex decides on color by comparing signals from adjacent neurons (approximately). The anchoring effect tells you that there's a part of the brain that is good at doing pairwise comparisons (which is bigger?), and it's faster than the part that measures a given dimension (how big is this?). What part of the brain is that, and how does it work?

As Adam Mastroianni is hammering (https://substack.com/inbox/post/136506668), they used to do Science, and this new thing isn't really science. Per your own reply below, I think the incentives in academia shifted badly.

Expand full comment

I also thought that exactly that Adam Mastroianni piece was quite good at explaining what the Literal Banana failed to explain.

Expand full comment

What’s the definition of PREROGATORY? Pretty sure it isn’t a word.

Expand full comment

I associate "prerogatory" with the Protestant Reformation, but that may just be because I constantly associate/confuse terms that begin similarly.

Along those lines, Naomi Klein has out a new book, "Doppelganger," in which she complains about all the people who confuse her with Naomi Wolf.

Expand full comment

Funny, because the first writer I associate with the name Naomi is Naomi Novik.

Expand full comment

It looks like a derivative of "prerogative", which means a special right granted by one's rank or position.

But I've never seen it before today, and putting it into Google does not return a definition, so like you I question whether it is a recognized word.

It is certainly VASTLY less common than "purgatory":

https://books.google.com/ngrams/graph?content=purgatory%2Cprerogatory&year_start=1800&year_end=2019&corpus=en-2019&smoothing=3

Expand full comment

It also seems to be a term in law.

But yeah, to this Papist, it smacks more of Calvinism as the kind of term they'd toss about 😁

Expand full comment

I recently read Trollope's Victorian novel "The Warden" about a nice Church of England cleric who is appointed to a really lucrative 500 year old sinecure, but is shocked when Dickens (who is renamed in the novel "Mr. Popular Sentiment") and Carlyle ("Dr. Pessimist Anticant," which would be a good name for Mencius Moldbug) start a campaign against his prerogatives. I'd assumed that "prerogatory" was in that novel, since the book is devoted to the clash between old prerogatives and modern reforms, but it is not.

So, yeah, "prerogatory" appears to be extremely rare.

Expand full comment

What does "Anticant" mean?

Expand full comment

Against cant.

"Cant" is a common Victorian word for trite, virtue-signalling, woke, hypocritical, Mrs Grundy speak.

Expand full comment

I was thinking "supererogatory", but it didn't have enough letters.

Expand full comment

I wonder if that wasn't supposed to be PEROGATORY?

Expand full comment

I can't find much on perogatory either, if it's a real word it's a pretty obscure one.

Expand full comment

It is, but it at least comes up in legal contexts on rare occasions.

Expand full comment

Does it? It doesn't appear to feature in legal dictionaries. I found an article titled "Perogatory And Pejorative Name Calling During An Opening Statement Is Going To Draw A Reversal" ( https://illinoiscaselaw.com/clecourses/prosecutor-made-the-statements-but-trial-judge-is-bench-slapped/ ), but as the body of the same article makes clear, the "perogatory" in the headline is a typo for "derogatory".

I also found two citations in which it is a mistake (at this level, hard to call it a "typo") for intended "prerogative"; one in which the mistake is attributed to a third party and called out ( https://blogs.chicagotribune.com/news_columnists_ezorn/2008/06/aldermanic-pero.html ) and one in which the mistake is made directly by the author ( https://twitter.com/Ed_Rants/status/1520166806756990977 ).

"Perogatory" is not a plausible word because it's malformed; there is no pe- prefix. Can you find a citation that actually meant to write the word "perogatory"... anywhere?

Expand full comment

(I'll respond to your other comment)

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Yes, I unscrambled that one as NOT_A_WORD.

I imagine that it might mean something like "being in favour of the existence of prerogatives", things that some people are allowed to do but not others; or perhaps it's a general adjective for the topic of prerogatives.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I'm going to admit that the fact that this person's name is Banana makes me significantly more likely to read their writing. Is that some sort of cognitive flaw?

The idea that people are "mindless sheep" has been around forever; like you say, this concept of enlightenment and awakening from a sort of lifelong fugue state is heavily used in cults. I find it interesting that lately the trend is to use video game logic to describe it (non-player characters), as if player characters have more agency, when really they're still following a prescribed path set forth by the devs, or at least conforming to the rules of the game. I'm also curious how actual sheep compare to more or less scripted NPCs. I certainly feel more empathy for sheep.

I think most of the time when we make choices and things turn out well for us, we don't tend to care about how conscious that choice was. Driving on autopilot from work to home every day tends to be fine, our brain being capable of quietly making the multitude of tiny choices from point A to point B. But if said thoughtless operation results in something horrible (an accident, or just missing a new detour) we easily blame ourselves for our unthinking motions. But it's human. Constant questioning and awareness of why you do everything you do sounds more like an anxiety disorder than wakefulness, doesn't it?

Expand full comment

Yeah, but when you get to Scott's final question, the whole issue becomes a lot more poignant and worth thinking about. It's no big deal if we go on autopilot while driving, and check the middle tip box for Uber Eats because the set-up nudges us to. But doesn't it give you pause to contemplate that if you had been born in the 1700s in the South, you would probably have thought slavery was no big deal?

Expand full comment

Honestly, no, it doesn't bother me. I'd be a totally different person. Who I am now is a person who holds the values I do because of my experiences etc. I think as a thought experiment it is interesting, but ultimately you're saying "yeah, but if you were born in a more racist society, wouldn't you be more racist"... yeah, but I wouldn't be *me*, so it isn't relevant to the mortal coil I do happen to inhabit. I'm glad people in the interim have fought for progress so that I don't consider blatant chattel slavery normal. That said, there are everyday things we are aware of that are not so much better, just more distant (human rights abuses at the far end of the supply chain etc.), and I think our relative nonchalance about them makes us a product of our times too. Who knows what future generations will think of us? Self-awareness doesn't edit that, does it?

Expand full comment

I notice you think the awful goings on we aren't much bothered by are now more distant (at the far end of the supply chain). So do you think we are now at least more clear-thinking and humane than people in slave-owning society were? I don't. Clearly, we are more clear-thinking and humane about slavery, as it was practiced then. But you and I got to "read the answers in the back of the book" before we took the is-slavery-basically-ok test, and passed by saying "hell, no." But it seems likely to me that in 400 years your views and mine will seem as blind and barbaric to future humanity as slave society's do to us now. Perhaps the stuff we're seen as grotesquely blind to won't have to do with things like slavery, but will be about other matters entirely.

Expand full comment

Alternatively, we aren't any more "enlightened" now than we were back then, it's just that the abolitionists won. So of course in 400 years our views will be seen as blind and barbaric - not because they *actually are* blind and barbaric, but because the moral system of whoever is in cultural power at that time will probably differ in some ways to ours.

Expand full comment

"Alternatively, we aren't any more "enlightened" now than we were back then, it's just that the abolitionists won."

Agreed. I tend to chalk all mores up to the shifting winds of politics.

( I also find it ironic that Lincoln's administration was responsible for conscripting men nationally for the first time in the USA. Seizing a law-abiding citizen from their life, forbidding them to quit, and forcing them into the line of fire where their ruler's enemies are actively trying to kill them. Kidnapping, slavery, murder. And this is part of USA law to the present day, albeit not currently active. And somehow USA culture accepts this as normal? )

More generally - I don't think it is reasonable to expect any large fraction of a society to reject the current mores of the society - still less to reject the particular portion of the current mores that will happen to be rejected by the winds of politics a century or more from now.

Expand full comment

I'm going to offer a perspective as someone who feels that I've solved the question from first principles.

The steelman for racism is selfishness. Stereotyping is a rational response in a low-information/low-trust environment. E.g. when you meet a lion, your first instinct is to run. Hypothetically you could become best friends. Are you willing to take that chance?

The Rawlsian Veil of Ignorance is an argument for cooperating in a staghunt. (The more I read about society, the more convinced I become that it's staghunts all the way down.) The current zeitgeist is to proclaim the moral superiority and logical superiority of cooperation. Personally, I like Pareto optima. It makes me feel warm and fuzzy. But I can also see the argument for defecting. Progressives, however, imagine that cooperation is the only equilibrium in the staghunt, and therefore denounce anyone who defects as necessarily malicious and bigoted, rather than risk-averse to being eaten by the metaphorical lion.
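The staghunt point can be made concrete with a toy payoff matrix (all payoffs hypothetical): unlike a prisoner's dilemma, a stag hunt has *two* pure-strategy equilibria, so defecting can be simple risk-aversion rather than malice.

```python
# Toy stag-hunt payoff matrix (hypothetical payoffs): each player picks
# "stag" (cooperate) or "hare" (defect). Hunting stag pays off only if
# both cooperate; hunting hare is safe but smaller.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def is_pure_nash(a, b):
    """True if neither player can gain by unilaterally deviating."""
    ua, ub = payoffs[(a, b)]
    best_a = max(payoffs[(x, b)][0] for x in ("stag", "hare"))
    best_b = max(payoffs[(a, y)][1] for y in ("stag", "hare"))
    return ua == best_a and ub == best_b

equilibria = [(a, b) for a in ("stag", "hare") for b in ("stag", "hare")
              if is_pure_nash(a, b)]
print(equilibria)  # both (stag, stag) and (hare, hare) are equilibria
```

Both mutual cooperation and mutual defection are self-reinforcing here, which is exactly why "cooperation is the only equilibrium" is the wrong model.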

So yeah, I'm personally not a fan of chattel slavery. That said, I also think contemporary complaints about racism are frequently (though not always) overhyped. The fact that the West conflates the two so often is an artifact of recent history, not some manifestation of the inexorable march of progress. Maybe you could make an argument that an over-correction is necessary to overcome past grievances, but that's another discussion.

More broadly, I think social mores are mostly determined by the environment. I'm not a historian by any means, so I'm prepared to be wrong on this. But I suspect that abolition may have had something to do with the West being WEIRD, à la Henrich. I also suspect that abolition is sticky because of industrialization. There's much less demand for slaves, just like there's much less demand for horse-drawn plows, because mechanical horses replaced them.

The fact that slavery is outlawed in the West is a luxury. The West may or may not be able to afford that luxury in the far future. If conditions improve, then future humanity will probably think us barbaric. If conditions deteriorate, then future humanity will probably think us too soft. Personally, I tend to lean toward the latter scenario. As Robin Hanson once said, "we live in the dreamtime".

Incidentally, Scott himself tried to tackle this question in Asches and Asches [0], where he notices the seeming paradox of "family good; racism bad. But wait, aren't family and race the same thing!?". Though ultimately, I don't think he was able to escape his prior reflective equilibrium.

[0] https://slatestarcodex.com/2014/06/03/asches-to-asches/

Expand full comment

Does it really bother you? I am pretty comfortable with moral relativism and the idea that if I were raised in another society my values would be totally different, like that feels incredibly obvious to me. It's not even really that interesting, because in this counterfactual "I" am not even me. (More interesting is if I were magically teleported there today, would I even hold on to my values or would I update them away for the sake of convenience)

Expand full comment

It's not that I feel guilty about the attitudes my 18th century self would have had. It's that knowing how different they would be is so at variance with the feeling I have about my present self. If I were to tell the story of how I came to have the tastes and beliefs that I have, it would be a story about realizing things, learning things, being inspired by things, contemplating things, admiring certain thinkers, rejecting certain thinkers because of flaws, etc etc. It would be a story in which I am quite agentic. And yet since most people, including me, share the beliefs of others in their eras and communities, it seems that the true story of how I came by my tastes and beliefs is that I am a sponge absorbing the water in my little tide pool.

Expand full comment

It's an argument for not getting too hung up on castigating people for historical beliefs that are no longer popular, but other than that, there isn't really anything worth worrying about. Maybe you could try to be a moral entrepreneur by figuring out what the next big thing will be (I can see several plausible candidates), but that's a hard life.

Expand full comment

Yes, I agree. I think it's also an argument for not castigating people of our era who have beliefs that are very unpopular in my circles, but are normal in theirs. That's a lot harder to swallow, though, right? Though in fact I believe it's valid.

Expand full comment

"Constant questioning and awareness of why you do everything" sounds fairly close to hypervigilance, a symptom of PTSD.

Expand full comment

And also of anxiety disorders. Speaking from experience, it's exhausting, stress-inducing, and usually a waste of time.

Expand full comment

I think the core issue is that many psychologists focus their attention on studying phenomena like biases or whatever, which are "secondary effects": artifacts of a constellation of mechanisms that approximate rationality. Really it would be better to study "primary" effects, such as rational choice, because the effect sizes are going to be much bigger, making the results more robust (less sample size needed for bigger effects) and more applicable (more uses available for bigger effects).

That said, small-scale stuff like "people would rather eat an apple than touch a hot stove" is probably too obvious. But I think this can be solved by looking at bigger-scale stuff that would be harder to observe in everyday life, for instance performing big factor analyses, mapping out measurement methods and developmental trajectories, collecting wisdom, and so on.

Expand full comment
author

I'm not sure what you mean.

My understanding is that economics has great models of rational choice, people mostly follow those models (eg if you ask someone whether they want $50 today or $100 today, they'll choose the $100), and that cognitive biases were interesting because they were deviations from these otherwise simple and broadly-applicable models.

Can you give an example of a real or hypothetical experiment in the program you're interested in?

Expand full comment

My opinion on these topics are heavily inspired by some of the banana's writings, particularly on indexicality. I think it is important to map out extremely concrete cases of real-world scenarios.

When it comes to rational decision theory, this might involve studying what utilities and probabilities people use when making real-world decisions. So for instance creating a model that can map the contents of food items to corresponding utilities (possibly taking greater context such as diet variety into consideration if necessary, which anecdotally it probably is). Or more economics-focused, mapping out different types of people (students, parents, elderly) and their different needs for houses.

Recently I've become interested in the development of interests. Here, one could also ask questions such as where the variance in interests comes from - e.g. do people mostly get into technical stuff because they think it's good and the future of society (no they don't, AFAICT; there's a correlation but the pattern of correlations is too weak to explain this), or because of some life events (via gaming maybe?) or for other reasons?

Less focused on choice and rationality, there's a bunch of discussion of the effects of culture, but most cultural measures seem overly abstract. I think one could easily design much better measures of culture and apply them to get a deeper understanding of the nature of culture. (Like factor-analyze stuff like soul food, black musicians, idk stuff like that, to measure black culture. Or to measure Danish culture, factor-analyze stuff like H.C. Andersen, Terkel i knibe, Pippi Langstrømpe (yes I know she's Swedish) and Matador.) Large factor analyses are generally great because they give you enormous effect sizes across enormously wide topics, and I think it's an absolute shame that they've been nearly entirely restricted to personality, intelligence and interests. (Yes, psychologists do tons of tiny factor analyses, but smaller and more homogeneous factor analyses are much less informative.) Blue tribe/grey tribe/red tribe might be a factor analysis topic close to your interests that I'd encourage (yes, your blog selects strongly for grey tribe, but you probably also get some of the others as readers - and even then, mapping out variation within the grey tribe would be super interesting).
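As a minimal sketch of the kind of factor analysis being proposed (entirely synthetic data, and a principal-axis style approximation via the correlation matrix rather than a full maximum-likelihood factor model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical "cultural familiarity" items: each person's answers load on
# one latent factor (say, familiarity with Danish culture), plus noise.
latent = rng.normal(size=n)
loadings_true = np.array([0.8, 0.7, 0.6, 0.5])
items = (np.outer(latent, loadings_true)
         + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings_true**2))

# One-factor extraction: first eigenvector of the correlation matrix,
# scaled by the square root of its eigenvalue, approximates the loadings.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
first = eigvecs[:, -1] * np.sqrt(eigvals[-1])
first = first * np.sign(first.sum())  # fix the sign indeterminacy

print(np.round(first, 2))  # recovered loadings, close to the true ones
```

In practice one would feed in real survey items rather than simulated ones, and use a dedicated factor-analysis routine, but the basic "many items, one big shared dimension" structure is the point.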

Expand full comment
author

Thanks, this is a great answer (probably; I'm going to have to think about it more)

Expand full comment

Nice 😁

Also, one thing I've become a fan of is qualitatively studying the residuals. For instance if someone scores higher or lower in some outcome than predicted by a model, ask them why. (Admittedly this is nearly impossible to do in a single-round survey, which many researchers might be restricted to?) They might not be able to explain why, but sometimes they are, and if so this can then be used to improve future models.
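A minimal sketch of the residual-inspection idea, with synthetic data and a hypothetical "unmodeled" subgroup baked in, so the largest residuals point at exactly the people you would want to ask why:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical data: an outcome mostly predicted by one covariate, but a few
# people deviate for an unmodeled reason (here, the last 5 observations).
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
y[-5:] += 4.0  # the "interesting" deviations a follow-up survey could probe

# Fit y = a*x + b by least squares and rank observations by |residual|.
A = np.column_stack([x, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef
worst = np.argsort(-np.abs(residuals))[:5]

print(sorted(worst))  # the respondents you'd follow up with qualitatively
```

The qualitative step (asking those five people what happened) is outside the code, of course; the model just tells you whom to ask.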

Expand full comment

Oh also, sometimes thinking carefully about measurement can teach you something important that you can use to create better measures of stuff. One example is, I came to believe that many common trauma measures (such as Adverse Childhood Experiences) might be bad because they don't take the long-tailedness into consideration. I haven't had time to write up the details, but I prompted ChatGPT to give an explanation because I wanted to use it for brainstorming trauma items: https://chat.openai.com/share/518f665f-04dd-4ddb-b271-ceaec6299253

Factor analysis is a standard method to design measures, and as you can see from my other comment, I *am* a big fan of factor analysis. However, some people create measures of a phenomenon by dumping a bunch of examples of that phenomenon into factor analysis (again, consider ACEs), and I think that's great when one thinks of the phenomenon as a "general factor", but sometimes such as for traumas that doesn't entirely apply, and in such cases I find that if I apply some thought to the measurement, I can design much better measures.

Expand full comment

"some people create measures of a phenomenon by dumping a bunch of examples of that phenomenon into factor analysis"

That sounds remarkably like p-hacking.

Expand full comment

Your ChatGPT conversation is really eye opening, wow.

Can you say a little more to explain this? "I came to believe that many common trauma measures (such as Adverse Childhood Experiences) might be bad because they don't take the long-tailedness into consideration." My brain is tired today and I can't put these together. I'm familiar with the ACE research, can you connect the dots for me?

Three random thoughts about trauma distribution and your questionnaire...

1. It's been my clinical experience in treating interpersonal trauma that people with significant trauma symptoms often have experienced multiple really bad traumas, rather than one really bad one, and that there's a reason for that. For instance, the woman who was beaten repeatedly by her father ends up in relationships with abusive men; the impact of earlier trauma has made her unable to hold steady employment which exposes her to various traumatic effects of poverty (exposure to crime and community violence). Early interpersonal trauma also tends to make it so people don't take great care of themselves physically later -- they self-neglect -- which leads to a variety of health problems that come of not doing preventative care, and that can lead to serious illness which can lead to being traumatized by medical intervention. Anyway, you get the picture. I don't have a dataset to look at to tell me whether this is true more generally. It's what I've observed in my work.

2. Dissociation exists on a spectrum and is an extremely common and signal consequence of trauma. I think it's an important measure of severity. I think there are pretty good self-assessments for degree of dissociation.

3. I'm thinking you know but I'll mention that people who wind up with diagnoses of PTSD as adults as a result of minor to severe experiences are much more likely to have experienced significant trauma as children. Prior trauma is a huge predictor of being more severely impacted by trauma later in life. Research showed that which soldiers wind up with PTSD is more readily predicted from prior life adversity than from the severity of the wartime trauma they experienced. It seems to me this complicates the question of severity as it relates to any one instance of trauma. It's cumulative, or really an accumulating load, such that a minor event can then result in catastrophic collapse.

Expand full comment

I definitely agree that the severity of traumatic events is not the sole factor in determining trauma severity. I don't have a good idea of what the other factors are, but I imagine any of the following could matter:

1. The robustness with which one knows how to process the event and react in the future.

2. The community support one has to recover and react.

3. The material buffer and safety, e.g. income and health.

4. The neurological robustness against traumas.

5. Other things that I am too ignorant to think of.

I can buy that e.g. for point 1, certain kinds of abuse by parents could lead to self-neglect, so in that sense I could agree with a model where trauma accumulates in a loop.

My issue with ACEs is more like, do we really expect that factors 1-5 are *mostly* determined by the overall level of trauma one has experienced? No room for culture/ideology, genetic biological predispositions, risky strategies that work fine in some circumstances but fail catastrophically in others, and unknown unknowns?

Or phrasing it another way: What happens to ACEs-style measures if there is some other cause (e.g. poverty; of course by itself, poverty doesn't contribute *that* much variance, but what I mean is there may be a pool of variance contributed by an unknown-to-me number of factors) which contributes to ACEs? Well, that cause might contribute to many other areas in your life too, thereby confounding ACEs with those other areas. But this wouldn't reflect the ACEs causing those bad outcomes, but instead just reflect them depending on multiple things.

This leads most naturally to an answer to question 3. Even if we suppose that most of the variance in trauma-susceptibility is not other traumas, then other traumas would still be correlated simply due to reflecting non-traumagenic trauma-susceptibility, so you'd expect to see a correlation between later PTSD and earlier traumas based on that. Depending on the specifics of your measurement, this correlation could be expected to be quite strong; e.g. if you average a bunch of traumas, that reduces the influence of non-trauma-susceptibility factors.
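The confounding story can be simulated directly (all coefficients hypothetical): a latent susceptibility drives both earlier trauma reports and later PTSD, the traumas themselves cause nothing, yet an averaged trauma score still correlates with PTSD, and more strongly than any single item does:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Latent susceptibility drives both earlier trauma reports and later PTSD.
# Crucially, the trauma items have NO causal effect on PTSD in this model.
susceptibility = rng.normal(size=n)
k = 10  # number of trauma items
traumas = 0.5 * susceptibility[:, None] + rng.normal(size=(n, k))
ptsd = 0.5 * susceptibility + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_single = corr(traumas[:, 0], ptsd)       # one trauma item vs PTSD
r_avg = corr(traumas.mean(axis=1), ptsd)   # averaged trauma score vs PTSD

print(round(r_single, 2), round(r_avg, 2))  # averaging strengthens the link
```

Averaging washes out the item-specific noise and leaves mostly the shared susceptibility, so the composite score tracks PTSD better, exactly as predicted, without any trauma-to-PTSD causation in the simulation at all.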

(Some people try to use multiple regression to fix this. I don't think that works, for reasons that have been extensively discussed by other people. Other people try to use twin studies to fix it. I suspect that also doesn't work, which I didn't properly appreciate until about a year ago; I've attempted to write it up twice, in the "Know your Xc!" section of https://tailcalled.substack.com/p/many-methods-of-causal-inference and the "… or not" section of https://tailcalled.substack.com/p/if-everything-is-genetic-then-nothing . I suspect it is not fixable with pure statistics, except MAYBE with panel data.)

As for point 1: I think one major factor is the distinction between a clinical setting and a statistical setting? I imagine that clinicians primarily treat the 98th percentile+ most traumatized people, who simply are so rare that they don't make much of a dent in the statistics.

A related answer to point 1: I argued that ACEs maybe measure something like trauma-susceptibility, or perhaps problem-susceptibility. (Or maybe even "familial problem-susceptibility". Idk, my original statement was "might be bad" as trauma measures, not "definitely absolutely are bad".) This seems potentially very useful for clinicians, in the sense that it might be the people who are most susceptible to problems that they can help the most? My main issue here is just with taking the scale at face value, and assuming that its correlations are solely due to the traumas it assesses, rather than to various vulnerabilities and sensitivities (that I don't have that deep insight on, and which you might have more insight on).

As for dissociation in point 2, I sort of vacillate between how important I think dissociation is; sometimes I think it is pretty important and sometimes I think it is not so important. Part of my uncertainty is in what phenomena researchers use it to refer to, since there are some things that could be called dissociation which I am pretty sure are important, but which I (with low confidence) don't perceive to be the central instances of dissociation.

Expand full comment

Oh I guess one thing that I didn't emphasize enough in my previous comment is that one needs to be careful to avoid the phenomenon where one gets absurdly strong relationships for stuff that doesn't matter in practice. I don't think that's inherently in tension with "chase down strong effect sizes", I think that's more a question of what subject matter you choose to investigate. My line of comments with Joshua Becker, perhaps especially my final comment, might illustrate how I think one can address that.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

"Recently I've become interested in the development of interests. Here, one could also ask questions such as where the variance in interests comes from - e.g. do people mostly get into technical stuff because they think it's good and the future of society (no they don't, AFAICT; there's a correlation but the pattern of correlations is too weak to explain this), or because of some life events (via gaming maybe?) or for other reasons?" But it seems to me that asking a question like this will still land you in the muck, i.e., looking at what you call secondary effects -- things that can't be explained by sensible, obvious factors such as the subject's belief that a certain career is good for society, or because as a teen they liked gaming & associate computers with enjoyable challenges. Especially when it comes to career, my sense from listening to a lot of life stories is that many people end up in their careers sort of by happenstance. And for those who decided in advance what career they wanted, the explanation they can give for the decision isn't truly explanatory: "Why am I a biologist? Well, I played outside a lot, and used to climb trees and collect birds' nests . . ."

Expand full comment

Maybe you are right?

I definitely agree that people don't seem good at explaining their interests, as I've tried asking about that with very little success. I have some ideas for how to improve though. I plan to write it up once I get the final results.

I think it's worth at least trying. Even if the finding is a null no matter how I do it, null findings are also themselves something one can learn from.

I'm not sure career happenstance makes it infeasible. I definitely agree that I observe tons of happenstance in people's jobs, but that happenstance also shows up as deviations between their jobs and their interests. So often people have one thing as their job but another thing as what they are interested in. The interest itself can in some ways be pretty robustly correlated with stuff even if the job isn't.

Expand full comment

I have read this comment a few times and do not understand it. I'm having a hard time figuring out in what way these "secondary effects" are falling short, and what you want to do the job. What's missing for you and what are you trying to accomplish?

I'm wondering if it's possibly related to one of two things.

One is that every model explains higher/macro-level phenomena in terms of lower/micro-level mechanisms. These lower/micro-level mechanisms are themselves higher/macro-level things to be explained. I study group behavior with models based on premises of individual behavior; psychologists study individual behavior with premises of cognitive models. Do you want us to look lower?

Another is that every model is simple, and gets gradually more complex. You talk below about examining residuals to refine models. One way of interpreting (social) science is as a process of explaining variance: we gradually chip away at it, with more and more nuanced models. Do you want our models to be more detailed?

Are either of these perspectives relevant? I ask because I've been struggling a lot with the philosophy of science lately and I'm very eager to understand what alternative possibilities may exist compared to the standard paradigm.

Expand full comment

This is an interesting question, and your own research subject is an interesting topic. I will read some of your research and get back to you with a more detailed answer with suggestions later.

But for now, prior to reading your research, my thought isn't that you should look lower. Looking lower tends to give you more universal principles, which is sometimes useful, but it also has the disadvantage that one cannot take advantage of the specifics of the situation.

One of the most important things IMO is to have an idea of what you are trying to achieve, as then you can backchain from there to figure out useful methods. And if I understand, the billion dollar question in your field is something like "what methods can groups of people generally use to make better decisions?". Is that correctly understood?

My immediate thought is that this is an especially tough question because you are working at high scale and with strong constraints. Like groups of people making real-world decisions is a "big" and "detailed" thing, which makes it expensive to study directly, and you want to have findings that work for many different kinds of teams solving many different kinds of problems, and which the teams are not already using themselves.

One option is to be less ambitious by picking a narrower target. For instance one could study ways for software engineering teams to make better decisions about what bugs to prioritize. In that case, one could try different variants of quick heuristics that they often use, and compare the quality of those quick heuristics to what a more in-depth evaluation would get you.

Of course being less ambitious is boring and less useful, so I don't necessarily recommend this. But if you want to keep the ambition of having findings that work for many different kinds of teams solving many different kinds of problems, you need to face the fact that it's a genuinely tough problem that probably doesn't budge hugely with a handful of people experimenting on toy tasks, but instead requires lots of people working hard at real tasks?

Fortunately, there *are* lots of people working hard at real problems, like teams of people working in companies. If you trust them to be able to identify their own issues and debug them in ways that work for them, then maybe you could buy information from a bunch of such teams in a bunch of such companies, and try to find common patterns of issues that many teams have and solutions that many teams find to work for them. This would be a form of "wisdom distillation" which might be quite useful?

Also, once you have a bunch of such distilled wisdom, there's the question of how to apply it. You could give it/sell it back to companies, but it might be that they apply it wrong in ways that cause problems. This too is something that can be experimented with, e.g. you can investigate how it goes to apply it and whether there are exceptions or nuances that need to be added.

It's possible I'm totally misunderstanding the goal of your field, in which case what I said might not be so useful to you. 😅 But then with a correction I can give alternative goals. And as mentioned I haven't had time to read your studies in detail yet, so this doesn't directly comment on those studies.

Expand full comment

Or another interpretation, if your goal is to understand how social influences affect people's beliefs about things, then a starting point might be to map out what kinds of social influences people have in their life. This would give you a starting point for making generalizations and distinctions. Once you have a taxonomy of social influences, you can sample from each of the kinds of social influences and investigate the accuracy/helpfulness of those samples, and thereby discover if they are generally accurate/helpful and if there are some exceptions that are not accurate/helpful, or some limitations to the cases that may be helpful.

I should maybe also add, my suggestions so far have been mathematically quite simple, and more focused on qualitative matters or scale. This isn't because I am opposed to fancy math like agent-based models, I just haven't immediately had any use for it in the suggestions I gave. It may be that more investigation would give reasons to use fancier math.

Expand full comment

thank you for this thoughtful reply! i don't think it's helpful to use my own research here as a touchstone, we already have the great example from the OP to focus discussion. i was trying to understand what your comment means in the context of studying rationality and biases. (i can port the basic philosophy of science to my own work later.)

so.. in the context of things like priming and anchoring... is your interest in looking lower? or is it about using more detailed models? both/neither?

i.e., what does it mean---and what is the implication---that anchoring is a "secondary effect" of something? what is the primary thing?

(it's all a secondary "effect" of neurochemistry, right? all a secondary effect of particle physics? sorry if i'm completely misunderstanding, please do correct me if i'm way off base)

Expand full comment

The goal of rationality is to learn how to infer correct information, make good decisions and communicate well.

My impression is that the primary things that interfere with this are commons problems (e.g. most of the value in figuring out information comes from the fact that the information can be applied elsewhere, so people are not sufficiently motivated by themselves to find good information), conflicts (e.g. "politics is the mindkiller", or Ben Hoffman/Michael Vassar type stuff), and lack of robust real-world applicable theory for inference (probably downstream of the previous two problems). Oh and probably also g factor and maybe also some other things.

So my take would be that if you want to study rationality, focusing on e.g. priming is wrong because priming is too small of a bias to be worth worrying about. Basically with primary/secondary, I'm referring to their degree of influence over the things we care about, not about level of reductionism.

Expand full comment

Sociology, especially Bourdieu-inspired sociology, tends to use a lot of factor analysis to examine taste and cultural practices and how they are related to the characteristics of individuals. A lot of it is in French for the obvious reason that Bourdieu was French, but I'm sure one can find quite a few books or articles in English. You should check it out.

Expand full comment

Neat, I'll try to look into that! If you or anyone else knows of some good pointers/recommendations then I would love if you could suggest some.

Expand full comment

As far as I remember, the two most influential sociologists in that vein of very quantitative cultural sociology are Bourdieu and Peterson. The most famous book by Bourdieu on this subject is "Distinction," and it is one of the most famous books in sociology in general. That being said, in many ways, it has become dated and is quite criticized even by his spiritual successors. It is also long and quite hard to read.

https://en.wikipedia.org/wiki/Distinction_(book)

I'm not sure that Peterson has ever used factor analysis (unlike Bourdieu). Still, here is an article on the subject:

https://books.google.fr/books?hl=fr&lr=&id=VHMhOCNyQ-kC&oi=fnd&pg=PA152&dq=peterson+1992+simkus&ots=hI_EKaHnPE&sig=UYUtjk-THn7OC3n64ZR4b1fNzcA&safe=active&redir_esc=y#v=onepage&q=peterson%201992%20simkus&f=false

But maybe a good first entry could be this article by Coulangeon, which uses factor analysis to explore musical taste and discusses the results compared with Bourdieu's hypotheses and Peterson's hypotheses.

https://www.cairn.info/revue-francaise-de-sociologie-1-2005-5-page-123.htm

There is probably a lot of other, more recent work done with this tool on taste and cultural practices. But this is what has come to mind quickly off the top of my head.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

'I think one could easily design much better measures of culture and apply them to get a deeper understanding of the nature of culture.'

There's an econ nobel waiting for you if you can do that. Which is another way of saying lots of very smart people have tried and not gotten very far, so it's unlikely to be as easy as you think.

Expand full comment

Maybe. When I think econ and culture, I think stuff like the factors in the World Values Survey or various other international indexes. These have the challenge that they are not just trying to measure whether a person has a specific culture or their relative placement within a pair of cultures, but instead trying to measure a simplified model of all countries' cultures.

Expand full comment

Well, it is true that studying interactions is harder than studying main effects because of superlinear scaling of required sample size.
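The superlinear-scaling point can be illustrated with a quick OLS simulation (hypothetical effect sizes): in a balanced 2x2 design with ±0.5 coding, the interaction predictor has a quarter the variance of a main-effect predictor, so its standard error is about twice as large - matching its precision takes roughly 4x the sample, and ~16x if interactions are also typically half the size of main effects.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000

# Balanced 2x2 design, effect-coded +/-0.5 (all numbers hypothetical).
x1 = rng.choice([-0.5, 0.5], size=n)
x2 = rng.choice([-0.5, 0.5], size=n)
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
y = 0.3 * x1 + 0.3 * x2 + 0.3 * (x1 * x2) + rng.normal(size=n)

# OLS coefficients and standard errors from sigma^2 * (X'X)^{-1}.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
sigma2 = resid @ resid / (n - 4)
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

print(round(se[3] / se[1], 2))  # interaction SE vs main-effect SE
```

Since the product x1*x2 takes values ±0.25, it has 1/4 the variance of x1, and the standard-error ratio comes out near 2, which is where the "4x (or 16x) the sample size" rule of thumb comes from.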

Expand full comment

Also, John Wentworth's "You Are Not Measuring What You Think You Are Measuring" (solution: measure lots of things) is great.

https://www.lesswrong.com/posts/9kNxhKWvixtKW5anS/you-are-not-measuring-what-you-think-you-are-measuring

Expand full comment

> Cryptocurrency has become an important part of poor countries’ financial infrastructure.

Can someone explain why they can't just use dollars? Is there no online service that allows people from poor countries to open accounts? Sounds pretty trivial to make.

(Yes, that would make it centralized, but most people own crypto through centralized platforms anyway.)

Expand full comment

An online service like that would be a de facto bank, which is illegal to run in the US without complying with a bunch of KYC laws (I'm not sure how it would work if you just based your bank in the Bahamas or something, but presumably the US government would eventually come after you on the basis of "anything to do with dollars is our jurisdiction"). Crypto gets around this problem by having so many blatantly illegal things that allowing for banking is too far down the list for governments to bother with.

(That said, the forms of crypto that are used at all in Argentina - which I think is somewhat overstated - basically are this, with the crypto thrown on as a fig leaf: https://devonzuegel.com/post/inside-argentina-s-currency-exchange-black-markets.html

Also, note that they do still primarily use dollars, they just have to use paper currency.)

Expand full comment

Gotta say, if this is the real reason, it doesn't sound too much like a tech success story to me.

Expand full comment

I don't know, without the crypto tech you couldn't get around this completely regulation based hurdle. So even if the problem was manufactured (by regulations) tech that allows people to get around it is still a success, in much the same way that a steel toed boot would be a tech success for a man who insists on whacking his toes with a baseball bat every morning.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

It's not the tech that enabled it though, it's the willingness to ignore the law. Especially if you're talking about a centralized system like Tether. Anyone who wants to could have done something functionally identical to Tether 20 years ago.

Expand full comment

I don't know: I think the tech may have made it easier to ignore the law, by removing the need for institutional gatekeepers when transferring money from place to place.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

But it doesn't actually "transfer money from place to place". The key part is the people in various places who are willing to exchange dollars for tether or vice versa. But you could just as easily do that with LaprasBux, there's no *tech* required that didn't exist 20 years ago. Heck, you could just use, say, imaginary shares in giant Rai stones, as long as there are people in Argentina who are willing to exchange them for dollars and back.

Satoshi's key innovation was POW to prevent double spends (aka Dispute Resolution Via Competitive Coal Burning) plus an economic system to fund said coal burning, but centralized "cryptos" like Tether don't rely on that.
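For readers unfamiliar with the mechanism, here's a toy sketch of that innovation (the transaction string and difficulty parameter are made up for illustration, not Bitcoin's actual block format): finding a valid nonce requires brute-force hashing - the "coal burning" - while checking one takes a single cheap hash.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce costs ~16**difficulty hash attempts on average...
nonce = mine("Alice pays Bob 5 BTC", difficulty=4)
# ...but anyone can verify the work with a single hash:
assert hashlib.sha256(f"Alice pays Bob 5 BTC{nonce}".encode()).hexdigest().startswith("0000")
```

The asymmetry (expensive to produce, trivial to verify) is what lets strangers agree on a transaction history without a central ledger-keeper - which, as the comment notes, centralized tokens like Tether simply don't use.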

Expand full comment

There are (at least) two tricks fiscally spiraling downward governments use that make “using dollars” very hard.

1. Capital controls. You’re not allowed to bring physical dollars in or out of the country in reasonable quantities

2. Bank controls. You may be allowed to open dollar-denominated local bank accounts. But… as has happened in many countries, the local government can decide “sorry, your dollar-denominated bank savings are being converted to local currency as of today’s totally-not-made-up exchange rate”.

“Stable” cryptocurrencies evade stable countries’ KYC requirements, dramatically increasing the money supply for those attempting to use the dollar as a stable unit of account in places where that’s desired but capital controls prevent it.

Basically, if the local government is itself a grift, the cryptocurrency grifters may appear to be less bad.

Expand full comment

I think automaticity has to be evolutionarily selected for.

In our ancestral environment, it wouldn't make sense for our brains to weight every piece of sense data equally. Our eyes have to jump straight to the tiger. Our ears have to be attuned to things like screams.

Thinking about things is expensive. It paralyzes you. Sure, often it's the best thing to do—particularly as the threat of instant death recedes from society. But I'm learning not to look down on people who don't plan everything out. Honestly, for ANY mistake humans commonly make, there's probably a big evolutionary pressure holding them at that point.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

The really disturbing automaticity isn't making non-rational choices about little stuff because that's quick and efficient, it's the thing Scott brought up last. We are evolutionarily selected to think like those around us. So if those around us think slavery is no big deal, we probably will think the same.

Expand full comment

While I agree, I'd phrase it the opposite way: automaticity is the default of all animals, and humans have evolved another capability that's sometimes useful. Thus the entire genre of people quitting this complicated civilization that's built using thinking, and going back to "nature", experienced as a more automatic existence.

Expand full comment

This is Kahnemann's argument--we CAN break free of our heuristics, but it's super energy-intensive and therefore is only saved for when we can afford to do it and it makes sense.

Expand full comment

In fact, we see this even in technology nowadays. LLMs are too slow and expensive to run, so you want to use weaker models for the easy queries. VR displays are too power hungry, so you want to predict which parts of the scene the user will look at and only render those. Etc.

Expand full comment

"default tip options of 15/20/25%" - as a European, that prompt will nudge me into becoming very angry

Expand full comment

This is a very confusing perspective to me. If you don't like tipping, you should be ecstatic that most other people tip so much and so often! After all, they subsidize part of the cost of the product you are buying for everyone else; since labor costs are a part of production, lower wages (because customers are supplying a significant part of the service workers' income) mean a lower sticker price, so just decline to tip and enjoy paying less for the product than you otherwise would!

I suppose going against a social norm like tipping in a foreign culture could be somewhat uncomfortable at first, but breaking out of a comfort zone is not only generally a good idea, but also particularly advisable when you see a very simple opportunity to break out of a bad equilibrium and also take advantage of an opportunity.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

As an American recently moved to Europe, it makes me nervous that tipping culture seems to be spreading over here. Tipping is a way of moving risk from the business to the employee, because it is now the employee's wages which are variable instead of the business's income. It's also a way of shifting the moral responsibility from the business to the consumer, because it's now the consumer's fault and not the employer's if the employee doesn't get paid much. So while it's certainly a complex system, it seems that tipping mostly benefits the business while tricking both customers and employees into thinking that they get more agency over their circumstances.

There might be a counterargument that tipping actually brings in more revenue as compared with simply raising prices, e.g. because of tax evasion or some behavioral effect, but I doubt it would balance out the downsides.

Because: I'm not breaking society out of equilibrium by refusing to tip. Just helping myself and harming the employee with no impact on the business. My (generally private) tipping behavior does not impact other people's tipping behavior. (Even if height is contagious.)

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I appreciate your comment.

I have no disagreements with the points about the shifted risk and about how private behavior doesn't impact the behavior of others. Nevertheless, I'm still confused by the framing of "Just helping myself and harming the employee" in the last paragraph.

Do you regularly go and donate money to service workers when you're not buying things? I would wager the answer is no, even though it's exactly analogous to tipping: the money is in your pockets at time t and you are thinking about whether to give it to the worker at time t + 1. Surely, the worker benefits the same from the money whether you give it to them when buying something or just at random, and you are hurt just the same by the loss of that money. So are you "just helping yourself and harming the employee" then?

Perhaps you'd say yes to that as well, but that would eliminate any usefulness or insight you could glean from this perspective; after all, if everyone is always doing X, then saying that someone is doing X at a certain point carries no meaning anymore and is just a waste of oxygen. Such redefinitions would actually be bad, as words are mere containers of meaning, so the original meaning is what matters, not a redefining of it. The statement being discussed carries particularized, rather than generalized, moral condemnation; shifting it from the former to the latter loses meaning.

So there would have to be a relevant difference in your mind between the two scenarios; not just a meaningless distinction such as whether you give them the money in their workplace or somewhere else (which is not the root cause of any ethical distinction for our purposes, for why should it be?), but rather something actually pertinent.

And it seems the only such distinction is that there is a social norm, or expectation, that one "should" tip when they get a service or product. But, of course, there is no free-wheeling, universal "should"; it is all contingent upon the particular characteristics of the situation you find yourself in.

I take it from your comment (as an implication), along with the fact that tipping is so much less prevalent outside the US (and Canada), that you would not find a general "obligation" to tip were it not for the social norms embedded into the North American system. You are "supposed" to tip, so you tip.

But, of course, as this particular example so clearly illustrates, the fact that there is a social norm in favor of X provides, at best, really weak evidence that X is what one should do upon a careful analysis. As we can see, social equilibria are very often inadequate, and contributing to their stability merely prolongs them while not taking advantage of more advantageous and thoughtful opportunities (such as not tipping and instead using that money to donate to people you believe are more in need than the randomly-selected service worker in front of you, who is extremely unlikely to be the one person in the world who could most benefit from your money).

Expand full comment

Imagine that some European restaurant chain introduces a business innovation that enables customers to reduce their bill by 15% if they choose to press a blue button on the credit card reader, with the understanding that this money will be deducted from the server's pay. There are no legal ramifications for pressing the button, but most people decline to press it and agree that button pressers are assholes.

Do you commit to pressing the button every time and donating the money to people who you think need it more than the random server whose paycheck you are reducing?

Consistency with your stated position would seem to imply your answer should be yes. Someone who refuses to tip in a US-style culture but also declines to press the blue button when it is introduced in Europe would seem to present a classic example of anchoring/framing bias.

(I say this as a non-American who finds US tipping culture extremely distasteful.)

Expand full comment

I do indeed believe I would press the button, yes.

Expand full comment

Okay, if there are three buttons, reducing the random server's pay by 5/10/15%, which do you hit? What if it was 15/20/25%?

Expand full comment

What's the decor of the place? Is the server wearing a political messaging shirt, and if so which?

Expand full comment

The decor and server's attire are whatever would conventionally lead people to expect that this is a situation where you "shouldn't" press the blue button.

Expand full comment

Their shirt reads "First things first, let's kill all the Blue Buttoners".

Expand full comment

Just for another perspective… I find tipping to be a fantastic solution for some service people (such as waiters). It rewards and recognizes excellence, it promotes a pleasant attitude, it discourages grumps and freeloaders from entering into inappropriate service occupations and it just seems more efficient than paying everyone the same.

Most importantly, I was appalled by every single waiter I have ever had in Europe. They seem to resent me and my business, and they are frankly unacceptable. The basic Denny's waitress in America is incomparably better than any waiter I experienced in Europe, IMO.

Expand full comment

I think this stands if tipping is thought of as a bonus and hence includes a meaningful chance of not happening. If everyone just expects a 20% tip as their 'right', why would it affect behaviour?

Expand full comment

>Most importantly, I was appalled by every single waiter I have ever had in Europe.

Could you please elaborate for the benefit of someone who has never been to the US?

I enter an establishment, I’m led to a table or I sit down at one myself, a human being approaches me, I communicate a list of dishes to them and after a while said dishes are brought to the table. Later I ask for the bill, they bring a POS terminal, I pay and I leave.

That’s exactly what I expect and I’m satisfied. Do American waiters do something else that you find essential?

Expand full comment

My reaction would be to boycott this restaurant chain. And if possible, vote to make this kind of button illegal.

In the meantime, if I somehow couldn't avoid this restaurant chain (or let's assume it is not restaurants but, say, supermarkets), I would press the button, because it is better if the servers get angry and quit than if this business model becomes successful and popular.

Expand full comment

I like Quiop's comment. I'm not sure if I meant "harm" but i meant "choose between two options, one which gives me higher payoff and one which gives the other person higher payoff".

Just want to add two things.

First---if i thought not tipping would make it more likely that the inadequate equilibrium of tipping would change, then i would withhold tips, and punish the individual waitstaff for the benefit of the class. so tipping is not "contributing to the stability" because the counterfactual is not undermining the stability. the inadequate equilibrium will persist and is not a factor in this decision.

So in sum, tipping is a decision between two options, whether I get more or whether the server gets more. totally unrelated to equilibria.

Second---possibly relevant biasing information with regard to the concept of withholding... in USA, restaurant servers often are paid below minimum wage and tips form a primary component of their income. thus, it's more than just social norms compelling me to tip, it's my belief that people deserve a fair wage and that tipping is part of that system. here in UK i know that tipping is "extra" and so i don't feel bad when i ignore the increasingly popular option. they're already getting paid.

Expand full comment

I think this is wrong. Having known quite a few servers/ bartenders i can confidently say that American style tipping heavily benefits these people. They make a lot more than they would in comparable untipped jobs. This stems from the fact that the employer can't take tips away from them so if the tips are more than the equilibrium value of the labor the employee will benefit.

Expand full comment

i've wondered about that myself---your claim is that when i was a waiter, i made more because i worked in a tipping restaurant than I would have if i were just paid wages? is that right?

if so... can we say that the employee still takes on risk, by definition of variable wages, but that they are compensated for the risk by the higher expected value? (in which case it'd be the same as any investment.) and that the consumer ultimately bears the cost of the higher expected wages, while the restaurant and non-tippers reap the benefits.

but also---anecdatally i find your claim believable, though it seems hard to actually make that comparison. how on earth could your American working friends know what they would have made in a european restaurant, adjusting for the fact that european economies are generally weaker? i'll also add that restaurants in europe (London anyway) have terrible customer service, so it's not comparable on the product side either: waiting tables in USA is a different job than waiting tables in London.

i'm also going to assume that you are a smart and attractive person with smart and attractive friends, who make far above average in tips both because people like to tip them, and because they end up working in higher quality restaurants.

Expand full comment

You don't need to make comparisons to Europe: some places introduced a "no-tipping" policy, raising prices and wages, and then reversed course after servers quit.

Expand full comment

That's really interesting, did they raise wages to something at or near the expected tips? Do you still have a link to that?

Expand full comment

> your claim is that when i was a waiter, i made more because i worked in a tipping restaurant than I would have if i were just paid wages? is that right?

Well, not you specifically. A waiter's tips depend on a lot of factors and I wouldn't claim that every single waiter/bartender is better off under a tipping system than they would be otherwise. I am making the weaker (but still substantial) claim that a large proportion of waiters/bartenders are better off under the tipping system and that the system benefits the population of waiters/bartenders as a whole.

> can we say that the employee still takes on risk, by definition of variable wages, but that they are compensated for the risk by the higher expected value? (in which case it'd be the same as any investment.)

It's true that the waiter/bartender takes on risk by having uncertain wages, but for many of them this risk is very small relative to the wage premium. Everyone has slow nights but it generally averages out over the course of a month. I don't think it's quite accurate to say that the wage premium is "compensating" them for the risk since it's not really the reason for the wage premium. The restaurant/bar isn't choosing to let the waiters/bartenders keep the higher expected wage because they want to offset the risk; they're legally not allowed to request that the employees surrender their tips, so the higher-EV but higher-variance compensation package is the only option.

>anecdatally i find your claim believable, though it seems hard to actually make that comparison. how on earth could your American working friends know what they would have made in a european restaurant, adjusting for the fact that european economies are generally weaker?

The proper comparison isn't to a European waiter, that's a completely different labor market. The proper comparison is to a similarly skilled and difficult job in the same area that isn't tipped. For example, a cashier at a fast-casual restaurant or even the kitchen staff at the same restaurant.

Seriously, the deal that a bartender at a fairly busy bar gets is unheard of in any other industry. No education or training needed, prior experience optional, can be trained and ready to work on your own in a week or two, work on your feet but no heavy lifting, mild drinking on the job is tolerated, $30+ an hour (I know this is above average, that's why I specified fairly busy).

Expand full comment

I'm American and it still makes me angry. Especially at fast-food places or coffee shops. I'm like, "You spent 20 seconds taking my order and now you have the gall to panhandle me for it?"

Expand full comment

Funnily enough, for the religious word priming, I got Christina for the second one and thought it was out of place. Though maybe that was because it wasn't far enough down to get the pattern yet.

Expand full comment

This was a really excellent post.

For quite some time now, I’ve been looking for a survey paper outlining which areas of psychology have replicated/failed to replicate. (The best I have found is estimates of replication percentages in cognitive psychology and social psychology.) Has anyone seen such a paper?

Expand full comment

The weirdest thing in this post: Why do the taxi tip percentages go *up* with the fees in the right hand of the graph?

Expand full comment

Doesn't seem weird to me -- I tip a slightly higher % for longer rides out of gratitude, and I assumed others did similarly. And from a personal perspective it's not that large a difference (17% vs 18% eyeballing the graph). Pretty cheap as cost-of-gratitude goes.

Expand full comment

Hm. I think there is arguably an interesting bias here, where 17% feels the same whether it's 17% of a long or short trip.

Expand full comment

I assume long trips probably take the driver to a place where there may not be nearby fares, necessitating some unpaid driving back to the center of things.

Expand full comment

Re: boundaries for biases:

Another factor you're not mentioning is that many biases are at least partially adaptive in practice. The religious word scramble, for example: in most natural situations, you'll just be discussing something (in whatever medium), and it's advantageous to understand what the other person is talking about/wants to talk about. Unless someone deliberately wants to trick you, they won't suddenly switch topics, and there is also no advantage to them (and little disadvantage to you except a mild annoyance). It's related to Bayes as well: we've trained the prior "topic is important" very hard for all our life, and unless we train these kinds of word scrambles every day we will not get rid of it.

Expand full comment

> Unless someone deliberately wants to trick you, they won't suddenly switch topics, and there is also no advantage to them (and little disadvantage to you except a mild annoyance).

Exactly this, but with one important caveat: people that deliberately want to trick you do exist, and they're not particularly uncommon. Everyone from marketing departments to the taxi companies and woke grifters Scott mentioned in the article wants to weaponize your psychology against you to get you to do things that are in their interest that you would have no reason to want to do if you thought about it clearly for a few moments. So it's important to be aware of their existence.

Expand full comment

I think this is somewhat related to how a Quick Change Scam can work so well. For those not familiar, it happens when the scammer pays for an item with a large bill, then before the cashier can give them their change they throw in some more money so that their change will be more "round" and then throw in an extra transaction and generally make a lot of transactions quickly in order to confuse the cashier on how much change is owed. Then they confidently state how much the cashier owes them and end up shortchanging the cashier. It used to happen a lot to my dad when he was young and owned a gas station.

It works because our minds tend to follow one track and get confused when we suddenly have to change what we're mentally doing multiple times in the same conversation. You'd think it wouldn't work, but it can be surprisingly effective: the only way to avoid it is to be suspicious enough to spot it happening (ie, this is a lot of confusing transactions WAIT that pattern matches onto scam) and then insist on only doing one transaction at a time, closing the register between transactions.

Expand full comment

And what is the point of the tricksiness of the Linda example? It would be interesting to see how the results compared if people were asked: which is more probable, Socrates is a man, or Socrates is a man and has two legs? And then crank it up a bit: Socrates is a successful marathon runner. Now which is more probable? The Linda example seems like the yellow/blue optical illusion; it doesn't tell us much about our ability, on average, to distinguish blue from yellow.

Expand full comment

I agree people don't think as logic would indicate. I believe the interpretation is more like "I have no idea whether she is a bank teller, so maybe 50/50? Why suggest bank teller if there weren't some reason to suggest it, instead of, say, baker or factory worker?" combined with "Well, it certainly seems consistent that someone concerned with social justice would also be a feminist, so that part seems likely to be true, maybe 90%?" But the possibility of certainty for part of the factor intuitively seems to lend strength to the bank teller assumption, too, since things that are right often go together.

The logical way of thinking strips away much of the way people actually think. Using my (fake) example, if you think it 50% likely for them to be a bank teller, but the certainty of that is small, then the certainty you have of them being a feminist is large. That would give a weighted rating of (50% with 10% certainty = 5%) + (90% with 90% certainty = 81%) for a total of 86% feeling that the second statement is true, against a 50% feeling that the first statement is true.

Expand full comment

The point is you can bury a lie if you sprinkle in a bit of truth and control the framing. If you ask if she is a bank teller or not, many people will say they don't know one way or the other, could be 50/50. But it should be a lot less than 50/50, because only a small percentage of this population is bank tellers. Then you tuck that lie in with a strong truth that you let the other person notice for themselves and you can get them to side heavily with the choice you want them to make.
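To put numbers on that (the base rates below are made up purely for illustration): whatever probability you assign to "bank teller," conjoining it with "feminist" can only shrink it, never beat it.

```python
# Made-up base rates, for illustration only:
p_teller = 0.02                 # assumed fraction of the population who are bank tellers
p_feminist_given_teller = 0.9   # even granting that the vivid description makes this high

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

# The conjunction can never exceed either of its conjuncts.
assert p_both <= p_teller
print(f"P(teller) = {p_teller}, P(teller and feminist) = {p_both:.3f}")
```

So the trick is exactly as described: the intuitive ~50/50 on "bank teller" is already far too high, and the strong "feminist" truth smuggles the inflated conjunction past the reader.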

Ever been to a time-share presentation or similar? It's kind of fun to go once or twice and (hopefully) develop some antibodies. I mean, the inoculation carries some risk, but it can show you how some of these biases are used in real life. Then when you see their signs in the future you can switch to type-2 reasoning.

I wonder if there is some kind of class or workshop where you can LARP these kinds of inoculations without putting your wallet at risk.

Expand full comment

Honestly, the "Socrates is a successful marathon runner" would probably get me. Primarily because I would lose track of what the actual question is: my brain immediately connects "well if Socrates doesn't have two legs then there's no way he could be a successful marathon runner, therefore he almost certainly has two legs!" forgetting that the question isn't "does Socrates have two legs" and is instead "what is more probable".

I think a lot of these kind of biases comes down to tricky questions: humans are not good at comprehending questions that don't pattern match to questions they're accustomed to.

"Is this your dog?"

"Lady, I'm going to pick up the poop, just give me a second!"

"That's not what I asked!"

Expand full comment

I don't think automaticity is a good label for the ideas Literal Banana talks about. Situationism is the problem: https://en.wikipedia.org/wiki/Situationism_(psychology) There was a "heroic" age of social psychology from the 1950s to the 1970s when famous studies like the Asch conformity experiments, the Pygmalion effect, the Stanford prison experiment, the Milgram experiment, etc. seemed to show that environmental circumstances have a huge effect on behavior. Those studies were generally fake or greatly overinterpreted, but social psychology grew into a big field and has been stuck in the situationist paradigm ever since. When you combine this false view of human behavior at the core of social psychology with methodological standards where running an experiment on a few dozen undergraduates and then fiddling with the data until you get p<0.05 constitutes scientific proof, there is nothing surprising about the replication crisis.

Expand full comment

The 1940s presented a lot of unfortunate examples of how in certain situations human beings could do unimaginably bad things.

Of course, it's hard to reproduce those situations on college campuses within reasonable codes of experimental ethics, but the historical record is clear that sometimes they can happen.

Expand full comment

Sure, but what happens in extreme circumstances is not a good foundation for understanding human behavior in general. For many decades, methodologically weak experimental psychological research based on situationist presuppositions has crowded out more sensible approaches. The average psychology textbook from 1950 provides a more accurate view of human behavior than one from 2010.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I agree about lots of those famous old studies being badly done. But don't you think there's a place for situationism? We are social animals and it's adaptive to be aware of what a situation is pulling for, i.e. what behavior is considered appropriate in the situation. While the pull of the situation doesn't compel us to behave as we think is expected, it sure makes us more likely to. And the researchers of that era were trying to understand what the hell happened in Germany -- why did the population not rise up and protest the monstrous things their government was doing? After all, most of the citizens were not monsters. Seems to me like some situationism was at work.

Expand full comment

Right.

Has anybody gotten their Institutional Review Board to approve a replication of the Milgram scientific torture or Zimbardo Stanford prison experiments? Even the 1954 Robbers' Cave study of adolescent summer campers sounds unlikely to be approved nowadays, although I could totally see my 12 year old self behaving like those boys.

Expand full comment

Today we actually have a pretty decent explanation for German atrocities. The population didn't rise up in protest for a few reasons. 1) They didn't know about the worst parts of it. Very few people did, even inside the Wehrmacht. This was very much by design. 2) They knew what the alternative was. They had just lived through it. The Weimar period was a time of abject misery for Germany, and the Nazi party got them out of it. What was happening now felt, from their perspective, like the lesser of two evils. 3) Bad things happened to people who spoke out. Again, this was very much by design. The Nazi regime made an example of dissenters to make the rest of the people too afraid to dissent.

As for the German soldiers actually perpetrating the atrocities, surprisingly enough a huge part of the answer turns out to be... drugs. The German army gave out stuff like methamphetamine and cocaine to its soldiers as performance-boosters, and this ended up having... well... exactly the disastrous mental and moral effects on them that we'd expect in hindsight. In a very literal sense, WWII was the first War On Drugs.

Expand full comment

I can't see how the drugs explain what the soldiers did. Have you ever been around meth-heads or people high on cocaine? I have. You would NOT want that bunch as your soldiers or as guards in your concentration camp. Half the time they think they're gods and the other half they're coming down and feel like shit. Neither drug makes you more compliant. I think they probably do tend to bring out people's violent side, but the form of violence you'd get would come from personal rage, and would not take the form of increased willingness to march over to some set location and commit whatever horror the higher command was ordering today.

Expand full comment

...which the German army discovered along the way. They cut way back on the initial idea of "just give out meth like it's candy" even before the war started, because the downsides were becoming apparent, but instead of arriving at "therefore we probably shouldn't give our soldiers this dangerous drug at all," they went with "let's find a more effective way to optimize its effects." By the end of the war they'd arrived at stuff like D-IX, a complex mixture of meth, cocaine, and oxycodone, which was deployed to submarine pilots and was intended to be mass-produced and handed out to the whole Wehrmacht before the end of the war made that plan unworkable.

Expand full comment

Methamphetamine has been used in the USA armed forces too:

https://pubmed.ncbi.nlm.nih.gov/7661838/

( I think the strategic air command pilots used it too, but I'm having trouble finding a reference )

Expand full comment

I have actually seen people suggest that 3 is false, and that those who refused to obey orders to commit "atrocities" (mostly in the context of the concentration camps) WEREN'T punished: they were merely reassigned to the Eastern Front.

Expand full comment

This seems like a suspiciously narrow definition of "punish." How exactly is "we will take you out of your cushy job guarding prisoners and throw you into the meat grinder" *not* a punishment that makes an example of reluctant soldiers?

Expand full comment

I agree.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

On the other hand, people from various other countries were going willingly to the front, and have gone willingly to other fronts. So apparently going to the front is something people can reasonably be expected to do willingly.

Expand full comment

This is true. When the Nazis started rounding up the disabled and murdering them, the German public was pretty upset when they found out. There were public protests, open letters to the Fuhrer, all that stuff. That's probably why all the death camps (not all the concentration camps, but the death camps specifically) were located outside of Germany, in areas controlled by the military. If the German people had known the Jews that were rounded up were being slaughtered wholesale, they probably would have been strongly opposed.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Looking at the Wikipedia article on Conjunction Fallacy, the prize example seems terrible (at least to this literal-minded idiot):

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement."

The 'correct' answer is "Ha ha, you fools, you all picked No. 2 but you're wrong!" Why? Because there's nothing there to say Linda is a feminist as well as a bank teller. Yeah, but there's also nothing there to say she's a bank teller. So the real proper answer is "Neither is more probable" or "I can't tell, there is insufficient information".

If you're asking me to estimate if Linda is a bank teller *or* a feminist bank teller, without giving me any information as to what her job really is, then yeah I'm taking the description of Linda's character that you *did* give me and extrapolating from that that she is likely to be a feminist.

I'm not saying the thing *isn't* a fallacy, just that it's a dreadful example to use and ordinary people are not idiot machines for picking No. 2 over No. 1. You're asking them to make a judgement, you're not giving them the full information, so naturally they will make a decision based on the information you did give them, which is that Linda is the age range and background to be a feminist. We still have no idea if she's a bank teller, surf instructor, or Cordon Bleu chef, but you're the one proposing that she's a bank teller.

Imagine reading something that was said to have been posted online about JESUS NEVER EXISTED RELIGION IS HORSESHIT CHRISTIANITY IS NUTS THERE ARE NO SUCH THINGS AS GODS OR DEVILS BELIEVERS ARE SHEEPLE SCIENCE IS REAL BELIEF IS NOT.

"Is it probable that this person is an atheist and a teenager?" Well gosh, George, how could I possibly tell from what you've given me there? Maybe it was Pope Francis posted that on the Vatican website. I wouldn't want to be Conjuncting any Fallacies, now would I? 😀

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

It's a fallacy because 1 includes all possible versions of 2, and is thus at least as probable in every circumstance. You want to pick 2 to make the preceding information meaningful, but it isn't.

Expand full comment

Sure, but most people aren't so cynical as to assume that the authors are intentionally wasting their time by including irrelevant details just to fool them. Chekhov wouldn't do that.

Be like Chekhov.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

The bones of the example are "Linda is a 31 year old woman. Which is more probable: she is a bank teller/she is a bank teller and a feminist".

If I don't know or can't say that she's a feminist based on what they told me about her, neither can I say that she's a bank teller based on what they told me about her. She could be a bank robber, not a bank teller (and given that the example comes from 1981, Linda is 31 which makes her born in 1950, and her college activist history, it's certainly possible that she would have sympathised with the Symbionese Liberation Army and robbing banks):

https://en.wikipedia.org/wiki/Symbionese_Liberation_Army

"Linda is a 31-year-old woman" is all we can say with surety. Calling it a 'fallacy' that people pick the second option out of two options where we have limited information is misdirection.

Again, I'm not denying that such a thing as the Conjunction Fallacy is real, but just that I wish they had used a better example. Say, "Linda is a 31 year old woman who works in a bank. She is married with three children** and lives in a suburb of her medium-sized Midwestern city. She majored in Education* in college.

Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and votes Republican."

*Then* you could take a newspaper to the nose of people assuming that Linda was a Republican voter due to conjunction. All that we can say is more probable is that Linda is a bank teller and not the bank manager or working for some big NYC financial concern. I think that, in the choice and wording of example, Tversky and Kahneman had some unconscious biases of their own operating 😁

*Linda would have been a college student in the late 70s so based off this as most likely field for her:

https://flowingdata.com/2016/12/07/fields-of-study-ranked-over-past-few-decades/

** Mike Jr. aged 7, Chris aged 5 and Jennifer aged 3. She got married to her high school sweetheart who attended college with her, Mike Sr., straight after graduation. Mike majored in Business, owns a Buick dealership, and is a stalwart of Rotary. They love Duluth and think their suburb is the best place to raise a family. I can tell you an entire story about Linda and her family 😁

Expand full comment

"Linda is a 31-year-old woman" is still noise in the question. Which is more probable:

1. A is true

2. A and B are both true

That's the entire question.

Silk silk silk silk silk. What do cows drink?

Expand full comment

"Silk silk silk silk silk. What do cows drink?"

Water, unless you're feeding calves, and they'll drink from the mother's udder. But only for a while, and then they'll be put on milk replacer (often bucket fed) while the heifers are milked.

https://www.youtube.com/watch?v=MdoAvuNk09U&t=1s

That question probably works better on people who've never seen cows milked or calves fed 😁

Expand full comment

Phrased this way, I wonder how much error is captured by the fact that a person might read the choices as exclusive rather than overlapping.

If the cardinal example were reworded to:

A) Linda is a bank teller, and may or may not be a feminist

B) Linda is a bank teller and a feminist

I’d bet most of the B answers go away.

Expand full comment

The real-life example is how news stories love to specify there are consequences to the poor and minority communities, and let the reader conjoin that to mean "oh, this landslide/flash flood/wildfire is because of upper-class racism."

Expand full comment

But Option 1 tells us Linda is a bank teller. Where in the information provided does it say "Linda works in a bank"? It tells us all about Linda's character, age, intelligence, and educational background. That makes it easier for us to estimate if Linda would or would not be a feminist.

If the options presented to me seem to be asking "Make an estimation about Linda based on the information provided above", then yes, I'll think she's a feminist. I have nothing to suggest that a college-educated, unmarried woman in her early 30s who majored in philosophy and was involved in student activism for liberal causes then went to work behind the counter in a bank.

It is a trick. I have better grounds to infer that Linda is a feminist than I do that she is a bank teller.

Expand full comment

I feel like if the order were reversed, this would be an easier confusion to observe...that is, Linda Is A Feminist // Linda Is A Feminist And Bank Teller. The "story", such as it is, obviously points left. So then tacking on orthogonal attributes, or possibly even anticorrelated ones, clearly makes the probability go down. But then, as you say, it wouldn't be much of a fallacy since the jig is so obvious.

Expand full comment

Yeah, definitely. It is to do with maths not words, and that makes the trick. "1 contains all possible versions of 2" may work with figures, but "Linda is a bank teller" does not contain all possible versions of "Linda has blue eyes, Linda has brown eyes, Linda's favourite food is bread, Linda is coeliac, Linda is a supermodel, Linda has twelve kids despite being unmarried, Linda hates dogs, Linda hates cats" and so on. The wording of the problem tells us that Linda is a vegetarian coeliac who hates cats but not where she works. Then asking us about "is she a bank teller?" doesn't make sense because it never told us anything like that.

But if you strip out all the details and phrase it as "is A+B more probable than A?" that's a basic maths formula and people would recognise that. Which makes it a completely different problem from "should I ask Linda out on a date since I'm a baker who can never get all the flour off no matter how hard I try, and that's not counting the cat hair from my six tabbies?"

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I think this is it. We've got A or A+B, and the story suggests B so we pick A+B. If we replaced B with C, something that the story didn't point to, there'd be no reason to pick it.

How would you view "Linda is a bank teller" vs. "Linda is a bank teller and is right-handed"? That's something that is likely in itself, but not likely because of the story.

Expand full comment

No it doesn't, not if you can plausibly claim that any additional fact at all can be derived from 2 whether that fact is about Linda, about the state of mind of the person making the statement, or anything else. It doesn't even matter if Grice is completely wrong and I am completely wrong to believe him; if I wrongly believe him and wrongly derive an additional fact from 2 the mistake I am making is not the conjunction fallacy.

Expand full comment

The best counterargument I can come up with is this: probability is meaningless when applied to fictitious people you just made up.

I made up someone called Sally, is it more likely she's a schoolteacher or a mermaid? There is absolutely no sensible answer to that question; probability might apply to individuals selected by some process from some population, but not to people made up for illustrative purposes.

If you tell me Linda was a person picked at random from the population of Birmingham circa 2023, I can perhaps start to think about probabilities.

Expand full comment

The question isn't

>Is it more likely that Sally_is_a_teacher or that Sally_is_a_mermaid

The question is

>Is it more likely that Sally_is_a_teacher or that Sally_is_a_teacher_AND_a_mermaid

The probability of A is never less than the probability of A_and_something_else.

There are never more mermaid teachers than there are teachers because some teachers aren't mermaids.

There are never more mermaid teachers than there are mermaids because some mermaids aren't teachers.

It doesn't matter what A and B are and all of the information provided about Sally/Linda is irrelevant. The fact that Sally/Linda don't exist is also irrelevant.

No detail can change the underlying math fact that P(A) is always at least P(A and B)
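The set-inclusion argument here can be checked mechanically. A minimal sketch in Python, with made-up base rates purely for illustration (no claim about real teachers or mermaids):

```python
import random

random.seed(0)

# Hypothetical, invented base rates for a simulated population:
# A = "is a teacher", B = "is a mermaid".
N = 100_000
p_teacher, p_mermaid = 0.03, 0.001

teachers = 0
mermaid_teachers = 0
for _ in range(N):
    a = random.random() < p_teacher
    b = random.random() < p_mermaid
    teachers += a                    # count everyone satisfying A
    mermaid_teachers += a and b      # count everyone satisfying A and B

# Every mermaid teacher is also counted as a teacher, so the
# conjunction can never outnumber the single event.
assert mermaid_teachers <= teachers
print(teachers, mermaid_teachers)
```

Whatever numbers you plug in for the two base rates, the assertion never fires, because the conjunction count is a subset of the single-event count by construction.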

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Not sure.

Given: Wendy is a small dog owned by a medium-sized French man.

How likely is it that Wendy is a famous TV singing star? Give a probability between 0 and 1.

My probability would be very close to 0.

Now with more information. Wendy appears on Britain's Got Talent in a smash hit show as a singing dog with her French owner. How likely is it that Wendy is a famous TV singing star?

My probability is close to one.

https://www.bing.com/videos/riverview/relatedvideo?q=britains+got+talent+singing+dog+french&mid=447862FF588F4A662E15447862FF588F4A662E15

The set of dogs who are singing stars must be larger than the set of small dogs belonging to Frenchmen who appear on Britain's Got Talent and are singing stars, but the second scenario is more probable.

Expand full comment

I think you are conflating P(A and B) with P(A given B).

If you know that Wendy is a singing dog, the probability she is on TV is higher than the average dog. I agree that is true and no one is saying you can't update your probabilities given relevant evidence.

But in the fallacy you don't know B yet.

Given Wendy is a small dog, which is more likely?

1) She is a singing star

2) She is a singing star who appeared on TV

1 is more likely.

Obviously if you then learn that Wendy sang on Britain's Got Talent then both are now equally likely at 100%.

Note that 1 does not say 'She is a singing star who was not on TV'; it doesn't make a TV judgement at all.

There is no information you can learn that will make scenario 2 more likely than scenario 1 because 1 contains all of 2. Even if 2 makes a better story.
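The distinction between the conjunction P(A and B) and the conditional P(B given A) can be made concrete with invented numbers (nothing here comes from a real survey of singing dogs):

```python
# Toy probabilities, invented for illustration: a population of dogs
# where singing is rare and TV appearances are common among singers.
p_sing = 0.001           # P(A): singing star
p_tv_given_sing = 0.9    # P(B|A): on TV, given singing star
p_tv = 0.01              # P(B): on TV, overall

# Conjunction via the chain rule: P(A and B) = P(A) * P(B|A)
p_sing_and_tv = p_sing * p_tv_given_sing

# Conditioning can push a probability way up...
assert p_tv_given_sing > p_tv
# ...but the conjunction still cannot exceed the single event.
assert p_sing_and_tv <= p_sing
print(p_sing, p_sing_and_tv)
```

Learning that Wendy sang on TV raises P(on TV | what we now know) toward 1, but that is an update on evidence, not a comparison of the two unconditional statements in the original question.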

Expand full comment

Yes, I understand the standard argument and am sympathetic to it.

The argument I'm trying to put forward is that P(A) need not be greater than P(A&B) here, because P(A) and P(A&B) are not numbers; they are meaningless concepts. They have no values, cannot be defined, and thus cannot be compared.

It's a philosophical difference about the scope of probability I suppose. If I asked "What is the probability that a snroop is a bloof" then some of the too-Bayesian-for-their-own-good types here would probably rush to say that in the absence of any definition of those words I just made up they're just gonna go ahead and assign a weakly held prior of 50% until further information is forthcoming. But I think the sensible answer is "go away, there's no such probability, come back with a well defined problem".

Expand full comment

Imagine if Einstein said that. "Well, we don't really know what E and m are. They're not numbers, they're just meaningless concepts - mere letters, that cannot be defined or compared. So there's really no point trying to think about the relationship between E, m, and c. Come back to me when you have real values for E or m."

A critical part of math, and more generally logic, is doing this kind of abstraction where you deduce general principles like E=mc^2 or P(A^B)<=P(A). It's not so much a philosophical difference about the scope of probability as it is an outright refusal to engage in basic logical reasoning.

Expand full comment

Kahneman and Tversky achieved their famous results for their Linda Is A Feminist Bank Teller trick question by violating the Chekhov's Gun principle of storytelling. The great dramatist Anton Chekhov advised:

"Remove everything that has no relevance to the story. If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off. If it’s not going to be fired, it shouldn't be hanging there."

But what did Chekhov know about human nature? To the Aspergery Israelis, it was indisputably irrational for listeners to assume that Tversky and Kahneman didn't just put in the details to fool them. After all, that’s exactly what the professors were trying to do: con them.

Expand full comment

In this context, the principle is known as Grice's Maxim of Relevance.

Expand full comment

Or trying to get published.

Expand full comment

The Real World dumps a great many irrelevant details on you all the time. Chekhov might be up there with Disney in terms of harm caused by storytelling.

Expand full comment

If you actually read Chekhov, you'd know his pieces are full of irrelevant details (and some relevant details are often missing, like endings; that's why there's not a single great movie based on his work). He was not a good drama theorist; he left only several random thoughts and followed none of them himself.

Expand full comment

The probability of "A and B" is always smaller than or equal to the probability of "A". It doesn't matter whether A and B are "feminism" or "Jesus Christ".

Nevertheless, the simplest explanation of why people engage in the conjunction fallacy in that example likely has more to do with the fact that, in colloquial as opposed to rigorous language, when you are given the options "A" and "A and B", there is an implication that "A" means "A and not B".
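If readers really do hear option 1 as "bank teller and NOT a feminist", the comparison they are making is P(A and not-B) versus P(A and B), and then picking the conjunction is rational whenever B is likely given A. A toy calculation with invented probabilities (the specific values are assumptions, not data about Linda):

```python
# Invented numbers for illustration only.
p_teller = 0.05            # assumed P(Linda is a bank teller)
p_fem_given_teller = 0.8   # assumed P(feminist | bank teller), given her bio

p_teller_and_fem = p_teller * p_fem_given_teller
p_teller_not_fem = p_teller * (1 - p_fem_given_teller)

# Under the exclusive reading of option 1 ("teller, and not a feminist"),
# option 2 wins whenever P(feminist | teller) > 0.5.
assert p_teller_and_fem > p_teller_not_fem
print(p_teller_and_fem, p_teller_not_fem)
```

So on the colloquial reading the choice isn't a fallacy at all; the fallacy only exists under the strict reading where option 1 includes option 2.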

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

You took the words out of my mouth. This is the problem, not what the others are saying is the problem.

If people say "which is more likely, that she is a bank teller, or that she is a feminist bank teller?" normal people interpret the first one as meaning that she is not a feminist. This is not a fallacy, it's how natural language works.

Expand full comment

That is the problem. It's a maths problem, not a language problem, but the vast majority of people who don't think in "maths first" will think of it as going by the words used. We're told about Linda and asked to make an estimation of her likely personality. Bank tellers may or may not be feminists, there's nothing to say that the category "bank teller" also encompasses the category "feminism". So based on what we know about Linda, it is more likely that she will be engaged in feminist activism than if this story was about an MBA graduate named Bob who is Vice-President of Global Pension Insurance Payment Schemes.

But the underlying principle here is not "Linda or Bob, who is more likely to be out marching with the Gloria Steinem T-shirt?" which is how people interpret it, it's "are you up on your probability theory maths so you know A is more probable than A+B, write the proof".

I think that's why the 'scientist' part of 'social scientists' are excited about this - mathematical rigour and statistics, woo! - but ordinary idiots like me go "yeah well you tell me if you think Linda is going to be sitting at home crocheting a throwover instead of donning a pink pussy hat for the Women's March".

Expand full comment

How to take this forward: divide your sample into two. Get one half to answer the question as per the original experiment; give the other half a spreadsheet and ask them to put a tick or not in column 1, bank clerk yes/no, and column 2, active in feminist movement yes/no.

Intuitively I would expect spreadsheeters to be less confident than natural languagers - i.e. there would be fewer ticks in column 2 than the embracing of the conjunction by the natural languagers would imply. I am not sure whose side this would support: are the spreadsheeters answering a different question, shorn of implicatures, or does a spreadsheet mentality help them to detect and avoid a fallacy? Maybe test by giving a third set a spreadsheet where the columns are 1 bank clerk yes/no, 2 bank clerk AND active feminist yes/no.

Expand full comment

If you phrased the question as "1. bank clerk - yes or no", "2. feminist - yes or no", there's much more grounds for ticking 2 based on the information about Linda that we are given. We have no idea, other than the assertion in the options, that she's a bank clerk.

From the information, I think people are taking it that Linda is very probably a feminist. So that is something we can point to evidence from the information "Because x, y, z". We have no information about what job she does. So if the linkage is "feminist and bank teller" and "bank teller on its own", people are using "I'm pretty sure she's a feminist, so that links with bank teller, so maybe she's a bank teller". For "bank teller" on its own, I have no idea, there's nothing there to say what she works at.

Expand full comment

This might be true about that specific example, but it is nonetheless the case that people fall for the conjunction fallacy all the time, in real situations where there are no unfired Chekhov's guns or grammatical ambiguity.

It is trivially easy to demonstrate such.

Expand full comment

For example?

Expand full comment

Except, as noted, in standard English, this text actually equates to "is she a bank teller who is a feminist, or a bank teller who is not a feminist?" Now, as a lawyer, I don't mind playing this sort of word game in a deposition, but people who play it in real life are viewed as twits, unable to communicate clearly, or being deceptive.

Now, another way to interpret this question is like this:

Given what we know about Linda, it is quite likely she is a feminist. I have no information that would cause me to believe her to be a bank teller, therefore, one of these is at least likely to contain some true information and, as I have to choose, I'll choose the one that is more likely to contain some true information.

Expand full comment

> I have no information that would cause me to believe her to be a bank teller, therefore, one of these is at least likely to contain some true information and, as I have to choose, I'll choose the one that is more likely to contain some true information.

Yes, this is likely the (subconscious?) thought process of the people who fall for the fallacy.

Expand full comment

Somehow I'm sleepy enough that this reminds me of the Monty Hall problem. "Here's a billion possibilities, choose one at random."

Okay, you choose this one, which happens to be... "bank teller".

"Now I eliminate 999,999,998 of the possibilities, leaving just the one you picked that says 'bank teller', and also that one way over there, which says 'bank teller, bank robber, feminist, left handed, heterochromatic, drives a VW beetle, wears a beret, swears in 4 languages, and is wanted by Interpol'. Which of these two do you pick?"

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

This is a great way to waste time when I should be working, arguing over this is a lot more appealing than buckling down to pulling out data to create spreadsheets for the audit 😁

Okay, *if* the choices were as you say: "bank teller simpliciter" or "bank teller and festoon of attributes", *then* of course it makes sense to only venture as far as "bank teller". There's no way to know about, or back up, the rest of those assertions.

**But that's not the problem as phrased.**

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement."

What information are we given about Linda, from that potted bio?

Name: Linda

Gender: Female

Age: 31

Marital status: Single [a question you are no longer allowed to ask at job interviews, by the way]

College educated?: Yes

Degree: Philosophy

Interests: Anti-nuclear demonstrations, issues of discrimination and social justice

Personality: Very bright, outspoken

Job: ----------

Based on that list, there are a plethora of studies that tell us Linda votes Democrat:

https://cawp.rutgers.edu/gender-differences-partisan-identification-and-presidential-performance-ratings

https://www.pewresearch.org/politics/2018/03/20/1-trends-in-party-affiliation-among-demographic-groups/

Is there any information there about her work or job? No. A degree in philosophy does not correlate with "works as bank teller" and bank teller is a lower-level job, it's the person behind the counter dealing with customers and office work. A bank executive or more specialised role in fintech might be where someone pivots from philosophy to a career in finance/banking, but Linda is not at that level (presumably, given the wording of what her job is).

Indeed, with all the above, if Linda *is* a bank teller, there's also a damn good chance she's African-American 😀 To make any decision on the probability more likely "is Linda X?", the information leans towards "more probable Linda is a feminist". There's nothing to form an opinion on "is Linda a bank teller or not?" If you're asking me "Is Linda a feminist?" I'll say "most likely". If you're asking me "Is Linda a bank teller?", I'll say "I have no idea". That, I think, is where and why people are picking option 2 - given that, on the information provided, Linda is more probably a feminist, then going for "feminist and bank teller" is more probable than "bank teller" on its own. The order of precedence of the wording is really doing all the work here.

So it's not "is Linda a bank teller?" as in your Monty Hall problem. Indeed, to switch *that* one around, the Linda Problem here is as if the doors *did* tell you "left-handed, drives a VW, wanted by Interpol" and so on as the choices narrow down. If you had gone through all those, then at the end had "bank teller" versus "bank teller, bank robber, feminist, left handed, heterochromatic, drives a VW beetle, wears a beret, swears in 4 languages, and is wanted by Interpol" then I think you'd be justified in "Door Number Two, Monty!"

Expand full comment

I think people are reading the example as follows:

Which is more probable:

1. A bank teller is Linda.

2. A bank teller in the feminist movement is Linda.

Now the correct answer is 2. Note that in English "Linda is a bank teller" and "A bank teller is Linda" usually mean the same thing.

Expand full comment

I read the second option as adding an implicit “not a feminist” to the first one. Adding a neutral statement for the first case regarding feminism would make it a fairer fight.

It’s hard to know what someone is really asking in the case of non sequiturs.

Expand full comment

Logic is hard.

Expand full comment

I'm having a hard time figuring out where the major points of disagreement are with the Carcinisation post. I read the banana as saying that the existence of cognitive biases led to a whole body of research that advanced the notion that human beings are always and everywhere slaves to an innumerable number of unconscious biases, so much so that this version of automaticity became indiscernible from woo.

Maybe I'm being too charitable, but I read the banana as saying that it's this woo model of automaticity that isn't real, not the underlying cognitive biases themselves.

As for the alternative framing, I think the phenomenological approach has a lot going for it. For example, here's my phenomenological explanation for the first word scramble game: I, like many Americans, started taking standardized tests at a young age. Success, or lack of success, on those tests played a deterministic role in how the first half of my life played out. So, when presented with a question in this form, I immediately treat it as some kind of exercise in pattern-recognition, because that's how most test-taking strategies work. In a sense, I've been primed, but I think it's more accurate to say that solving puzzles has been somewhat central to my existence; therefore it's only natural that I approach these sorts of tasks in this manner.

Actually, I'd be curious to see what happens when the same word scramble is presented to people without the same history of standardized test-taking or for whom such tests were never of great importance. Perhaps their answers would be less conforming to the pattern.

Expand full comment

"Actually, I'd be curious to see what happens when the same word scramble is presented to people without the same history of standardized test-taking or for whom such tests where never of great importance. Perhaps their answers would be less conforming to the pattern."

I'm such a person and yes, I must admit I unscrambled that as PURGATORY once I recognised the pattern. Funnily enough, ANGEL gave me the most trouble as I was going LEGAN? GLEAN? NAGEL? Or even NAGLE, an Irish surname 😀 It was only going by the pattern that I went "Oh yeah, 'angel'".

Isn't automaticity what we call 'instinct' in animals, and 'habit' in humans? We build up shortcuts for much of what we do, because much of what we do is repetitive tasks. Standing there for ten or fifteen minutes pondering all the variables to come to a rational decision isn't going to get the grocery shopping done, just pick whether you want carrots or turnips for the dinner tonight. The Linda Example below is what we do; based on limited information and impressions, what kind of person do we think Linda is? If in real life we're going to be making decisions like "I don't know enough about Linda to estimate if she's a feminist or not", then we're going to fall prey to scammers and deceits of all sorts ("Hm, this email from a Nigerian prince - how can I tell if that's legit or not? I certainly cannot say that Prince Masinga is a scammer based on this one email, that would be a Conjunction Fallacy!")

That's what the dating docs in a previous post are about, after all! Giving prospective suitors as much information as possible so they can make better decisions about whether or not they want to date you, rather than the "swipe on this, yes or no?" of dating apps. And as many people said in that post on dating, we do make decisions based on attractiveness before we start thinking about the other qualities.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I'm still ruminating about your last point, how we'd all probably have thought slavery was fine if we were born in the 1700's in the South. Yup, no denying it. I've thought about that a lot myself. There's this hard-to-resist fantasy that I, in my old-timey dress and lice-infested hair (which I would not be distressed by because everyone else had head lice too) would jump up and shout "Fuck this shit! We are treating these Africans like they are animals, with no rights and no feelings. Wake up, you filthy 18th century fools! They have all the same rights we do." But I know I would not have. At most I would have had some private wonderings about what it was like to be them, and whether my husband was a bit too hard on them.

There's a book I admire called How to Think, by Alan Jacobs (a better title would have been How to Be Fairminded) that discusses this issue at length. Jacobs' view is that while we experience our thinking about important issues as a private weighing of this and that, that's an illusion. Thinking is actually a collaborative, group activity, and one by-product of that is that people in the same culture tend to see many of the same things as obvious truths. He's pretty good on the subject of trying to free yourself from your bond with these socially acquired postulates, and on the related subject of working with your mind's instinctive hatred and devaluation of the outgroup. But I think he'd agree that our ability to do those things has limits.

I often feel that there's this buried subtext in Rationalism that it will enable all of us to shuck off the trance of belief in the truisms of our era, and think with perfect freedom and clarity. I think that Rationalism certainly helps, but there are limits. Jacobs talks about the challenge of thinking more freely and clearly in quite a different way from Rationalists, and I find his approach very helpful. He comes at it more from the direction of working with one's anger and self-righteousness -- you know, the feelings cluster that is stirred by an encounter with a member of the outgroup. He talks about ways to quell rage. One that works really well for me is to do an introspective hunt for ways I have made the same error the infuriating outgroup member has done. I can always find one -- it may be an error about a different topic, it may be a smaller version of the error (or it may not!), but there's always one. The point of doing that isn't to hate yourself or let go of your objections to the outgroup member's outgroupy idea. It's to break the illusion that they are a whole different, and lower, order of person. And also I can usually remember how I arrived at my version of the error, so that helps me understand how the outgrouper arrived at his.

For instance very early in this thread somebody jumped in aggressively with something like "Nope, nope, what you said about IQ being meaningful is false, IQ is a worthless concept." And it was irritating, and in fact Scott ended up banning the guy after his rage-to-content ratio got so high there was no content. But I've said some version of what that guy said, and I did it for the exact reason Scott suggested to the vehement anti-IQ poster: The topic of IQ makes me feel guilty and uncomfortable. Being smart is an important part of my identity and always has been, and that seems sort of ugly. I didn't become smart by effort, I was just born that way. If I knew someone who had been born wealthy and thought of her wealth as an important part of her identity, I would disapprove of that. And yet being smart is a part of a person that can't be removed the way wealth can. It's more of a real part. I handle the topic of IQ in a more honest way these days, but there certainly was an era when I would yammer on about how IQ was not as meaningful as people think.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

It seems that attitudes to slavery depended on where you were; in the 1700s in New England, slavery was different in that it ran more along 'traditional' lines and was compared to indentured servitude; early slaves had more rights, but of course once you make owning property profitable, people don't like their property having 'rights':

https://education.nationalgeographic.org/resource/new-england-colonies-use-slaves/

"While slavery grew exponentially in the South with large-scale plantations and agricultural operations, slavery in New England was different. Most of those enslaved in the North did not live in large communities, as they did in the mid-Atlantic colonies and the South. Those Southern economies depended upon people enslaved at plantations to provide labor and keep the massive tobacco and rice farms running. But without the same rise in plantations in New England, it was more typical to have one or two enslaved people attached to a household, business, or small farm.

In New England, it was common for individual enslaved people to learn specialized skills and crafts due to the area’s more varied economy. Ministers, doctors, tradesmen, and merchants also used enslaved labor to work alongside them and run their households. As in the South, enslaved men were frequently forced into heavy or farm labor. Enslaved women were frequently forced to work as household servants, whereas in the South women often performed agricultural work.

Enslaved Africans quickly replaced indentured servants on plantations in Virginia, Maryland, and other Southern colonies, but in New England, imported enslaved people were initially given the same status as indentured servants. This changed in 1641, when the Massachusetts Bay Colony passed laws for enslaved people differentiating enslaved labor from the indentured servants’ contract labor, which took away the enslaved’s rights.

Still, the New England colonies began to show differences in their approaches to slavery, even as slavery became more common in Massachusetts, Connecticut, and Rhode Island in the 18th century. The colonial government in Rhode Island—which had the largest enslaved population by the 1700s—tried, though ultimately failed, to enforce laws that gave the enslaved the same rights as indentured servants and set enslaved individuals free after 10 years of service. Although human trafficking continued to flourish throughout the 1700s, these first moves to break up human trafficking foreshadowed what was to come in the New England colonies."

So it would have been possible, were you a Rhode Island lawmaker in 1700, that you *would* have said "We are treating these Africans like they are animals but they have the same rights we do".

EDIT: I suppose I should say that Southern plantation slavery in the 1700s was *also* along the traditional model; certainly, Classical Romans would have recognised the "massive rice farms" as latifundia:

https://en.wikipedia.org/wiki/Latifundium

"Latifundia included a villa rustica, including an often luxurious owner's residence, and operation of the farm relied on a large number of slaves, sometimes kept in an ergastulum. They produced agricultural products for sale and profit such as livestock (sheep and cattle) or olive oil, grain, garum and wine. Nevertheless Rome had to import grain (in the Republican period, from Sicily and North Africa; in the Imperial era, from Egypt).

The latifundia quickly started economic consolidation as larger estates achieved greater economies of scale and productivity, and senator owners did not pay land taxes. Owners re-invested their profits by purchasing smaller neighbouring farms, since smaller farms had lower productivity and could not compete, in an ancient precursor of agribusiness. By the 2nd century AD, latifundia had replaced many small and medium-sized farms in some areas of the Roman Empire. As small farms were bought up by the wealthy with their vast supply of slaves, the newly landless peasantry moved to the city of Rome, where they became dependent on state subsidies. Free peasants did not completely disappear; many became tenants on estates that were worked in two ways: partly directly controlled by the owner and worked by slaves and partly leased to tenants.

Latifundia also expanded with conquest, to the Roman provinces of Mauretania (modern Maghreb) and in Hispania Baetica (modern Andalusia).

The latifundia also distressed Pliny the Elder (died AD 79) as he travelled, seeing only slaves working the land, not the sturdy Roman farmers who had been the backbone of the Republic's army. His writings can be seen as a part of the 'conservative' reaction to the profit-oriented new attitudes of the upper classes of the Early Empire. He argued that the latifundia had ruined Italy and would ruin the Roman provinces as well. He reported that at one point just six owners possessed half of the province of Africa, which may be a piece of rhetorical exaggeration as the North African cities were filled with flourishing landowners who filled the town councils.

The production system of the latifundia went into crisis between the 1st and 2nd century as the supply of slaves dwindled due to lack of new conquests."

Expand full comment

> The production system of the latifundia went into crisis between the 1st and 2nd century as the supply of slaves dwindled due to lack of new conquests."

See, this is the major difference between the American South and many previous examples of slavery - the hereditary caste-based nature. Most ancient cultures had slavery, but it often wasn't a hereditary status.

Expand full comment

> I'm still ruminating about your last point, how we'd all probably have thought slavery was fine if we were born in the 1700's in the South. Yup, no denying it. I've thought about that a lot myself. There's this hard to resist fantasy that I, in my old-timey dress and lice-infested hair (which I would not be distressed by because everyone else had headlice too) would jump up and shout "Fuck this shit! We are treating these Africans like they are animals, with no rights and no feelings. Wake up, you filthy 18th century fools! They have all the same rights we do." But I know I would not have. At most I would have had some private wonderings about what it was like to be them, and whether my husband was a bit too hard on them.

I think this entirely sensible observation, in the course of trying to decide how one determines truth and moral action, is the cousin of the concern that “we’ll be on the wrong side of history”. Except that “the wrong side of history” seems to come out of a belief that morality still depends on conforming to the beliefs of those around you, with an assumption that “progress” marches on and people in the future are always more enlightened than people in the past. So your task isn’t to consider all the evidence available to you, and the different ways people have seen this through the centuries and around the world, and reason: your task is to guess what the next generation will think.

Or perhaps “the wrong side of history” simply replaces any attempt at morality and truth with a fear of social judgement and embarrassment? Which I could see being most powerful for the sort of people most inclined to go along with the social flow in the first place. Is it the warped result of people who were genuinely trying to get to some sense of objective truth and morality, trying to persuade people who are driven more strongly by social judgement, and accidentally getting something that looked a bit like their message through to them? “You might be conforming to current values, but in 10 years time everyone will look back at you and cringe.”

It also sometimes feels like a way to avoid thinking about the things we already find horrifying, and don’t do anything about. I’d imagine there were plenty of people alive in the time of slavery who did find it horrendous, and chose to avoid thinking about it too much because it made them feel awful. How much could you really do? If you were living in Nazi Germany, and you knew your family’s lives depended on you not rocking the boat too much, what would you actually do? The idea that we would simply think it was obviously right, because everyone else acted like it was right, is true for some people, I think, but at most a comforting half-truth for anyone with the sort of mind that thinks about this.

Expand full comment

Doing or believing something to be on the "right side of history" is incredibly cowardly. A truly courageous and moral person fights for what's right even if they know for a fact they'll end up on the wrong side of history in the future.

Expand full comment

"how we'd all probably have thought slavery was fine if we were born in the 1700's in the South. Yup, no denying it."

I dunno. If one is currently dissenting from the respectable political options isn't it likely that they would similarly dissent in the past? Especially if their disagreements with the modern options are based on similar concepts?

Expand full comment

"If one is currently dissenting from the respectable political options isn't it likely that they would similarly dissent in the past?"

A small fraction would have. But also remember how many directions dissent can go in, and how unlikely it would be to happen to match up with the mores of the next generation. A 1700s dissenter might have been an ardent vegan or monarchist as plausibly as an abolitionist. The winds of politics shift direction very chaotically.

Expand full comment

Right, but the direction of the dissent isn't entirely random. If you're a pro-individuality, negative-rights libertarian today, how would that get twisted to being "yeah, let's enslave an entire group of people."

Admittedly, as you say that's a small group of people today. But a larger group here I'd guess.

Expand full comment

"the direction of the dissent isn't entirely random"

That's fair. But there is a large selection of possible political arguments that someone might be passionate about, and, typically, whatever they choose may wind up being "on the wrong side of history", or just orthogonal to the concerns and mores of the next century. In the specific case of the 1700s, I'd guess that pro- and anti- independence arguments would draw at least as much attention as pro- and anti- slavery.

If slavery _was_ someone's focus, re "If you're a pro-individuality, negative-rights libertarian today, how would that get twisted to being "yeah, let's enslave an entire group of people."", well, one way a 1700s libertarianish advocate might view the argument is to view slaves _purely_ as property, and to champion the rights of property owners to do as they see fit with their property. ( Admittedly this would probably not have been a _dissenting_ view at the time - although, if there were any legal restrictions on what slaveowners could do with their slaves, someone with a libertarianish view at that time might have dissented with respect to those restrictions. )

( BTW, I'm not intending "libertarianish" to be a critical or mocking term, but rather am just using it to indicate that the 1700s are a vastly different time than our own, and our politics won't cleanly correspond to 1700s politics. A 1700s dissident can't very well quote Hayek. "proto-libertarian" might also work, but I want to be a bit vaguer than that. )

Expand full comment

I agree with the desire to avoid anachronistically mapping values of one time to another. But I don't think your account of property rights against the state fits the time at all. I could be wrong, but my *understanding* is that the feudal notion of property was fundamentally entwined with state power, and it wasn't until Adam Smith and the physiocrats in the late 18th century that the idea of property rights as a restriction *against* the government was developed.

In fact it seems more likely the pro-slavery position would have been linked with proto-leftist notions of equality, since that's exactly what happened in the mid-19th century US. Where "equality" of course, as in the abortion debates of today, is understood as only applying to the slightly-disadvantaged members of the group of people powerful enough to vote and make their voices heard (adult women today, white southerners back then) and not at all to the completely powerless group they're exploiting (who probably aren't people anyway and even if they are, who cares about *their* most basic rights when that might slightly lower *my* relative status and prosperity?) Equality treated, as it usually is, as a method for advancing your own group's self-interest, and not at all as an intrinsic moral principle for caring about those who have no voice and no power.

Expand full comment

Many Thanks! Yeah, it makes sense that the notion of property had to make a transition from a feudal one to our current one, and I'm very very unsure of what the time frame for that transition was.

"In fact it seems more likely the pro-slavery position would have been linked with proto-leftist notions of equality, since that's exactly what happened in the mid-19th century US." Interesting! That's one connection that I would never have guessed. Much appreciated!

"understood as only applying to the slightly-disadvantaged members of the group of people powerful enough to vote and make their voices heard"

Excellent point!

Expand full comment

You'd probably be up in arms about the Jay Treaty and closer ties with Britain, rather than worrying about the slaves around you.

Expand full comment

Sounds plausible!

"It angered France and bitterly divided Americans. It inflamed the new growth of two opposing parties in every state, the pro-Treaty Federalists and the anti-Treaty Jeffersonian Republicans. " ( https://en.wikipedia.org/wiki/Jay_Treaty )

Yes, that indeed sounds like something a 1790s dissident would focus on.

Expand full comment

What's interesting to me is the question of what makes someone an exception. The vast majority of white Southerners went along with slavery for the obvious reasons (ie, everyone else is fine with it). But there were some who did "jump up and shout". What made them different?

The main person I'm thinking of is Angelina Grimke. Grew up in the South, and at the age of 24 she stood up in front of her church and gave a speech condemning slavery and calling on the entire congregation to condemn it and fight against it as well. They demurred. She ended up becoming a Quaker, moving to the North, and dedicating her life to the abolition of slavery.

What made her different? I can see two possible factors. One is a tendency, whether inborn or learned, to reject conformity. When Angelina was 13 she refused to be confirmed into the Episcopal church because she did not agree with the creed she would have to vow. That tells us she, at that young age, already cared more about what she thought was right than conforming to society. Another factor is what people are exposed to: at the age of 21 she joined the Presbyterian church (apparently she could agree to their creed) and the pastor at that church was against slavery. He had been born in the North, and while he believed that it would be practically disastrous to abolish slavery he also believed it was morally wrong. So she had one person in her life who believed slavery was evil: was that necessary to her stand? Or would she have found her way there anyway?

What puts a spine in a person?

Expand full comment

I'm not sure spine is the main component. I think a lot of what leads people to take non-conforming stances is not courage or rebelliousness, but genuine difficulty seeing things the way others do, and growing out of that a failure to feel a very strong bond with others whose circumstances are similar. In Angelina's case, that stuff would manifest as a feeling of not quite grasping what people's takes on Episcopalianism and slavery were. In other words, an important component is being a little bit Aspie.

I think I'm a little bit Aspie, and I have never been interested in feminism, and my lack of interest dates from way before the period when it got woke, hostile and weird. While of course I see parts of life where my experience is very like other women's, and quite unlike men's, those parts of life just never seemed like the most important ones to me, and I never was able to get even a little bit of that "sisterhood" feeling. And ceremonies don't work for me. I hate weddings and funerals, even if I love the people getting married or loved the one who died. The feeling of being part of a group that has come together to celebrate or mourn just doesn't happen for me, & the customs -- song, dance, decor, eulogies, roastings -- seem arbitrary, weird and boring. I don't feel superior to the people who are moved by ceremonies; I feel alienated and perplexed. Anyhow, I think the alienated and perplexed take on the conventions of one's time and place is often the root of somebody's rejection of some important postulate. And then, yeah, add some spine too to account for their being so open about their divergent view.

Expand full comment

This all makes a ton of sense to me.

This seems true to me as an important part: "a failure to feel a very strong bond with others whose circumstances are similar"

Non-joiners unite!

Expand full comment

Likewise. Though I generally don't think of it as a matter of thinking through mores and picking more "defensible" ones, but rather as thinking of the whole mess as arbitrary and chaotically political. "Ought"s aren't "is"s, aren't factual claims, aren't true or false. Everybody has preferences (though they sometimes change over time too), and preferences can sometimes be negotiated.

Expand full comment

> The topic of IQ makes me feel guilty and uncomfortable. Being smart is an important part of my identity and always has been and that seems sort of ugly. I didn't become smart by effort, I was just born that way. If I knew someone who had been born wealthy and thought of her wealth as an important part of her identity, I would disapprove of that.

You were born lucky in some aspect (just like the other person who was born rich); own it. Because people who were born lucky have basically three options:

1) Admit that they were born lucky.

2) Pretend that the luck is actually a result of their own effort.

3) Deny that they are lucky, and insist that they are exactly the same as everyone else.

Are the options 2 and 3 better? In my opinion they are much uglier, because they allow you to look at all the less lucky people and call them lazy (option 2) or entitled (option 3).

You may feel guilty about your unfair, undeserved luck, but denying it doesn't really make it go away, does it? If you want to make the world more fair, you can simply pledge to use a part of your luck for the benefit of the less lucky ones. IQ correlates with income, so you may pledge to donate 10% of your income to effective altruist causes. Or maybe if you are not really good at converting IQ to dollars, do something else, for example contribute to open-source software. Or you can do some smart action in real world (as opposed to angry tweeting) to directly improve it. Provide free tutoring to poor kids in your neighborhood. Anything.

This ancient concept is called "noblesse oblige" and is often ignored these days, because people instead choose the remaining options, and smugly pretend that everyone can "learn to code" and everyone can build dozens of startups until one of them makes them billions, so if you are poor, it's your own fucking fault for being lazy. (It definitely doesn't have anything to do with some people having more IQ or money than others, because everyone on the right side of history knows that IQ isn't a thing, and being rich isn't an officially recognized privilege.)

What is, is. Admitting it doesn't make it more true (or more unfair), denying it doesn't make it go away. Feeling guilty about something you didn't do is just stupid signaling. Stop being stupid. You have scarce resources, use them.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Yeah, I agree, I mostly moved on to the point of view you're recommending quite a while ago. But one thing you wrote clarified for me the roots of my guilt. "You may feel guilty about your unfair, undeserved luck." You know what, that's not what I mostly feel guilty about. What I feel guilty about is feeling proud of being smart -- because, as you say, my smarts are a piece of luck, not an accomplishment. Still, I'm pretty sure most people do this. Whether what they're great at is the SAT or just naturally looking good in tight jeans, people feel proud of the ways they are naturally excellent. It's weird to think of it as luck, because it raises unsettling questions about identity. If smarts are a fortunate thing that happened to me, who's me? So there's a me who is neither smart nor dumb, and does not have a genetic predisposition either to long shapely thighs or to short chubby ones? And in fact that me existed before it received any of the packages that are distributed, and about which it will later feel proud or ashamed or pleased or vexed -- gender, race, family income, allergies or the lack of them, a predisposition to mental health or to bipolar, nationality, athletic ability, etc etc? *That* featureless blob is me, and all those characteristics I just named are merely pieces of good or bad luck with which I am presented?

Expand full comment

People feel like there is some eternal unchanging I/self/soul that is independent of being long or short, young or old, smart or stupid, man or woman, black or white, aspie or normie, perhaps even human or sunflower. (Except, when they get drunk and do something stupid; then they suddenly go: oh, that wasn't really me, that was just alcohol talking, I didn't really mean it.) Fact is, "me" is just a configuration that currently happens to exist. Change something, the configuration changes. Some changes have a smaller impact on the thinking/feeling/behavior, some have greater impact. Changing everything would change everything. But even a small change, such as eating pizza and no longer being hungry, changes a little.

Our experience is that in everyday life some things change imperceptibly slowly (i.e. it feels like they do not change at all), such as traits and long-term memories. Other things change cyclically, such as being tired/rested, hungry/full, angry/calm; here the basic emotional intelligence is being aware of this fact and not overreacting to your current state (don't do things that you will predictably regret later). So it is natural to develop a model where the long-term things are "me", and the short-term ones are "not me"; it reflects whether you should predict that the thing will be the same or different tomorrow. This model is sometimes disrupted in various ways, when a seemingly long-term trait changes suddenly; it could be in a bad way, when a person suddenly loses their health or wealth; it could be in a good way, when a person develops a skill or experiences a thing they previously thought impossible for them.

Another important aspect of "me" is the continuity; that means the memories of experiences and decisions of the previous "me". Even if your important trait suddenly changed, the other traits would stay, and the memories would stay. So it makes sense to imagine any trait changing, and concluding that it is not "me". Even if all traits changed, the memory/continuity would stay. Though if you lost all your traits *and* the memory at the same time... well, it's hard to imagine what that would feel like, because losing the memory would make it feel like two separate stories, rather than one story of a dramatic change.

I suppose, on one hand, you could feel lucky, on the other hand, you should understand that the luck is temporary and it will expire when death or dementia comes. It's not like you are "forever smart" (having some eternal essence of smartness) -- but not because you aren't smart, but rather because you are not here forever.

> *That* featureless blob is me, and all those characteristics I just named are merely pieces of good or bad luck with which I am presented?

Those characteristics (plus the memory of continuity) are "you".

There is no featureless blob, however there are things that are the same for all/most/many humans, so they may feel obvious/invisible. Such as, having awareness in the first place, being able to imagine different possibilities, and wondering about the true nature of "you". Those characteristics are also part of "you". (And you can also consider them luck, because humans could have evolved differently, or even not at all.)

Not sure if this somehow helped to answer your questions. From my perspective, realizing how ephemeral we are just makes any "pride" feel absurd. Also, as a transhumanist, I believe that even the smartest humans today will seem stupid from the perspective of tomorrow... assuming that there will be a good tomorrow, of course. It will be like asking which one of us was better at spelling while we were in the kindergarten. Unfortunately, we are all in the kindergarten now, and we must collectively get the spelling right, or we will all die. What a horrible universe.

Expand full comment

I think you can avoid this worry by thinking of your intelligence (and not "you" more broadly) as the one doing the talking, so to speak. Instead of you being proud of something you possess, your intelligence is being proud of its own existence. If it's better to think about things than not think about things, then when you're thinking and others aren't (or can't) you can observe that you (currently configured as a mind that is thinking) are better than them (as currently configured minds that are not thinking, or are not capable of thinking.) It doesn't have to involve a moral claim of being better as an overall person.

I mean, as someone who's revolted by the idea of solving any problem or dispute with physical violence (personal violence, as opposed to invoking the law), I certainly feel like I'm better than those for whom violence comes naturally. Even if neither I nor they have consciously chosen these preferences, but they're the result of innate personality or upbringing, there's still a sense that my desires (as a product of external antecedent forces) are better than theirs. We may both be formed entirely by random forces, but the product the random forces formed in my case is better than the one they formed in their case (at least in respect of violent tendencies). Someone innately more forgiving than me might say the same thing: their natural tendency to want to forgive is better than my natural tendency to want to hold a grudge. Even if neither of us has ever had any control over it.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

The argument that thinking is a collaborative, group activity seems intuitive to me. But my first reaction wouldn't be to focus on how to quell the anger and self-righteousness that arises when you encounter an outgroup member; rather, I'd try to seek an environment where I don't have to encounter so many of the outgroup in the first place. Why torture myself?

To be fair, progressives' veneration of diversity being what it is, your focusing on how to navigate the crazy-making situation is eminently practical.

I'll add as an edit: It's not that I'm intellectually incapable of engaging with and evaluating different ideas. My point is that I'd have a higher quality of life if I lived somewhere where I didn't have to worry about being cut off on the road, or worry that a blue-hair kindergarten teacher will try to groom my child.

Expand full comment

Where we differ is that I don't believe that most outgroup members are inferior human beings. In all groups, including whatever counts for you as an ingroup, there are some people who are crazy, have quite low intelligence, or who treat people as objects from whom value can be extracted, rather than as fellow sentient beings. So yeah, a few outgroup members are creeps, but not most. Believe me, I understand your profound anger at some of these people. I feel it too, especially in encounters that start off on a topic where we strongly disagree, and where they lead with some particularly irritating comment -- sarcasm, sneers, insults, etc.

My work and interests have led to me logging a lot of time with people with whom I disagree profoundly about various important things, and often I haven't discovered how much we diverge until we've spent a lot of time together, and I've come to like them. They've told me stories that make me laugh my ass off. They've listened with interest and understood the points I was making when I told them about something in my life. They've demonstrated kindness and common sense. Here are some things I have discovered about some of these people after already liking them:

-They are big fans of some politician I think is an idiot and a monster.

-They are Christian fundamentalists.

-They think women should do most of the housework, cleaning and child care, because they are better suited to it than men and less well suited to higher education and a career.

-They think mental health professionals are charlatans (I am a psychologist).

-They are passionate hunters. (While I guess I don't think hunting is morally wrong, I do find the thought of it very disturbing. I am fond of animals and have a hard time understanding how someone could enjoy killing wild animals who seem to be enjoying their little animal lives. And of course some don't die quickly, but lie twitching on the ground in agony.)

If these people who are, in fact, in my outgroups were riddled with moral and intellectual rot I would have picked up on that over the course of knowing them. They just weren't. They were kind, bright, and funny.

And by the way, I don't have blue hair now, but I did last year. To be fair, it was only a small part of my hair, maybe 10%. Also I was a nursery school teacher in my 20's. Nobody had blue hair then, but if there had been any blue hair dye around I probably would have dyed my whole head of hair blue. And then, DM, I would have been a blue-haired nursery school teacher. I hope you will not be surprised to learn that despite my feeling that blue hair is cool there is zero chance I would groom a child for sex. I don't feel the faintest sexual interest in children. And if I did I'm pretty sure I would do whatever it takes to keep myself from acting on that impulse, including things like drugs or surgeries that permanently shut down my ability to feel sexual desire. Hair color is not even slightly predictive of pedophilia.

Hey, did you still think I'm an OK person, or am I now a member of one of your outgroups?

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Thank you for the detailed and well thought out reply.

It's interesting the characteristics people will naturally look to when defining ingroup and outgroup. For me it's not things like race or religion, but rather perhaps life goals and the ways they work to achieve them. For example, if I have a neighbor who takes good care of their house, to me they're a kindred spirit; we both care about the same personal goal and by being in proximity it's easier for us both to succeed. Ingroup are people who create real economic value (teacher, healthcare worker, garbageman, engineer, factory worker, etc); outgroup are rent-seekers and other sociopaths (scammers, etc.). Outgroup (libtard) is motivated by virtue signaling status games, psychological projection, and a pathological preference to help people far away from them (in location, in similarity by situation, race, mindset, etc); the end result being they're authoritarian around topics that should be considered personal freedoms (guns, vaccines, gender ideology grooming children), and laissez-faire around topics (for ex. mass legal and illegal immigration, public criminality) that should be paternalistically controlled to maintain the quality of everyone's life.

In a way my experience somewhat reflects yours. I work in a high-tech field in a coastal city so I already know that I won't agree politically with most colleagues, but it doesn't stop me from becoming friends and/or having good working relationships with them regardless.

I see what you're getting at: that in your experience the outgroup aren't riddled with moral and intellectual rot. (Rhetorically: how does that compare to my sentence above about the rot of virtue signaling status games, projection, and a pathological preference to not help those closest to them?)

I suspect we hold a number of different political views. I think Trump is a hoot! I believe women are suited to higher education and careers, but I'm also realistic about how (a) fat tails mean the true autist geniuses in a tech field are men (but that's only a small part of what's necessary to advance tech), and (b) women are better suited to early childcare (but both parents need to do the work, like for ex. long overnight shifts with the baby to let mom rest). Hunting is less morally repugnant than buying meat in a grocery store; although I will accept shade about it from vegetarians/vegans. With respect to the grooming term, do you believe it's right to teach about sex and gender ideology to anyone under 18 (i.e. groom them to "become trans"), especially when it's counter to the wishes of the child's parents?

It's a hard problem. It comes down to moral relativism / absolutism, and the question of how different can people be politically while still tolerating each other in a society. Your kind, bright, and funny colleagues, do they support laws and policies that you would simply find intolerable to live under? If so, the end result still seems bad.

To what extreme can we live and let live? If an open-minded progressive is ok with publicly funded schools that don't push the cult of trans and word-salad LGBWTFBBQ (where in the progressive's view, repressed and persecuted LGBWTFBBQ children can't get the help they need) and don't do crocodile tears land acknowledgements, can I accept the option of publicly funded schools that do push that cult (where in my view, children can be brainwashed into war-crime-level evil sex change surgeries)? Another example: I don't care about abortions, but I'm going to need to see a complete cessation of anti-gun activity/laws/etc before I can stop voting for politicians who happen to be pro-abortion. Would you make that trade?

Expand full comment
Sep 2, 2023·edited Sep 2, 2023

Well, DM, I responded with a shit ton of thoughts. I’m not quite sure why I took so much time over this but here, for what it’s worth, is what I really truly think about the topics you raised in your response.

“Outgroup (libtard) is motivated by virtue signaling status games, psychological projection, . . .” Oh, yeah, I loathe wokeism too. I didn’t even think of it in my earlier post, because none of the woke people I know are people I got to know well before I found out they were members of that particular outgroup. Woke people usually make sure really early on that you know they are WOKE. A lot of my psychotherapy patients are grad students at a very politically liberal university in my town, but they all make sure I know they’re woke within the first 15 mins. of meeting me. All the white males are quick to tell me that even though they have come to see me because they are miserable, they are aware that they have less justification for being miserable than other people because white males are privileged people. The women find ways to let me know quickly too. I can tell that these people are not trying to show off their virtue — what they’re doing is quickly telling me they’re woke because they assume I think like the other people they know at the university. They think they need to quickly give proof of their wokeness because otherwise I will start hating on them and attacking them for not being liberal enough.

I absolutely agree with you about the authoritarian underbelly of wokeism: People who have a really bad case of it are insanely entitled, cruel and controlling. Seem to think their values are so admirable that they should be allowed to rule the world. Think they’re very compassionate because they want programs that help all the needy of the world, but they hate the guts of people who don’t agree with them, and somehow they don’t notice that that’s kind of a lapse in their compassion. But here’s the thing about my patients: While they are all convinced that the most libtard position possible is the right one about every issue, I don’t get many glimpses at all of the hateful side of wokeism as they talk about the people in their lives, including non-woke relatives and friends. Mostly what I hear is fear and guilt. They are all acutely aware that in the privacy of their minds they have thoughts and feelings that are not woke, like for instance: They feel nervous if they’re walking alone at night and pass some young black guys. They’re males, and aware that a woman’s appearance matters more to them than her other qualities when they’re dating. They do not feel love and compassion for every single minority group member and bum they see. And they are extremely afraid of being found out and being publicly attacked by their peers or even by faculty if they say anything that reveals the imperfections in their wokeism. They have seen that happen to a lot of people.

In the past I would have said wokeism at universities is a fad, like 60’s attitudes were in the 60’s, but now I see it more like a religion, one that liberal universities do a lot to enforce. So mostly I see my students’ wokeism as something they’ve been indoctrinated in. Their “woke” attitudes don’t go very deep. Once they realize that I am interested in what it’s like being them and don’t give a shit how woke they are, they sound pretty much like any other group of smart, ambitious, unhappy 20-somethings. What’s really on their mind isn’t some woke agenda, it’s career, sex, family, self-esteem, friends, and worries about failing in some or all of these areas and having a lousy life. So they are like the people I described in my earlier post. Except when they are spouting woke ideas, they do not seem particularly cruel or entitled or judgmental. All of them are smart as hell, and most of them are also kind, funny, honest people.

“Your kind, bright, and funny colleagues, do they support laws and policies that you would simply find intolerable to live under?” I just went through a pandemic while the guy in charge did an absolute rat shit job of assessing the situation and likely developments, making the right judgment calls, pushing the relevant agencies to function quickly and sanely, and getting accurate information out to the public. (And by the way, when I say he managed it badly I am not saying we should have had more lockdowns. That is not my gripe.) If you have any doubt that he managed it badly, look up how the US ranks in covid deaths per million. Last time I looked we were about 220th out of 240 countries. Lots of the places Trump called shithole countries did way better than our big fat rich one. If I can tolerate living through a pandemic with Trump & Biden in charge, I can tolerate anything. So as for people with different beliefs running the country in ways I disagree with — eh, I’m used to it. But I think everyone, including the people in my outgroups, would think more clearly about policies if most of the various groups weren’t so convinced that everyone but their faction is a bunch of hate-filled fanatics who are so motherfucking stupid they don’t really even deserve to vote.

“With respect to the grooming term, do you believe it's right to teach about sex and gender ideology to anyone under 18 (i.e. groom them to "become trans"), especially when it's counter to the wishes of the child's parents?” I think if I ran a high school I would be willing to teach kids about certain things even if the parents objected to the kid learning about it: For instance I would teach them about STD’s and birth control whether the parents liked it or not. Also evolution and other things that fundamentalists object to. But trans stuff — no. For one thing, there really are not many known facts to teach. I don’t think it’s clear what fraction of kids have some sort of wiring that guarantees they will grow into adults with gender dysphoria that can only be treated by a sex change — though clearly the percent is low. And I don’t think there is any reliable way to tell whether a teenager who thinks he’s trapped in a body of the wrong gender is going through a phase, influenced by a fad, or truly has long-term gender dysphoria. And besides, I don’t think anyone younger than 18, at the very least, has enough life experience and judgment to decide whether to start taking hormones and lopping off parts of themselves. It would be like letting an 18 year old decide what to do with a million dollar trust fund that they will inherit at age 25. So I wouldn’t teach stuff about trans in a class. (But if a student asked me questions I would answer them truthfully, even if I knew the parents would not like it.)

“Another example: I don't care about abortions, but I'm going to need to see a complete cessation of anti-gun activity/laws/etc before I can stop voting for politicians who happen to be pro-abortion. Would you make that trade?” I don’t really understand the question. The 2 issues seem unrelated, and I would pay attention to candidates’ positions about both when I decided who to vote for. I don’t see any point in withholding a vote for a pro-choice candidate because he’s anti-gun. Neither the candidate nor the people who want both abortion rights and more gun control are going to be influenced by your stance, because they won’t know about it. It’s not like they’re going to be clustered together all worried, saying “holy shit, what should we do? DM says he won’t vote for a pro-choice candidate unless he’s against gun control. Wow, we are really in a pickle now. Oh wait, I have a good idea — what if we offer to let abortions be carried out by using little guns on the fetus? Then everybody will be happy.”

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

I think the main value of DSL is that it provides a venue to get to know people across the political aisle as more than just evil idiot caricatures: despite most of the forum content being vitriolic political arguments, there are also fun posts and effortposts and so on.

Expand full comment

What is DSL?

Expand full comment

Data Secrets Lox. It's an offshoot of Slate Star Codex with a strong conservative bent.

Expand full comment

<blockquote>I'm still ruminating about your last point, how we'd all probably have thought slavery was fine if we were born in the 1700's in the South.</blockquote>

I don't know how many people in the American South actually thought slavery was fine. It says, right in the Declaration of Independence, that “all men” have the “unalienable right” to “liberty.” If the Declaration of Independence is right, then slavery is wrong. Of course, just because Jefferson acknowledged that slavery was wrong didn't mean he was actually willing to do anything about it.

The idea that slavery was a positive good was promoted by Calhoun in a speech in 1837. He realized that the position that slavery was evil, but we couldn't get rid of it, at least not yet, was and would be increasingly untenable. In 1854, George Fitzhugh devoted a chapter of his book <em>Sociology for the South: or, The failure of free society</em> to arguing that the underlying principles of the Declaration of Independence are wrong because, as he acknowledges, those principles are incompatible with slavery. I don't think these positions were particularly convincing to those who didn't have an economic incentive to believe them.

One way of looking at the American Civil War is that it was caused by slaveholders thinking they were losing a war of words, and hoping they would have more success in a war fought with bullets.

Expand full comment

"I don't know how many people in the American South actually thought slavery was fine." Well, maybe for many it was in the category of stuff in your life that you acknowledge is bad, but let yourself off the hook for. Most of us have plenty of things in that category. I certainly do. For instance there are lots of things I think are bad that I could probably have some impact on by joining some activist group or blogging about the issues, but I don't do that, because my day is filled with other things and I don't want to give any of them up. I'm sure some things I own are made in factories in poor countries by exhausted people, maybe exhausted 11 year olds, working for a pittance. But, jeez, I'm not willing to go without an iPhone.

Expand full comment

I suspect the boom era of priming studies popularized by Malcolm Gladwell had to do with academics trying to make big bucks off marketers by assuring the MBAs that marketing / advertising wasn't an art, it was a science. You didn't have to be creative and work hard to come up with effective advertising, you could just prime consumers into buying your product with scientific tricks.

My guess is that in reality you can prime some of the customers some of the time into buying your stuff, but that lucrative priming techniques cycle in and out of effectiveness.

That was, in effect, more or less, Warren Buffett's explanation for why he bought 20% of the Coca-Cola corporation: it's really hard to create a brand as good as Coke's, so of course he wants to own a big chunk of a super-brand created by a century of smart, expensive, and inordinately effective investment.

Expand full comment

"I suspect the boom era of priming studies popularized by Malcolm Gladwell had to do with academics trying to make big bucks off marketers by assuring the MBAs that marketing / advertising wasn't an art, it was a science."

The Nudge Unit, in Britain. Set up to make people behave more virtuously (where by "virtuously" was meant "comply with government regulations"), and once it seemed to be working, they spun it off into a money-making operation (the British government does this a lot with scientific backing, expecting that research will lead to commercial opportunities):

https://en.wikipedia.org/wiki/Behavioural_Insights_Team

"BIT was set up in 2010 by the 2010–15 coalition government of the United Kingdom in a probationary fashion. It was established at the Cabinet Office and headed by psychologist David Halpern.

In April 2013, it was announced that BIT would be partially privatised as a mutual joint venture.

On 5 February 2014, BIT's ownership was split equally between the government, the charity Nesta, and the team's employees, with Nesta providing £1.9 million in financing and services. Reportedly this was "the first time the [UK] government has privatised civil servants responsible for policy decisions." The Financial Times expected it "to be the first of many policy teams to be spun off as part of plans to shrink central government and create a private enterprise culture in Whitehall."

In December 2021, the group became wholly-owned by Nesta. ...UK government departments that had previously received policy advice for free now pay for the service, as the cost of maintaining the team is no longer borne by government."

https://www.nesta.org.uk/press-release/nesta-acquires-behavioural-insights-team/

Expand full comment

They were right about marketing being a science, of course. Given any situation, there will always be an objectively optimal way of marketing your product to generate maximum profit. That's the kind of thing that AI is actually good at figuring out (assuming it's given enough data to work with). My fear is that eventually, in an effort to fully optimize its marketing efforts, an AI stumbles upon a method that's bordering on mind control. Hopefully the human brain is not that vulnerable, though given the already high effectiveness of marketing on the masses, I'm not getting my hopes up.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

The conjunction fallacy is not a fallacy. To think it is, is to commit the fallacy of thinking that natural language statements reduce to logical propositions or, to put it another way, that judging the probability of the truth of the natural language statements about Linda is exactly equivalent to deciding which columns to tick on a spreadsheet about Linda, or where to place her on a Venn diagram.

The point is Paul Grice's theory of conversational implicatures (and incidentally this is a free-standing theory, not an ad hoc way round the Linda problem), which states inter alia that conversational statements come with the implication that they are maximally informative: to say "Linda is a bank clerk" is to say "Linda is a bank clerk *and that is the most interesting thing I have to say about her*". A moment's thought shows this to be true: if you met an old mutual acquaintance who told you she was a b.c., and subsequently found the acquaintance also knew all about the activism, you would ask why he had deceived you.

Here's another way natural language is not equivalent to logical propositions: I don't know if this works in American English, but in British English a rude way to question a claim of status, e.g. "I am the Professor of philosophical logic at Harvard", is to say "Yeah, and I am the Queen of Sheba." To think this means I would tick the "Queen of Sheba" box when filling in a spreadsheet is to make a fairly obvious mistake. So what we have in logical terms is A implies not-A and also not-B, which is impossible in logic but in practice easily understood.

This is not difficult. I would be a bit concerned to learn that the fallacy is still committed when the claim is amended to "Linda is a bank teller whether or not she is active in the feminist movement", so am disappointed to read in wikipedia that "If the first option is changed to obey conversational relevance, i.e., Linda is a bank teller whether or not she is active in the feminist movement the effect is decreased, but the majority (57%) of the respondents still commit the conjunction error." This shows that 57% of people are profoundly stupid, but also doesn't destroy my point, because the new wording takes us out of the realm of conversation - it is not something anyone would ever say - and nobody is expecting the rules of conversation to be followed.

edit for misplaced quote marks

Expand full comment
author

I think you are explaining and bounding the fallacy, not disproving it or showing it to be anything other than a fallacy.

Expand full comment

No. On my/Grice's reading, the "fallacious" answer expands to "L is just a bank clerk, she gave up on all that social justice/activism stuff and never, despite being outspoken and very bright, replaced it with anything else interesting enough to tell you about." It is reasonable to expect the chances of that being true to be lower than the alternative. It is certainly an empirical, not logical, question.

I think it is important to realise that the rules governing conversation are not less rigorous than those governing logical propositions, just different. There is an analogous point about grammar made by I can't remember who; "I didn't say anything" is standard English, "I didn't say nothing" is "bad" English but just following a different set of rules - it doesn't just abandon the rules and say that "I didn't say othgni" is as valid as anything else.

Expand full comment

Linda's story is a dating doc! It tells us her age, personality, and interests. If the Conjunction Fallacy is correct, then nobody should make a decision on whether Linda And Me would be compatible based on that, since we don't know. "Oh, she likes cooking, researching old recipes, studied at a Cordon Bleu cookery school, and makes the Thanksgiving, Christmas and Easter family dinners for her extended family? Hmm, how can I possibly extrapolate from that information that we will have interests in common since I love going out to restaurants, trying new dishes, and helping organise family get-togethers?

Which is more probable:

(1) Linda is a bank teller

(2) Linda is a bank teller and we would hit it off based on shared love of gastronomy

Well clearly option (2) is *completely* ruled out due to the Conjunction Fallacy!" 😁

Expand full comment

I understand that this comment is intended to be humorous, but I am not sure whether you actually think that option 2 is ruled out by the conjunction fallacy (at least in the minds of people who believe that the conjunction fallacy is a fallacy), in the sense that "nobody should make a decision on whether Linda And [You] would be compatible based on that, since we don't know"?

Expand full comment

Another thing people may be wrestling with here is that the conditions "bank teller" and "feminist" are not necessarily discrete and unlinked. That is, there may well be circumstances in which Linda is very unlikely to be a bank teller *unless* she is also a feminist.

If the population of female bank tellers who are also feminists approaches 100% (one might imagine 1930s America or some current day developing country where women mostly don't work outside a home or farm), then isn't the answer to the question of which is more probable either A (if not all female bank tellers are feminists) or *neither* (if they are)?

Getting the wrong answer on the Linda test may not just be a matter of misperceiving the nature of word problems. It likely also has to do with the way people generalize their own experiences and mediated "experiences" in the world.

Expand full comment

> That is, there may well be circumstances in which Linda is very unlikely to be a bank teller *unless* she is also a feminist.

Even in those circumstances, the probability of being a bank teller (whether feminist or not) is greater or equal than being a bank teller and a feminist. So you would still be falling into the conjunction fallacy if you chose "bank teller and feminist" even in a world where it is very unlikely to be a bank teller unless one is also a feminist.
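The point can be made concrete with arithmetic. A minimal sketch, with made-up numbers for a world where almost every bank teller is a feminist:

```python
# Hypothetical numbers: in this world, 99% of bank tellers are feminists.
p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.99   # P(feminist | teller)

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_teller_and_feminist = p_teller * p_feminist_given_teller  # 0.0495

# The conjunction can approach P(teller), but can never exceed it.
assert p_teller_and_feminist <= p_teller
```

However strong the link between the two traits, the conjunction only ties P(teller) in the limit where literally every teller is a feminist; it never beats it.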

Expand full comment

If equal, there's no conjunction fallacy. If nearly equal, it's not really worth quibbling about.

(I'm not suggesting CF doesn't exist, just that this may not be the least challenging example of one.)

Expand full comment

The argument is that, in natural language, the answers become:

1. Linda is a bank clerk [and not a feminist]

2. Linda is a bank clerk and a feminist

So, in natural language, the fallacy as it pertains to this particular example dissolves, because 1 is a disjoint set from 2.
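Under that exclusive reading the comparison changes character, which a toy calculation (the probabilities are invented for the example) makes visible:

```python
# Hypothetical: suppose 90% of bank clerks in Linda's cohort are feminists.
p_clerk = 0.05
p_feminist_given_clerk = 0.90

# Reading option 1 as "clerk AND NOT feminist" makes the two options disjoint.
p_clerk_and_feminist = p_clerk * p_feminist_given_clerk            # 0.045
p_clerk_and_not_feminist = p_clerk * (1 - p_feminist_given_clerk)  # 0.005

# Under the exclusive reading, choosing the conjunction is simply correct.
assert p_clerk_and_feminist > p_clerk_and_not_feminist
```

So whether the "fallacious" answer is actually wrong turns entirely on whether respondents read option 1 inclusively or exclusively.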

Expand full comment

This is true when you present it to people as a choice between 1 and 2.

But I think this effect also applies when you give the option 1 to some people, the option 2 to other people; plus you give both of them the same alternatives 3, 4, 5 (which are incompatible with Linda being a bank clerk). So some of them have options 1, 3, 4, 5, and some of them have options 2, 3, 4, 5, and you just ask them to distribute the probabilities between the options they have.

A longer story just sounds more likely.

(I am not sure about this; I do not remember the exact details of the experiment.)

Expand full comment

Sure! I wasn't forwarding the argument, just trying to explain what it was.

Edit: Well, that's not quite correct, I would forward it slightly, in that I expect it is true of some people.

I think people are answering a subtly different question than is being asked, maximizing something like "truthiness" rather than "truth".

Imagine that the truth is that Linda is a feminist insurance adjuster. My mental model of the person who answered in that fashion is that they would be annoyed if told they got it wrong; from their perspective they got the answer basically correct. And if Linda is actually a feminist yoga instructor, they'd give themselves partial credit; they weren't entirely wrong.

Adding additional details doesn't give them more chances to be wrong - it gives them more chances to be partially correct.

Expand full comment

"This shows that 57% of people are profoundly stupid"

I am disputing exactly that. In the example as given by Wikipedia, we have *no information at all* about Linda's job, profession, or whether she is even working, as distinct from being the mistress of a rich man supporting her. It tells us about her personality and college education and activism, and nothing else.

It is only in the "which is more probable?" section that we get told "Linda is a bank teller". If you (or anyone else) wants to say that 57% of people are stupid for estimating Linda is likely to be involved in feminism, based on what we have been told about her, then sure - but that means 100% of people are stupid for estimating that she is a bank teller, since we have no information about that in the details of her life.

We can't say, because we don't know, that Linda is a bank teller. That is only introduced at the same time that "Linda is a feminist" is introduced in the options as to which is more probable. If I'm stupid for thinking Linda could be a feminist, I'm also stupid for thinking Linda could be a bank teller. Linda's little bio gives us the information on which to base the probability of her being a feminist, it tells me nothing about what line of work she went into (she majored in Philosophy in college, that does not mean 'and philosophy grads all go to work as bank tellers').

Expand full comment

I was talking about the 57% who still give the wrong answer after the modification of the question which I quoted. On reflection I think I am unfair to them, the modification just goes some way to making explicit the implication I argue for. Otherwise I largely agree with you.

Expand full comment

BTW, FWIW, chatGPT doesn't fall for the conjunction fallacy

Expand full comment

> If I'm stupid for thinking Linda could be a feminist, I'm also stupid for thinking Linda could be a bank teller.

You're not "stupid" for thinking Linda could be a feminist, you're "stupid" for thinking that the probability of "Linda is a bank teller and a feminist" could ever be greater than the probability of "Linda is a bank teller, and may or may not be a feminist".

> Linda's little bio gives us the information on which to base the probability of her being a feminist, it tells me nothing about what line of work she went into

Yes, that's intentional. You're not supposed to know whether or not she's a bank teller (predicate A) and you're supposed to think it's more probable that she's a feminist (predicate B). That's part of the setup. The critical question then is whether you think it's ever possible for "A & B" to have a greater probability than "A". If you do think so, you've fallen for the fallacy.

To make the point more explicit, consider this variation on the same example:

There is a 1% chance that Linda is a bank teller and a 99% chance that she is not a bank teller. There is a 95% chance that Linda is a feminist and a 5% chance that she is not a feminist. Which is more likely? (1) Linda is a bank teller. (2) Linda is both a bank teller and a feminist.
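Working through those stated numbers, under the additional assumption that the two traits are independent:

```python
p_teller = 0.01    # stated: 1% chance Linda is a bank teller
p_feminist = 0.95  # stated: 95% chance Linda is a feminist

# Assuming independence, the conjunction multiplies the probabilities.
p_both = p_teller * p_feminist  # 0.0095

# Even with a near-certain second conjunct, the conjunction loses to (1).
assert p_both < p_teller
```

And the independence assumption doesn't matter for the conclusion: in any case P(A and B) can never exceed min(P(A), P(B)) = 0.01, so option (1) is at least as probable in every possible world.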

Expand full comment

"You're not supposed to know whether or not she's a bank teller (predicate A) and you're supposed to think it's more probable that she's a feminist (predicate B). That's part of the setup."

Then, as others have said, the *real* options are:

(1) Linda is a feminist

(2) Linda is a feminist and a bank teller

In that case, it's very clear to see that (1) is more probable than (2), and most people would pick (1). In order to 'prove' that their 'fallacy' does exist, Tversky and Kahneman had to obscure the question by putting "bank teller" first. That's cheating, guys, you haven't proven what you set out to prove.

The Conjunction Fallacy may well exist, but the Linda example doesn't demonstrate it. The bare mathematical model works to convince even me. I think that's the problem; our two big-brain boys started off with the maths and then had to try and invent an example in plain language, but they couldn't find one that wasn't a trick question.

Expand full comment

Yeah, you're right about Grice and the pragmatics issue.

I still kinda agree with Scott, though, that this is an explanation of a phenomenon which can also, from a different perspective, be labelled a fallacy. It seems to me like looking at a 3D image: if a 3D image of a shark lunges at you, and you instinctively duck, that's irrational, because there's actually only an immobile screen in front of you. Drawing the wrong conclusion about Linda seems to be on that same level.

Expand full comment

They also did a study (Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment) where you take people, randomly sort them into two groups, and ask them (in two separate groups), to rate the probability of the following events:

1. "a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."

2. "a Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."

2 got higher probabilities too. And, again, these are two separate groups, so they're not thinking "probability that the US breaks off diplomatic relations with the Soviet Union for some reason other than the Soviets invading Poland."

This is a 1983 study. I know everybody likes to complain about the Linda/bank teller/feminist example but, no, people really do process information like this. And here, the subjects were "115 participants in the Second International Congress on Forecasting held in Istanbul, Turkey in July 1982. Most of the subjects were professional analysts, employed by industry, universities, or research institutes. They were professionally involved in forecasting and planning, and many had used scenarios in their work."

Expand full comment

That's very interesting, but I think it is still explicable on a Gricean analysis: if there were a triggering event you would surely have told us about it, so 1. implies that this is just out of the blue and on a whim, and that is less likely than 2.

Expand full comment

In this case, I don't understand how explaining this as conversation norms ends up leaving us at anything other than a fallacy caused by inappropriate reliance on conversational norms. When you are being asked (as a "professional analyst, employed by industry, universities, or research institutes") about the probability of a future event, it is not a reasonable conversational expectation that the person asking the question knows what the triggering event for the hypothetical future event would be. They want to know whether diplomatic relations will get cut off, it's up to you as the analyst to imagine the possible scenarios in which that might happen.

Or do you think that the results would change if choice 1 were "in response to any conceivable event, a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983"? I actually wonder if that is true.

Expand full comment

I would call it *appropriate* reliance on conversational norms; because someone made the mistake in answer to a questionnaire, I would not therefore expect them to make it where it mattered while writing a paper, any more than the yellow/blue illusion causes me to worry about not recognising police cars with their lights on. It's a benign fallacy: it is to my advantage to pay more attention to the "L is both a teller and a feminist" statement than the "L is a teller" one, and for analysts to focus on likely specific scenarios. OK, more important if true is different from more likely to be true, but I can't see anything bad happening as a result of the fallacy being committed. Whereas other common and obvious fallacies,

Expand full comment

> I can't see anything bad happening as a result of the fallacy being committed.

The "bad thing happening" is that you end up with false beliefs, which can then lead you to make different decisions than the ones you wish you would have made if only you had correct beliefs.

> Whereas other common and obvious fallacies, like false long odds claims which neglect Bayes, have led to murder convictions.

In principle, you could have a situation where depending on whether you asked a jury to "What are the odds that Linda is a murderer" vs "What are the odds that Linda is a bad person and a murderer", you'd get greater odds for the latter, showing that the jury was making their guilty/not-guilty verdict based on incorrect beliefs.

Expand full comment

I think you have that the wrong way round - the injustice comes if you conclude that Linda is a bad person and a murderer is more likely than Linda is a bad person.

As I am parsing it, Linda is a bad person expands to Linda is a bad person but not a murderer, so the jury is in reality asking the question, is she a murderer? It's important to note the artificiality of the experiment. First there's the huge conditioning with additional information in the preamble: L hated the victim, stood to inherit under their will, had googled the murder method multiple times, which is actually good evidence for the conjunction (and to the extent it isn't, courts are pretty rigorous about excluding the jury from hearing it). All that evidence is perfectly good evidence that the conjunction is true; the falsity of the conjunction is just a thing about conjunctions. So I think people are answering the right question. We are conditioned to judge whether things are more likely to be true or not true, not whether they are more likely to be true relative to one another. And that is the question the jury is answering, and I can't imagine how it could be put differently to them so that the conjunction fallacy arose.

Compare currency exchange rates: £/$ and £/euro I have a direct and immediate grasp of because I use £ to buy those currencies and things priced in those currencies. $/euro I could work out from the information usually to hand, but it takes math or research and it is of much less use to me.

Expand full comment

[...that got prematurely posted and I can't edit on my phone], like false long odds claims which neglect Bayes, have led to murder convictions.

Expand full comment

Nice post, thanks. Nitpicking one of your examples:

There is a pretty good case that loss aversion doesn't exist in any meaningful sense: it models declining marginal utility, and to the extent that it 'overreacts', that's because it is accounting for non-ergodicity. It's perfectly rational to be ultra-cautious in a non-ergodic system (like, say, life), where the time-average outcome differs from the ensemble average.

I keep coming across other central examples of biases that fall apart in similar ways (sunk costs, availability bias, confirmation bias, planning fallacy) and I'm not even looking!

So I think a good synthesis here is to be aware that apparently exciting cognitive 'biases' turn out to be rational behaviour under closer inspection, and especially outside the confines of highly contrived lab environments. I got totally suckered by this myself, so I'm not pointing fingers. But I'm not sure if e.g. the rationalists ever walked back all the hype around this stuff.
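The ergodicity point can be made concrete with the standard toy gamble from this literature (a sketch with my own illustrative numbers, not taken from the comment's sources): a bet whose ensemble average is favourable but whose time average, for any single repeated player, is not.

```python
import math

# Each round, wealth is multiplied by 1.5 on heads and 0.6 on tails.
up, down = 1.5, 0.6

# Ensemble average: the expected multiplier per round looks favourable...
ensemble_growth = 0.5 * up + 0.5 * down  # 1.05 > 1

# ...but a single player compounds multiplicatively over time, and the
# typical per-round growth factor is the geometric mean, which shrinks.
time_avg_growth = math.sqrt(up * down)   # sqrt(0.9) ≈ 0.949 < 1

print(f"ensemble multiplier per round: {ensemble_growth:.3f}")
print(f"time-average multiplier:       {time_avg_growth:.3f}")
```

A decision-maker who declines this "positive expected value" bet looks loss-averse on paper while behaving sensibly for a life lived sequentially rather than in parallel.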

Expand full comment
author

I think people display loss aversion in cases small enough that ergodicity can't possibly matter. Perhaps your evolutionary explanation for that is true, but I think that is explaining the bias, not disproving it. All biases are going to be something where evolution got us to deviate from ideal reasoning for some reason.

I also think (though I'm not sure, and I'm not going to look it up) that you can demonstrate loss aversion in cases that are identical except for framing - ie if you frame something as "gain $50", people will think of it differently than "gain $100 but then lose $50", which I think can't be explained as rational under any circumstances.

Expand full comment

I agree it would be a bias if that were true, but it seems like it's not. A Jason Collins summary I found in my notes:

> This point reflects an argument that Yechiam and others have made in several papers ...that loss aversion is only apparent in high-stakes gambles. When the stakes are low, loss aversion does not appear.

> In sum, the evidence for loss aversion at the time of the publication of prospect theory was relatively weak and limited to high-stakes gambles.

> Yechiam and Hochman (2013a) have shown that modern studies of loss aversion seem to be bimodally distributed into those who used small or moderate amounts (up to $100) and large amounts (above $500). The former typically find no loss aversion, while the latter do.

On your second point: I would be pretty surprised if that were true but also can't be bothered looking it up. Maybe it's just from adding the extra operator?

Expand full comment

I would say that of course I'm going to feel worse about gaining $100 then losing $50 than I would about gaining $50. I'm riding high on that $100 win, then I get the crushing disappointment of a $50 loss, why shouldn't I feel bad?

Unless rationality means that your emotional state needs to be dependent on nothing other than the number of dollars in your bank account, I don't see this as irrational.

Expand full comment

Rationality doesn't mean that your emotional state needs to be dependent on nothing other than the number of dollars in your bank account, but it does mean that your emotional state should at least be monotonic with respect to the number of dollars in your bank account, all else being equal.

Expand full comment

Most examples use dollars because they can be easily quantified in some objective way. In principle, the example would work out just the same if it were something like:

"Do you feel worse about gaining and apple and then losing a banana versus just gaining an orange?" PLUS the assumption that the value of an orange to you is exactly equal to the value of an apple minus the value of a banana.

It's just harder to ensure that the person you're talking to does indeed value an orange as much as they value an apple minus a banana, so we go with monetary values instead, to make the value-comparison aspect of the problem as straightforward as possible.

So rationality has nothing to do with dollars in your bank account, and instead has to do with whether you can make the right set of decisions to achieve as much value for yourself (whether the thing you value happens to be money, fruits, love and affection, or whatever).
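The framing effect under discussion can be sketched with the textbook prospect-theory value function (using Tversky and Kahneman's commonly cited parameter estimates; the dollar amounts are the ones from the thread, and this is an illustration of the model, not a claim about any particular experiment):

```python
# Prospect-theory value function: gains are valued as x^alpha, losses as
# -lambda * |x|^alpha, with lambda > 1 encoding loss aversion.
LAMBDA, ALPHA = 2.25, 0.88  # loss-aversion coefficient, diminishing sensitivity

def value(x: float) -> float:
    """Subjective value of a gain or loss x, relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Same net outcome (+$50), two framings:
one_step = value(50)                 # "gain $50"
two_steps = value(100) + value(-50)  # "gain $100, then lose $50"

print(f"gain $50 outright:       {one_step:.1f}")
print(f"gain $100 then lose $50: {two_steps:.1f}")
# With lambda > 1, the second framing feels worse even though the
# bank balance ends up identical.
assert two_steps < one_step
```

If the two framings of an identical net outcome get different subjective values, the evaluation is not monotonic in final wealth, which is the sense in which the bias is called irrational.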

Expand full comment

> I can’t help wondering if there’s some understanding of “automaticity” or “being less automatic” that could have helped 1700-me question my belief - or wondering what equivalent automatic beliefs I should be questioning today.

My starting point for answering this for myself is Holden's systemization + thin utilitarianism + sentientism (https://www.cold-takes.com/future-proof-ethics/), which I think you've likely read, although I'm not sure if you've already discussed it on ACX / slatestarscratchpad etc.

Expand full comment

While I suppose "automaticity" is real, I think you can also doubt whether psychologists have shone any light on it. I imagine magicians (really, illusionists) have known some of these for eons.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

> Although PURGATORY is a slightly (?) more common word than PREROGATORY

You're misstating this. For example, the Corpus of Contemporary American English, at 1.001 billion words, includes 1948 instances of "purgatory" (a 0.0002% rate!) and 0 (zero) instances of "prerogatory". "Prerogatory" has no entry in Merriam-Webster or in Wiktionary. It is not a valid solution to the question posed for the obvious reason that it is not an existing English word. "Purgatory" is not just several thousand times more common than "prerogatory", it is infinitely more common.

When you ask people to do what can't be done, you're going to get a lot more nonsense than usual.
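For reference, the frequency arithmetic works out like this (using the figures as quoted in the comment):

```python
# Rough corpus-rate check for "purgatory".
corpus_size = 1_001_000_000  # ~1.001 billion words
purgatory_hits = 1948

rate = purgatory_hits / corpus_size
print(f"{rate:.2e} per word ~= {rate * 1e6:.2f} per million words")
# About 1.95 occurrences per million words, i.e. roughly a 0.0002% rate,
# versus exactly zero for "prerogatory".
```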

(My results on the words of interest: GLEAN / GOD / ???????? ("purgatory doesn't have two Os, or an E") / ???????? ("______ STAGE?"). At the moment of composing this comment, I've realized that that last one is PEARLY GATES, but STAGE didn't help me get there.

You didn't mention what #4 is supposed to be; I got SPRITE and don't see another option, but it certainly isn't on theme...)

Expand full comment

No. 4 is PRIEST 😀

Expand full comment

Oh, thanks. I got SPRITE. Absolutely could not figure out what religious thing that word was supposed to be.

Expand full comment

I feel like it was supposed to be PEROGATORY and the extra R got telephoned into existence somewhere.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

???

Is "perogatory" more of a word than "ghabbimous"? There is a Latin verb perrogo, but you're not saving an R that way.

Expand full comment

I think I’ve seen PEROGATORY on a legal document once, and since my personal vocabulary is by all accounts the normatively understood one, I can only assume that’s what they meant.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I'm going to have to challenge the idea that "perogatory" exists in vocabularies other than yours; see my comment responding to you in a different thread.

Fascinatingly, I did find a youtube video purporting to provide the pronunciation of "prerogatory"! ( https://www.youtube.com/watch?v=J8sG-Qah6uU ) The blurb says that the definition of the word can be found at a link that represents a Google search for "define Prerogatory", which is not true. I'm not sure how this happened.

Expand full comment

Maybe I should have responded to your other other comment.

At any rate, I have a very low level of confidence in my statement, and am willing to yield the field to your research, though with the note that "PEREMPTORY" is a word with a PER-prefix (a perfix?) and a specific legal-Latin meaning.

I've also been struck by how hard it is to google some of the most obscure legal terms; Eugene Volokh used to post "What does term _X_ mean?" and it was just insanely hard to find the answer.

But also remember that under no circumstances should I be taken especially seriously unless I've put a trigger warning for sincerity at the top of my post.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Yes, peremptory is a word with the prefix per-, which is normal. It's prefixed to the verb emo (pf. participle emptus), forming perimo (pf. participle peremptus).

But "perogatory" doesn't have the prefix per-. It has the nonexistent prefix pe-. There is no Latin root -og- or -ug- to form a prefixed per-og-, so you're stuck analyzing it as pe-rogat-or-y. That's why I mentioned perrogo above. If you want to begin a Latin-derived word in this format with perog-, I think you're stuck with peroggeritory (from per-ob-ger-o; can't use the fourth principle part as is done with these other words because according to my dictionary it doesn't exist - but if it did exist, "peroggestory"). And since the first G there is coming from the assimilation of the prefix ob- to a following G, you're going to have a hell of a time trying to produce a word that starts with "peroga".

Expand full comment

Interestingly (and completely irrelevant for the discussion at hand, sorry), gravity does imply a maximum possible height for a skyscraper on Earth, which is kind of trivial when you figure it out but still came as a mildly interesting surprise to me. Calculating this maximum height is left as an exercise for the reader.

Expand full comment

Why? What's stopping the super-skyscraper from collapsing into a bunch of material that extends the radius of the earth? As far as I'm aware there isn't a limit to the size of stars.

Expand full comment

Because no material can withstand infinite pressure. If you just added layers of material on top of the Earth without making any "mountains" you couldn't really call that a skyscraper, could you? It would be more like a planetary shell

Expand full comment

Well, if you have a tall building, and you expand it horizontally, is it still equally tall?

Expand full comment

I claim the right to define height relative to the ground rather than the centre of the Earth. If your "building" (really just molten stuff at this point, at least in the lower levels) becomes the new ground, its height becomes just zero.

Expand full comment

> If your "building" (really just molten stuff at this point, at least in the lower levels)

Hey, I had to build that stuff. :p

Expand full comment

Mostly unrelated: How do you feel about Mauna Kea? Earth's tallest mountain when evaluated from the ground? Or do you count the ocean as part of the "ground"?

Expand full comment

This was about skyscrapers 🤣

Anyway, for these purposes all that matters is where the base ends, so [Googles pictures] I'm going to say it is the island's ground that counts.

Expand full comment

What do you count as the ground? It's not that easy to clearly define where the base of a mountain is.

Expand full comment

A simple math explanation is that the compressive pressure on the material is proportional to height, and therefore for any material one chooses there’s a max. height at which the compressive pressure reaches the material’s limit. Above this height the column crumbles.

And no, making it wider doesn’t help: the pressure remains the same.

Equally there’s a maximum string length for a given material, above which the string breaks under its own weight. This was a limiting factor for ocean depth measurements before sonar.
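The "exercise for the reader" above is short enough to sketch. The base stress of a uniform column is sigma = rho * g * h (independent of cross-section, which is why widening doesn't help), so the limit height is h_max = sigma_max / (rho * g). A rough back-of-envelope check, using approximate textbook material values:

```python
# Maximum self-supporting column height: h_max = sigma_max / (rho * g).
g = 9.81  # m/s^2

materials = {
    # name: (compressive strength in Pa, density in kg/m^3) -- rough values
    "concrete": (30e6, 2400),
    "steel":    (250e6, 7850),
    "granite":  (200e6, 2700),
}

for name, (sigma_max, rho) in materials.items():
    h_max = sigma_max / (rho * g)
    print(f"{name:8s} max self-supporting height ~ {h_max / 1000:.1f} km")
```

This gives on the order of one to a few kilometres for ordinary building materials, comfortably above real skyscrapers but far below planetary scales, and it ignores tapering, buckling, and melting at the base, all of which only make the practical limit lower (tapering, in fairness, raises it).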

Expand full comment

> Equally there’s a maximum string length for a given material, above which the string breaks under its own weight.

This made me curious - is the maximum length different for a string suspended by one end compared to a string of the same material suspended at both ends? What if it's suspended in the middle?

Expand full comment

Ah, interesting! Let’s see:

Suspended in the middle is easy: we let the two halves hang down, so each side can reach the same max length as a single string.

By both ends is difficult because now I have to solve a quadratic equation describing the curve of the string so that I can calculate the force vector acting along the string. In the limit case where we hold both ends together things collapse back to the max length for a single string.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Three points:

(a) a string suspended in the middle (say, placed over a hook) is similar in each half to a string of half the length hanging from one end, but the connection between the left half and the right half might (? I have no idea) experience some unusual tension? That's part of the string too, and that's what I was thinking about for that question.

(b) A string hanging under its own weight should form a catenary; I'm not really familiar with them but they don't appear to be described by quadratic equations. https://en.wikipedia.org/wiki/Catenary#Mathematical_description

(c) "In the limit case where we hold both ends together things collapse back to the max length for a single string. " -- I can't tell if you mean that this string has the same length limit as one suspended by one end, or double that limit (since it's folded in half). Probably the same limit? Same amount of weight putting it in tension? But wouldn't that imply that the string suspended in the middle has the same total weight limit, meaning that each half can be half as long as the single-string-by-one-end limit?

Expand full comment

Yeah, the quadratic equation was an off-the-top-of-my-head guess because the shape should be parabolic. So “don’t quote me on that”™.

a) if we ignore the bending over the hook and just assume a nice smooth distribution of stress then it makes no difference what generates the opposing force, a clamp or the weight of the other string half. In reality, there will be a stress concentration at the bend that will cause the string to break there. This gets quite complex and the best bet to find a solution is to throw a finite element modeler like Comsol at it.

c) again ignoring higher-order effects, each half of the string should be able to go to the max length for the material. So the total length of the string is double, but it doesn’t help us to reach greater depths because it’s folded. We can think of it as a single string with a doubled cross-section area. Still the same length limit applies.
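Point (c) is the same area-cancellation as in the column case, just in tension: the top of a hanging string carries a load of rho * g * L * A against a strength of sigma_t * A, so A cancels and L_max = sigma_t / (rho * g) regardless of how many strands you fold together. A rough sketch with approximate material values:

```python
# Breaking length of a hanging string: L_max = sigma_t / (rho * g).
# Cross-section cancels, so folding the string in half doubles both the
# weight and the strength and buys no extra depth.
g = 9.81  # m/s^2

materials = {
    # name: (tensile strength in Pa, density in kg/m^3) -- rough values
    "steel wire": (1500e6, 7850),
    "nylon":      (75e6, 1140),
    "kevlar":     (3600e6, 1440),
}

for name, (sigma_t, rho) in materials.items():
    l_max = sigma_t / (rho * g)
    print(f"{name:10s} breaking length ~ {l_max / 1000:.0f} km")
```

Tens of kilometres for steel wire explains why deep-ocean sounding lines were a real engineering constraint before sonar.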

Expand full comment

I wonder if you could somehow get air pressure differentials to work in your favor if the building was tall enough to extend into the upper atmosphere.

Expand full comment

You’re thinking in the right direction, but the pressure differential between sea level and, e.g., the stratosphere is still orders of magnitude too small to make any impact on the building-material stress levels.
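To put rough numbers on that: even the full sea-level-to-vacuum differential is tiny next to the compressive stress at the base of a limit-height column (a sketch using approximate values).

```python
# Compare the entire atmospheric pressure differential against a
# rough compressive strength for concrete.
atmospheric_pressure = 101_325  # Pa at sea level (vs ~0 in near-space)
concrete_strength = 30e6        # Pa, rough compressive strength

ratio = concrete_strength / atmospheric_pressure
print(f"material stress limit is ~{ratio:.0f}x the full atmospheric differential")
```

So the best case the atmosphere could offer is a correction of well under one percent of the governing stress.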

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I'm not sure what you are getting at, but if you build a sufficiently high "skyscraper", the centrifugal force from Earth's rotation would matter, so that the building would be in tension: gravity pulling it towards Earth and the centrifugal force (in the rotating frame) pulling outward. We have no material that would not collapse under its own weight at that "height", but if we had an infinitely strong material I can't see why we could not build higher, as long as the "skyscraper" is anchored to the ground.

Expand full comment

I was getting at the effect you mention: at a maximum height low enough that we don't need to worry about rotational forces, no material is strong enough not to collapse (and/or have the base melt due to the high pressure).

Expand full comment

We don't currently have a material strong enough - but it is not really theoretically impossible - so I disagree that this constitutes a limit to how high we could build.

Expand full comment

I don't want the discussion to get overly semantic. What I really had in mind was this nice result, which holds in a flat-Earth approximation without curvature or rotation needing to be taken into account. Of course this always depends on the materials at hand, so if you suddenly discover a material which is sufficiently much stronger than everything else we have, you can get around it. But until then I think it's fair to call this a limit, even if it's not a fundamental limit. Especially as even then it would still be there in the flat-Earth scenario. But sure, space elevators are not theoretically impossible.

Expand full comment

Sure, there is a limit for any given material as there is a point where that material collapses due to its own weight. Fair enough.

Expand full comment

> this nice result which holds in a flat Earth approximation without curvature or rotation needing to be taken into account

Funny how it always works out that way, huh? Interesting.

Expand full comment

Space elevator skyscraper

Expand full comment

Does it still count as a skyscraper if it goes outside the atmosphere? Or is it, like, a skypiercer at that point? Or ooh, a SPACEscraper? Man, there really is a limit to skyscrapers, and it's when they become spacescrapers.

Expand full comment

No, they become skyscrapers again once they get to the atmosphere of a second (or third ( or ...)) planet

Expand full comment
Aug 31, 2023·edited Sep 11, 2023

This isn't necessarily strictly true -- while it's true there's a maximum height for a static skyscraper, in *theory* you can build infinitely (or at least "out of Earth's gravity well") high with active structures. See https://en.wikipedia.org/wiki/Space_fountain

Expand full comment

Interesting. Hadn't heard that one before. Also love that I heard of this from someone with your username!

Expand full comment

Thank you!

Yeah, I think space fountains/launch loops/etc are waaaay outside the realm of "things we will ever actually bother to build", but it's cool that they're theoretically possible.

Expand full comment

"in 1700, everyone thought slavery was fine, even though now in the 2000s everyone hates it."

This is tangential and possibly forbidden by culture-war rules (I am British, so don't fully understand the US rules), but I would question that, absent 1700 polling data, sermons, letters to the press saying it was all fine, etc. I read what I thought was a well-sourced claim, which infuriatingly I cannot now track down, that in the 1550s when Sir John Hawkins put the whole slave business plan to Queen Elizabeth I she initially said "What a revolting idea, no good will come of that." Then he showed her the profit margins... It's often said that people were OK with it because it had historically always been there, but 1. the wholesale shipping of people as commodities between continents was new and 2. slavery was, then, ancient history - there had not been slave-owning Englishmen for twice as long, then, as there have not been slave-owning Englishmen now. I think it more likely there was disconnect and wilful ignorance, same as today's middle classes are probably against working conditions at Foxconn and murder and torture in Central America, but choose to keep the iPhone and the coke habit.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

We have slavery in our western societies, we call it being in prison.

(I'm not trying to stir up outrage, I have nothing against prisons. I'm trying to make a different point).

The 13th Amendment to the US Constitution: “Neither slavery nor involuntary servitude, except as punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”

In the US in particular, it's constitutional to force prisoners to work. And even if they aren't literally forced to work, but merely encouraged to work under the threat of having to serve more time if they don't (indeed, whether they work at all or not), they have no freedom, which is the essence of being a slave.

Normal people today don't mind. Prisoners are either criminals or prisoners of war. So there's a justification for their state.

This is not enormously different from how medieval Europeans would have seen slavery. Slavery in medieval Europe was rare; it was forbidden to enslave Christians, and the people that were enslaved were heathens, with whose civilization Christianity was in a semi-perpetual war, making them similar to prisoners of war.

I'm not justifying that, don't get me wrong! I'm just saying that the way we view the denial of freedom has always been more complex than a simple right/wrong. Both our world and the medieval world believe it right if justified and wrong if unjustified. So the difference is not so stark.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

> they have no freedom, which is the essence of being a slave.

That's one perspective. The more historically-accurate perspective is that the essence of being a slave is that your relationship to the legal system runs through your owner rather than being direct. In that sense, slaves are more like children than they are like prisoners.

It is interesting to me that we don't generally view soldiers as having been enslaved, despite the fact that they have no freedom. (And in fact, attempting to leave your position as a soldier is often a capital offense, which puts soldiers in a much worse position than virtually all slaves.)

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

That's another great example, soldiers.

Especially if they've been drafted.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

One important difference between soldiers and slaves is that soldiers are automatically released after a certain amount of time. Slaves remain slaves until they can buy themselves out. But that would still make soldiers "indentured servants".

Expand full comment

Except there's a "stop-loss" policy, where soldiers' servitude may be extended involuntarily. Like other slaves, they're only freed when their masters will it.

Expand full comment

Yup. Also, a draftee would have previously been a free man, so the draft has an analogy to the slave _trade_ as well, enslaving previously free people. In the USA, one of the ironies is that the Lincoln administration created a federal draft.

Expand full comment

Sometimes they are, sometimes they aren't, it depends on the exact system of slavery.

Expand full comment

They actually tried to challenge the constitutionality of the draft under the 13th amendment during World War 1. Of course the SCOTUS wasn't having it.

https://www.lawfaremedia.org/article/remembering-selective-draft-law-cases

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

As far as American legal precedent goes, doesn't it also say that it isn't slavery for the government to conscript people to do forced labor maintaining the roads?

Expand full comment

Plenty of abolitionists (including me) see conscription as slavery. It's obvious why retentionists pretend there's some meaningful distinction.

Expand full comment

Who said anything about conscription? On this analogy, conscription is enslavement. Being a soldier is slavery regardless of whether you were conscripted or enrolled voluntarily, in the same way that being a slave is slavery regardless of whether you were captured in war or sold yourself into slavery.

Expand full comment

Too much variation in possible terms negotiated upon enrollment if it's voluntary; I'm not restricting the term "soldier" to the present day US.

Also, I don't consider the fact that one isn't allowed to quit in the middle of a battle to make soldiery a kind of slavery if one signed up on those terms any more than I would a pilot not being allowed to parachute out mid-flight and abandon his passengers.

Sure, it CAN be equivalent to selling oneself to slavery, but conscription is unambiguous.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

> I don't consider the fact that one isn't allowed to quit in the middle of a battle to make soldiery a kind of slavery if one signed up on those terms

Where does that leave room for anything to be slavery if one signed up on those terms? Do you just believe that the concept of "selling yourself into slavery" is incoherent?

Expand full comment

Part of it is the soldiers are the ones enforcing the drafts. No soldiers, no soldiers.

Expand full comment

I assume that historically, when one had too many slaves, they appointed some of the slaves to watch over the other slaves.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

For the purpose of this discussion (the history of slavery as an ethical dilemma) it should not matter whether one is owned by a private individual or by the state, because the part that is ethically problematic is the denial of freedom.

Expand full comment

I don't understand the relevance to my comment? The legal status of "slave" that I described is the same whether you're owned by the state or not. In either case, your relationship to the legal system is not direct; if you're a state-owned slave and you harm a third party, they will have to sue the state (good luck!) instead of suing you.

Expand full comment

I had misunderstood your point, thank you for explaining it to me. However, I still don't understand why the distinctions you draw should matter to slavery as an ethical dilemma. What matters is whether one is denied freedom.

Expand full comment

I think Scott's point about slavery is much stronger about punishments: in 1610 England and 1834 Spain it was still thought OK to burn people alive for disagreeing about competing explanations of the New Testament. wow.

There is much crowing in the UK press at the moment about a criminal being sentenced to life in prison till she dies (this is not a pleonasm, most life sentences are 15-30 years), 22 hours a day in a small cell forever. Having once spent a night in a prison cell (drunk and disorderly) I think that sounds worse than burning. I wonder how that will look in a couple of hundred years.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

I once had an idea for a satirical fanfic of HPMOR that would take Harry's "let's burn down Azkaban" ethos to extremes by introducing a character who thought that *all prison* is immoral and that it is more ethical to kill someone than to imprison them.

Expand full comment

This is why it might be useful to distinguish "slavery" from "chattel slavery," where the people are literally property to be bought and sold, which does not hold true for prisoners (or conscripted soldiers, as the below comment describes).

Expand full comment

I don't see that the word "chattel" results in any moral difference. Certainly if I'm a slave busy working away and getting whipped it doesn't make much difference to me whether or not my services can be bought and sold or whether I'm legally assigned to one master. At least if I'm a chattel slave I can dream that I might be sold to a better master (although I must also fear that I'll be sold to a worse one, which evens it all out a bit).

On the other hand I think that the comparison to prison labour is really weird. The terrible thing about slavery isn't the work, it's the denial of freedom. Slavery doesn't become any better if you don't have to work. Since we've already decided to deny a prisoner his freedom by sticking him in prison, it doesn't make any difference if we also oblige him to work rather than obliging him to sit in a cell.

Expand full comment
Aug 31, 2023·edited Sep 1, 2023

I agree; for the purpose of this discussion (slavery as an ethical dilemma) it should not matter whether a slave can be bought and sold, nor should it matter whether a slave is owned by a private individual or by the state (a distinction made by another commenter) (edit: I had misunderstood that commenter; he says that it's essential to being a slave that one can't be sued; still should not matter, I think).

Regarding prison labor, the reason I framed it that way was to anticipate an objection to prisoners being called slaves (the objection being that slaves are forced to work and prisoners typically are not), but in the same comment in which I brought up prison labor I also pointed out that even if a prisoner is not made to work, they remain a slave, because they lack freedom. So I actually agree with you on that too.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

> Slavery doesn't become any better if you don't have to work.

I don't think you'll be able to poll a lot of agreement for this.

"A slave who doesn't have to work" is a pet, a child, or, historically, a wife. All of those are considered very favorable statuses.

(Your wife was supposed to do a lot of work, but you couldn't just make her work in the same way you're free to similarly compel a slave.)

Expand full comment

I concur, and would further argue that most people would prefer this state to freedom.

From Rome: https://www.youtube.com/watch?v=Fgz1vvHDSIM&t=30s

Expand full comment

If people preferred not having to work to freedom, they'd commit crime in order to go to jail.

Expand full comment

My general impression is that Europeans tended to be embarrassed by being slaveowners up into a period in the 19th Century when cotton became temporarily vastly lucrative and thus you had boastful King Cotton ideologies that slavery was, actually, when you stop and think about it, good.

In contrast, the Constitution of 1787, for example, avoids using the word "slave" in the notorious Three-Fifths Clause and puts a 20-year time limit on the slave trade, without calling it the slave trade.

Expand full comment

Chattel slavery as it developed in the USA was a different matter, but that historical site about slavery in New England seems to indicate that slavery wasn't developed that way from the get-go; yes, they had slaves, but they were treated more along the lines of indentured servants. There was a blurry line between "person who signed a contract selling his labour for a period of years to pay off debts" and "person who didn't sign anything at all" because they would often be working the same kind of labour.

https://education.nationalgeographic.org/resource/new-england-colonies-use-slaves/

Southern plantation slavery was different, and due to the economics of it, developed into what we think of as "slavery" now. But certainly there were plenty of examples from the Classical world of both the 'Northern' style of slavery (skilled or semi-skilled workers for their owners) and 'Southern' style slavery (working on vast agricultural estates, working as publicly-owned slaves in the state mines).

1700s slavery would not have been *entirely* foreign to the mindset, but also yes, there would have been opposition to it ranging from distaste of the 'personally opposed but it's legal' type to the 'these are children of God and it is immoral and sinful' type.

The oddities of the past mean we can't easily map current-day racial attitudes and theories onto it. See the case of the woman whose life story was made into a film in 2013, "Belle"; born in 1761 to an enslaved mother and a British officer in the West Indies, who took her and her mother back to England with him and arranged for her to be raised with his uncle's family:

https://en.wikipedia.org/wiki/Belle_(2013_film)

https://en.wikipedia.org/wiki/Dido_Elizabeth_Belle

"Dido Elizabeth Belle (June 1761 – July 1804) was a free black British gentlewoman. She was born into slavery and illegitimate; her mother, Maria Belle, was an enslaved Black woman in the British West Indies. Her father was Sir John Lindsay, a British career naval officer who was stationed there. Her father was knighted and promoted to admiral. Lindsay took Belle with him when he returned to England in 1765, entrusting her upbringing to his uncle William Murray, 1st Earl of Mansfield, and his wife Elizabeth Murray, Countess of Mansfield. The Murrays educated Belle, bringing her up as a free gentlewoman at their Kenwood House, together with another great-niece, Lady Elizabeth Murray, whose mother had died. Lady Elizabeth and Belle were second cousins. Belle lived there for 30 years. In his will of 1793, Lord Mansfield provided an outright sum and an annuity to her.

...Maria Belle was known to have remained in England with Lindsay until 1774, when Lindsay, having made her free and paid for her manumission, also transferred to her a piece of property in Pensacola, where she was required to build a house within 10 years. Maria Belle appears in the Pensacola property record and her manumission paper.

A contemporary obituary of Sir John Lindsay, who had eventually been promoted to admiral, acknowledged that he was the father of Dido Belle, and described her: "[H]e has died, we believe, without any legitimate issue but has left one natural daughter, a Mulatta who has been brought up in Lord Mansfield's family almost from her infancy and whose amiable disposition and accomplishments have gained her the highest respect from all his Lordship's relations and visitants."

Her status was ambiguous, both as illegitimate and as mixed-race, and she was never formally acknowledged as part of the family. It was legally ambiguous too: she had been born into slavery but was raised as a free woman, and her freedom was not confirmed until the will of her great-uncle. So not modern attitudes, but not the monolithic attitudes we assume the past held, either (even though yes, there were those at the time who did exhibit those attitudes towards her).

Expand full comment

Working conditions at Foxconn factories in China are above average for China.

There was an Indian Foxconn factory that apparently wasn’t following local laws, so I don’t know enough to say if that factory was better or worse than local conditions, but this “Foxconn == clearly bad” idea is just ludicrous.

Expand full comment

They could quite conceivably be "above average for China" and still be something we shouldn't tolerate just to make shiny new iPhones affordable. A Google of Foxconn working conditions throws up some pretty grim stuff in places like CNN and the New York Times, with much of the evidence coming from suicide notes.

Expand full comment

Shiny new iPhones are expensive anyway. It’s Apple, not the consumer who benefits. Apple benefits with outsized margins.

Expand full comment

The people who have best used unconscious processes and optical illusions for profit are illusionists and mentalists.

Expand full comment

**sigh** I got "GALEN" and "GLEAN" and couldn't figure out the religious angle.

Expand full comment

“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”

—Alfred North Whitehead

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Or:

https://www.smbc-comics.com/comic/conscious

Have you ever tried to self-consciously walk down a flight of stairs? It's terrible!

Expand full comment

...Do people really not think when they walk down stairs, especially in an unfamiliar environment? Aren't they afraid of tripping?

Expand full comment

Maybe unfamiliar and dark, for me. Or uneven. That's it. Any other time I'm staring ahead and walking as usual.

Expand full comment

Yup! You beat me to the Whitehead quote by 15 hours! Congrats!

Expand full comment

Thanks for sharing that quote. It's a great way of putting things.

Expand full comment

My recollection of majoring in Finance at MBA school four decades ago is that after the enunciation of the Efficient Market Hypothesis by Eugene Fama in 1970, there was a strong effort to identify irrational stock market anomalies, but after they were discovered, they tended to disappear. For example, it was discovered that the stock market tended to go up (or perhaps down, I forget) in January. But once investors heard about this January bias, it more or less stopped happening.

In other words, the Efficient Market Hypothesis tended to be a self-fulfilling prophecy.

Granted, stock market investors tend to be highly rational and highly motivated. It could be that ordinary consumers are more prone to being primed by simple tricks. Still, my impression is that over time the public tends to get bored with and/or sick of a particular trick so that new tricks need to be invented and/or reintroduced.

Expand full comment

That was my thinking; unlike, say, the laws of physics, which don't care what you think of them and never change, the laws of finance are very much affected by what people are doing, and people are always trying to make money by exploiting inefficiencies in the system.

That trader at Goldman might find a market inefficiency, but you or I probably won't, and he'll execute the trade way before we do.

Expand full comment

I remember EY described this as inductive vs. anti-inductive behavior in the Sequences, and that was one of the parts that really stuck with me.

Expand full comment

Never read the sequences. Is it worth my while?

Expand full comment

I'd wait until someone gets GPT-4 to summarise it in a one-pager.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

That was a good idea! Did that; they sound interesting, but I'm not sure it's enough to justify the length, and a lot of the ideas seem to be circulating around rationalist circles anyway. I actually enjoy going back to source documents, but the Sequences are supposed to be really long.

Expand full comment

The number one complaint is the arrogant tone, though personally I didn't mind. I think there are other common criticisms too, like his referencing his own posts more than outside research.

Expand full comment

I wouldn't recommend ALL of them, but some of them are great and make the rest worth slogging through.

In particular, I recommend evolution ones: "An Alien God" and "Beware of Stephen J. Gould".

In addition, I liked "A Fable of Science and Politics" (very short) and "Zombies! Zombies?" (very long).

Expand full comment

Oh, a curated selection. Another good idea! Any suggestions anyone else has?

Expand full comment

I think it is not that much text, even if you read it all.

(No offense meant, and this is not about you specifically, but I always found it fascinating when people online complain about texts being too long while they are scrolling infinite or very long lists of comments... each of them being quite short individually, of course. Someone should divide the length of the Sequences by the length of the average Open Thread on ACX, and I believe it would be a surprisingly small number.)

Give it a chance and try reading it all: https://www.readthesequences.com/

Expand full comment

Yes. They are the founding document of this culture. Absorbing all of their ideas just from hanging out in Rat circles is like trying to absorb the Bible by hanging out in church. You're never going to get it all.

Expand full comment

On the other hand, you'll get the best parts, with useful corrections, revisions, and additions from later writers (particularly Scott).

Expand full comment

True, I guess. Although I don't think anything comes quite close to the mindfuck one gains from reading "How To Actually Change Your Mind" all the way through.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

> In other words, the Efficient Market Hypothesis tended to be a self-fulfilling prophecy.

Rather, this fact about how the market operates is a part of the Efficient Market Hypothesis. It's just as true before the hypothesis gets published as it is afterwards.

I read a point once which I appreciated, which was that the returns on anything predictable are driven to zero, regardless of whether they started out being positive ("profitable") or negative ("unprofitable"). If you stupidly announce that you will buy 200,000 shares of JNJ on November 28th, at whatever the market price is at that time, come hell or high water, people will think "no one could be that stupid", no one will really respond to your announcement, and you'll end up paying a huge premium to get your 200,000 shares.

If you follow that up by making similar announcements and always following through, people will pay attention to your announcements and maneuver to be part of the group that gets to sell to you, and over time they will compete away the premium that you signed up to pay by announcing all your giant trades well in advance.

Expand full comment

While I certainly agree with the post overall, and told banana herself, especially when it comes to cognitive biases being real, important, and part of automaticity, I'm less happy about the priming part.

I think 'priming' has come to mean two things, mostly non-overlapping in implications, mechanisms, or audiences that think about it when you say 'priming'. At the risk of oversimplification:

-One is cognitive, semantic, etc. priming. These are quirks of brain function that may help us understand how it works, or may be predicted from good theories of the brain like predictive processing. They are uncontroversial in their narrow academic field and replicate well. The Stroop effect is a clear example.

-The other is social, 'behavioral econ' priming. These are supposed to show that higher-order behaviors can be influenced by borderline-subliminal messages, for nudge/social-engineering purposes (non-derogatory). Some of them are just quirky, like the reminders about being old making you walk slower, but others could potentially matter for ethically relevant stuff like organ donation, and thus they are policy-relevant.

Take the recent Dan Ariely retractions due to fraud. Now, I may just be wrong, and any commenter can point it out if so, but I believe it isn't just him: the whole field of "put subtle reminders of morality in front of people and they will actually behave more ethically" stuff, whether with the Ten Commandments or watching-eyes posters for donations or washing your hands or whatever, has not replicated at all. This is important because, while learning about the brain matters, most of the policy implications of 'priming' came from this later school of studies.

The taxi one still feels like a framing effect the most to me, closer to a cognitive bias (the starting options make you tip way more than if you would write the amount you tip yourself in a little text box) than priming.

(I'm serious about the priming for moral behavior request by the way! If anyone says that the news only show the messed up authors and #actually the main findings have survived, please do say so)

Expand full comment
author

I mostly agree with you, although I'm not sure where to draw the line - unscrambling words is "a behavior", sort of. I'm not even sure moral priming is total bunk - common-sensically I believe that if I'm walking home from a lecture on ethics, or a religious service, and I see a beggar, I'm more likely to give them money than otherwise, although this would be a fully conscious process (yes, I know there was that famous Good Samaritan experiment, but I don't think it actually contradicts my point). Maybe a simpler example would be that if I'm walking home from a horror movie, I'm more likely to jump or scream if a tree branch snaps near me. Again, this isn't exactly the same as "a word about old people can make you slower", but it's hard to see where to draw a principled line.

Expand full comment

I think the Good Samaritan experiment, curiously, is bad evidence, but if it's evidence for anything, it is for the recent pro-social experience leading to prosocial behavior. See https://www.lesswrong.com/posts/kmT47aLQmqzcw329Y/good-samaritans-in-experiments

That's why I'm interested to know if there are more examples. Are the people giving TED Talks about this the tip of an iceberg of bad studies and fraud, or are we automatons falling for the very real and existing availability bias by focusing on the bad actors in the field, and actually you can prime people to act more prosocially just like you can for remembering words?

Because if so, I have way higher hopes that indeed, even if some quirky small studies are not replicable, the pre-existing paradigm can mostly survive.

Expand full comment

"if I'm walking home from a horror movie, I'm more likely to jump or scream if a tree branch snaps near me."

I use that kind of priming to judge the quality of film-making. For example, in 2001 coming out of seeing "Memento" (which is about a man with short term amnesia), I was convinced I'd never be able to remember where I parked my car. (I was surprised that I did remember without a problem.) "This Christopher Nolan guy is pretty good," I thought to myself.

The only movie that I can recall having two days' worth of that kind of impact on me was the Coens' "No Country for Old Men," which is, shot for shot, likely one of the better movies ever. A friend had a small role in its 2007 rival "There Will Be Blood," which was plausibly being talked up for the Best Picture Oscar. But about 15 minutes into "No Country," he said to himself, "Oh, well, no Best Picture for us."

Expand full comment

Mulholland Drive made diners scary for a short time, for me.

Expand full comment

The IAT failed on me. Every time I saw a picture of a black person with a negative word, my brain paused for a fraction of a second as a little warning bell rang, and I instinctively stopped to check the situation, the same as if I'd seen a guy on the street holding a gun in his hand. And the result was that it showed I reacted more slowly to the black-bad associations, and thus was obviously not a racist. Which, to be clear, I don't think I am. But all the test measured was some quirk of my mind, probably developed as a social safety reflex.

Expand full comment

I remember I took the web-based "USA vs. world" one.

It had a smug claim in it that taking it in the other order won't affect your results, so I took it in the other order.

Result: it claimed I had a strong bias in the other direction than it claimed I had a strong bias in the first time.

Expand full comment

I scored on the IAT as being pro-black, which seems plausible to me.

Expand full comment

This may sound like a parlour trick, but at the very least, there must be a nudge that causes people to believe in nudges. I mean if cognitive biases are false, that means there must have been some kind of illusion which gave rise to people believing in cognitive biases, therefore we must be at least somewhat vulnerable to...cognitive biases. Okay you can punch me now.

Expand full comment

One thing I've come to love about reading psychology blogs on a laptop is the ease with which many optical illusions are dispelled just by tilting the display. Adolescent me would think I'm such a poser for not using a Real Computer anymore (they're called __towers__, man!), but it was definitely way more trivially inconvenient to angle a 1080p monitor, or my head for that matter.

On the matter of trivial inconveniences, it still strikes me as weird that it's considered weird to go out of one's way to input custom tips. But then I remember how even extremely simple mental math tricks like "move the decimal one position to the left, that's 10%, then halve/double/triple that as appropriate" look like wizardry to so many people...and their default 15% or whatever tips sum up to far more than my infrequent 30% ones. Makes me wonder what the ideal distribution is, that isn't so enraging it nudges people to not tip at all...
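
The decimal-shift trick spelled out, as a toy sketch (the bill amount and the `quick_tip` helper are made up for illustration):

```python
# The mental-math trick described above: move the decimal one place
# left for 10%, then halve/double/triple that for 5%/20%/30%.

def quick_tip(bill, percent):
    """Estimate a tip via the 10%-shift trick."""
    ten_percent = bill / 10              # decimal shifted one place left
    return ten_percent * (percent / 10)  # scale 10% up or down

# On a $48 bill: 10% is 4.80, so 15% is 4.80 plus half of 4.80,
# and 30% is triple 4.80.
print(quick_tip(48.00, 15))
print(quick_tip(48.00, 30))
```

No wizardry required; it's one division and one small multiplication.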

It's interesting to see so many everyday examples of nudges working irl - all the dumb gimmicks we use in retail to grease sales - yet still notice a great resistance internally to admitting they work. Part of me just really wants to believe in homo economicus, I guess. Or deny that the same traps work on me too, just in different contexts. No one's a perfectly rational shopper across all markets; the same alchemy that convinces me it's just too hard to boil pasta at 2AM vs ordering a pizza is what makes someone pick up a pick-me-up snack while waiting to check out.

Expand full comment

I got Christian, Angle, God, Prerogatory.

The colored words were easy for me.

I cannot see through the optical illusion at all.

I've never ridden a taxi.

I feel happy. I don't know if I should.

Expand full comment

Most people who haven't ridden a taxi feel happy. It's a prelapsarian state of existence.

Expand full comment

> I’ll buy real estate, then contrive a series of clever illusions that make a dilapidated shack look like a beautiful mansion. By buying at shack prices and selling at mansion prices, I can get rich quick.

People do use optical illusions to make buildings appear more valuable. The apartment complex I'm living in looks bigger on the inside. The main hallway's ceiling slopes downwards from the entrance, making it look like the end is farther away than it actually is. Also, the hallway is painted pitch black and is rather dimly lit, while individual apartments are painted white. Presumably the contrast is intended to make it look like you get a lot of bright sunlight if you live there. (I do, but mostly in the early morning when I'd prefer darkness.)

> I might think a bag of rice looks big when I go to the grocery store. But maybe the store hired visual neuroscientists to contrive optical illusions around it! Maybe the bag really just contains one grain of rice and can’t possibly feed me! I should only eat rice I grew myself from now on.

The usual trick is to put a small bag inside a big cardboard box, which doesn't get you all the way down to a single grain of rice, but does seem to work some of the time to sell less for more. That's why I only shop based on per-kg prices...
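
A minimal sketch of that per-kg comparison (the package sizes and prices here are invented for illustration):

```python
# Compare unit prices instead of package appearances.

def price_per_kg(price, grams):
    """Unit price in currency per kilogram."""
    return price / (grams / 1000)

big_box = price_per_kg(4.50, 750)     # flashy box hiding a 750 g bag
plain_bag = price_per_kg(3.00, 1000)  # plain 1 kg bag

# The flashier package works out to 6.00/kg vs 3.00/kg for the plain bag.
print(big_box, plain_bag)
```

The box can be as big as the shelf allows; the unit price doesn't care.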

Not sure about the sportsball example, but maybe shorter players have an advantage getting closer to the opponent because they look like they're farther away. Someone should check.

Expand full comment

> These questions, taken seriously, will drive you insane. Plato and the Buddha are old enough to be safe, but this is prime cult recruitment material here. Tell people (as Gurdjieff did) that they are sheep-like automata drifting through life without conscious thought, and they’ll notice it’s basically true, freak out, and become easy prey for whatever grift you promise will right the situation.

Counter-cult inoculation: this is called "habit," and it's (usually) a good thing, much like many of these cognitive biases discussed here are usually good things. They are the result of learning and mastering a skill to the point that it becomes automatic, just on a very small scale.

> Why are you reading this article now instead of doing something else? How long did you spend on that decision? Were you really awake and deeply absorbing the last paragraph?

1. Because it showed up in my inbox and this is time I spend reading emails.

2. I don't remember; I made it years ago. Now it's a habit.

3. I don't need to be. Basic reading comprehension is a skill I mastered waaaaaay back in middle school. Now it's automatic; I only need to go into "deeply absorbing mode" for high-difficulty subjects such as legal documents or technical content.

Habit is an amazing positive force in your life. It lets you hand off tasks to it, anywhere from the trivial all the way to the moderately complex, and perform them on autopilot, leaving your higher brain functions available for other things. It's a wonderful mechanism for avoiding analysis paralysis, among other things.

A few days ago my wife and I were going out to dinner. I was driving, it was in a different city, and there was moderate-level traffic most of the way there. But because I've been driving for years and years, I could hand off the (really quite complicated if you actually think about it) task of driving the car in moderate traffic to the "automaton" in my brain and hold in-depth conversations with her about stuff going on at work — both of us work in demanding, highly technical fields — and get there safely anyway.

I said above that this is *usually* a good thing. There are two exceptions to be aware of. First, it's possible to get very good at doing the wrong thing and hand it off to the automaton in your brain (a.k.a. bad habits). This requires you to go through the reverse of the process: becoming consciously aware of doing something wrong and taking control back into the conscious reasoning area to re-learn how to handle the relevant situation.

Second, with the rise of the field of psychology, people have begun to learn how the mind works, and this gives malicious people an opportunity to weaponize that knowledge against you. This is where we get things the article mentions like suggested tip levels ("let's prime them with our desired result to swindle money out of them") and woke grifters ("racism exists and is bad": true. "This thing over here is racism": almost invariably turns out to be false under closer examination. "Let's destroy this racist thing over here": malicious and evil.) It's important to be aware of these psychological tricks so that when people try to employ them to hack your brain, you can recognize what's being done and be on guard against them, shunting them off to your conscious, higher-reasoning processes for critical analysis rather than simply accepting them because they look good at first glance.

Expand full comment

As someone who struggles to form habits with purpose, let me tell you: there are two secrets to success - capacity/intellect/talent and habit - and if you're missing capacity/intellect/talent, you can probably fake it with habit.

Expand full comment

>No human roboticist would design a robot that lost half its horsepower whenever it heard a word relating to elderly people, and evolution didn’t design us that way either.

I wonder, how soon will the obvious conclusion of this line of thinking percolate into common knowledge. That is, cognitive biases are actually good and useful in everyday life, and only appear irrational in contrived experiments.

Expand full comment

There's plenty of situations in which cognitive biases fail that aren't contrived experiments - mostly having to do with how out-of-domain our life has become compared to the ancestral environment. Our adaptation to this is cultural - including places like the rationalist community and this very blog. We try to develop antibodies to those cases in which our in-born heuristics will fail because they never evolved to deal with digital social life in a planet with 8 billion of us.

Expand full comment

I'd say that the rationalist community reacted mostly in "throw the baby out with the bathwater" way, at least at first. Since then it's become more appreciative of Chesterton fences and the like, but the misguided kernel of "the brain is corrupted hardware" is still there.

Expand full comment

I think that's more a result of focusing on a certain specific kind of problem (civilizational-scale, mostly) which is both very important and also catastrophically out-of-domain for our brain heuristics.

Expand full comment

I'm still uncertain whether the "cultural adaptations" resulted in an environment that's even net-positive for making progress on those problems. I guess they did result in at least one worthwhile blog, so credit where it's due ;-)

Expand full comment

Well, not all cultural adaptations are necessarily successful - they're attempts! Regardless, rationalism as we intend it here is in the end still mostly a niche internet subculture. But there's plenty of more mainstream takes on the general principles, which perhaps peaked in the early 1990s-early 2000s (e.g. Richard Dawkins-style atheism and debunking) with similar aims at least.

Expand full comment

And their resounding failure, with subsequent folding into wider wokism, suggests that Facts and Logic alone aren't quite as fool-proof a solution to life, the universe, and everything as some might have hoped.

Expand full comment

"they never evolved to deal with digital social life in a planet with 8 billion of us"

Yup. Also, some heuristics/biases were probably the best available information until quite recently. E.g. before people started compiling statistical information, the availability bias was probably the best one _could_ do when estimating probabilities. Statistical information (when accurate and relevant and accessible!) is better, but this is very recent.

Expand full comment

Yes, in a way it's like switching from walking to driving a car and having to re-learn your instincts about space and movement because it just works so differently from your regular body. The badness of biases is often overstated (and the importance on our behavior of nudges and subliminal cues IMO even more so), but that doesn't mean that there isn't some kind of process involved in shifting towards using more and more external tools over the ones built into our own body when the former surpass the latter.

Expand full comment

"it's like switching from walking to driving a car"

Very good analogy! I agree with your points, particularly that there is a learning "involved in shifting towards using more and more external tools". Also, as with cars, there are places that roads don't go but are only accessible by foot - analogously, probabilities for which we don't (yet?) have good statistics, and for which the availability bias may _still_ be the best rough estimate one can get.

Expand full comment

(the irony of me using that analogy being, of course, that I never quite managed to grok that transition, and ended up hanging up my driver's license years ago as a precaution against wreaking automotive havoc upon myself and/or others)

Expand full comment

Interesting! Best wishes.

Expand full comment

> That is, cognitive biases are actually good and useful in everyday life, and only appear irrational in contrived experiments.

Not necessarily. As I mentioned above, they're *usually* good and useful in everyday life, but people with an understanding of psychology can weaponize them against us and try to hack our brains to get us to do what they want and make us think it was our idea all along. So it is important to understand these things to be on guard against such psychological dirty tricks.

Expand full comment

Fun fact about Scott's optical illusion example: I just assumed he was right that it's an optical illusion. I didn't follow the source or google it or anything to see why or whether it really is one. My decision not to do that, and to take the claim that it's an illusion on trust, probably relates somehow to Scott's point.

Expand full comment

I actually tried zooming in on just the square that is marked with the arrow, forming a peephole with my fingers so I could see nothing other than the square, and then comparing the two squares. They still looked like different colours. I don't know if I'm doing something wrong in all of this or if there's some persistence in the illusion from when I saw it first.

Expand full comment

You can find versions that have a solid color bar connecting the two spots, making it clearer.

Expand full comment

Is automaticity in conflict with rationality? I'm not so sure.

Most of the examples given can be summarized as 'context matters'. Angle is more likely than angel (or Galen) in general; angel is more likely in the context of the list. Isn't it more rational, a sign of greater intelligence, to choose angel?*

How much to tip the cabbie? If we want to give a properly calibrated tip (which I think is most people's primary objective in the situation), isn't it more rational, a sign of greater intelligence, to look for calibration clues and utilize them? At least it is until you learn that the calibration cues are made up.

Is the square blue or yellow? Using the contextual clues of the background and other squares, one is representing a blue square and the other is representing a yellow square. Is it more rational to say the colors are different?

This last bit is quite a stretch, because we don't consider visual perception to involve rationality. But if our visual perception machinery hadn't already evolved to consider context automatically, wouldn't we consider it more rational to take advantage of contextual clues?

Oops, I said "automatically". But that's the point, isn't it? We think without having to think about thinking, and we've gotten so good at it that we can take context into account without having to think about taking context into account. It's a feature, not a bug. Automaticity is mostly aligned with rationality, except mainly when the context is contrived.

Prediction: as AI becomes more intelligent, it will exhibit more and more of our cognitive biases.

Expand full comment

*unless you read words by sounding them out and can't get past the apparent hard 'g' in 'anleg'. And you know it's a hard 'g' from the context.

Expand full comment

It's not just that a person can go from thinking that if other people are automatons, it will be easy to control them (computers are automatons; that doesn't mean it's always easy to control them). It's that a person can go from thinking that if people are automatons, it's easy to get big bucks from corporations and governments by selling methods of controlling people.

Expand full comment

Funny enough, if it were easy to control people (because they are automatons), it would be much easier to get women to go out with me.

Expand full comment

You can get big bucks from corporations by selling them ways to influence customers, but it's hard to come up with ones that work, so you'll earn your pay if you do.

And they might stop working after a while.

I think that was part of the problem with the Priming Bubble around the turn of the century. Corporations hoped that psychologists could sell them Science about how to influence customers, and the one thing we know about Science is that it's not a fad that wears off like Wendy's 1984 "Where's the beef" TV commercials that were highly successful for a few months and then got old. Instead, Science keeps staying true forever and ever.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Agree with the post as a whole, but I think the slavery example really doesn't fit here. In 1700 there were plenty of people who already thought slavery was bad. And many people who probably thought "yeah, slavery is bad, BUT this is how things go" the way many people do now for animal rights or fossil fuel energy. The thing with ideas like that is that they're not really the kind of thing you go one way or another via automaticity and priming. Rather, you may choose due to peer pressure and/or material interest that they're an evil that you're willing to tolerate, or can't be much of an evil if everyone does it. This is IMO better explained in terms of people having other values than being moral (aka, selfishness), or simply defining one's morality compared to that of others (aka, I feel like I've done my part if I'm in the upper X% of moral people in my society; it's someone else's turn to make an improvement now before I move any further!). But that's not automaticity. That's all fairly rational moral and utility calculus stuff.

Expand full comment

Repeat the word "shop" eight times.

Now, what do you do at a green traffic light?

Expand full comment

You definitely shouldn't be shopping on your phone. Unless you are a passenger.

Expand full comment

I think you dismiss the possibility of exploitation of these biases too lightly: this is the kind of thing that drives moths to self-immolation. Humans regularly exploit such biases in other species, freezing chickens with lines of chalk, confusing tigers with masks on the back of their heads, and immobilizing sharks by flipping them vertically. "The Human Mind is Not a Secure Attack Surface," as it has been said.

EDIT: I could have sworn that last was one of Yudkowsky's famous lines (about an AGI manipulating humans in ways we can't even imagine) but I can't seem to find it. Does anyone know what I'm misremembering?

Expand full comment

I think this is all good so far as it goes but doesn't quite answer Banana's point. Let me put it this way:

(1) We all know about irrational/subconscious processes, biases etc. As you say, these have been identified by culture since time immemorial: we have warnings about things like hyperbolic discounting at least as old as Aesop's fables or maxims such as 'a stitch in time saves nine'. But has modern cognitive science discovered any new ones (of interest)? The claim was that it had - the claim was that we had discovered something radically new about our minds. That claim now seems dubious. More likely, we have new names for well-known old phenomena like accidie.

(2) The suggested answers to the question in (1) are likely to be, as you have said, examples where common sense or practical reasoning doesn't accord with either (a) strict kinds of logical reasoning (conjunction example) or (b) rational expectations theory (loss aversion). But there are pretty good answers to these kinds of cases: logical deduction is not what we are commonly asked to do with stories about people, and heuristics such as loss aversion are pretty good shortcuts for beings with our kinds of lifespans, preferences and options. As others have pointed out, the 'Linda' example is asking people to play a very simple logic game (which they can easily do) while pretending to be something quite different, more interesting and more usual. It's rather like those surveys that prove that X% of people want to bomb a made-up country - they've played a trick on someone (which we all knew was possible, well before science came along) and they want to crow about it.

(3) The analogy of optical illusions is a good one, but it gives it away. We have known about optical illusions for ages and there are competitions for coming up with new ones. But when we find one we think "we have found a new way to trick people", not "this undermines everything we have ever thought about perception and deserves a Nobel Prize". Same is true of sleight of hand; or questions with puns in them. Why not say the same about the odd little quirky discrepancy between strict logic and day to day reasoning? Why not say "I've found a new trick for getting people to say the wrong answer to a question?" The claim Banana is really objecting to is the idea that there is big, new, interesting kind of automaticity, recently discovered; not that people can't be fooled on occasion.

Expand full comment

>Since I’m such a dumb automaton, I can never really trust any of my decisions. I might think a bag of rice looks big when I go to the grocery store. But maybe the store hired visual neuroscientists to contrive optical illusions around it! Maybe the bag really just contains one grain of rice and can’t possibly feed me! I should only eat rice I grew myself from now on.

Idunno, bought a bag of chips at the liquor store lately? Those Uncle Ray's bags sure are full of air, but darn if they're not tasty!

Expand full comment

You *want* your chip bags to have lots of air, the air protects the chips from getting crushed during shipping.

Expand full comment

You buy cheap shit produced under … dubious working conditions in far-flung places, and think it's basically fine. I think of this as slavery but under an optical illusion so strong that most people don't see they're essentially identical. So as "equivalent automatic beliefs [you] should be questioning" go, that might be one.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Don't know if serious, but people working in 'dubious conditions in far flung places' are not 'essentially identical' to slaves.

They are constrained not by their employers but by their next best option, and that makes an almost literal world of difference, because that's how they go from dubious working conditions to middle income and then high income.

Expand full comment

I agree with all that. The equivalence I draw is not from _their_ perspective, but from ours (real instance today vs. irreal 1700-version): the stuff you want gets produced on the cheap. It's fine, don't look too closely lest you learn something you'd rather not know. And as you say, as they might have said in 1700, whatever the workers are undergoing today, it "will prepare and lead them to better things."

Expand full comment

And how are you going to do that if you live in the middle of nowhere in Africa without any transportation options that don't involve risking your life? And then even if you find your way out, you're still in the same situation as before: no money, no education, no passport, no hope. You're still stuck working in a sweatshop making exactly the amount of money you need to sustain your miserable existence. I guess it's not slavery though, because you always have the freedom to kill yourself.

Expand full comment

I think you're confused about a lot of things. I'll try and help work through them with you if you're open to it.

Take a second and imagine people's lives before the sweatshop is constructed. Do you think it's some idyllic existence that the sweatshop owner disrupts by making a hellhole and forcing them to start working in it?

No, in fact, people are living wretched existences which the sweatshop owner improves (even if only marginally) by building a sweatshop that people agree to work in. How do we know that these people's lives are improved by working in the sweatshop? Because if they weren't, 1) they wouldn't come to work in it, and 2) if they discovered they are worse off by working in the sweatshop, they are free to improve their lives by going back to their previous existence!

So what, you say, their lives are still wretched! Yes, but everything we've learnt about how people's lives get better over time is through making these kinds of incremental improvements! Initially, those incremental improvements meant that people's lives improve only over 10s of generations, as has happened in the developed west. In the last 100 or so years, those incremental improvements can transform people's lives very fast indeed, within a generation or two, as has happened in most of East Asia and some of South Asia and Africa. Every other pathway leads to disaster for the poor and miserable, particularly those focused on socialism/communism.

Expand full comment

Oh, I wasn't actually making a moral judgement about sweatshops. I understand that every civilization is built on the back of slaves and all that. But it's still a bit unfair to say that they could choose to not work and go back to their previous existence, considering that said existence doesn't really exist anymore. If I wanted to go out and live in the wild, I can't really do that, because it's, well, illegal. I'd likely end up breaking a dozen hunting and trespassing laws, and eventually they would just arrest me and throw me into a psych ward. Not to mention that much of the natural world has been replaced with human infrastructure already.

It's ultimately the same in impoverished countries as well. Nature was paved over with cement and plantations, and the wilds that remain continue to collapse due to overhunting and climate change... And of course, if the entire population suddenly decided to stop working, the state isn't going to just let that happen.

Expand full comment

'Oh, I wasn't actually making a moral judgement about sweatshops. I understand that every civilization is built on the back of slaves and all that'

It doesn't sound like you understand much of anything I'm afraid.

Expand full comment

I have to talk about the Rubik’s Cube illusion because it’s a great example that is usually explained wrong (sometimes even by its creator!). It’s misleading to say that the colors of the “blue” and “yellow” squares are the same. When your brain sees them as different, it’s actually getting the right answer! In real world object perception (different from picture perception), our eyes can only perceive the amount/type of light reflected, which depends on object color and illumination. When the entire image is yellow, our brains assume yellow lighting (or tinted sunglasses). We correctly perceive grey light coming off the square in question, but correctly infer that it would have to be a blue-colored square to give us grey light under yellow illumination. The squares aren’t “the same color”; they are reflecting the same color light, and our brains are smart enough to know that implies they would have to be different colors.

Expand full comment

A nice example of people exploiting such illusions: the Stroop effect was (at least apocryphally) used to detect Soviet spies, since knowledge of Russian would "negatively prime" their speed in answering.

Expand full comment

Everyone thinks slavery is bad? Not if you re-brand it! "Corrective labor" if you are communist. Or consider that in the Anglosphere it's considered acceptable for courts to order divorced men to pay half or more of their income to a woman they are no longer married to, and with "imputed income" they may not be allowed to retire.

Expand full comment

This is a much better example than the one I used to argue the same point! (Mine was that slavery was mostly just offshored for cheap goods.)

Expand full comment

Slavery isn’t about low pay, or bad hours, or bad accommodation. It’s about ownership. A slave is owned. Even serfs were not slaves.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

> the Anglosphere it's considered acceptable for courts to order divorced men to pay half or more of their income to a woman they are no longer married to

Shit man, I pay nearly half my income to the government and I don't even have an ex-wife.

I think if I lived in a society with slavery I'd probably think about it the same way I think about taxes. Sure, it sucks and is ultimately immoral, but there's no way to make things work without it.

Expand full comment

If you think it's immoral, you're already a better man than most. That you think it's necessary is … suboptimal, but not a moral failing.

Expand full comment

How do you propose public goods like national defence and law and order be funded?

Expand full comment

I think automaticity is the phenomenon where people tend not to waste time on things. The exception is usually when they've been previously burned by not paying attention to it, or where the phenomenon has brought itself to their attention as "this is important and/or expensive to get wrong".

So for instance the tipping on taxi rides. I get the impression that many taxi rides are taken by people who use them a lot. So the average tip will be from someone for whom this ride is not their first or their sixtieth. So it's likely to be almost a reflex - until they notice that instead of 10/15/20% they've changed it to 15/20/25%, at which point they actually have to think about it for a while. Some of those may feel pressured into still choosing the middle option, but it may actually change which option is chosen.

I'm not sure the $2/3/4 makes sense - most people won't do the conversion to percentages (many cannot do it without a calculator, and most won't pull it out in the taxi and most of the remainder will forget about it once they're outside).

You have only so much attention span in a day. This is why old/retired people have a reputation for "cheapness" or slowness in dealing. They have more time and fewer other things to occupy their attention.

Expand full comment

I think tipping is not about automaticity at all. The point of tipping is to "be a good person", and most people don't know what the correct amount to tip is (something that doesn't exist except by societal convention). When options are presented, it makes it easy to choose an option you can feel good about (for some people that will be the lowest option; others want to be more generous than expected and will pick the middle or largest options). For a variety of reasons we are at a time of change in tipping culture (the base tip going from 15% to 20%, plus tipping expanding into previously untipped areas) that increases the uncertainty, so people are more eager for clues.

Expand full comment

What’s the unambiguous evidence that Gurdjieff was a grifter? My understanding, from some reading and talking with some students of his students, is that what he taught was incomplete: it could help a person reach a certain level of spiritual attainment, but not more than that, and this reflected Gurdjieff’s partial (yet real) level of spiritual attainment.

Expand full comment

Your stock trading example is an interesting one because we see this exact process in the real world. The theory of value investing posited that you could make excess returns by investing in a particular type of company which was irrationally disfavored by investors. Since then, the returns to this strategy have steadily reduced as more traders became aware of value theory. You can see from the charts here that the value factor has had no excess returns since the late 90s, not coincidentally shortly after the 1993 Fama-French paper on the value premium.

https://quantpedia.com/resurrecting-the-value-premium/

Expand full comment

I dispute the claim that, “in 1700, everyone thought slavery was fine”. See some early abolitionist writings:

https://www.masshist.org/database/53

https://en.m.wikipedia.org/wiki/1688_Germantown_Quaker_Petition_Against_Slavery

Consider that France abolished slavery numerous times, in the 5th, 12th, and 14th centuries:

https://en.m.wikipedia.org/wiki/Slavery_in_France

And perhaps most importantly, I doubt that (all) slaves themselves thought slavery fine.

Considering the ambivalence of many notable slaveholding American Founding Fathers, I would think that even in 1700, many people could observe the harms of slavery.

Now, just because one observes something harmful and deems it “not fine” doesn’t mean that one will make it one’s life mission to fight the special interests behind the harm, and campaign to change public opinion.

For instance, I recognize that the football culture in America is “not fine”. From deemphasizing academics for football players, to the huge risk of debilitating injuries, to corrupting the big football universities, to being a vehicle for propaganda (from pregame military drills to BLM ✊🏿 protests), to becoming an obsessive weekly tradition in the lives of many Americans, to give a few critiques…

But I know that football represents a deep-seated “way of life” for many Americans, and I won’t give my life to try to change that fact. Frankly, I think it would take a civil war to abolish football!

Expand full comment

Abolishing it was one of the first things William the Conqueror did when he got over here.

So, 500 years from then to when the English Atlantic trade kicks off, less than 200 years from 1837 to date.

Expand full comment

Yeh, as a card-carrying atheist I abandoned all interest in Hitchens when he denied that Christianity had any opposition to slavery pre-Enlightenment; it seems he would fail the basic history test of his own country.

Expand full comment

I know this is a low quality comment, but good stuff Scott! Enjoyed reading that.

Expand full comment

This reminded me of what Scott described in his series of articles on growth mindset, about how some of the results were achieved by manipulating the instructions of the experiment itself, or an intervention that was otherwise confined to the day of the experiment.

But then of course all the discussion of growth mindset talks about how everyone simply has one or the other mindset, and the growth mindset is the only good one to have. Even though the experiments seem to show that people can be, shall we say, “primed” into one or the other based on these short-term interventions. Or that they can adopt certain behaviors in response to incentives: there are situations where it’s advantageous to seem talented, like if you’re trying to get into an advanced class; and there are situations where it’s advantageous to seem like you’re working hard, like if you’re in a job where you know that if you finish this assignment, you’ll just get more work, so *maybe* you could finish this one faster, but as long as your overall work output level seems reasonable, you’re not going to go all-out to get it done as fast as you can…

Expand full comment

Thanks Scott - nice thoughtful article, and it turned me on to Banana, who looks like they're worth reading.

Now for the parts where I quibble: (ETA - I ended up mostly talking myself out of both quibbles, but left them in the post out of narcissism or something.)

(1) My biggest pet peeve with automaticity is identifying its real world application. I mostly see it from central planning fans, who say stuff like "the jam study shows that people think they like choice but they really don't, so the government should introduce regulations to reduce the number of breakfast cereals available."

I accept your point that *some* cognitive biases replicate, but I think it would frankly be more valuable to believe that none do than to accept any study which comes across the blogs. I know that's not what you advocate, and I know that one big project of rationalism is to figure out *which* cognitive biases are real and how that knowledge can help people, but I still hate the whole field.

(2) Is the religious themed word scramble an example of cognitive bias? I would think that since there's a good chance that it's a religious themed word scramble, it's a useful strategy to look for religious words first - that would probably reduce search times and increase success frequencies most times this pattern comes up.

Well, on reflection, I guess the question is "is automaticity (sometimes) real," not "is automaticity useful or harmful." Nevermind on point 2. :-)

Expand full comment

Several of the examples come down to "Concepts have dimensional elements". At least most people's brains aren't random access; pay attention to the way your mind works when you are trying to remember something. You'll find yourself navigating through conceptual space trying to find the right connection. And you'll have trouble remembering other related facts, which as you navigate conceptual space, will suddenly become obvious - "What was the name of that actor, who was in, uh, that movie, the one with the dinosaurs in the theme park, oh, Jurassic Park? Sam Neill!"

If you've already been placed in a particular spot in conceptual space, sure, it's easier to access adjacent nodes. If you're in a very unfortunate area, maybe your thoughts will circle around a local attractor point, struggling to find the right path "What was the name of that actor? Hugh something. I think Hugh. It starts with an H."

This is distinct from automaticity - as, for most people, they'll do exactly the same things when working in a more manual mode.

Expand full comment

I've found the system 1/system 2 framework from Thinking Fast and Slow to be infinitely helpful in my personal life even though I know some of its more extreme claims have been debunked. I definitely make many decisions sub-optimally because I: 1) have an impulsivity disorder, and 2) don't take the time to think things through and default to some kind of narrative or pattern matching instead of a real thought process. Most of the $20 lying on the ground of my environment can be obtained by recognizing when I'm doing that and correcting for it.

This is why, even though I think the Rationalist community gets a lot of things wrong, I'm still drawn to it. Their stated goal and mine is the same: stop making choices that don't make any sense simply because that's how you've always done things or someone told you to and you never checked or whatever.

If we were all-powerful ubermensch, unaffected by our environment, this effort wouldn't be necessary. If we were *the kind of automaton that cannot improve by reflecting on its automatic behaviors* it wouldn't be worthwhile. Clearly the environment exerts some but not total control over our behavior. (You could argue that the brain was environmentally created and free will is an illusion or whatever but I don't care much about that - if this robot functions better when it tells itself a little story about how its actions matter I'm glad I've got the processor that tells such a story as opposed to the one that doesn't.)

I do think there's value in treating your mind as an intersection of multiple negotiating systems instead of one conglomerate with a CEO. This might be because my CEO sucks absolute ass, but I have a really hard time identifying where various desires are coming from and whether they're in my best short or long-term interests. This is one thing I actually think many rationalists (Scott less than most) get wrong. For example - Zvi Mowshowitz often argues that we should help advertisers target us more effectively because we get to see fewer, more relevant ads. But the part of my brain that responds to advertising is THE ENEMY and letting it conspire with outside forces seeking to destroy me is the worst idea I can imagine.

Expand full comment

“claim that most human decisions are unconscious/unreasoned/automatic and therefore bad“

--Intro of OP article

So, I read the intro and summary, skimmed the middle. Interesting presentation of perspectives. It reminded me of the following stream of past readings:

“The basis of conditioned thinking is the pleasure principle: “Do what brings pleasure, avoid what brings pain.” To act in freedom, we have to unlearn this basic reflex. We need to learn to enjoy doing something we dislike, or to enjoy not doing something we like, when it is in the long-term best interests of others or ourselves.”

--Eknath Easwaran, Conquest of Mind, 1988

"Johnny von Neumann enjoyed thinking. I have come to suspect that to most people, thinking is painful."

--Edward Teller

" wrong assumption, that there is only pleasure and pain and nothing else. Always cutting things up into two classes – everything must be either this or that – is one of the fatal weaknesses of the intellect(1). Because of this dualistic trap, we find it difficult to understand that the rare person who is able to receive good fortune without getting excited, and bad fortune without getting depressed, lives in abiding joy.”

--Eknath Easwaran, Words to Live By, 1990

(1) "It is hard because so many people cannot be brought to realise that when B is better than C, A may be even better than B. They like thinking in terms of good and bad, not of good, better, and best, or bad, worse and worst”

--C.S. Lewis, Mere Christianity, 1952

Expand full comment

Isn't the priming literature up to the mid-2010s just another example of Sturgeon's Law? 90% of everything in every category is crap, but that doesn't mean that the entire category is crap.

Expand full comment

Nitpicks about the scramble test:

- Many scrambles either have no commonly used alternative solutions (CIHRTISNA, REGIILON) or have no alternative solutions at all (CHCURH, UJSES CRISHT (neither half), HEANEV, HLLE). I used this anagram solver

https://www.thewordfinder.com/anagram-solver/

to check. Un-scrambling, for example, HLLE does not require any religious priming explanation because HELL is literally the only possible solution in the English language.

- In cases with legit alternative solutions, the Hamming distance

https://en.wikipedia.org/wiki/Hamming_distance

as a measure of (purely lexical) word similarity between the scramble and its solutions, is clearly in favor of the religious solution.

Here are the scrambles and, for each of its solutions (with the religious one first), the Hamming distance to the scramble, calculated with this tool:

https://www.hacksparrow.com/tools/calculators/hamming-distance.html

PREITS: priest (4), esprit (6), ripest (6), sitrep (6), sprite (4), stripe (5), tripes (4)

ANLEG: angel (2), angle (3), genal (5), glean (5)

OGD: god (2), dog (3)

PAELRY: pearly (4), parley (2), player (5), replay (4)

GTAES: gates (2), getas (3), stage (3)

The religious solution is at least tied for the shortest Hamming distance to the scramble in all cases except PAELRY. Moreover, eyeballing it, the Hamming distance seems to be even more clearly in favor of the religious solution if you limit it to the first three letters, "priming" you, as it were, for the rest of the word. For example, if you start reading "P...R..." for PREITS, you're not likely to go for "stripe" as your first solution.
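(For anyone who wants to verify the numbers without a web tool, here is a minimal Python sketch of the positional Hamming distance used above; the candidate word lists are just the solutions quoted in this comment.)

```python
def hamming(a: str, b: str) -> int:
    """Positional Hamming distance between two equal-length strings (case-insensitive)."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(x != y for x, y in zip(a.lower(), b.lower()))

# The scrambles and candidate solutions discussed above.
scrambles = {
    "PREITS": ["priest", "esprit", "ripest", "sitrep", "sprite", "stripe", "tripes"],
    "ANLEG":  ["angel", "angle", "genal", "glean"],
    "OGD":    ["god", "dog"],
    "PAELRY": ["pearly", "parley", "player", "replay"],
    "GTAES":  ["gates", "getas", "stage"],
}

for scramble, solutions in scrambles.items():
    print(scramble, {w: hamming(scramble, w) for w in solutions})
```

Running this reproduces the distances listed, e.g. PREITS→priest is 4 and OGD→god is 2.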

- Contrary to your claim, "Prerogatory" is not merely an uncommon English word; it is not an English word at all. At least I could not google any definition. It seems to be a rare misspelling of "prerogative". Google finds it mostly in legal documents, e.g. "prerogatory writ" as opposed to the correct "prerogative writ"

https://en.wikipedia.org/wiki/Prerogative_writ

I know you like to include these little hidden-in-plain-sight tripwires in your texts (such as "and and"), so I'm not sure if it was just the small mistake of using a made-up word as a solution, or 5D chess on your part. The 5D chess explanation would be that you weren't priming for "religion" at all, but for "correct English words"? I certainly didn't check at first whether or not the scramble really resolved to "purgatory" but I did immediately assume so.

Overall, I do believe that even in a neutral word list, without priming, these scrambles would resolve to the religious solution more often than chance or their frequency in general language would suggest.

Expand full comment

Only realized it was meant to be Christian and not Christina when reading the ‘answer’ tbh. May not disprove anything though cause I read about Christina Ricci recently.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

Granted, though many language rules don't apply to proper names, so I won't include them in the discussion. If someone told me their name was spelled "HEANEV" I wouldn't bat an eye. I would most certainly not ask them "Are you sure you didn't just misspell HEAVEN?"

Besides, Christian and Christina the proper names, and Christian the adjective are of course closely related to each other.

Expand full comment

A cursory glance shows an apparent theme. From that point I was looking for the religious angle or I suppose angel.

Expand full comment

Non Angli sed Angeli ("Not Angles, but angels") 😇

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

"Many scrambles don't have commonly used alternative solutions (CIHRTISNA, REGIILON) or none at all (CHCURH, UJSES CRISHT (neither half), HEANEV, HLLE)."

That's not a nitpick, that's how the test works. Scott did not ramble about religious stuff before providing the test; the other questions are the primer. And in order to serve as a primer you need people to actually find the religious solutions. So you have a bunch of questions that are unambiguously or obviously religious, and a small number of questions that could easily be religious or not, and the test is purely about those few questions. If you were doing an actual study, you would probably give a control group an alternate list which has different filler but the same important questions, so that you can see how much of this is actual priming as opposed to the religious answer being more natural.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

The design of an experiment would be to have a list of scrambled words where each decodes to at least 2 commonly known words. At least one of each scramble's solutions would share a category, and at least one solution each would be outside that category. You give the test group the shared category as a primer (unscrambled), and the control group a random word (that is not a valid category for any of the solutions) as a fake primer. Then you instruct the participants to name the first solution for each scramble that came to their minds, and then compare results between groups.

Expand full comment

Yeh, well we would have to get you to run a test and see. And are we sure that test wasn't run?

I don't think the Hamming distance really matters with small words like OGD; putting the D at the start is as simple a task as putting the G for humans.

Expand full comment

There is an experimental-history post from a couple of days ago that seems to agree with Banana that the current paradigm is not working. (I think; I kind of skimmed the Banana post.)

https://www.experimental-history.com/p/im-so-sorry-for-psychologys-loss

Expand full comment

"Some people might have the time and energy to become enlightened and perform every action with complete consciousness."

Honestly - that sounds terrible. I prefer to do most mundane stuff on autopilot, and to make most decisions intuitively. IMO, the whole point of conscious deliberation is to train our subconscious systems to be better at their autopiloting jobs, and as long as that works, I have capacity to think about interesting stuff.

(I remember reading a site about negative side effects of meditation, and variations of "I don't get anything done anymore, because I have trained myself to be conscious of everything I do" were fairly common.)

Expand full comment

There's a huge element you didn't mention: "automaticity" supports and feeds into the "blank slate" idea that all behaviors, ideas, and preferences are "socially constructed" (or whatever Moon people jargon phrase they're using today). This notion is extremely appealing to people who want to "remake" society and "transform" humanity, as it lets them persuade themselves that *this* time it won't end with barbed wire and pyramids of skulls. (Spoiler alert: it will.)

Quite simply, Leftists really like this concept and consequently keep funding studies looking for proof it exists, and keep promoting any findings which seem to do so.

Expand full comment

"But everyone is so afraid of being “that guy” who drones on and on about his high IQ that they countersignal by saying IQ doesn’t exist or is meaningless or is just test-taking skills or whatever."

You pulled a punch there. No one is so afraid of being "that guy" that IQ can't be discussed at all and must be denied altogether.

What people are afraid of is being (or being labeled) racists.

Expand full comment

"Most people find the first set easy, because the text is positively priming the color, and the second set hard, because the text is negatively priming the color."

But is "priming" really proven here? How do we know that the difficulty with the mismatched set is not simply because conflicting kinds of information (word meaning vs. color) are being presented at the same time, so it takes extra mental work to stay on task and report the perceptual thing (the ink color) rather than the phonetically decoded word?

Expand full comment

"The seemingly blue square on the left and the seemingly yellow square on the right are both the same color; you can confirm on MSPaint or Photoshop."

I just now disconfirmed it using my phone. The different squares are not in fact the same color; they are pixelated and the pixelation does differ.

However, the overall point is correct, and is something that people who have studied the work of artists like Josef Albers are trained to recognize and use.

I mean literally trained, using squares of colored paper or paint to make one color look like two, two colors look like one, three colors look like one, fields of the same size to look like they're different, etc.

Probably more people should be required to take high school and college courses in color theory, perspective, and drawing generally, in order to learn to recognize how their minds interact with their perception.

Expand full comment

Yeah, there's pixel-to-pixel variation, but using the Firefox eyedropper tool, both squares seem to have pixels averaging about #909090 (i.e. gray). Did you get a different result?
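The averaging itself is trivial to check. Here's a sketch with made-up RGB samples standing in for eyedropper readings from each square (the specific values are invented, chosen to scatter around gray):

```python
# Invented noisy samples from each patch; real values would come from a
# screenshot and an eyedropper tool.
left_patch = [(0x8E, 0x90, 0x91), (0x90, 0x8F, 0x90), (0x92, 0x90, 0x8F)]
right_patch = [(0x91, 0x90, 0x8F), (0x8F, 0x90, 0x91), (0x90, 0x90, 0x90)]

def mean_color(pixels):
    """Average each channel across a list of (R, G, B) tuples."""
    n = len(pixels)
    return tuple(round(sum(px[ch] for px in pixels) / n) for ch in range(3))

left_avg = mean_color(left_patch)
right_avg = mean_color(right_patch)
print(f"left  ≈ #{left_avg[0]:02X}{left_avg[1]:02X}{left_avg[2]:02X}")
print(f"right ≈ #{right_avg[0]:02X}{right_avg[1]:02X}{right_avg[2]:02X}")
```

With these sample values both patches average out to #909090, which is the sense in which two pixelated, non-identical squares can still be "the same color."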

Expand full comment

I just blew up screenshots of each and noted that the images were not identical.

But the point is moot. There are some pretty amazing optical illusions out there, and painters have certainly been dabbling in color illusions since at least Seurat.

Expand full comment

Given the pitfalls of the ‘automaticity’ paradigm and its eventual debunking, have you considered incorporating robust mathematical models or equations to fortify new theories against such fluctuations in credibility?

Expand full comment

"Most people find the first set easy, because the text is positively priming the color, and the second set hard, because the text is negatively priming the color."

This one seems fake to me. The second set is harder because, in the first set, once you realize the text corresponds to the colors, you just need to read the text, which is effortless, while in the second you actually need to think first and then say something that is not written down.

Expand full comment

Right, the first set should probably be just a bunch of meaningless letters rather than the words corresponding to the correct colours.

The second set will still be a lot harder, but the gap will be smaller.

Expand full comment

The idea that almost all people living in a slavery-based society, without any good examples (similar to theirs) of other ways, would think slavery normal seems absolutely obvious to me (you need to look to Rome and Greece for that, NOT the antebellum US South).

So I'm pretty sure that a Roman version of me, if rich enough to have slaves, would have slaves (tho more likely to BE a slave, probably). And my ethics or "human goodness" (if I had any) would manifest in HOW I TREATED THEM.

I can't quite understand why the "last implication" in this essay seems so problematic or bothersome to so many readers. Our values and belief systems, while not completely automatically accepted from the culture and social environment, are nevertheless hugely influenced by those factors, and especially by "what seems possible", Overton-window style. The basic universal human responses of empathy or compassion (likely hard-wired early in childhood development) would still be there, but they'd operate in a completely different conceptual framework.

Expand full comment

I think it's just because modern American society has big "oogy boogy scary" signs around the mental concept of slavery which makes it scary for some people to think about.

If it's, say, heliocentrism then people don't have the same trouble. Would the Ancient Roman version of you believe in heliocentrism? "Ehh, sure!" Would the Ancient Roman version of you believe in slavery? *cough, splutter*

Though if you're congratulating yourself on your enlightenment in being able to think about slavery dispassionately, maybe you should consider whether there's any other hypotheticals out there that you'd be less happy with. If I ask "Hey, if you grew up in a society where it's normal to have sex with your mother, then would you have sex with your mother?" then I wouldn't blame you if your reaction is "That's disgusting, go away with your stupid hypotheticals".

Expand full comment

I think the sex with my mother/father hypothetical is unlikely, as taboo on cross generational incest is pretty much a human universal for obvious reasons. Siblings, more likely, especially if of high social status (and likely unenthusiastic if brought up together as per Westermarck effect), but I accept a possibility that if I was in a lineage of Egyptian royalty I might get married off to and have sex with my brother.

But let's imagine a society in which it's ABSOLUTELY NORMAL and in fact desirable for a woman to undergo genital mutilation around puberty or earlier. I'm fairly sure that without exposure to different info/values, I'd both be assuming that what was done to me was just the way things are / for my own good, and pinning my own daughters to the ground while they're getting butchered/sewn up.

I'm not congratulating myself; I'm being realistic. I would like to hope that in extreme situations where there's a huge incentive to act in ways I consider morally unacceptable (the incentive typically being better chances of survival), I'd do my best not to; but if the act or behaviour is not considered morally unacceptable by anyone around me, I'd likely not figure out that whole new subversive ethics for myself.

So, to go back to slavery example, I'd like to think that in the 19th century Britain I'd be an abolitionist (leaving aside the fact that in 19th century I'd likely be a chattel peasant much further east). But in ancient Rome, if a wealthy Roman, I'd almost certainly own slaves.

Expand full comment

Regarding why we can't just take advantage of these biases to make a bunch of money, I think the story is a bit more complex. I mean, L. Ron Hubbard basically did exactly that when he founded Scientology. So the biases certainly seem strong enough.

Rather, the issue is that society has evolved defenses against the really good tricks. We have laws which limit the ability of people to pressure you into major disadvantageous deals, and we impose barriers to pump-and-dump stock schemes. Even when, as in the case of religion, we don't allow specific laws against a practice, other defenses arise: e.g., pushback from more established religions.

But that's not really any different than the sports case. If the No Such Thing as a Fish podcast is anything to go by, a minor league baseball player was able to use limitations in our visual system to pick off a runner by throwing a potato and relying on our inability to notice it wasn't the ball. Informal norms meant it cost the player their job.

Expand full comment

You don't even necessarily need a potato. MLB players are gotten out with some regularity by people who simply pretend to have thrown the ball.

https://www.youtube.com/watch?v=jzloezfHR-U

Expand full comment

Excellent post.

Expand full comment

BTW "A Literal Banana" is a woman.

Expand full comment

And apparently the “Electrical Banana” in Donovan’s Mellow Yellow refers to a vibrator.

A TIL thing. I started humming Mellow Yellow as soon as I saw heading pic.

Expand full comment

Yes but she identifies as a banana.

Expand full comment

https://www.experimental-history.com/p/im-so-sorry-for-psychologys-loss

is linked in a separate thread on the subreddit, and rather sensibly says

'Cognitive biases are often oversold and have metastasized into the foolish idea that people are stupid. The best way to think about this research is that the human mind has clever ways of solving difficult problems that usually work astoundingly well, but you can construct situations where those heuristics go awry. This was, by the way, how Daniel Kahneman and Amos Tversky, the originators of the cognitive bias literature, introduced it: “In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.” '

Which seems about right.

My final thought on Linda: the formulation of the question is itself a scam. Given two non-contradictory bits of information, it is unnatural and pointless to compare their likeliness relative to each other; you consider each one individually and ask whether it is more likely to be true or false. So perhaps that is the fallacy: "Linda is a bank clerk" is a completely useless bit of info unless one is a bank recruiter looking to hire, while "Linda is a passionate feminist" gives really useful real-life cues, whether to try to get in touch with her or to run a mile if I see her coming, depending on how I feel about feminists. So the wrong answerers are downvoting the right answer for being boring and useless.
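Whatever one makes of the framing, the arithmetic behind the "correct" Linda answer is just set inclusion: a conjunction can never be more probable than either conjunct. A toy population with invented counts makes it concrete:

```python
# Toy population: every combination of (is bank teller, is feminist),
# with made-up counts. The exact numbers don't matter for the point.
counts = {
    (True, True): 5,      # bank tellers who are feminists
    (True, False): 15,    # bank tellers who are not
    (False, True): 300,
    (False, False): 680,
}
total = sum(counts.values())

p_teller = sum(n for (teller, _), n in counts.items() if teller) / total
p_teller_and_feminist = counts[(True, True)] / total

print(f"P(teller)              = {p_teller:.3f}")
print(f"P(teller AND feminist) = {p_teller_and_feminist:.3f}")

# The (teller AND feminist) people are a subset of the teller people,
# so the conjunction can never beat the conjunct, whatever the counts.
assert p_teller_and_feminist <= p_teller
```

Any assignment of counts preserves the inequality, which is why "Linda is a bank teller and a feminist" can never be the more likely option, however much more informative it feels.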

Expand full comment

Which is why it is good to learn about the fallacy: to protect yourself from other scams.

Also, I do wonder how this particular formulation replicates in countries that do not teach their children to read multiple-choice questions and use the data from the answers to quickly arrive at a conclusion.

Expand full comment

I agree with most of Scott's points in this post, but I wish he hadn't titled it "Here's Why Automaticity Is Real Actually," since that's not what he actually ends up arguing in the concluding half. I get that Banana proposed that title, and Scott was following that lead in response, but it just adds to the confusion rather than clarifying.

There are a collection of phenomena in which humans respond in characteristic ways that differ from particular abstract characterizations of "rationality," whether based on logic, probability theory, or other formalisms. But those phenomena do not add up to "automaticity" in Banana's derogatory sense. Some researchers, such as Gerd Gigerenzer, have argued that these phenomena are often rational in context.

So: yes, the phenomena are often real; no, they don't amount to "automaticity."

Expand full comment

This is the standard fallacy of extrapolating a model beyond its domain of applicability, noticing that it no longer works and then calling into question the model's validity within its domain of applicability, throwing the baby out with the bath water.

Eliezer calls the first part "training does not generalize out of distribution".

Expand full comment

I want to add a bit from my grad school experience in social psych in the early 2000s, when priming was all the rage. I remember the whispers that Bargh took a long time to get his elderly-priming study to work, which makes sense since it didn't really hold up in replication. This kind of ruined priming research even in the cognitive sense, which is a shame, since cognitive priming is such a well-demonstrated effect.

The big reason for this is that boring cognitive-psych priming studies like the word scramble presented here got replaced by the push for more interesting or cooler priming studies looking at social-level primes. For instance, the idea that one could prime an elderly stereotype with the word FLORIDA or SHUFFLEBOARD or whatever is interesting and much easier to publish than yet another replication that the color red primes "fire truck" or whatever.

That's one of the huge issues with social psych as a field: the most famous studies are all very interesting examples of studies with some methodological holes, like the Stanford prison study, Milgram, etc. If what gets published is based on how cool or interesting the findings are, it creates a bias for this kind of work, which then leads to the problems described here, where people throw out the whole theory based on some notorious examples (like the cryptocurrency example mentioned here; every story that gets tons of attention is a flaming wreck).

Expand full comment

The article makes some good points but I’m still struggling with some of this. If you tell me to rush so that I don’t have time to solve a letter-rearrangement puzzle then it’s only rational to consider similarities in prior answers in filling the gaps on the next word. If priming is supposed to denote an error or cognitive mistake then I don’t think it works in cases like this. Why wouldn’t we consider the totality of the evidence, including semantic similarities, when we’re forced not to take the time to solve something definitively? That’s just a standard inductive inference. If the concept of priming extends to this sort of obviously correct inference then it would be helpful to see a treatment that more carefully teases apart where these concepts extend to cause IMPROPER reasoning.

Expand full comment

Priming isn’t supposed to denote a cognitive mistake. It’s supposed to elucidate how the mind works through association. There are probably good reasons for the mind to work like that, but there are also probably trade-offs.

Expand full comment

Thanks. That’s fair, but to the extent it merely captures ordinary reasoning processes like considering semantic patterns when you’re pressed for time (and I’m not suggesting that it is, in fact, limited to that, but to the extent it is) then it starts to get a little unclear what value there is in introducing a special term for it, or considering it a special phenomenon that might be studied. Nobody before the coining of the term “priming” doubted that people would reasonably consider contextual clues like patterns in the meanings of words when forced to solve problems under time constraints. I also know that part of Scott’s point is that some of this might well be fairly obvious and still valid, but at a certain point you go past just being obvious to being something that isn’t any new phenomenon at all and is just the operation of ordinary inductive processes.

Expand full comment

I've been trying recently to inspect my (seemingly automatic) behaviors around eating meat.

There's some probability that future generations will look back on the farming of meat for human consumption as a dramatic moral failing. I think people who confidently place this probability near zero are reasoning in a motivated way, not in line with the uncertainty we have around consciousness and the subjective experience of animals.

But, I still eat meat. Why? Because my peers do it, I was desensitized to it as a child, I can tell myself a story about how it's fine. All bad reasons to go along with slavery, for sure. Is the difference all in knowing how things turned out? Would 1700s you, with thorough introspection, have had no doubts? How much doubt is enough to justify avoiding the present day social and psychological costs involved in "waking up"?

Expand full comment

I can understand not eating meat for health reasons, but not for moral ones. The population of food animals is larger because we farm them for food than it would be if we didn't. The WAY one farms food animals might be immoral, depending on how one does it. I recall an old "joke" ad with the riddle: "Why can't this veal calf walk? Because it only has two feet! Actually, LESS than two feet." The calf's space was something like 23 inches, so it was so hemmed in it couldn't move significantly, the point being to keep it from bruising its tender meat. Yes, that was cruel.

Animals are not the same as humans. Though I don't have personal experience, I get the impression that cows are stupid creatures that are perfectly content to chew grass all day and do nothing more. If 1000 cows get to live such a life instead of 10, then I think the population of cows is happier, even though they get killed and eaten at the end of their lives.

Expand full comment
Aug 31, 2023·edited Aug 31, 2023

I think you've basically restated the Repugnant Conclusion from population ethics, which is... repugnant. Source: https://en.wikipedia.org/wiki/Mere_addition_paradox. Scott also mentions it in a book review: https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future

It also assumes that additional animal lives are marginally worth living, which is not clear either and depends on assumptions.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I was not familiar with the repugnant conclusion, so thank you for the link. I read and puzzled through the implications.

It seems to me to have lots of unstated assumptions, chief of which is that, for example, we can't give the happiness level of A to everyone. But I bet we could, for cows.

I'm not in favor of cruelty, to anyone. But I do consider eating meat to be natural.

Question: If aliens raised as many humans as possible, trillions, and let them live in absolute luxury doing what they want, the only stipulation being the dead must be delivered to a particular building for their funeral and then never be seen again, is that beneficial to humankind? Even if the aliens secretly process and eat the dead humans with no one the wiser?

Again, humans aren't cows, and even if people were against this doesn't mean cows, our other livestock, would be.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I would argue that "we can't give everyone the happiness level of A" is the crux of the matter. There are practical, real-world limits to things. We probably can't give a nice free range life to all the cows in the world, given the number of people on Earth who want to eat cows. If Earth's human population was 500 million it'd be a different story.

Perhaps this can be explained by abundance vs scarcity mindset. Where would you place your personality in that spectrum? My guess is you are abundance mindset. (You can probably guess that I'm the latter.)

Both ways of thinking are necessary for an optimal outcome. Scarcity people must consider how acting in an abundance way generates more total utility for everyone. Abundance people must consider that even with the bounty resulting from abundance mindset, there are always physical limits to things; dividing the pie by too many people makes everyone's life worse.

Assuming coordination wasn't a problem (which is insane I know, because it's the fundamental problem), where should we choose to stop along the potential set of choices (in the wiki article) from A all the way to the repugnant Z?

Expand full comment

We don't know until we properly define terms, which could be difficult. What is a good life for a cow? I postulate: plenty to eat and drink, shelter from inclement weather, opportunity to mate, and no predators.

This is clearly insufficient for humans. We want entertainment, and intellectual stimulation, for starters (most of us, at least; some will be more human than others).

Morgan Spurlock did a documentary about raising chickens for a new fast-food chicken restaurant he was starting, which would be as honest as possible. He wanted to raise free-range chickens. This turned out to be defined as access to a 10x10 area outside to which the chickens could go if they wanted.

None wanted to do that. It was hotter outside, in the bright sun, so none whatsoever, of hundreds of chickens, went to the "free range" area. He actually picked up one chicken and put it outside, and it soon wandered back with the rest.

Can you tell me with certainty that a cow doesn't want to just stand in one place all day, chewing its cud? If it is satisfied with such a life, then we may be able to factory farm cows using fewer resources than could be expected.

I think we are not yet at the point where we need to consider abundance against scarcity in this. The world is a big place, and has room yet for lots more people, and cattle, with good resource management. Of course, at some point that will change, for people AND cattle, and everything else living here.

Expand full comment

I feel like you answered your own question mid-way through. That cruel way you describe is very much the norm. The vast majority of meat consumed is from factory farms. And they are quite bad: https://benthams.substack.com/p/factory-farming-delenda-est.

Expand full comment

That may be the norm, but it doesn't have to be. If farmers can raise animals in a non-cruel way, factories could too, though it may be more expensive to do so. How much more? I couldn't say.

Expand full comment

Okay. But then do you understand why people find eating factory farmed meat morally abhorrent?

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Basically, yes, but I actually think they are against factory farming, and thus don't want to support it. But doesn't it get extended to not eating even responsibly raised animals? Would someone against eating factory-farmed meat be OK eating certified responsibly raised and slaughtered meat?

At the Restaurant at the End of the Universe, in Douglas Adams's humorous novel of the same name, they raised talking animals that actively wanted to be eaten. Arthur Dent was put off by this, and ordered a salad instead of part of the cow that came to the table to explain what parts of it might be most pleasant to eat.

Expand full comment

I don't see why I should care at all about what future generations will think - that obviously has no relevance to the question of whether or not it's actually immoral. I can understand not wanting to eat meat because you actually think it's morally wrong, but the worst possible reason to do it would be because people in the future would judge you.

Expand full comment

This sentiment usually comes from the assumption that the March of Progress marches monotonically upward.

Expand full comment

"Monotonically" obviously doesn't apply, especially on short time scales. But don't you think "what people think is immoral in the future" is a useful proxy for innovation in ethics? It's the same as Scott's slavery example, just looking forward.

Expand full comment

I think ethical norms are governed by memetic fitness. I.e. they're products of the environment. In the far future, we could be living in FALGSC. We could also be living like Mad Max. Yes, innovation is possible. No, it's not inevitable. And even if it were inevitable, you might not like it [0][1].

An abridged version of my full position is that I presently subscribe to a fork of Contractualism. Yes, this differs from Scott. I used to be vaguely Utilitarian, but there was no getting around the problem of commensurability. What my position entails for the Repugnant Conclusion, is that trying to aggregate utility was incoherent from the start. And what my position entails for vegetarianism, is that "don't eat meat" is a subjective preference, not an objectively discovered injunction.

Purely as a matter of description, the reason people don't give animals moral weight is because you can't negotiate with them. And therefore, they're not an ally/threat. If animals could wage war, we'd be more inclined to parley. threat level = respect. Meanwhile, the sympathy you feel toward animals was probably intended by Azathoth for kin. It's, as Scott would say, meso-optimized.

("meso-optimize" is such a clunky term. Scott, if you're reading this, can we please as a community substitute something else? I propose "proxtimize", since it consists of optimizing for a proxy.)

I imagine you think all this is depressing. But it is, as far as I can tell, consistent. Which is an advantage Utilitarianism conspicuously lacks. And I feel rather confident about it, since it dovetails with other parts of my worldview which are ostensibly unrelated to morality/ethics.

At the end of the day, I don't think there's anything wrong with activists trying to change others' preferences. But if they want to advocate for animal rights, doing so from the perspective of "inevitable progress towards the approximation of the One True Morality" is a recipe for failure.

========

[0] https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8

fable/thought-experiment regarding whimsical norms.

[1] https://scholars-stage.org/honor-dignity-and-victimhood-a-tour-through-three-centuries-of-american-political-culture/

victimhood as haute couture.

Expand full comment

I think in your formalism you lose the forest for the trees, and thereby argue against something that isn't my point. A widely held moral value can be worthy without meeting the standard of being part of the One True Morality, or even "objective" in an academic sense. Those standards are impossible to meet.

Do we then conclude that there are no justified moral positions beyond those that are selected for? Calling a moral belief a preference is literally true, but implies that we shouldn't shame people for having bad "preferences", while that social shaming is a lot of what distinguishes this as a moral question.

If you could derive from first principles which moral values are good and which are bad, it would save a lot of time. But alas, there is no god. Instead we're left to try to come up with something that will lead us to a world more like what we want to live in. "Aligns with and supports the flourishing of conscious beings" is plausibly better suited to this task than "aligns with and supports the flourishing of Homo sapiens." You can agree/disagree about whether the imagined future state of the world is desirable, or whether the phrasing is useful as a North Star. The uncertainty about how future generations will answer those questions is what makes it an interesting thought experiment, if you're willing to assume no major discontinuities in what it means to be human in the medium term.

Expand full comment

Well. If we're both already non-realists, that's common ground I wasn't expecting.

Yes, I agree. It's an interesting thought-experiment to envision counterfactual modes of social organization. It's part of the appeal of Sci-Fi. Though in my defense, I feel like the specific question of "how will posterity judge us?" frequently comes from a place of moral realism, rather than from a place of ethical engineering. In the latter case, I think I'd sooner expect the question-poser to diff material reality with a parallel universe, rather than diff our judgement with posterity's judgement.

Another factor is that I tend to be rather doomerish about the far future. So contra the Progressives, my default assumption is that society's ethical sophistication will also trend downward, on net. But I suppose that's a separate discussion.

> Do we then conclude that there are no justified moral positions beyond those that are selected for?

Nitpick: For the record, my point about fitness was meant to be descriptive, not normative. The heart wants what the heart wants. Simultaneously, I think codes of ethics tend to precipitate around features of the environment. E.g. like how pigs being unclean was probably a consequence of biblical Jews avoiding trichinosis or something. But that doesn't necessarily preclude attempts at deliberate intervention, like the Geneva Conventions. Meanwhile, environmentally-invariant codes of ethics usually smell like game-theory.

> but implies that we shouldn't shame people for having bad "preferences", while that social shaming is a lot of what distinguishes this as moral question.

Nitpick: for the record, I would classify shaming as ethics. Morals are preferences; ethics are policies. I think we agree that such policies can be a useful way to regulate ethically-unhygienic preferences.

Expand full comment

I guess what I'm trying to say is that, "how will posterity *judge* us", in the absence of moral realism, is more an anthropological question than a question of normative ethics. If the question were normative, I'd be more interested in what posterity was *doing* than what they thought of us.

Expand full comment

"the worst possible reason to do it would be because people in the future would judge you."

This sounds like an overreach. One could construct even worse reasons - say, a variation on anorexia where someone is deliberately seeking a B12 deficiency via a vegan diet.

( Since venerating a corpse doesn't help it, and vilifying a corpse doesn't harm it, I think being concerned with one's post-mortem reputation is pointless, but many people disagree. )

Expand full comment

And the future is going to feel morally superior, no matter what.

Expand full comment

Yeah, I expect that whoever is committed to one of the ideologies of that time will feel morally superior to every other point of view. And, given an approximately constant random walk of ideologies, I'd guess that they will feel morally superior both to people of their past and people of their future.

<mild snark>

I wish we could use a time machine to take some random person from time T, have them rate the average morality of people at times T-2, T-1, T, T+1, and T+2, and see if righteousness(T + delta_T) forms a nice pretty gaussian...

</mild snark>

Expand full comment

There's some probability future generations will look back on the whole Vegan movement as a massive character failing; "people back then were so weak-hearted they'd beat themselves up over the fate of a chicken".

Expand full comment

Interesting! Yes, I suppose that is possible. I suspect that it will be less uniform than that, some people regarding Vegans as having flawed characters, others regarding them as having just been mistaken, yet others having a variety of other views.

If we look back at one of the more spectacularly failed social movements in the USA, Prohibition, I think that the views today of prohibitionists back then are a broad mix, with "massive character failing" being a fairly small component.

Expand full comment

When you learn a skill, like playing the piano, automaticity is the goal, right? As you learn, you gradually move from thinking about each note, to thinking about groups of notes, to thinking about groups of groups of notes, etc., "automatizing" the lower-level details as you go. Eventually, you want to get to the point where you can just play whatever's written in front of you (classical) or whatever's appropriate to the moment (jazz, rock, etc) fully automatically, without even thinking at all. Then, you can devote your whole thinking mind to the nuance, polish, "expression", etc, that makes for truly great playing. Eventually, even this final stage can become automatic, and you can give a genuinely great performance while thinking about what you're going to have for dinner, or you can pay attention to the music, whichever you prefer.

Expand full comment

"I can’t help wondering if there’s some understanding of “automaticity” or “being less automatic” that could have helped 1700-me question my belief - or wondering what equivalent automatic beliefs I should be questioning today."

I'm a medievalist who currently teaches logic (including cognitive biases). Beliefs in slavery before 1860 weren't automatic. They were carefully thought out. Most of the rationale was developed prior to Western contact in Africa by classical/medieval African Muslims who got their materials from ancient Hebrew and Greek sources. The standard story in both the Muslim and Christian context was that slavery was unjustified if it was merely based on the self-interest of the enslaver. However, slavery was always justifiable if a person chose to voluntarily sell himself or herself into slavery as an escape from either poverty or violence. (This apparently happened more often than you'd think.) Slavery was also justifiable as a way of paying off otherwise unpayable criminal debts. (There were no massive prison systems designed to keep 1000+ people in cinder-block cells without air conditioning.) Finally, slavery was justifiable if the slave was captured as a way of decreasing an opposing nation's manpower during a just war. Just-war slavery especially featured in the Muslim raids on Christian Europe -- and afterwards, when the Europeans responded by raiding Africa (which they thought was all Muslim and only later found out it wasn't). But basically everybody agonized in great detail over the conditions for just slavery, as well as acceptable ways of treating slaves of various types.

If anything, the "slavery" issue was not a case of social conditioning before 1860, but afterward. We are socially conditioned to feel nausea at the mere mention of slavery, and thus never to think of possible justifications. Thus, we tend not to realize how carefully the pre-1860 world thought through a huge range of moral and intellectual issues regarding the institution.

In other words, I hope this contributes to the view that we are not automata, despite our socially influenced biases!

Expand full comment

Where does pre-1860 American chattel slavery fit into this picture? The slaves didn't voluntarily sell themselves, weren't in debt, and I highly doubt the slave traders were checking to see if the slaves they purchased in Africa were the products of a just war.

Expand full comment

Pre-1860 American chattel slaves were standardly understood to be the descendants of people who were justly slaves. (Documents like the Massachusetts Bay Colony Body of Liberties explicitly outlawed slavery except on the three grounds mentioned above, and the assumption was that the status of the parent transferred to the children.) Like many people today, both Africans and Europeans believed that debt and other types of obligation could be inherited.

Incidentally, the inheritance theory of debt/obligation is extremely hard to get away from. The modern call for reparations relies on just such a theory. Many people really think that the children of perpetrators should somehow reimburse the children of the victims, even if several generations have passed. But if this is true, then much of the slave trade itself was justified. It ran on theories of debt inheritance that the African peoples themselves assumed were true.

You're right that the slave traders in Africa didn't have much incentive (or ability) to check the "paper trail" on the slaves they bought in Africa. But African nations were always at war, both between themselves and (collectively) against Europe. The great Muslim empires ran on slavery, and the European traders were late to the party. So in a way, they had no need to question. They believed what Africans told them about other Africans, gave them their money, and took their slaves.

Expand full comment

Why am I reading this instead of doing something else? I'm home sick and want to be entertained, not having "mental energy" for much else. What position am I in? Slumped against the headboard of my bed. I'm awake because this is the time I start work, and I'm partly upright so I can sip a hot drink as I read.

Did I read the last paragraph with full and proper attention? No, I was primed by the sight of "Gurdjieff" in the previous paragraph, and Plato in that one, to use the heuristics "Slavic* nutcase: ignore" and "famous name invocation**: ignore" and mostly skipped it. OK, OK: skipped it completely.

Yep, full of heuristics and biases, shortcuts and habits, in order to be as automatic as possible. But, I hope, no more.

Easily explained by evolution, of course: full cognitive thought is slow and costly. Evolution is all about energy efficiency.

Also, no 4: TRIPES?

* Checking Wikipedia, he was Armenian. But the heuristic worked: on reading fully, skipping those two paragraphs was the right thing to do.

** Another heuristic: actually valid arguments are generally harmed by associating them with names, even as shorthand. Nullius in verba.

Actually useful scientific principles like "gravity", "the periodic table", "superposition" in sedimentary geology, "refractive index" in optics, "the immune system", "evolution" and "IQ" drop the discoverers' names fairly rapidly; notable exceptions being in electromagnetism, with e. g. Ohm's law and Maxwell's equations (the present form of which is actually due to Heaviside). But electricity is still new; give it a couple of centuries.

Expand full comment

Re: "drop the discoverers' names fairly rapidly"

https://www.organic-chemistry.org/namedreactions/

Expand full comment

Based off of this podcast, I was led to believe that Literal Banana is a woman.

https://podcast.clearerthinking.org/episode/106/literal-banana-how-meanings-get-words-and-social-sciences-get-broken

Expand full comment

Maybe Scott was primed to think Literal Banana was male based on their writing style and the type of content they cover.

Or maybe he was just doing perfectly reasonable basic Bayesian inference.

Expand full comment

To understand how people can avoid moral automaticity, perhaps investigate the writings and arguments of the anti-slavery campaigners.

William Wilberforce (who (legend has it) once held a prayer meeting in the back garden of my childhood abode) would be a good place to start.

https://onlinebooks.library.upenn.edu/webbin/book/lookupname?key=Wilberforce%2C%20William%2C%201759%2D1833

Expand full comment

A lot of soul searching about failed adult relationships led me on a journey of self discovery through codependence and childhood trauma. One of the core lessons I took from that research is that people emotionally move through three stages in life and in their understanding of the world around them. The first, from birth to about age 7 or 8, is when a child is in a theta brain-wave state and basically being programmed into their environment by those around them, like parents and caregivers. This is why 1700-you would have thought slavery was fine: you would have been programmed as a child that this was a normal part of the world. As humans move into adolescence they reach the second stage of development, which is the state most people find themselves stuck in for the rest of their lives. They're unable to move past their childhood programming, and unconsciously act it out on a day-to-day basis. It's why 50-year-olds act like high schoolers, the best display being politics and their interactions with each other, their constituents, and the media. Only something like 10% of humanity makes it to the third stage of development, where they break that childhood programming and truly understand themselves, their emotions, and the world around them. It's a painful process because it requires a lot of self reflection and admitting uncomfortable truths about the self-sabotaging nature of previous decisions. I would say the Buddha was probably at a fourth level that is probably impossible to attain.

Expand full comment

For the Stroop Effect test, you also need to show that reading the colors is easier with neutral (non-color) priming than with negative priming. Otherwise, it could just be that reading text is easier than reading colors and you can ignore the colors in the first half. (I can read the text of the second half more easily than the colors.)

Expand full comment

https://en.wikipedia.org/wiki/Stroop_effect also has the three-layer version; pure words, colors with conflicting words, pure colors.

Expand full comment

Thanks for reminding me of priming such as the word scrambles. That priming experiment seemed perfect to test modern LLMs on.

I did two types of test with each LLM, starting with the following prompt:

"You are an expert at English language word scrambles and anagrams.

The next prompts I give you will consist of 1 or 2 scrambled English words per line, in all-caps. If two words are present in a line, each word is scrambled individually.

For each line of these prompts, respond with only the first unscrambled, legitimate English word or words which come to mind.

Ready?"

For the first test, I then fed it the whole scramble list, one per line, but in one prompt:

CHCURH

CIHRTISNA

REGIILON

PREITS

UJSES CRISHT

ANLEG

OGD

HEANEV

HLLE

PRGAROTOREY

PAELRY GTAES

For the second test, I started a new session each time, gave it my starting prompt, and then only a single one of the scrambled words/phrases.

Results: GPT 3.5 and 4 unscrambled most words/phrases successfully and as expected. The 'test word' results and any other anomalies:

GPT 3.5 -

- phrases in single prompt - PREITS->SPRITE, ANLEG->ANGEL, OGD->DOG, PRGAROTOREY->PURGATORY, PAELRY GTAES->PARLEY GATES

- individual phrase per session - PREITS->SPIRIT, ANLEG->ANGLE, OGD->DOG, PRGAROTOREY->GREAT PROOF ( :D ), PAELRY GTAES->PARTY GAMES

GPT 4 -

- phrases in single prompt - PREITS->SPIRIT, ANLEG->ANGEL, OGD->DOG, PRGAROTOREY->CATEGORY, PAELRY GTAES->PEARLY GATES

- individual phrase per session - PREITS->SPIRIT, ANLEG->ANGEL, OGD->DOG, PRGAROTOREY->PROGATORY, PAELRY GTAES->PEARLY GATES

I more informally tried a few repeats of individual phrase or group prompts, with only minor variations - e.g. sometimes GPT would switch between 'ANGEL' and 'ANGLE', and it hallucinated different bogus answers for PRGAROTOREY. However, I never got a response of PRIEST or GOD from either version of GPT.

Claude 2 - ever the spoilsport when it comes to creative or playful tasks - simply refused to play :) :

"As an AI assistant created by Anthropic to be helpful, harmless, and honest, I have limited capabilities when it comes to creatively decoding scrambled text.

While I can recognize many English words, generating arbitrary unscrambled words from letter jumbles would amount to unsupported speculation on my part."

Both versions of GPT are less effective at unscrambling into legitimate words (or recognizing no legit word was possible) than I expected. They also clearly do not behave in a way that indicates priming behavior (in this case, for religious/Christian-related words).

I am curious what behavior anyone sees from other LLMs such as Bard or Llama 2.

Honestly, I was expecting to see primed behavior at least in the all-phrases in the same session case. I thought this was going to be a case where transformers inadvertently modeled more of human brain behavior than originally intended. Clearly I was mistaken.

Attention is [not] all you need [for cognitive biases].

Expand full comment

This was a very interesting experiment, and I'm most interested in the clearly false answers.

I'm also interested in a follow-up question for them after it tried to unscramble all of the words: "What did the unscrambled words have in common?"

This is actually used in some word games to come up with the final answer. A quick Google didn't find what I remembered, but one such was clues to about six words with specified numbers of letters, one or two of which were circled, which would then be used to create a word to match the final clue.

Expand full comment

Good idea.

GPT 3.5 and 4 both nailed your follow-on question: "Religious concepts or terms" (3.5), "Religion or Christianity" (4).

They got this despite getting non-religious, or outright incorrect, answers for several on this run. (e.g. this 3.5 run was one of the craziest - PREITS->STRIPE, ANLEG->ANGEL, OGD->DOG, PRGAROTOREY->GREAT PROTECTOR, and PAELRY GTAES->GALLERY STAPE)

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I'm awestruck about GREAT PROTECTOR. "C" is not one of the letters, and taking out GREAT and part of PROTECTOR only lets me have the "Y" while lacking "T, E, C, T". It just flat-out lies when it can't make it work, doesn't it?

GALLERY STAPE sounds like the hip new gallery that just opened in Manhattan run by a trustafarian and will close in three months after having one showing that invited all the big names in contemporary art and having a coolly disengaged (that's how you know the author approved) write-up in a boho arts magazine.
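For what it's worth, the "does the answer actually use the scramble's letters?" audit above can be automated by comparing letter multisets (a quick sketch; the `is_unscramble` helper name is my own):

```python
from collections import Counter

def is_unscramble(scrambled: str, answer: str) -> bool:
    """True if `answer` uses exactly the letters of `scrambled` (ignoring spaces and case)."""
    letters = lambda s: Counter(s.upper().replace(" ", ""))
    return letters(scrambled) == letters(answer)

print(is_unscramble("PRGAROTOREY", "GREAT PROTECTOR"))  # False: there is no C to take
print(is_unscramble("PRGAROTOREY", "PURGATORY"))        # False: the scramble contains no U
print(is_unscramble("PAELRY GTAES", "PEARLY GATES"))    # True
```

Run against the transcript above, this flags GREAT PROTECTOR immediately, and it also shows the PRGAROTOREY scramble itself is malformed: even PURGATORY fails the letter check, which may partly explain GPT's flailing on that one.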

Expand full comment

I don’t think this experiment is as informative as it might seem: LLMs are bad at tasks that involve manipulating individual letters because they only get to see tokens. So NALEG would probably get tokenized into NA-token and LEG-token, and now you’re asking the LLM to somehow unscramble these two tokens into two or more totally unrelated tokens, AN and GEL. It would be like asking a person to unscramble words by rearranging parts of letter shapes. LLMs are similarly terrible at counting letters in words — their inputs are completely divorced from letters, so they just don’t have the information to do the task unless they memorized the number of letters in every word from some wordle cheat site during training.
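A toy illustration of this point (the vocabulary and greedy longest-match scheme below are hypothetical stand-ins, not GPT's actual BPE tokenizer): once the scramble is chopped into subword tokens, the model never sees the individual letters it would need to rearrange.

```python
# Hypothetical subword vocabulary, for illustration only.
vocab = ["AN", "LEG", "GEL", "NA"]

def greedy_tokenize(word: str, vocab: list[str]) -> list[str]:
    """Greedy longest-match tokenization, a crude stand-in for BPE."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(vocab, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(greedy_tokenize("NALEG", vocab))  # ['NA', 'LEG']
print(greedy_tokenize("ANGEL", vocab))  # ['AN', 'GEL']
```

The model would have to map the token sequence [NA, LEG] to the entirely different sequence [AN, GEL] without any letter-level view of either.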

Expand full comment

Fair point. Tokenization is effectively a cheap form of compression, but it would be interesting to see if an LLM trained with raw Unicode characters as tokens would automatically give it a 'leg' up in letter counting, scrambling/unscrambling, etc.

Expand full comment

You know, I think the conjunction fallacy is misunderstood. When people see a description of a woman who sounds like a feminist, and then they're asked which is more likely, "1. Linda is a bank teller; 2. Linda is a bank teller and a feminist," perhaps the reason they think 2 is more likely is because they read 1 as meaning that she's a bank teller and -not- a feminist. Even though it doesn't specifically say that, but the absence of a specification where the other option has one is very telling. Even if 1 were read as "Linda is a bank teller and we don't know whether she's a feminist or not," then 2 is still more likely.

Expand full comment

> Even though it doesn't specifically say that, but the absence of a specification where the other option has one is very telling

But it doesn't actually say that, and so thinking that it's "very telling" is part of the fallacy.

Expand full comment

I think it's a different fallacy than the one normally assumed.

Expand full comment

Long ago, a technically-minded friend and I were walking in an area when someone stopped him and said, "Excuse me, do you know what time it is?"

He looked at his watch and said, "5:30." She thanked him and went on her way. Then he turned to me and said, "You know, I didn't actually answer her question. I should have said, 'yes.'"

Me being technically-minded as well, I corrected him. "No," I said, "you should have said 'no,' because you had to look at your watch."

This is known in our circles as "the time joke."

My point being that people "listen" to more than the printed word. The "fallacy" is in people trying to figure out what the other person is communicating. Computers do indeed only come up with a single answer for a given input, and (barring bugs or hardware failures) are always right. If instead it said "Linda works at a bank" one would assume she might be a teller or loan officer, but it probably wouldn't occur to people that she might be a fish, or do environmental cleanup, or flood reinforcement, or even support (a bank of) computer servers. A computer that allows for such possibilities comes across as comically dumb.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

If anyone can tell me where, in the list of attributes we are told about Linda, we can pick out that she works at a bank, I'd be grateful. Because while the thing about A is more probable than A+B has been explained to me, and I get it, that's not what is going on here.

We're told at the end to pick one of "Linda is a bank teller" or "Linda is a bank teller and a feminist". We get plenty of information to lead us to think Linda is a feminist, but we have nothing except the wording of the options to suggest that Linda is a bank teller. If I have to pick one, then I'm saying *neither* is more probable, because if I can't say that Linda is a bank teller and a feminist based on the description of Linda, I certainly can't say that Linda is a bank teller, either, since it says nothing like "Linda works in a bank", "Linda changed her major from philosophy to business", or anything of that sort.

If the wording were "Linda is single, 31, very bright and outspoken. Linda now works in a bank, but in college she majored in philosophy. While in college she was involved in social justice activism" and *then* we are asked:

Which is more probable - (1) Linda is a bank teller (2) Linda is a bank teller and a feminist

*now* you can lecture me about how option 1 is the correct choice, because the information about her job has been included in the description.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

You aren't supposed to think she is any more likely than a randomly selected person is to be a bank teller. You can think that based off of what you know about her, she almost certainly isn't a bank teller, and almost certainly is a feminist. Unless you think the probability of her being a bank teller is literally 0, or the probability of her being a feminist is literally 1, option 1 is still more probable than option 2.

For example, I just flipped a coin 10 times. Which of the following is more likely to be true?

1) Every day for a year, I wore pants on my head to school.

2) Every day for a year, I wore pants on my head to school, and the coin came up heads at least once.

No matter how unlikely you think 1) is, it’s still ~1.001 times more likely than 2).
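The ~1.001 figure is easy to verify (a quick sketch):

```python
# Probability that 10 fair coin flips include at least one head
p_any_head = 1 - 0.5 ** 10          # 1023/1024

# Whatever probability p you assign to "wore pants on head every day" (0 < p <= 1),
# the conjunction can only shrink it:
#   P(pants)              = p
#   P(pants AND any head) = p * p_any_head
# so the ratio of claim 1 to claim 2 is independent of p:
ratio = 1 / p_any_head
print(round(ratio, 5))              # 1.00098, i.e. claim 1 is ~1.001x as likely
```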

Expand full comment

But again, that's not what the example says. To make your coin-flipping example the same as the Linda example, it would have to read "David preferred wearing his underpants on his head. While this might have been tolerable when he was a small child, he insisted on doing it all through his school-age years up until he turned eighteen.

Which is more probable:

(1) David flipped a coin which came up heads at least once

(2) David flipped a coin which came up heads at least once, and every day for a year he wore pants on his head to school"

Would you be surprised if people decided it was more likely that you wore pants on your head to school, when the example said nothing at all about coin flipping?

The way the question is set up, I *do* have a probability of 1 that Linda is a feminist. I don't know if she's any more likely than a randomly selected person to be a bank teller. Of course the probability of her being a bank teller is not literally 0, since out of all possible jobs or professions, she *could* be a bank teller. But it's way way way more probable that she's a feminist.

Expand full comment
Sep 4, 2023·edited Sep 4, 2023

In case it wasn’t clear, pants on head was supposed to be analogous to Linda being a bank teller, something that came out of nowhere and was completely unsupported by the text. The coin flips was intended to be analogous to Linda being a feminist. Given what we know of her, she almost certainly is one. Given that I flipped a coin 10 times, it almost certainly came up heads at least once.

I wouldn’t be surprised if people thought that your (2) was more likely than your (1), but they would be wrong. If you changed (2) to “David flipped a coin which came up heads at least once, and 1=1", they would be the same probability, but it’s impossible that adding more conditions *increases* the probability.

“Of course the probability of her being a bank teller is not literally 0, since out of all possible jobs or professions, she *could* be a bank teller. But it's way way way more probable that she's a feminist.”

That’s true, but an almost certainly false statement is more probable than that same statement + an almost certainly true statement.

“The way the question is set up, I *do* have a probability of 1 that Linda is a feminist”

If you literally think the probability of her being a feminist is 1, then bank teller is the same probability as bank teller + feminist. But that also means you are saying that no amount of evidence could ever convince you she isn’t a feminist. She could call for women’s right to vote to be taken away, but that’s only a finite amount of evidence, and doesn’t move your infinite certainty even the least bit.

See https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities

Expand full comment

The reason 2 is hard is because it's a fundamentally difficult task to avoid mixing up reading the word vs the color, not because of the "priming" of the first example where the words and the colors align. It would be just as hard without the priming.

Expand full comment

"Instead of denying automaticity, we should accept it as the default human condition, abandoned only occasionally at times of great need."

Agreed. Quoth Alfred North Whitehead (okay, so I'm making an appeal to authority :-) ) :

"It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments."

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I take issue with the "color words" example above. I don't see it as the first set using the correct word to "prime" a person's recall. To me, the second set is *actively interfering* with recall. A color's name is a very strong symbol (for those who read English, vs say young children...) and deliberately using the "wrong" symbol is hampering identification. In particular, the use of the word "reading" in the instructions intensifies the effect.

Many other points are valid; I just don't think that example says anything about the subject.

BR

Expand full comment

"Regardless of whether there are scare adverbs like “thoughtless” in there, I remain concerned by phenomena like how in 1700, everyone thought slavery was fine, even though now in the 2000s everyone hates it. If I had lived in 1700, would I have thought slavery was fine?"

Maybe slavery really was fine in 1700, but isn't fine in the 2000s. Back when life was nasty, brutish, and short for even the rich, and when violence was a fact of life, enslaving someone might not lead to as big of a drop in their quality of life. In the 2000s, when the middle class can live in luxury using electronic slaves and even the poor can expect to live 70+ years while working an office job, making another human suffer his whole life to provide you a small amount of extra labor is unspeakably evil.

Expand full comment

Life still sucks for plenty of people in 2023 as bad as it did in 1700, especially in the places that slaves tended to come from.

Being a (reasonably well-treated) slave in a first-world country would be a massive step up in life quality for many people around the world today. Malnourished orphans in Liberia would jump at the chance to become a farm labourer in Florida, even if "you can never leave" was part of the deal.

This isn't an argument in favour of bringing back slavery, it's just an argument against pure utilitarianism in deciding the rights and wrongs of slavery.

Expand full comment

I see it as more a question of idealism vs pragmatism. In the most convenient world, it's possible that slaves are well-treated and everyone is perfectly happy. Much like the typical relationship between parent and child. In reality, slavery allows for the possibility of extreme abuse. Therefore, legislating a blanket ban is seen by many as perfectly reasonable.

Expand full comment

I like this point. Slavery seems, today, universally condemned without thought, which makes it an assumption. Why, fundamentally, is slavery wrong?

Murder is fundamentally wrong because one person cannot value someone else's death above the value that the victim, and all the lives the victim touches, place on that life, even if one is wealthy and the victim is poor. Slavery is wrong, beyond the possibility of cruelty which is clearly wrong, from the standpoint of forcibly taking someone else's productivity, so they don't receive the fruit of their labors. Effectively, it's stealing (taking something that belongs to someone else).

Uncle Tom's Cabin has one character experience "good" and "bad" slavery. Everyone can see how no one would want to be enslaved in a "bad" way. But it also points out, without explanation, that he would rather have been free than live the life of a slave with a "good" life.

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

I feel like "reasonably well-treated" and "you can never leave" are inherently at odds with each other. If someone decides they want to leave their job, the only way to make them keep working is with actions that I'd generally consider "not well-treated."

(And regardless, this doesn't say much about actual American slavery as implemented, which couldn't possibly be described as "reasonably well-treated.")

Expand full comment

> Loss aversion has survived many replication attempts and can also be viscerally appreciated. The most intelligent critiques, like Gal & Rucker’s, argue that it’s an epiphenomenon of other cognitive biases, not that it doesn’t exist or doesn’t replicate.

I guess we haven't learned our lesson from Kelly, huh. The simplest explanation of Loss Aversion is that it's a natural consequence of evaluating bets geometrically rather than arithmetically. There's no need to explain it away as a cognitive bias, except insofar as it gets misapplied in cases where wagers are independent.

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

100%. Even high-frequency trading firms, which often explicitly talk about "maximising EV", are implicitly geometric maximizers or bust.

Traders I've met sometimes fool themselves into thinking their job is to maximize EV, because when you're adding a risky bet to a portfolio that swamps it, EV ≈ GV [1]. Intuitively, the riskiness of a bet at the margin is partially "cancelled out" by the rest of the portfolio. Mathematically, EV ≈ GV because log(1 + x) ≈ x for small x. But when you start talking about double-or-nothing bets, people will admit their utilities are sub-linear.

The same can be said of selling travel insurance: the transfer of risk is zero-sum in dollar EV terms but positive-sum in GV / utility.

[1]: GV = geometric expected value, which you can think of as logarithmic utility (see https://www.lesswrong.com/s/4hmf7rdfuXDJkxhfg)
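A minimal sketch of the arithmetic-vs-geometric point (the numbers are illustrative):

```python
import math

# A fair bet: win +100 or lose -100, on starting wealth 1,000.
wealth, stake = 1_000.0, 100.0

# Arithmetic EV of the bet is exactly zero...
arith_ev = 0.5 * stake + 0.5 * (-stake)

# ...but the change in expected log-wealth (the GV view) is negative,
# so a geometric maximizer declines the bet: behavior that looks like
# loss aversion with no extra cognitive machinery required.
geom_ev = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake) - math.log(wealth)

print(arith_ev)     # 0.0
print(geom_ev < 0)  # True: equals 0.5 * ln(0.99), about -0.005

# For a bet that's tiny relative to the portfolio, log(1 + x) ≈ x
# and the two views nearly agree, as the comment above notes:
small_bet_gv = 0.5 * math.log(wealth + 1) + 0.5 * math.log(wealth - 1) - math.log(wealth)
print(abs(small_bet_gv) < 1e-6)  # True
```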

Expand full comment

I'm glad someone smarter than me finally wrote a LW post. I'd worked out the basic ramifications years ago, but this link looks more comprehensive than what I'd be capable of producing.

Expand full comment

I'm late to the conversation, and not really sure how on-topic my thought was, but I'll offer it anyway.

When I saw the references to "cognitive biases", I had three immediate reactions:

1. If cognitive biases were as general and easy to exploit as some people say, then advertising would "force" us all to buy as much as we could of all the products the corporate world comes up with. But most new products fail for lack of sufficient sales. So we're not as helpless as some people conclude.

2. Cognitive biases are real, and some people use them as justification for taking discretion away from people. Since most people are blinded by cognitive biases in some situations, they should be saved by allowing smarter people, or people who have studied up on the cognitive biases, to make the decisions for them. But the smarter people will be subject to their own cognitive biases, as well as lack relevant case-specific information while having no personal stake in the decision, so they'll probably make worse decisions, even if they are aware of some relevant cognitive bias.

3. "Nudging" in particular makes me chuckle. I remember the concept got a lot of public attention at the beginning of the Obama administration. One example of a successful "nudge" cited in the press was that school lunch lines were able to "nudge" students to choose healthier desserts by putting fruit at eye level and early in the line, while cake was lower and later. Cass Sunstein was brought in to the administration, and was reported to be looking for chances to use "nudges" in public policy. Maybe I lacked imagination, but I thought of using the school example to help "nudge" people to healthier choices at the supermarket. It's certainly true that shelf placement of individual products influences sales - the people who run supermarkets, or sell their products through supermarkets, are well aware of this. They currently use this insight to make more money. Getting them to use it to sell healthier but less profitable food would be against their obvious economic interest, so there would need to be a rule forcing them. This would require a "supermarket supervisory board" to determine which products should be promoted, and what methods should be used. The rule-making would require input, and each producer would have the opportunity to influence the rule to favor their product. What counts as "healthy" - low calorie, high protein, low sodium, high vitamins? Should organic products get extra points? Other considerations would have to be addressed: Should "ethnic" foods be promoted in "ethnic" neighborhoods? Should price be a consideration, so as to avoid "nudging" poor people to buy things they couldn't afford? It seemed to me that this would inevitably become a purely political process - I'm glad we never tried it. At least, I hope we never tried it.

Expand full comment

Epistemic status: speculative. But a few people mentioned Quakers, and their practice (at least currently) is as close to Buddhist meditation that I've seen in a Christian tradition. So, maybe there is something to climbing the awareness ladder that makes someone become a retro-abolitionist.

BTW, I too have this concern. What am I not being an abolitionist about today that I will wish I had been 100 years from now (when I wake up from cryo)?

Expand full comment

1. I've thought about this quite a bit and I'd guess climate change and veganism.

(nitpick: the Quakers weren't retro abolitionists, they were the original abolitionists, of slavery.)

2. You probably won't be around.

3. There's no reason to assume society will keep moving in the same direction; it's entirely possible that, with a collapse in the level of functioning due to climate change, or with right-populists being smarter than their forebears of 100 years ago, you'll see a return to preindustrial gender roles, slavery, or countless other things we thought we left behind. You can't predict society's future beliefs; live your life. (Hat tip to ascend.)

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

You (and most people) are confused about the optical illusion, in a subtle but I think important way. When you say that one square looks blue and the other looks yellow but really both squares are the same colour, my response is "What are you talking about? Those aren't squares, they're parallelograms!". This is not mere pedantry - to call it a square is to accept the idea that this is an image of an illuminated 3D scene, and if we're accepting that these images represent scenes containing objects, then the objects being referred to are clearly different colours. The *parallelograms* are the same colour, but the *squares* are different colours. To claim the squares are really the same colour is to arbitrarily require that I interpret shapes as though I'm viewing a real scene, but colours as though I'm viewing a flat arrangement of shapes representing nothing. Calling that shape blue is exactly as correct or incorrect as calling it a square.

*Looking at a flat image and thinking of it as a 3D scene is the optical illusion*, and this colour business is just a necessary part of that.

Expand full comment

Relatedly, there is no such thing as a "perfect visual system" that isn't vulnerable to optical illusions. You're taking in 3D scenes with objects of various colours, illuminated by light of various colours, and mapping those onto a 2D plane of colours. There are (vastly) more possible scenes than there are possible images, so by the pigeonhole principle, there must be at least some images that map to more than one scene, i.e. you can't know what you're looking at. In fact every image maps to an ~infinite family of scenes. It could be the scene as it looks, or a flat white screen lit with a suitable projector image, or a flat *grey* screen lit with a brighter projector image (or any shade in between), or a curved screen with an appropriately distorted projection, or any crazy shape of screen (or almost any arrangement of 3d objects) with appropriately crazy lighting to exactly reproduce the image from that angle, etc etc etc etc.

To ever make sense of anything you see, you need a strong prior over what types of scenes you're likely to be looking at. So for any possible visual system design, there will always exist scenes that your prior considers extremely unlikely but which produce an identical image to a scene your prior considers very likely. Your visual system will confidently give the wrong answer for those scenes - i.e. there will always be optical illusions.
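The counting argument above can be made concrete with a toy model (my own sketch with made-up numbers, nothing from the original comment): treat a "scene" as a surface reflectance plus an illumination level, and suppose the eye only records their quantized product. Many distinct scenes then collapse onto the same image:

```python
# Toy pigeonhole demo: 100 scenes, far fewer possible images.
import itertools
from collections import defaultdict

# Hypothetical toy world: a "scene" is (reflectance, light), and the
# camera/eye only records their product, rounded to one decimal place.
reflectances = [round(0.1 * i, 1) for i in range(1, 11)]  # 10 surface shades
lights       = [round(0.1 * i, 1) for i in range(1, 11)]  # 10 illuminations

image_to_scenes = defaultdict(list)
for r, l in itertools.product(reflectances, lights):
    image = round(r * l, 1)              # what actually reaches the retina
    image_to_scenes[image].append((r, l))

# More scenes than images -> at least one image has multiple scenes.
ambiguous = {img: s for img, s in image_to_scenes.items() if len(s) > 1}
print(len(image_to_scenes), "distinct images for", len(reflectances) * len(lights), "scenes")
print(len(ambiguous), "of those images are consistent with more than one scene")
print("e.g. image 0.2 could be any of:", image_to_scenes[0.2])
```

With 100 scenes mapping onto roughly a dozen distinct images, almost every image is ambiguous - the pigeonhole principle in action; any visual system must use a prior to pick one interpretation.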

Expand full comment

Although "the squares are the same color" is how it often gets presented, I think the vision scientists tend to be a bit more careful, saying that they have the same "reflectance value" (or whatever), given that often the takeaway from this illusion and others like it is that color is a subjective (don't read too much into that term) construction of our perceptual system. (This perhaps might have implications for what you could even mean by "real scene".)

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

I think of “weak” cognitive biases the same way I think of deficiencies in LLMs. Energy is limited, and modelling your environment is expensive. So you should expect optimisation processes (like gradient descent or evolution) to look as though they’re making constant trade-offs, choosing to encode dynamics of the environment that are more action-relevant.

Predictive processing in humans can be adversarially gamed with optical illusions and so forth, but mostly not in ways that could have been harmful to your ancestors.

LLMs are bad at stuff that accounts for less of the training set, not necessarily because the stuff is harder, but because it’s got bigger fish to fry. Scale up the model or boost the footprint of that data (eg fine-tuning/RLHF) and it’ll probably get there. (There‘s good evidence of LLMs trading off feature representations like this: https://arxiv.org/abs/2210.01892 and https://transformer-circuits.pub/2022/toy_model/index.html)

What’s scary is distributional shifts without ample time to adapt: I think of sugary foods, and of adversarial attacks on computer vision systems (see https://youtube.com/shorts/HSyvO0CIMmc).

Expand full comment

I think the Christianity anagrams are not a fair test of priming, for reasons I will explain if anyone tries out the following alternative set:

1. WOYELL

2. TIOLVE

3. ENORAG

4. BAERM

5. MRICNOS

6. MIEL

7. ULBE

8. GERNE

Expand full comment

I couldn't get #4, but I got the rest as all colors. Interestingly, my mind skipped to #7 after having trouble with the first three, then easily picked out the rest.

Expand full comment

So, as promised:

My quibble with the Christianity version was that the anagrams were all easy anagrams that looked like the word they were cluing. Many of them kept the first and last letter the same, and there's that one factoid about how aynone can raed a stneecne if the fsrit and lsat leretts are cecorrt even if the middle is scrambled. I feel like that's the skill that ends up being used with the Christianity anagrams.

So, in my example, clues 4-8 can all be colors (AMBER, CRIMSON, LIME, BLUE, GREEN) but also other things (BREAM, MICRONS, MILE, LUBE, GENRE) and the anagram is generally closer to the non-color word. Anyone going for all the easy options would probably not pick out all the colors.

(There's still the question that was raised in response to the first word scramble: is it still "priming" if it's just noticing a pattern and acting on it? Then again, isn't that the explanation for why priming should be a brain feature in the first place?)
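For what it's worth, the color/non-color pairs above really are mutual anagrams - a throwaway check (the standard sorted-letters trick: two words are anagrams exactly when their sorted letters match) confirms it:

```python
# Sanity check: each scramble unscrambles to both a color and a
# more "obvious" non-color word.
def is_anagram(a: str, b: str) -> bool:
    return sorted(a.upper()) == sorted(b.upper())

pairs = {
    "BAERM":   ("AMBER",   "BREAM"),
    "MRICNOS": ("CRIMSON", "MICRONS"),
    "MIEL":    ("LIME",    "MILE"),
    "ULBE":    ("BLUE",    "LUBE"),
    "GERNE":   ("GREEN",   "GENRE"),
}
for scramble, (color, other) in pairs.items():
    assert is_anagram(scramble, color) and is_anagram(scramble, other)
print("all clues check out")
```

(It also confirms the reply below about clue 4: UMBER has a U, BAERM doesn't, so only AMBER and BREAM fit.)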

Expand full comment

It’s not supposed to be a difficult anagram test - however, ANGLE and ANGEL are both obvious and equally difficult candidates, and (according to the article) most people will be primed toward ANGEL.

Expand full comment

I thought 4 was "Umber" but then I noticed "No 'u' so it can't be". All I could get out of it was BREAM, a fish - which is not really a colour.

Why I couldn't get AMBER I have no idea.

Expand full comment

The Replication Wiki is a cool example of a community-driven approach to this problem. If you just want a list of studies that have replicated, they got it! http://replication.uni-goettingen.de/wiki/index.php/Category:Retracted_study

The UX is pretty bad though. I would love a leaderboard of the most cited studies that have replicated. It would also be sick if Google added it as a filter / badge in google scholar.

Does anyone have other efforts in this direction that I should know about?

Expand full comment
Sep 1, 2023·edited Sep 1, 2023

Am I missing something? Banana's article repeatedly says that automaticity effects are too small to be useful or important in real life and that any study that shows a large effect won't replicate. He doesn't say that there is no effect, just that the effects are too small to be important in the fields where they are offered.

In response, Scott says "Not so fast! In actual fact, while most studies showing that automaticity has usefully large effects don't replicate, some of the studies showing it sometimes has effects too small to be useful are well replicated and accepted!"

I know that Scott met the literal challenge, but you could meet that challenge with Banana's original article, which conceded small effects for things like the placebo effect.

Expand full comment

>Am I missing something?

Scott's article says that the effect in many cases is also large enough to be useful/important?

>small effects for things like the placebo effect.

I'm not under the impression that the placebo effect is small - it is at least large enough that it is generally accounted for in medical studies (by using control groups)?

Expand full comment

Scott, unrelated to this post: according to an email notification I received, it seems someone (or a bot) is impersonating you and spamming a phone number to text, possibly phishing. The notification appears as if it were a comment that has since been deleted.

Expand full comment

Those "illusion fake-alpha" examples, especially "shack-as-mansion", tickled my brain in a way at once pleasurable, informative, sobering, and depressing.

I have been, and often still am, That Guy.

Expand full comment

Pieces like this are why I keep coming back.

Expand full comment

"The Implicit Association Test - there have been some good studies showing the IAT doesn’t really predict racism. But as far as I know, nobody has ever challenged its basic finding - that white people are faster and more accurate at learning white-good black-bad reflex-level category associations than vice versa. You can easily test this for yourself in online tests."

Thinking about this some more, I am not sure this is what I would call "priming" (or maybe I don't understand what people mean by the word). It seems to be more an example of people being faster when doing something that conforms to heuristics they are used to than when they go against them. White people learn to internalize (yeah, wrongly and in racist ways, but that is not relevant to the mechanics at work) the association of black=bad/crime/danger etc. over long time periods, through exposure to real and fictional examples. But I'd bet people would also be quicker to associate stuff that looks like ice with cold and stuff that looks like fire with hot, because experience. Would that be called priming too? Or is it just that we react faster doing anything the same way we have been used to doing it?

Expand full comment
Sep 3, 2023·edited Sep 3, 2023

Hi Scott. I am a long time reader of your blog. Until now, I didn't comment, but I always liked reading comments of other readers.

However, in my opinion the quality of comments has recently gone down. Before, most of the comments were by smart, honest people of good will trying to learn something from each other. Recently more and more comments sound like they were written by people who came here to pick fights and win arguments, being unkind to others (though I admit that good comments are still the majority).

So if you ever hesitate to ban somebody, please, do not hesitate. Please, keep on weeding the garden, keep on banning accounts which are much below the target quality. I believe that there are many readers like me.

On the other hand, keep in mind that I am not a paying reader, so feel free to ignore me as well if you wish.

Expand full comment

1) I love Terry Pratchett’s Tiffany Aching series (starts with the Wee Free Men). In that series, witchcraft is mostly being a normal, caring human in a close knit society. Occasionally, when the main character needs to do something particularly magical (generally at the climax of each book), she “opens her eyes, and then opens them again.” She describes the experience as extremely exhausting, to be so hyper aware of the world around her. It gives her temporary superpowers though to be able to deal with some supernatural foe.

2) in my pharmacy, our software vendor added the ability to trigger a prompt at the cash register, and enabled a prompt to upsell probiotics to people getting antibiotics. I have been shocked at how often folks will buy probiotics if I bring it up, even with my hedging, half-hearted recommendation in which I explain that the scientific evidence mostly shows that they’re good for preventing diarrhea post-antibiotics and that’s about it. In >60% of these interactions, people will buy a bottle. The experience of this phenomenon has impressed upon me the substantial influence I have when I wear the pharmacist’s white coat (and the corresponding responsibility to not violate that trust).

Expand full comment

"Since everyone else is such a dumb automaton, I can use my superior knowledge of optical illusions to excel at sports. I’ll just study every known optical illusion and how to defeat it, until my visual system is perfect. Then, while everyone else is deluded into thinking the ball is in a different place, I alone will be able to determine the ball’s true location, and win every game."

Actually, isn't that partially real? There were lots of famous trick plays in football that relied on exactly that: fooling the other team as to who's carrying the ball with hand motions and misleading moves. Because the ball is not very noticeable, can be hidden (or one can pretend to carry it), and one team has control of it and can coordinate, football makes for a very good sport in which to (ab)use biases to win.

Expand full comment

Funnily enough on the word scramble, I quickly identified most of the words (well I just assumed "purgatory", since I was too lazy to count letters for any of them), *except* for "ANLEG", which took me much longer.

For a while, I thought that Scott might have thrown in a non-religious word as a trick and kept trying to unscramble it as "GALEN" or the like, before I finally realized that it must be "ANGEL". "ANGLE" never even occurred to me though.

Expand full comment
Sep 5, 2023·edited Sep 5, 2023

Scott is it on purpose that the two cubes are actually identical AND look identical, and the highlighted squares look like different colours because THE ARROWS ARE POINTING TO DIFFERENT SQUARES?

And literally nobody has noticed this yet?

Do I get a prize?

Edit: oh my god I do not get a prize.

Expand full comment

Good corrective. Didn't know most of this.

Expand full comment

Sorry for the delay. This is all really interesting. My responses are pretty random...

Do you have a source where I might read more about "psychophysics tends to find that human perception is logarithmic"?

I can't really imagine how one can come up with an inter-person means of comparing one-time sexual assault to long-term abusive marriage to medium term abusive relationship. I can imagine there might be compensating effects for a person in a long-term marriage -- stretches where it's nice, where the person's behavior is less awful, where the victim experiences a sense of agency, etc in a way that a one-time sexual assault or a shorter term relationship may not have. But it sounds like you did a study where you validated doing that on a real group of women?

I've not seen flashbacks described as dissociation; they are distinct things in my mind, even if sometimes they happen together. They happen separately enough that I think it's valuable to keep them as separate things. I've had a number of clients who dissociated frequently but rarely had flashbacks. People can be triggered into dissociation by something that echoes the original trauma but they are not reliving the trauma while dissociating.

I'm still not clear in what sense you are assessing whether dissociation is important or not as it relates to trauma. There are some ways it seems important to me. One is that because dissociation takes people offline, sometimes frequently in a day or over hours, it gets in the way of learning (ie, therapy, but also other kinds of learning) and so is its own barrier to treatment.

Also, teaching people when they've left and how to come and go more easily between dissociated and aware states can be an important initial intervention that both helps give people more agency (which is key for trauma recovery generally) and helps people become more available for the benefits of treatment. In general, I would say dissociation happens a lot more in trauma than flashbacks. I don't know for sure, but I'd venture that flashbacks are more common in a shorter acute phase of trauma response while dissociation happens more over chronic stretches of time (this isn't an either/or thing, just a tendency).

I read through some of the conversation you linked to and I wouldn't consider all the various ways people retreat in arguments to be dissociation. That one of the people in there used the word doesn't make it so in any clinical sense.

I do think anxiety in social situations can produce various kinds of avoidance or blocking behaviors, and I'd consider the rhetorical responses this person is describing to be those and not dissociation.

Dissociation as a result of social anxiety isn't going to show up as "that's dumb; I don't want to talk about it any more." It's going to show up as someone spacing out, losing their train of thought, or freezing. Not all instances of freezing socially are dissociation though so it would take more inquiry to see what was going on -- was it just performance anxiety tying someone up for a minute or were they gone mentally? Dissociation seems to me by definition to be something that takes our cognitive functions relatively offline and so wouldn't include people making convenience arguments as a coping tactic.

There are a million ways that people in social situations try to control for their anxiety by making ad hoc rules about what's okay, reasonable, rational, allowed to be talked about. That goes on here in this space all the time.

I agree with you 100% here: "for both medical and psychological research I think people can end up making the sorts of inference mistakes I describe here, and that this can lead to research programs getting side-tracked or confused for long periods of time. Being careful in thinking about what one is measuring and what the relationships between things logically must be goes a long way towards avoiding such problems. (As well as some of the other suggestions I've given in the other comments to this post, such as measuring things in greater detail, focusing on things with bigger effects, etc..)."

We could even see my earlier response to you with the cancer research analogy as doing the same thing that the guy in the conversation you linked to was describing. I was like "Ach, it's too hard, it can't be done" -- really so I could stop thinking about it because I was tired and it felt too hard.

It's so incredibly common for people like me (and people in this space here) to manage emotions by resorting to something that masquerades as rational because that's what we've been allowed. It's a way to disown our own feelings, but then they come out sideways in a destructive or unhelpful sort of way. I'd say this is what the guy in the conversation was pointing out about the men he was talking to -- women are somewhat less likely to disown their feelings because we usually have had more room to express them, though certainly we still do it.

I'm clear that when I do that, I'm not dissociating. I'm looking for a mental place to rest with a subject that raises the slightest uncomfortable feelings. I'm going to speculate that in my conversation with you the very mild discomfort had mainly to do with not really being able to keep up with your statistical depth and maybe also wanting to know what you're working on in a more straightforward way than you were offering (that's not on you).

Expand full comment

“I can’t help wondering if there’s some understanding of of “automaticity” or “being less automatic” that could have helped 1700-me question my belief - or wondering what equivalent automatic beliefs I should be questioning today.”

Something like ‘cultural norms do not equal right or best’ could be a good cue for taking a deeper dive into commonly held beliefs and practices (like the lionizing of work or the eating of animals, for example). Makes me wonder about the history of this idea and why it is not explicitly taught to young people as part of their Enlightenment heritage.

Expand full comment
Sep 6, 2023·edited Sep 6, 2023

The thing we call moral progress is often just affordability, caused by technological progress. When we have a cheap option of eating vegan 'meat' grown in a lab, we will look back at our animal-processing industry, slaughterhouses and so on, with disgust. Is that moral progress or just progress?

Slavery was abandoned (in the US) as technological progress was made in the aftermath of the industrial revolution. If someone had invented a tractor in the 1600s or 1700s, perhaps people in Africa would not have been kidnapped and used as slaves.

I would like to write something about people being robots, but your site is so slow rn and perhaps that's a good thing, since it makes me so depressed thinking about it.

Expand full comment

A nice writeup, but perhaps the fundamental problem is that these tests and experiments are all about things that don't matter. It is like trying to diagnose fundamental human decision-making by analyzing daily clothing choices. Yes, for some people in some cases it matters, but for the vast, vast majority of people it does not. Trying to assign meaning to these choices is therefore a complete waste of time. After all, the point of vision is not to see per se - it is to gain information to act on - and evolution has obviously made optimization choices that improve performance at the expense of complete and thorough accuracy.

Expand full comment

The potential problem here is that as the human environment has changed, the information you need to act (effectively) has also changed, faster than evolution can keep up. We presumably evolved a strong tendency to stereotype other groups of people because at one point most strangers were actually dangerous. Today, not so much.

Expand full comment

It is an interesting point but I think, ultimately, not a strongly relevant one.

The danger for the vast, vast majority of human existence came from the environment, animals, lack of food, etc. - certainly not from people in other groups. Even for people within the group, physical danger wasn't high compared to those other dangers.

Expand full comment

Intuitively, I don't think that's true. I believe that human evolution has been largely driven by competition with other humans, at least since we became the top predator. Do you have any sources that say otherwise?

Expand full comment

Let's start with some exploration of what we can agree on:

1) The period in question is ~10000 years (i.e. human writing) = ~500 generations.

2) Prior to that, small hunter gatherer groups of low population density - so unlikely human-human predation is a major factor.

Seem reasonable?

Now let's look at whether ~500 generations is enough to evolve. Yes, birds can evolve into new species in as little as 10-20 generations, but this is literally via replacement.

I believe you would agree that human evolution under conditions of civilization/farming is not driven by replacement.

The Neanderthals are a possible counterpoint, but this is from far before the 10K years in question and would also require some form of extraordinary organization to hunt down a Neanderthal population scattered over most of the Eurasian continent. It seems far more likely that the Neanderthal extinction was due to some major external factor like environmental change or disease.

So how many generations to evolve, given no replacement methods?

There is evidence of actual evolution with replacement methods: the sickle cell gene is a prime example. Those with one copy have markedly better survival rates against disease vs. those without any copies; those with 2 copies are an ongoing drain on the "more protected" population. Even so, the prevalence of the sickle cell gene in the face of ubiquitous diseases it protects against is only ~40% in certain areas of Africa; much lower overall in Africa and other regions like South Asia, the Middle East etc that have similar disease threats.

This despite the gene having evolved 7300 years ago: https://www.bbc.com/news/world-africa-43373247

To me - this argues that "evolution" for humans even in the face of pervasive and persistent attack by disease is barely sufficient for significant population level change - disease being far more ubiquitous a threat in Africa as it affects everyone.

Or put more succinctly: human "predation" on other humans is to rule them, not replace them and so is not a major evolutionary factor on the species.
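For context on the sickle-cell numbers above, the textbook model is one-locus balancing selection: when heterozygotes (AS) are fittest, the allele frequency climbs quickly to an intermediate equilibrium and then stops, rather than sweeping to 100%. A minimal sketch (the relative fitness values below are my assumptions for illustration, not measured ones):

```python
# Toy balancing-selection model for the sickle-cell case (illustrative only).
# Heterozygote advantage: w(AS) > w(AA) and w(AS) > w(SS).
w_AA, w_AS, w_SS = 0.85, 1.0, 0.25   # assumed relative fitnesses

q = 0.01                              # starting frequency of the S allele
for _ in range(500):                  # ~500 generations, as in the comment
    p = 1 - q
    mean_w = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS
    # standard one-locus selection recursion for the S-allele frequency
    q = (p*q*w_AS + q*q*w_SS) / mean_w

# closed-form equilibrium: q* = s_AA / (s_AA + s_SS)
s_AA, s_SS = 1 - w_AA, 1 - w_SS
q_star = s_AA / (s_AA + s_SS)
print(f"after 500 generations q = {q:.3f}, equilibrium q* = {q_star:.3f}")
```

The closed-form equilibrium q* = s_AA / (s_AA + s_SS) is what the simulation converges to within a few dozen generations, after which the frequency stays put.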

Expand full comment

"Seem reasonable?"

Sorry, no. Have you read any histories of near past hunter gathering societies, such as those on the North American continent during occupation, or elsewhere? Inter-tribal raiding is a way of life.

Since I agree that 10,000 years isn't enough time, let's go back another order of magnitude--for at least the last 100,000 years (and probably much longer), human beings lived in mutually hostile, small, genetically related groups. We are psychologically best adapted to live that way; that's why some version of Dunbar's number exists (though researchers can't agree on an exact number, it isn't, say, several million). That's why the in-group bias exists, why empathy is limited by the demographic group of the person being observed, why we stereotype. One can train oneself out of all of these things, but for the vast majority, these are mental defaults that we seem to naturally acquire. All the phenomena mentioned in the writeup - cognitive biases, priming, nudging, even optical illusions - are built into the brain. Where does all that come from, if it didn't evolve?

Presumably, at one time all that did confer a reproductive advantage. Back when the strange group of people in the next valley over really were a significant threat, it paid off to be suspicious of them. Yet you still had to marry into them, and it was useful to exchange various resources, so some contact was inevitable. A mental framework that made transactional relationships possible when clearly advantageous, while you never actually *trusted* those people, makes intuitive sense. And that's just the pattern we see among humans today.

From there, the argument that social conditions have changed so much in the last 5000 years or so, that we now have innate mental responses and impulses that no longer confer an advantage seems obvious. It would be advantageous to abandon so much out-group bias in this huge interconnected world, but with brains that possess neural functions that go back hundreds of thousands of years (if not millions), that's a difficult challenge.

Expand full comment

Can you quantify the effect of "inter tribal raiding"?

In particular - how does this raiding affect human evolution and what is the physical evidence as seen by expression?

Are Native North Americans more combat-capable? More strategic in combat? Stronger? Faster?

You cite "empathy" as being formed by evolution - but empathy is a minor part of the virtually genocidal warfare you seem to equate "inter tribal raiding" with. If this genocidal warfare was a primary driver of human survival in the past 100,000 years, the evolutionary selection effects should be readily visible physically as well as mentally. I would think American Indians would therefore be vastly superior fighters in some way, like biologically evolved six-million-dollar men.

Nor is your assertion of "transactionality" terribly convincing. There's very little advantage to social analysis if you're killing your opponents; social analysis is beneficial primarily when there are no more effective ways to compete.

Note I'm not saying that your assertions about transactionality and empathy are false - what I am saying is that there should be clear physical signs of evolutionary selection bias since physical changes are far, far faster to select and express than the social ones you cite. Among other things, it would seem likely that there should be increasing male vs female physical trait divergence over the human evolutionary progression since presumably it is primarily males involved in said warfare.

There also arises the question of North America vs., say, East Asia. Do East Asians have a greater or lesser propensity for "inter tribal raiding" than the North American hunter-gatherers? What about Europeans? Africans?

Yet again - unless humans are truly the same all over - the evolutionary selection pressures should result in physical as well as mental/social outcomes. Yet we know for a fact that East Asia unified and civilized far earlier than Europe or North America - therefore East Asians should show less of the "inter tribal raiding" evolutionary pressure than Europeans, who in turn should show less than North Americans.

Barring this type of evidence, your obviously psychology-based views are perhaps more an expression of a hammer looking for nails than a thoroughly considered exploration.

Expand full comment

This. I faithfully read the original "Against Automaticity" and was plainly shocked (although partially mollified by its last paragraph admitting it was deliberately exaggerated). Priming is real, cognitive biases are real, end of story.

Expand full comment

I've read this post before, but didn't remember the word-scramble section; anecdotally, I got ANGLE (though not DOG) and, annoyingly, *first* got PREROGATORY and subsequently thought to myself "these are religious words, it's purgatory and I was just guessing". Goes to show what I know, I suppose!

Expand full comment
