615 Comments
Comment deleted

most community churches don't have billions of dollars that they deliberately spend to change the world!

Comment deleted

It matters because the stakes are decently high with EA, unlike points of division with an online church.

Comment deleted

I'm a consequentialist, but I'm fine with you focusing on cultivating virtue. Virtue seems both instrumentally useful (virtuous people do more good) and intrinsically useful (people place value on themselves and others being virtuous), so keep on taking virtuous actions. You may never become a utilitarian, but that's not the point anyway.


I agree. I despise virtue signaling. Tell me what you did. That's what matters. Doing good things does make you feel good, but feeling good because you believe the right things is mental masturbation.


As a sometime fan of virtue ethics and longtime foe of optimizing consequentialism, I have to say this is the first post that made me stop and consider and change my mind a little bit. Thanks.


As a Stoic, I do not think that consequentialism and virtue ethics are fundamentally in tension. The Sage's actions will be the best approximation of the path which yields the best consequences.

Why is there a conflict between Aristotelian virtue ethics and consequentialism? I am not familiar enough with it.

Comment deleted

I don't have a source for Aristotelian ethics, since I'm largely unfamiliar with it.

After reading enough Seneca (the Dialogues and Letters on Ethics), though, it became very apparent to me that Stoicism is rationality, rationality is about winning, and winning means earning victories for humanity. Taking rational action means you take the decision alternative with the highest expected value, given constraints on knowledge + model uncertainty. So that's how Stoic rationality implies consequentialism.

That's the short version. I was trying to write a longer one, but I'm unsure where to draw the line between succinct argument and tangent. So best not, unless requested. Originally I tried a version where I quote-mined each point. But I don't have a whole day.

Seneca is a joy to read in English. My writing not so much.

Comment deleted

"Logos" refers to the study of logical propositions. Sophistry, if you ask Seneca, who has a low opinion of that pursuit. (He's an intuitionist, I guess... or understandably doesn't see what ethics has to do with formal logic.)

The Stoics prefer an action that gives them "preferred indifferent things" (money, status, power, health, etc.) over the inconvenience of "dispreferred indifferent things" (poverty, sickness, exile), if given an option. That's a utility function.

Consequences of action alternatives should be weighed relative to one another. When we do, choosing an option with lesser value (ceteris paribus) would be the same as choosing a "dispreferred indifferent." This is quite obvious, and the idea of "more > less" is one the Stoics would have had access to.

"Maximizing expected value" is just that plus probabilities.
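[Editorially, the "utility function plus probabilities" point can be made concrete in a few lines. This is only an illustrative sketch; the options, probabilities, and values are invented.]

```python
# Illustrative sketch only (the options and numbers are invented):
# expected value of an action = sum over its outcomes of probability * value.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# A certain modest gain vs. a risky larger one.
safe = [(1.0, 50.0)]
risky = [(0.6, 100.0), (0.4, -20.0)]  # 0.6*100 + 0.4*(-20) = 52

# The "maximizing expected value" rule picks the risky option here.
best = max([safe, risky], key=expected_value)
```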

The ancient Stoics lacked our numeracy and many mathematical concepts that are ingrained in modern thought. But if you have all that, decision theory becomes an implicit corollary. The Stoics lacked cognitive science, but they did know enough psychology to describe how an aspirational rational person (the Sage) ought to work and where we fools fall short. Read Seneca's "Of Peace of Mind" and you will probably find your own flaws pointed out in an insightful manner. That's because he basically rattles off all the things that could possibly be wrong with you. And those are not mere Barnum statements; they are predictable, universal failings of the human condition. The Stoics had a theory of mind.

So in Aubrey Stewart's translation, I find that "reason" and "wisdom" might as well have been translated as "rationality," and "wise" as "rational." It would not change the meaning.


The tension is at the extremes. Consequentialists (specifically *maximizing* consequentialists) will follow consequence-based arguments to very different conclusions, ones that virtue ethicists will hold themselves back from making. See the "repugnant conclusion".

Comment deleted

That's true. I am coming to believe in a blend of virtue ethics and satisficing consequentialism. Here "satisficing" means "good enough above some bar; stop comparing."

I think the latter goes a long way towards keeping the good (rigorous debate) while doing away with the bad (miserable perfectionism and repugnant conclusions) of utilitarianism.

BUT! I agree fundamentally that it's still a system, and systems serve us, not the other way around, and the virtue of virtue ethics is in never really losing sight of that.
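[The contrast between a maximizing and a satisficing choice rule can be sketched as follows. This is illustrative only; the option names, values, and the bar are invented.]

```python
# Illustrative sketch only (option names, values, and the bar are invented).
options = [("donate 10%", 7.0), ("volunteer", 6.5), ("optimal career", 9.0)]

def maximize(options):
    # A maximizer keeps comparing: only the single best-scoring option will do.
    return max(options, key=lambda o: o[1])

def satisfice(options, bar):
    # A satisficer takes the first option above the bar, then stops comparing.
    for option in options:
        if option[1] >= bar:
            return option
    return None  # nothing clears the bar

chosen_max = maximize(options)        # ("optimal career", 9.0)
chosen_sat = satisfice(options, 6.0)  # ("donate 10%", 7.0)
```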

Comment deleted

The idea of satisficing is that it doesn't matter, because both things are above a certain reasonable water line. The idea of blending it with other systems is that you could turn to those for arguments against it, either before or after the satisficing, but especially after. The idea of making sure systems serve us instead of the other way around is that everyone can tell this is a bad conclusion, so we just ignore the system instead of following it.


I think repugnant conclusions about constructed thought experiments are mostly sophistry. Unless there's a real-world situation where I would feel compelled by Stoic virtue to divert a trolley onto a consequentialist decision maker before he does something implausibly yet impactfully stupid, there's no conflict?

Jul 21, 2022·edited Jul 21, 2022

"Be an EA worker for EA causes instead of a doctor" is a very concrete point of contention here, and for a lot of people, a repugnant conclusion.

Ditto "It's a moral crime to eat ice cream or have a child because the <numerical> value you could provide to future persons is greater".

Peter Singer's Drowning Child is absolutely not a hypothetical problem. It's a bundle of direct real world problems brilliantly presented *as* a hypothetical so that once you've made your armchair, intellectual answer, you've bought in to a practical problem that could be worked on, now, right now.

I think both pro and anti would typically agree that EA is where the rubber hits the road on trolley problems.

Comment deleted

See my other comment; I think moral systems are best blended with a satisficing component. I think virtue ethics does a better job doing this implicitly, by people's preferences and what they consider virtues (eg the virtue of moderation), whereas consequentialist and deontological systems need this bit spelled out and maybe even stamped on the forehead.


Yes, I can kind of see that. But that's a problem with projecting an armchair, intellectualized argument onto the real world, ignoring many of its complexities.

I think that forswearing ice cream for guilt about the children and devoting yourself to EA causes over all other obligations or concerns on consequentialist grounds is not justifiable.

I do not predict that this course of action will be the best utilization of someone's talents for the general benefit of humanity (which as a Stoic I do agree is what one should care about).

That's a longer critique though. I might enter it into the contest, if I can write it persuasively.


I agree. Elsewhere in this thread someone linked this great bit of writing which includes a few such examples: https://michaelnotebook.com/eanotes/

I don't know if it's a good use of your time or not, but it would definitely be great to see someone enter the contest to critique these problems of scrupulosity from a stoic perspective, and I'd certainly enjoy reading it.

Comment deleted

What did he really believe then?


(I deleted because I was completely off topic and probably wrong)


Everyone should feel they have the right to be wrong. You'd get more effective critiques that way. Why do humans get so attached to their ideas? Experiments depend on failure. People in practical fields seem to understand this much better.


To some degree, effective critiques in practical fields are less personal. Of course people can still invest their identity in a particular practical theory, but it is harder and less common.


More than being afraid of being wrong, it's a complete lack of desire to defend my ideas, even if they are right. I find discussions emotionally unbearable.

Not claiming it is a good thing.

Comment deleted
author

Thanks!


Agreed!

(Well, I might write some other comments too about the content!)


What's EA?


Effective Altruism.


Have you really never played the Sims? https://wikipedia.org/wiki/Electronic_Arts


The Euro Area, the official name for the Eurozone.


Eusko Alkartasuna, a Basque political party.


The Evangelical Alliance.


Equivocal Acronym

or just X for short.


Electric Allies, it's an AI alignment group


Eä is the Quenya name for the material universe as a realisation of the vision of the Ainur. Scott just forgot the umlaut. He's sloppy like that.


Sloppy yourself. It's clearly a diaeresis.


Spectacularly done.


Expected Answer


EA is the true name of SCP-001.


Everything and Anything


Evolutionary Algorithm


Interesting. Seems a big issue in society in general (not unique to EA) is critiques "that everyone has agreed to nod their head to." This post helped me understand how such critiques can be perpetuated because they help people ignore more serious, specific critiques that threaten individuals' relative status or would require them to work/think harder.

By the way, is this argument a tiny bit reminiscent of the old-line Marxist argument that identity politics is a distraction from class conflict? (Which is not to say their solution to class divides is correct...)


Unsure, because both class conflict and racial justice can be posed as generally good things to nod along to and do nothing practical about, or as specifically challenging calls to action.


That's part of the reason, I suspect, Marxism is so well accepted in civilised society?


Absolutely, vague distaste for capitalism in theory doesn't mean you actually have to do anything about it.


Marxism does expect Marxists to be revolutionaries and to do things about it. It's just that modern Marxists have come to see the Revolution much in the way that modern Christians see the Second Coming - something much to be aspired towards but not actually expected.


Maybe the other way around. Expected but not actually aspired to.


I think it's a general rule that most people do not consistently act as you'd expect them to if they thought their beliefs were true, I wasn't really making a specific point about Marxism - although obviously noncommittal support for Socialism is a useful way to signal that you're socially conscious and care about the lower classes, especially while sipping Champagne with your middle-class friends.


I really like the analogy, and I think it is not accidental. The religions and ideologies that eventually survive are usually those that mutate to follow the strategy described by Scott. And so Christianity transformed from expecting the rich to give up their money and the ruler to lower the damn taxation into a religion that just wants you to admit that you are a sinful little thing, without demanding much in particular.


Christ demands you give up your sin. Which usually people can't do without admitting they have sin to give up in the first place, hence the necessity to admit being "a sinful little thing." Which sins are the most important to give up seems to be what Christians argue about. Love of money? Sexual immorality? Etc.


This doesn't sound right to me. Religion tends to do best when it's demanding enough that it promotes ingroup solidarity, but not so demanding that it brings about near-certain personal ruin.

When a religion asks little of anyone, it might do OK if it is promoted by society but won't go anywhere if it isn't.

Amidst general religious decline in the US, what religious expressions are actually growing? Amish and Hassidim. Not exactly the least demanding.


That sounds like Bolshevism rather than Marx. Marx thought communism was inevitable, hence calling his socialism "scientific".


You have well articulated Voegelin and the problem of ideology and immanentizing the eschaton.


In that vein, vague distaste for consumerism doesn't mean you have to stop buying shit.


And lack of engagement with the specific historical events, such as: state control = centralized control = a system a lot like the one you want to take down. Personally, I'd rather have a hapless ruler like Romanov than live in Stalin's monarchy.


I think polite society accepts a watered-down version of Marxism that few actual Marxists would agree to.


Yes, interesting point -- both critiques are potentially ones where everyone can stand around sipping champagne without actually having to do much.


Reminds me of a criticism of a lot of mainstream anti-racism discourse (typified by "white fragility" and corporate diversity workshops) that treats racism as a moral stain that white people must expunge from themselves via awareness and self examination. But doesn't say anything about specific actions that might cost actual money or inconvenience people.

Banks will happily have someone in to tell them how their industry is perpetuating colonialist conceptions of ownership, but would be far less comfortable with "you should fire some of the white men who make up 80% of your managers" or "change your loan assessment policies that disadvantage minorities in these specific ways."


You’ve identified two different modes of critique, which one could call the philosophical and the journalistic.

It’s a lot easier to do the philosophical critique than the hard work of finding the anomalies within the existing paradigm, whether it concerns the relative benefits of two similar psychoactive chemicals, the efficacy of a philanthropic intervention, or the orbits of the planets.

Note that we have a plethora of theories of dark matter and quantum gravity, but rather few, and quite expensive, ideas for how to gather the data that could prove some of them right and some wrong.

This mismatch reflects the fact that theory is relatively cheap and high energy physics experiments very expensive. I suspect that is equally true of pharmacology and social interventions.

To bring this back home to EA: Perhaps not enough money is being spent on measurements.

Or, alternatively, we should accept that systems change slowly, that some money will be spent ineffectively or even counterproductively, and that too much worry about optimization is really a path to neurosis that does nobody any good.


Has "EA is too high-neuroticism" been done yet?


On some podcast, I think Robot Friends, the case was made that EA is just secular scrupulosity.


I like that!


I wrote a looooong comment on Matt Yglesias's substack a while back which I won't repeat here, but:

I don't object to EA itself. Only to utilitarianism, a completely silly theory which many EA people seem to endorse.

When you're dedicating effort to making the world a better place--as opposed to helping specific people you care about, or just enjoying your life like a normal person--you should absolutely try to be as efficient as possible. EA is great!

But people shouldn't believe that improving the world is the only thing they should ever try to do. Anyone who actually believed that would end up neurotic and miserable (there's your "secular scrupulosity").

Apart from the fact that there's probably no such thing as "utility", that makes the theory self-defeating.


I feel like you're conflating "making the world a better place is good actually" with "this is the only thing you should ever be doing". I'm not going to argue that there aren't EAs who are neurotic and miserable, but I think most of us are able to compartmentalise and have normal human goals in addition to grand philosophical aspirations. There's some tradeoff obviously, but since being neurotic and miserable is counterproductive this is actually the correct thing for a consequentialist to do.

As for whether utility exists, I find consequentialism gets more appealing the less rigidly you define the utility function, because I don't think human fulfilment is reducible to a single dimension.


Yes, absolutely. If you compartmentalize (and don't feel bad about it) then you're an effective altruism advocate but not a utilitarian.

Hating on utilitarians is one of my philosophical hobbyhorses (along with hating on libertarians) so I couldn't resist commenting.


I don't see why a utilitarian can't compartmentalize and not feel bad. It seems to me that a utilitarian should be able to say that you are a thinking and feeling being whose desires matter, and that while you should definitely make some changes in your life that will clearly make big differences in the world at relatively little cost to yourself, the little tweaks around optimizing usually aren't worth the costs to your own interests.


But the problem, at least for most ACX readers, is that there are billions of people in the world whose interests you can affect. A utilitarian who was really a utilitarian wouldn't be allowed to stop at some reasonable point.

It would probably be more effective (swidt) if I never posted about metaethics again and just encouraged everyone to become Bernard Williams-pilled like I did. But anyway--

I agree with Williams that ethics is fundamentally different from natural science.

Science is an exercise of theoretical reason. It seeks to answer the question: "what is the state of affairs/the truth about X?"

Ethics is an exercise of practical reason. It doesn't seek the "truth" about some external reality. It seeks to answer the question: "what should a given person do in a given situation?" And since the way you act in a given situation is determined partly by your character, it also has to answer the question: "what kind of person should you be?"

I think one reason utilitarianism seems plausible is that people miss the importance of this distinction. If you say "*The best outcome* is the one where utility is maximized" it sounds tautologically true--assuming that the concept of utility makes sense, which is very iffy.

But that's not what utilitarians are claiming. What they're claiming is:

A "*The best thing to do at any given moment* is the thing that maximizes utility."

Or, possibly:

B "*The best kind of person to be* is the kind of person whose actions will maximize utility over the course of a lifetime."

Option A doesn't work because it's self-defeating. If everyone had the kind of personality that would make it happen, everyone would be unhappy.

People who support compartmentalization are endorsing option B, but Williams didn't think that would work either. It's the inspiration for his famous "one thought too many" argument.

The problem is that even with option B, your most fundamental desire still has to be utility maximization. All your other desires have to be instrumental. You'll have to say to yourself, for example:

"Because I love my spouse I often do things for their benefit, instead of doing other actions which would benefit strangers more. But even though my only fundamental desire is to benefit humanity, I'm still allowed to feel so strongly that I can't help doing those things. My love for my spouse is a heuristic which leads to utility maximization, because a moral principle that didn't allow people to fall in love would reduce overall utility. That's why I love them."

Nobody actually thinks like this (I hope). And anyone who doesn't (i.e., anyone who does have non-instrumental desires other than universal benevolence) isn't really compartmentalizing. They're just not a utilitarian, even if they think they are.

The content of a moral theory has to be described by what it tells people to do or be, not by the outcome it produces. (This is the theoretical/practical distinction, more or less.) If the theory says "Love your spouse enough that it makes you do non-utilitarian acts, because that tends to maximize utility over the long term", a person who obeys the theory will become a person who does non-utilitarian acts for their spouse *because they love them*--not because it tends to maximize utility over the long term.

In other words, they won't become a utilitarian and the theory itself isn't actually utilitarianism. It's "Have multiple desires like a normal human being... and don't think, even in the back of your mind, that only one of them is fundamental."

This is the example Bernard Williams gave:

Suppose you're a utilitarian and you're relaxing on the beach. (Not sure they're allowed to do that, but whatever. Maybe you needed a break from saving humanity.) You see two people drowning far out to sea. One of them is a stranger and the other is your wife.

If you're an Option A utilitarian, you'll say to yourself. "I must first rescue the person I have the best chance of rescuing."

If you're an Option B utilitarian you'll say: "My wife is drowning. And in a situation like this, you're allowed to give preference to your wife."

Williams said that was "one thought too many."


Is it a question of how you feel or how you think? Or both?

If some alternative to utilitarianism suggests you have a burden of obligation you can actually fulfil, then you can fulfil it, and you needn't *feel* guilty, and you also don't need compartmentalisation as an *intellectual* workaround.


While there are criticisms to be made of utilitarianism, I don't entirely grok yours. Most moral systems set a standard above what people are capable of. "Feeling bad" about something you don't intend to change reduces utility, so it makes sense from a utilitarian perspective to avoid it.

Some of us allocate a certain quantity of our efforts to 'doing good.' The question then remains "what is good?" And each moral system has an answer to that.

Not being perfect according to a given system is different than not adhering to that system. Few people are perfect according to their moral system. Striving combined with accepting one's shortcomings makes more sense than just drawing a bullseye around wherever the arrow of one's life made impact.

I'm happy to criticize utilitarianism for often focusing on the short term, or for denying evolved wisdom present in some deontological systems. Basically, we can fault utilitarianism for always assuming that people are omniscient, when deficits of knowledge might be better addressed by adhering to well-worn paths than attempting a 'best guess' at utility.

But the faster an environment changes, physically or morally, the less likely it is that traditional methods will be helpful.

Ultimately, it's hard to test whether or not a particular action is 'good' without having some objective results that can be measured. Utilitarianism allows for those tests. That kind of humility, of potentially finding out that one's assumptions are wrong, is valuable. The process is far from perfect, but the transition from predominantly traditional systems of morality to systems whose impacts could be tested at least a little was a worthwhile innovation.


What is the difference between saying:

-'You should maximize utility. It's OK if you only try this during some of the periods during which you plausibly might.'

-'You should pursue virtue by striking a balance between efficiently helping others and enjoying your life.'

?


> Most moral systems set a standard above what people are capable of.

I find that doubtful. Ethical egoism doesn't. Moral nihilism doesn't. Contractualism and game-theoretic ethics don't. Evolutionary ethics doesn't...


The free will people or the political opinion?


The political opinion.

I tried to get myself canceled on Matt's substack by saying that even though they have superficially different opinions, the people who think public policy has only one valid goal (maximizing utility, maximizing freedom, or maximizing fairness) are all "basically autistic".

I'm sad that no one actually canceled me.


This feels like you're just using the word in a way that's technically correct but basically just exists to win arguments. I try to avoid semantic debates because they're totally pointless, but whatever. Utilitarian has so much baggage that I personally prefer the term "consequentialist" for my own morality.

Utilitarianism has this problem while other moral systems don't because it specifically values maximisation, you could obviously make it easier if you bounded it and said saving a million lives is no better than saving one life, but I personally don't think moral truths should be ignored just because they're inconvenient. I think most people would describe themselves as utilitarian because they think the philosophy is a useful guide to action when facing moral uncertainty, not because they pretend that all of their actions are maximising utility - there's just too much uncertainty for that in the real world. It's a philosophy, not a description of your actual behaviour.

You seem to derive utility from smugly pointing out the ancient wisdom that being maximally virtuous all of the time isn't possible, but I'm unsure of who the audience for that is. We're all aware of our human shortcomings already.


Maybe you wouldn't have to compartmentalise if you didn't believe in utilitarianism -- maybe compartmentalisation is a workaround.


Yes, I think that's a good way to put it. Some people seem to think that utilitarianism is like quantum mechanics: it's capable of being the truth even though the human mind hasn't evolved in a way that lets us fully understand it, or believe in it without some kind of psychological gimmick.

But that's the science-vs-ethics distinction that Williams talked about. A theory of physics can be true even if our brains can't process it. But an ethical philosophy isn't true or false; it's livable or not livable. You can choose from a range of livable philosophies but if a philosophy isn't livable by humans as actually evolved, there isn't anything right about it at all.


What, are we not allowed to have transcendent values or something? Do we just settle for whatever instincts worked in the ancestral environment, and hope that's sufficient?

The way I see it, a moral philosophy is meant to be more of a map than a destination to relax in; it needs to have something to say to the people who already consider themselves moral. It seems to me that EA appeals to people who are dissatisfied with "livable" philosophies and want something more ambitious.


I've never understood why people take "it would probably make you unhappy" as a good reason to curtail their commitment to helping the world. For one, it seems obvious that your emotional reaction to various aspects of life is malleable in the long run, but also, would it not be virtuous to suffer anguish in order to help people? If anything it should probably relieve neuroticism and anxiety to be dedicating your life to helping instead of spending time on other things you know are not as virtuous. Maybe part of this is that I reject utilitarianism, especially psychological utilitarianism, in favor of something like Christian deontology, but it's always seemed to me like the obvious conclusion of EA is that we should become philanthropy monks and that everyone is just inventing lazy kludges to dodge this conclusion.


There's a reasonable case that Christianity obliges us all to literally sell everything we have and give it to the poor. Most of us are not doing that, so it's probably a good thing that forgiveness is a big part of the faith. And it's not like we don't have practical examples to follow: https://en.m.wikipedia.org/wiki/Francis_of_Assisi

I guess what I would push back on is the idea that it will make you unhappy - I don't think me being unhappy would actually make the world a better place, and while doing the right thing is likely to involve some hardship and suffering, that's not the same as being unhappy - there's definitely something to be said for the idea that virtue is its own reward.


Having read some of your other comments in this thread, I think we just agree on most things, but let me see if I can explain my objection here is clearer language. It seems like the approach to personal EA engagement that Scott and other utilitarian EA people endorse is, "donating to EA is morally good, but if you tried to follow the utilitarian math it'd drive you crazy, so just choose an arbitrary threshold like 10% of your income." I would much rather say, "donating to EA is morally good without constraint, so that whoever can give more time/energy/money to EA is doing more good than otherwise, but the fact that you aren't St. Francis or a monk or etc is just an ordinary sin and you'll be forgiven for it." I guess the attitude around sin and forgiveness is part of it. But I guess I also disagree with the idea that you should tell other people that there is a cut off where you should stop giving, because even if you'll be forgiven for sin we should try to avoid it in what ways we can instead of accepting it as a constraint.


"Utility" functions much like "enthalpy": a convenient mathematical abstraction over a messy collection of loosely-related phenomena.


I would say the main criticism of utilitarianism, from a secular perspective, is that whatever "utility points" you gain is inevitably wiped out by Death. I mean that both in terms of one's individual death as well as the heat-death of the universe. After death, none of those things will matter or have any lasting impact into the future. Utilitarians are trying to win a game with no rules, no referees, no penalties, and no prize for winning.

In my mind, the only thing that makes sense in a secular context is to maximize one's subjective sense of meaning and well-being for the time that you are alive. Almost certainly that will involve helping others. But if you are doing that to the point of misery then perhaps you just need therapy (or to hit the gym).

If you are religious, then you believe that there are eternal things which transcend death. In that case it makes sense to maximize the utility of those things.

Scrupulosity is for the religious.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

The name does not cure the disease. How does one escape scrupulosity? After all, the argument seems so ineluctable:

CAUTION: INFOHAZARD FOLLOWS. Imagine the Queen of Underland explaining all this to Prince Rilian and his rescuers while casting mind-fuddling incense onto a brazier and playing a hypnotic drone ambient.

Given a choice between a good action and an evil one, which should you choose? The good one, of course. That's what good and evil mean.

Given a choice between a very good action and a merely slightly good one, which should you choose? Surely the better one. That's what better means.

Given a choice between a good action and a slightly but definitely less good action, which should you choose? Still the better one, right? How could you defend seeing a choice between two actions, and choosing the worse?

Therefore you must always do the very best thing you possibly can, as best as you can judge it, all the time, forever. Maximising utility isn't the most important thing, it's the only thing.

And if you nod along to all that, there you are, charmed by the Emerald Witch, or as I think of this evil spirit, eaten by Insanity Wolf.

MAXIMISING UTILITY ISN’T THE MOST IMPORTANT THING

IT’S THE ONLY THING.

WHILE YOU’RE SLEEPING

PEOPLE ARE DYING.

WOULD YOU PREFER

INEFFECTIVE ALTRUISM?

BURNT OUT?

WORK EVEN HARDER!

WHEN DOING GOOD

“ENOUGH” IS NOT A THING.

Expand full comment

Note that "good action" is not synonymous with "obligatory action". Charitable donation is good but not obligatory.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

But can a thing be good to do but not obligatory? That is the fundamental issue. I gave the Emerald Queen's answer above, but I didn't conjure that from whole cloth. Peter Singer, in his younger days, made the same argument. Here he is in his foundational paper "Famine, Affluence, and Morality" (https://en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality) (numbering mine):

1. "I begin with the assumption that suffering and death from lack of food, shelter, and medical care are bad."

2. "[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

3. "The strong version [i.e. (2) as just stated] does seem to require reducing ourselves to the level of marginal utility. I should also say that the strong version seems to me to be the correct one."

Quoth the Emerald Queen.

I say "in his younger days", because I heard an interview with Singer in his later years in which the question was put about caring for your own daughter no more than anyone else's. He rather uncomfortably replied that that was one way you could live, suggesting that he had unresolved doubts. On the other hand, when he won the $1M Berggruen Prize last year, he put his money where his mouth is and gave it all away.

I don't know if Peter Singer has formal ties to EA, but it is clear from public statements that they both think highly of each other. So there's a puzzle. What basis can or does EA have for not demanding everything?

If anyone has a coherent argument against ethical maximalism, it will be the first I've seen. All I've come across in the academic literature is assertions to the contrary, expostulation that it is absurd, and invoking magic words such as "scrupulosity" and "supererogation" to name the things they want to be true without showing that the things named exist. All of these responses have also occurred in this thread.

I don't have an answer either, but then, I'm not any sort of A, let alone an EA.

Expand full comment

Here's a simple one: if something is an obligation, you will be punished for not fulfilling it; so if you don't get punished for not doing X, X was never an obligation.

Here's another: ontologically, obligations aren't transcendent platonic realities, they're social agreements, like promises and contracts. (If one *ought* to do something, to whom is it *owed*?) Singer is only obliged to give away his prize money because he publicly committed to doing so.

Supererogation is no more ontologically suspicious than obligation. Neither is made of atoms.

Expand full comment

Morality is not dependent on who sees what you do. Singer did not give away his prize money because he publicly committed to doing so, he did so because he already thought it the morally obligatory thing to do, prior to telling anyone. Nothing in his work suggests that punishment plays any role in his concept of a moral requirement to do the right thing. Moral obligations are not social agreements; morality does not consist of doing what other people expect of you.

The problems I see in attempted defences of supererogation have nothing to do with the issue of whether supererogation is made of atoms. The problem is the fundamental one: when can it be good to not do the good that you can? Singer's answer is never. The good that you can do is the good that you must do. "Supererogation" is the concept that one need only do "enough", wherever one draws that line, beyond which lie the things that are the good that you can do but the good that you need not do. The problem is how to justify any such line at all.

Expand full comment

I reject (2). Now what?

Expand full comment
Jul 23, 2022·edited Jul 23, 2022

No argument from me. Is there anyone in the house to argue for or against Singer’s Second Axiom?

"[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

I’m hammering on this point not because I have a committed, reasoned out position on this, but because I don’t, and I do not think that anyone else does except Singer, yet it goes to the very heart of EA. I have read various papers defending supererogation and the finitude of duty, and criticising utilitarianism for its total demandingness, but I have not found coherent arguments in any of them.

Singer offers a coherent argument, but my likening of him to the Queen of Underland and Insanity Wolf shows how little it sways me.

Personally, I hold to the finitude of duty, and draw it much narrower for myself than any EA does. At the same time, I am aware that I do not have a foundation for this. For all practical purposes I do not need one, any more than anyone needed Russell and Whitehead to take 350 pages to prove that 1+1=2 before they could do arithmetic. But in the end, their magnum opus was necessary.

Expand full comment

I feel like that "without thereby sacrificing" part is too easily assumed in discussions of this subject. Are there good, explicit arguments for why "I like nice things" and the like don't have comparable moral importance, rather than just assuming it implicitly?

Expand full comment
Jul 26, 2022·edited Jul 26, 2022

There are certainly arguments. Here is one made by Thomas Aquinas, quoted with approval by Singer:

"Now, according to the natural order instituted by divine providence, material goods are provided for the satisfaction of human needs. Therefore the division and appropriation of property, which proceeds from human law, must not hinder the satisfaction of man's necessity from such goods. Equally, whatever a man has in superabundance is owed, of natural right, to the poor for their sustenance. So Ambrosius says, and it is also to be found in the Decretum Gratiani: 'The bread which you withhold belongs to the hungry; the clothing you shut away, to the naked; and the money you bury in the earth is the redemption and freedom of the penniless.'"

Or filtered through Insanity Wolf:

TO HAVE ANYTHING IS A THEFT / FROM THOSE WHO HAVE NOTHING

THE LIMIT OF GIVING / IS YOUR ABILITY TO GIVE

GIVE / UNTIL IT HURTS

The quotation can be found online, in a different translation, at https://www.newadvent.org/summa/3066.htm#article7. Later in that section, Aquinas goes even further and defends not only giving up one's own goods but taking one's neighbour's, if it is necessary to succour those in need. If an Effective Altruism party got into government, I wonder what their tax and foreign aid policy would be.

Those who do not recognise divine providence can skip the phrase "instituted by divine providence". Some claim that the natural moral order can be objectively discerned by the same methods of observation and reason as are successful for the natural physical order, without any supernatural foundation. (This doctrine is called "moral realism".) But they do not all agree what those truths are.

How should people decide among these and other views, or arrive at one of their own? What is "the natural order" and how may it be discerned? It will not do to read Singer and nod along to his argument, or read anyone else and nod along to theirs; for even GPT-3 can sound convincing when read in that way.

This can be compared to the problem of objective priors in Bayesian reasoning. One does not get to pick whatever priors will give you the posteriors that you want, and meet criticism by saying "But Muh Priorz!" There are ways of arriving at objective priors for a problem (and much debate over the same, but it isn't relevant here). How can we arrive at objective moral priors? And what would they look like?

Expand full comment

Assumption: "Goodness" is a totally ordered, quantifiable thing, even between radically different actions. Second assumption: "Goodness" is an inherent property of actions, and goodness can be aggregated. Third assumption (of utilitarianism): "Goodness" can be compared and calculated stably *between individuals*.

I don't believe any of those assumptions hold, at least when you get down to it. There is no universal "util" that we can calculate; rank ordering actions isn't even theoretically possible outside of very narrow scopes, and actions in and of themselves (disconnected from context and everything else) don't have "goodness" or "badness" as an absolute, abstract property.

Sure, when you *can* compare actions, you should choose the best of them. But without the whole mechanism of utilitarianism's flawed hubris of universal calculationism, you can't really "do the math" in most cases. And every time someone tries, horrors result.

Expand full comment

Also, utilitarianism ignores intention [*], and can't come up with a sensible level of obligation or burdensomeness... it's either all or nothing.

[*] Anscombe introduced the term consequentialism, and rejected it out of hand because she saw the purpose of morality as apportioning praise and blame, and thought it unreasonable to do so on the basis of unintentional good or bad consequences.

Expand full comment

Yes, but I also think it's the type of criticism that is acknowledged but ignored. That is, it's a domesticated criticism. I doubt that people in EA realize how big a problem this is. It seems to me that it is probably hobbling the movement quite a bit.

Expand full comment

agree, was sad to see the Lacan trail from a few months ago turn cold

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

Me too. I hope someone revives it.

[edit: I guess I'll take a stab at it]

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

"The fact that this has to be the next narrative beat in an article like this should raise red flags. Another way of phrasing “this has to be the next narrative beat” is that it’s something we would believe / want to believe / insert at this place in our discourse whether it was true or not. "

In a word, *no*.

It is the next narrative beat because many of us *smell* it. It has been the smell of EA for years. It has been the smell of EA for numerous readers of this blog (and its immediate predecessor) ever since you (very gently) said they made you feel bad for being a doctor instead of working on EA. That was a bad smell and it never, ever went away.

I don't know precisely what is the problem with EA, because I'm not putting in the epistemic work. But I feel *very* comfortable saying "I know it when I smell it; EA smells like it has good intentions plus toxic delusions and they're not really listening".

If you want the best definition I can personally come up with, it's this: EA peer pressures people into accepting repugnant conclusions. Given that, *of course* it doesn't want real criticism.

Side note: Zvi did an extremely good job writing the coherent and detailed breakdown of EA's criticism problem from the other side, and I understand that the whole point of this essay is you don't want to give that kind of thing any air, but it's not very nice or epistemically reasonable not to give it air.

Expand full comment
author

Disagree because I think it would be the obvious next narrative beat in *any* case, not just EA. "This big and powerful group seems obsessed with soliciting criticism of itself" just naturally lends itself to "...but they're faking it, right?"

I didn't want to go through the details of Zvi's criticism because it didn't make sense to me - it looked like some sort of Derrida-esque attempt to argue that slight details of the phrasing of every sentence proved that it actually meant the opposite of its face value. I could never see it when Derrida was doing it and I can't see it when Zvi is doing it either.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I agree that it's Derrida-esque. He's never the most legible. So that's a fair reason. But I think the key takeaway is the 21 point list of EA assumptions, and the idea that EA discounts or discourages disagreements with these, and this part specifically is worth giving thought and air.

I would be suspicious when any big and powerful group wants criticism of itself because that follows naturally from reasoning about the nature of big and powerful groups. But also, I would be suspicious for *specific* reasons in certain specific cases ("such and such a group has a history of inviting dissent, then punishing it a year later", or "such and such a group has paid lip service to disagreements the following times, and I expect the pattern to continue"), and I think these reasons could be instructive in understanding, changing, or even opposing the organizations.

In the case of EA, I wouldn't believe it because EA has the hallmarks of a peer pressure organization, and I think the criticism they're most likely to discount is "the negative value of peer pressure outweighs the positive value of your work". That's not a fully general criticism of organizations; it's a specific and potentially useful criticism of one type of organization.

I wouldn't tell a shy conflict averse person not to work for the US Government. But I would tell them to avoid making contact with EA.

Expand full comment

What kind of movement/organization *doesn't* have the hallmarks of a peer pressure organization? Isn't that literally the way you get a movement rather than a bunch of random individual choices? Sometimes the form of the peer pressure varies (is it a carrot or is it a stick) but it's always peer pressure that makes us conform enough to form a movement.

Expand full comment

Off the top of my head, a lot of political/activist organizations seek to change the minds of non-participants, but only mildly encourage non-participants to become participants. At most, they're targeting a specific sub-demographic for recruitment, while sending a much milder message to the whole demographic.

What sets apart EA (and Christianity, and militant vegans, and ...) is that they're telling people that it's a *personal moral failure* to not join the movement. That's one specific type of movement, and we should distinguish between those and the "milder" movements that only want a small part of your resources/mindshare, or just want you to get out of their way so that they can do their thing in peace.

NB: I've never personally encountered the described peer pressure in my interactions with EAs, but I'm conceding that point for the sake of argument.

Expand full comment

I don't think the OP's concern is particularly proselytizing to 'outsiders'. Presumably they would find EA Judaism which only tried to make those born to EA parents feel bad about being doctors also unappealing.

Expand full comment

Masonry. You have to get the signatures of two master masons to be admitted to the fraternity, and you have to ask yourself to enter; they are not allowed to solicit members.

Expand full comment

As others have pointed out, there are a lot of movements that only want to funnel a fraction of people into the most active level of membership, and a lot of movements that offer but don't push increased participation.

EA is indeed like other totalizing movements, evangelical Christianity among them, in that it wants literally everyone to advance to a high level of membership and will happily pressure all of the people to do so.

If you've never been to an evangelical church, it's quite a trip. Sermons regularly revolve around the idea that everyone is on the borderline between being a bad Christian and a good Christian and the tacit suggestion that one's commitment level to church functions will make all the difference. I once saw a pastor preach through a case of hemorrhoids as an object lesson in dedication. EA can feel like that.

By contrast, the US government is fundamentally not a movement, and if I want to join it and work for it, they're happy to use me, but if I don't, they don't care, I'm just a citizen and that's fine by them.

You're right that I would find EA Judaism unappealing, but still better for being confined to a clique.

Expand full comment

It is the dream of anyone who writes a post called Criticism of [a] Criticism Contest to then have a sort-of reply called Criticism of Criticism of Criticism.

The only question now is, do I raise to 4?

I did it that way for several reasons, including (1) a shorter post would have taken a lot longer, (2) when I posted a Tweet instead, a central response was 'why don't you say exactly what things are wrong here', (3) any one of them might be an error but if basically every sentence/paragraph is doing the reversal thing you should stop and notice it and generalize it, (4) you talk later about how concrete examples are better, so I went for concrete examples, (5) they warn against 'punching down' and this is a safe way to do this while 'punching up' and not having to do infinite research, (6) when something is the next natural narrative beat that goes both ways, (7) things are next-beats for reasons and I do think it's fair that most Xs in EA's place that do this are 'faking it' in this sense, (8) somehow people haven't realized I'm a toon and I did it in large part because it was funny and had paradoxical implications, (9) I also wrote it out because I wanted to better understand exactly what I had unconsciously/automatically noticed.

For 7, notice in particular that the psychiatrists are totally faking it here, they are clearly being almost entirely performative and you could cross out every reference to psychiatry and write another profession and you'd find the same talks at a different conference. If someone decided not to understand this and said things like 'what specific things here aren't criticizing [X]', you'd need to do a close reading of some kind until people saw it, or come up with another better option.

Also note that you can (A) do the thing they're doing at the conference, (B) do the thing where you get into some holy war and start a fight or (C) you can actually question psychiatry in general (correctly or otherwise) but if you do that at the conference people will mostly look at you funny and find a way to ignore you.

Expand full comment

I know EA only from your post, and now Scott's. I got the same vibe from both of your posts... My takeaway is that the idea of eliminating all suffering/evil in the world is dumb. Suffering is what makes us stronger, builds character, gives us something to fight against (the hero vs the villain story). I'm not going to say we need more racists, misogynists, or chicken eaters, but trying to eliminate all of them is a mistake. We've turned 'no racism' into a paper clip maximizer... and we should stop.

Expand full comment

Suffering never gets eliminated, we just complain about ever tinier molehills as the hedonic treadmill accelerates... and that has been the case since the Industrial Revolution.

Expand full comment
founding
Jul 20, 2022·edited Jul 20, 2022

EA is saying much less controversial things than "eliminate all suffering" or even "eliminate all racism". Things like:

1) 5-year-old children should not die of malaria

2) We should not subject billions of food animals to lifelong torture

3) We should prevent human extinction (Edited for clarity; previously said “We should prevent everyone on earth from dying.”)

I think it's pretty hard to argue that we need e.g. 5-year-olds dying of malaria to "build character".

Expand full comment

"One of these things is not like the other"

Things 1 and 2 are popular things said by lots of people. Bill Gates is saying 1. Special food labels in many restaurants and grocery stores are saying 2.

Thing 3 is extremely controversial across the population of Earth, maybe to the point of war.

Expand full comment
founding

Sorry I think I phrased that poorly. (3) is not “no one should die” it’s “it would be bad if humanity went extinct”. I don’t think that’s too controversial!

Expand full comment

We should prevent everyone on earth from dying? Not a good idea. 8 billion and counting. Death is the inevitable end of life, we should learn to negotiate it with grace. Immortality is a corrupt and selfish goal. Personally, my perfect end would be abrupt and pain-free; timing tbd; I'm ok with whenever.

Expand full comment
founding

Sorry I think I phrased that poorly. (3) is not “no one should die” it’s “it would be bad if humanity went extinct”. I don’t think that’s too controversial!

Expand full comment

Much more controversial things are implications of utilitarianism, though.

Expand full comment

Huh, OK.

1.) Maybe in some places kids need to die of malaria because it keeps the population 'fit'. Giving them bed nets means they are a population now dependent on bed nets. Listen, I'm not saying anyone should stop sending bed nets, or help build water wells, or ... whatever. But no action is 100% good.

2.) Better life than no life. OK, a Hitchhiker's Guide to the Galaxy reference.

Would you be happier if we bred cows that wanted to be eaten?

3.) Yeah! why isn't this number 1?

Expand full comment
founding

2) The main issue (IMO) isn’t whether the animals want to be eaten, it’s that they live in miserable conditions (to the point that their lives are indeed likely worse than no life).

3) In practice this is the most controversial branch of EA because it’s somewhat unclear what could cause human extinction or how to avert it. (The leading EA theory is AI).

Expand full comment

Hear hear

Expand full comment

I really wouldn't worry about the danger of EA becoming so successful that enough suffering in the world is eliminated that life becomes meaningless.

Expand full comment

I think I just want to make the point that some suffering is good, and so utilitarianism can't be 100% right.

Expand full comment

Where are you getting your definition of Utilitarianism that includes the necessary elimination of all suffering?

Expand full comment

In what way do children who die of malaria become stronger?

Expand full comment

Yeah, this is the sucky part of evolution. It's the ones that live. There are some environments where it's harder to live. But to keep people living there... select those that live.

Expand full comment

Evolution does not have a "direction".

Expand full comment
author

Is it okay to be a doctor, or is that going too far in eliminating suffering which is good? Are you especially virtuous if you beat up random children you encounter on the street?

I avoid these questions by just assuming that less suffering is good in whatever case (except where the suffering is designed to do something useful, like in boot camp). Given how much suffering there is even when people work hard to eliminate suffering, I'm not really worried we'll ever run out.

Expand full comment

Surely the exception criterion should be whether the suffering is consented to/desired by the people experiencing it, not whether it's "doing something useful"?

Expand full comment

Lots of people suffer consensually in ways that they really should be talked out of. Someone who cuts themself every time they make a mistake is consenting to that, but it's probably not healthy and it would probably be preferable that they stopped.

Expand full comment

Right, we all work to reduce suffering, and no way should we try and create more. But I (recently) had this idea that some suffering is good. The good part happens when we get through/over the suffering and come out the other side. (If you die or never stop suffering then that seems all bad.) The things we suffer through in life seem to shape us more than happy things. We can all think of life histories of people, and see how the suffering shaped them. Frederick Douglass, James Baldwin, Robin Williams... OK maybe it's not always the getting over it... maybe it's just dealing with it. Well this is mostly a new thought for me, and I may change my mind.

Expand full comment

It's a familiar idea.

Eliezer Yudkowsky made a whole speech about how, if people got bashed by a mace once per month, eventually they'd come up with all sorts of reasons why getting bashed by a mace is actually great for you. It builds character, it makes your face more resistant to bashing, etc. Sour grapes stuff, from people who regularly get bashed in the face and want to justify why it's actually good.

If you go to a person who doesn't get bashed and tell them of these amazing benefits and ask them "Do you want to get your face smashed in once per month, to build your character?" virtually everyone will say no.

Expand full comment

"Good" "Virtuous" "Suffering". These are philosophical terms that were kidnapped and redefined by EA. And no amount of "well obviously we're better off to skip the metaphysical talk and redefine them" avoids the very nature that such a move is itself philosophical.

"The good is less suffering, therefore less suffering is good." Is just misusing the term good. Be content to just say 'we subjectively feel compelled to reduce suffering' and honestly define suffering either as noxious brain chemicals (over the long term that don't reduce noxious chemicals in other brains) or something else.

Expand full comment

Ok but "it's easier than answering these questions" isn't compelling, firstly (as I'm sure you know), and furthermore, understanding the extent to which suffering should be destroyed is relevant to more than just the destruction of the final suffering.

It's your philosophy of suffering. If I consider suffering The Bad, then I will eliminate suffering in a particular way. I may reduce it wrongly because of my mindset, or I might eliminate good sufferings, or I may label the wrong things as suffering. Having a good and correct philosophy of suffering is *hugely* important to alleviating it. Properly prioritising suffering relative to pleasure, love, endurance etc. is essential to what kind of interventions you use.

So hand waving what suffering is seems like a fatal mistake.

In addition to all that, I would say it's most important to have a strong case on whether suffering is the ultimate evil, what it is, and in what manner it should be prioritized because we are *fixing* other people and other communities. In our zeal to eliminate their "suffering" we destroy Their Good, all while patting ourselves on the back and not bothering to wrestle with ourselves because "all people agree suffering is bad." What is suffering? How bad is it? Is it worth violating others' philosophies or communities to pursue it? And if the answer is yes, you better be damn well able to vociferously defend it.

Expand full comment

Yes yes yes. Some suffering is good. And to say, "oh there is so much suffering, getting rid of this piece must be good"... It doesn't follow. If you agree that some suffering is good, then you need a way to identify the good. Evolution, natural selection, is suffering. Should we do away with natural selection in the name of ending suffering?

As a sports fan, I suffer when my team loses, and yet a team that wins all the time would be the end of sports, losing and suffering is a part of the drama of sports... what makes it fun.

Expand full comment
author

Yeah, I'm definitely not saying you did wrong by posting it, and it seems like the sort of thing that *could have* been true, I think our crux is just that I looked at the examples you gave of how the sentences were phrased in ways that actually discouraged criticism and just didn't see it - they looked like totally normal sentences to me.

Expand full comment

Confusing, but one possibility is there isn't a contradiction there. They could be doing exactly what I say they are, and still also be 'perfectly normal' and that would indeed be the important thing to notice.

Expand full comment
founding

Yup!

Expand full comment

Were the monks that engaged in self-flagellation being too critical about themselves? Or was self-flagellation a means to a different end?

Expand full comment

This sounds remarkably like wanting to believe something whether or not it is true.

Expand full comment

Epistemic status: data point.

I'm an EA. I accepted *the* repugnant conclusion before I heard of EA. Don't assume that EAs' common beliefs are due to "peer pressure" rather than a combination of selection effects and normal epistemic convergence.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I'm not assuming EA's common beliefs are due to peer pressure.

I said "EA peer pressures people into accepting repugnant conclusions", *not* "EAs who accept repugnant conclusions were peer pressured". You're a data point against the latter, not the former. I'm claiming and providing a data point for the former.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I feel like EA is mostly about peer pressuring people into actually doing things they already knew were good, like helping the poor and being responsible for the future, with the same efficiency that they aspire for in their personal life (EA doesn't really appeal to people who don't already value rationality and efficiency in my experience). I know some people who therefore accept the repugnant conclusion, but it's not like you really need advanced utilitarianism to convince people that "the world ending would be bad, actually" and "we should expend a non-trivial amount of effort in the present day to prevent it".

I mean, is the criticism basically "People would be happier if they weren't part of EA"? Because I'm honestly not sure that that's true, I think most people that EA appeals to are already searching for meaning, they'd probably just join a less effective social movement instead. I know a lot of people who've personally benefitted from EA, getting careers, community and personal fulfilment out of it, I feel you'd struggle to convince me EA is bad even ignoring any positive effects on the world.

Expand full comment

Sorry, I have experience with doing good on an individual basis without pressure, doing good as part of a supportive but very low pressure community, and discussion with EAs. The first two things certainly made me happier, enough so that I do think there's a fairly tight linkage between "moral living" and "happy living". The third thing did not make me happy or give me any indication that further participation would make me happy. Not at all. It gave me the sense that my social group would come to consist of miserable scrupulous neurotics over-scrutinizing very small decisions. I'm over-scrupulous enough, thank you, and trying to *reduce* that quality.

Expand full comment
author

I think the key difference is whether you find over-scrutinizing decisions to be a fun recreational activity. If you do, it doesn't feel like neurosis at all - it just feels like you've suddenly been handed lots of bonus conversation topics and pastimes!

Expand full comment

Conversation is my pastime and I have issues with scrupulosity. When I was 25, that would have sounded right. Later, in therapy, and to my great benefit, I was taught that the difference between a measured and an excessive amount of that is much like the difference between a measured and an excessive amount of ice cream.

(I still consume both the literal and figurative gallon in a single sitting from time to time, of course, and I don't worry too much about that)

I think the post where you tried to talk some ethical people out of talking themselves out of having children for ethical reasons was a good example of seeing the excess.

Expand full comment

re: smell of EA, try Michael Nielsen's essay https://michaelnotebook.com/eanotes/

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I think the caveat I need to add is that I'm a Christian, and so I'm totally fine with accepting a philosophy that demands a lot more than I'm willing to give right now. That said, I do agree with the critique that Utilitarianism alone offers a very narrow vision of what a good life looks like, I just don't think that means it can't be a useful guiding principle for practical action. I guess that's my answer to the author's argument that EA needs to be part of a broader life philosophy.

If you view the Biblical commandment to "love your neighbour as yourself" as a condemnation of your own selfishness, then obviously becoming a Christian will make you miserable. If you instead regard it as aspirational, and as part of a broader philosophy, then it leads to a far more fulfilling life than just watering it down to "be nice to other people, when it's not too inconvenient" - this is why maximalism is good, actually. My main criticism of EA is that most utilitarians have a far too constrained view of what human flourishing looks like, but in practice we agree on the best ways to help others, so it's not going to be a practical objection until we get to the point where we've solved extreme poverty and existential risk.

Expand full comment

I'm a lapsed Catholic, and came from a family of very serious practice (daily mass, tithes, other serious footwork). I never lost my respect for maximalism, and respect the point you're making about it, while also making the distinction between condemnatory maximalism and aspirational maximalism.

But I did lose my respect for evangelism, and I don't believe it is moral or practical. In fact I would go so far as to say that the cost of evangelism scales with the ambition of the change, enough so that it will basically always outweigh the good.

Expand full comment

Can you say a bit more about that last point? I'm also a lapsed former ultra-devout believer who never lost my respect for maximalism (it was what attracted me to EA in the first place, before I saw its many warts), so I think I'd stand to gain from your take on this.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Sorry, I had to find the time to read the excellent note you linked by Michael Nielsen first.

I am not and have never been a Baha'i, but it's worth sharing two elements of the Baha'i faith. Note that these are ideals, not necessarily practiced by the average adherent.

1) The Baha'i believe that Abrahamic religion is still evolving, and has not yet reached its perfect form. So Judaism, Christianity, Islam, and maybe even the Baha'i faith itself, are true, but incomplete. In this sense the religion very much shares a good quality with EA: it is not settled, may never be settled, and is open to change and improvement.

2) The Baha'i prophet Baha'u'llah (the principal prophet, more or less like Muhammad) explicitly instructed his followers not to proselytize or force their faith on others. In practice, this is often ignored, but it is also often followed; my two Baha'i friends did not tell me their religion, and when I found out and asked them about it, each one expressed reluctance to tell me unless I was really interested from my own will.

When I learned this latter property, it was perhaps the single most refreshing thing I ever learned about a religion. I can't begin to tell you how refreshing it felt.

From that followed a frame of reference where I asked "would this thing be better without evangelism?", and for me the answer was so often and so completely "yes" that I eventually stopped asking, with one exception I'll treat at the end.

I have come to believe that people resist change in direct proportion to the magnitude of the push for change; hard push leads to hard heart. Moreover, while some people do yield and change and convert, as a side effect both the converter and the converted are left with a serious intensity, maybe permanently. Furthermore most "we meant well but did evil" mistakes seem to come from this place of intensity.

So when I ask myself whether something would be better without evangelism, what I get back is basically always a form of this:

1) It would take 10-1000 times longer to complete the project.

2) Nobody would feel coerced into it.

3) Nobody will choose to hurt somebody to get the project done sooner.

4) Nobody will have strong feelings to continue "fighting" after the project reaches a natural conclusion.

Basically always the right call.

The exception: technically law, justice, and protection are a form of evangelism; if people want to shoot guns in my neighborhood for fun and I force them to give up their guns, that's tantamount to evangelizing my position. I don't like this, and still basically always reject the aggressive form where I come into your neighborhood to enforce my views, but I concede that some wild or bad actors need to be forced to stop acting, and I'm totally willing to commission or provide that force when necessary.

Final side note: in the unlikely event you're a Baha'i, I'm going to have a good laugh that I've been telling you your own thing (perhaps inaccurately!)

Expand full comment

I've never been religious, and generally actively believe the religions I've encountered to be false. I have often been annoyed at evangelism.

But I am signed up for cryonics. I've not been evangelical about it. But I've wondered if I should be.

To my mind, if you believe you have a duty to save a drowning man, and you genuinely believe your religion, and it's a religion that contains something like Hell, it seems like you should evangelize. Hell is very bad. It's defined that way. If you see someone choosing Hell over Heaven, they are the drowning man. Do you not have a duty to help them? If they struggle against you, does that change your duty to drag them to shore?

(With cryonics, instead of Heaven and Hell, it's a chance at living, vs definitely dying, but the same calculation applies).

(I don't think I agree with your comment, and besides what I've said, I would note that a lot of your arguments about evangelism would apply to any form of "convincing", like teaching. And I think they obviously don't hold there. The main difference, I suppose, being in your examples it's a belief someone doesn't want to believe. Which is why my last line was about the drowning man struggling.)

Expand full comment

So what kind of charitable giving doesn't try to make ppl feel bad for not doing the right thing? The dog shelter commercials really go over the top with it, but all attempts to encourage ppl to do good come with a fair bit of making ppl feel bad about not doing them. It might not be intentional, but because ppl care about doing good, just telling them that these ppl are suffering and you could help makes them feel bad about not doing it.

So yah EA might make doctors feel guilty in the same way our society routinely makes investment bankers feel guilty: what are you doing just chasing the dollar and not making the world a better place.

What's different, I suspect, is that EA is making ppl our society lauds as selfless feel bad, rather than ppl we might not, at first glance, think of as particularly altruistic and selfless. But I don't really see the difference. Choosing not to think carefully about whether your actions are really making the world better or just giving you a warm glow, bc that's easier, seems equally selfish as choosing to ignore the poor to buy more shit. If our intuitions disagree, so much the worse for them.

But EA seems to take a much softer stand on it than mainstream society. It generally just focuses on convincing ppl that, hey, this would help more, while in the mainstream you regularly hear screeds denouncing ppl who just try to enrich themselves without concern for the less well off.

Indeed, you are describing the conclusions involved as repugnant, and isn't describing a conclusion as 'repugnant' another way of saying that society makes you feel bad about reaching that conclusion? (It seems implausible these are innately and universally repugnant given the ways some cultures behave.)

Expand full comment

As I've said at other points in this thread, I am a lapsed Catholic. But you know what kind of charitable giving doesn't try to make people feel bad for not doing the right thing? Church soup kitchens.

Long *after* leaving religion, I started volunteering at soup kitchens / food pantries / etc run by variously Catholic, Protestant and Jewish congregations. I've done this in several cities (Austin, Seattle, Bay Area, Los Angeles) and the pattern is extremely consistent: Nobody pushes *anything*, not increased participation, not donation, not even religion. They are just happy to see people show up to help, and very encouraging about that help. They often had nice deals worked out to get leftover goods and produce from local grocery stores, and from having done the pickup rides, I do not get any sense that those deals were worked out over pressure. They probably just showed up and asked nicely.

You know what's the meanest thing anyone ever said to me about that world? An EA type bluntly said "that's just a pointless salve". You know what? Damned right it's a salve, and given we're all going to die eventually, that *salve* is as much of a virtuous act as anything. I certainly don't see any moral law of the universe that makes it a clear lesser act than protecting animals from food industry cruelty. That kind of knee jerk comparing is *assholery*. Are some things better uses of our time than others? Yes. Is it reasonable to compare anything to anything else and flatly declare one the better and the other a waste of time? *No*, it's assholery. These aren't thoughtless positions. I didn't help the homeless thoughtlessly. They're positions taken from different value systems, and it's childish and insulting to default assume non-utilitarians have poorly reasoned moral systems.

I feel the same thing about doctoring. "what are you doing just chasing the dollar" is assholery. My sister in law is a pediatric child abuse specialist, which means she spends her entire working life handling medical child abuse cases. How *dare* anyone blithely lump her in with some hypothetical ambulance chasers? EAs don't understand how completely off putting that language is. It's honestly worse than pushy street preaching.

Last thing: when I say repugnant, I mean viscerally repugnant to me.

Expand full comment

Thank you for this. I come from a very different background, and you put my perspective into very different words that for some reason I agree with nonetheless.

I am an ER doc, and I also agree that pediatric abuse specialist would be a very hard doctor job for most including myself.

Seems like a good part of this debate can be summarized, to my undergrad-level liberal arts mind, as just universalism vs localism, abstract principles vs concrete doing-stuff. I am sure EA has discussed the heck out of that proposition as well, but like you I don't have the willpower to look up those discussions as I know it won't change my opinion on any of this.

Expand full comment

"Seems like a good part of this debate can be summarized, to my undergrad-level liberal arts mind, as just universalism vs localism, abstract principles vs concrete doing-stuff."

That's a good way of putting it.

On the concrete side, since you're an ER doc, thank you. You sawed the wedding ring off my pregnant wife's finger and reassured her that it was okay. You monitored her after she took too much tylenol during a fever delirium. You checked my daughter when she fell on her head, and gave her antibiotics for her UTI. You gave me high grade benadryl when I discovered I was allergic to jackfruit and my throat closed, and you scanned me to make sure my ribs weren't broken when I was doored by a car while riding my bike. You caught my mother's heart condition after a fall, and set my brother's broken leg. I wouldn't trade you for anything.

Expand full comment

A lot of this could be extended to explain why /r/unpopularopinion is full of popular opinions.

Expand full comment

The good version of /r/unpopularopinion is /r/The10thDentist

Expand full comment

You're absolutely right: I'm horrified by everything there.

Expand full comment

Hahaha

Expand full comment
founding

This is an unusually good ACX post, easily top 10%. I've long been frustrated that society has made paradigmatic self-criticism the ultimate high status move, to the point that folks become suspicious of any movement that doesn't do enough of it. You implore the reader to treat EA as a variable — I would really enjoy a fleshed out version of this post with EA actually replaced by X, and merged with the points about self-critique from I Can Tolerate Anything Except The Outgroup.

Expand full comment

Shades of Maoist "struggle sessions" ...

https://en.wikipedia.org/wiki/Denunciation_rally

Expand full comment

But the armchair critics of EA will basically never participate in or even intentionally cause (shades of) struggle sessions, because our whole thing is to allow a certain default apathy, whereas I claim EAs could plausibly end up doing (full on) struggle sessions, because they have the fervor and intensity required to do that.

If I criticize EA as a peer pressure organization and it somehow leads them to (shades of) struggle sessions, that's not evidence the critique was wrong, but evidence that it was right. Only insanely fervent and pressure heavy organizations can go there.

Expand full comment

Ever thus - at least for some time:

"The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned;

The best lack all conviction, while the worst

Are full of passionate intensity."

https://www.poetryfoundation.org/poems/43290/the-second-coming

Though that's not to say that being "full of passionate intensity" is necessarily always bad, or that "lacking all conviction" is necessarily always good. There ARE - or should be - other options on the table.

Expand full comment

That poem is certainly part of my training to be reluctant towards political and social movements. Thanks for the reminder.

Expand full comment

Anyone whose actions stay the same while indulging in self-criticism isn't actually doing self-criticism at all

Expand full comment

Generally agree, although when one is perfect it's hard to improve ... 😉

But your argument kind of hinges on "indulging" and on the question of how credible are the criticisms. Though maybe less a case of "self-criticism" and more one of being open to the criticisms of others - apparently the case with EA soliciting them.

If, in the latter case, the criticisms are trivial or unfounded then EA and their like can't really be faulted for not doing more than saying, "thank you for your comments". However, IF there IS substance, and IF EA and company refuse to acknowledge that and make some effort to correct the problems THEN, and only THEN, can one reasonably throw stones at them.

Sort of my schtick; see my recent post and About for details:

https://humanuseofhumanbeings.substack.com/p/welcome

There are all sorts of feedback mechanisms in society that ensure individuals, corporations, and governments stay on the straight-and-narrow; as Eleanor Roosevelt once put it:

"...our children must learn...to face full responsibility for their actions, to make their own choices and cope with the results...the whole democratic system...depends upon it. For our system is founded on self-government, which is untenable if the individuals who make up the system are unable to govern themselves.”

https://www.goodreads.com/quotes/824275-our-children-must-learn-to-face-full-responsibility-for-their-actions

But that "governing" is part and parcel of feedback and control systems, the paradigmatic example of which is Watt's governor:

https://en.wikipedia.org/wiki/Centrifugal_governor

However, that process of social feedback - any feedback in fact, mechanical or electronic or genetic - has a great many seriously pathological manifestations. An example of which is, arguably or presumably, EA's "virtue signaling": their "self-indulgence" in appearing to be open to criticism while doing nothing to address those criticisms that presumably have some merit.

Expand full comment

The hardest thing to do in the modern world is believe in something or try to accomplish something. People will call you an idiot for having principles or causes, root for you to fail, and then either gloat if you fail or grumble about you if you succeed.

Expand full comment

Indeed. Though the question of WHAT to believe in is something of a sticky wicket. As W.C. Fields was reputed to have said, "A man has got to believe in something. And I believe I'll have another drink." 😉

The problem these days is maybe less that there's not enough to believe in and more that too many believe in totally untenable claptrap and outright woo - "magical thinking" as Kurt Andersen, author of "Fantasyland: How America Went Haywire" put it therein. Highly recommended book; fairly decent and comprehensive synopsis of it in this oldish Atlantic article by him:

https://www.theatlantic.com/magazine/archive/2017/09/how-america-lost-its-mind/534231/

Expand full comment

Agreed. Unfortunately those who should succeed against the odds and those who should be shamed or talked out of bad ideas look the same from some early stage and invite the same behaviors.

Expand full comment

I suspect this was true of the ancient world as well.

Expand full comment

The post this reminded me of was Post-Partisanship is Hyper-Partisanship (https://slatestarcodex.com/2016/07/27/post-partisanship-is-hyper-partisanship/). A criticism of the foundations is similar to a fargroup criticism, while a criticism of tactics is similar to an outgroup criticism. They are not exactly the same, because in this case the critiques still come from within the group. What they share is that near mode threats have more emotional salience.

Expand full comment

>Go to any psychiatrist at the conference and criticize psychiatry in these terms - “Don’t you think our field is systemically racist and sexist and fails to understand that the true problem is Capitalism?”

Of course, there's a sense in which this isn't really criticism of psychiatry at all; it's just advancement of one's own ideology. Most of these critics have their own version of "sufficiently progressive" psychiatry that they think would be just fine.

Expand full comment

I read those sections the same way - this had nothing to do with psychiatry, but was instead a means to promote unrelated political views.

Expand full comment

I think EAs are motivated to make those "broad structural critiques" because they feel like things outsiders are thinking, or saying, and EAs want to anticipate those external criticisms so they can (1) refute them and (2) proudly claim they already thought of that and it's a good point, but here's why, all things considered, it doesn't hold (kind of like a memetic immune system). And they do _this_ because they think it'll win people over and grow the movement.

Expand full comment

Yeah, I get the sense that most critique “bounces off” in the sense that people give a plausible-sounding refutation, then choose not to engage with it further/at a deeper level, satisfied with their own defense.

Expand full comment

When I was a Christian child, I spoke with a series of practicing Christians who made sincere claims that they too had seriously questioned their religion. Some had, but I always felt like more had done some relatively shallow and gentle questioning and, since they didn't have a scale that fit non-believers, they had just decided it must be the top of the scale, the serious level of questioning. "I've swum in deep water. 2 feet, 3 feet, I've done it all. I always came back."

Expand full comment

>I don’t know if it’s meaningful to talk about EA needing “another paradigm"

Beyond this, I'm not clear how EA could reach another paradigm while still being EA. My conception of EA is a charity selection and funding paradigm based on a rationalist quantitative approach aimed at producing maximum utility for everybody in the world (sometimes including animals).

If something is fundamentally wrong with that model, a paradigm shift wouldn't entail 'changing' EA, but instead shifting the substantial amounts of philanthropic money in the rationalist-sphere away from EA and towards a different grantee-search model altogether.

Of course, anybody with a job in big EA would very much prefer that the big donors stay within the EA paradigm when choosing how to give away their money. Openphil and etc. would likely change a bit if asked nicely, but if the fundamentally best way to perform charity was something far away from EA I am skeptical they would be able to resolve the massive conflict of interest. It seems to me that a much better exercise would be to convince the money behind EA to pick their grantees somehow else rather than convincing professionals to kill their industry.

Expand full comment
Comment deleted
Expand full comment

I have sympathies for EA. So if I want to do some good, I don't calculate impacts, I just donate to whatever GiveWell suggests.

Expand full comment

Interesting. This is a real deviation from my ill-informed outsider's assumptions about EA as well. Are the core GiveWell charity lists still good for doing ROI-style charity selection? Or are they corrupted to more subjective metrics?

Expand full comment

Assigning numbers to subjective judgments so you can do math with them is peak shape-rotator nonsense.

Expand full comment

Not necessarily. Scott had a post about doing this vs just using intuition on the old blog, I think.

Expand full comment

Or peak wordcel nonsense. Depends on who gets to do the assigning!

Expand full comment

Then what the heck is the point of EA? You give money to people to distribute based on data/math and instead they just distribute it based on their personal biases?

Expand full comment

The single biggest piece of writing that predisposed me to agree with you is exactly the criticism that Scott praised. It did a very good job of describing an incident where OP seemed very slow to move away from one of its programs that was allegedly not working. So it seems that if they had to move way from their "EA" program, or their "rationalist quantitative approach aimed at producing maximum utility for everybody in the world" program, they'd struggle pretty hard to do that. But this kind of seems consistent with what Scott said in the post!

Expand full comment

In fairness, I think most people would struggle to admit that what is essentially their life's work, certainly career, is fundamentally unsound and rather than being the best thing in the world to do is just a waste of time and money. I doubt I would be able to say that myself if it came to it.

(I do not think all of that about EA)

Where I disagree with Scott is that he thinks the current EA can die by a thousand specific critique cuts and evolve into something better, whereas I am skeptical the current EA institutions have that capacity for such change. (And further that any movement/cause/philosophy could survive a complete abandonment of its core principles and reason for being.)

Expand full comment

Movements survive complete abandonment of their core principles all the time - they just have to do it slowly enough, or at times of great enough crisis. Look at the evolution of the conservative movement over the last 50 (or, better, 200) years.

Expand full comment

Or look at any religion.

Expand full comment

When the paradigm change happens, I’d give even odds as to whether the name “EA” attaches to the new thing or disappears. Sometimes, the paradigm shift is able to be explained as the way of doing what they always wanted to be doing but didn’t quite do right, and sometimes the people who make the switch early are people who were outside the term.

Consider the way that modern biology is said to be Darwinian even though in the late 19th and early 20th century, Mendelian genetics with its discrete units of heredity from a “gene pool” was thought to be this one idea that Darwinian theory could never accommodate, with its demand for constantly varying traits that bring species completely outside what they had been.

If sufficiently high status EAs adopt some revolutionary new idea for thinking about philanthropy that is at odds with the kinds of measurements and observations they’ve been doing, then the new thing could be called EA.

Expand full comment

But a synthesis was eventually found, and the underpinnings of modern biology are much closer to Darwinism than to Lamarckism: https://en.wikipedia.org/wiki/Neo-Darwinism

Expand full comment

But Lamarckism wasn't the new paradigm competing with Darwinism - Mendelism was! To a Neo-Darwinian, we don't think of Mendelism and Darwinism as competitors, but they clearly were at the time - Darwin said traits had continuous variation around the traits of the parents, so that small differences can accumulate; Mendel said traits had binary variation, so that the only differences possible were those already in the gene pool. Once we understood that most traits were controlled by many genes, and that there are rare mutations in any, we were able to synthesize these.

Expand full comment

Raises the question whether money can actively harm in certain situations. I can think of a few. Rags to rags in three generations and all that

Expand full comment
author

It depends on what you mean by "paradigm". The biggest paradigm I can think of in past EA was that it used to emphasize how the most important thing the average person could do was make/donate money, and now it emphasizes that the most important thing the average person can do is take a beneficial job. If it made another pivot at least that large, I think it would be fair to call it a paradigm shift.

I agree that with sufficiently big paradigm shifts it would have to look like "don't do EA, do something else" - though I'm not sure how you could call that an EA paradigm shift, or what it would take to make EA people agree that this paradigm shift was right.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

Both of those seem like different tactics for "do what is quantitatively the most good", just a tactical shift that seems like it might be targeted at broadening the potential movement? The new one is much more attractive to young potential activists who'd rather do good directly and don't have much investment in their careers, while the older one targets the rich professional class who'd rather keep working at their pre-existing high paying careers. That shift feels like a shift within the same charity framework which wasn't quite what I had in mind.

If it wasn't clear, my point is that if you think "do what is quantitatively, with a data-driven application of rationalist principles, the most good from a utilitarian perspective" is fundamentally bad or unsound or inefficient, critiquing EA directly is fairly pointless, because the people you're talking to naturally have a lot invested - career-wise, and in personal labor and mental energy - in EA being fundamentally right, and it would be extremely difficult to throw that away. Rather, any "critique" would need to be in the form of simply starting your favored approach from scratch and, if possible, avoiding confronting EA people and demanding they change their core beliefs directly.

Expand full comment

Two bits of wisdom from my grandfather you may enjoy on this topic. He was a Master Chief in the US Navy.

“As soon as nothing starts to become something it can’t be everything, and that always pisses someone off.” Or, whenever you bring something into the real world and hit actual constraints people being mad about it is unavoidable.

“Saying that there should be more goodness and less badness isn’t an idea, it’s just a complaint from someone who doesn’t want to spend the effort to figure out how to fix something.”

EA folks seem nice, if a bit at odds with some of my own beliefs. Though to one of your posts I particularly enjoyed, I think those things seem especially prominent to me because I am otherwise identical to them in 95% of the rest of my beliefs. I am always happy to bend over backwards with my spiritual generosity toward complete heathens whose beliefs don’t overlap with mine in any regard.

Expand full comment

Love the first quote.

Expand full comment

Agreed, I'm stealing that one for sure!

Expand full comment

Just remember to credit “Some guy’s grandfather.”

Expand full comment

Love those sayings. As I have discovered in my life, sometimes you have to choose between being nice and getting shit done.

Expand full comment

There’s a definite balance to be struck, but you should always be mindful that it is literally not possible to please everyone.

Expand full comment

Re III. “Zen and the Art of Motorcycle Maintenance” was available at the scholastic book fair in approximately 1990 so I bought & read it. It was published in 1974. IIRC it raised all these points about duality (maybe I’m thinking of “What the Buddha Taught,” but still, widely available.)

Not only is the criticism no longer attached to the ostensible target (in this case EA), I think the machine got stuck in about 1980 and now all it can do is clank and mutter “too individualistic…”

Sometimes it adds “ego death” and “transcendence.”

Things wear out after a while, lose their zing. So back in 1970, when so many American intellectuals were getting high and writing about what their high felt like in Eastern-philosophical terms, it was surprising and potentially impactful. It is clearly not surprising anymore (re-watch Koyaanisqatsi if this needs to be clarified.) The medium moved on. But the habit of saying those same things has never gone away. I think it shouldn't count anymore though. The more everyone tries to consciously group themselves, the more their uniquenesses are revealed.

Expand full comment

At the risk of sounding like a personal attack, I believe this is at least a partial example of the general discursive (failure?) mode discussed in the original article: You point out a single general theme and argue for the inverse. Personally, I find the surprise factor/Shannon entropy of this argument low enough to be easily predictable, as Scott mentions about EA criticisms.

Throughout the whole article, I was actually getting Right is the New Left [0] vibes. To my eyes, it looks epistemically suspicious when an argument boils down to a parity inversion on some perceived trend. This also potentially explains why the argument feels predictable: It only carries a single bit of information. When I notice this pattern in myself, I like to play Devil's Advocate, "are all my arguments just back-rationalizations?"

[0]: https://slatestarcodex.com/2014/04/22/right-is-the-new-left/
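To make the "single bit of information" framing above concrete: here's a minimal sketch of the Shannon-entropy calculation (the `entropy_bits` helper and the example probabilities are my own illustration, not anything from the thread). A criticism you can predict almost perfectly carries almost no information.

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A criticism whose content is a genuine coin flip carries a full bit...
fair = entropy_bits([0.5, 0.5])           # 1.0
# ...while one you can predict 99% of the time carries almost nothing.
predictable = entropy_bits([0.99, 0.01])  # ~0.08
```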

Expand full comment

Here’s a quote from Scott’s post:

“I don’t know if it’s meaningful to talk about EA needing “another paradigm” - this whole discussion conflates scientific theories, ideologies, and methods for producing change. But if it does, it will come from complaints like the criminal justice criticism, which record boring ways that EA-as-it-exists-now made bad decisions on some specific point. If we had a hundred such complaints, maybe we could figure out some broader failure mode and how to deal with it.”

I agree with this. Not because I know specific things about EA. I don’t. I think development economics can be held back by institutional and governmental conflicting priorities, so I like it that wealthy people are trying to beat the market and get better return on their “investment,” whatever form the return takes. I think some of them will succeed and in aggregate they will not do worse than other NGOs and governments.

I agree with what Scott wrote because hearing people say “we need a new paradigm” is a sign of dry rot. It’s beetles in the tree bark.

The writing sounds a bit like he was pulling punches and I mistook that. I thought he was not realizing what a retread “we need a new paradigm” is. But I was wrong, clearly he does realize.

The “low surprise information environment” is a great comparison. Very low Shannon entropy (new idea to me, but useful.)

There is a blur there though because high surprise, high entropy, does not necessarily lead to “change in paradigm.” In the 1970s when the hippies were expanding their minds, arguably they got a tremendous amount of new information in a short amount of time. Individuals could experience high entropy without large immediate effect in an institution.

High surprise of new critique might lead to change on a smaller scale - the specific, actionable criticism - if trust is high enough and the surprise is at the sweet spot. Not dull, not too far out, just right.

Maybe the presence of the “paradigm” critique is a sign that EA as a movement has reached a certain size. Is it the “storming” phase?

Nonprofit granting agencies deal with the problem all the time though and so do social workers as individuals and as a profession. What they noticed is that the thoughts in the minds of the agency members/board members aren’t the main factor in success or failure of a project. What matters (they decided) is that the board interfaces with a knowledgeable group of members of the target community who can explain what they think they need, and the board can grasp their logic enough and do agree with it enough to provide the assistance requested, and then maintain the relationship in which the board receives feedback and corrects course if necessary. Robots could do it; each member does not need radical consciousness. What they do need is consistency, cooperation, and to perform actions that constitute respect in the target community.

Yesterday I was fortunate to hear a presentation on the role of patient-centered care in SUD treatment. These particular folks want to get away from the top-down, individualistic, paternalistic etc psychiatry and get better results, and intend to do it by collaborating to present choices to patients, rather than trapping them. I’ve seen this work. The weight of the Profession of Medicine can be too great onto a patient. This is one element of the “new paradigm” those APA presentations want; individual docs (small groups) are out ahead of the conference on this.

When a large number of individual docs are practicing with this philosophy, it may be further represented in changes in APA panels. So like Scott wrote, specific criticism incidences can accumulate in one direction. I think it doesn’t have to accumulate in one direction though.

And in this case there already is a suggested new paradigm (patient-centered care) which for a wide variety of reasons not everyone is on board with (it can conflict with health insurance billing apparently.) So the docs nodding at the conference are not necessarily nodding due to low entropy. Some are nodding in order to conceal their actual changes in their practices, perhaps, because they do not want to be a lightning rod.

Using the esketamine question will lead to a temporarily higher entropy. Change? Maybe on the individual level.

I am willing to assume there are some EA folks who do the type of grantee-centered practice this whole thing is converging on. They may have been doing it from the beginning. They may see the “paradigm!” conversation as not something they need to engage with. Liking criticism too much might be an impediment to confident grantee-centered practice. Scott’s right about that also.

As for me arguing the inverse of something. I liked the “Right is the New Left” piece. It reminds me that I pay for this partly to reward his past efforts, which are new to me. Somehow I ended up in the epilogue of Glen Weyl’s “Radical Markets,” and I found myself deeply hoping that centrally-planned surveillance economy is not the direction EA goes. Patient-centered care lets the target be responsible for the data gap between what they ask for and what they need. Multiple interactions allow for corrections in this.

I almost didn’t post last night. I said to myself “you are too tired, this doesn’t make enough sense, don’t clutter up the board.” But I posted anyway and got this great response from you with a lot of good theory. I need to post this before my window of time closes. Thanks for responding.

Expand full comment

Part II of the APA conference topics is a great example, thank you. Maybe it boils down to criticism that requires me to change my behavior versus not.

It is easy to blame capitalism/racism/the system, and no one needs to change anything because capitalism will stay at least for another 50 years. So, we have a wonderful scapegoat. It's not too different from blaming politicians.

In contrast, if anything requires individual people to change something, you'll face resistance. The no-show fee is a great example - even a bit related to capitalism.

My personal preference: I love unfounded criticism, because I can easily rebut it. But if other people criticize me rightly, then I have a problem 😊

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

(Original author of the Anti-Politics Machine review here -- writing anonymously to keep anonymity, but it seems fair to respond since Scott commented on it directly.)

EDIT: as Ivo points out below, I read this article somewhat defensively and I no longer think this comment is useful. See my reply to this comment labeled “Update:” for less defensive thoughts.

Original comment:

I have a lot of thoughts on this that I do not have time to write up properly, but I do think you’re kind of missing the point of this kind of critique. “The Anti-Politics Machine” is standard reading in grad-level development economics (I’ve now had it assigned twice for courses) -- not because we all believe “development economics is bad” or “to figure out how to respond to the critiques” but because fighting poverty / disease is really hard and understanding ways people have failed in the past is necessary to avoid the same mistakes in the future. So we’re aware of the skulls, but it still takes active effort to avoid them and regularly people don’t. My review gave a handful of ideas to change systems based on this critique, but in a much more fundamental way these critiques shape and change the many individual choices it takes to run an EA or development-style intervention.

RCTs are a hugely powerful tool for studying charitable interventions, for all the reasons you already know. But when you first get started, it’s really easy to mistake “the results of an RCT” for “the entire relevant truth”, which is the sort of mistake that can massively ruin lives (or waste hundreds of millions of dollars) if you have the power to make decisions but not the experience to know how to interpret the relevant evidence. I wrote the review not to talk people out of EA (I like EA and am involved in an RCT I think will really help add to our knowledge of how to do good!) but because I think being aware of this kind of shortcoming and when to look out for it is necessary to put the results of RCTs in context and use them in a way that’s more responsible than either “just go off vibes” or “use only numerical quantitative information and nothing else”.

Expand full comment

To say this more clearly — I think the criticisms of EA you’re writing about *already are* the sorts of critique you want, but you’re a little too removed from development work to see how they’re operationalized.

Expand full comment

While I haven’t read The Anti-Politics Machine, what I gleaned from your review reminded me of the stories told about war and refugees in The Good American.

It seems to be a consistent factor in international aid/politics that limited information causes even well-intentioned actors to make terrible errors.

I have little idea how to turn this general critique into a specific one, but one place to start might be simply interviewing the people one is trying to help.

The best specific answer I can offer is simply to reject any policy/law/proposal/aid attempting to help people which has not actually interviewed dozens to hundreds of the people it is proposing to help.

Expand full comment

I like that idea. The only thing I'd add would be a requirement that the people interviewed be selected through a random process, or otherwise through a process guided by statistics to be as representative as possible, like you were running a survey and wanted to do a good job instead of just manufacturing a preferred answer.

Expand full comment

This is a really crappy idea. People quite often don't have any sense for what kind of policies will help them. And further, they often have the wrong sense for what policies will help them. Socialism is one big broad example. Interview people about it and they will love it, but implement it for a few years and it'll destroy their societies.

Expand full comment

There is a distinct difference between interviewing the people you are trying to help so you can gain a better context of what their problems are and simply enacting whatever solution they think is best.

People tend to have a pretty good idea of what their immediate problems are. Solutions, on the other hand, are much more difficult.

Socialism is actually a rather good example of this problem. Ask a Russian peasant in 1910 what their problem is, and you’ll get a pretty honest and fair answer: they are being extorted by a variety of feudal lords and the state as a whole, and as a result are constantly on the brink of starvation. The Bolsheviks’ solution was forcibly industrializing Russia against the will of many peasants, brutally killing millions.

If you misidentify the problem, it doesn’t matter what solution you propose. You will be wrong. When asking Americans what their most pressing issue is at the moment, most answer inflation. That’s a useful metric even if I put precisely zero faith in the ability of most Americans to articulate the correct response to inflation (increase the federal funds rate to “cool down” the economy).

I really do recommend The Good American for examples of how this works in the international humanitarian framework.

Expand full comment

I doubt the Russian peasant would tell you that. When you ask people what's going wrong in their lives, they all basically tell you some version of 1. the rent is too damn high, 2. my back hurts, 3. my son is a listless deadbeat, 4. I have too much to do.

Alright, policy man, take it from there.

Expand full comment

https://www.pewresearch.org/politics/2021/04/15/americans-views-of-the-problems-facing-the-nation/

It’s possible to use more than half of a brain cell when asking questions.

Expand full comment

Upvote

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

For the record, it seems what the Russian peasants actually told you is that they needed more land. "Redistribute the land" was their leading popular political rallying cry. "Peace, land, bread!" promised Lenin.

But it turns out there wasn't some vast reserve of empty arable land that the peasants seemed to imagine existing somewhere, if only the assholes in charge would distribute it to them. What instead needed to happen was a large increase in agricultural productivity, which meant casting aside certain time-honored and well-beloved methods of agriculture that worked OK in the past when there WERE vast reserves of land available, but were currently counter-productive even by pre-industrial standards.

Stolypin tried to achieve this with a system of gradual privatization and modernization, which might have worked in time, but time wasn't available to him or the Czar. So it came down to the Reds to push it through, over peasant objections.

Expand full comment

You don’t ask them that. You find out a bunch of specifics and that gives you the data to make hypotheses

Expand full comment

“If I had asked people what they wanted, they would have said faster horses.”

-- Henry Ford (maybe)

Expand full comment

As I read it, Scott is not addressing the critique in the book or the review. That was just the thing that triggered thoughts that resulted in this essay. What he’s addressing are more vague and general critiques and a tendency to like critique too much.

Expand full comment

You’re right, I think I initially read the post somewhat defensively. Will add a note.

Expand full comment

Update: after reading Ivo’s comment below, I reread Scott’s argument and think my initial reading was somewhat defensive.

I think part of the problem here is that a lot of general criticisms are based on this sort of specific failing, but get expressed in general terms because you can’t expect a general audience to be familiar with the specifics of 1970s World Bank initiatives in Lesotho. So in some sense I think my review was motivated by a less fleshed out version of Scott’s take here — a desire for people to know an example of the sort of specific failure the general critiques (at least those coming from inside dev Econ) have in mind.

From the inside, a lot of these critiques come with a lot of context (eg examples of what we mean by problems with individualism in development or the need for taking cultural elements into account) that are well understood by the people making the claims but hard to communicate in a forum post (“read these eleven books” is not a good communication method). So I think there are two conversations going on — people with field-specific expertise talking to one another in ways that are clear to them, and outsiders (EAs without firsthand experience in the dev field) trying to make sense of them without the assumed background context. (A lot of these arguments seemed dumb to me until I started taking grad level development courses and built up more of the assumed background knowledge.) I’m not sure what the solution here is, because it seems like making these arguments in the way Scott is asking for (so that outsiders have all the context necessary to know what’s being asked for / critiqued) would extend these from forum posts to several-hundred-page technical books.

Expand full comment

If you have to read 11 books to understand what is being said, then you might be in a circle jerk

Expand full comment
founding

LOL

Expand full comment

> From the inside, a lot of these critiques come with a lot of context (eg examples of what we mean by problems with individualism in development or the need for taking cultural elements into account) that are well understood by the people making the claims but hard to communicate in a forum post (“read these eleven books” is not a good communication method).

This resonated a lot with me personally regarding the ‘be aware post’ I wrote, though from a very different working context. I ended up writing a readable summary of common EA attentional blindspots – the accumulation of three intensive years of starting and running EA community-building projects, reading papers on psychology studies in my spare time, introspecting and reflecting, sketching out flow diagrams and frameworks, and doing a ridiculous amount of note-taking on new hypotheses and how those could be falsified or how evidence could be interpreted differently.

And the reason I did not end up publishing eleven posts about this is that people would get overwhelmed by the specific but abstract claims about human cognition and motivation, not check the core psychology studies and reviews I recommended, and not pick up on the complicated analogies I constructed.

I agree with Scott here that a gradual bottom-up approach of reporting on specific observations of EA initiatives and drawing patterns between them would have been best. There is a tendency that I notice in myself here to want to come up with broad generalising frameworks that can explain or map onto everything that is going on.

Yet, there was no way in my spare time that I could have presented all the nitty-gritty qualitative observations and interpretations I had noted down about various groups and individuals in EA/rationality in a way that would not have bored/confused/offended readers.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Read your post btw! I had not heard about this case of World Bank economic analysts etc misinterpreting what was going on on the ground. I appreciate the crisp concrete explanations of what was off about their research methodology/mindsets and lack of ethnographic and local geographic understandings.

I gestured at some similar concerns and conclusions here: https://www.lesswrong.com/posts/Aq4KNxKscywt3yXqk/some-blindspots-in-rationality-and-effective-altruism?commentId=vsfcirBjN6or9CfqF

But I obviously totally lack your professional experience in non-profit development work in regions of Sub-Saharan Africa.

I thought your idea of inviting anthropologists over to independently study how staff at EA orgs go about their work was cool (though I’d guess staff would not be motivated enough to spearhead such an initiative internally, nor have the time or patience for careful back-and-forth conversations with the anthropologists to interpret their findings, considerations and conclusions roughly as meant).

Expand full comment

Thank you! :) I appreciated yours as well.

Just to avoid giving the wrong impression (especially anonymously!) — I have a small amount of development experience, all from the academic side and not as the senior researcher on any projects. So I’m speaking out of current grad school experience / conversations with people with many years of experience, but don’t want to accidentally imply that I’ve been working in the field for decades or anything like that!

Expand full comment

Got it!

Expand full comment
author

I agree with this and appreciate it. I was more annoyed that a lot of commenters seemed to be taking an "EA BTFO" perspective whereas I think that EA tries its best to understand and absorb those kinds of criticisms.

Sorry if my using that review as a starting sentence made it sound like I was criticizing the book or your review of it.

Expand full comment

Thanks! And that’s a fair reaction. I have some frustrations with the comment thread convo as well that I will probably turn into an EA forum post once anonymity is over

Expand full comment

It's easy to talk about changing the paradigm, because that's not a lever you have. In some organizations, criticism gives you status by giving the appearances of being intellectually courageous and ahead of the curve. Some dogs bark loud on a leash.

Start talking about decisions people actually make and you see what they hold dear.

But you should talk about the decisions people actually make, because that's what they can change. No metric in the world matters unless it's helping you make a decision. Horses were an awful way to fight in WWI, what with machine guns and artillery and all, but for a time they alone filled critical recon and maneuver functions and so the cavalry charged again. Command knew what the casualties would be. They hated it. But they (felt that) they didn't have a decision to make; they needed that intel, they needed those maneuvers. So they just accepted the casualties as the cost of business and moved on. It was useless to talk about a war without horses until cars and trucks and tanks could actually fill all those roles, which is why horses were used again in combat in WWII ... and are still occasionally used today.

Expand full comment

Brings to mind climate change. Never mind whether we can actually halt climate change by controlling CO2. We don’t have the tools yet to both have energy and control CO2 at the same time. If the people pushing this thought they’d be the ones freezing in the dark, we’d have less utopian talk about our control over nature.

Expand full comment

New renewables are now cheaper per megawatt hour than fossil fuels. So it's not a technology problem. The problem is that rolling out a replacement of the entire global energy system is a logistical challenge bigger than a world war, and needs to deal with complex conflicting incentives.

Expand full comment

Only under very limiting assumptions. Such as not worrying about variability (you can't use renewables for baseline load, which means every megawatt hour of solar needs to be backed with a megawatt hour of gas) or the fact that large amounts of renewables dramatically destabilizes the grid (I've seen reports of instability occurring at as low as 20%), and the fact that nameplate ratings aren't actually accurate in the slightest for renewables. And the massive subsidies involved (both by the local governments and by China which massively subsidizes the production of solar panels, mostly in an effort to dump them and kill competition).

Expand full comment

Should you reverse any criticism you hear?

Expand full comment
founding

Yes!

At least you should _consider_ the 'reversal' as a possibility :)

Expand full comment

Scott: "Not because EA is bad at taking criticism. The opposite: they like it too much."

Reminds me of an old joke:

"Masochist: Beat me.

Sadist: No ..."

Expand full comment
founding

LOL

Expand full comment

Not to go off topic, but this does suggest an alternate history where Mercury is closer to Venus and nobody discovers relativity for another fifty years.

Expand full comment

I was more optimistic when I decoded EA as Electronic Arts.

Expand full comment

The Michelson-Morley* experiment from the 1880s also showed there were problems with classical Newtonian dynamics (and there was the problem of the incompatibility between Maxwell's equations and Newton's paradigm on the theoretical side).

This would have been enough for at least special relativity, and once you get special relativity, general relativity is the logical next step in your investigations. I don't think the "no Mercury anomaly" timeline changes by much.

*The Michelson-Morley experiment set out to measure variations in the speed of light using one of the coolest measuring tools ever, an interferometer. They ran into a problem because the speed of light did not change when the interferometer was moving.

Expand full comment

In order to rotate their interferometer, they floated their experiment on a vat of mercury. Which makes it even cooler.

Expand full comment

I was just recently watching SciShow's episode on Gregor Mendel's peas and someone in the comments brought up Kepler picking Mars as the orbit to study to support Copernican circular orbits around the sun.

https://www.youtube.com/watch?v=lpObkqMb2_0

Expand full comment

No.

The perihelion drift of Mercury was a neat problem that relativity solved, but it was not a major motivating factor for Einstein. It had an explanation within the existing paradigm: when there's a surprising perihelion drift, there's probably another planet out there. That's how we predicted Neptune's existence. Astronomers thought there was another planet closer to the sun than Mercury, which they named Vulcan.

There were other experimental problems too. The Michelson-Morley experiment was supposed to measure how fast the Earth was moving relative to the ether. It measured that the Earth was not moving relative to the ether at all. At the very least, the experiment should have been able to see Earth's orbital motion, which points in a different direction at different times of the year.

This isn't even the worst prediction of the old paradigm*: the Blazing Sky Paradox / Olbers' Paradox. The universe was thought to be infinitely large and infinitely old and that matter is approximately uniformly distributed at the largest scales (Copernican Principle). Any line of sight should eventually hit a star. Work out the math and the entire sky should be as bright as a sun all the time. This contradicts our observation that the sky is dark at night. This paradox was eventually resolved by accepting that the age of the universe is finite, as described by Lemaitre's and Hubble's Big Bang Theory.

If we read what Einstein wrote, none of these failed predictions actually motivated Einstein to propose relativity. He instead cared more about questions like: What would it be like to chase a light wave? The electric and magnetic fields wouldn't be changing, so they shouldn't be creating each other, so the light wave wouldn't exist. That's ridiculous. So we'd better completely change our notions of space and time to make sure that this can't happen. Einstein's arguments actually are this audacious.

Einstein worked primarily through thought experiments. He would find experimental results afterwards to make his arguments more persuasive to other physicists. Even then, explaining a few obscure existing anomalies wasn't enough to convince most physicists to change their notions of space & time. He had to make new predictions. Which he did: the path of light going by the sun is bent by its gravity. Eddington's expedition to see a solar eclipse confirmed this, and caused the paradigm shift to spread through the entire community.

* I hesitate to call this Newtonian mechanics because Newton believed in the Biblical creation story.

(Easily findable, not the best) Sources:

https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)

https://en.wikipedia.org/wiki/Olbers%27_paradox

https://www.quora.com/How-did-Einsteins-chasing-a-beam-of-light-thought-experiment-contain-the-germ-of-the-special-relativity-theory

https://en.wikipedia.org/wiki/Eddington_experiment

Expand full comment

These are great replies. I should be wrong like this more often.

Expand full comment

That's right about the motivations of Einstein, and thus as an answer about the hypothetical, but it's wrong about the response of the scientific community. His calculation of the precession of Mercury is what convinced people, not his new predictions and definitely not Eddington's expedition.

The idea that new predictions were important is a fabrication of history due to Popperians claiming that's how science should work.
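For concreteness, the anomalous precession under discussion can be reproduced from the standard general-relativistic formula for perihelion advance per orbit, Δφ = 6πGM/(c²a(1−e²)). This is a back-of-the-envelope sketch with rounded constants, not a claim about what Einstein actually computed:

```python
import math

# Perihelion advance per orbit from general relativity.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
c = 2.998e8     # speed of light, m/s
a = 5.791e10    # Mercury's semi-major axis, m
e = 0.2056      # Mercury's orbital eccentricity

dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / 87.969                    # Mercury's period is ~88 days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43, matching the unexplained residual
```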

Expand full comment
Jul 27, 2022·edited Jul 27, 2022

According to Kuhn, Newtonian mechanics never actually experienced a crisis; no one seriously questioned its validity just because it failed to predict the movements of Mercury. Instead, the anomalies that paved the way for relativity were related to the aether theory; e.g. the difficulty of integrating it with Maxwell's equations and the repeated failures to experimentally verify the existence of drag.

Expand full comment

1. Seems to me a large part of the reason specific criticism stings more than general paradigmatic criticism is that it effectively pushes certain members of a community away from the community. That could generate more anger/sympathy.

2. The development of Lagrangian mechanics seems to have come in part from a long-term paradigmatic push to incorporate the principle of least action?

3. Asian countries seem to be more collectivistic than western nations and they do enjoy pretty rapid growth?

Expand full comment

Rapid growth... but not as rich as the most individualistic Western countries, and not the nicest places to live. China is vastly poorer per capita than most of the West and a pretty crappy place to live. And there's no reason to expect they'll be able to catch up to America or Northern Europe.

Expand full comment

I meant more like Asian democracies. They also had much shorter time for development compared to established western democracies

Expand full comment

Japan and Korea had very rapid catch up growth but it is slow today.

Expand full comment

> The development of Lagrangian mechanics seem to come in part from a long term paradigmatic push to incorporate the idea of principle of least action?

(In the spirit of the post and the comments, I shall preemptively criticize myself, to defang mutterings about my ignorance.) Since I am less technically knowledgeable than the average reader, and I don't know how this connects to Scott's discussion of physics, can you please elaborate?

Expand full comment

I was confused by this too. Lagrangian mechanics aren't a paradigm shift, but rather a restatement of classical mechanics in a way that makes it easier to work with non-Cartesian coordinate systems (the classic example is a spinning top moving on a 2D plane, described by the x and y coordinates of the contact point and the pitch and yaw angles of the top). I guess you could argue that by emphasizing the principle of least action and/or non-Cartesian coordinate systems, Lagrangian mechanics prepared the way for the paradigm shifts that came later?

Expand full comment

Lagrangian mechanics is a major change to the way science is understood and done, but is not easily described as a Kuhnian paradigm shift. Kuhn's philosophy describes some, but not all, scientific revolutions.

Instead, if you trace the intellectual history of this revolution, it follows a much more philosophical path: The best of all possible worlds -> Can we write physics as an optimization problem? -> Least action principle -> Lagrangian mechanics.

Milo Prince seems to be arguing that some major developments in science do come from consistent philosophical arguments, instead of from an accumulation of problems in the details.

I would add that this path is not just criticism. The people arguing that this world is an optimum in some sense had to do the work of rewriting classical mechanics themselves. They couldn't just criticize the existing experts and institutions and ask them to fix these problems. You need to offer an alternative paradigm if you hope to replace the existing one.
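As a toy illustration of the least-action principle this lineage centers on: for a free particle, the straight-line path between two points has lower discretized action than any perturbed path with the same endpoints. This is a minimal numerical sketch of my own, not anything from the thread:

```python
import math

def action(path, dt, m=1.0):
    """Discretized action S = sum of (1/2) m v^2 dt for a free particle."""
    return sum(0.5 * m * ((b - a) / dt) ** 2 * dt for a, b in zip(path, path[1:]))

n = 100
dt = 1.0 / n
t = [i * dt for i in range(n + 1)]
straight = t[:]                                        # uniform motion from 0 to 1
wiggly = [x + 0.1 * math.sin(math.pi * x) for x in t]  # same endpoints, perturbed

# Hamilton's principle: the true (unperturbed) path minimizes the action.
print(action(straight, dt) < action(wiggly, dt))  # True
```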

Expand full comment

Scott made the point, using Newtonian gravity, that large paradigmatic shifts come from small anomalies rather than being driven by a consistent ideological push, such as “Newtonian gravity isn’t simple enough...” I meant for the development of Lagrangian mechanics to stand as a counterexample.

Expand full comment

A bit irrelevant and anecdotal, but I've always found the "Asian nations are more collectivist" thing sort of silly in light of what friends who live and work in Asia tell me. For one, I hear constant complaints about the deference to authority, which is probably the thing that is most pan-Asian within the "collectivist" concept, but is that really a meaningful concept of collectivism on its own? Anecdotally, westerners seem to think it makes people act stupid, but maybe this is blinkered western thinking or some such.

Specifically I've heard a lot of anecdotes about how little social trust there is in modern China (presumably post cultural revolution?). It includes normal things you'd expect like massive cheating on exams, nepotism, etc., but also stuff westerners would never conceive of like throwing trash in animal exhibits at zoos. When I visited Japan I kept seeing Chinese tourists get yelled at for not respecting public spaces according to the mostly unspoken rules.

Even looking at Japan, I would say they have unusually high levels of concern with decorum and sanctity compared to western societies, but lots of reasons to view Japan as a pretty atomized society.

Anyway, I think the thing that matters for economic development here, in the sense of Fukuyama's mystical Denmark, is just having high social trust, and I'm not sure how common that is in Asian countries.

> Specifically I've heard a lot of anecdotes about how little social trust there is in modern China (presumably post cultural revolution?). It includes normal things you'd expect like massive cheating on exams, nepotism, etc., but also stuff westerners would never conceive of like throwing trash in animal exhibits at zoos.

I also lean toward the theory that low social trust in China is a result of the revolution (not necessarily the cultural revolution; I'd be willing to blame communism generally), but I can't really back that up in any way other than the observation that it was a communist goal to dismantle existing social structure. "Kudos" to them!

But I don't see how throwing stuff at animals in zoos is an example of low social trust. If you go to a Chinese zoo, there will be stuff provided in the exhibits for you to throw at the animals. That's just what you do in zoos. What does it have to do with the social fabric of society?

If Chinese tourists are reduced to throwing trash instead of wood chips in Japanese zoos, I would say the reason is that the zoos aren't doing a good job refilling the wood chips.

founding

Asian countries get a generation or two of rapid growth when they decide to adopt Western industrial-market economic norms and pick all the low-hanging fruit, but I don't think there's a long-term advantage. In the 1980s, Everybody Knew(tm) that Japan's uber-dynamic economy was going to lead to their Ruling The World; what happened was rather different. Other Asian nations went through or are going through the same thing on a different timeline.

I don't know much about the organizational scale, but at the individual social level, a preemptive apology -- even one that's fairly vague and open-ended -- is often the best shield against criticism. If I greet guests with "sorry the house is such a mess" it's rarely really an apology, doesn't communicate genuine guilt or a desire to change, but it makes it socially difficult for anyone to say anything regardless of how messy my house is, because making a specific criticism after a general apology makes the other person look rude. (Note: I understand *many* people feel genuine guilt when their house is anything but spotless, and am not asserting that everyone is like me).

From your description, this feels like the same behavior scaled up. "Oh, I'm so sorry we're terrible in every way" makes it much more difficult for someone to say "you are failing in this comparatively minor but very specific way".

Jul 20, 2022·edited Jul 20, 2022

Or even just a regular apology. Tends to work amazingly well in professional contexts IME. Someone is furious something didn't get done. "I am sorry, I prioritized these other things, and clearly based on how upset you are that was a mistake." Tends to both immediately defuse the situation and make them look like a jerk.

So many people tend to go the wrong way of trying to justify or explain their decisions, trying to wiggle out from under the criticism. Just say "yes, that is valid criticism, I am sorry, I will do better". It is like conversational jujitsu. Your attacker has nowhere to go and is generally still left with a surplus of anger that makes them look silly.

Difficult skill, but this does tend to work.

founding

Yup!

IMO it would be quite rude to comment on one's host's house being messy, preemptive apology or no. Still, you make an interesting point. It also reminds me of (1) it's hard to criticize people with depression/low self-esteem because it gets absorbed in a "shield" of "I'm terrible in every way", and (2) I get annoyed if someone cuts me off in traffic or something, but if they give a little hand wave that totally dissipates (hope this works on other people too...)

Jul 20, 2022·edited Jul 20, 2022

I have some issues with the whole EA movement, but this is a nice piece that gets at some broader truths.

Best work yet! I laughed so hard reading this I woke my wife up. Also, it really crystallized what I dislike about all those vague complaints and explained why people are willing to jump on board things that seem to say really awful things about them. This is the kind of essay that makes it worth my subscription!

Wait... what? Laughed? I thought I understood the whole essay, and I didn't laugh a single time. In fact, I read it with a somewhat grim concern for many of the orgs I'm involved in.

"But the specific claim at the end of Part I above - that the people in power prefer specific to paradigmatic criticism, because it’s less challenging - seems to me the exact opposite of the truth."

The 'Sadly, Porn' analysis of this would be that the *purpose* of paradigmatic criticism is to defend against having to act on specific criticism. (The false meta-criticism that the "people in power prefer specific criticism" is the repressed true meta-criticism returning as its inversion, but localized on "the people in power" (i.e. not *you*) to further defend against action.) I don't speak psychoanalysis so my formulation is probably wrong, but you can see the parallels.

PS: Milo Prince's comment - "The development of Lagrangian mechanics seem to come in part from a long term paradigmatic push to incorporate the idea of principle of least action?"- adds an additional level of mindblow synchronicity to the idea of paradigmatic criticisms being a defense against action.

Jul 21, 2022·edited Jul 21, 2022

Yes, this sounds right. Ever since the "sadly, porn" review I've been trying to get into Lacan. There's something more to be said about criticism itself being an effort to engage with the object-signifier of an institution (for Lacan, "the university") in the mode of "student," i.e. "associate myself with [criticism] to gain access to the phallus this [self-critical] organization claims to have". This he would contrast with the mode of the 'hysteric' who genuinely does want to [self-criticize] as part of an effort to reduce suffering.

I may try to expand on this in a separate comment, but I'm not much of a writer and may give up before posting it.

This sounds like basic business self-help: get specific. Management studies contains a lot of nonsense, but one thing that everyone says, and which seems to me to be true, is that you have to be specific. Things must be planned with total specificity or they don't get done; meetings must end with the assignment of specific tasks; job responsibilities must be explicitly listed. The corollary is: if you have a criticism but don't have an action that can be taken to remedy the problem, then the criticism is (next to) useless.

"The corollary is: if you have a criticism but don't have an action that can be taken to remedy the problem, then the criticism is (next to) useless."

Usually true, but not quite always. I'm reminded that one of the more helpful innovations in the sciences over the last few centuries was to let _fragments_ of an advance be published as journal papers.

In the case of a criticism: In software, bug reports are useful, even if they don't come with suggestions for a solution. The team that owns the software can then take the bug description, brainstorm possible solutions or assign to particular developers to fix, and at _that_ point (reasonably) specific plausible actions are assigned to specific people.

Yeah, there are definitely exceptions. The bug reporting example is a good one, though. For it to be effective, you have to construct a proper bug reporting channel, with specific people assigned internally to monitoring it and responsibility for resolving bugs clearly apportioned. Those mechanisms aren't provided by the complainer, but they do have to be there, or the complaint/report is likely to produce no effect.

But yes, as you say, there are bound to be some exceptions.

Many Thanks!

"For it to be effective, you have to construct a proper bug reporting channel, with specific people assigned internally to monitoring it and responsibility for resolving bugs clearly apportioned. Those mechanisms aren't provided by the complainer, but they do have to be there, or the complaint/report is likely to produce no effect."

Very much agreed.

Brilliant post - amazing it came from the same person who thought it was a good idea to hand tens of thousands of dollars to woke loonies like Alice Evans

founding

I think maybe your model of Scott isn't very good as neither of the two things you mention are surprising to me.

Even "woke loonies" can be right about _some_ things!

How much sense does it make to think this way about a book written in 1990? The book’s criticism may seem vague and unactionable now, but isn’t that likely because the paradigm it targeted no longer exists?

founding

I'm pretty sure the 'old paradigm' is still the dominant one.

Jul 20, 2022·edited Jul 20, 2022

As a response to "why people in this area love criticism so much" more so than "why global critiques/narratives with clear political agendas are bad" aka "I don't like the woke":

"Effective" altruism means there is always going to be criticism about how "effective" it really is. It lends itself to min/maxing, because there is an optimization implied in the very first word. And really that's where the value is added here IMO.

So you have to decide whether you are really trying to be "effective" in an optimization sense, or just being altruistic, where anything positive is good.

I learned a new word ☺️

Great post! I think you should share it on the EA forum so that more EAs will read it.

Or do you prefer if others cross-post your EA-related posts to the EA forum? (Or do you prefer that no one does this?)

author

I would prefer this one in particular not get crossposted - I feel bad about using EA as an example of this more general point, and I think the EA forum is for more specific and less culture-war-y posts.

Ok, makes sense. (Glad I asked.) :)

Wait, I thought you were praising EA here! You agreed with Zvi that they are only seeking criticism of details within their paradigm, but you argued that this is the correct move!

Someone is afraid of criticism

I love you inordinately for the tricyclics paragraph

author

Just realized I couldn't decide whether to use levothyroxine or liothyronine as my example and wrote "levothyronine" and nobody caught that in the first three hours, my readers are getting soft!

founding

Hey! Some of us only have a _passable_ knowledge of pharmacology (and mostly from reading you) :)

Do "people in power" prefer "paradigm shifts" versus "specific issues"? The answer might surprise you ... it depends!! On the people and the issues and the paradigm!! WoW!!!

I think "the EA community" should actually do something and then these debates would be in a little more focus.

Rationalism Critics: "Rationalism doesn't work. Stop it."

Rationalists: "A basic tenet of rationalism is that we'll consider your idea. Should we stop doing rationalism? In this essay-"

Rationalism Critics: "No you're doing it wrong."

founding

"Have you tried having incorrect beliefs about the world... but on purpose?"

Sometimes, the obviously dumb thing is actually just dumb.

You're supposed to react emotionally to certain words, phrases, symbols, and ideas, and use that emotional response to determine all your worldviews, obviously.

This is definitely something people have considered; unfortunately, it's really hard to deliberately hold incorrect beliefs, regardless of the potential utility.

I believe the classic example of this being a common, pragmatic recommendation is in gun safety. Every decent gun safety instructor I have ever met teaches people to always treat a gun as if it were loaded. You never act like you have good epistemic certainty that it's unloaded unless you're staring at an empty, open chamber. And you never THINK 'I'm sure this is unloaded' without pushback, because then you'll carelessly neglect its threat.

People are bad at dealing with rare events (like accidental discharges) and with tiny fractions of doubt (99.5% certainty that the gun is unloaded rounds to 100%.) So the advice across an entire subculture is to deliberately underestimate your certainty to get optimal safety.

IMO, that counts as deliberately having wrong beliefs to compensate for cognitive biases.

But I don't *think* the gun is loaded. I understand that there is a small risk that I have a false belief that the gun is unloaded, that treating the gun like it is loaded is better practice, etc, and act accordingly. But I actually believe that the gun is unloaded.

Jul 20, 2022·edited Jul 20, 2022

Personally I think the gun is loaded. If you point it at me I will drop to the floor in fear and yell at you to move it away. Not a hypothetical, this has actually happened to me and lots of other people.

Sounds like you alieve (system I) that the gun is loaded, and instinctually react that way. But arguably you don't *believe* (system II) that the gun is loaded. E.g., you wouldn't feel surprised if you checked and found no bullets in it.

Correct

Great answer

Sometimes a movement that is based on solving everything using Facts and Logic fails -- again. What people here call rationalism is about the fifth or sixth or seventh iteration.

Suppose Yudkowsky criticised Ayn Rand's objectivism (which he did).

Would it be reasonable for an objectivist to reply "objectivism is all about objective truth, therefore any alternative you are putting forward is a falsehood, and there is no way you are getting me to believe a lie"?

Yeah. I mean Scott's touched on this so many times it feels weird to say "I'm still not sure he gets it" but there's a clear gulf between the stated goals of rationalism ("Determine truth and make good choices using whatever methods work") and the culture of rationalism. To be fair, this is true of any movement and I think the rationalists do a better job than most at noticing. But while I can see Scott writing a piece on "is there utility to letting people shame us into holding views we haven't been convinced of," I really can't imagine him ever updating his personal beliefs to think he should do so.

I picked that example because it's extreme enough that I think it's difficult to disagree with, but in general I think there are more reasonable positions that the rationalism movement would have strong biases against. IMO they give really insufficient weight to intangibles and unknown unknowns, for instance.

Yes, these other people are pushing their agenda/viewpoint: race, communism, partiality; but aren't you dismissing them out of hand?

I think it's obvious that EA takes a capitalistic worldview; it's basically squaring the circle of "how can I make money and still be virtuous without dedicating my life to doing what I know is good?" Because I can spend money efficiently to have others do that via excess value.

Which is fine but then you should have to deal with the stock criticisms of capitalism, like its colonial racist foundations etc.

A lot of EAs work directly in policy, development, research, etc. If that's what you mean by "dedicating my life to doing what I know is good".

I'm unclear - why should anyone "have to deal with the stock criticisms of capitalism, like its colonial racist foundations etc." I see no merit or value in doing more of that - what do you see as the value?

This is disingenuous. Obviously you aren't unclear. Why not just say what you mean?

I don't understand their point of view and would like them to clarify. Is it that hard to understand me?

No, Axioms, disingenuous is acting like we need to be open to alternative viewpoints, and then just making wild, ideological assertions and treating it like fact with no justification. MT doesn't support a general openness to alternative perspectives, they agree wholeheartedly with the race/communism stuff.

My gut check on this would just be whether there is anything specific that the critique asks you to do differently, and if so, does that change fit the scale of the problem under discussion? For example, having seminars about how capitalism is the root of mental illness, where the conclusion is that you should feel bad for being part of a capitalist system, doesn't do much despite the scale of the critique. The same people who went to that seminar might try to ban a seminar that argued that you shouldn't charge poor clients fees or any money at all if possible. Seminar two doesn't need to mention capitalism to reach its core suggestion, which would be way more impactful than idle thinking about capitalism.

In terms of the stock criticism of capitalism: at the extreme end this is just the genetic fallacy. If I were to say that organic farms are racist because the first ones were built in concentration camps, you would obviously discard this as an irrelevant historical fact. The relevant critique is in the vein of the Anti-Politics Machine, saying "because EA looks through a western capitalist lens and operates in a space recently defined by colonialism, it is prone to pushing western models of production and consumption into places where they don't fit." But you don't actually need to know much about the history here to make or understand that point, and to me the history should be a second level analysis that focuses on avoiding similar errors-- the primary level of conversation should be about understanding and fixing specific problems. If there are no specific problems, the critique never mattered to begin with.

>Which is fine but then you should have to deal with the stock criticisms of capitalism, like its colonial racist foundations etc.

I like how you expect us to take a sentence like this seriously

Speaking of the APA - or rather of a permutation of that acronym - I wonder if you've seen or have any plan to weigh in on the apparent efforts of the American Academy of Pediatrics to gag the "debate on a resolution calling for an independent and rigorous review of the evidence on treatment of youth gender dysphoria":

https://genderclinicnews.substack.com/p/gagging-the-debate

Apparently many there are not terribly enthused about any "criticism" of "the 'gender-affirming' treatment model."

Knowing good and evil was the seduction of the sorcerer. The sorcerer still seduces.

author

I thought it was a snake, not a sorcerer.

founding

Obviously the sorcerer was polymorphed!

If your point is that specific, well-researched criticism is harder than general, paradigmatic criticism, I agree. I think that's why you tend to see more of the latter, though much of it is low-quality.

If your point is that paradigmatic criticism (or this specific paradigmatic criticism) is without value, I strongly and specifically disagree.

I admittedly haven't read any of the other entries, but I would be happy to see Zvi win (at least some of the prize pool of) this contest. I briefly considered entering this contest, but was put off for the same reasons he expresses in his post.

To distill what he's trying to say: Imagine if the Catholic Church had an essay-writing contest asking to point out the Church's sins. But then, in the fine print, they strongly implied that they will be judging what is a sin based on the teachings of Jesus Christ, and that it would be judged by a select group of Cardinals. That would drive away anyone trying to point out cases where their interpretations of Jesus's teachings might be wrong, or where the teachings of Jesus don't work on a fundamental level.

This is the same deal. The criticism contest asks for criticism, but then implies that it's going to be judged within EA's interpretation of utilitarianism, thus pushing away any potential criticism of the fundamentals.

Could most places stand to be a bit more utilitarian? Sure! Most places could also stand to follow the teachings of Jesus a bit more closely. Those are both in the general vicinity of "good" in my book, if bounded by general common sense.

But both of them have problems, or at least diverge, if you take them to the extreme. You know this and wrote about it in a post of yours, which I think about a lot. [1] That's when you start getting things like Zvi describes, of non-vegans being treated "as non-serious (or even evil)".

Another red flag is EA focusing on "community building" as a core focus area. You can easily torture utilitarianism into justifying that: sure, you could research malaria cures yourself, or you could talk to ten undergrads and convince them to go into malaria research, and get ten times the probability of success!

Meanwhile everyone starts thinking you're a cult. [2] And they're not… totally wrong? EA isn't yet a cult, but is arguably becoming a religion, even more so than the way "every social movement is a religion". It's built on a core moral foundation (utilitarianism), does free distribution of holy books, [3][4] and has convinced itself that missionary work is of the utmost importance. (Seriously, please read [2].)

And what tends to happen to religions? They tend to start believing in their own importance a bit too much, sometimes at the expense of actual social good. They're at risk of being captured by people that are more interested in improving their social status than actually making the world a better place. They have a tendency towards purity spirals that take their morality farther and farther into Extremistan.

If (as Zvi suggests) we're at the point where people who might be able to work in AI safety or research a cure for malaria or whatever are being treated poorly because they eat chicken, then that's a red flag that EA is starting to fall into these traps.

When you attack a religion, you've got to attack its roots. Not "This person isn't following Jesus properly," but "There is no God and it's absurd to think that there is."

The problem is, this usually results in the destruction of the movement.

How can EA survive this? I think if it took a diminished view of its own importance, you could still salvage a lot from it.

Instead of convincing a large number of people to be good little EAs/utilitarians, have only a small number of core utilitarians bringing up potential cause areas for broader consideration. This is what GiveWell does, and it works pretty well. Their top recommended charities are hard to argue with, even if you don't buy into the overall utilitarian bent behind their work.

Instead of recruiting undergrads to EA as a whole, try to recruit them to explicitly work in specific cause areas you think they may be well-suited for and are understaffed.

To borrow from a different religion's sacred texts: the goal is to cut your enemy. The goal of EA should be to move the needle on these cause areas, not to move the needle on the acceptance of EA or utilitarianism more broadly.

…I guess I sort of ended up writing that contest entry in this comment.

[1]: https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/

[2]: https://forum.effectivealtruism.org/posts/xomFCNXwNBeXtLq53/bad-omens-in-current-community-building

[3]: https://80000hours.org/the-precipice/

[4]: https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations

> And what tends to happen to religions? They tend to start believing in their own importance a bit too much, sometimes at the expense of actual social good. They're at risk of being captured by people that are more interested in improving their social status than actually making the world a better place. They have a tendency towards purity spirals that take their morality farther and farther into Extremistan.

Which religious groups (in the narrow sense of 'religion') are more morally extreme now than they were in their first few generations?

Jul 20, 2022·edited Jul 20, 2022

Now is the wrong time frame because of religion's general loss of status and power. Instead, "which religious groups ever became more morally extreme than they were in their first few generations?"

And the answer is clearly at least Christianity (the inquisition) and Islam (holy wars), and arguably Buddhism (as practiced in Japan in various later eras).

I would point out that the form of Buddhism you're citing (True Pure Land Buddhism) is in fact LESS morally extreme (recitation of nembutsu is seen as the only attainable virtue, and the recitation of nembutsu 10,000 times is all that is required for salvation) and also engages in so many extreme doctrinal variations that it's arguably a heretical form of Buddhism.

...I say, as a follower of a Vajrayana tradition, which is often seen in the same light.

I wasn't citing True Pure Land specifically (as I'm only dimly aware of it as a distinct sect, and just have a general understanding that there were many sects at this time), but I do agree that various flavors of Japanese Buddhism seem to be seen as heretical outside Japan.

My apologies, in my experience it and Zen are the only forms of Japanese Buddhism people know.

I'll also comment that Zen is one of the few Japanese Buddhist sects that has wider acceptance outside of Japan because it strongly derives from Chan, a "legitimate" fundamentalist sect within the Mahayana tradition, which is the predominant one. Most other sects (Shingon, Nichiren, Pure Land) are seen as "heretical" because they have a strong influence from Vajrayana or "Esoteric Buddhism", which rejects some pretty fundamental ideas in the other schools like non-violence and celibacy being absolute requirements to attaining Nirvana (unsurprisingly, the places where Vajrayana caught on had strong warrior aristocracies who liked the idea of killing unrepentant bandits and wars of conquest by a "humane king" as part of the dharma).

True Pure Land is rather infamous because they took the idea of "Buddhist teachings being spread by the sword" to its logical endpoint and tried to establish a Buddhist theocracy, as well as rejecting all moral law beyond nembutsu and adherence to the teachings of the priesthood due to their doctrine that it was impossible to attain complete salvation in the current age.

> That's when you start getting things like Zvi describes, of non-vegans being treated "as non-serious (or even evil)".

I think this happens anywhere you get a critical mass of vegans.

Thank you, this is great, and I'm saving it.

Another red flag (for me at least) was EA causing Scott to publicly wonder whether he should be an EA worker rather than a doctor. Pretty much everyone agrees on the basic positive value of a well meaning doctor, and not hewing to this standard is a sign of a purity spiral. It's certainly the reason I never pointed my (doctor, good doer) brother at EA.

founding

Doctors are basically worthless on a global or historical scale. This is just an indisputable statistical truth, not a sign of a purity spiral. The impact of the average doctor is just never gonna be that big, even in the best-case scenario where they diligently save 1-10 lives every day.

Being a doctor is, like you say, basic. The point of EA is to ask if you can do better than basic.

Jul 20, 2022·edited Jul 20, 2022

The sign of a purity spiral isn't the argument you've made or asking undifferentiated people to consider it. The sign of a purity spiral is asking a newly minted doctor to switch.

It's not very important to the grand movement that this particular person come on board, but it does show a careless disregard for switching costs that's well associated with destructive purity spirals of the past. It's very French Republican Calendar.

"Disruptive" is a swear word in some circles, and I'm not just talking about old fashioned taxi folks. I'm talking about rationalist west coast academic tech folks.

founding

Sorry, that's ridiculous. You can't ask "undifferentiated people" anything! You're always asking specific people, because only real specific people read blogposts or essays. There's no such thing as "undifferentiated people".

So it's absurd to say a community is purity spiraling for "asking" a doctor to change careers just because he's read posts that make him consider it.

Jul 20, 2022·edited Jul 20, 2022

I thought this was clear from context, but I was wrong, so let me clarify: undifferentiated in the relative sense that they haven't invested 10 years of specialized effort.

Scott didn't read posts that made him consider changing. He wrote about visiting and speaking to EAs who told him in person that it would be better if he switched to being an EA.

Edit: It's fine to ask an 18 year old to be an EA because you think EA is best. It's blithe and callous to *tell* a useful specialist that they should do your thing instead. More than blithe, actually. Downright street preachery.

author

I didn't feel pressured by them. I think I wrote about how I met an EA career counselor who had previously been a doctor but switched to his current job once he realized it was higher impact, and that made me think about the topic. No preaching involved.

I also think that preaching would have been a... social faux pas... but that it's important to distinguish "social faux pas" from "objectively incorrect". If you are an atheist, it's a social faux pas to walk into a church and start telling them they're all wrong and dumb and they need to believe in evolution, but this isn't a strike against the truth of atheism, just against the social mores of that particular atheist.

Jul 21, 2022·edited Jul 21, 2022

Thank you, I misinterpreted and I'll update on their pressure level!

"Social faux pas" and "objectively incorrect" are different but they definitely do have a relationship when you're interacting with new people. There is typically a threshold of social faux pas where you'll likely quietly adjust your best guess of the chance that the speaker is offering you objectively correct information. I will keep using the example of street preachers, but to help illustrate the idea, this group also includes untrained people with heterodox science ideas who are certain enough to corner you and push those ideas without knowing or caring that that's a faux pas.

It's not the same as object level debate about the merits of EA (or the merits of debating the merits of EA), but I think it's worth pointing out as a side note that a lot of the dispute between the extremes of feeling for EA come down to some people's threshold being tripped and some not. This makes it particularly helpful to hear that the EA career counselor wasn't using pressure. I have my own interactions with EA, but at least I can wind back "writer I follow was pressured in what I consider an inappropriate way", which makes them seem less like the kind of seriously insistent group that are dangerous to hear out. Thanks.


Did he bring hard-to-replace skills to careers counselling? What about the wasted cost of his medical training?


Have you considered if the average doctor will likely do more good than the average EA?

I think the way many "rationally" approach "world saving" is that they estimate a 5% chance of doing really, really good globally, and the impact could be so good that they can't waste their time on local issues. But the estimate is pure air, and if it's really 0.001%, they essentially didn't do anything.
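The arithmetic behind this objection can be made concrete. A toy expected-value sketch in Python (every number here is invented purely for illustration, not a real estimate of any intervention):

```python
# Toy expected-value comparison (all numbers are invented for illustration).
# An intervention's expected impact is p(success) * impact-if-successful.

def expected_impact(p_success: float, impact_if_successful: float) -> float:
    return p_success * impact_if_successful

# A "world-saving" long-shot at the self-estimated 5% chance of success...
longshot_optimistic = expected_impact(0.05, 1_000_000)      # ~50,000
# ...the same long-shot if the honest probability is really 0.001%...
longshot_pessimistic = expected_impact(0.00001, 1_000_000)  # ~10
# ...and a modest local project that almost surely succeeds:
local_project = expected_impact(0.95, 10_000)               # ~9,500

# The ranking flips entirely on an estimate that is "pure air":
assert longshot_optimistic > local_project > longshot_pessimistic
```

The point of the sketch: when an expected-value case rests on a tiny probability of a huge payoff, the whole conclusion is hostage to that probability estimate.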


I think it's fine if the Catholic Church only wants criticism within the Christian framework. If you think the teachings of Jesus Christ are bad and wrong, you should not be trying to convince the Catholic Church to abandon them in favor of something better; you should be trying to convince people to abandon the Catholic Church.

On the other hand, even as a non-Catholic you can think, "The Church is an organization that does some good things and some bad things. Some of the bad things are caused by a fundamental difference in our values, but there are some cases where I think they are just making a mistake by their own lights. I will write an essay pointing out those cases."

author

I think it's fair to want advice on how best to do the thing you're doing, rather than to be told you should do a totally different thing.

Although I am not a perfect doctrinaire utilitarian, I'm pretty close and I feel like I have reached a point where I'm no longer interested in discussion about how even the most basic intuitions of utilitarianism are completely wrong - that feels close to something like "morality is dumb and you shouldn't care about it". While this is a philosophically coherent position, it almost feels more like an aesthetic/emotional choice to care about morality, to the point where I would be surprised if a logical argument could talk me out of it. Although obviously if there were a great argument that could talk me out of utilitarianism I would want to hear it, I feel like this is unlikely enough that it would be unfair to promise people a prize for coming up with good arguments in that direction, when I'm so unlikely to think an argument in that direction is good.


Not advocating (ie, it's perfectly fine to skip), but have you considered satisficing consequentialism?

https://www.princeton.edu/~ppettit/papers/1984/Satisficing%20Consequentialism.pdf

author

I'm not going to read that whole paper now, so sorry if I'm addressing a straw man, but if you asked me "would you rather cure poverty for one million people, or for one million and one people", I am going to say the one million and one people, and I feel like this is true even as numbers get very very high. Although satisficing consequentialism is a useful hack for avoiding some infinity paradoxes, it doesn't really fit how I actually think about ethics. "The ethical thing is to pile exactly five pebbles on top of each other, then stop" is incredibly consistent and paradox-avoid-y, but at some point you have to actually satisfy your moral intuitions.
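For what it's worth, the two decision rules being contrasted here can be stated in a few lines. A minimal sketch (the charity names and values are hypothetical, invented only to illustrate the distinction):

```python
# Two toy decision rules over the same hypothetical options
# (values: people lifted out of poverty; names invented for illustration).

options = {
    "charity_a": 1_000_000,
    "charity_b": 1_000_001,
    "charity_c": 400_000,
}

def maximizer_choice(opts: dict[str, int]) -> str:
    # Maximizing consequentialism: always take the strictly best option,
    # even when the margin is one person in a million.
    return max(opts, key=opts.get)

def satisficer_choices(opts: dict[str, int], good_enough: int) -> set[str]:
    # Satisficing consequentialism: anything clearing the "good enough"
    # bar is permissible; no further hair-splitting among those options.
    return {name for name, value in opts.items() if value >= good_enough}

assert maximizer_choice(options) == "charity_b"
assert satisficer_choices(options, good_enough=900_000) == {"charity_a", "charity_b"}
```

Scott's stated intuition tracks the first rule: the one-person margin still decides. The satisficer treats charity_a and charity_b as interchangeable once both clear the bar.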

Jul 24, 2022·edited Jul 24, 2022

No, I think that's a pretty fair man to address, and a pretty fair address. Satisficing consequentialism only makes sense as a system if it fits a sort of natural preference. I will point out that for a lot of people it's very much like https://astralcodexten.substack.com/p/i-will-not-eat-the-bugs .


I’m curious about the way you apologize to other EA thinkers. Your audience is big, and you really don’t want to be nasty. But the apologies signal to me that your thoughts are cut to a system of cordiality that always underpins conversation. I wonder if you had rougher, cleverer comments that you deleted for kindness? I hear you already express unease about easy and harmless conference talk, but its polite air still emerges in your own writing here. Maybe the Kuhnian anomalies we’re looking for can only show up on the fringes of the conversation, away from the padding of politeness.


I don't think of politeness as padding, but as a civilised human norm. Having said that, the clever rough stuff is often theatre to épater les bourgeois, whether that's Nietzsche, Wilde or Johnny Rotten.

author

I can't believe you would write such a dumb comment. Total trash. Next time think for two seconds before pressing the "post reply" button.

More seriously, I really hate people attacking me online. It makes me miserable. And their attacks tend to be ... false. Like if someone accuses me of being greedy, or writing something because of some specific sinister agenda, or something, I usually know why I wrote things and they're just wrong.

And this blog is read by ~50,000 people. If I say something mean about some normal person without a huge audience, this may be one of the worst things that ever happen to them, in the same way that the NYT saying mean things about me was one of the worst things that ever happened to me.

And all of these people are effective altruists trying to make the world a better place, who are additionally writing their honest criticisms of EA to make it better. I hate that "this person writes a well-intentioned article intended to improve the world" ---> "they get insulted and used as an example of badness on a blog read by 50,000 people and they're forever known as the person who got this wrong". I hate that I have to write posts like this at all. But readers love criticism, and some points can't be made without it. I think it's an okay part of a broader media ecosystem but I hate doing it and I stand by my apologies.


Sorry Scott, do you mean my comment or the one above? In emilio's defence I think I understand what s/he means, and wasn't trying to be mean... I agree with you that meanness is just, well... mean-spirited, and we should always try to be civil, especially online where the mean level of meanness may be even higher than in the mean streets of NYC or London, let alone the leafy avenues of Silicon Valley...


I am quite certain that Scott's first paragraph was purely ironic, and *not* an attack or even criticism.


You Americans with your *irony* "~}

founding

Have you and your people not yet imported enough of that from us? :)

author

Sorry, I was responding to Emilio.


Best Ending Ever


An example of a specific EA critique that I’ve found really seems to invite serious pushback, downvotes, etc. is the suggestion that more should be spent on Public Relations. Not sure if I’d be feeding the beast by adding my critique to the conversation though.


Do tell... any links?


The best I've got right now is this (see the comments in particular), though I don't feel like I managed to fully convey my thoughts: https://forum.effectivealtruism.org/posts/XtJ4GahdqaA4qxcBT/sam-bankman-fried-should-spend-usd100m-on-short-term


Thanks - interesting - as a communications professional I'm biased, but agree with you that a good reputation / positive brand image is worth a lot of money and can have a multiplier effect by leveraging support from other partners & actors... it's not 'just PR' it's how humans work!


Haha. I’d downvote that too


I'd love to hear your thoughts on why you'd be against that!


Because PR is the very definition of doing nothing. Well, nothing but trying to make yourself look good


You are correct in the sense that its *direct* value is zero. At the same time, its *indirect* value is vastly higher than I think almost anyone in EA gives it credit for. We need to keep in mind that true rationalists should *win,* and if our goal is doing the most good, we need to aim for that over whatever feels most morally virtuous. It’s not inconsistent for an EA to spend money on ads instead of malaria bednets if spending ad money ultimately gets you more funding for malaria bednets. Companies don’t really care what people think of them either, but they spend money on ads anyway because it increases profit. PR *feels* unsavory to EAs because it's emotionally equated with doing nothing, but in practice, it brings in far more profit (and thus money ultimately spent on good causes) than it costs.
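The claim in this comment is at bottom a multiplier argument, which a back-of-envelope sketch makes explicit (all figures are invented, and whether any real-world multiplier actually exceeds one is exactly the empirical question at stake in this thread):

```python
# Toy model of PR spend as an investment (all figures are invented).
# PR's direct value is zero, but if each ad dollar raises more than a
# dollar in new donations, the net money reaching bednets goes up.

def net_funds_for_bednets(budget: float, ad_spend: float, multiplier: float) -> float:
    donations_raised = ad_spend * multiplier
    return (budget - ad_spend) + donations_raised

no_pr = net_funds_for_bednets(budget=1_000_000, ad_spend=0, multiplier=3.0)
with_pr = net_funds_for_bednets(budget=1_000_000, ad_spend=100_000, multiplier=3.0)

assert no_pr == 1_000_000
assert with_pr == 1_200_000  # 100k of ads nets 200k extra for bednets

# With a multiplier below 1, the same spend is a pure loss:
assert net_funds_for_bednets(1_000_000, 100_000, 0.5) < 1_000_000
```

Everything hinges on the multiplier: above 1 the ad spend is an investment, below 1 it is exactly the "doing nothing" the parent comment describes.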


I’d like to see the data on whether “ad money gets you more funding” and whether that extra revenue gets spent on program activities rather than more ads. Or more administration


Making EA look good may or may not improve their ability to do things of value. It's an empirical question and not something that should be dismissed a priori.

author

Yeah, there is a lot of back-and-forth about this (and I do think EA is doing some PR, just small-scale and quietly). I think a lot of people are concerned that PR aimed at outsiders breaks some kind of accountability loop (because you can do bad things without other people noticing and calling you on it), and that because there's no clear outsider/insider distinction it risks bleeding into PR aimed at insiders, at which point you're just repeating your own propaganda rather than getting a truthful picture and making good decisions.


I’m confused. You’re saying that criticizing criticism of EA for not actually listening to “real” criticism against it is the expected (and wrong) move, but then for the rest of the essay you give excellent arguments for why most criticism that’s received well is generic and not the “real” critique that actually needs to be said! Am I misunderstanding, or are you coming to the same final conclusion as the original expected one?

author

I wasn't specifically intending to criticize EA in this way, just use it as an example. I actually think they're pretty good at taking all types of criticism!

But if I was, I think the argument I'm criticizing in Part 1 is "organizations prefer specific to paradigmatic criticism", and then the argument I'm asserting in later parts is "organizations prefer paradigmatic to specific criticism".


Ah, thanks for the clarification! :)


Love this - working in global health & development research, everyone (especially US/European elite) furiously agrees that the whole enterprise is fundamentally inequitable, post/neo-colonial, unconsciously biased, patriarchal and probably racist, and we should all be doing much better.

So that's fine - journals and pundits compete to signal their commitment to decolonising, reframing, and power-shifting, and we all nod along.

BUT, when someone tries to publish a paper unpicking why a particular trial or treatment programme doesn't live up to its vaunted claims (looking at you, deworming), or how Bayesian analysis shows that some hard-won data might be wrong, the knives come out sharpish... but that's how science works right?


For those unfamiliar with the reference to deworming:

https://twitter.com/KelseyTuoc/status/1549443361332793344


I for one believe we should restore sub saharan africa to its pre-colonial glory, slavery, warlords, genocides and all!


Paradigm, schmaradigm... Mercury is notoriously unreliable, so those hidebound Newtonians were also justified by nominative determinism


We welcome complete paradigm shifts as long as everyone's status remains the same.


I went to my first (and probably only) EA meeting by accident last month, because I know a guy who attends.

They should call that stuff AA. I am purposefully not expanding that acronym.


So far, I've assumed that many EA-aligned people welcome criticism because of a culture of (performative) openmindedness. I think the point this essay makes is better, though.

If criticism is vague enough, no one feels personally attacked – it's easier to nod along. You can feel productive and enlightened while changing nothing. I'm not sure if that's everything there is to it. What I am convinced of now is that being specific when criticising is valuable. Suddenly whatever you're talking about becomes tractable.


This isn't quite where I thought this was going. I thought this would be, "If you indiscriminately listen to all criticism, you waste too much time listening to idiots. Biologists are best off ignoring the criticisms of creationists. Physicists should ignore the flat earthers."


This title is a reference to the young Marx (Hegelian stage of his life) in which he criticizes the Bauer brothers? If not, it is an amazing coincidence. (ref: https://www.marxists.org/archive/marx/works/1845/holy-family/index.htm)


Having spent a lot of time in the humanities, I think there is a simpler explanation that probably gets closer to the truth: paradigmatic criticism is an easy win.

I recall being in philosophy seminars and somehow these types of critiques would always emerge, even when they were barely tangentially related to the subject matter.

I eventually came to admire the brilliance of it, since it is almost impossible to argue with. On the one hand, you cannot make an argument within a paradigm if someone completely refuses to engage with that paradigm. It's like stating that 1+1= 2 and someone replying that the existence of numbers is culturally relative. On the other hand, no one wants to argue with paradigmatic criticisms because they are often morally loaded. To use the previous example, any failure to acknowledge the cultural relativity of mathematical truth might be suggestive of some kind of bigotry against people who are bad at math.

To make matters worse, these kinds of critiques were always given the most appreciation. I suspect because no one dared question their relevance, and that they make everyone feel like they're doing something important without doing anything. It's like educated middle-class white people embracing their 'white privilege': it makes them feel virtuous without having to do anything costly to help disadvantaged groups.

In sum, it's an easy rhetorical tactic that gives status to the speaker, status to the audience for acknowledging it, and it's difficult to argue with for logical as well as social reasons.


I couldn’t agree more


Yeah. Paradigmatic critiques are a classic high-school debate tactic for when you don't actually have any response prepared. Because they cheat the system and can't be answered on their own terms. They're poison pills that *sound* like actual arguments.


This is a true thing that happens, but I think it over-balances you against paradigmatic criticism.

Because there are also some valuable, genuine, important paradigmatic criticisms, such as:

"God dictates that these things are moral and those things are not."

"There is no god dictating morality."

and

"God put Charles on the throne, so we must listen to him."

"God does not put anyone on any throne."


Those proposals sound rather specific to me. Some of what passes for criticism these days sounds more like: these bad things are bad so let’s condemn these bad things. Cue the self-congratulatory applause.


Yeah. Especially when "bad things" are assumed as such based on the meta-paradigm in which the organization (or its members) swim. In San Francisco (stereotyping here), being told that you're part of an institutionally racist organization isn't actually a paradigmatic critique--it's the same as saying "water is wet". It's a bare statement of "fact" that everyone there accepts already. It's entirely content free, almost tautological. Similarly, telling a group of flagellant monks that they're sinful or a chronically depressed person that they're too sad aren't actually critiques that carry any meaning.


You're right that I wasn't kind enough to paradigmatic arguments, as of course they are sometimes needed. But I will say there's a crucial difference between contesting a paradigm's truths by saying a different paradigm is better/more useful and simply refuting a paradigm.

To use your God example, saying that "God doesn't dictate morality because there is no strong evidence that God exists" is much different than saying "God doesn't dictate morality because your version of God isn't everyone else's version of God".

I'm just sceptical when someone challenges a paradigm by tearing everything down without offering an alternative. Take this line from Scott's post:

-> "EA's seek impartiality and objectivity, but such things are illusions"

Well yes but that's kind of self-defeating isn't it?

Jul 20, 2022·edited Jul 20, 2022

Yeah, it's a good point. I don't see alternatives on offer very often, but I can at least offer my own: satisficing consequentialist EA, or "relaxed" EA. Keep the thing where the goal is to use do-gooder wealth effectively, but drop the thing where you split hairs about what's best, and replace it with a very broad and easy "good enough" band at the top. Eg Veganism and eating less chicken are both good enough to move the needle at this time and nobody is encouraged to push one over the other.

Jul 20, 2022·edited Jul 20, 2022

Actually, let me go quite a bit further. In a link from one of the other commenters, it was stated that many EAs advocate for taking a big step back from maximizing consequentialism, which is basically what I already said.

So, let me be honest. I don't want a community where loud maximizers and quiet satisficers coexist. I want a community where loud maximizing is *actively discouraged*. The paradigm I offer is decisively relaxed EA, where it's *not* cool to show up and be too charged up with maximalism.

As well, I want a community that's strongly against the Copenhagen interpretation of ethics. Discovery of a moral problem should not lead to a moral obligation, not quickly, maybe not ever. Interacting with a problem should not increase one's moral obligation over not interacting with it. Maintaining the status quo is morally neutral by default, and good faith attempts to improve it are morally positive by default, and it's only the bar for what constitutes intellectual good faith that is allowed to rise (ie, the only way you're allowed to push people is to think a little harder before making strategically poor altruistic choices).


"Discovery of a moral problem should not lead to a moral obligation, not quickly, maybe not ever."

This is strongly against the intuitions of not just EAs and utilitarians, but just about everybody. You discover a drowning child, and there happens to be a life preserver right beside you that you can throw. You don't do it, because you think discovery of a moral problem should not lead to a moral obligation. Are you immoral for not throwing the life preserver? If you say no, you're not just arguing against EAs, but against almost everybody on Earth.


[Epistemic status: I'm going to deliver a polemic, not an argument]

This is why the news is so toxic and the drowning child problem breaks down: in the current world, every moral problem is discoverable, a portion of every day is devoted to discovering them, and almost everyone who thinks this way experiences a jolt of moral responsibility, followed by a serious downward pressure on their happiness, optimism, and ability to act in their immediate life. Did you think about the Capitol riot or Uvalde or George Floyd and get sad? I did not. Did you get much good work done those days? I did.

You may not realize it, but there is another group of people who give themselves the right to ignore a certain number of new moral problems in favor of focusing more effectively on their prior commitments to good actions and their overall mental health and good outlook. There always has been such a group, a very large one, but now it's becoming more intentional as a solution to toxic caring pressure. Your object example can be: "I walk through the streets of this third world country from my hotel to my meeting, and ignore the hungry mouths asking me for food, because I trust that Past Me has chosen the meeting to fit a better long term strategy than the one where I just stop and feed people all morning. I am not obligated to feed these people."

The crippling anxiety and depression of "hug every cat" is going to be outcompeted by "I'm sorry it's callous, but I've got soup to make and children to care for, so I don't want to hear about the new problem right now".

So I'm absolutely not arguing against everyone on earth. Everyone feels *compelled* by an actual drowning child and an actual life preserver. But your grandmother and mine both knew how to set practical boundaries on that caring.

founding

I *don't* discover a drowning child, because drowning children are extremely rare and I will probably never actually see one in my life. Never mind one who is drowning next to a life preserver that I could use, but with no actual lifeguard in sight. That situation is *very very very unusual*, and very very very unusual situations call for very very very special rules.

If you generalize from that special rule meant for an unusual circumstance, to "I saw something on television where someone was suffering or dying", you are going to be led badly astray. I, not being any sort of utilitarian or consequentialist, won't.

And let's not get into those stupid trolleys with carefully-selected numbers of people tied to the tracks. Only thing to do when you come across one of those, is leave the area ASAP.


As a recovering perfectionist, I’m a big fan of good enough. Maximizing can stop you dead in your tracks while you figure out the best thing to do.


I agree with this whole-heartedly. I think I'm going to try to make a 'meta' point along similar lines in a separate comment.

founding

> To make matters worse, these kinds of critiques were always given the most appreciation. I suspect because no one dared question their relevance, and that they make everyone feel like they're doing something important without doing anything.

I love being 'that asshole' who absolutely will, happily, just go ahead and 'ask dumb questions' out loud in front of everyone in these kinds of situations!


> Are we sure that becoming less individualist would be a better use of our energy than becoming more individualist? How did we achieve that certainty? It sure seems like more individualist countries are richer and better places to live. And that within those countries, the most individualist regions and social networks are the richest and best. Aren’t more intelligent people generally more individualist when you do the psych tests and surveys?

There are many books that do allude to how people get to this certainty, showing, say, that people are generally happier when they are religious. I don't feel comfortable summarising these arguments because I think there are entire schools of such thought. I think asking these questions suggests that this case has been assumed, rather than argued for, which is a little unfair.

But I was mainly intrigued by the 'intelligent people more individualist' idea - is this true? It doesn't mesh with my experience. There are so many terrible articles saying 'intelligent people are more likely to do X', for obvious facebook clickbait reasons, which makes researching this area hard. Do we have good data on actual traits that exist for more intelligent people, and where can I find them?


Now here’s some effective criticism - because you do need to get into the weeds to answer it


Within race, yes. East Asians are more collectivist than Europeans despite higher average IQ, but smarter East Asians are more individualistic than less smart East Asians.

author

Please name any of these books.

(as for intelligence vs. individualism, I think this is true but I can't find the study. The best I can do is refer you to studies saying high IQ people are more likely to endorse social liberalism, economic liberalism, and libertarianism as a political philosophy)


The question is, will OpenPhil print this and put it on a wall?


Now this is, finally, some proper criticism of EA. You should submit it to the contest for extra meta points.

This is probably my favourite article this year. Predictably so of course, since I'm an EA.

founding

Oh god – this must be like catnip to you :)


It's easy to reuse paradigmatic critiques from other fields/disciplines, and people will likely either not notice or praise you for being widely read. (This isn't necessarily a bad thing; there's immense value to cross-pollinating insights! It just lowers the barrier to entry.) Specific critiques can only be made from scratch, which takes much longer. It's also much easier to falsify a specific critique, so people are more likely to work on one and give up or not publish it.

This means that there would be a lot more paradigmatic critiques than specific, even if people in a field collectively spent the same number of hours on each. To balance the two types, you'd need a culture that goes out of its way to reward specific critiques and acknowledge the higher effort involved. (Which might ruffle the feathers of some paradigmatic-critics, unless you're really sneaky about it.)


> It's also much easier to falsify a specific critique, so people are more likely to work on one and give up or not publish it.

For a while I have suspected that something similar is an underappreciated reason for the rise of postmodern/identity-based scholarship in the social sciences. Undergraduates (even academically inclined ones) may well find it easier to write an essay which is both harder for their instructor to poke holes in during grading and, in a sense, pre-written. This doesn't (usually) mean not finding evidence, but it does mean not having to think as rigorously about what the evidence shows or what counts as good evidence.

And having saved themselves effort once, the students know how to save themselves effort in a similar way on the next essay. Each time it happens, an opportunity to delve into the specifics of an issue - whether quantitatively or qualitatively - is missed.

When a few of those students finish their PhDs or post-docs and become instructors, they may be inclined to load their curriculum with the Theory-driven works with which they've grown quite comfortable. And that might make the new generation of undergraduates more comfortable about beginning the same cycle.

Somewhat related is David Chapman's essay here (https://metarationality.com/stem-fluidity-bridge).

By the way, if Scott thinks the comments on this post merit a highlights post, I predict a title of 'Criticism of Criticism of Criticism of Criticism'. (Now he can spite me by making it 'Some Criticism of Criticism of Criticism of Criticism' instead.)


This. But you’re not doing useful work if it can’t be falsified imo


I'm not sure I agree with full Popperism, but I think you can indeed falsify most paradigmatic critiques. It usually takes a combination of strategies--arguing against the philosophical points (or sometimes just the sloppy rhetoric; lazy paradigm critics usually produce enough of it that you can just do that), interviewing or surveying people at stake, imagining alternative methods and thinking about whether they'll solve the problem.

Usually, at the end of that process, you get something stronger than either the original methods or the critics alone. It's an awful lot of work, though.

I wonder if there's a real tradeoff between how easy a criticism is to make and how hard it is to test. At least with Scott's 2 categories it seems like it. Specific criticism is hard to invent, easy to test; paradigmatic is the opposite.


But not usually worth the time, as you point out.


Really enjoyed this sentence:

"It’s so fun that it can be hard to resist the temptation to believe you’re in it: just as economists have predicted ten of the last two recessions, so science journalists have predicted ten of the last two paradigm shifts."


This is a bit like reverse bikeshedding (https://en.m.wiktionary.org/wiki/bikeshedding).

Emotional responses:

"The architecture of our office perpetuates systemic bias" -> solemn nods of agreement.

"The bikeshed should not be that hot pink colour" -> angry designer questions if you read his vision document carefully enough.

Phase 1/phase 2 paradigmatic shifts:

"We need to rethink architecture with sustainability in mind" -> unactionable, unfalsifiable.

"We should sacrifice arbitrary amounts of parking space if it allows optimisation of cycle storage" -> now we can have an interesting debate.


Yes! It’s the actionable items people will fight about. Has everyone forgotten about falsifiability too? I know it’s risky, but otherwise you’re just doing confirmation bias


I don't want to nitpick examples too much, but Remmelt's piece was largely an attempt to lay out what Glen Weyl thought about the Rationalist Community and EA Community, not just to provide a criticism of his own, and he succeeded at this so well that he wound up in an extensive private correspondence with Weyl that helped motivate Weyl to be more nuanced/charitable towards Rationalists/EAs in general. I don't know how well this generalizes, but I think you could have spent some time on the idea that, between overshooting and undershooting with criticisms, correctly shooting may be implausible as a community-wide standard, and overshooting is often valuable to community health for reasons other than just the criticisms themselves all being substantially good.


> he wound up in an extensive private correspondence with Weyl that helped motivate Weyl to be more nuanced/charitable towards Rationalists/EAs in general.

This is an accurate summary of what happened, going by Glen’s responses on how he had reflected on his discussions with me (and probably I think with others) and had decided to change how he was going about engaging with the EA/rationalist community.


I was emailing with him a bit during this time as well, but my impression from his statements was that you were probably the most influential person on his approach he was corresponding with.


That’s clarifying, thank you!

There was also someone called Craig whom Glen chatted with.

Jul 23, 2022·edited Jul 23, 2022

> Remmelt's piece was largely an attempt to lay out what Glen Weyl thought about the Rationalist Community and EA Community,

Actually, it was an attempt at clarifying common attentional/perception blindspots I had mapped out for groups in the community over the preceding two years. Part of that was illustrating how Glen Weyl might be thinking differently than thought leaders in the community.

But actually I was conversationally explaining a tool that people could use to map attentional/perceptual blindspots.

Try looking at the post (forum.effectivealtruism.org/posts/LJwGdex4nn76iA8xy/some-blindspots-in-rationality-and-effective-altruism) and piecing together:

- the I-IV. labelled psychological distances throughout the post (where distances represented both over past and future from the reference point respectively of {now, here, this, my}),

- along with approach vs. avoid inclination (eg. embody rich nuances from impoverished pigeonholes vs. decouple from the messy noise to elegant order)

- and corresponding regulatory focus over structure vs. process-based representations.

One thing I find a little frustrating about Scott’s selective depictions of the blindspots piece is that Scott seems to be interpreting the claims made as being vague (definitely true in some cases) and as some kind of low-information signalling to others in the community to do the thing that is already commonly promoted as socially acceptable/good (mostly not true; I do think I was engaging in some signalling, both in feel-good-relate-with-diverse-humans-stuff and in promote-my-own-intellectual-work-stuff, but I felt quite some tension around posting this piece in the community; Scott’s response on individualism speaks for itself).

Whereas the perceptual and motivational distinctions I was trying to clarify are actually specific, somewhat internally consistent within the model I ended up developing, and took a lot of background research (core insights from dozens of papers) and many feedback rounds and revisions to get at.

Note also that I had not had a conversation with Glen when I wrote the post. In our first call, Glen said that the post roughly resonated for him (my paraphrase), but that he also thought it overlooked how concepts like those in EA/rationality show up in other traditions too. Eg. he said that Hindu religious conceptions can also be very far in psychological distance and abstraction, meaning there is diversity of human culture and thought that the blindspots post did not represent much.

Expand full comment

Thanks for the clarifications, this was helpful, I wrote my comment quickly without revisiting the piece carefully. For what it's worth I knew that you wrote the post before Glen contacted you, which is why I specified that this opened the conversation up. I think this vindicates the value of your piece even more than the alternative in certain ways in fact.

Expand full comment

Happy to!

All of us are busy, so it would also be unreasonable of me to assume people read the post line by line.

I think that by referring to Glen Weyl in the post, I got a lot of people distracted by intergroup tensions (I was not aware at the time that Scott and others had been having online exchanges with Glen, which is on me because I could just have googled around).

Expand full comment
founding

Thanks for sharing extra specific/concrete details!

Expand full comment
Jul 20, 2022·edited Aug 6, 2022

Hey, I am the author of the ‘some blindspots in EA’ post.

Just started reading Scott’s post, after a friend shared the link with me. She, like me, thinks there is a self-criticism fetish in this community – which can get quite unproductive *in terms of* how people seek out and address criticism. The intro of the ‘criticism of criticism of criticism…’ post resonated. I agree also that criticism tends to be quite broad and abstract, and this is a fair portrayal of the list of specific distinctions in my post.

In terms of Scott’s responses to parts of the ‘some blindspots in EA’ post, I appreciate Scott being transparent and humble about what he focussed on and selected out for his writing, and what he cannot or is not making claims about. It’s always hard to portray others’ work you are criticising in a fair or at least open-minded way, and I appreciate the care Scott put into doing this.

The main clarification I need to make is that the brightspot-blindspot distinctions I wrote about are not about prescribing ’EAs’ to be eg. less individualistic (although there is an implicit preference, with non-elaborated-on reasoning, which Scott also seems to have but in the other direction).

The distinctions are attempts at categorising where (covering aspects of the environment) people involved in our broader community incline to focus on more (‘brightspots’) relative to other communities and what corresponding representational assumptions we are making in our mental models, descriptions, and explanations.

These distinctions do form a basis for prescribing the community to not just make hand-wavy gestures of ‘we should be open to criticism’ but to actually zone in on different aspects other communities notice and could complement our sensemaking in, if we manage to build epistemic bridges to their perspectives. Ie. listen in a way where we do not keep misinterpreting what they are saying within our default frames of thinking (criticism is not useful if we keep talking past each other). I highlighted where we are falling short and other communities could contribute value.

~ ~ ~

I’m coming at this subject from a very different angle than Scott, which is going to take too much time to clarify. I do not want to waste my and everyone else’s time by reading and writing long comment exchanges. If you are interested though to hear my thoughts on a specific comment I may have missed, ping me at remmelt[at}effectiefaltruisme.nl with a link to the comment.

Expand full comment
deletedJul 20, 2022·edited Jul 20, 2022
Comment deleted
Expand full comment
Jul 20, 2022·edited Jul 20, 2022

[deleted because it was actually the second part of comment above, which was not showing until I reloaded the page]

Expand full comment

Imo (and I may be simple-minded), what I hear in this post is that a lot of the criticism is simply not effective. It just doesn’t matter how well meaning you are. In fact, being well meaning might get in the way of seeing what actually needs to be challenged. It’s risky. You could be wrong, you could piss someone off, and these days you could lose your career.

Expand full comment
Jul 20, 2022·edited Jul 25, 2022

> seeing what actually needs to be challenged

Opinions on this differ. There is also herding around what is seen as worthwhile and commendable criticism in this community.

IMO actually at least half of the public posts that were explicitly critical about an EA organisation, group and/or the broader community were pointing out persistent, deeper and/or hard-to-resolve issues in how work and conversations got carried out in the community. My sense is that it is often quite stressful to point out these issues, and it often took a year or longer for specific issues to be openly discussed in public (particularly in 2016-2017 when it came to dedicated individuals noticing recurring problems in centralised career and grant recommendations by CEA-affiliated decision-makers, but being afraid to speak out about a thing that personally affected them, could make them look stupid in the eyes of their peers, and could decrease their chances of ‘getting selected’).

But I’d agree that a lot of it was not pointing out specific things specific people had been doing and why specific ways they have been going about it could have harmful wider effects. A lot of the feedback I myself shared with CEA, 80K, ACE, and other EA-dedicated organisations was specifically about certain actions they concretely took and how that made it hard for EA organisers, funders, entrepreneurs, etc to do more rigorous and attentive work. But I usually did not post that feedback on the EA Forum.

There seems to be a lot of overlap between what well-respected figures with a wide reach in the community like Julia Galef, Rob Wiblin and Scott Alexander are saying are non-worthwhile forms of public criticism, and what forms of public criticism they deem acceptable (with emphasis on precise empirically-grounded feedback on projects by rationality-minded people who are inside experts or at least very well-read-up on the work going on, and preferably still positively endorse and encourage EA ideological thinking and efforts overall).

I think it’s important to first observe yourself what people around you are doing, do your background research, and reflect on matters (which I did over three years for the blindspots post). And if needed be prepared to make honest statements that well-respected authority figures do not like as stated.

> you could be wrong

That goes both ways. I would not want there to be some arbitrarily higher burden of proof for criticising the work of others (nor should there be for criticising the criticism of others) compared to those others making statements about their own work.

The people criticised will have a more detailed context-specific understanding of what they have been doing and why they think they have been doing it. But they also self-identify with this particular idiosyncratic work and have more trouble taking a broader allocentric perspective on the effects their work is having.

> you could piss someone off, and these days you could lose your career.

It’s more productive, yeah, to be civilised and respectful in giving feedback to other people in a group.

For the rest, these sound like social and self-focussed worries that would personally not hold me back from criticising this community for missing specific aspects in ways that seem potentially harmful to other persons living around the world.

Expand full comment
founding

This is interesting – thanks for sharing!

Expand full comment

Thank you for capping this excellent analysis with the last sentence. Bingo!

It may lead straight to "But then what?", but it gets the essential problem perfectly.

Expand full comment

A couple of instances of racism in medicine-- racism of the failure to pay attention variety rather than active malice.

Skin discoloration problems which are more common in people with darker skin.

https://www.cbsnews.com/news/diversity-dermatologist-usc-fellowship-diagnosing-treating-people-of-color/

Malone Mukwende, a med student from Zimbabwe who's studying in London, is putting together a handbook/website about diagnosing problems in patients with dark skin because symptoms like blue lips or the classic target pattern for Lyme disease look different.

https://www.washingtonpost.com/lifestyle/2020/07/22/malone-mukwende-medical-handbook/

Pulse oximeters don't work as well on people with dark skin. People are working on developing better pulse oximeters, but it isn't finished yet.

https://www.npr.org/sections/health-shots/2022/07/11/1110370384/when-it-comes-to-darker-skin-pulse-oximeters-fall-short

At this point, the only solution might be to convince doctors to not trust pulse oximeters too much if other symptoms are present.

Expand full comment

And now I've realized why I wanted to post that-- those are strong examples of specific improvement. On the other hand, I don't know whether they would have happened without non-specific issues about racism.

Expand full comment
founding

That's a good point!

There's (almost always) _many_ levels to anything and sometimes 'problems' at one level help motivate finding solutions on others.

Of course, that just makes consequentialist reasoning _even harder_! :)

Expand full comment

Anti-racists, imo, do see active malice where there is merely unfamiliarity.

Expand full comment

They do, but it's hard to tell the difference between habitual not bothering, honest unfamiliarity, and malice.

Expand full comment

Yep. And there is no reason to do so. If it’s someone else you’ll get it wrong. If yourself then you are aware enough to change your behavior.

Expand full comment

It's peculiar that non-whites should be so desperate to move to places where people, as a bare minimum, supposedly ignore their health problems.

Expand full comment

This is specifically about dark-skinned people. Some non-white people are pretty light-skinned, and I wouldn't be surprised if some white people are dark enough to have some trouble with getting diagnosed.

It can be hard to notice a problem when it's part of "normal" procedures-- note the bit about the black doctor who didn't realize there was a problem with oximeters.

People move to the best available place, not ideal places.

Expand full comment

What possible definition of "racism" are you using here, exactly? You're confidently asserting something based on a practically meaningless word.

Expand full comment

It's possible that colorism-- a general preference for lighter-skinned people-- would have been more accurate.

I don't talk about racism a lot because it attributes motive-- or ignores motive-- in ways I don't like. However, I don't think it's completely meaningless.

Expand full comment
Jul 24, 2022·edited Jul 24, 2022

So, assuming dark-skinned people are different from light-skinned people is racism, and assuming dark-skinned people are the same as light-skinned people is also racism? \s

Expand full comment

No, color-blindness is considered racism these days. Please try to keep up.

Serious answer: I don't think "racism" is a nonsense concept, but it's been watered down and overused.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Is there a predictive processing angle here?

Scott’s argument seems to be that effective criticism requires generating a prediction mismatch: do this treatment instead of that one. Fund this intervention instead of that one.

Paradigms themselves are much harder to argue for because they are so much harder to develop and transmit. Usually one or two prediction mismatches or gaps can be explained away, but the problem is worse than that; understanding a new paradigm is often very hard to do from “inside” the old paradigm.

I suspect paradigm shifts happen more through an NP-type approach: young people see what works better and copy it, and older people have arranged so many memories and experiences in terms of the older paradigm that they can’t change, since the cost and risk of transition are extreme.

So perhaps most people aren’t constructing paradigms from scratch or evaluating new ones beyond a certain age, probably for cost and risk reasons. “Our paradigm has problems at the edges” is part of most current paradigms; hence the focus on “marginalized people”. It’s a tacit admission of flaws while avoiding being explicit about them. I suspect this notion outcompetes “our paradigm is perfect” and will eventually lose out to paradigms that highlight their own failures in specific ways.

Expand full comment

Paradigms are built with unnoticed assumptions. The assumptions are so embedded you can’t see them until a significant challenge comes along. Then you can get all huffy and walk away while protecting your career, because everybody knows what everybody knows.

Expand full comment

As you embody with your apology, effective criticism deals in the particular. Therefore to actually do it risks offense as well as being disputed or disproven. My suspicion is most people just don’t have the guts. I also suspect many a career is built on vagueness and virtue signaling. It’s entirely rational not to take an unnecessary risk, isn’t it?

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

>If we had a hundred such complaints, maybe we could figure out some broader failure mode and how to deal with it.

I want to complain about the passivity of your phase one.

I agree it seems good to have more complaints like the criminal justice criticism. I also agree with the broader point of this article, about looking for specific problems over generally accepted broad challenges to the whole system.

Your phase one and the sentence above paint to me a picture that after sufficient independent clues accumulate, inspiration strikes (as it is now likely to do), we quantum tunnel through the insight barrier, and from there follows more mundane work following the road, developing the new paradigm, planting the signposts.

However a subtle inaccuracy that observably keeps reoccurring — as in the mercury story — might look more like EA being inefficient with funding due to a specific repeated error that requires concrete changes to the framework to stop systematic miscalculations.

This is a (transparently) wild guess about the shape of what the next paradigm shift might be caused by, and I'm drawn to it by trying to walk closer to the Mercury story.

Where I think this is incompatible with the picture of phase one above is that looking for one such systematic concrete issue is very different than waiting for a pile of independent problems to accumulate, such that we may accumulate enough 'inspiration fuel' to finally light the fire.

The idea of waiting for a hundred such complaints so we might, maybe, see something broader emerge seems to still follow a pattern of liking broad non-specific patterns emerging over hard contradictions rooted in one or two solid anomalies that you can keep coming back to for support and/or gently bash people/yourself over the head with.

Expand full comment

The older I get, the more I notice that people who have actually tried to build things (companies, institutions, organizations) usually don't make broad paradigmatic criticisms. I think this is simply because they understand the practical uselessness of such critiques.

If there were some sort of ACX Survey Question that could come out of this, I would be curious to probe the age dependence. I would suspect that younger people are more prone to either making or vaguely nodding along with paradigmatic critiques.

I imagine it would be hard to ferret out the real variable I'm looking for: "have you, yourself, actually tried to or succeeded in building a human organization." A lot of managerial types would likely count themselves into this bucket, despite merely being hired into a leadership role, and thus probably never learning the relevant lessons. (In fact, hired-in managers are likely to have an even more skewed sense of what is required to build an organization, because they take existing structures and norms for granted.)

Expand full comment
founding

This is a good point!

I was, as is almost 'mandatory', something of a 'communist' as a teenager. Then I was exposed to Ayn Rand and was, for a long time, basically a libertarian. I'm now a self-described 'libertarian', similar to Scott himself, and mostly find "paradigmatic critiques" boring and unhelpful. I've looked into enough specific things to have seen/thought-of the ('fractally infinite') complicating details, and no longer expect 'paradigm shifts' to almost ever be useful.

That written, it _does_ still seem useful to have at least _some_ knowledge of 'alternative paradigms', but mainly those that were/are 'real' (i.e. 'implemented', and for long-term periods), if only to have _some_ kind of understanding of the relevant tradeoffs between/among them.

_Almost all_ of the 'standard paradigmatic critiques' are lazy 'cached thoughts', e.g. 'communism', and most people are almost completely unwilling to defend any past/historical 'implementations', but also not willing to offer any specific or concrete proposals for future implementations either. And that's sad! If only because I personally enjoy reading/hearing about the few specific/concrete proposals that people occasionally _do_ share :)

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Great article! It spells out many loose thoughts and beliefs I had for a while, but never quite managed to pin down. Strongly agree that EA has a fetish for criticism, even outright bad-faith criticism, and I am glad someone of significant cultural power pointed it out.

Expand full comment

Scott, you've always had a great talent for pointing out arguments I've felt a vague frustrated discomfort with in some way I couldn't figure out and exposing them as part of a broader pattern made of bad thinking and weird status dynamics.

The next narrative beat *would* be that I recognize that these arguments always have the result that I'm more comfortable with the beliefs I already had and should be skeptical from now on of such things, but as far as I can tell from my memory of such cases I still mostly agree with your arguments even after accounting for that.

Expand full comment

So far as I understand it, Einstein did not develop his theory of general relativity to deal with the Mercury "error".

Rather, "Einstein felt a compelling need to generalize the principle of relativity from inertial motion to accelerated motion. He was transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. " See, the following for an accessible treatment: https://sites.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/general_relativity_pathway/index.html#:~:text=Einstein%20felt%20a%20compelling%20need,static%20gravitational%20fields%20in%201912.

Newtonian physics was replaced (sort of replaced - because it is still taught thru most of high school and college) not really because of prediction errors per se but because it could not explain the motion of light. So, I'm not sure your distillation of Kuhn does Kuhn (or even critics of Kuhn) justice. Not every 'revolution' is born of the accumulation of errors during 'normal science'.

Part of the lens thru which you are viewing EA is as if EA is a "new paradigm". And perhaps most EAers think they are doing something "new". But I'd have to say EA is not really new at all. The incorporation of "science" into political and eleemosynary structures and endeavors has quite a long history.

Expand full comment

The development of relativity isn't a great fit for Kuhn's description of scientific revolutions. I don't know if he commented on it directly, but here's what I'd expect that he'd say in response:

The new paradigm (relativity) appeared earlier than expected. Normally, a bunch of problems have to build up before somebody puts the work in / has the inspiration to create a new paradigm. Einstein was unusually brilliant, and so he was able to invent the new paradigm out of thought experiments and his personal aesthetics, before experimental problems in the old paradigm became significant. This might have made it harder to persuade the rest of the community, but Einstein followed up his new paradigm with bold predictions that were dramatically proved to be correct. Einstein's genius allowed him to speed run through the paradigm change process.

Expand full comment

"earlier than expected" - shrug maybe? Science may, like evolution, move akin to "punctuated equilibrium" (See Gould). The point remains that it seems (at least to me) that SA has misunderstood both Kuhn and the movement from Newton to Einstein. And thus has similarly misunderstood EA and EA critique as misunderstanding itself as a paradigmic shift, when EA is not per se a new paradigm at all but simply "normal science".

The best one might say is that Singer et al. have added a moral imperative to the question of HOW we should love our neighbor - i.e. charity as a rationalist program rather than an empathetic/emotional program.

Expand full comment

"earlier than expected" - I'm comparing it to the development of quantum mechanics. Classical mechanics, when applied to atomic systems, made tons of terribly inaccurate predictions and everyone know that it needed to be replaced by the early 1920s. There was sort of a new paradigm, the Bohr-Sommerfeld Model or the Old Quantum Theory, but it was widely seen as inadequate. The crisis in the old paradigm occurred before the new paradigm was developed in 1925-26. In the development of relativity, the problems in classical astronomy were still small. Most people thought that they could be resolved within the existing paradigm. The new paradigm was developed before the crisis occurred in the old paradigm.

I agree that SA isn't representing Kuhn or Newton -> Einstein very well. I think that the philosophical underpinnings of science could be a sort of punctuated equilibrium. I don't think that science itself is, because "equilibrium" discounts the real progress that occurs in normal science.

I don't think that EA is a paradigm. It certainly hasn't created anything like universal assent in the field of altruism. It is a community with a worldview, it has created institutions that reflect its worldview, and it is using them to compete intellectually. Maybe EA isn't novel enough to count as a new paradigm. It feels more like a movement to replace the "pre-scientific" world of charity with a single paradigm.

Expand full comment

I don’t know if the orbit of Mercury was significant either way, but prior to Einstein scientists reasoned that there was probably another undiscovered planet between Mercury and the sun, and then someone just recalculated Mercury’s orbit using Einstein’s equations and found it was spot on. I don’t know if this helped relativity gain acceptance among skeptical physicists or if they were already convinced anyway. “There’s an undiscovered planet too close to the sun for us to observe” seems like a reasonable enough guess compared to “Newton was wrong” from the pre-Einstein perspective, so I can’t imagine too many physicists were kept awake at night by the orbit of Mercury.

Expand full comment

Not to miss the point entirely, but there’s an extension to the collectivist vs individualist argument that may not be in the critiqued article but is implicit wherever this argument is made, giving it relevance as a moral argument opposed to a philosophical or factual one.

By encouraging processes that lean towards individualism in more communal cultures, you are essentially supporting the changing of the culture from above, a kind of soft imperialism. King Cnut provides specific counters to the benefits of individualism like the greater happiness of religious folks, but even if it *is* on-net better, one should recognize and reckon with the paternalism inherent in the approach — that those sad collectivists would be happier if they did it more this way — our way. This is precisely what empire-lovers have said about savages for centuries.

There are of course good bitter pill arguments for this approach; Rome gave the Isles civil infrastructure, the Isles gave much of the world common law. And no one is trying to dominate others here. But there should be acknowledgment of this trade off, especially by a philosophy trying to maximize the good.

Expand full comment

I think it's frankly weird to be okay with large scale charitable interventions in general, but once something can be considered to be influencing the culture in a certain way, it becomes "paternalistic" (or worse, ""imperialistic""). Truthfully, it's paternalistic the moment these people become dependent upon smarter, wealthier foreigners to improve their lives, and virtually any effective intervention is going to impact their culture.

I'm all for being fully non-paternalistic and non-culture-changing. We let these people be, and let them decide their own destiny, while we can focus on real, humanity-scale problems. Of course, the same people crying about paternalism will find some way to cry over this too, something about white supremacy I suppose.

Expand full comment

Really good article. I can recall instances of myself doing paradigmatic criticism of technical architectures, getting the flat "nod along and ignore" response, but having the spiciest email threads come out on fiddly details that look like bikeshedding from the outside.

Expand full comment

Do it! Be specific!

Have you read NOISE? Seems relevant.

https://www.theguardian.com/books/2021/jun/03/noise-by-daniel-kahneman-olivier-sibony-and-cass-sunstein-review-the-price-of-poor-judgment?CMP=Share_AndroidApp_Other

Quote from the above review:

To be strictly fair, the authors do acknowledge the existence of algorithmic bias, although they perhaps underestimate its magnitude. A crucial point they do not acknowledge, however, is that algorithms don’t merely replicate human biases, they amplify them – and by a significant amount. One that was trained on a dataset where pictures of cooking were 33% more likely to involve women than men ended up associating pictures of kitchens with women 68% of the time. Until these issues are ironed out we should beware of social scientists bearing algorithm-driven gifts.

Expand full comment

That's not necessarily amplification of bias unless humans associate pictures of kitchens with women less than 68% of the time. If you showed me a picture of a kitchen and forced me to associate it with either men or women, I'd choose women 100% of the time unless I had some specific reason to do otherwise, for example if I recognized the kitchen as my own, or as a professional kitchen (because most chefs are men). I'd do so because I know that kitchens *are* associated with women in most human cultures throughout history, including in my own, and so if forced to choose a gender to associate with a kitchen, the logical choice is female.

In the same way, if you had a loaded coin that gave heads 70% of the time, and I made you predict what the next toss will be, what will you pick? 100% of the time, you should pick heads. You should not pick heads 70% of the time and tails 30% of the time, because that doesn't maximize your chances of being correct.
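The coin arithmetic here checks out; as a minimal sketch (mine, not the commenter's, with made-up variable names), always calling the likelier side wins about 70% of the time, while randomly "matching" the 70/30 frequencies wins only about 0.7² + 0.3² = 58%:

```python
import random

random.seed(0)
P_HEADS = 0.7   # bias of the loaded coin
N = 100_000     # number of tosses

tosses = ["H" if random.random() < P_HEADS else "T" for _ in range(N)]

# Strategy 1: always call the more likely side.
acc_always = sum(t == "H" for t in tosses) / N

# Strategy 2: probability matching, i.e. call heads 70% of the time at random.
calls = ["H" if random.random() < P_HEADS else "T" for _ in range(N)]
acc_match = sum(c == t for c, t in zip(calls, tosses)) / N

print(f"always heads: {acc_always:.2f}")
print(f"matching:     {acc_match:.2f}")
```

Always predicting the likelier outcome strictly dominates whenever the coin is biased at all.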

Expand full comment

How is this proof of amplification? What's the actual human level of "bias"?

And who the hell defines "bias"? I've literally seen articles like this that believe having *an empirically accurate judgement of something* is an example of bias (e.g. black people being more associated with criminals than other races). Either it's not a bias, or being "biased" isn't a real problem, just one that contradicts left wing narratives.

Expand full comment

Good post. Also in the category of specific criticism that raised enormous hackles on the EA forum: On Deference and Yudkowsky's AI Risk Estimates.

The post argues that EY has made some very bad predictions in the past, including a since-disproved prediction that the world would end due to nanotech, that should influence the weight we give to his future predictions. It's about as stone-cold a Bayesian take as one could ask for, and people absolutely lost their shit over it.

EA is a paradigm and as such won't be the source of the next paradigm. It should get comfortable with that and just place the bets it wants to place.

Expand full comment
author

I agree that that was a useful post.

Expand full comment

The discussion about EA's self-flagellating (my words) "love of critique" made me think of the following (with apologies to the original parable):

There once were two workers. When the boss was dissatisfied with their work, he began to critique them. The first grumbled and complained about being criticized, responding that he was already doing everything right. But, in the end, he changed his behavior according to the critique. The second accepted the critique gratefully, exclaiming that he was glad to be corrected and that the boss should feel free to let him know instantly when something wasn't right. But then changed not a thing. Which one of the two was more responsive to the critique?

Expand full comment

Also, insofar as the EA paradigm is made up of things like utilitarianism, hedonism, altruism and claims about moral obligation, it would be kind of silly to solicit paradigmatic criticism via a contest like this. Philosophers have been examining these ideas under a microscope for hundreds of years. Anyone capable of thinking up a truly novel and mind-changing objection to one of them would have published it in Philosophical Review rather than waiting around for an internet EA contest.

Expand full comment

"But we’ve got to change paradigms sometimes, right? How do we do that without soliciting paradigmatic criticism?"

I write about this general principle more here.

https://questioner.substack.com/p/the-cultural-narrative

Specific criticism towards the scientific community here.

https://questioner.substack.com/p/how-to-make-enemies-and-influence

Implementation testing here and here.

https://questioner.substack.com/p/they-targeted-gamers-gamers

https://questioner.substack.com/p/why-smart-people-believe-dumb-things

The TL;DR is that scientists and academics operate along the same incentives of any other profession and are reluctant to admit that they are wrong because it will mean they lose status and power. The only way to force them to change their habits is to make it clear that the consequences of NOT admitting that they are wrong (when they are) are going to be far more severe and harmful to them than the consequences of admitting they are wrong. You do this by building up a cult following that is willing to hurt the shitty scientists. Also, if you have any sort of useful knowledge (memetics, superforecasting, etc) that the scientific community refuses to believe in, you need to deliver that knowledge correctly. Instead of humbly petitioning the "respected scientists" to look at your findings, weaponize them and use them to destroy people's trust in the scientific paradigm altogether. Don't come to arrogant people as a supplicant pleading for them to "please notice me senpai", instead come to them as a conqueror and hurt them until they are FORCED to notice you.

https://questioner.substack.com/p/fun-with-fascism

Expand full comment

Wait, what are you prescribing levothyroxine for?

Expand full comment
author

Sometimes thyroid hormone is used as an augmentation in depression treatment, although liothyronine is actually a little more common than levothyroxine for that.

Expand full comment

Thank you very much. Does levothyroxine have any impact on depression by itself? I've been taking a pretty significant dose of levothyroxine for essentially my entire life.

Expand full comment
author

Hypothyroidism can cause depression, or something that looks very much like depression. I don't think it's very well understood to what degree thyroid hormones can treat depression not due to hypothyroidism.

Expand full comment

Thank you.

Expand full comment

To state somewhat of an obvious point: the reason EA criticism gets so much traction on the forum is because it’s the most interesting thing you can write. All other posts are about cause areas and they’re (a) only appealing to people who understand or are interested in the cause area and (b) don’t lead to a lot of discussion in that the most you can say is usually “that’s a great idea, keep it up / someone should try that!”

Expand full comment

Why is the final part of this article solely about Kuhn? Surely there are other thinkers on the subject of paradigms?

Expand full comment
author

You caught me, he's the only one I've read.

Expand full comment

Did Kuhn claim that Einstein was a response to "prediction errors" building up from Newton? I don't recall reading that (and that was certainly not Einstein's motivation for general relativity) but it has been 40 years since my first time thru Kuhn and 20 years since my second time thru him?

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I feel like the example of paradigmatic criticism given in the article (how do we know reality is real, or that capitalism is good) is a bit of a straw man. I've always thought paradigmatic criticism of EA work was more about points like:

- Giving in the developing world, as EA work often recommends, is often used as a political tool that props up violent and/or corrupt governments, or has other negative impacts that are not easily visible to foreign donors

- This type of giving also reflects the foreign giver's priorities, not the recipient's

- This type of giving also strangles local attempts to do the same work and creates an unsustainable dependence on outsiders

- The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems

- The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized, when it is trying to solve huge, complex, multi-factorial social issues

- The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space

- The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad.

This is the kind of criticism I would consider paradigmatic and potentially valid but also not, as far as I can tell, really embraced by EAs.

Expand full comment

as usual, the lone voice talking about what really needs to be talked about, and no one answers.

Expand full comment
founding

Well, yes, it's not "embraced" by EAs because doing so would cause them to cease to be EAs!

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Three thoughts:

---

> But we’ve got to change paradigms sometimes, right? How do we do that without soliciting paradigmatic criticism?

> I don’t know, man, I don’t know.

But this one is so obvious that the answer is a well-known aphorism. Change comes one funeral at a time. New entrants to the field -- any field -- choose a paradigm to work in, and to a first approximation they all stick with their choice forever. A paradigm fails when its practitioners end up dying as bitter, lonely losers instead of rich celebrities; this prevents new entrants from joining that paradigm.

The remoteness of the determinant of "success" from the truth explains why paradigms need not be tightly connected to the truth.

---

> The anomalies with Newtonian gravity weren’t things like “action at a distance doesn’t feel scientific enough” or “it doesn’t sufficiently glorify Jesus Christ” or even “it’s insufficiently elegant”. The one that ended up most important was “its estimate for the precession of the orbit of Mercury is off by forty arc-seconds per century”.

The post gives this due credit, but the issue of Mercury has the most historical prominence because it's easy to measure. "Action at a distance doesn't feel scientific enough" is, unlike the other two 'bad' examples, an important criticism that has, in the case of gravity, completely carried the day. We do have a modern theory relying on action at a distance, quantum entanglement, and opinions range from virulent loathing to resigned neutrality.

---

I'm happy to see the post explicitly equating EA's fetish for content-free criticism of EA with the "structural racism" white fetish for content-free criticism of whites. I would go so far as to suggest unkindly that in fact it's a white fetish in both cases, and the reason EA is like this is that it consists mostly of self-hating whites. In that context, I kind of wish people with this level of need for masochism would get into EA where the same fetish does less damage. If they would, then stamping the impulse out of EA would be productive for EA on its own terms, but counterproductive for society.

More realistically, if instead of getting really into EA or antiracism the same people got really into Catholicism, or any other well-rooted traditional belief system, there would be an established channel for their impulses (maybe several!) with the advantages that (1) a fairly robust support system already exists; and (2) people around them would already have well-calibrated theories of how seriously to take them. I think this model -- "when people go nuts in predictable ways, they get channeled into a system that absorbs their quirks, maybe gives them something useful to do, and lets everyone else know what to think about them" -- is underappreciated as an aspect of traditional religious practices.

Expand full comment

Regarding your first point, I just now discovered this is Planck's Principle (https://en.wikipedia.org/wiki/Planck%27s_principle):

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it ..."

This sounds like a bit of wisdom that was earned from painful experience. I'm not sure why Max Planck would say this, he was a fabulously successful physicist. He has a fundamental constant of nature named after him, after all.

Expand full comment

Dualism / objectivity / disconnectedness / analysis is a constellation of mental skills that have one use, while non-dualism / subjectivity / interconnectedness / synthesis is a constellation of mental skills with a different use.

If you're writing a short story, for example, you're usually in a very non-dualist, subjective, synthetic, interconnected mental mode while you're composing the first draft. You're asking, "Is this cool?" "Does this resonate?" "How does this seem?". Whatever you put down, that's the right thing to put down. But to edit and proofread, you have to switch to a dualist, objective, disconnected, analytical framework: "Is this right or wrong?" "What does this section do?" "Is this sentence grammatically correct?" In this mode, it turns out that lots of things were the wrong thing to put down and you need to cut them.

They say, "Write hot, edit cold." This is what they mean. I think of these as "expansive" versus "contractive" phases. The expansive phase is characterized by waste, inefficiency, creativity, contradiction, high spirits, productivity. The contractive phase is characterized by perfectionism, miserliness, efficiency, focus, goals.

I don't know EA deeply, but the essential idea ("current charity is inefficient; how can we do better?") is fundamentally contractive. EA is editing / proofreading / analyzing / criticizing charity. Of course, everything partakes of both phases; EA needs expansive-phase thinking too, but it's going to be secondary.

Expand full comment

"Here’s my proposal: ask why they prescribe s-ketamine instead of racemic ketamine for treatment-resistant depression."

Isn't it common for one enantiomer to have more of the desired effect and the other to have more of the undesired effect? (Escitalopram outperforms citalopram in studies, if I remember correctly, and you questioned whether Johnson and Johnson picked the correct ketamine enantiomer. And then there was that whole thalidomide thing...) Perhaps there should be more psychiatrists willing to prescribe racemic ketamine to take at home, but if regulation or insurance forces psychiatrists to monitor anyone on ketamine, and racemic ketamine has significantly worse side effects (if I remember correctly, a dose of esketamine is similar to a moderate recreational dose of ketamine, and an antidepressant dose of racemic ketamine may need to be double that), it's reasonable to be unwilling to do so.

Expand full comment
author
Jul 21, 2022·edited Jul 21, 2022Author

Along with the linked post, I spell out some of my thoughts on racemic vs. es- ketamine at https://lorienpsych.com/2021/11/02/ketamine/

Expand full comment

Specific vs general criticism reminds me of an exchange between Peggy and Bobby Hill. Paraphrased from memory:

Peggy: All my life the media has tricked me into believing that my big feet are ugly.

Bobby: Who? Who in the media tricked you?

Expand full comment

> So instead, they’ll preach things the old paradigm says are good, which haven’t been implemented because they’re vague or impossible or not worth the tradeoff against other considerations. Listen too hard, and you’ll go from a precise and efficient implementation of the old paradigm, to a fuzzier implementation that emphasizes trying to do vague, inefficient, or impossible things

Could I get an example of this?

Expand full comment
author

I actually find this very annoying. If I say something like "it's conventional opinion that the sky is blue", someone in the comments is going to ask "Oh, really, give me three examples of respected media sources asserting the sky is blue?", and this actually takes a little bit of time to find because usually they're slightly more subtle than that or because I have to wade through a bunch of paywalls or something.

Expand full comment

I'm wondering what two million dollars (or two hundred million) could do to improve criminal justice in the US.

I've at least been following news stories about failures and occasional successes in the project for decades, and it's much harder than I thought.

One possibility would be to just publicize existing problems-- bad forensics, coerced confessions, and so on.

One problem is a double standard-- relax standards for imprisonment before trial, and you'll be blamed for any serious crimes committed while waiting for trial, but you won't get credit for not screwing people's lives up.

In regards to damage done by the criminal justice system, it's really hard to compute, because it goes well past the individual-- costs to family and friends, and general damage caused by pulling useful or somewhat useful people out of a community. There's also a corrosive effect of putting innocent people in jail.

Much as I detest the war on drugs (and how would you spend two million dollars on ending it? it's probably a problem of comparable size), it isn't the main problem, and neither are private prisons, even though the incentives are ugly. The major problem seems to be excessively long sentences for crimes, and carelessness in applying them.

Maybe just give the money to the Innocence Project and say the hell with it. It isn't even enough to make another tv show about the problem, though maybe it's enough to get people to produce one.

Expand full comment
deletedJul 20, 2022·edited Jul 20, 2022
Comment deleted
Expand full comment

What do you think are indicators of a high crime rate and why do you think imprisoning more people is a solution?

Expand full comment

Money would be better spent discovering why America has a higher incarceration rate than other nations and then fixing that problem. People and groups such as Color of Change implicitly assume that the problem starts with the police and prosecutors, and extends until incarceration. I don't see why that should be taken at face value.

It's entirely possible that going into high-crime areas and disincentivizing crime is a much more effective way to reduce the incarcerated population. Yet this approach isn't considered. Likely because it's significantly more complicated to make progress on that front than it is to give some money to a left-wing org, tell them to enact criminal justice reform, and call it a day. Only relatively easy avenues of attack are considered.

Expand full comment

A combination of high(er) crime rate than other developed economies due to a large black population and lots of guns, and well resourced law enforcement and justice system meaning that a higher percentage of crimes result in arrests and convictions compared with other high(er) crime countries.

Expand full comment

Yep. Not everything is fixed by money. Zuck gave millions to NJ schools with little to show for it.

Expand full comment

What fixes things, to the extent that they get fixed?

Expand full comment

Well for starters, you need a consensus on what you're actually trying to fix, which apparently we don't have on this particular subject, because you seem to be under the impression that fixing the criminal justice system would involve fewer people being in prison, whereas many others (such as trebuchet) would say that fixing the criminal justice system needs to involve a lot more people being in prison.

Expand full comment

I don't know what people in general believe about how many people should be in prison. I thought the high end was represented by people who were content with the current numbers rather than wanting them to be higher. Trebuchet exists, but how typical is he?

Expand full comment

“Number of people in prison” is a stupid metric, and the focus on that metric has led to a Goodhart’s Law situation.

Expand full comment

What do you think would be a good metric, or at least a less bad metric for a while?

Expand full comment

It depends on what you're trying to measure and what your goal is. For my goals, crime rates would be a better metric. If the only way to keep crime rates down is by incarcerating a large proportion of the population, then perhaps that calls for a root-cause analysis of why our society develops so many criminals, at which point maybe we look at the proportion of births that occur out of wedlock or the academic performance of the bottom 5-10% of schools.

Expand full comment

Funding the campaigns of individual progressive prosecutors, or paying individuals bail and legal fees, may be more tractable than those kind of systemic changes.

Expand full comment

As long as cash bail continues to be a thing, just paying it for everyone you can afford to pay it for would likely go a long way.

Expand full comment

I suspect the procedural bottleneck might be the number of judges, prosecutors, and public defenders. Which is a good target for throwing money at, although not necessarily non-government money.

Expand full comment

"What do you mean I'm not engaging with your criticism? I spend all my leisure budget on dominatrices telling me what a bad boy I am!" This is a silly argument.

Expand full comment
founding
Jul 23, 2022·edited Jul 23, 2022

It's also a silly argument to quote something that wasn't written and write that it "is a silly argument".

Expand full comment

Glad to see a return to the kinds of posts by Scott I prefer

Expand full comment

Software engineers seem to love to complain about the general paradigm of software development within their organization. Stuff like "We never have any documentation for our code," or "Our whole application architecture is convoluted, ad-hoc, and fragile" or "We never think about usability or accessibility until the very end of the development cycle." Everyone knows on some level that the current paradigm is broken and can complain about how broken it is for hours.

But, when you try to propose some specific solution to address those concerns, like "We should run a nightly build task to ensure that all symbols have Doxygen-formatted comments of at least 500 characters," or "We should have a giant three-hour Zoom meeting with stakeholders from the architecture team, the interface design team, and the accessibility team before beginning work on any task," you'll start getting pushback. Because replacing a flawed paradigm with a magically perfect paradigm is always going to be better for everyone by definition, but specific solutions are always going to have tradeoffs and are always going to have some kind of negative impact on someone.
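A check like the (deliberately over-the-top) nightly documentation task proposed above could be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tooling: the 500-character threshold and the `///` Doxygen style come from the comment, while the function name and the naive regex are my own assumptions.

```python
import re

DOC_MIN_CHARS = 500  # the (deliberately excessive) threshold from the proposal above

def check_doxygen_coverage(source: str) -> list[str]:
    """Return names of C-style functions whose preceding '///' Doxygen block
    contains fewer than DOC_MIN_CHARS substantive characters."""
    failures = []
    # Very naive matcher: a run of '///' comment lines, then a function-like definition.
    pattern = re.compile(
        r"((?:^[ \t]*///.*\n)*)^[ \t]*\w[\w \t\*]*[ \t]+(\w+)[ \t]*\(",
        re.MULTILINE,
    )
    for match in pattern.finditer(source):
        doc, name = match.group(1), match.group(2)
        doc_chars = len(re.sub(r"[/\s]", "", doc))  # ignore slashes and whitespace
        if doc_chars < DOC_MIN_CHARS:
            failures.append(name)
    return failures

# Example: this symbol's doc block is far under the 500-character bar.
sample = "/// Adds two numbers.\nint add(int a, int b);\n"
print(check_doxygen_coverage(sample))  # -> ['add']
```

Even this toy version shows where the pushback comes from: the threshold, the comment style, and the failure policy are all tradeoffs someone will object to, in a way that "we never have any documentation" is not.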

Expand full comment

I came here to write almost this exact comment. Thanks for doing it better than I would have!

Expand full comment

I love your comment. True in so many domains!

Expand full comment

The magical new paradigm that only exists, differently, in the head of each individual involved will always be superior to what you can actually achieve. "Don't let the perfect be the enemy of the good". Advice ignored almost universally across all human endeavors.

Expand full comment

I think this is different to what Scott is talking about. This is more that people like complaining but don't like having to do work to remedy the problems they're complaining about.

The EA thing, people aren't complaining, they're criticizing. And importantly, the specific criticisms that lead to heated arguments *aren't about the general criticisms*. They're not concrete steps to make X thing "less white supremacist" or something, and the problem is nobody wants to do the work to carry out those steps.

Even the complaints you listed are very specific compared to the stuff scott is talking about.

Expand full comment
founding

I gave this up a long time ago and have mostly just shifted to a 'try seemingly good ideas' approach. Make small improvements incrementally and review their impact regularly and just keep iterating!

It does help to either have an effective 'dictator' to cut thru 'consensus building bullshit', or, alternatively, have sufficient latitude to just try things yourself. I prefer the latter, but the former can be _very_ effective!

Also see 'worse is better' and its various essay progeny!

Expand full comment

I don't know much about EA; I am too far from it. All I can react to is what has reached me.

And what has reached me is EA giving money to weird and self-serving things, such as printing physical copies of Harry Potter fan fiction, or paying someone to relax and learn to ride a bike. When I bring this up, for some reason all the EAs defend this instead of just saying "that wasn't representative".

If EAs keep defending printing physical copies of HPMOR, it does certainly look like they cannot take criticism. (Or at least it looks like *something* has gone badly wrong with them.) So that's where I, for one, am coming from: I find EAs are blind to their self-serving bias.

(I know that most of EA money is in global poverty and AI risk or whatever, but then it should be easy to disavow paying someone to learn to ride a bike, and yet EAs resist it for some reason.)

Expand full comment

> "paying someone to relax and learn to ride a bike"

What does this refer to?

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

Thanks for linking!

There are two parts here:

(1) is long-term AI safety an efficient charity? Note that last time I checked https://www.effectivealtruism.org/ has declined to recommend AI-safety spending as effective. I agree with them.

(2) if someone wants to spend funds on AI-safety stuff then maybe $20,000 on burnout prevention is a good use of funds.

But I think that Against Malaria foundation would spend it more effectively, which is why I donated my funds there and I am not planning to give anything to AI risk guys who apparently are swimming in cash.

Expand full comment

> EAs keep defending printing physical copies of HPMOR

have they defended it as an efficient charity and were not treated as completely mistaken?

or have they treated it as a hobby or something?

Expand full comment

As an efficient charity. (It would be unobjectionable as a hobby, but this was using charity money.)

https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions

Expand full comment
author
Jul 21, 2022·edited Jul 21, 2022Author

Haha, basically every day I meet people who were recruited to high-impact jobs by that Harry Potter fanfic (example: last Friday I met someone who is now an AI safety researcher who first learned about AI safety through a chain of events starting with reading HPMOR). That is honestly the best investment they ever made, and it's perfect because it sounds so stupid and is easy to attack. You can tell who's actually thinking strategically vs. who is just simulating what cool high-status friends would approve of, just by asking people their opinion of that one (admittedly embarrassing) grant.

(I was mildly against that grant when it was first made, but decided to keep my mouth shut about it, which I guess means I neither gain nor lose Bayes points)

Expand full comment

> That is honestly the best investment they ever made

Personally, that makes me even more dubious about their usefulness.

Expand full comment
founding

Why? And who/what is "their"? The people that printed copies of HPMOR and gave them to others? EAs generally?

Expand full comment

Organization which funded this grant.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

I agree writing HPMOR was an excellent strategic move on the part of EY. But is printing paper copies actually doing any of the work? Surely most of the useful word-of-mouth based on HPMOR is from people stumbling upon it online in normal fanfic places. The kind of people who are liable to buy a physical book are people already plugged into general rat spaces.

Expand full comment
author

I don't know if this is the specific grant LGS is thinking of, but the one I'm thinking of was printing copies and sending them to Math Olympiad winners (possibly some other competition, possibly only in some specific country, I can't remember) in order to recruit them.

I admit I don't know if anyone was recruited by that one in particular, but it doesn't seem like an obviously dumb plan.

Expand full comment

Ah, I see. Sending them to people makes more sense. I agree that it's not "obviously dumb".

However, my prior is still that this would have fairly small effects compared to HPMOR in its natural milieu. It works because it's a cool online fiction story in your favourite trashy genre. It wears on its sleeve that it's "rational fiction", of course — but you don't go in expecting that reading it may well upend your entire moral and epistemological worldview.

So I'm skeptical that it'll work anywhere near as well if you hand it at random to people who may not have any interest in the surface charms of a snarky Harry Potter fanfic, and tell them going in that it's more than just a bit of geeky fun (as you inevitably kinda have to if you're telling those kids "so there's this charity that specifically wants you to read this fic…").

Expand full comment

AI safety research is high impact if AI threat is real, and it's possible to do something about it. Both are pretty dubious.

Expand full comment

This is a very interesting addendum to "at least Harry Potter got millions of kids to start reading".

Expand full comment

I, personally, have become un-recruited from EA because of people defending printing HPMOR books. I used to identify as EA; no longer. I now wage a (mild) spite campaign against EA instead. So take that into account when considering efficacy.

Printing fan fiction is such a monumentally stupid thing to do that I have trouble understanding how smart people like you think it's a good thing.

Sure, some people in EA found out about EA due to HPMOR. But as an *intervention*, handing out HPMOR books is strictly worse than just, you know, telling people about EA. Also, if you hand people HPMOR books, they *might* discover EA as a result, but you cannot quantify that by looking at the number of EAs that came from HPMOR; you must DIVIDE BY THE BASE NUMBER OF PEOPLE WHO READ HPMOR. For a rationalist, you should apply Bayes's theorem more. We want Pr(EA | HPMOR); you instead are talking to me about Pr(HPMOR | EA).

Also, the HPMOR books are in English and were handed to IMO medalists, many of whom don't speak English and only a fraction of whom would be native speakers.

It's just so stupid all around, it's despairing.
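The distinction between the two conditional probabilities above can be made concrete with numbers. These figures are entirely made up for illustration; the point is which ratio you compute, not the magnitudes.

```python
# Illustrative, made-up numbers: the point is the direction of the conditional.
hpmor_readers = 500_000   # hypothetical total HPMOR readership
ea_recruits = 2_000       # hypothetical total EA recruits
ea_via_hpmor = 200        # hypothetical recruits who found EA through HPMOR

# Pr(HPMOR | EA): fraction of EAs who came via HPMOR -- looks impressive.
p_hpmor_given_ea = ea_via_hpmor / ea_recruits    # 0.10

# Pr(EA | HPMOR): conversion rate per reader -- the number an intervention needs.
p_ea_given_hpmor = ea_via_hpmor / hpmor_readers  # 0.0004
```

Under these assumptions, "10% of EAs read HPMOR first" coexists with a per-reader conversion rate of 1 in 2,500, which is the quantity a book-distribution grant actually buys.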

Expand full comment

I apologize for the double comment, but I'm still kind of obsessed with how stupid this grant is. I just can't let it go. And Scott thinks this is "the best investment they ever made".

And even though everyone in EA that I've talked to supports this grant, it is apparently the EAs who are NOT doing groupthink and just "thinking strategically". And I'm apparently "just simulating what cool high-status friends would approve of", even though Scott is cool and high-status and disagrees with me, and even though Scott has no response to my criticisms of the grant.

This is EA in a nutshell: it puts "spend money on malaria nets" on the tin, but actually spends money printing physical copies of freely-available Harry Potter fan fiction and sending them to kids who don't speak English. "Oh, EA loves criticism so much," the EAs say, then conspicuously fail to answer basic questions like "didn't you just totally botch Bayes's theorem in your defense of printing physical copies of freely-available Harry Potter fan fiction instead of saving children's lives".

Expand full comment
author
Jul 24, 2022·edited Jul 24, 2022Author

Yes, looking back on it more, I think *writing* HPMOR was among "the best investment" "they" ever made, but I can't clearly say the same thing about giving it out.

But I don't know why you're having such a strong negative reaction here. In particular:

- I'm not sure why you think they're giving out copies of HPMOR in languages people can't read. The only case I know of doing something like this is the grant described in https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations . The project leader, Mikhail Yagudin, is himself a former Russian Math Olympiad participant and lists on his resume that he tutored gifted math students in Moscow, so I assume he knows what languages these people speak. I don't know if that means he knows that all smart Russians speak English, or that he arranged to use Russian-language copies, but the grant says that he is "coordinating with the team that translated HPMOR into Russian", so I think it's the latter.

- I agree that just because many people entered the rationality community through HPMOR doesn't mean that there's *necessarily* a high conversion rate per reader. But under reasonable assumptions about how many people read the book, I think it implies it's pretty good, especially since part of the conversion process is getting someone to agree to read something in the first place (compare HPMOR to the Sequences). The grant I linked above says that they're already trying giving these people leaflets advertising the community in a normal way, and I feel like it's not totally insane to think "along with the dry leaflets, it might also help to give these kids a really popular Harry Potter fanfiction that describes our community's ideals".

- Open Phil has previously said they value the marginal talented AI safety researcher at $10 million. At that price point, a $28,000 "marketing" project is worth it if there's a 1/350 chance it recruits one extra talented person to go into AI safety. At the number of books they gave out, this means a 1/210,000 chance per book. Given how many people got into the community through that book and how few people there are in the world, I don't think it's realistic to think it has a less than 1/210,000 success rate.

- I think it's unfair to say EA claims to spend money on malaria nets but actually spends it on HPMOR copies. EA has also spent over $400 million on malaria nets! It just sometimes spends $28,000 on clever long-shot bids to recruit a new generation of talent too! This is working - a surprising percentage of the next generation of mathematically gifted kids is going into AI safety research. I just learned that EA funds the most prestigious summer camp for mathematically talented youngsters in Brazil - I think there's something similar in the Czech Republic - and I just talked to a group of extremely talented young entrepreneurs at Google X, some of the smartest kids in the world, who told me they were all scrambling to get into an EA high school fellowship that's going on around the same time. This kind of thing doesn't happen by accident; it happens through a really coordinated worldwide outreach program costing tens of millions of dollars. And if you're already spending tens of millions of dollars to attract talented high school students, and the most popular Harry Potter fanfiction ever happens to be propaganda for your movement, and the most successful crowdfunding project in Russian history was a Russian translation of that project, and a Russian gifted math tutor who is also a superforecaster applies for a grant saying he thinks the Russian translation would be a good fit for young Russian math prodigies, I think it makes total sense to say yes and give him enough money to make it work. Along with the $400 million you're spending on malaria nets.
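The break-even arithmetic in the Open Phil bullet above can be checked directly. The $10 million and $28,000 figures are from the comment; the book count of 600 is an assumption inferred from the quoted 1/350 and 1/210,000 figures (350 × 600 = 210,000).

```python
# Checking the quoted break-even figures. Only the first two numbers are from
# the comment; n_books is inferred and may be off.
value_per_researcher = 10_000_000  # Open Phil's stated value of one marginal researcher ($)
grant_cost = 28_000                # cost of the book grant ($)
n_books = 600                      # inferred number of books distributed (assumption)

breakeven_per_project = grant_cost / value_per_researcher  # 0.0028, i.e. roughly 1/350
breakeven_per_book = breakeven_per_project / n_books       # roughly 1/214,000

print(round(1 / breakeven_per_project), round(1 / breakeven_per_book))  # -> 357 214286
```

So the grant pays for itself, under these assumptions, if one book in roughly two hundred thousand produces a marginal researcher; the disagreement in this thread is over whether that per-book rate is plausible, not over the arithmetic.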

Expand full comment

I appreciate the response, and I apologize for my grumpy comments.

My issue is with the inevitable cognitive bias: "what will save the world is to invest in my personal hobbies and give money to all my personal friends (so they can learn to ride a bike)."

This bias is really viscerally disgusting to me, and in all other contexts decent people recognize that this is bad. What you personally find fun should not be what you, running a charity, fund with other people's money. Your personal friends should not be granted charity money to relax and have fun, even if you've galaxy-brained yourself into thinking it's the most effective use of money.

And what I see from EA is that people just really love to ignore this insight, and to fund their personal hobbies and give money to their personal friends. In this respect the bike-learning is more egregious than the HPMOR printing, but both are bad. I've also seen variants of the bike-learning grant from other EA agencies (but I have trouble remembering where).

Now, what I hope is that this is not central to EA and is a minor thing, but what keeps happening is that I attack these types of grants and get pushback from EAs saying how HPMOR and learning to bike are effective charities, actually. And that leads me to distrust the entire enterprise.

As for the details of this specific HPMOR grant:

I suspect you're just wrong about the English issue -- IMO medalists are, by definition, from all over the world; there can be at most 6 medalists per country. I'm pretty certain the assumption is that they all speak English, which I find dubious when it comes to what they read for pleasure, and even if there's a Russian translation that can at most cover the few Russian-speaking medalists. You've mentioned "Russian math prodigies" but this is wrong, the HPMOR copies went to IMO medalists, and the IMO is an international competition and not a Russian national one.

Also, uh, HPMOR is both (1) available for free online, why waste the $28,000, and (2) actually has a somewhat wide reach among nerds, with a high percentage of my friends already knowing about it. And back when I participated in math olympiads, I did get plenty of free books, only 1-2 of which I've read -- and I'm an English speaker.

Funny to hear that Open Phil would value me as a researcher at $10 million. (Admittedly I never did get an IMO medal, but if things had gone a little differently I would have, and in any case I'm a mathematician now.) I guess I disagree with that assessment and do not think Open Phil should pay me $10 million to do AI safety research. Nor should they fund my colleagues this way, and my colleagues are on average more mathematically talented than IMO medalists, by my judgement.

Part of the reason they should not pay that much is that they can likely get it for less. I value my drinking water at $100,000 a cup or something, but I don't have to pay this much because water just isn't that expensive. Similarly, getting mathematically gifted people to work on AI research just doesn't cost nearly as much as $10 million a pop, as you can see by how various AI risk institutions have no trouble recruiting and Open AI just hired Scott Aaronson.

Expand full comment
author
Jul 27, 2022 · edited Jul 27, 2022

==== Re: HPMOR - you're right, sorry, I saw that the sponsor was EA Russia and they were talking about giving the books to IMO Saint Petersburg and I assumed that was a Russian subbranch, but it looks like that's just where the worldwide IMO was held that year.

I still think that, given that:

- Mikhail is himself an IMO winner from a non-English-speaking country, and Habryka (the grantmaker who approved it) is a native German speaker

- Mikhail says he "contacted organizers of math olympiads and asked them whether they would like to have HPMoRs as a prize", and Habryka says that "the organizers of IMO and EGMO themselves have read HPMoR, and that the books are (as far as I understand it) handed out as part of the prize-package of IMO and EGMO."

...that this isn't being done thoughtlessly and everyone involved has a good understanding of what languages these people speak, but I guess I can't prove it.

==== Re: the bicycle thing

I think that's a kind of harsh way of framing the actual proposal, which was that this person was going to do some 1-on-1 individual counseling sessions, writings, talks, workshops, and also some personal development, for which they used learning to ride a bike as an example.

I agree that I was uncomfortable with this grant. If I had to guess why it was given, it would be because the grantmakers wanted to signal that they were sad that this person had burned themselves out doing EA work, that our community supports people in this position, and that we're not going to just discard you if you work really hard for us and then feel like you were traumatized by it. I think they might also have been trying to keep Lauren (who is a pretty impressive person) on board and make her feel supported after this happened in the hope that she would go back to working for EA. (I think she has since quit everything and become a Zen Buddhist monk, which is not really how I expected that story to end)

The other excuse I would give here is that this is the EA Infrastructure fund, ie the meta fund for supporting the community and its members. It's a specific fund that people only donate to if they want to build the EA community. Nobody is donating to EA Meta in the hope that it goes to malaria nets. If you look at the EA Funds page ( https://funds.effectivealtruism.org/ ) they make it very clear that you can donate to Global Health (eg development), Animal Welfare, Long-Term Future (usually AI), and EA Infrastructure.

I have donated to all of these in the past, and I donate to Infrastructure specifically because it seems useful to me to invest some money into getting seed corn we can replant later in the future. I am not super happy with how that grant in particular used my money, but given other good things that fund has done, I'm not going to never donate to it again.

(also, keep in mind that I think the people who run that fund are obligated to use it for EA infrastructure projects, and that often there won't be a lot of good ones in a certain month, and they'll have to cover the ones that they get).

==== Re: $10 million per AI researcher - they definitely aren't *paying* the researchers $10 million each, they are willing to use $10 million to cause them to exist, eg create internships, good publicity, labs, etc and the rest of a pipeline that eventually results in there being a single extra good AI safety researcher at the end of it.

I don't know if they actually spend that - it's just their announced willingness of what they *would* spend if necessary - but clearly they haven't spent enough to lure you and your colleagues away from whatever you're doing now!

==== "My issue is with the inevitable cognitive bias: "what will save the world is to invest in my personal hobbies and give money to all my personal friends (so they can learn to ride a bike).""

I agree this is a problem. I think we need to balance the imperative not actually to become power-seeking, with the imperative not to be so terrified of appearing power-seeking that we end up never doing anything to promote ourselves. I remember the Fabian Society book I reviewed where the head of the Society (which influenced the entire course of 20th century history and was super-successful) attributed part of their success to having exceptionally good stationery that made everyone respect them. Every other socialist organization had a rough "we represent the world's poor" look, and the Fabians wasted a bunch of money on stationery and managed to capture the ears of the rich and powerful. This has made me nervous about demanding that people put too much effort into looking spartan.

My opinion is the HPMOR grant was probably good, the burnout grant was probably bad, but that creating toxic incentives where we yell at grantmakers for funding the wrong things without knowing their constraints is the quickest way to have them never do anything interesting or illegible. EA Funds is deliberately for unusual illegible projects, and I'm fine with them trying a few 5-digit experimental things as long as the 9 digit grants keep going to malaria nets, big AI research labs, etc.

Expand full comment

I appreciate this thoughtful response. I better understand your perspective now. Thank you!

I think yelling at grantmakers is a good thing and is the only way to eliminate the drift from an EA focus to an EA-in-name-only-we-actually-fund-our-friends focus. I think everyone is too quick to defend any decision the grantmakers make, no matter how whimsical and thoughtless, with "but you don't know their constraints and deliberations" (true, but only because they don't tell me their constraints and deliberations...)

Also, I could be wrong, but I was under the impression that the money came from a long-termist fund (i.e. AI focused) and not an EA Infrastructure fund. I would complain much less if the money was donated for the purpose of supporting EA itself than if it was donated for the purpose of combatting AI risk. It has way too many epicycles for the latter (instead of funding people to work on AI, let's give promotional material to teenagers in the hopes that it will affect their future career; actually, instead of promotional material let's pay them to read HPMOR; no, even better, instead of paying them to read HPMOR, let's print out physical copies and just hope they'll read those.)

By the way, the fact that various IMO coaches already knew about HPMOR kind of supports my point: most IMO participants will naturally find out about HPMOR even without any intervention, because it has a wide reach in nerd spaces.

Expand full comment

I don't have a problem with people inviting "in-paradigm" criticism while rejecting out-of-paradigm criticism. If I ask for specific tips on how to improve my piano playing, I'm not interested in hearing your opinion that pianos suck and I should learn the guitar instead. If I've already decided on the piano you're not helping, you're just being annoying.

Having in-paradigm discussions about how to better do the thing you're doing is always valuable, but out-of-paradigm discussions tend to be louder and more distracting, especially as there's always more outsiders than insiders.

Out-of-paradigm discussions are valuable too, but they can't always be the order of the day.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

"If you incentivize people to preach at you, they’ll do that. But they can’t preach the tenets of the new paradigm, because they don’t know it yet. And they can’t preach the implementable tenets of the old paradigm, because you’ve already implemented them."

I see a lot of this in the social justice movement. We've already implemented the easy parts of the paradigm (e.g. de jure desegregation, gay rights), so we get a lot of non-specific, non-actionable complaints ("Black lives matter!"). The irony is that if there was a new paradigm, it wouldn't be about fighting oppression. It can't be; that's the current dominant paradigm. I strongly suspect most social justice activists would react to a new paradigm by calling it racist, i.e. by becoming the hidebound retro-thinkers who don't get the new thing.

Expand full comment

There's a fetish for criticising the government in the US. Like saying anything negative about the government automatically notches up your social status and in-groupness. It's an unquestioned cultural observance, a birth-right, plus it has that attention and stress release that sustains and naturalises habits of thought. It doesn't get much reality testing. There are downsides to this social game.

Any organisation that resembles a government will get it too. The EA movement is a kind of alternative government (ok, alternative-to-government, if you must) populated by those high on the fetish scale with heads full of ideas. EA occupies a paradoxical government-anti-government niche; it's going to get hit.

The real and awful downside is the manifold ways it screws up the process of government, now at web speed and scale. Governments aren't going away anytime soon. Magical thinking can work, but it often makes problems worse. In this case, clearly worse, in my view.

Expand full comment

I have this exact same opinion except about people who do US gov apologia. I think the truth is there is a good defense of the USGOV and also good attacks on it, but both are much rarer than the in-group social signalling stuff.

Expand full comment
founding

Yes!

"USGOV" is _both_ terrible AND awesome (in the normatively positive sense)!

Expand full comment

This is classic salesmanship and propaganda. You point out your own faults ahead of time so it seems like you've addressed them and/or are open to that sort of thing, then steamroll ahead with the things you already wanted to do anyway. It derails real criticism and fills in the part of the conversation which might get filled in by something actually controversial. Like replacing the paint by numbers psychiatrists of the world with a simple form patients fill in.

It is in the 'dark arts' of artificial self-flagellation and faux shows of how you're so open and progressive and yeah yeah, we've covered this part of me being wrong already... and it turns out I'm not wrong!

It is a framing tactic to do just as you said: create a false non-actionable appearance of things that sound controversial or critical (which are even self-nominated!), but which pose no concrete challenge to any specific thing anyone is doing -- no asking them to stop doing that, do it differently, or do something new or actually risky. This is a classic herd-mentality defence mechanism.

People play along with this unwittingly and need no sophisticated understanding of what they're doing... just as in all things in life. This is the way all propaganda infects the mind and people parrot empty words at the right time to forestall any actual action or change on their part. People love doing nothing - if you give them an 'out' to already be right and offer meaningless 'awareness' of things then they'll pick it almost every time. People get to highly paid professional positions and niche intellectual places by jumping through all the hoops put in front of them like good doggies. A lifetime of hoop jumping and barking on command successfully doesn't lend itself towards deep reflection or critiques of what they're doing. As with soldiers or doctors, they were 'just following orders'.

It supports the inertia of their mentality to just keep on trucking and not think about things like $225 fees for a missed appointment from someone who only makes $18k a year who had to pick up an extra shift or get fired - or who had a breakdown or depressive lay down episode or panic attack or whatever mental problem they are seeking treatment for. The financial penalty IS THE treatment....maybe? Let's go back to talking about something that doesn't personally challenge things I'm doing to make money only treating those who can afford to pay! I've got a lovely batch of approved criticisms we can distract ourselves with over cocktails.

Like how 'the system' is to blame for them being too poor to pay for my expensive services...as if any economy exists where peasants can afford the services of professionals - doctors, lawyers, accountants, and psychiatrists will always be too expensive for serfs. But choosing to personally earn less money to help more people is in the no way taboo basket!

If you want to know if you have a real criticism, people will get angry about it and you may well get punished for speaking it. That ketamine guy, so annoying, let's not invite him back next year.

A real hard one would be... if you're just going to treat people with a protocol spelled out by some professional body based on asking simple questions about how they feel, can't most psychiatrists simply be replaced by a questionnaire? That'll really get them angry!

You personally are a fraud... but then Scott asks them about making a choice outside the guidelines, such as with the ketamine example, and booooom, they're upset and don't want to talk to you anymore. For them to personally take a risk or have to learn the differences rather than follow the guidelines like a damn recipe book? They've got insurance and lawsuits to worry about! Taking a risk with 0.5% blowback could lead to a lawsuit and the loss of a decade of their damn lives to train and get certified as a doctor!

They plan to treat more than 200 patients, so why risk 10 years of YOUR life for other people who are already failures? They're also here to help! But not at any cost or risk to themselves. It's the insurance industry to blame, go look there! Leave me alone! I've got a trip to Sonoma and am taking my 3rd long weekend in the first half of this year and you're really cramping my mood! Even a tiny risk to me is unacceptable. Racism is still bad, let's go back to that topic!

No way they'll risk their personal status or have an independent medical view! So why do they exist again?

So tell us again: why can't the vast majority of paint-by-numbers psychiatrists be replaced by a simple form which doesn't cost $300k-plus a year in salary? You can get a 5-10x savings by having some $30k-60k a year nurse make sure people fill in the form correctly to discover which pill is right for them! Hurray! Cost disease dispelled!

Expand full comment

A lot of what's being described as "paradigmatic criticism" is not closely related to Kuhn's idea of paradigm change.

The most common view in the philosophy of science before Kuhn was Popperian falsifiability. Popper believed that theories were rejected when they made false predictions. Kuhn rejects falsifiability because (1) all theories have always been falsified and (2) then what? You can't do science with no concepts to understand the world at all. Kuhn instead argues that a paradigm is only replaced when there is something better to replace it with.

Telling EA that they are too individualistic does not contribute to a Kuhnian paradigm shift. It's trying to falsify EA without offering a replacement. What would move towards a potential Kuhnian paradigm shift would be to create an alternative altruistic organization that was more communitarian. Initially, this alternative organization would only be better than current EA in a few specific areas. But as they work more on those areas, and as more people start shifting to their way of doing things, eventually they might become the dominant paradigm for this community.

Expand full comment

I am with you here. People are talking about "paradigms" without really understanding Kuhn or the philosophy of science.

Expand full comment

SA has read and reviewed The Structure of Scientific Revolutions: https://slatestarcodex.com/2019/01/08/book-review-the-structure-of-scientific-revolutions/

I don't think that it is one of his best book reviews. He seems confused on what the book is about and doesn't update his understanding of science very much from having read it. Other people disagree and think it's a great review, so maybe that's just my reading of it.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

That doesn't read like an actual book review of Kuhn. I am going to leave it at that. I also maybe am too old to make sense of how people are reading it contemporaneously with the idea (correct or not) of "paradigm" already well embedded into the mainstream. When I came to it there were still significant debates about First Edition vs. Second Edition.

Expand full comment

Ha ha. I didn't even realize there are 3rd and 4th editions.

Expand full comment

Watching crypto markets develop was fascinating because they reproduced a huge number of historical scams and frauds that had largely been driven out of 'real' markets - it was a chance to see history replay, with some small tweaks in the initial parameters.

Watching EA & Rat spaces independently work their way through the same issues and dilemmas as eg Aquinas, 800 years later, has a similar vibe. There are probably other parallels that could be drawn between that space and monasticism.

Expand full comment
founding

Yes!

Except (maybe – it's not clear from your comment), I think both have been great – in the sense you describe, i.e. reproducing the same intellectual work as others.

Even if both were _only_ just 'translating' or 'adapting' earlier work/events, that's really useful to do! It was _less_ obvious that 'market history' was 'inevitable' than it now is and reading Aquinas (or the original/historical translations/adaptations) is MUCH harder than 'recapitulating' the same basic arguments.

And it's not like Aquinas was likely _that_ much more 'original' originally anyways :)

Expand full comment

I have mixed feelings about this. It infuriates me to hear EAs trying to claim credit for their willingness to hear criticism. But it also seems odd to require them to do so in the first place.

Let's say I have a negative view of socialism. I think it's terrible and has brought more suffering to the world than anything else human beings have ever foisted upon themselves. I'm really not going to be impressed by how willing socialists are to receive criticism, unless of course they come round to the understanding that socialism is truly terrible, they should stop advocating for it, and join me in ridding the world of its malign presence.

What I'd actually see in socialists being willing to entertain criticism is some perverse attempt to strengthen their misbegotten ideology. They'd believe that having survived criticism it had become even more worthy than it was before. The criticisms allowed would never be paradigmatic, merely concerning minor matters of implementation, or whatever went wrong the last time socialism was inflicted on some unfortunate population.

But from the point of view of people who really like socialism, my demands are ridiculous. I'm seemingly only interested in socialists accepting the kind of criticism that immediately and irrevocably ends socialism itself. Why would anybody do that concerning something they already fundamentally believe in?

I feel something very similar with EA. I think it is a bad idea in its entirety, so I'm unimpressed by EAs attempting to strengthen their belief in their ideology by welcoming 'criticism', because to me this criticism can only be superficial and non-paradigmatic. For the "Yeah, we're open for criticism" to mean much, it would have to include the realistic possibility that the believer might renounce the EA worldview as a whole. And of course, in general, nobody is willing to do that, me included. For the very good reason that you'd be offering up something akin to all your values, all at once.

If you're an EA, do you ever countenance the possibility that what you're doing is a properly bad idea and you should just cease and desist? No? What then do you think being open to 'criticism' is going to do for those of us who think your whole enterprise is a terrible mistake?

Expand full comment

> If you're an EA, do you ever countenance the possibility that what you're doing is a properly bad idea and you should just cease and desist?

Yes. In fact, even if the benefits were merely dubious, that would be enough, as this would allow me to spend more money on entertainment or something.

> I feel something very similar with EA. I think it is a bad idea in its entirety

Why? Are you also considering charity as bad (like one person above) or for some other reason?

Expand full comment

Though 'charity' as an idea is extremely broad and variable, I have some negative thoughts about it generally, while having no problems with (and supporting) various examples with particular characteristics (local, participatory, issue-focused). EA strikes me as epitomising all the worst aspects of interventionist money-based charity.

My last question was really a reference to the Anti-Politics Machine review of a few months ago - the book focuses on a particular example of charitable intervention in Lesotho that was generally stupid, and a disaster in many different ways. Those of us familiar with this result of 'charity' can only say (again) "Why are you even surprised at this result?" But the EAs in the comment section were pretty much all "Oh, that bit of charity was a bit of a fuck-up, but maybe if we keep tweaking, we'll eventually stop ruining other people's lives. And after all, we really really really want to help. It makes us feel very good!!"

My point is that the motivation to be a giver is so strong, that no amount of evidence will convince the believers that what they are doing is, on balance, a bad thing.

Edit - typo

Expand full comment

> we'll eventually stop ruining other people's lives

was it actually ruining their lives or was it just a waste of money?

Expand full comment

In the Lesotho example I think the overall impact was assuredly negative. That may make it a minority case but I certainly don't think it is alone.

Expand full comment

Can you link description of that case?

Expand full comment

"I'm really not going to be impressed by how willing socialists are to receive criticism, unless of course they come round to the understanding that socialism is truly terrible, they should stop advocating for it, and join me in ridding the world of its malign presence."

What if they come around to the belief that the time is not yet ripe for full-blown socialism, and that they should pursue semi-capitalist, semi-socialist policies until the time becomes ripe? Consider that the liberalization of the Chinese economy has raised a billion people out of poverty and into a middle class lifestyle. In utilitarian terms, it is one of the best things that has ever happened to humanity. If the opponents of Mao, like Deng Xiaoping, had taken the view that only overthrowing socialism in one fell swoop was worth trying and any reform was useless, they would have been banished to the labor camps and 1.3 billion Chinese people would still have the quality of life of Nigerians.

One of the other best things that ever happened to humanity was the introduction of labor laws--the 40 hour work week, the 2 day weekend, paid leave, workplace safety, the right to strike, etc. These were achieved, in large part, by socialists who believed in reform. There were many socialists who didn't believe in reform, and refused to participate in electoral politics because they thought overthrowing capitalism in one fell swoop was the only solution. Had all socialists taken your view that making capitalism incrementally better is not worth it, billions more working class people would have toiled away their lives in misery.

Expand full comment

Indeed, fair points. My socialist analogy was never going to be perfect and I overdid the rhetoric. But I disagree with you that one of the best things that ever happened to humanity was the introduction of labour laws. I would prefer the freedom to work the hours that I choose etc. And if socialists have been partially responsible for minimum wage laws, I would consider that influence also malign and negative.

But that is all still within the analogy, and if it doesn't work, then all that's happened is that I have failed to make my point about EA. I don't want EAs to compromise and just do a little bit less damage and cause a little less misery, I want them to stop what they are doing forthwith! I don't want shoplifters to restrict their activities to, say, only low value items, I want them to keep their hands to themselves. Stop with the shoplifting already! And, stop with the EA completely!

Expand full comment

Deng Xiaoping also used the military to murder peaceful protesters in Tiananmen Square. Your praise of Deng could just as easily be applied to Stalin, and frankly I’m not willing to accept your argument that sustaining the Chinese communist regime was in any way good. And in terms of standard of living, China still lags behind openly capitalist East Asian countries.

Expand full comment

There's 4 orders of magnitude difference between the number of people killed at Tiananmen Square and the number killed by Stalin. There's at least an order of magnitude difference between the improvement in living conditions due to China's liberalization, and the improvement (if any) due to Stalin's industrialization. So no, my praise of Deng could not just as easily be applied to Stalin, unless you think a townhouse with 10 people is the same thing as a city of 1 million.

China lagging behind openly capitalist countries is a good argument for capitalism, not a good argument for not reforming the former communist regime.

Expand full comment

In other words, you are happy to praise mass murderers as long as they didn’t murder too many people and they managed to grow the economy enough.

Expand full comment

See http://benjaminrosshoffman.com/parkinsons-law-ideology-statistics/.

When you have an institution with self-criticism as a cultural centerpiece, the self-criticism is always ritualized. Oftentimes, the *self-criticism will be extremely accurate and correct* (e.g. X is too authoritarian, X is too political, X is missing a bunch of obvious corruption due to a self-serving bias towards legibility), but still ritualized, as a symbolic protection mechanism. Even this exact mechanism can be called out, and people will still nod their head sagely, rather than feeling curious or fearful, both of which are emotions that could motivate actual change. Getting people to read The Uruk Series won't help; only focusing on the object level and discomfiting people will help.

I think this is what is happening with EA. When you take a normie and have them do things they know are wrong as part of an institution, you will get confabulations and complete failures of self-reflection. When you do the same thing with hyper-reflective hyper-scrupulous autists (term of endearment, I am one), you will get reflection-as-ritual, which strips away its semantic value. This paragraph is more speculative.

Expand full comment

With advance apologies if this point has already been made, I think one should consider the possibility that the reason the questions cited as possibly starting a real fight would start a real fight is because they actually challenge very profound issues in medical practice. I can't speak to levothyroxine and so on, but "asking if they’re sure it’s ethical to charge poor patients three-digit fees for no-shows" certainly does touch some of the most fundamental questions. Our age's medical practice, indeed our whole society, is based on a model of organized, bureaucratic rationality (I mean rationality in the sociological, Weberian sense). In this model, rules are established and must be consistently followed. Schedules are set and punctuality is a moral value. Good practice involves following those rules. If patients are informed in advance that no-shows will be charged for their appointment, then all no-shows must in fact be so charged. This ensures fairness, predictability, and (crucially) unimpeachability before supervisory organs. The other model is essentially feudal. Here, the dr's practice is the dr's property, and the dr may do what they like with it, "to the displeasure of anyone who says otherwise," as one 14th-century Aragonese hidalgo boasted to a royal officer. If a patient pleases the dr, perhaps by being earnest and respectful and really trying to get better, but has problems in life that make it hard to attend appointments--all, and this is key, in the dr's independent, unreviewable judgment--the dr can exercise grace and favor and forgive the fee (a term that derives etymologically from "fief," incidentally). If the patient is disrespectful, high-handed about making appointments, or anything else that displeases the dr, the dr is entitled to damage them with a punitive fine.
In other words, in the final analysis, to ask whether it's ethical to charge poor patients for no-shows challenges one of the most profound and important drives of western civilization in the last 500 years, which is to undo that feudal independence and arbitrariness of judgment, for better or worse, and replace it with bureaucratic, depersonalized rule-following. I suspect this all sounds a little flip (or, conversely, grandiose), but I'm deadly serious.

Expand full comment

This is a huge problem because the system is inherently unstable due to hypocrisy. The doctor isn't held to that standard. They don't have to stick to their schedule and the patient has no recourse. The same applies to stuff like utility companies or construction companies where they can give you wide windows of like 4-12 hours you have to be home for their crew to arrive and other stuff.

Businesses have rights, not people. It isn't depersonalized rule-following for everyone, aka equity or equality. It is privileging the wealthy.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

What if the point of criticism is not to criticize at all, but rather to associate oneself with (in Lacanian terms the object signifier of) a movement or organization or social group which has adopted criticism as part of its identity. This would correspond to the student who attends the university, not out of a desire for knowledge (especially not knowledge about himself) but instead to associate himself with the university (which purports to have knowing, as its object signifier). Actually knowing anything that might threaten his association with the University would defeat his purpose in pursuing this association in the first place.

Since Lacan's day perhaps the "university" (outside of STEM obviously) is no longer so much associated with knowing as with 'critical studies'; those of us who come up through such a system have learned to identify ourselves with being critical, but have no actual interest in criticism per se. When for instance, I talk about my white privilege, I am communicating to other university-trained people that I am a critical sort who thinks deeply about race and equity and justice. This, of course, is in sharp contrast to people who are not like us University types. Those people aren't critical at all, and in fact, are so uncritical as to accept on face value that professional wrestling is real and that their politicians vote the way big oil wants them to because it's what's best for America.

Lest I appear one-sided here, imagine instead that I am beating my breast publicly on a powerlifting forum about how we all need to be trying harder and pushing through for more weight, for more reps, for more mass. Here I am associating myself with a particular kind of "no pain, no gain" self-improvement, and I'm not interested in any self-improvement that might challenge why it is that I wish to identify with this particular narrow kind of personal growth. I might be able to post in good faith a criticism of ignoring zone two cardio if I tie it narrowly to metabolic gains and long-term muscle development. But if I post something about how cardiovascular health is more important than size, I would be dismissed at once as an obvious interloper. Talking only about how real serious lifters have good metabolic health and castigating myself for not doing enough to develop myself in this regard would be a signal in favor of a "more pain, more gain" self-improvement, as well as a kind of shot across the bow of the lazy and untrained (of which of course there are some on this hypothetical forum kept around as targets for abuse but which mostly exist in the outside world -- not in this community!).

Perhaps what Scott here is calling specific criticism is in fact just true criticism. (Lacan would say that this is criticism in the mode of the 'hysteric' rather than the 'student' -- one desperate to actually know because of the suffering one is already experiencing, rather than merely wanting to be associated with the signifier of "knower".) Even this (hypothesized) 'true' criticism has little power to reach others. Nobody who is deadlifting six plates is going to want to hear about how they need to spend more time walking briskly uphill. And nobody who is comfortable discussing their white privilege wants to be called out for obvious classism and a genuine hatred of the poor.

Final point: I am way out over my skis here in any attempt to apply Lacan to section four of Scott's article, so I'll switch to an older but more familiar-to-me form of discourse. I can't help but question whether or not actual paradigmatic shift ever happens as a result of genuine self criticism (at least in the kinds of paradigms we're talking about here with Effective Altruism, or things that seem similarly cultural or at least culture-adjacent). Perhaps rather people who are critical of something gain in numbers and power and eventually change the thing that once they could merely snark about. This looks like paradigmatic change as a result of criticism, but it's just business as usual. The old guard who favored X regressive policy weren't convinced by reformers who favored Y new progressive policy, they were simply replaced and our (internal and external) Press Secretary stitches this process together into "look at how this organization changed its mind!" Comparing this to the accumulation of scientific development seems overly optimistic, though of course, I acknowledge that the scientific communities of the past functioned according to all sorts of power dynamics as well (perhaps they do just as much today in a way that is less visible to me personally).

Expand full comment

Great comment even if a little loose. Especially the last part.

Expand full comment

Interesting comment

Expand full comment

The real problem with EA is they devote 100% of their time and effort trying to bring up the bottom 10%, and 0% of their time and effort trying to do something to move the boundary forward for everyone. I'd be interested in giving to charity, and would like recommendations of which charities give the most bang for the buck, but all their charities are laser-focused on helping the poorest of the poor. That's all very well and good, but where are their recommendations on how to most effectively support curing aging? Or terraforming Mars? Or fusion power? There are hundreds of important randomised controlled trials that aren't being run because of lack of funding. But they've got nothing -- not a single recommendation -- for the entire field of medical research; nothing for people who want to donate to help science or technology working at the cutting edge.

Expand full comment

> All of these are the opposite of the racism critique: they’re minor, finicky points entirely within the current paradigm that don’t challenge any foundational assumptions at all. But you can actually start fights if you bring them up, instead of getting people to nod along and smile vacuously.

I think you nailed it, minor finicky points are actionable by individuals, and that makes those individuals defensive.

> I don’t know, man, I don’t know. Thomas Kuhn seemed to think of paradigm shifts as almost mystical processes. You don’t go in some specific direction carefully signposted “Next Paradigm”.

I think making surprising progress in a *specific* direction can do it, rather than focusing on the "big picture" (this seems to overlap with your conclusion as well). Kind of like Musk with Tesla. That wasn't really a paradigm shift, but it was disruptive and disproved the decades-old "prevailing wisdom" that electric cars just wouldn't sell. Pretty much every engineer knew the reasons automakers cited were mostly bullshit, it just took someone with the courage and funds to push and see it through.

For a real paradigm shift, like general relativity and quantum mechanics, most physicists knew there were fundamental problems with prevailing theories at the time as well. How those questions all get resolved is an open question, but it seems like what matters most is that someone is paying attention and puts in the work in specific directions. Sometimes they'll then notice a generalization that then becomes a new paradigm.

Expand full comment

I just want to know where the Deontological and Virtue Ethics EAs are at. Especially since EA is something of a natural fit for Virtue Ethics, with the whole deal with being wise and how that relates to charity (insert Maimonides writings etc). The trouble is at the moment EA is dominated by Utilitarianism, so it's hard to separate out a critique of EA from a critique of Utilitarianism. Would arguing that you can't really assign utils to human and animal lives, and so measure how many pigs are worth a human, be a criticism of the former or the latter?

It's hard to come up with a good criticism of the idea "If your goal is to do X, you should find the best way to use your resources to accomplish X". But criticisms of utilitarianism go back as long as utilitarianism does; I don't think anybody's going to have anything new to add.

Expand full comment
founding

Deontology and virtue ethics are both _fantastic_ heuristics but it seems 'obvious' ('trivial'? in the math sense) that both are 'approximating' utilitarianism/consequentialism.

The _point_ of deontological duties/obligations, and practicing 'virtuous' behavior, seems _obviously_ to be to result in a 'better future' (relative to alternative possibilities).

Either 'pure deontology' or 'pure virtue ethics' seem like 'moral slavery' to me otherwise, i.e. 'You MUST fulfill these duties/obligations!' – 'Why?' – 'BECAUSE!', or 'You must practice and manifest these virtues!' – 'Why?' – 'BECAUSE!'.

_All_ moral philosophies/theories must, inevitably, grapple with 'the mysterious origins of Good and Evil' and none of them have ever discovered any 'universally compelling' argument about this.

The 'real' answer – which, again, seems 'obvious' to me – is that _of course_ our moral intuitions are a mess! They _evolved_ and The Blind Idiot God (i.e. evolution via natural selection) is a cruel cruel asshole. It's a Miracle they're not MORE inconsistent/incoherent (or sometimes outright contradictory)! And yet, we have nothing else really to go on in building any better moral understanding.

Expand full comment

>The racism critique doesn’t imply any specific person is doing any specific thing wrong.

I don't buy this for one second. It may not imply any specific thing is wrong all by itself, but it's easy to *attach* to some specific thing. Just claim that it's racist to believe or support X. Voila, anyone who supports X is doing something wrong. Or claim that since your paper is about racism, it's racist to deny X. Voila, you can call critics of your paper racist.

In other words, please stop with the mistake theory. Racism accusations are a powerful weapon. Go ahead, say "this paper about racism is trash, and also, some races are smarter than others". You'll quickly discover that yes, you will be accused of doing specific wrong things.

Expand full comment

Agreed

Expand full comment

I have a general feeling of unease whenever the powers that be preach radical social justice, that they're waffling vaguely about paradigms in order to avoid having to make specific changes that would actually produce material differences to disadvantaged groups. It seems like it's much easier to host a talk about how overcoming capitalism is the only way to demolish patriarchy, than to change the wording of criterion x.y.z in the official promotion process document in a way that would less disadvantage staff who are also mothers.

Expand full comment

I didn't think the Anti-Politics Machine offered vague, paradigmatic criticisms of the development model - I read it as offering very specific criticisms of a particular development intervention (cattle were functioning as pensions, trying to encourage people to replace them with things that couldn't function as pensions was never going to work, etc.), and a plea for finding out how a system actually operates before trying to change it. I think this is the kind of criticism people like because it feels like you're finding out How Things Really Work - I can't imagine that the project directors responsible for the interventions criticised would have found the book's criticisms unthreatening. The book claimed that they were idiots.

Agree that this practical, specific criticism is how things change, kicking every support for a model to see which ones collapse and then seeing what you've got left.

Expand full comment

> A case against RCT-driven development aid, somewhat related to the one in Anti-Politics Machine, got 389. It was the #6 highest upvoted post of all time;

What kind of consequences did this have? If any.

Expand full comment

It seems to me that if any of these left-wing criticizers got their way and radically overhauled EA, all it would mean is that many of the big EA donors would simply find somebody else to give their money to. So it's strange that they should be trying to radically transform EA rather than create their own thing, or focus on the more obviously left wing charities that already exist. Perhaps they feel that EA has enough name brand recognition and relatively unconditional financial support that they can trade off of even with their new left wing paradigm?

Expand full comment

Interesting paper about the impact of individualism and collectivism. Collectivism may well be better for low IQ countries.

https://www.sciencedirect.com/science/article/abs/pii/S0160289615000501#:~:text=In%20this%20explanation%2C%20individualism%20moderates,IQ%20(and%20poorer)%20ones.

Expand full comment
founding

I really appreciate both this post and Zvi's and don't think they're (very) contradictory!

I'm _extremely_ sympathetic to EA and mostly think it's fine – tho I'm at a considerable 'personal distance' from it and mostly just read some forum posts and posts like this one (or Zvi's). It feels like one of the big successful 'children' of our 'rationality movement'. It's grown up enough to have moved out and been living independently for years now :)

I do believe there's probably a lot of Sad angst and 'self-flagellation' internal to EA that I'm happy to not have witnessed up-close, and maybe that's all worse _because_ of how EA draws the relevant selected people together, but I think they mostly would have found other outlets for the same thing regardless.

And the more general critique just seems like (yet another) Sad fact about people. At sufficient remove, it's pretty fascinating, but being able to find someone that can even hold it in their heads as a coherent idea is like finding manna in the desert!

Expand full comment

One problem is that people insist on there actually being a paradigm for a given sector in the first place.

What if there aren't any actually useful paradigms, and we just have to do our best to analyze things on an ad hoc basis because all the "rules" we think exist are incorrect and so the only way to tell if something is good or bad is by looking at it and analyzing it?

This is basically what we have to do with art and writing. There are no rules for either, just warning markers that if you go this way, it's a lot harder to produce something good.

https://www.fimfiction.net/blog/539173/rules-for-writing

A lot of people want there to be rules because they serve as a heuristic for judging whether something is a good idea or a bad idea, or is good or bad, but in a lot of fields of human endeavor, there are no hard and fast rules.

You suggested in this essay that we have to adopt some new paradigm after we discard the old one, but I would argue that this is wrong; we don't actually have to accept that there is a paradigm at all, especially when the evidence for any paradigm's existence is lacking.

Expand full comment

What I find fantastic about the criticisms of the medical system from within is that they come with obvious actions the criticizers could take to fix the problem. It's full-on virtue self-signalling. What do I mean?

* Complain that not enough black people have access to X care? Move your practice to a predominantly black area.

* Complain that poor people can't afford your service? Charge less for your services!

* Too many disadvantaged people can't travel to your office? Do house calls!

Alas, that would be inconvenient. Instead, people want to feel good for doing something *about* a problem while not having to *do* something about a problem.

Expand full comment

Yes, doctors who prescribe esketamine instead of racemic ketamine are abusing their patients for the benefit of the bottom line of pharma companies. And more to the point, making it more likely that those patients will eventually commit suicide.

I mean, I'd say that sounds like "malpractice" to me, but of course it can't legally be considered such because they have the blessing of the state to not actually treat their patients and make a lot of money doing it.

Yes, I'm still *very* evangelical about this topic since it's the only reason I'm still breathing and didn't kill myself. (Four year anniversary of the day after my planned exit, next Friday!)

Expand full comment

And, of course, as a libertarian* I'm pissed off about the rent seeking involved in even racemic ketamine administration, since the *best* price I've found so far is $250 for a procedure that uses $5 worth of materials.

If I ever happen to get lucky and win the "Internet Lottery" (some sort of stock options or cryptocurrency "hey, suddenly you're a billionaire!" type thing) I'm starting a series of as low cost as possible ketamine infusion clinics. I don't care if it's the most "effective" altruism I could do with the money, but it's the one I care about most, having lost entirely too many friends that way.

*Not technically accurate, but for most people it'll get you an idea of where I'm coming from within at least ICBM coordinates. But almost nobody has any idea what the hair-splitting of "agorist / market anarchist" actually means, and they generally at least have a vague conception of "libertarian", so fuck it.

Expand full comment

Or maybe the more effective thing to do would be to pay for the FDA trials on straight racemic ketamine without any expectation of making money back off of it. I'll have gotten the money for essentially nothing (I'm already being paid for the work I do, the "lottery" aspect is just a bonus) so spending a whole fuck ton of it proving to the government that already agrees that it's safe that it's safe doesn't really "cost" me anything I had before I got stupid rich.

I dunno. If it ever comes up I'll write in and ask for advice.

Since right now I'm working on not becoming *homeless* yet again, it's probably not something to spend a *lot* of time plotting the details of yet. But it's nice to fantasize about saving people's lives since there are so, so many I failed to. But maybe that's just because it's July, and this month starts off with a whole series of suicide memorials for very close friends.

Expand full comment

Thalidomide kids would like to have a word with you.

You may be good at math, but you're shit at chemistry. Don't encourage people to poison themselves, please.

Expand full comment

I think a simpler explanation for what you're observing is simply that these "paradigmatic criticisms" are actually tacit reassurance that the existing paradigm is fine.

Consider an evangelical priest giving a speech to his congregation about how they all need to stop thinking of themselves and think only of what Jesus wants. To someone with no knowledge of Christianity, that would probably sound like a criticism which calls for them to adopt a new paradigm, but anyone familiar with the cultural context recognizes that that is the exact opposite of what's actually happening. It's the same with telling a conference of academics that they need to consider minority perspectives more; the actual message that's being sent isn't "you need to change your beliefs" but "all of your existing beliefs are completely correct, the only problem is that you and others aren't actually doing the things you think everyone should be doing".

Expand full comment