615 Comments
Comment deleted
founding

most community churches don't have billions of dollars that they deliberately spend to change the world!

Comment deleted

It matters because the stakes are decently high with EA, unlike points of division with an online church.

Comment deleted (edited Jul 20, 2022)

I'm a consequentialist, but I'm fine with you focusing on cultivating virtue. Virtue seems both instrumentally useful (virtuous people do more good) and intrinsically valuable (people place value on themselves and others being virtuous). So keep taking virtuous actions; you may never become a utilitarian, but that's not the point anyway.


I agree. I despise virtue signaling. Tell me what you did; that's what matters. Doing good things does make you feel good, but feeling good because you believe the right things is mental masturbation.


As a sometime fan of virtue ethics and longtime foe of optimizing consequentialism, I have to say this is the first post that made me stop and consider and change my mind a little bit. Thanks.


As a Stoic, I do not think that consequentialism and virtue ethics are fundamentally in tension. The Sage's actions will be the best approximation of the path which yields the best consequences.

Why is there a conflict between Aristotelian virtue ethics and consequentialism? I am not familiar enough with it.

Comment deleted (edited Jul 21, 2022)

I don't have a source for Aristotelian ethics, since I'm largely unfamiliar with it.

After reading enough Seneca (the Dialogues and the Letters on Ethics), though, it became very apparent to me that Stoicism is rationality, rationality is about winning, and winning means earning victories for humanity. Taking rational action means taking the decision alternative with the highest expected value, given constraints on knowledge plus model uncertainty. So that's how Stoic rationality implies consequentialism.

That's the short version. I was trying to write a longer version, but I'm unsure where to draw the line between succinct argument and tangent. So best not, unless requested. Originally I tried a version where I quote-mined each point. But I don't have a whole day.

Seneca is a joy to read in English. My writing not so much.
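Putting the decision rule above into code may make it concrete. A minimal sketch (the models, credences, and payoffs are invented for illustration; nothing here comes from Seneca): score each action by its expected value averaged over candidate models of the world, then pick the best.

# Candidate models of the world and our credence in each (invented numbers).
models = {"optimistic": 0.3, "pessimistic": 0.7}

# Payoff of each action under each model (also invented).
payoff = {
    ("act", "optimistic"): 10, ("act", "pessimistic"): -2,
    ("wait", "optimistic"): 2, ("wait", "pessimistic"): 1,
}

def expected_value(action):
    # Average the action's payoff over model uncertainty.
    return sum(credence * payoff[(action, m)] for m, credence in models.items())

best = max(["act", "wait"], key=expected_value)  # "act" (1.6) beats "wait" (1.3)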

Comment deleted

"Logos" refers to the study of logical propositions. Sophistry if you ask Seneca, who has a low opinion of that pursuit. (he's an intuitionist, I guess... or undertandably doesn't see what ethics has to do with formal logic)

The Stoics prefer an action that gives them "preferred indifferent things" (money, status, power, health, etc.) over the inconvenience of "dispreferred indifferent things" (poverty, sickness, exile), if given the option. That's a utility function.

Consequences of action alternatives should be weighed relative to one another. When we do, choosing an option with lesser value (ceteris paribus) would be the same as choosing a "dispreferred indifferent". This is quite obvious, and the idea of "more > less" is a level of abstraction that the Stoics would have had access to.

"Maximizing expected value" is just that plus probabilities.

The ancient Stoics lacked our numeracy and many mathematical concepts that are ingrained in modern thought. But if you have all that, decision theory becomes an implicit corollary. The Stoics lacked cognitive science, but they did know enough psychology to describe how an aspirational rational person (the Sage) ought to work and where we fools fall short. Read Seneca's "Of Peace of Mind" and you will probably find your own flaws pointed out in an insightful manner. That's because he basically rattles off all the things that could possibly be wrong with you. And those are not mere Barnum statements; they are predictable, universal failings of the human condition. The Stoics had a theory of mind.

So in Aubrey Stewart's translation, I find that "reason" and "wisdom" might as well have been translated as "rationality", and "wise" as "rational". It would not change the meaning.


The tension is at the extremes. Consequentialists (specifically *maximizing* consequentialists) will follow consequence-based arguments to very different conclusions that virtue ethicists will hold themselves back from making. See "repugnant conclusion".

Comment deleted (edited Jul 21, 2022)

That's true. I am coming to believe in a blend of virtue ethics and satisficing consequentialism. Here "satisficing" is Herbert Simon's term meaning "good enough above some bar, stop comparing".

I think the latter goes a long way towards keeping the good (rigorous debate) while doing away with the bad (miserable perfectionism and repugnant conclusions) of utilitarianism.

BUT! I agree fundamentally that it's still a system, and systems serve us, not the other way around, and the virtue of virtue ethics is in never really losing sight of that.
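For what it's worth, the satisficing rule above is easy to state precisely. A minimal sketch (the options, scores, and threshold are invented for illustration): accept the first option that clears the bar, instead of comparing everything.

# Satisficing: take the first option that is "good enough", then stop comparing.
def satisfice(options, score, threshold):
    for option in options:
        if score(option) >= threshold:
            return option  # good enough; stop looking
    return None  # nothing clears the bar

# A maximizer would instead scan everything: max(options, key=score).
charities = ["local_shelter", "bednets", "deworming"]
impact = {"local_shelter": 0.4, "bednets": 0.9, "deworming": 0.8}.get
print(satisfice(charities, impact, threshold=0.7))  # -> "bednets"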

Comment deleted (edited Jul 21, 2022)

The idea of satisficing is that it doesn't matter, because both things are above a certain reasonable waterline. The idea of blending it with other systems is that you can turn to those for arguments against it, either before or after the satisficing, but especially after. The idea of making sure systems serve us instead of the other way around is that everyone can tell this is a bad conclusion, so we just ignore the system instead of following it.


I think repugnant conclusions about constructed thought experiments are mostly sophistry. Unless there's a real-world situation where I would feel compelled by Stoic virtue to divert a trolley onto a consequentialist decision maker before he does something implausibly yet impactfully stupid, there's no conflict?

Jul 21, 2022·edited Jul 21, 2022

"Be an EA worker for EA causes instead of a doctor" is a very concrete point of contention here, and for a lot of people, a repugnant conclusion.

Ditto "It's a moral crime to eat ice cream or have a child because the <numerical> value you could provide to future persons is greater".

Peter Singer's Drowning Child is absolutely not a hypothetical problem. It's a bundle of direct real world problems brilliantly presented *as* a hypothetical so that once you've made your armchair, intellectual answer, you've bought in to a practical problem that could be worked on, now, right now.

I think both pro and anti would typically agree that EA is where the rubber hits the road on trolley problems.

Comment deleted

See my other comment; I think moral systems are best blended with a satisficing component. I think virtue ethics does a better job doing this implicitly, via people's preferences and what they consider virtues (e.g. the virtue of moderation), whereas consequentialist and deontological systems need this bit spelled out and maybe even stamped on the forehead.


Yes, I can kind of see that. But that's a problem with projecting an armchair, intellectualized argument onto the real world, ignoring many of its complexities.

I think that forswearing ice cream for guilt about the children and devoting yourself to EA causes over all other obligations or concerns on consequentialist grounds is not justifiable.

I do not predict that this course of action will be the best utilization of someone's talents for the general benefit of humanity (which, as a Stoic, I do agree is what one should care about).

That's a longer critique though. I might enter it into the contest, if I can write it persuasively.


I agree. Elsewhere in this thread someone linked this great bit of writing which includes a few such examples: https://michaelnotebook.com/eanotes/

I don't know if it's a good use of your time or not, but it would definitely be great to see someone enter the contest to critique these problems of scrupulosity from a Stoic perspective, and I'd certainly enjoy reading it.

Comment deleted

What did he really believe then?


(I deleted because I was completely off topic and probably wrong)


Everyone should feel they have the right to be wrong. You'd get more effective critiques that way. Why do humans get so attached to their ideas? Experiments depend on failure. People in practical fields seem to understand this much better.


To some degree, effective critiques in practical fields are less personal. Of course people can still invest their identity in a particular practical theory, but it is harder and less common.


More than being afraid of being wrong, I have a complete lack of desire to defend my ideas, even when they are right. I find discussions emotionally unbearable.

Not claiming it is a good thing.

Comment deleted
author

Thanks!

founding

Agreed!

(Well, I might write some other comments too about the content!)


What's EA?


Effective Altruism.


Have you really never played the Sims? https://wikipedia.org/wiki/Electronic_Arts


The Euro Area, the official name for the Eurozone.


Eusko Alkartasuna, a Basque political party.


The Evangelical Alliance.


Equivocal Acronym

or just X for short.


Electric Allies, it's an AI alignment group.


Eä is the Quenya name for the material universe as a realisation of the vision of the Ainur. Scott just forgot the umlaut. He's sloppy like that.


Sloppy yourself. It's clearly a diaeresis.


Spectacularly done.


Expected Answer


EA is the true name of SCP-001.


Everything and Anything


Evolutionary Algorithm


Interesting. Seems a big issue in society in general (not unique to EA) is critiques "that everyone has agreed to nod their head to." This post helped me understand how such critiques can be perpetuated because they help people ignore more serious, specific critiques that threaten individuals' relative status or would require them to work/think harder.

By the way, is this argument a tiny bit reminiscent of the old-line Marxist argument that identity politics is a distraction from class conflict? (Which is not to say their solution to class divides is correct...)


Unsure, because both class conflict and racial justice can be posed as generally good things to nod along to and do nothing practical about, or as specifically challenging calls to action.


That's part of the reason, I suspect, why Marxism is so well accepted in civilised society?


Absolutely, vague distaste for capitalism in theory doesn't mean you actually have to do anything about it.


Marxism does expect Marxists to be revolutionaries and to do things about it. It's just that modern Marxists have come to see the Revolution much in the way that modern Christians see the Second Coming - something much to be aspired towards but not actually expected.


Maybe the other way around. Expected but not actually aspired to.


I think it's a general rule that most people do not consistently act as you'd expect them to if they thought their beliefs were true, I wasn't really making a specific point about Marxism - although obviously noncommittal support for Socialism is a useful way to signal that you're socially conscious and care about the lower classes, especially while sipping Champagne with your middle-class friends.


I really like the analogy, and I think it is not accidental. Those religions and ideologies that eventually survive are usually those that mutate to follow the strategy described by Scott. And so Christianity transformed from expecting the rich to give up their money and the ruler to lower the damn taxes into a religion that just wants you to admit that you are a sinful little thing, without demanding much in particular.


Christ demands you give up your sin. Which usually people can't do without admitting they have sin to give up in the first place, hence the necessity to admit being "a sinful little thing." Which sins are the most important to give up seems to be what Christians argue about. Love of money? Sexual immorality? Etc.


This doesn't sound right to me. Religion tends to do best when it's demanding enough that it promotes ingroup solidarity, but not so demanding that it brings about near-certain personal ruin.

When a religion asks little of anyone, it might do OK if it is promoted by society but won't go anywhere if it isn't.

Amidst general religious decline in the US, what religious expressions are actually growing? Amish and Hassidim. Not exactly the least demanding.


That sounds like Bolshevism rather than Marx. Marx thought communism was inevitable, hence calling his socialism "scientific".


You have well articulated Voegelin and the problem of ideology and immanentizing the eschaton.


In that vein, vague distaste for consumerism doesn't mean you have to stop buying shit.


And lack of engagement with the specific historical events. Such as: state control = centralized control = a system a lot like the one you want to take down. Personally, I'd rather have a hapless ruler like Romanov than live in Stalin's monarchy.


I think polite society accepts a watered-down version of Marxism that few actual Marxists would agree to.


Yes, interesting point -- both critiques are potentially ones where everyone can stand around sipping champagne without actually having to do much.


Reminds me of a criticism of a lot of mainstream anti-racism discourse (typified by "white fragility" and corporate diversity workshops) that treats racism as a moral stain that white people must expunge from themselves via awareness and self examination. But doesn't say anything about specific actions that might cost actual money or inconvenience people.

Banks will happily have someone in to tell them how their industry is perpetuating colonialist conceptions of ownership, but would be far less comfortable with "you should fire some of the white men who make up 80% of your managers" or "change your loan assessment policies that disadvantage minorities in these specific ways."


You’ve identified two different modes of critique, which one could call the philosophical and the journalistic.

It’s a lot easier to do the philosophical critique than the hard work of finding the anomalies within the existing paradigm, whether it concerns the relative benefits of two similar psychoactive chemicals, the efficacy of a philanthropic intervention, or the orbits of the planets.

Note that we have a plethora of theories of dark matter and quantum gravity, but rather few, and quite expensive, ideas for how to gather the data that can prove some of them right and some wrong.

This mismatch reflects the fact that theory is relatively cheap and high energy physics experiments very expensive. I suspect that is equally true of pharmacology and social interventions.

To bring this back home to EA: Perhaps not enough money is being spent on measurements.

Or, alternatively, we should accept that systems change slowly, that some money will be spent ineffectively or even counterproductively, and that too much worry about optimization is really a path to neurosis that does nobody any good.


Has "EA is too high-neuroticism" been done yet?


On some podcast, I think Robot Friends, the case was made that EA is just secular scrupulosity.


I wrote a looooong comment on Matt Yglesias's substack a while back which I won't repeat here, but:

I don't object to EA itself. Only to utilitarianism, a completely silly theory which many EA people seem to endorse.

When you're dedicating effort to making the world a better place--as opposed to helping specific people you care about, or just enjoying your life like a normal person--you should absolutely try to be as efficient as possible. EA is great!

But people shouldn't believe that improving the world is the only thing they should ever try to do. Anyone who actually believed that would end up neurotic and miserable (there's your "secular scrupulosity").

Apart from the fact that there's probably no such thing as "utility", that makes the theory self-defeating.


I feel like you're conflating "making the world a better place is good actually" with "this is the only thing you should ever be doing". I'm not going to argue that there aren't EAs who are neurotic and miserable, but I think most of us are able to compartmentalise and have normal human goals in addition to grand philosophical aspirations. There's some tradeoff obviously, but since being neurotic and miserable is counterproductive this is actually the correct thing for a consequentialist to do.

As for whether utility exists, I find consequentialism gets more appealing the less rigidly you define the utility function, because I don't think human fulfilment is reducible to a single dimension.


Yes, absolutely. If you compartmentalize (and don't feel bad about it) then you're an effective altruism advocate but not a utilitarian.

Hating on utilitarians is one of my philosophical hobbyhorses (along with hating on libertarians) so I couldn't resist commenting.


I don't see why a utilitarian can't compartmentalize and not feel bad. It seems to me that a utilitarian should be able to say that you are a thinking and feeling being whose desires matter, and that while you should definitely make some changes in your life that will clearly make big differences in the world at relatively little cost to yourself, the little tweaks around optimizing usually aren't worth the costs to your own interests.


But the problem, at least for most ACX readers, is that there are billions of people in the world whose interests you can affect. A utilitarian who was really a utilitarian wouldn't be allowed to stop at some reasonable point.

It would probably be more effective (swidt) if I never posted about metaethics again and just encouraged everyone to become Bernard Williams-pilled like I did. But anyway--

I agree with Williams that ethics is fundamentally different from natural science.

Science is an exercise of theoretical reason. It seeks to answer the question: "what is the state of affairs/the truth about X?"

Ethics is an exercise of practical reason. It doesn't seek the "truth" about some external reality. It seeks to answer the question: "what should a given person do in a given situation?" And since the way you act in a given situation is determined partly by your character, it also has to answer the question: "what kind of person should you be?"

I think one reason utilitarianism seems plausible is that people miss the importance of this distinction. If you say "*The best outcome* is the one where utility is maximized" it sounds tautologically true--assuming that the concept of utility makes sense, which is very iffy.

But that's not what utilitarians are claiming. What they're claiming is:

A "*The best thing to do at any given moment* is the thing that maximizes utility."

Or, possibly:

B "*The best kind of person to be* is the kind of person whose actions will maximize utility over the course of a lifetime."

Option A doesn't work because it's self-defeating. If everyone had the kind of personality that would make it happen, everyone would be unhappy.

People who support compartmentalization are endorsing option B, but Williams didn't think that would work either. It's the inspiration for his famous "one thought too many" argument.

The problem is that even with option B, your most fundamental desire still has to be utility maximization. All your other desires have to be instrumental. You'll have to say to yourself, for example:

"Because I love my spouse I often do things for their benefit, instead of doing other actions which would benefit strangers more. But even though my only fundamental desire is to benefit humanity, I'm still allowed to feel so strongly that I can't help doing those things. My love for my spouse is a heuristic which leads to utility maximization, because a moral principle that didn't allow people to fall in love would reduce overall utility. That's why I love them."

Nobody actually thinks like this (I hope). And anyone who doesn't (i.e., anyone who does have non-instrumental desires other than universal benevolence) isn't really compartmentalizing. They're just not a utilitarian, even if they think they are.

The content of a moral theory has to be described by what it tells people to do or be, not by the outcome it produces. (This is the theoretical/practical distinction, more or less.) If the theory says "Love your spouse enough that it makes you do non-utilitarian acts, because that tends to maximize utility over the long term", a person who obeys the theory will become a person who does non-utilitarian acts for their spouse *because they love them*--not because it tends to maximize utility over the long term.

In other words, they won't become a utilitarian and the theory itself isn't actually utilitarianism. It's "Have multiple desires like a normal human being... and don't think, even in the back of your mind, that only one of them is fundamental."

This is the example Bernard Williams gave:

Suppose you're a utilitarian and you're relaxing on the beach. (Not sure they're allowed to do that, but whatever. Maybe you needed a break from saving humanity.) You see two people drowning far out to sea. One of them is a stranger and the other is your wife.

If you're an Option A utilitarian, you'll say to yourself: "I must first rescue the person I have the best chance of rescuing."

If you're an Option B utilitarian you'll say: "My wife is drowning. And in a situation like this, you're allowed to give preference to your wife."

Williams said that was "one thought too many."


Is it a question of how you feel or how you think? Or both?

If some alternative to utilitarianism suggests you have a burden of obligation you can actually fulfil, then you can fulfil it, and you needn't *feel* guilty, and you also don't need compartmentalisation as an *intellectual* workaround.


While there are criticisms to be made of utilitarianism, I don't entirely grok yours. Most moral systems set a standard above what people are capable of. "Feeling bad" about something you don't intend to change reduces utility, so it makes sense from a utilitarian perspective to avoid it.

Some of us allocate a certain quantity of our efforts to 'doing good.' The question then remains "what is good?" And each moral system has an answer to that.

Not being perfect according to a given system is different than not adhering to that system. Few people are perfect according to their moral system. Striving combined with accepting one's shortcomings makes more sense than just drawing a bullseye around wherever the arrow of one's life made impact.

I'm happy to criticize utilitarianism for often focusing on the short term, or for denying evolved wisdom present in some deontological systems. Basically, we can fault utilitarianism for always assuming that people are omniscient, when deficits of knowledge might be better addressed by adhering to well-worn paths than attempting a 'best guess' at utility.

But the faster that an environment changes, physically or morally, the less likely it is that traditional methods will be helpful.

Ultimately, it's hard to test whether or not a particular action is 'good' without having some objective results that can be measured. Utilitarianism allows for those tests. That kind of humility, of potentially finding out that one's assumptions are wrong, is valuable. The process is far from perfect, but the transition from predominantly traditional systems of morality to systems whose impacts could be tested at least a little was a worthwhile innovation.


What is the difference between saying:

-'You should maximize utility. It's OK if you only try this during some of the periods during which you plausibly might.'

-'You should pursue virtue by striking a balance between efficiently helping others and enjoying your life.'

?


> Most moral systems set a standard above what people are capable of.

I find that doubtful. Ethical egoism doesn't. Moral nihilism doesn't. Contractualism and game-theoretic ethics don't. Evolutionary ethics doesn't...


The free will people or the political opinion?


The political opinion.

I tried to get myself canceled on Matt's substack by saying that even though they have superficially different opinions, the people who think public policy has only one valid goal (maximizing utility, maximizing freedom, or maximizing fairness) are all "basically autistic".

I'm sad that no one actually canceled me.


This feels like you're just using the word in a way that's technically correct but that basically just exists to win arguments. I try to avoid semantic debates because they're totally pointless, but whatever. "Utilitarian" has so much baggage that I personally prefer the term "consequentialist" for my own morality.

Utilitarianism has this problem while other moral systems don't because it specifically values maximisation, you could obviously make it easier if you bounded it and said saving a million lives is no better than saving one life, but I personally don't think moral truths should be ignored just because they're inconvenient. I think most people would describe themselves as utilitarian because they think the philosophy is a useful guide to action when facing moral uncertainty, not because they pretend that all of their actions are maximising utility - there's just too much uncertainty for that in the real world. It's a philosophy, not a description of your actual behaviour.

You seem to derive utility from smugly pointing out the ancient wisdom that being maximally virtuous all of the time isn't possible, but I'm unsure of who the audience for that is. We're all aware of our human shortcomings already.


Maybe you wouldn't have to compartmentalise if you didn't believe in utilitarianism -- maybe compartmentalisation is a workaround.


Yes, I think that's a good way to put it. Some people seem to think that utilitarianism is like quantum mechanics: it's capable of being the truth even though the human mind hasn't evolved in a way that lets us fully understand it, or believe in it without some kind of psychological gimmick.

But that's the science-vs-ethics distinction that Williams talked about. A theory of physics can be true even if our brains can't process it. But an ethical philosophy isn't true or false; it's livable or not livable. You can choose from a range of livable philosophies but if a philosophy isn't livable by humans as actually evolved, there isn't anything right about it at all.


What, are we not allowed to have transcendent values or something? Do we just settle for whatever instincts worked in the ancestral environment, and hope that's sufficient?

The way I see it, a moral philosophy is meant to be more of a map than a destination to relax in; it needs to have something to say to the people who already consider themselves moral. It seems to me that EA appeals to people who are dissatisfied with "livable" philosophies and want something more ambitious.


I've never understood why people take "it would probably make you unhappy" as a good reason to curtail their commitment to helping the world. For one, it seems obvious that your emotional reaction to various aspects of life is malleable in the long run, but also, would it not be virtuous to suffer anguish in order to help people? If anything it should probably relieve neuroticism and anxiety to be dedicating your life to helping instead of spending time on other things you know are not as virtuous. Maybe part of this is that I reject utilitarianism, especially psychological utilitarianism, in favor of something like Christian deontology, but it's always seemed to me like the obvious conclusion of EA is that we should become philanthropy monks and that everyone is just inventing lazy kludges to dodge this conclusion.


There's a reasonable case that Christianity obliges us all to literally sell everything we have and give it to the poor. Most of us are not doing that, so it's probably a good thing that forgiveness is a big part of the faith. It's not like we don't have practical examples to follow: https://en.m.wikipedia.org/wiki/Francis_of_Assisi

I guess what I would push back on is the idea that it will make you unhappy. I don't think me being unhappy would actually make the world a better place, and while doing the right thing is likely to involve some hardship and suffering, that's not the same as being unhappy. There's definitely something to be said for the idea that virtue is its own reward.


Having read some of your other comments in this thread, I think we just agree on most things, but let me see if I can explain my objection here in clearer language. It seems like the approach to personal EA engagement that Scott and other utilitarian EA people endorse is, "donating to EA is morally good, but if you tried to follow the utilitarian math it'd drive you crazy, so just choose an arbitrary threshold like 10% of your income." I would much rather say, "donating to EA is morally good without constraint, so that whoever can give more time/energy/money to EA is doing more good than otherwise, but the fact that you aren't St. Francis or a monk or etc. is just an ordinary sin and you'll be forgiven for it." I guess the attitude around sin and forgiveness is part of it. But I guess I also disagree with the idea that you should tell other people that there is a cutoff where you should stop giving, because even if you'll be forgiven for sin, we should try to avoid it in what ways we can instead of accepting it as a constraint.


"Utility" functions much like "enthalpy": a convenient mathematical abstraction over a messy collection of loosely-related phenomena.


I would say the main criticism of utilitarianism, from a secular perspective, is that whatever "utility points" you gain is inevitably wiped out by Death. I mean that both in terms of one's individual death as well as the heat-death of the universe. After death, none of those things will matter or have any lasting impact into the future. Utilitarians are trying to win a game with no rules, no referees, no penalties, and no prize for winning.

In my mind, the only thing that makes sense in a secular context is to maximize one's subjective sense of meaning and well-being for the time that you are alive. Almost certainly that will involve helping others. But if you are doing that to the point of misery then perhaps you just need therapy (or to hit the gym).

If you are religious, then you believe that there are eternal things which transcend death. In that case it makes sense to maximize the utility of those things.

Scrupulosity is for the religious.

Jul 20, 2022·edited Jul 20, 2022

The name does not cure the disease. How does one escape scrupulosity? After all, the argument seems so ineluctable:

CAUTION: INFOHAZARD FOLLOWS. Imagine the Queen of Underland explaining all this to Prince Rilian and his rescuers while casting mind-fuddling incense onto a brazier and playing a hypnotic drone ambient.

Given a choice between a good action and an evil one, which should you choose? The good one, of course. That's what good and evil mean.

Given a choice between a very good action and a merely slightly good one, which should you choose? Surely the better one. That's what better means.

Given a choice between a good action and a slightly but definitely less good action, which should you choose? Still the better one, right? How could you defend seeing a choice between two actions, and choosing the worse?

Therefore you must always do the very best thing you possibly can, as best as you can judge it, all the time, forever. Maximising utility isn't the most important thing, it's the only thing.

And if you nod along to all that, there you are, charmed by the Emerald Witch, or as I think of this evil spirit, eaten by Insanity Wolf.

MAXIMISING UTILITY ISN’T THE MOST IMPORTANT THING

IT’S THE ONLY THING.

WHILE YOU’RE SLEEPING

PEOPLE ARE DYING.

WOULD YOU PREFER

INEFFECTIVE ALTRUISM?

BURNT OUT?

WORK EVEN HARDER!

WHEN DOING GOOD

“ENOUGH” IS NOT A THING.


Note that "good action" is not synonymous with "obligatory action". Charitable donation is good but not obligatory.

Jul 21, 2022·edited Jul 21, 2022

But can a thing be good to do but not obligatory? That is the fundamental issue. I gave the Emerald Queen's answer above, but I didn't conjure that from whole cloth. Peter Singer, in his younger days, made the same argument. Here he is in his foundational paper "Famine, Affluence, and Morality" (https://en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality) (numbering mine):

1. "I begin with the assumption that suffering and death from lack of food, shelter, and medical care are bad."

2. "[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

3. "The strong version [i.e. (2) as just stated] does seem to require reducing ourselves to the level of marginal utility. I should also say that the strong version seems to me to be the correct one."

Quoth the Emerald Queen.

I say "in his younger days", because I heard an interview with Singer in his later years in which the question was put about caring for your own daughter no more than anyone else's. He rather uncomfortably replied that that was one way you could live, suggesting that he had unresolved doubts. On the other hand, when he won the $1M Berggruen Prize last year, he put his money where his mouth is and gave it all away.

I don't know if Peter Singer has formal ties to EA, but it is clear from public statements that they both think highly of each other. So there's a puzzle. What basis can or does EA have for not demanding everything?

If anyone has a coherent argument against ethical maximalism, it will be the first I've seen. All I've come across in the academic literature is assertions to the contrary, expostulation that it is absurd, and invoking magic words such as "scrupulosity" and "supererogation" to name the things they want to be true without showing that the things named exist. All of these responses have also occurred in this thread.

I don't have an answer either, but then, I'm not any sort of A, let alone an EA.


Here's a simple one: if something is an obligation, you will be punished for not fulfilling it... so if you don't get punished for not doing X, X was never an obligation.

Here's another: ontologically, obligations aren't transcendent platonic realities, they're social agreements, like promises and contracts. (If one *ought* to do something , to whom is it *owed*?) Singer is only obliged to give away his prize money because he publicly committed to doing so.

Supererogation is no more ontologically suspicious than obligation. Neither is made of atoms.


Morality is not dependent on who sees what you do. Singer did not give away his prize money because he publicly committed to doing so, he did so because he already thought it the morally obligatory thing to do, prior to telling anyone. Nothing in his work suggests that punishment plays any role in his concept of a moral requirement to do the right thing. Moral obligations are not social agreements; morality does not consist of doing what other people expect of you.

The problems I see in attempted defences of supererogation have nothing to do with the issue of whether supererogation is made of atoms. The problem is the fundamental one: when can it be good to not do the good that you can? Singer's answer is never. The good that you can do is the good that you must do. "Supererogation" is the concept that one need only do "enough", wherever one draws that line, beyond which lie the things that are the good that you can do but the good that you need not do. The problem is how to justify any such line at all.


I reject (2). Now what?

Jul 23, 2022·edited Jul 23, 2022

No argument from me. Is there anyone in the house to argue for or against Singer’s Second Axiom?

"[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

I’m hammering on this point not because I have a committed, reasoned out position on this, but because I don’t, and I do not think that anyone else does except Singer, yet it goes to the very heart of EA. I have read various papers defending supererogation and the finitude of duty, and criticising utilitarianism for its total demandingness, but I have not found coherent arguments in any of them.

Singer offers a coherent argument, but my likening of him to the Queen of Underland and Insanity Wolf shows how little it sways me.

Personally, I hold to the finitude of duty, and draw it much narrower for myself than any EA does. At the same time, I am aware that I do not have a foundation for this. For all practical purposes I do not need one, any more than anyone needed Russell and Whitehead to take 350 pages to prove that 1+1=2 before they could do arithmetic. But in the end, their magnum opus was necessary.


I feel like that "without thereby sacrificing" part is too easily assumed in discussions of this subject. Are there good, explicit arguments for why "I like nice things" and the like don't have comparable moral importance, rather than just assuming it implicitly?

Jul 26, 2022·edited Jul 26, 2022

There are certainly arguments. Here is one made by Thomas Aquinas, quoted with approval by Singer:

"Now, according to the natural order instituted by divine providence, material goods are provided for the satisfaction of human needs. Therefore the division and appropriation of property, which proceeds from human law, must not hinder the satisfaction of man's necessity from such goods. Equally, whatever a man has in superabundance is owed, of natural right, to the poor for their sustenance. So Ambrosius says, and it is also to be found in the Decretum Gratiani: 'The bread which you withhold belongs to the hungry; the clothing you shut away, to the naked; and the money you bury in the earth is the redemption and freedom of the penniless.'"

Or filtered through Insanity Wolf:

TO HAVE ANYTHING IS A THEFT / FROM THOSE WHO HAVE NOTHING

THE LIMIT OF GIVING / IS YOUR ABILITY TO GIVE

GIVE / UNTIL IT HURTS

The quotation can be found online, in a different translation, at https://www.newadvent.org/summa/3066.htm#article7. Later in that section, Aquinas goes even further and defends not only giving up one's own goods but taking one's neighbour's, if it is necessary to succour those in need. If an Effective Altruism party got into government, I wonder what their tax and foreign aid policy would be.

Those who do not recognise divine providence can skip the phrase "instituted by divine providence". Some claim that the natural moral order can be objectively discerned by the same methods of observation and reason as are successful for the natural physical order, without any supernatural foundation. (This doctrine is called "moral realism".) But they do not all agree what those truths are.

How should people decide among these and other views, or arrive at one of their own? What is "the natural order" and how may it be discerned? It will not do to read Singer and nod along to his argument, or read anyone else and nod along to theirs; for even GPT-3 can sound convincing when read in that way.

This can be compared to the problem of objective priors in Bayesian reasoning. One does not get to pick whatever priors will give you the posteriors that you want, and meet criticism by saying "But Muh Priorz!" There are ways of arriving at objective priors for a problem (and much debate over the same, but it isn't relevant here). How can we arrive at objective moral priors? And what would they look like?
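The analogy can be made concrete with a toy calculation (the coin-flip numbers are invented for illustration): the same evidence yields different posteriors under different priors, which is exactly why the prior has to be justified rather than picked to produce the conclusion you wanted.

# Two agents see the same data (7 heads in 10 flips) but start from
# different Beta priors; their posterior estimates of the bias disagree.
def posterior_mean(prior_heads, prior_tails, heads, tails):
    # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails).
    a, b = prior_heads + heads, prior_tails + tails
    return a / (a + b)

print(posterior_mean(1, 1, heads=7, tails=3))    # uniform prior   -> ~0.667
print(posterior_mean(20, 20, heads=7, tails=3))  # skeptical prior -> 0.540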


Assumption: "Goodness" is a well-ordered, quantifiable thing, even between radically different actions. Second assumption: "Goodness" is an inherent property of actions, and goodness can be aggregated. Third assumption (of utilitarianism): "Goodness" can be compared and calculated stably *between individuals*.

I don't believe any of those assumptions hold, at least when you get down to it. There is no universal "util" that we can calculate; rank ordering actions isn't even theoretically possible outside of very narrow scopes, and actions in and of themselves (disconnected from context and everything else) don't have "goodness" or "badness" as an absolute, abstract property.

Sure, when you *can* compare actions, you should choose the best of them. But without the whole mechanism of utilitarianism's flawed hubris of universal calculationism, you can't really "do the math" in most cases. And every time someone tries, horrors result.


Also, utilitarianism ignores intention [*], and can't come up with a sensible level of obligation/burdensomeness... it's either all or nothing.

[*] Anscombe introduced the term consequentialism, and rejected it out of hand because she saw the purpose of morality as apportioning praise and blame, and thought it unreasonable to do so on the basis of unintentional good or bad consequences.


Yes, but I also think it's the type of criticism that is acknowledged but ignored. That is, it's a domesticated criticism. I doubt that people in EA realize how big a problem this is. It seems to me that it is probably hobbling the movement quite a bit.


Agree, was sad to see the Lacan trail from a few months ago turn cold.

Jul 21, 2022·edited Jul 21, 2022

Me too. I hope someone revives it.

[edit: I guess I'll take a stab at it]

Jul 20, 2022·edited Jul 20, 2022

"The fact that this has to be the next narrative beat in an article like this should raise red flags. Another way of phrasing “this has to be the next narrative beat” is that it’s something we would believe / want to believe / insert at this place in our discourse whether it was true or not. "

In a word, *no*.

It is the next narrative beat because many of us *smell* it. It has been the smell of EA for years. It has been the smell of EA for numerous readers of this blog (and its immediate predecessor) ever since you (very gently) said they made you feel bad for being a doctor instead of working on EA. That was a bad smell and it never, ever went away.

I don't know precisely what is the problem with EA, because I'm not putting in the epistemic work. But I feel *very* comfortable saying "I know it when I smell it; EA smells like it has good intentions plus toxic delusions and they're not really listening".

If you want the best definition I can personally come up with, it's this: EA peer pressures people into accepting repugnant conclusions. Given that, *of course* it doesn't want real criticism.

Side note: Zvi did an extremely good job writing the coherent and detailed breakdown of EA's criticism problem from the other side, and I understand that the whole point of this essay is you don't want to give that kind of thing any air, but it's not very nice or epistemically reasonable not to give it air.

author

Disagree because I think it would be the obvious next narrative beat in *any* case, not just EA. "This big and powerful group seems obsessed with soliciting criticism of itself" just naturally lends itself to "...but they're faking it, right?"

I didn't want to go through the details of Zvi's criticism because it didn't make sense to me - it looked like some sort of Derrida-esque attempt to argue that slight details of the phrasing of every sentence proved that it actually meant the opposite of its face value. I could never see it when Derrida was doing it and I can't see it when Zvi is doing it either.

Jul 20, 2022·edited Jul 20, 2022

I agree that it's Derrida-esque. He's never the most legible. So that's a fair reason. But I think the key takeaway is the 21 point list of EA assumptions, and the idea that EA discounts or discourages disagreements with these, and this part specifically is worth giving thought and air.

I would be suspicious when any big and powerful group wants criticism of itself, because that follows naturally from reasoning about the nature of big and powerful groups. But also, I would be suspicious for *specific* reasons in certain specific cases ("such and such a group has a history of inviting dissent, then punishing it a year later", or "such and such a group has paid lip service to disagreements the following times, and I expect the pattern to continue"), and I think these reasons could be instructive in understanding, changing, or even opposing the organizations.

In the case of EA, I wouldn't believe it because EA has the hallmarks of a peer pressure organization, and I think the criticism they're most likely to discount is "the negative value of peer pressure outweighs the positive value of your work". That's not a fully general criticism of organizations; it's a specific and potentially useful criticism of one type of organization.

I wouldn't tell a shy conflict averse person not to work for the US Government. But I would tell them to avoid making contact with EA.


What kind of movement/organization *doesn't* have the hallmarks of a peer pressure organization? Isn't that literally the way you get a movement rather than a bunch of random individual choices? Sometimes the form of the peer pressure varies (is it a carrot or is it a stick) but it's always peer pressure that makes us conform enough to form a movement.


Off the top of my head, a lot of political/activist organizations seek to change the minds of non-participants, but only mildly encourage non-participants to become participants. At most, they're targeting a specific sub-demographic for recruitment, while sending a much milder message to the whole demographic.

What sets apart EA (and Christianity, and militant vegans, and ...) is that they're telling people that it's a *personal moral failure* to not join the movement. That's one specific type of movement, and we should distinguish between those and the "milder" movements that only want a small part of your resources/mindshare, or just want you to get out of their way so that they can do their thing in peace.

NB: I've never personally encountered the described peer pressure in my interactions with EAs, but I'm conceding that point for the sake of argument.


I don't think the OP's concern is particularly proselytizing to 'outsiders'. Presumably they would find EA Judaism which only tried to make those born to EA parents feel bad about being doctors also unappealing.


Masonry. You have to get the signatures of two master masons to be admitted to the fraternity, and you yourself have to ask to enter; they are not allowed to solicit members.


As others have pointed out, there are a lot of movements that only want to funnel a fraction of people into the most active level of membership, and a lot of movements that offer but don't push increased participation.

EA is indeed like other totalizing movements, evangelical Christianity among them, in that it wants literally everyone to advance to a high level of membership and will happily pressure all of the people to do so.

If you've never been to an evangelical church, it's quite a trip. Sermons regularly revolve around the idea that everyone is on the borderline between being a bad Christian and a good Christian and the tacit suggestion that one's commitment level to church functions will make all the difference. I once saw a pastor preach through a case of hemorrhoids as an object lesson in dedication. EA can feel like that.

By contrast, the US government is fundamentally a movement, and if I want to join it and work for it, they're happy to use me, but if I don't, they don't care, I'm just a citizen and that's fine by them.

You're right that I would find EA Judaism unappealing, but still better for being confined to a clique.


It is the dream of anyone who writes a post called Criticism of [a] Criticism Contest to then have a sort-of reply called Criticism of Criticism of Criticism.

The only question now is, do I raise to 4?

I did it that way for several reasons, including (1) a shorter post would have taken a lot longer, (2) when I posted a Tweet instead, a central response was 'why don't you say exactly what things are wrong here', (3) any one of them might be an error, but if basically every sentence/paragraph is doing the reversal thing you should stop and notice it and generalize it, (4) you talk later about how concrete examples are better, so I went for concrete examples, (5) they warn against 'punching down' and this is a safe way to do this while 'punching up' and not having to do infinite research, (6) when something is the next natural narrative beat, that goes both ways, (7) things are next-beats for reasons and I do think it's fair that most Xs in EA's place that do this are 'faking it' in this sense, (8) somehow people haven't realized I'm a toon and I did it in large part because it was funny and had paradoxical implications, (9) I also wrote it out because I wanted to better understand exactly what I had unconsciously/automatically noticed.

For 7, notice in particular that the psychiatrists are totally faking it here, they are clearly being almost entirely performative and you could cross out every reference to psychiatry and write another profession and you'd find the same talks at a different conference. If someone decided not to understand this and said things like 'what specific things here aren't criticizing [X]', you'd need to do a close reading of some kind until people saw it, or come up with another better option.

Also note that you can (A) do the thing they're doing at the conference, (B) do the thing where you get into some holy war and start a fight or (C) you can actually question psychiatry in general (correctly or otherwise) but if you do that at the conference people will mostly look at you funny and find a way to ignore you.


I know EA only from your post, and now Scott's. I got the same vibe from both of your posts... My takeaway is that the idea of eliminating all suffering/evil in the world is dumb. Suffering is what makes us stronger, builds character, gives us something to fight against (the hero vs. the villain story). I'm not going to say we need more racists, misogynists, or chicken eaters, but trying to eliminate all of them is a mistake. We've turned 'no racism' into a paperclip maximizer... and we should stop.


Suffering never gets eliminated, we just complain about ever tinier molehills as the hedonic treadmill accelerates... and that has been the case since the Industrial Revolution.

founding
Jul 20, 2022·edited Jul 20, 2022

EA is saying much less controversial things than "eliminate all suffering" or even "eliminate all racism". Things like:

1) 5-year-old children should not die of malaria

2) We should not subject billions of food animals to lifelong torture

3) We should prevent human extinction (Edited for clarity; previously said “We should prevent everyone on earth from dying.”)

I think it's pretty hard to argue that we need e.g. 5-year-olds dying of malaria to "build character".


"One of these things is not like the other"

Things 1 and 2 are popular things said by lots of people. Bill Gates is saying 1. Special food labels in many restaurants and grocery stores are saying 2.

Thing 3 is extremely controversial across the population of Earth, maybe to the point of war.

founding

Sorry I think I phrased that poorly. (3) is not “no one should die” it’s “it would be bad if humanity went extinct”. I don’t think that’s too controversial!


We should prevent everyone on earth from dying? Not a good idea. 8 billion and counting. Death is the inevitable end of life, we should learn to negotiate it with grace. Immortality is a corrupt and selfish goal. Personally, my perfect end would be abrupt and pain- free; timing tbd; I'm ok with whenever.

founding

Sorry I think I phrased that poorly. (3) is not “no one should die” it’s “it would be bad if humanity went extinct”. I don’t think that’s too controversial!


Much more controversial things are implications of utilitarianism, though.


Huh, OK.

1.) Maybe in some places kids need to die of malaria because it keeps the population 'fit'. Giving them bed nets means they are a population now dependent on bed nets. Listen, I'm not saying anyone should stop sending bed nets, or help build water wells, or ... whatever. But no action is 100% good.

2.) Better life than no life. OK, a Hitchhiker's Guide to the Galaxy reference.

Would you be happier if we bred cows that wanted to be eaten?

3.) Yeah! Why isn't this number 1?

founding

2) The main issue (IMO) isn’t whether the animals want to be eaten, it’s that they live in miserable conditions (to the point that their lives are indeed likely worse than no life).

3) In practice this is the most controversial branch of EA because it’s somewhat unclear what could cause human extinction or how to avert it. (The leading EA theory is AI).


Hear hear


I really wouldn't worry about the danger of EA becoming so successful that enough suffering in the world is eliminated that life becomes meaningless.


I think I just want to make a point, that some suffering is good, and so utilitarianism can't be 100% right.


Where are you getting your definition of Utilitarianism that includes the necessary elimination of all suffering?


In what way do children who die of malaria become stronger?


Yeah, this is the sucky part of evolution. It's the ones that live. There are some environments where it's harder to live. But to keep people living there... select those that live.


Evolution does not have a "direction".

author

Is it okay to be a doctor, or is that going too far in eliminating suffering which is good? Are you especially virtuous if you beat up random children you encounter on the street?

I avoid these questions by just assuming that less suffering is good in whatever case (except where the suffering is designed to do something useful, like in boot camp). Given how much suffering there is even when people work hard to eliminate suffering, I'm not really worried we'll ever run out.


Surely the exception criterion should be whether the suffering is consented to/desired by the people experiencing it, not whether it's "doing something useful"?


Lots of people suffer consensually in ways that they really should be talked out of. Someone who cuts themself every time they make a mistake is consenting to that, but it's probably not healthy and it would probably be preferable that they stopped.

Expand full comment

Right, we all work to reduce suffering, and no way should we try and create more. But I (recently) had this idea that some suffering is good. The good part happens when we get through/over the suffering and come out the other side. (If you die or never stop suffering then that seems all bad.) The things we suffer through in life seem to shape us more than happy things. We can all think of life histories of people, and see how the suffering shaped them. Frederick Douglass, James Baldwin, Robin Williams... OK, maybe it's not always the getting over it... maybe it's just dealing with it. Well, this is mostly a new thought for me, and I may change my mind.

Expand full comment

It's a familiar idea.

Eliezer Yudkowsky made a whole speech about how, if people got bashed by a mace once per month, eventually they'd come up with all sorts of reasons why getting bashed by a mace is actually great for you. It builds character, it makes your face more resistant to bashing, etc. Sour grapes stuff, from people who regularly get bashed in the face and want to justify why it's actually good.

If you go to a person who doesn't get bashed and tell them of these amazing benefits and ask them "Do you want to get your face smashed in once per month, to build your character?" virtually everyone will say no.

Expand full comment

"Good" "Virtuous" "Suffering". These are philosophical terms that were kidnapped and redefined by EA. And no amount of "well obviously we're better off to skip the metaphysical talk and redefine them" avoids the very nature that such a move is itself philosophical.

"The good is less suffering, therefore less suffering is good." Is just misusing the term good. Be content to just say 'we subjectively feel compelled to reduce suffering' and honestly define suffering either as noxious brain chemicals (over the long term that don't reduce noxious chemicals in other brains) or something else.

Expand full comment

Ok, but "it's easier than answering these questions" isn't compelling, firstly (as I'm sure you know); and furthermore, understanding the extent to which suffering should be destroyed is relevant to more than just the destruction of the final bit of suffering.

It's your philosophy of suffering. If I consider suffering The Bad, then I will eliminate suffering in a particular way. I may reduce it wrongly because of my mindset, or I might eliminate good sufferings, or I may label the wrong things as suffering. Having a good and correct philosophy of suffering is *hugely* important to alleviating it. Properly prioritising suffering relative to pleasure, love, endurance, etc. is essential to choosing what kinds of interventions you use.

So hand waving what suffering is seems like a fatal mistake.

In addition to all that, I would say it's most important to have a strong case on whether suffering is the ultimate evil, what it is, and in what manner it should be prioritized because we are *fixing* other people and other communities. In our zeal to eliminate their "suffering" we destroy Their Good, all while patting ourselves on the back and not bothering to wrestle with ourselves because "all people agree suffering is bad." What is suffering? How bad is it? Is it worth violating others' philosophies or communities to pursue it? And if the answer is yes, you better be damn well able to vociferously defend it.

Expand full comment

Yes yes yes. Some suffering is good. And to say, "oh, there is so much suffering, getting rid of this piece must be good"... it doesn't follow. If you agree that some suffering is good, then you need a way to identify the good. Evolution, natural selection, is suffering. Should we do away with natural selection in the name of ending suffering?

As a sports fan, I suffer when my team loses, and yet a team that wins all the time would be the end of sports; losing and suffering are part of the drama of sports... what makes it fun.

Expand full comment
author

Yeah, I'm definitely not saying you did wrong by posting it, and it seems like the sort of thing that *could have* been true, I think our crux is just that I looked at the examples you gave of how the sentences were phrased in ways that actually discouraged criticism and just didn't see it - they looked like totally normal sentences to me.

Expand full comment

Confusing, but one possibility is there isn't a contradiction there. They could be doing exactly what I say they are, and still also be 'perfectly normal' and that would indeed be the important thing to notice.

Expand full comment
founding

Yup!

Expand full comment

Were the monks that engaged in self-flagellation being too critical about themselves? Or was self-flagellation a means to a different end?

Expand full comment

This sounds remarkably like wanting to believe something whether or not it is true.

Expand full comment

Epistemic status: data point.

I'm an EA. I accepted *the* repugnant conclusion before I heard of EA. Don't assume that EAs' common beliefs are due to "peer pressure" rather than a combination of selection effects and normal epistemic convergence.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I'm not assuming EA's common beliefs are due to peer pressure.

I said "EA peer pressures people into accepting repugnant conclusions", *not* "EAs who accept repugnant conclusions were peer pressured". You're a data point against the latter, not the former. I'm claiming and providing a data point for the former.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I feel like EA is mostly about peer pressuring people into actually doing things they already knew were good, like helping the poor and being responsible for the future, with the same efficiency that they aspire to in their personal lives (EA doesn't really appeal to people who don't already value rationality and efficiency, in my experience). I know some people who therefore accept the repugnant conclusion, but it's not like you really need advanced utilitarianism to convince people that "the world ending would be bad, actually" and "we should expend a non-trivial amount of effort in the present day to prevent it".

I mean, is the criticism basically "People would be happier if they weren't part of EA"? Because I'm honestly not sure that that's true, I think most people that EA appeals to are already searching for meaning, they'd probably just join a less effective social movement instead. I know a lot of people who've personally benefitted from EA, getting careers, community and personal fulfilment out of it, I feel you'd struggle to convince me EA is bad even ignoring any positive effects on the world.

Expand full comment

Sorry, I have experience with doing good on an individual basis without pressure, doing good as part of a supportive but very low pressure community, and discussion with EAs. The first two things certainly made me happier, enough so that I do think there's a fairly tight linkage between "moral living" and "happy living". The third thing did not make me happy or give me any indication that further participation would make me happy. Not at all. It gave me the sense that my social group would come to consist of miserable scrupulous neurotics over-scrutinizing very small decisions. I'm over-scrupulous enough, thank you, and trying to *reduce* that quality.

Expand full comment
author

I think the key difference is whether you find over-scrutinizing decisions to be a fun recreational activity. If you do, it doesn't feel like neurosis at all - it just feels like you've suddenly been handed lots of bonus conversation topics and pastimes!

Expand full comment

Conversation is my pastime and I have issues with scrupulosity. When I was 25, that would have sounded right. Later, in therapy, and to my great benefit, I was taught that the difference between a measured and an excessive amount of that is much like the difference between a measured and an excessive amount of ice cream.

(I still consume both the literal and figurative gallon in a single sitting from time to time, of course, and I don't worry too much about that)

I think the post where you tried to talk some ethical people out of talking themselves out of having children for ethical reasons was a good example of seeing the excess.

Expand full comment

re: smell of EA, try Michael Nielsen's essay https://michaelnotebook.com/eanotes/

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

I think the caveat I need to add is that I'm a Christian, and so I'm totally fine with accepting a philosophy that demands a lot more than I'm willing to give right now. That said, I do agree with the critique that Utilitarianism alone offers a very narrow vision of what a good life looks like, I just don't think that means it can't be a useful guiding principle for practical action. I guess that's my answer to the author's argument that EA needs to be part of a broader life philosophy.

If you view the Biblical commandment to "love your neighbour as yourself" as a condemnation of your own selfishness, then obviously becoming a Christian will make you miserable. If you instead regard it as aspirational, and as part of a broader philosophy, then it leads to a far more fulfilling life than just watering it down to "be nice to other people, when it's not too inconvenient" - this is why maximalism is good, actually. My main criticism of EA is that most utilitarians have a far too constrained view of what human flourishing looks like, but in practice we agree on the best ways to help others, so it's not going to be a practical objection until we get to the point where we've solved extreme poverty and existential risk.

Expand full comment

I'm a lapsed Catholic, and came from a family of very serious practice (daily mass, tithes, other serious footwork). I never lost my respect for maximalism, and respect the point you're making about it, while also making the distinction between condemnatory maximalism and aspirational maximalism.

But I did lose my respect for evangelism, and I don't believe it is moral or practical. In fact I would go so far as to say that the cost of evangelism scales with the ambition of the change, enough so that it will basically always outweigh the good.

Expand full comment

Can you say a bit more about that last point? I'm also a lapsed former ultra-devout believer who never lost my respect for maximalism (it was what attracted me to EA in the first place, before I saw its many warts), so I think I'd stand to gain from your take on this.

Expand full comment
Jul 20, 2022·edited Jul 20, 2022

Sorry, I had to find the time to read the excellent note you linked by Michael Nielsen first.

I am not and have never been a Baha'i, but it's worth sharing two elements of the Baha'i faith. Note that these are ideals, not necessarily practiced by the average adherent.

1) The Baha'i believe that Abrahamic religion is still evolving, and has not yet reached its perfect form. So Judaism, Christianity, Islam, and maybe even the Baha'i faith itself, are true but incomplete. In this sense the religion very much shares a good quality with EA: it is not settled, may never be settled, and is open to change and improvement.

2) The Baha'i prophet Baha'u'llah (the principal prophet, more or less like Muhammad) explicitly instructed his followers not to proselytize or force their faith on others. In practice, this is often ignored, but it is also often followed; my two Baha'i friends did not tell me their religion, and when I found out and asked them about it, each one expressed reluctance to tell me unless I was really interested of my own will.

When I learned this latter property, it was perhaps the single most refreshing thing I ever learned about a religion. I can't begin to tell you how refreshing it felt.

From that followed a frame of reference where I asked "would this thing be better without evangelism?", and for me the answer was so often and so completely "yes" that I eventually stopped asking, with one exception I'll treat at the end.

I have come to believe that people resist change in direct proportion to the magnitude of the push for change; hard push leads to hard heart. Moreover, while some people do yield and change and convert, as a side effect both the converter and the converted are left with a serious intensity, maybe permanently. Furthermore most "we meant well but did evil" mistakes seem to come from this place of intensity.

So when I ask myself whether something would be better without evangelism, what I get back is basically always a form of this:

1) It would take 10-1000 times longer to complete the project.

2) Nobody would feel coerced into it.

3) Nobody will choose to hurt somebody to get the project done sooner.

4) Nobody will have strong feelings to continue "fighting" after the project reaches a natural conclusion.

Basically always the right call.

The exception: technically law, justice, and protection are a form of evangelism; if people want to shoot guns in my neighborhood for fun and I force them to give up their guns, that's tantamount to evangelizing my position. I don't like this, and still basically always reject the aggressive form where I come into your neighborhood to enforce my views, but I concede that some wild or bad actors need to be forced to stop acting, and I'm totally willing to commission or provide that force when necessary.

Final side note: in the unlikely event you're a Baha'i, I'm going to have a good laugh that I've been telling you your own thing (perhaps inaccurately!)

Expand full comment

I've never been religious, and generally actively believe the religions I've encountered to be false. I have often been annoyed at evangelism.

But I am signed up for cryonics. I've not been evangelical about it. But I've wondered if I should be.

To my mind, if you believe you have a duty to save a drowning man, and you genuinely believe your religion, and it's a religion that contains something like Hell, it seems like you should evangelize. Hell is very bad. It's defined that way. If you see someone choosing Hell over Heaven, they are the drowning man. Do you not have a duty to help them? If they struggle against you, does that change your duty to drag them to shore?

(With cryonics, instead of Heaven and Hell, it's a chance at living, vs definitely dying, but the same calculation applies).
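
To make "the same calculation" concrete, here is a minimal sketch of the expected-value comparison, in Python with purely hypothetical numbers (the revival probability and the value assignments are illustrative assumptions of mine, not figures from this thread or from any cryonics provider):

```python
# Minimal sketch of the expected-value argument above.
# All numbers are hypothetical illustrations, not real estimates.

def expected_value(p_success: float, value_if_success: float,
                   value_if_failure: float = 0.0) -> float:
    """Expected value of a two-outcome gamble."""
    return p_success * value_if_success + (1 - p_success) * value_if_failure

p_revival = 0.02              # hypothetical chance cryonics works
value_of_revival = 1_000_000  # arbitrary units of "extra life"

ev_signup = expected_value(p_revival, value_of_revival)  # a chance at living
ev_decline = 0.0                                         # "definitely dying"

# Any nonzero chance at a large payoff beats a certain zero, which is the
# drowning-man logic applied to cryonics (or, with different stakes, to Hell).
print(ev_signup > ev_decline)  # True
```

The same structure fits the religious version: swap in your credence that the religion is true and the (Heaven minus Hell) payoff, and the argument for evangelizing goes through identically.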

(I don't think I agree with your comment, and besides what I've said, I would note that a lot of your arguments about evangelism would apply to any form of "convincing", like teaching. And I think they obviously don't hold there. The main difference, I suppose, being in your examples it's a belief someone doesn't want to believe. Which is why my last line was about the drowning man struggling.)

Expand full comment

So what kind of charitable giving doesn't try to make people feel bad for not doing the right thing? The dog shelter commercials really go over the top with it, but all attempts to encourage people to do good come along with a fair bit of making people feel bad about not doing them. It might not be intentional, but because people care about doing good, just telling them that these people are suffering and you could help makes them feel bad about not doing it.

So yeah, EA might make doctors feel guilty in the same way our society routinely makes investment bankers feel guilty: what are you doing just chasing the dollar and not making the world a better place?

What's different, I suspect, is that EA is making people our society lauds as selfless feel bad, rather than people we might, at first glance, think of as not especially altruistic or selfless. But I don't really see the difference. Choosing not to think carefully about whether your actions are really making the world better or just giving you a warm glow, because that's easier, seems equally as selfish as choosing to ignore the poor to buy more shit. If our intuitions disagree, so much the worse for them.

But EA seems to take a much softer stand on it than mainstream society. It generally just focuses on convincing people that, hey, this would help more, while in the mainstream you regularly hear screeds denouncing people who just try to enrich themselves without concern for the less well off.

Indeed, you are describing the conclusions involved as repugnant. Isn't describing a conclusion as 'repugnant' another way of saying that society makes you feel bad about reaching that conclusion? (It seems implausible these are innately and universally repugnant, given the ways some cultures behave.)

Expand full comment

As I've said at other points in this thread, I am a lapsed Catholic. But you know what kind of charitable giving doesn't try to make people feel bad for not doing the right thing? Church soup kitchens.

Long *after* leaving religion, I started volunteering at soup kitchens / food pantries / etc. run variously by Catholic, Protestant, and Jewish congregations. I've done this in several cities (Austin, Seattle, Bay Area, Los Angeles) and the pattern is extremely consistent: Nobody pushes *anything*, not increased participation, not donation, not even religion. They are just happy to see people show up to help, and very encouraging about that help. They often had nice deals worked out to get leftover goods and produce from local grocery stores, and from having done the pickup rides, I do not get any sense that those deals were worked out under pressure. They probably just showed up and asked nicely.

You know what's the meanest thing anyone ever said to me about that world? An EA type bluntly said "that's just a pointless salve." You know what? Damned right it's a salve, and given we're all going to die eventually, that *salve* is as much of a virtuous act as anything. I certainly don't see any moral law of the universe that makes it a clear lesser act than protecting animals from food industry cruelty. That kind of knee-jerk comparing is *assholery*. Are some things better uses of our time than others? Yes. Is it reasonable to compare anything to anything else and flatly declare one the better and the other a waste of time? *No*, it's assholery. These aren't thoughtless positions. I didn't help the homeless thoughtlessly. They're positions taken from different value systems, and it's childish and insulting to assume by default that non-utilitarians have poorly reasoned moral systems.

I feel the same way about doctoring. "What are you doing just chasing the dollar" is assholery. My sister-in-law is a pediatric child abuse specialist, which means she spends her entire working life handling medical child abuse cases. How *dare* anyone blithely lump her in with some hypothetical ambulance chasers? EAs don't understand how completely off-putting that language is. It's honestly worse than pushy street preaching.

Last thing: when I say repugnant, I mean viscerally repugnant to me.

Expand full comment

Thank you for this. I come from a very different background, and you put my perspective into very different words that for some reason I agree with nonetheless.

I am an ER doc, and I also agree that pediatric abuse specialist would be a very hard doctor job for most, including myself.

Seems like a good part of this debate can be summarized, to my undergrad-level liberal arts mind, as just universalism vs localism, abstract principles vs concrete doing-stuff. I am sure EA has discussed the heck out of that proposition as well, but like you I don't have the willpower to look up those discussions as I know it won't change my opinion on any of this.

Expand full comment

"Seems like a good part of this debate can be summarized, to my undergrad-level liberal arts mind, as just universalism vs localism, abstract principles vs concrete doing-stuff."

That's a good way of putting it.

On the concrete side, since you're an ER doc, thank you. You sawed the wedding ring off my pregnant wife's finger and reassured her that it was okay. You monitored her after she took too much tylenol during a fever delirium. You checked my daughter when she fell on her head, and gave her antibiotics for her UTI. You gave me high grade benadryl when I discovered I was allergic to jackfruit and my throat closed, and you scanned me to make sure my ribs weren't broken when I was doored by a car while riding my bike. You caught my mother's heart condition after a fall, and set my brother's broken leg. I wouldn't trade you for anything.

Expand full comment

A lot of this could be extended to explain why /r/unpopularopinion is full of popular opinions.

Expand full comment

The good version of /r/unpopularopinion is /r/The10thDentist

Expand full comment

You're absolutely right, I'm horrified by everything there.

Expand full comment

Hahaha

Expand full comment
founding

This is an unusually good ACX post, easily top 10%. I've long been frustrated that society has made paradigmatic self-criticism the ultimate high status move, to the point that folks become suspicious of any movement that doesn't do enough of it. You implore the reader to treat EA as a variable — I would really enjoy a fleshed out version of this post with EA actually replaced by X, and merged with the points about self-critique from I Can Tolerate Anything Except The Outgroup.

Expand full comment

Shades of Maoist "struggle sessions" ...

https://en.wikipedia.org/wiki/Denunciation_rally

Expand full comment

But the armchair critics of EA will basically never participate in, or even intentionally cause, (shades of) struggle sessions, because our whole thing is to allow a certain default apathy; whereas I claim EAs could plausibly end up doing (full-on) struggle sessions, because they have the fervor and intensity required to do that.

If I criticize EA as a peer pressure organization and it somehow leads them to (shades of) struggle sessions, that's not evidence the critique was wrong, but evidence that it was right. Only insanely fervent and pressure heavy organizations can go there.

Expand full comment

Ever thus - at least for some time:

"The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned;

The best lack all conviction, while the worst

Are full of passionate intensity."

https://www.poetryfoundation.org/poems/43290/the-second-coming

Though that's not to say that being "full of passionate intensity" is necessarily always bad, or that "lacking all conviction" is necessarily always good. There ARE - or should be - other options on the table.

Expand full comment

That poem is certainly part of my training to be reluctant towards political and social movements. Thanks for the reminder.

Expand full comment

Anyone whose actions stay the same while indulging in self-criticism isn't actually doing self-criticism at all

Expand full comment

Generally agree, although when one is perfect it's hard to improve ... 😉

But your argument kind of hinges on "indulging" and on the question of how credible the criticisms are. Though maybe it's less a case of "self-criticism" and more one of being open to the criticisms of others - apparently the case with EA soliciting them.

If, in the latter case, the criticisms are trivial or unfounded then EA and their like can't really be faulted for not doing more than saying, "thank you for your comments". However, IF there IS substance, and IF EA and company refuse to acknowledge that and make some effort to correct the problems THEN, and only THEN, can one reasonably throw stones at them.

Sort of my schtick; see my recent post and About for details:

https://humanuseofhumanbeings.substack.com/p/welcome

There are all sorts of feedback mechanisms in society that ensure individuals, corporations, and governments stay on the straight-and-narrow; as Eleanor Roosevelt once put it:

"...our children must learn...to face full responsibility for their actions, to make their own choices and cope with the results...the whole democratic system...depends upon it. For our system is founded on self-government, which is untenable if the individuals who make up the system are unable to govern themselves.”

https://www.goodreads.com/quotes/824275-our-children-must-learn-to-face-full-responsibility-for-their-actions

But that "governing" is part and parcel of feedback and control systems, the paradigmatic example of which is Watt's governor:

https://en.wikipedia.org/wiki/Centrifugal_governor

However, that process of social feedback - any feedback in fact, mechanical or electronic or genetic - has a great many seriously pathological manifestations. One example, arguably or presumably, is EA's "virtue signaling": their "self-indulgence" in appearing to be open to criticism while doing nothing to address those criticisms which presumably have some merit.

Expand full comment

The hardest thing to do in the modern world is believe in something or try to accomplish something. People will call you an idiot for having principles or causes, root for you to fail, and then either gloat if you fail or grumble about you if you succeed.

Expand full comment

Indeed. Though the question of WHAT to believe in is something of a sticky wicket. As W.C. Fields was reputed to have said, "A man has got to believe in something. And I believe I'll have another drink." 😉

The problem these days is maybe less that there's not enough to believe in and more that too many believe in totally untenable claptrap and outright woo - "magical thinking" as Kurt Andersen, author of "Fantasyland: How America Went Haywire" put it therein. Highly recommended book; fairly decent and comprehensive synopsis of it in this oldish Atlantic article by him:

https://www.theatlantic.com/magazine/archive/2017/09/how-america-lost-its-mind/534231/

Expand full comment

Agreed. Unfortunately those who should succeed against the odds and those who should be shamed or talked out of bad ideas look the same from some early stage and invite the same behaviors.

Expand full comment

I suspect this was true of the ancient world as well.

Expand full comment

The post this reminded me of was Post-Partisanship is Hyper-Partisanship (https://slatestarcodex.com/2016/07/27/post-partisanship-is-hyper-partisanship/). A criticism of the foundations is similar to a fargroup criticism, while a criticism of tactics is similar to an outgroup criticism. They are not exactly the same, because in this case the critiques still come from within the group. What they share is that near mode threats have more emotional salience.

Expand full comment

>Go to any psychiatrist at the conference and criticize psychiatry in these terms - “Don’t you think our field is systemically racist and sexist and fails to understand that the true problem is Capitalism?”

Of course, there's a sense in which this isn't really criticism of psychiatry at all; it's just advancement of one's own ideology. Most of these critics have their own version of "sufficiently progressive" psychiatry that they think would be just fine.

Expand full comment

I read those sections the same way - this had nothing to do with psychiatry, but was instead a means to promote unrelated political views.

Expand full comment

I think EAs are motivated to make those "broad structural critiques" because they feel like things outsiders are thinking, or saying, and EAs want to anticipate those external criticisms so they can (1) refute them and (2) proudly claim they already thought of that and it's a good point, but here's why, all things considered, it doesn't hold (kind of like a memetic immune system). And they do _this_ because they think it'll win people over and grow the movement.

Expand full comment

Yeah, I get the sense that most critique “bounces off” in the sense that people give a plausible-sounding refutation, then choose not to engage with it further/at a deeper level, satisfied with their own defense.

Expand full comment

When I was a Christian child, I spoke with a series of practicing Christians who made sincere claims that they too had seriously questioned their religion. Some had, but I always felt like more had done some relatively shallow and gentle questioning and, since they didn't have a scale that fit non-believers, they had just decided it must be the top of the scale, the serious level of questioning. "I've swum in deep water. 2 feet, 3 feet, I've done it all. I always came back."

Expand full comment

>I don’t know if it’s meaningful to talk about EA needing “another paradigm"

Beyond this, I'm not clear how EA could reach another paradigm while still being EA. My conception of EA is a charity selection and funding paradigm based on a rationalist quantitative approach aimed at producing maximum utility for everybody in the world (sometimes including animals).

If something is fundamentally wrong with that model, a paradigm shift wouldn't entail 'changing' EA, but instead shifting the substantial amounts of philanthropic money in the rationalist-sphere away from EA and towards a different grantee-search model altogether.

Of course, anybody with a job in big EA would very much prefer that the big donors stay within the EA paradigm when choosing how to give away their money. OpenPhil etc. would likely change a bit if asked nicely, but if the fundamentally best way to perform charity were something far away from EA, I am skeptical they would be able to resolve the massive conflict of interest. It seems to me that a much better exercise would be to convince the money behind EA to pick its grantees some other way, rather than convincing professionals to kill their own industry.

Expand full comment
Comment deleted
Expand full comment

I have sympathies for EA. So if I want to do some good, I don't calculate impacts, I just donate to whatever GiveWell suggests.

Expand full comment

Interesting. This is a real deviation from my ill-informed outsider's assumptions about EA as well. Are the core GiveWell charity lists still good for doing ROI-style charity selection? Or are they corrupted by more subjective metrics?

Expand full comment

Assigning numbers to subjective judgments so you can do math with them is peak shape-rotator nonsense.

Expand full comment

Not necessarily. Scott had a post about doing this vs just using intuition on the old blog, I think.

Expand full comment

Or peak wordcel nonsense. Depends on who gets to do the assigning!

Expand full comment

Then what the heck is the point of EA? You give money to people to distribute based on data/math and instead they just distribute it based on their personal biases?
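
For what it's worth, the "data/math" being gestured at here is usually just cost-effectiveness arithmetic. A minimal sketch in Python, with made-up charity names and cost figures (real GiveWell-style analyses involve far more modeling and uncertainty than this):

```python
# Minimal sketch of ROI-style charity comparison.
# Names and cost-per-life figures are invented for illustration only.

charities = {
    "Bednet Charity":    5_000,   # hypothetical $ per life saved
    "Deworming Charity": 12_000,
    "Local Food Bank":   60_000,
}

budget = 100_000  # hypothetical donation pool in $

# Rank by cost-effectiveness: fewest dollars per life saved first.
for name, cost_per_life in sorted(charities.items(), key=lambda kv: kv[1]):
    lives_saved = budget / cost_per_life
    print(f"{name}: ~{lives_saved:.1f} lives saved per ${budget:,}")
```

The dispute upthread is about where the numbers come from: when the inputs are measured (e.g. trial data on malaria deaths averted), the ranking is "data/math"; when they're assigned by judgment, it inherits whatever biases went into the assigning.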

Expand full comment

The single biggest piece of writing that predisposed me to agree with you is exactly the criticism that Scott praised. It did a very good job of describing an incident where OP seemed very slow to move away from one of its programs that was allegedly not working. So it seems that if they had to move way from their "EA" program, or their "rationalist quantitative approach aimed at producing maximum utility for everybody in the world" program, they'd struggle pretty hard to do that. But this kind of seems consistent with what Scott said in the post!

Expand full comment

In fairness, I think most people would struggle to admit that what is essentially their life's work, certainly their career, is fundamentally unsound, and that rather than being the best thing in the world to do, it is just a waste of time and money. I doubt I would be able to say that myself if it came to it.

(I do not think all of that about EA)

Where I disagree with Scott is that he thinks the current EA can die by a thousand specific critique cuts and evolve into something better, whereas I am skeptical that the current EA institutions have the capacity for such change. (And, further, that any movement/cause/philosophy could survive a complete abandonment of its core principles and reason for being.)

Expand full comment

Movements survive complete abandonment of their core principles all the time - they just have to do it slowly enough, or at times of great enough crisis. Look at the evolution of the conservative movement over the last 50 (or, better, 200) years.

Expand full comment

Or look at any religion.

Expand full comment

When the paradigm change happens, I'd give even odds as to whether the name "EA" attaches to the new thing or disappears. Sometimes the paradigm shift can be explained as the way of doing what they always wanted to do but didn't quite do right, and sometimes the people who make the switch early are people who were outside the term.

Consider the way that modern biology is said to be Darwinian even though in the late 19th and early 20th century, Mendelian genetics with its discrete units of heredity from a “gene pool” was thought to be this one idea that Darwinian theory could never accommodate, with its demand for constantly varying traits that bring species completely outside what they had been.

If sufficiently high status EAs adopt some revolutionary new idea for thinking about philanthropy that is at odds with the kinds of measurements and observations they’ve been doing, then the new thing could be called EA.

Expand full comment

But a synthesis was eventually found, and the underpinnings of modern biology are much closer to Darwinism than to Lamarckism: https://en.wikipedia.org/wiki/Neo-Darwinism

Expand full comment

But Lamarckism wasn't the new paradigm competing with Darwinism - Mendelism was! To a Neo-Darwinian, we don't think of Mendelism and Darwinism as competitors, but they clearly were at the time - Darwin said traits had continuous variation around the traits of the parents, so that small differences can accumulate; Mendel said traits had binary variation, so that the only differences possible were those already in the gene pool. Once we understood that most traits were controlled by many genes, and that there are rare mutations in any, we were able to synthesize these.

Expand full comment

Raises the question of whether money can actively harm in certain situations. I can think of a few. Rags to rags in three generations, and all that.

Expand full comment
author

It depends on what you mean by "paradigm". The biggest paradigm I can think of in past EA was that it used to emphasize how the most important thing the average person could do was make/donate money, and now it emphasizes that the most important thing the average person can do is take a beneficial job. If it made another pivot at least that large, I think it would be fair to call it a paradigm shift.

I agree that with sufficiently big paradigm shifts it would have to look like "don't do EA, do something else" - though I'm not sure how you could call that an EA paradigm shift, or what it would take to make EA people agree that this paradigm shift was right.

Expand full comment
Jul 21, 2022·edited Jul 21, 2022

Both of those seem like different tactics for "do what is quantitatively the most good", just a tactical shift that seems like it might be targeted at broadening the potential movement? The new one is much more attractive to young potential activists who'd rather do good directly and don't have much investment in their careers, while the older one targets the rich professional class who'd rather keep working at their pre-existing high paying careers. That shift feels like a shift within the same charity framework which wasn't quite what I had in mind.

If it wasn't clear, my point is that if you think "do what is quantitatively, with a data-driven application of rationalist principles, the most good from a utilitarian perspective" is fundamentally bad or unsound or inefficient, critiquing EA directly is fairly pointless, because the people you're talking to naturally have a lot invested, both career-wise and in terms of personal labor and mental energy, in EA being fundamentally right, and it would be extremely difficult for them to throw that away. Rather, any "critique" would need to take the form of simply starting your favored approach from scratch and, if possible, avoiding confronting EA people and demanding they change their core beliefs directly.

Expand full comment

Two bits of wisdom from my grandfather you may enjoy on this topic. He was a Master Chief in the US Navy.

“As soon as nothing starts to become something it can’t be everything, and that always pisses someone off.” Or: whenever you bring something into the real world and hit actual constraints, people being mad about it is unavoidable.

“Saying that there should be more goodness and less badness isn’t an idea, it’s just a complaint from someone who doesn’t want to spend the effort to figure out how to fix something.”

EA folks seem nice, if a bit at odds with some of my own beliefs. Though to one of your posts I particularly enjoyed, I think those things seem especially prominent to me because I am otherwise identical to them in 95% of the rest of my beliefs. I am always happy to bend over backwards with my spiritual generosity toward complete heathens whose beliefs don’t overlap with mine in any regard.

Expand full comment

Love the first quote.

Expand full comment

Agreed, I'm stealing that one for sure!

Expand full comment