615 Comments
Comment deleted (Jul 20, 2022)

most community churches don't have billions of dollars that they deliberately spend to change the world!

Comment deleted (Jul 21, 2022)

It matters because the stakes are decently high with EA, unlike points of division with an online church.

Comment deleted (Jul 20, 2022, edited)

I'm a consequentialist, but I'm fine with you focusing on cultivating virtue. Virtue seems both instrumentally useful (virtuous people do more good) and intrinsically valuable (people place value on themselves and others being virtuous). So keep on taking virtuous actions; you may never become a utilitarian, but that's not the point anyway.

I agree. I despise virtue signaling. Tell me what you did. That’s what matters. Doing good things does make you feel good, but feeling good because you believe the right things is mental masturbation.

As a sometime fan of virtue ethics and longtime foe of optimizing consequentialism, I have to say this is the first post that made me stop and consider and change my mind a little bit. Thanks.

As a Stoic, I do not think that consequentialism and virtue ethics are fundamentally in tension. The Sage's actions will be the best approximation of the path which yields the best consequences.

Why is there a conflict between Aristotelian virtue ethics and consequentialism? I am not familiar enough with it.

Comment deleted (Jul 21, 2022, edited)

I don't have a source for Aristotelian ethics, since I'm largely unfamiliar with it.

Reading enough Seneca (the Dialogues and the Letters on Ethics), though, it became very apparent to me that Stoicism is rationality, rationality is about winning, and winning means earning victories for humanity. Taking rational action means taking the decision alternative with the highest expected value, given constraints on knowledge plus model uncertainty. So that's how Stoic rationality implies consequentialism.

That's the short version. I was trying to write a longer one, but I'm unsure where to draw the line between succinct argument and tangent, so best not, unless requested. Originally I tried a version where I quote-mined each point, but I don't have a whole day.

Seneca is a joy to read in English. My writing not so much.

Comment deleted (Jul 21, 2022)

"Logos" refers to the study of logical propositions. Sophistry if you ask Seneca, who has a low opinion of that pursuit. (he's an intuitionist, I guess... or undertandably doesn't see what ethics has to do with formal logic)

The Stoics prefer an action that gives them "preferred indifferent things" (money, status, power, health, etc.) over the inconvenience of "dispreferred indifferent things" (poverty, sickness, exile), if given an option. That's a utility function.

Consequences of action alternatives should be weighed relative to one another. When we do, choosing an option with lesser value (ceteris paribus) would be the same as choosing a "dispreferred indifferent." This is quite obvious, and the idea of "more > less" is one the Stoics would have had access to.

"Maximizing expected value" is just that plus probabilities.

The ancient Stoics lacked our numeracy and many mathematical concepts that are ingrained in modern thought. But if you have all that, decision theory becomes an implicit corollary. The Stoics lacked cognitive science. But they did know enough psychology to describe how an aspirational rational person (the Sage) ought to work and where we fools fall short. Read Seneca's "Of Peace of Mind" and you will probably find your own flaws pointed out in an insightful manner. That's because he basically rattles off all the things that could possibly be wrong with you. And those are not mere Barnum statements; they are predictable, universal failings of the human condition. The Stoics had theory of mind.

So in Aubrey Stewart's translation, I find that "reason" and "wisdom" might as well have been translated as "rationality", and "wise" as "rational". It would not change the meaning.

The tension is at the extremes. Consequentialists (specifically *maximizing* consequentialists) will follow consequence-based arguments to very different conclusions, ones that virtue ethicists will hold themselves back from making. See "repugnant conclusion".

Comment deleted (Jul 21, 2022, edited)

That's true. I am coming to believe in a blend of virtue ethics and satisficing consequentialism. Here "satisficing" is a neologism meaning "good enough above some bar, stop comparing".

I think the latter goes a long way towards keeping the good (rigorous debate) while doing away with the bad (miserable perfectionism and repugnant conclusions) of utilitarianism.

BUT! I agree fundamentally that it's still a system, and systems serve us, not the other way around, and the virtue of virtue ethics is in never really losing sight of that.
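
A minimal sketch of that satisficing rule, with invented scores and an invented bar, just to show where the comparing stops:

```python
# Satisficing vs. maximizing over the same scored options (scores invented).
options = [("give_10_percent", 8.2), ("give_50_percent", 9.1), ("give_90_percent", 9.6)]

def maximize(opts):
    # Never stop comparing: return the single best option.
    return max(opts, key=lambda o: o[1])

def satisfice(opts, bar=8.0):
    # "Good enough above some bar, stop comparing."
    for name, score in opts:
        if score >= bar:
            return (name, score)
    return None  # nothing cleared the bar

print(maximize(options))   # ('give_90_percent', 9.6)
print(satisfice(options))  # ('give_10_percent', 8.2)
```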

Comment deleted (Jul 21, 2022, edited)

The idea of satisficing is that it doesn't matter, because both things are above a certain reasonable waterline. The idea of blending it with other systems is that you could turn to those for arguments against it, either before or after the satisficing, but especially after. The idea of making sure systems serve us instead of the other way around is that everyone can tell this is a bad conclusion, so we just ignore the system instead of following it.

I think repugnant conclusions about constructed thought experiments are mostly sophistry. Unless there's a real-world situation where I would feel compelled by Stoic virtue to divert a trolley onto a consequentialist decision maker before he does something implausibly yet impactfully stupid, there's no conflict?

"Be an EA worker for EA causes instead of a doctor" is a very concrete point of contention here, and for a lot of people, a repugnant conclusion.

Ditto "It's a moral crime to eat ice cream or have a child because the <numerical> value you could provide to future persons is greater".

Peter Singer's Drowning Child is absolutely not a hypothetical problem. It's a bundle of direct real world problems brilliantly presented *as* a hypothetical so that once you've made your armchair, intellectual answer, you've bought in to a practical problem that could be worked on, now, right now.

I think both pro and anti would typically agree that EA is where the rubber hits the road on trolley problems.

Comment deleted (Jul 21, 2022)

See my other comment; I think moral systems are best blended with a satisficing component. I think virtue ethics does a better job doing this implicitly, by people's preferences and what they consider virtues (e.g. the virtue of moderation), whereas consequentialist and deontological systems need this bit spelled out and maybe even stamped on the forehead.

Yes, I can kind of see that. But that's a problem with projecting an armchair, intellectualized argument onto the real world, ignoring many of its complexities.

I think that forswearing ice cream for guilt about the children and devoting yourself to EA causes over all other obligations or concerns on consequentialist grounds is not justifiable.

I do not predict that this course of action will be the best utilization of someone's talents for the general benefit of humanity (which, as a Stoic, I do agree is what one should care about).

That's a longer critique though. I might enter it into the contest, if I can write it persuasively.

I agree. Elsewhere in this thread someone linked this great bit of writing which includes a few such examples: https://michaelnotebook.com/eanotes/

I don't know if it's a good use of your time or not, but it would definitely be great to see someone enter the contest to critique these problems of scrupulosity from a stoic perspective, and I'd certainly enjoy reading it.

Comment deleted (Jul 20, 2022)

What did he really believe then?

(I deleted because I was completely off topic and probably wrong)

Everyone should feel they have the right to be wrong. You’d get more effective critiques that way. Why do humans get so attached to their ideas? Experiments depend on failure. People in practical fields seem to understand this much better.

To some degree, effective critiques in practical fields are less personal. Of course people can still invest their identity in a particular practical theory, but it is harder and less common.

More than being afraid of being wrong, I have a complete lack of desire to defend my ideas, even if they are right. I find discussions emotionally unbearable.

Not claiming it is a good thing.

Comment deleted (Jul 20, 2022)

Thanks!

Agreed!

(Well, I might write some other comments too about the content!)

What's EA?

Effective Altruism.

Have you really never played the Sims? https://wikipedia.org/wiki/Electronic_Arts

The Euro Area, the official name for the Eurozone.

Eusko Alkartasuna, a Basque political party.

The Evangelical Alliance.

Equivocal Acronym

or just X for short.

Electric Allies, it's an AI alignment group.

Eä is the Quenya name for the material universe as a realisation of the vision of the Ainur. Scott just forgot the umlaut. He's sloppy like that.

Sloppy yourself. It's clearly a diaeresis.

Spectacularly done.

Expected Answer

EA is the true name of SCP-001.

Everything and Anything

Evolutionary Algorithm

Interesting. Seems a big issue in society in general (not unique to EA) is critiques "that everyone has agreed to nod their head to." This post helped me understand how such critiques can be perpetuated because they help people ignore more serious, specific critiques that threaten individuals' relative status or would require them to work/think harder.

By the way, is this argument a tiny bit reminiscent of the old-line Marxist argument that identity politics is a distraction from class conflict? (Which is not to say their solution to class divides is correct...)

Unsure, because both class conflict and racial justice can be posed as generally good things to nod along to and do nothing practical about, or as specifically challenging calls to action.

That's part of the reason, I suspect, that Marxism is so well accepted in civilised society?

Absolutely, vague distaste for capitalism in theory doesn't mean you actually have to do anything about it.

Marxism does expect Marxists to be revolutionaries and to do things about it. It's just that modern Marxists have come to see the Revolution much in the way that modern Christians see the Second Coming - something much to be aspired towards but not actually expected.

Maybe the other way around. Expected but not actually aspired to.

I think it's a general rule that most people do not consistently act as you'd expect them to if they thought their beliefs were true. I wasn't really making a specific point about Marxism - although obviously noncommittal support for Socialism is a useful way to signal that you're socially conscious and care about the lower classes, especially while sipping Champagne with your middle-class friends.

I really like the analogy, and think that it is not accidental. Those religions and ideologies that eventually survive are usually those that mutate to follow the strategy described by Scott. And so, Christianity transformed from expecting the rich to give up their money and the ruler to lower the damn taxes into a religion that just wants you to admit that you are a sinful little thing, without demanding much in particular.

Christ demands you give up your sin. Which usually people can't do without admitting they have sin to give up in the first place, hence the necessity to admit being "a sinful little thing." Which sins are the most important to give up seems to be what Christians argue about. Love of money? Sexual immorality? Etc.

This doesn't sound right to me. Religion tends to do best when it's demanding enough that it promotes ingroup solidarity, but not so demanding that it brings about near-certain personal ruin.

When a religion asks little of anyone, it might do OK if it is promoted by society but won't go anywhere if it isn't.

Amidst general religious decline in the US, what religious expressions are actually growing? The Amish and the Hasidim. Not exactly the least demanding.

That sounds like Bolshevism rather than Marx. Marx thought communism was inevitable, hence calling his socialism "scientific".

You have well articulated Voegelin and the problem of ideology and immanentizing the eschaton.

In that vein, vague distaste for consumerism doesn't mean you have to stop buying shit.

And lack of engagement with the specific historical events. Such as: state control = centralized control = a system a lot like the one you want to take down. Personally, I’d rather have a hapless ruler like a Romanov than live in Stalin’s monarchy.

I think polite society accepts a watered-down version of Marxism that few actual Marxists would agree to.

Yes, interesting point -- both critiques are potentially ones where everyone can stand around sipping champagne without actually having to do much.

Reminds me of a criticism of a lot of mainstream anti-racism discourse (typified by "white fragility" and corporate diversity workshops) that treats racism as a moral stain that white people must expunge from themselves via awareness and self examination. But doesn't say anything about specific actions that might cost actual money or inconvenience people.

Banks will happily have someone in to tell them how their industry is perpetuating colonialist conceptions of ownership, but would be far less comfortable with "you should fire some of the white men who make up 80% of your managers" or "change your loan assessment policies that disadvantage minorities in these specific ways."

You’ve identified two different modes of critique, which one could call the philosophical and the journalistic.

It’s a lot easier to do the philosophical critique than the hard work of finding the anomalies within the existing paradigm, whether it concerns the relative benefits of two similar psychoactive chemicals, the efficacy of a philanthropic intervention, or the orbits of the planets.

Note that we have a plethora of theories of dark matter and quantum gravity but rather few, and quite expensive, ideas for how to gather the data that can prove some of them right and some wrong.

This mismatch reflects the fact that theory is relatively cheap and high energy physics experiments very expensive. I suspect that is equally true of pharmacology and social interventions.

To bring this back home to EA: Perhaps not enough money is being spent on measurements.

Or, alternatively, we should accept that systems change slowly, that some money will be spent ineffectively or even counterproductively, and that too much worry about optimization is really a path to neurosis that does nobody any good.

Has "EA is too high-neuroticism" been done yet?

On some podcast, I think Robot Friends, the case was made that EA is just secular scrupulosity.

I like that!

I wrote a looooong comment on Matt Yglesias's substack a while back which I won't repeat here, but:

I don't object to EA itself. Only to utilitarianism, a completely silly theory which many EA people seem to endorse.

When you're dedicating effort to making the world a better place--as opposed to helping specific people you care about, or just enjoying your life like a normal person--you should absolutely try to be as efficient as possible. EA is great!

But people shouldn't believe that improving the world is the only thing they should ever try to do. Anyone who actually believed that would end up neurotic and miserable (there's your "secular scrupulosity").

Apart from the fact that there's probably no such thing as "utility", that makes the theory self-defeating.

I feel like you're conflating "making the world a better place is good actually" with "this is the only thing you should ever be doing". I'm not going to argue that there aren't EAs who are neurotic and miserable, but I think most of us are able to compartmentalise and have normal human goals in addition to grand philosophical aspirations. There's some tradeoff obviously, but since being neurotic and miserable is counterproductive this is actually the correct thing for a consequentialist to do.

As for whether utility exists, I find consequentialism gets more appealing the less rigidly you define the utility function, because I don't think human fulfilment is reducible to a single dimension.

Yes, absolutely. If you compartmentalize (and don't feel bad about it) then you're an effective altruism advocate but not a utilitarian.

Hating on utilitarians is one of my philosophical hobbyhorses (along with hating on libertarians) so I couldn't resist commenting.

I don’t see why a utilitarian can’t compartmentalize and not feel bad. It seems to me that a utilitarian should be able to say that you are a thinking and feeling being whose desires matter, and that while you should definitely make some changes in your life that will clearly make big differences in the world at relatively little cost to yourself, the little tweaks around optimizing usually aren’t worth the costs to your own interests.

But the problem, at least for most ACX readers, is that there are billions of people in the world whose interests you can affect. A utilitarian who was really a utilitarian wouldn't be allowed to stop at some reasonable point.

It would probably be more effective (swidt) if I never posted about metaethics again and just encouraged everyone to become Bernard Williams-pilled like I did. But anyway--

I agree with Williams that ethics is fundamentally different from natural science.

Science is an exercise of theoretical reason. It seeks to answer the question: "what is the state of affairs/the truth about X?"

Ethics is an exercise of practical reason. It doesn't seek the "truth" about some external reality. It seeks to answer the question: "what should a given person do in a given situation?" And since the way you act in a given situation is determined partly by your character, it also has to answer the question: "what kind of person should you be?"

I think one reason utilitarianism seems plausible is that people miss the importance of this distinction. If you say "*The best outcome* is the one where utility is maximized" it sounds tautologically true--assuming that the concept of utility makes sense, which is very iffy.

But that's not what utilitarians are claiming. What they're claiming is:

A "*The best thing to do at any given moment* is the thing that maximizes utility."

Or, possibly:

B "*The best kind of person to be* is the kind of person whose actions will maximize utility over the course of a lifetime."

Option A doesn't work because it's self-defeating. If everyone had the kind of personality that would make it happen, everyone would be unhappy.

People who support compartmentalization are endorsing option B, but Williams didn't think that would work either. It's the inspiration for his famous "one thought too many" argument.

The problem is that even with option B, your most fundamental desire still has to be utility maximization. All your other desires have to be instrumental. You'll have to say to yourself, for example:

"Because I love my spouse I often do things for their benefit, instead of doing other actions which would benefit strangers more. But even though my only fundamental desire is to benefit humanity, I'm still allowed to feel so strongly that I can't help doing those things. My love for my spouse is a heuristic which leads to utility maximization, because a moral principle that didn't allow people to fall in love would reduce overall utility. That's why I love them."

Nobody actually thinks like this (I hope). And anyone who doesn't (i.e., anyone who does have non-instrumental desires other than universal benevolence) isn't really compartmentalizing. They're just not a utilitarian, even if they think they are.

The content of a moral theory has to be described by what it tells people to do or be, not by the outcome it produces. (This is the theoretical/practical distinction, more or less.) If the theory says "Love your spouse enough that it makes you do non-utilitarian acts, because that tends to maximize utility over the long term", a person who obeys the theory will become a person who does non-utilitarian acts for their spouse *because they love them*--not because it tends to maximize utility over the long term.

In other words, they won't become a utilitarian and the theory itself isn't actually utilitarianism. It's "Have multiple desires like a normal human being... and don't think, even in the back of your mind, that only one of them is fundamental."

This is the example Bernard Williams gave:

Suppose you're a utilitarian and you're relaxing on the beach. (Not sure they're allowed to do that, but whatever. Maybe you needed a break from saving humanity.) You see two people drowning far out to sea. One of them is a stranger and the other is your wife.

If you're an Option A utilitarian, you'll say to yourself. "I must first rescue the person I have the best chance of rescuing."

If you're an Option B utilitarian you'll say: "My wife is drowning. And in a situation like this, you're allowed to give preference to your wife."

Williams said that was "one thought too many."

Is it a question of how you feel or how you think? Or both?

If some alternative to utilitarianism suggests you have a burden of obligation you can actually fulfil, then you can fulfil it, and you needn't *feel* guilty, and you also don't need compartmentalisation as an *intellectual* workaround.

While there are criticisms to be made of utilitarianism, I don't entirely grok yours. Most moral systems set a standard above what people are capable of. "Feeling bad" about something you don't intend to change reduces utility, so it makes sense from a utilitarian perspective to avoid it.

Some of us allocate a certain quantity of our efforts to 'doing good.' The question then remains "what is good?" And each moral system has an answer to that.

Not being perfect according to a given system is different than not adhering to that system. Few people are perfect according to their moral system. Striving combined with accepting one's shortcomings makes more sense than just drawing a bullseye around wherever the arrow of one's life made impact.

I'm happy to criticize utilitarianism for often focusing on the short term, or for denying evolved wisdom present in some deontological systems. Basically, we can fault utilitarianism for always assuming that people are omniscient, when deficits of knowledge might be better addressed by adhering to well-worn paths than attempting a 'best guess' at utility.

But the faster that an environment changes, physically or morally, the less likely that traditional methods will be helpful.

Ultimately, it's hard to test whether or not a particular action is 'good' without having some objective results that can be measured. Utilitarianism allows for those tests. That kind of humility, of potentially finding out that one's assumptions are wrong, is valuable. The process is far from perfect, but the transition from predominantly traditional systems of morality to systems whose impacts could be tested at least a little was a worthwhile innovation.

What is the difference between saying:

-'You should maximize utility. It's OK if you only try this during some of the periods during which you plausibly might.'

-'You should pursue virtue by striking a balance between efficiently helping others and enjoying your life.'

?

> Most moral systems set a standard above what people are capable of.

I find that doubtful. Ethical egoism doesn't. Moral nihilism doesn't. Contractualism and game-theoretic ethics don't. Evolutionary ethics doesn't...

The free will people or the political opinion?

The political opinion.

I tried to get myself canceled on Matt's substack by saying that even though they have superficially different opinions, the people who think public policy has only one valid goal (maximizing utility, maximizing freedom, or maximizing fairness) are all "basically autistic".

I'm sad that no one actually canceled me.

This feels like you're just using the word in a way that's technically correct but basically just exists to win arguments. I try to avoid semantic debates because they're totally pointless, but whatever. Utilitarian has so much baggage that I personally prefer the term "consequentialist" for my own morality.

Utilitarianism has this problem while other moral systems don't because it specifically values maximisation, you could obviously make it easier if you bounded it and said saving a million lives is no better than saving one life, but I personally don't think moral truths should be ignored just because they're inconvenient. I think most people would describe themselves as utilitarian because they think the philosophy is a useful guide to action when facing moral uncertainty, not because they pretend that all of their actions are maximising utility - there's just too much uncertainty for that in the real world. It's a philosophy, not a description of your actual behaviour.
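
The bounding move mentioned above has a standard form; as a sketch, with an assumed cap $B$:

```latex
u_B(x) = \min\{\,u(x),\; B\,\}
\qquad\Rightarrow\qquad
u_B \text{ ranks all outcomes with } u(x) \ge B \text{ as equally good.}
```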

You seem to derive utility from smugly pointing out the ancient wisdom that being maximally virtuous all of the time isn't possible, but I'm unsure of who the audience for that is. We're all aware of our human shortcomings already.

Maybe you wouldn't have to compartmentalise if you didn't believe in utilitarianism -- maybe compartmentalisation is a workaround.

Yes, I think that's a good way to put it. Some people seem to think that utilitarianism is like quantum mechanics: it's capable of being the truth even though the human mind hasn't evolved in a way that lets us fully understand it, or believe in it without some kind of psychological gimmick.

But that's the science-vs-ethics distinction that Williams talked about. A theory of physics can be true even if our brains can't process it. But an ethical philosophy isn't true or false; it's livable or not livable. You can choose from a range of livable philosophies but if a philosophy isn't livable by humans as actually evolved, there isn't anything right about it at all.

What, are we not allowed to have transcendent values or something? Do we just settle for whatever instincts worked in the ancestral environment, and hope that's sufficient?

The way I see it, a moral philosophy is meant to be more of a map than a destination to relax in; it needs to have something to say to the people who already consider themselves moral. It seems to me that EA appeals to people who are dissatisfied with "livable" philosophies and want something more ambitious.

I've never understood why people take "it would probably make you unhappy" as a good reason to curtail their commitment to helping the world. For one, it seems obvious that your emotional reaction to various aspects of life is malleable in the long run, but also, would it not be virtuous to suffer anguish in order to help people? If anything it should probably relieve neuroticism and anxiety to be dedicating your life to helping instead of spending time on other things you know are not as virtuous. Maybe part of this is that I reject utilitarianism, especially psychological utilitarianism, in favor of something like Christian deontology, but it's always seemed to me like the obvious conclusion of EA is that we should become philanthropy monks and that everyone is just inventing lazy kludges to dodge this conclusion.

There's a reasonable case that Christianity obliges us all to literally sell everything we have and give it to the poor. Most of us are not doing that, so it's probably a good thing that forgiveness is a big part of the faith. And it's not like we don't have practical examples to follow: https://en.m.wikipedia.org/wiki/Francis_of_Assisi

I guess what I would push back on is the idea that it will make you unhappy - I don't think me being unhappy would actually make the world a better place, and while doing the right thing is likely to involve some hardship and suffering, that's not the same as being unhappy - there's definitely something to be said for the idea that virtue is its own reward.

Having read some of your other comments in this thread, I think we just agree on most things, but let me see if I can explain my objection here in clearer language. It seems like the approach to personal EA engagement that Scott and other utilitarian EA people endorse is, "donating to EA is morally good, but if you tried to follow the utilitarian math it'd drive you crazy, so just choose an arbitrary threshold like 10% of your income." I would much rather say, "donating to EA is morally good without constraint, so whoever can give more time/energy/money to EA is doing more good than otherwise, but the fact that you aren't St. Francis or a monk etc. is just an ordinary sin and you'll be forgiven for it." I guess the attitude around sin and forgiveness is part of it. But I also disagree with the idea that you should tell other people there is a cutoff where you should stop giving, because even if you'll be forgiven for sin, we should try to avoid it in what ways we can instead of accepting it as a constraint.

"Utility" functions much like "enthalpy": a convenient mathematical abstraction over a messy collection of loosely-related phenomena.

I would say the main criticism of utilitarianism, from a secular perspective, is that whatever "utility points" you gain is inevitably wiped out by Death. I mean that both in terms of one's individual death as well as the heat-death of the universe. After death, none of those things will matter or have any lasting impact into the future. Utilitarians are trying to win a game with no rules, no referees, no penalties, and no prize for winning.

In my mind, the only thing that makes sense in a secular context is to maximize one's subjective sense of meaning and well-being for the time that you are alive. Almost certainly that will involve helping others. But if you are doing that to the point of misery then perhaps you just need therapy (or to hit the gym).

If you are religious, then you believe that there are eternal things which transcend death. In that case it makes sense to maximize the utility of those things.

Scrupulosity is for the religious.

The name does not cure the disease. How does one escape scrupulosity? After all, the argument seems so ineluctable:

CAUTION: INFOHAZARD FOLLOWS. Imagine the Queen of Underland explaining all this to Prince Rilian and his rescuers while casting mind-fuddling incense onto a brazier and playing a hypnotic drone ambient.

Given a choice between a good action and an evil one, which should you choose? The good one, of course. That's what good and evil mean.

Given a choice between a very good action and a merely slightly good one, which should you choose? Surely the better one. That's what better means.

Given a choice between a good action and a slightly but definitely less good action, which should you choose? Still the better one, right? How could you defend seeing a choice between two actions, and choosing the worse?

Therefore you must always do the very best thing you possibly can, as best as you can judge it, all the time, forever. Maximising utility isn't the most important thing, it's the only thing.
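
For what it's worth, the charm compresses into symbols; this paraphrase assumes exactly what the Queen needs, a single total "goodness" ordering $u$ over the available actions $A$:

```latex
\begin{align*}
\text{(P1)}&\quad u(a) > 0 > u(b) \;\Rightarrow\; \text{choose } a  && \text{(good over evil)}\\
\text{(P2)}&\quad u(a) > u(b) > 0 \;\Rightarrow\; \text{choose } a  && \text{(better over merely good)}\\
\text{(P3)}&\quad u(a) > u(b) \;\Rightarrow\; \text{choose } a      && \text{(any edge at all)}\\[2pt]
\therefore&\quad \text{the only permissible act is } a^{*} \in \arg\max_{a \in A} u(a) \text{, always.}
\end{align*}
```

Breaking the spell means rejecting (P3), or rejecting the assumption that such a total ordering exists at all.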

And if you nod along to all that, there you are, charmed by the Emerald Witch, or as I think of this evil spirit, eaten by Insanity Wolf.

MAXIMISING UTILITY ISN’T THE MOST IMPORTANT THING

IT’S THE ONLY THING.

WHILE YOU’RE SLEEPING

PEOPLE ARE DYING.

WOULD YOU PREFER

INEFFECTIVE ALTRUISM?

BURNT OUT?

WORK EVEN HARDER!

WHEN DOING GOOD

“ENOUGH” IS NOT A THING.

Note that "good.action" is not synonymous with "obligatory action". Charitable donation is good but not obligatory.

But can a thing be good to do but not obligatory? That is the fundamental issue. I gave the Emerald Queen's answer above, but I didn't conjure that from whole cloth. Peter Singer, in his younger days, made the same argument. Here he is in his foundational paper "Famine, Affluence, and Morality" (https://en.wikipedia.org/wiki/Famine,_Affluence,_and_Morality) (numbering mine):

1. "I begin with the assumption that suffering and death from lack of food, shelter, and medical care are bad."

2. "[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

3. "The strong version [i.e. (2) as just stated] does seem to require reducing ourselves to the level of marginal utility. I should also say that the strong version seems to me to be the correct one."

Quoth the Emerald Queen.

I say "in his younger days", because I heard an interview with Singer in his later years in which the question was put about caring for your own daughter no more than anyone else's. He rather uncomfortably replied that that was one way you could live, suggesting that he had unresolved doubts. On the other hand, when he won the $1M Berggruen Prize last year, he put his money where his mouth is and gave it all away.

I don't know if Peter Singer has formal ties to EA, but it is clear from public statements that they both think highly of each other. So there's a puzzle. What basis can or does EA have for not demanding everything?

If anyone has a coherent argument against ethical maximalism, it will be the first I've seen. All I've come across in the academic literature is assertions to the contrary, expostulation that it is absurd, and invoking magic words such as "scrupulosity" and "supererogation" to name the things they want to be true without showing that the things named exist. All of these responses have also occurred in this thread.

I don't have an answer either, but then, I'm not any sort of A, let alone an EA.

Here's a simple one: if something is an obligation, you will be punished for not fulfilling it... so if you don't get punished for not doing X, X was never an obligation.

Here's another: ontologically, obligations aren't transcendent platonic realities; they're social agreements, like promises and contracts. (If one *ought* to do something, to whom is it *owed*?) Singer is only obliged to give away his prize money because he publicly committed to doing so.

Supererogation is no more ontologically suspicious than obligation. Neither is made of atoms.

Morality is not dependent on who sees what you do. Singer did not give away his prize money because he publicly committed to doing so, he did so because he already thought it the morally obligatory thing to do, prior to telling anyone. Nothing in his work suggests that punishment plays any role in his concept of a moral requirement to do the right thing. Moral obligations are not social agreements; morality does not consist of doing what other people expect of you.

The problems I see in attempted defences of supererogation have nothing to do with the issue of whether supererogation is made of atoms. The problem is the fundamental one: when can it be good to not do the good that you can? Singer's answer is never. The good that you can do is the good that you must do. "Supererogation" is the concept that one need only do "enough", wherever one draws that line, beyond which lie the things that are the good that you can do but the good that you need not do. The problem is how to justify any such line at all.

I reject (2). Now what?

No argument from me. Is there anyone in the house to argue for or against Singer’s Second Axiom?

"[I]f it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it."

I’m hammering on this point not because I have a committed, reasoned out position on this, but because I don’t, and I do not think that anyone else does except Singer, yet it goes to the very heart of EA. I have read various papers defending supererogation and the finitude of duty, and criticising utilitarianism for its total demandingness, but I have not found coherent arguments in any of them.

Singer offers a coherent argument, but my likening of him to the Queen of Underland and Insanity Wolf shows how little it sways me.

Personally, I hold to the finitude of duty, and draw it much narrower for myself than any EA does. At the same time, I am aware that I do not have a foundation for this. For all practical purposes I do not need one, any more than anyone needed Russell and Whitehead to take 350 pages to prove that 1+1=2 before they could do arithmetic. But in the end, their magnum opus was necessary.

I feel like that "without thereby sacrificing" part is too easily assumed in discussions of this subject. Are there good, explicit arguments for why "I like nice things" and the like don't have comparable moral importance, rather than just assuming it implicitly?

There are certainly arguments. Here is one made by Thomas Aquinas, quoted with approval by Singer:

"Now, according to the natural order instituted by divine providence, material goods are provided for the satisfaction of human needs. Therefore the division and appropriation of property, which proceeds from human law, must not hinder the satisfaction of man's necessity from such goods. Equally, whatever a man has in superabundance is owed, of natural right, to the poor for their sustenance. So Ambrosius says, and it is also to be found in the Decretum Gratiani: 'The bread which you withhold belongs to the hungry; the clothing you shut away, to the naked; and the money you bury in the earth is the redemption and freedom of the penniless.'"

Or filtered through Insanity Wolf:

TO HAVE ANYTHING IS A THEFT / FROM THOSE WHO HAVE NOTHING

THE LIMIT OF GIVING / IS YOUR ABILITY TO GIVE

GIVE / UNTIL IT HURTS

The quotation can be found online, in a different translation, at https://www.newadvent.org/summa/3066.htm#article7. Later in that section, Aquinas goes even further and defends not only giving up one's own goods but taking one's neighbour's, if it is necessary to succour those in need. If an Effective Altruism party got into government, I wonder what their tax and foreign aid policy would be.

Those who do not recognise divine providence can skip the phrase "instituted by divine providence". Some claim that the natural moral order can be objectively discerned by the same methods of observation and reason as are successful for the natural physical order, without any supernatural foundation. (This doctrine is called "moral realism".) But they do not all agree what those truths are.

How should people decide among these and other views, or arrive at one of their own? What is "the natural order" and how may it be discerned? It will not do to read Singer and nod along to his argument, or read anyone else and nod along to theirs; for even GPT-3 can sound convincing when read in that way.

This can be compared to the problem of objective priors in Bayesian reasoning. One does not get to pick whatever priors will give you the posteriors that you want, and meet criticism by saying "But Muh Priorz!" There are ways of arriving at objective priors for a problem (and much debate over the same, but it isn't relevant here). How can we arrive at objective moral priors? And what would they look like?
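
To make the analogy concrete, a toy beta-binomial update (all numbers invented): the same evidence pushes different priors to very different posteriors, which is why the priors themselves need independent justification:

```python
# Beta(a, b) prior on a coin's bias; observe 7 heads and 3 tails.
heads, tails = 7, 3

priors = {
    "flat prior   Beta(1, 1)": (1, 1),
    "loaded prior Beta(50, 1)": (50, 1),  # "But Muh Priorz!"
}

for name, (a, b) in priors.items():
    a_post, b_post = a + heads, b + tails  # conjugate beta-binomial update
    print(f"{name}: posterior mean = {a_post / (a_post + b_post):.2f}")
# flat prior:   0.67 -- the data do the talking
# loaded prior: 0.93 -- the prior does the talking
```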

Assumption: "Goodness" is a well-ordered, quantifiable thing, even between radically different actions. Second assumption: "Goodness" is an inherent property of actions, and goodness can be aggregated. Third assumption (of utilitarianism): "Goodness" can be compared and calculated stably *between individuals*.

I don't believe any of those assumptions hold, at least when you get down to it. There is no universal "util" that we can calculate; rank ordering actions isn't even theoretically possible outside of very narrow scopes, and actions in and of themselves (disconnected from context and everything else) don't have "goodness" or "badness" as an absolute, abstract property.
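
A sketch of what denying the first assumption amounts to: if "better than" is only a partial relation, then "pick the best action" is undefined for some pairs. The actions and verdicts here are invented:

```python
# With a partial order, some pairs of actions simply have no ranking.
verdicts = {
    ("save_drowning_child", "eat_ice_cream"): "save_drowning_child",
    # ("comfort_a_friend", "donate_a_bonus"): deliberately absent --
    # the pair is incomparable, so there is no fact of the matter.
}

def better(a, b):
    """Return the better action if the pair is comparable, else None."""
    return verdicts.get((a, b)) or verdicts.get((b, a))

print(better("save_drowning_child", "eat_ice_cream"))  # save_drowning_child
print(better("comfort_a_friend", "donate_a_bonus"))    # None: argmax undefined
```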

Sure, when you *can* compare actions, you should choose the best of them. But without the whole mechanism of utilitarianism's flawed hubris of universal calculationism, you can't really "do the math" in most cases. And every time someone tries, horrors result.

Also, utilitarianism ignores intention [*], and can't come up with a sensible level of obligation/burden... it's either all or nothing.

[*] Anscombe introduced the term consequentialism, and rejected it out of hand because she saw the purpose of morality as apportioning praise and blame, and thought it unreasonable to do so on the basis of unintentional good or bad consequences.

Yes, but I also think it's the type of criticism that is acknowledged but ignored. That is, it's a domesticated criticism. I doubt that people in EA realize how big a problem this is. It seems to me that it is probably hobbling the movement quite a bit.

Agree; was sad to see the Lacan trail from a few months ago turn cold.

Me too. I hope someone revives it.

[edit: I guess I'll take a stab at it]

"The fact that this has to be the next narrative beat in an article like this should raise red flags. Another way of phrasing “this has to be the next narrative beat” is that it’s something we would believe / want to believe / insert at this place in our discourse whether it was true or not. "

In a word, *no*.

It is the next narrative beat because many of us *smell* it. It has been the smell of EA for years. It has been the smell of EA for numerous readers of this blog (and its immediate predecessor) ever since you (very gently) said they made you feel bad for being a doctor instead of working on EA. That was a bad smell and it never, ever went away.

I don't know precisely what is the problem with EA, because I'm not putting in the epistemic work. But I feel *very* comfortable saying "I know it when I smell it; EA smells like it has good intentions plus toxic delusions and they're not really listening".

If you want the best definition I can personally come up with, it's this: EA peer pressures people into accepting repugnant conclusions. Given that, *of course* it doesn't want real criticism.

Side note: Zvi did an extremely good job writing the coherent and detailed breakdown of EA's criticism problem from the other side, and I understand that the whole point of this essay is you don't want to give that kind of thing any air, but it's not very nice or epistemically reasonable not to give it air.

Disagree because I think it would be the obvious next narrative beat in *any* case, not just EA. "This big and powerful group seems obsessed with soliciting criticism of itself" just naturally lends itself to "...but they're faking it, right?"

I didn't want to go through the details of Zvi's criticism because it didn't make sense to me - it looked like some sort of Derrida-esque attempt to argue that slight details of the phrasing of every sentence proved that it actually meant the opposite of its face value. I could never see it when Derrida was doing it and I can't see it when Zvi is doing it either.

I agree that it's Derrida-esque. He's never the most legible. So that's a fair reason. But I think the key takeaway is the 21 point list of EA assumptions, and the idea that EA discounts or discourages disagreements with these, and this part specifically is worth giving thought and air.

I would be suspicious when any big and powerful group wants criticism of itself because that follows naturally from reasoning about the nature of big and powerful groups. But also, I would be suspicious for *specific* reasons in certain specific cases ("such and such a group has a history of inviting dissent, then punishing it a year later", or "such and such a group has paid lip service to disagreements the following times, and I expect the pattern to continue"), and I think these reasons could be instructive in understanding, changing, or even opposing the organizations.

In the case of EA, I wouldn't believe it because EA has the hallmarks of a peer pressure organization, and I think the criticism they're most likely to discount is "the negative value of peer pressure outweighs the positive value of your work". That's not a fully general criticism of organizations; it's a specific and potentially useful criticism of one type of organization.

I wouldn't tell a shy conflict averse person not to work for the US Government. But I would tell them to avoid making contact with EA.

What kind of movement/organization *doesn't* have the hallmarks of a peer pressure organization? Isn't that literally the way you get a movement rather than a bunch of random individual choices? Sometimes the form of the peer pressure varies (is it a carrot or is it a stick) but it's always peer pressure that makes us conform enough to form a movement.

Off the top of my head, a lot of political/activist organizations seek to change the minds of non-participants, but only mildly encourage non-participants to become participants. At most, they're targeting a specific sub-demographic for recruitment, while sending a much milder message to the whole demographic.

What sets apart EA (and Christianity, and militant vegans, and ...) is that they're telling people that it's a *personal moral failure* to not join the movement. That's one specific type of movement, and we should distinguish between those and the "milder" movements that only want a small part of your resources/mindshare, or just want you to get out of their way so that they can do their thing in peace.

NB: I've never personally encountered the described peer pressure in my interactions with EAs, but I'm conceding that point for the sake of argument.

I don't think the OP's concern is particularly proselytizing to 'outsiders'. Presumably they would find EA Judaism which only tried to make those born to EA parents feel bad about being doctors also unappealing.

Masonry. You have to get the signatures of two master masons to be admitted to the fraternity, and you have to ask to join of your own accord; they are not allowed to solicit members.

As others have pointed out, there are a lot of movements that only want to funnel a fraction of people into the most active level of membership, and a lot of movements that offer but don't push increased participation.

EA is indeed like other totalizing movements, evangelical Christianity among them, in that it wants literally everyone to advance to a high level of membership and will happily pressure all of the people to do so.

If you've never been to an evangelical church, it's quite a trip. Sermons regularly revolve around the idea that everyone is on the borderline between being a bad Christian and a good Christian and the tacit suggestion that one's commitment level to church functions will make all the difference. I once saw a pastor preach through a case of hemorrhoids as an object lesson in dedication. EA can feel like that.

By contrast, the US government fundamentally isn't a movement: if I want to join it and work for it, they're happy to use me, but if I don't, they don't care; I'm just a citizen and that's fine by them.

You're right that I would find EA Judaism unappealing, but still better for being confined to a clique.

It is the dream of anyone who writes a post called Criticism of [a] Criticism Contest to then have a sort-of reply called Criticism of Criticism of Criticism.

The only question now is, do I raise to 4?

I did it that way for several reasons, including: (1) a shorter post would have taken a lot longer, (2) when I posted a Tweet instead, a central response was 'why don't you say exactly what things are wrong here', (3) any one of them might be an error, but if basically every sentence/paragraph is doing the reversal thing you should stop and notice it and generalize it, (4) you talk later about how concrete examples are better, so I went for concrete examples, (5) they warn against 'punching down' and this is a safe way to do this while 'punching up' and not having to do infinite research, (6) when something is the next natural narrative beat, that goes both ways, (7) things are next beats for reasons, and I do think it's fair that most Xs in EA's place that do this are 'faking it' in this sense, (8) somehow people haven't realized I'm a toon and I did it in large part because it was funny and had paradoxical implications, (9) I also wrote it out because I wanted to better understand exactly what I had unconsciously/automatically noticed.

For 7, notice in particular that the psychiatrists are totally faking it here, they are clearly being almost entirely performative and you could cross out every reference to psychiatry and write another profession and you'd find the same talks at a different conference. If someone decided not to understand this and said things like 'what specific things here aren't criticizing [X]', you'd need to do a close reading of some kind until people saw it, or come up with another better option.

Also note that you can (A) do the thing they're doing at the conference, (B) do the thing where you get into some holy war and start a fight, or (C) actually question psychiatry in general (correctly or otherwise), but if you do that at the conference people will mostly look at you funny and find a way to ignore you.

I know EA only from your post, and now Scott's. I got the same vibe from both of your posts... My takeaway is that the idea of eliminating all suffering/evil in the world is dumb. Suffering is what makes us stronger, builds character, gives us something to fight against (the hero vs. the villain story). I'm not going to say we need more racists, misogynists, or chicken eaters, but trying to eliminate all of them is a mistake. We've turned 'no racism' into a paper clip maximizer... and we should stop.

Suffering never gets eliminated; we just complain about ever tinier molehills as the hedonic treadmill accelerates... and that has been the case since the Industrial Revolution.
