904 Comments
Comment deleted
Aug 25, 2022·edited Aug 25, 2022

One problem with non-x-risk-risks is that it's harder to know whether preventing them is ultimately good.

It's possible that the agricultural revolution should have been considered a serious medium-term risk at some point, but that preventing it would ultimately have been very bad; it's possible that the Black Death helped bring forth the Renaissance and eventually the Industrial Revolution, etc. I'm not sure either of those things is true (probably not), but I think something in that direction could definitely be true, and it's hard to know beforehand.

Focusing on existential risks is a way to get around some of that uncertainty, since it's a terminus for us humans.

deleted Aug 24, 2022·edited Aug 24, 2022
Comment deleted
Aug 24, 2022·edited Jan 27, 2023

"Nobody Is Perfect, Everything is Commensurable" is one of my favorite of Scott's pieces for a number of reasons, and it's worth reading in its entirety. It's 5-10 minutes, and it addresses that specific question at some length: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

deleted Aug 24, 2022·edited Aug 24, 2022
Comment deleted

For a consequentialist, whether or not we should lie about basic moral principles is a strictly empirical question. Has anyone ever attempted to answer it?

Aug 25, 2022·edited Aug 25, 2022

> we should lie to people about basic moral principles and obligations in hopes of increasing adherence. That seems like a rather questionable strategy to me.

I'm a moral antirealist; that's morality functioning as intended.

> Why define "good person" at all? That term is simply irrelevant to a consequentialist analysis of which actions are right or wrong (or rather, which actions are better than others and by how much).

It's obviously a simplistic binary, sure. I'm a big believer in supererogation being a thing, so if people are going to attach any meaning to that binary I can work with it.

> Consequentialism explicitly rejects universalizability. Did Scott just become a Kantian?

"If". But more interestingly: https://slatestarcodex.com/2014/05/16/you-kant-dismiss-universalizability/ Sophisticated decision theories have been a recurring topic going back to the old LW days, and universalizability still has a place in consequentialism, even if only on instrumental grounds.


The general point here is right on. There isn't really any principled reason to point at 10 rather than 1, or 5, or 20, or 50, other than that it's what X personally feels is reasonable. Which leaves you in a pretty impoverished position when trying to convince someone that, say, 5 (or 1) is not reasonable.

Comment deleted

Scott already wrote a much better post that deserves that title: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

Comment deleted

You only see the spicy essay if you subscribe to Scott's substack *and also* his Onlyfans.

Comment deleted
Aug 24, 2022·edited Aug 24, 2022

>A person who genuinely values all lives equally by definition places no special value on the lives of his or her friends, family, spouse, children. We're right to view such a person as a horrible person, because they would not be a good friend or parent etc.

I think it's very reasonable to follow this statement with this query: do you view Siddhartha Gautama, Jesu bar Joseph, those adherents of their teachings called arhats, bodhisattvas, or saints, and their adherent monks as horrible people? Or likewise for any other teaching that espouses universal compassion, regarding those who sincerely follow it?

Comment deleted
Aug 24, 2022·edited Aug 24, 2022

And I think that stepping over the starving man in the street without even a glance to get home to your wife and child is not laudable. Even the cruelest butchers in history had wives and children they loved. Even the man who may one day kill you or me did that. What merit is thus generated, to say "so long as you reach the bar that even a total monster can reach, you are a good man"?

If all men lived as Bodhisattvas, there would be no suffering in the world at all. I suspect your fundamental objection is that you do not believe there are any Bodhisattvas, that the Pure Land is empty, and that there is nothing that can be hoped for or attained beyond the charnel ground of reality, and any who say otherwise are just trying to get you to lower your guard so they can slit your throat; in summary, that actual altruism or benevolence is impossible and everyone who claims to be so is merely a hypocrite. That is a very normal thing for people to believe. It is also a belief that creates immense suffering, both for people far away from you and people close to you. Things can be both normal and bad.

EDIT: I would also point out, in the specific case of the Buddha and monks in general, that the goal is not "personal fulfillment" but "the salvation of all living beings", including one's family. Thus, to a sincere Buddhist, one could very easily argue that becoming a monk is a superior display of filial piety to merely being an obedient son (indeed, this is a point raised repeatedly in Chinese and Japanese Buddhist schools).


>If all men lived as Bodhisattvas, there would be no suffering in the world at all.

Perhaps, but not because everyone's material needs were being satisfied. Maybe such a world would be better, but it wouldn't be better in the way most people care about today.


I would not say they were horrible *people*, because I think applying such labels is very rarely if ever useful. I would, however, say they were people who caused the widespread dissemination of horrible ideas.

deleted Aug 26, 2022·edited Aug 26, 2022
Comment deleted

I'm going to avoid continuing this further. I don't see the point of trying to defend my earlier arguments after having already deleted them.


Fair enough.


And what is it about Buddhism and Christianity that you find so horrible?

I am a Buddhist. Give me the best pitch you have for me to stop being Buddhist.


I made a similar, shorter, angrier critique in a comment of my own, and I'd like to commend your cool.

Comment deleted

Is there anything stopping you from going to church? I'm agnostic, but attend church for the community. The pipe organs are pretty cool, the sermons are frequently interesting from a philosophical and anthropological perspective, and at least where I live the food and socializing afterward are quite nice!


I hear there's EA meetups, have you tried those?


Generally churches will ask you to donate TO THEM, rather than just in general. That's the big difference between something like EA and a tithe.


Charity is bad, ergo Effective Altruism is a net negative compared to the counterfactual.

author

Also from my spicy essay:

Q: All possible forms of assistance, financial and otherwise, just make recipients worse off, for extremely complicated reasons. There are literally no exceptions to this. I promise I’m not just looking for an excuse not to do charity, I would love to do charity, it’s just that literally every form of charity is counterproductive. Weird, isn’t it?

A: Even kidney donation?

Comment deleted

I have frequently done this myself.


It's a neat quip but misses the point. I'm confident that a drink consumed by myself (or presumably C. S. Lewis) won't make anyone's life significantly worse. I can't say the same for a randomly chosen beggar, who is more likely to become an alcoholic, or get drunk and violent, or get drunk and abusive, or get drunk and arrested, or simply get drunk and miss an opportunity to make his life better.


I like this. I mean, in general I have had better experiences with this kind of personal giving than giving to organizations, in spite of the common criticisms. It just feels more human or something.


This would be a better response if EA didn't have a substantial set of people saying 'don't worry about donating now, just study hard and become a doctor and you can give when you are a rich doctor' OR if EA didn't subscribe to Peter Singer's ethical framework.

Or even if I could recall organ donation being brought up in an EA sense before - which could easily be my faulty recall.


>Or even if I could recall organ donation being brought up in an EA sense before - which could easily be my faulty recall.

Anecdotally, I've seen kidney donation come up multiple times before in EA circles. It's definitely not as common as the GWWC pledge, but some people are doing it and a lot are probably aware of it.


Cool, that is good to know.

How universal are blood donation and bone marrow registration?


I feel like this is drifting away from the point that started this thread: some people claim that they don't donate to charity because it's ineffective, but a lot of them are partially motivated by the fact that they just don't want to (because extremely reliable, direct interventions like blood donation do exist). FWIW, I do think that speculating about people's unstated motivations is pretty sketchy, and I don't 100% endorse Scott here. But I also understand where Scott's annoyance comes from: some people spend a lot of time criticizing EA despite not actually caring much about charity, and a lot of the disagreements that people bring up aren't actually fundamental; they apply to one specific cause area or intervention rather than the EA philosophy itself, like people who equate EA with longtermism (or even x-risk with longtermism).

But to address your comment, I think that blood/bone marrow donation are sometimes brought up in EA but not all that common. If your point is that a maximally ethically consistent EA would be doing all of those things on top of donating, most of the responses that you'll get from EAs will probably boil down to this: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/


I think it's also worth asking how often donated blood is a bottleneck in availability. I used to donate blood fairly regularly. Although I mainly stopped for other, more idiosyncratic reasons, it did feel less pressing after I did a bit of searching and found that there's rarely a blood supply shortage to begin with. Money has unlimited fungibility, but if you donate blood in a non-shortage scenario, the limited shelf-life means your donation may not have any marginal value.

Personally, I have seriously considered donating a kidney, since even before I became aware of the EA community. But it turns out that, at least as of when I looked into it, things aren't really set up for convenient untargeted organ donations, where you just give a donation to be allocated to the most appropriate recipient. Besides that, my deciding objection was that it would simply cost me too much in paranoia: I'm the sort of person who, with only one kidney, would spend too much of my life worrying about the prospect of something happening to it.


I may not be explaining myself well, but it was Scott who brought up kidney donation as a mic drop.

The note about the marginal effectiveness of blood donation is fair enough... but it doesn't hold for bone marrow registration. Nor does it apply to things like egg or embryo donation.

My point is still that EA as an organized philosophy is more about the E than the A.


Ironically, it's harder to donate blood as an EA since following an EA-endorsed diet (low in animal products) will make it more likely that you'll be medically disqualified. I've tried to donate blood three times and failed every time.


Also, extremely anecdotally, the people I know who identify at least loosely as EA are quite a bit more likely to give blood, be nice to strangers, and generally be kind. This idea that people will use giving to charity as an excuse to be terrible in other parts of their lives is profoundly untrue in my experience.


For people who might be interested in considering kidney donation, here's a terrific post by my friend Virginia Postrel, who gave a kidney to Sally Satel:

https://vpostrel.com/articles/here-s-looking-at-you-kidney


She's very glamorous.


I've followed both their work for a long time, and both Satel and Postrel are very remarkable women. I remembered, somewhat randomly, that Postrel had donated a kidney, but for some reason I didn't know it was to Satel. Good choice.


Virginia and Sally are both just wonderful human beings and also thinkers worth reading.


I believe there’s also some EA institutional funding going to orgs like Wait List Zero, which promotes organ donation. Personally I’m an EA who was convinced by the arguments for kidney donation so I’ve donated mine.


Good on you for doing that!

I’m actively exploring the idea of giving away one of mine too. I’ve always sort of embraced the idea on a philosophical level, but I had my gall bladder removed a while ago and the experience demystified organ removal surgery a bit for me and made it feel more like “oh yeah, this is a thing I can actually do.”


Dylan Matthews -- one of the writers of Future Perfect, Vox's EA vertical -- donated a kidney and has advocated for people doing so.

https://www.vox.com/future-perfect/2018/10/15/17962134/future-perfect-podcast-kidney-donation


I recall hearing a Canadian EA complaining about how much of a hassle it was to donate a kidney because he had to take a vacation day from work and pay for everything himself (due to a law against any and all forms of compensation for donating organs). I might donate a kidney someday though it's not top-of-mind atm.


In the US, at least, they finally changed this. Everything is paid for, and lost wages are covered. It's much better now; I had to travel to a hospital, and they paid for my flight and hotel.

Aug 24, 2022·edited Aug 24, 2022

>Q: All possible forms of assistance, financial and otherwise, just make recipients worse off, for extremely complicated reasons.

That's not how I'd put it, however. Charity is a tax on, and a sink of the resources of, the <caring/my ingroup/whatever>, to the benefit of, at best, anyone, and realistically, disproportionately of the <uncaring/my outgroup/whatever> (if only because those who donate rarely benefit from charity themselves, thus culling themselves from the pool that can potentially benefit from it). Which seems <evil/perverse/self-destructive> to me. It hardly matters how much it benefits the recipient, because it's the giver that ends up worse off.

P.S.: there's of course a bit of motte-and-bailey going on on my side, since one can hardly argue against "you should help people if it makes you happy", but EA, and charity at large, relies on a general injunction and social pressure toward giving help, which is what I object to. Helping others is like smoking a cigar: I can enjoy doing it when I feel like it, but if someone tells me "well, you're smoking very inefficiently, here's a device that will help you go through 10 of them in 15 minutes, so you can pump up your game", it turns an activity that exchanges a small negative for a small positive into one that exchanges a larger negative for a larger negative.

Comment deleted

That argument gets weaker when there's strong and pervasive social pressure to give, such as would happen if there's broad acceptance of this proposed social norm of donating at least 10% of your income to charity.


Exactly. I've felt pretty bad about the EA tithing pressure before, but seeing posts like Henry's makes me feel better about not donating.


I love this analogy.


Money almost certainly has diminishing marginal utility for individuals though. Do you think a million dollars to Jeff Bezos or one of the Waltons has as much personal value as it does for an individual struggling with unpaid debt?

Even if we assume that donors get no personal sense of fulfillment from it (which may sometimes be, but certainly isn't always the case,) the core argument is that the money has more utility to the recipients than the donors. That might not be the case for some specific charities, but it seems fairly extraordinary for it to not be the case for all charity in general.
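The diminishing-marginal-utility argument here can be made concrete with a toy calculation. This is only an illustrative sketch, assuming logarithmic utility (a standard but contestable modeling choice) and made-up wealth figures:

```python
import math

def log_utility(wealth: float) -> float:
    # Logarithmic utility: each extra dollar matters less
    # the more wealth you already have.
    return math.log(wealth)

EXTRA = 1_000_000  # a hypothetical $1M transfer

# Marginal utility of the same $1M at two (made-up) wealth levels:
# a centibillionaire vs. someone with $10k to their name.
gain_billionaire = log_utility(100e9 + EXTRA) - log_utility(100e9)
gain_debtor = log_utility(10_000 + EXTRA) - log_utility(10_000)

print(gain_billionaire)  # tiny (roughly 1e-5)
print(gain_debtor)       # roughly 4.6
```

Under this toy model the same transfer produces hundreds of thousands of times more utility for the struggling individual, which is the shape of the claim being made; whether real-world utility actually looks logarithmic is a separate empirical question.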


1. What makes you think utilitarianism is remotely correct? When you see repugnant conclusions at the end of every line you look it might be time to rethink the basic premise.

2. Jeff Bezos having more money isn't just about Jeff Bezos getting fulfillment from his marginal dollars. Money in Jeff Bezos's hands tends to grow and create further value. When Bezos was merely a rich executive, you could have made the same argument, and then his marginal dollars would have gone to bed nets and there would be no Amazon.

The only people that should be involved with anything like EA are people who have no vision and no use for their money (which is plenty, but titans of industry aren't going to be in that list).


>1. What makes you think utilitarianism is remotely correct? When you see repugnant conclusions at the end of every line you look it might be time to rethink the basic premise.

I think that if the repugnant conclusion is a legitimate extrapolation of our intuitions of goodness, then maybe we should rethink how repugnant it actually is; but if it's not, then the problem lies more in our assumptions about what we actually value than in the idea that it's best to maximize the total level of goodness.

There are different flavors of utilitarians, who differ in where they get off at the end of the lines but agree most of the way to their destinations:

https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/

I think that I would be much, much happier in a society where the basic assumptions of people's moral framework were utilitarian than in our current one. Like, really overwhelmingly. It's almost impossible for me to overstate the degree of my current dissatisfaction which could be addressed by more utilitarian organization; it's like being a devout Christian living in a society organized around worship of Huitzilopochtli, complete with mass human sacrifice. The Christian might have disagreements with other Christians about the correct interpretation of Christianity, but the difference between their positions, and how they would actually organize a society given the option, is only a tiny fraction of the difference between them and the Huitzilopochtli worshippers. Most utilitarians wouldn't implement Repugnant Conclusion-style outcomes if they could, and if they end up destroying the world or something in some extreme eventuality, it's only at a point where they have such a vast degree of power and resources that I think non-utilitarians probably would have failed to avoid destroying the world as well.

>2. Jeff Bezos having more money isn't just about Jeff Bezos getting fulfillment from his marginal dollars. Money in Jeff Bezos's hands tends to grow and create further value. When Bezos was merely a rich executive you could have made the same argument and then his marginal dollars would have went to bed nets and there would be no Amazon.

I actually didn't think Jeff Bezos was an ideal example for pretty much this reason. Most of his wealth isn't cash which he can do anything in particular with, it's valuation of Amazon, which he has a large stake in. But if I searched for the name of a billionaire with a high net-wealth valuation and low plausible utility of their business (maybe a cigarette company magnate or something?) you wouldn't have recognized their name. The Waltons are probably a better example than most who people have a reasonable chance of at least recognizing in aggregate. You can do this sort of thing with some wealthy people, where you assume that the purposes from which they derive their wealth are even more valuable than effective charity, but the idea that you can do it with all of them is frankly pretty absurd. If you want to make a case for that, why not start with the *least* societally valuable wealthy person you can come up with? If you can argue in the extreme case that even the most unworthy personal uses of extreme wealth are more valuable to society than charity, it would be a lot more meaningful than arguing that for the most unusually valuable ones.


Why care about utility?


In short, not-having-the-entire-discussion form, I explored a whole bunch of ideas for things I might care about, and felt it was the only one that made sense.


Because "utility" is defined around maximizing outcomes people care about. The first rule of Tautology Club is the first rule of Tautology Club.


Yeah but utilitarians spend an awful lot of time on outcomes people demonstrably do not care about.


>an individual struggling with unpaid debt?

Which one? There's an uncomfortable fact that many of the "neediest" people in our society are functionally black holes of self-destruction and pathological bad choices. Money to Jeff Bezos at least probably doesn't have a negative marginal utility, or generate terrible moral hazard.


Sure, but that's still a dodge of the broader moral issue when many are not. I have a close friend who's been struggling with a hand-to-mouth existence for years now, and I've given her money on a lot of occasions so she'd be able to make ends meet. If there were a straightforward solution to her financial woes within the powers of my best judgment and her abilities, she'd actually take my advice on it.

The existence of people to whom charitable donations would be badly targeted isn't an argument against the value of carefully targeted charitable donations.


Sure. My point and hobby horse is more "beware of second order consequences and utility holes". The "carefully targeting" part is a very non-trivial issue.


I have a position that the better you know the recipient of charity, the more effective the charity can be. I think EA activists are going to learn (as many NGOs and government entities have) that a large portion of their giving ends up wasted. I have seen Scott reference several such examples, so I know EAs are somewhat aware at least.

I very much like the form of local giving that is exhibited by you giving money to help a friend. You can gauge the level and timing of need to best target improvement, and you can also determine if she's able and willing to positively use it. I think trying to aggregate that approach into a large organization giving out lots of money loses too much in that process and has far less value.

There's also the unintended consequence, such as destroying local economies. The one I first heard about was farmers in Sub-Saharan Africa abandoning farms due to being undercut by Western food charity. They could not sell their food due to the abundance of temporary free food available, and did not have the capital to wait out the process. This made Africa less resilient locally, and therefore dependent on future charitable giving in order to survive. In other words, worse off in a long term sense. I can't help but wonder if bed nets are no longer something that Africans can economically create for themselves, and therefore foreign donations are the only source for them.


>Do you think a million dollars to Jeff Bezos or one of the Waltons has as much personal value as it does for an individual struggling with unpaid debt?

As Ichonochasm remarked, that individual sounds like someone who consumes more than he creates, leaving the bill to the rest of mankind. So yeah, even if the marginal utility of Bezos's dollar drops to 0, I'd rather he keep it.


Consider the possibility that the general social good that will come from a million dollars in the hands of Jeff Bezos might exceed that which will come from a bunch of individuals struggling with unpaid debt.

After all, there is such a thing as the wise and foolish use of capital, and without knowing anything further, it would be not unreasonable to assume that Bezos spends his money more wisely than J. Random Struggling With Debt. There's a good chance that's why Bezos is rich and Mr. Debt poor, after all (allowing that weird exceptions like extreme good or bad luck do sometimes happen).

To take an extreme example, if Bezos would invest that million in a start-up working on a drug for Alzheimer's, which succeeds, that would have immense social value. And if all 100 (say) of the recipients of Bezos's money, were he to donate that million, were to spend it all on Thunderbird and playing Lotto until they were quickly broke again -- then we can easily see it's better, even on social grounds, even on the grounds of maximizing group utility, for the money to stay in Bezos's hands.

That's an extreme example, but there's no a priori reason to think the general principle is obviously unsound, so it's by no means obvious that social utility is maximized by transferring money from he who has a diminished *personal* (consumption) utility to he who has an enhanced personal utility. I mean, unless the pleasure of consumption is the *only* measure one is going to make of social utility, which seems....cramped.

Aug 25, 2022·edited Aug 25, 2022

If you make the scenario maximally convenient for the supposition that the money might have more utility in the hands of a rich person, and maximally inconvenient for the supposition that it might have more utility in the hands of a poor person, then it's not surprising if you find that the money has more utility in the hands of a rich person in that situation. I could also perfectly well come up with situations where a country would be better off if the head of state were a stupid person who didn't know anything about political administration than a smart person who's knowledgeable about political administration. Would you then agree that, although I deliberately picked extreme examples, there's no a priori reason to think that the general principle that it's not better for heads of state to be intelligent and politically knowledgeable than unintelligent and politically ignorant is unsound?


The point of my picking an extreme example was to point out that it's not at all implausible -- which tells you where the general trend lies. If you can pick out an example *that is equally plausible* by which the money is better off in the hands of J. Random Debtor than someone who has succeeded in life and in business, go ahead. Then your equivalence will not be specious.


So, if I understand this correctly, you're conceptualizing charity as some kind of video game: playing the game alone whenever you feel like it, with no strings attached, is fun, but playing it on multiplayer sucks all the fun out of it and makes it too competitive and demanding.

That's... Not How Ethics Work. There is a reason they're called Moral *Obligations*.


I don't really get your analogy. Giving shit away for my self-satisfaction is a, let's say, "sin" I indulge in from time to time, and like most sins, it's fun in moderation and absolutely destructive when you're addicted to it.

I also don't see the point of labelling as a "moral obligation" something I'm not, in fact, obligated to do in any way.


Your "sin" way of thinking about charity does closely resemble my video game analogy: the sin is the game, indulging in it in moderation is playing the game on single-player in a chill way, and being addicted to the sin is playing the game on multiplayer in an addictive and resource-consuming way. It's a mapping, and not a stretched one.

>I also don't see the point of labelling as "moral obligation" something I'm not, in fact, obligated to do in any way.

But, you see, there is no point in talking about being "obligated" without specifying the framework for obligation, or, if you will, a theory of What We Owe Each Other. I technically don't owe you anything, not even leaving you and/or your property safe and inviolable. I'm not *obligated*, by any obvious mechanism that can compel me, to not take your shit or enslave you. You can make me by using the Laws of Physics, which is the ultimate authority all of the universe answers to and to which all other obligations eventually have to be translated, but those Laws seem to be neutral; I can use them against you as easily as you can against me.

One way of solving the above conundrum is playing it out till one of us loses, potentially a life. Another way is getting as far from each other as possible in an environment full of more low-risk/high-return targets for our desires than each other, such as deep space. Both of those solutions have pros and cons; I personally prefer the second, but both are often impractical for several reasons. A third way is for us to agree on a joint framework that specifies what each of us owes the other, in a way that (hopefully) requires little or no violence to enforce and keep.

This framework is ever-changing and sometimes unfair to some parties (which is why I prefer the second, impractical solution of us leaving others alone and them leaving us alone in deep space, but alas, it can't be done yet). Imagine buying a bunch of blacks to serve you and your farmland in the 1850s, only to be forced to give them away barely a decade later. I don't say this in a tongue-in-cheek way: taking slaves away from slave-owners who bought them with legitimate money really is unfair, especially when no one told them not to buy slaves before they did and everybody was buying slaves around them. But the joint framework has to accommodate the blacks, and so we must commit to this unfairness.

All of those words, summarized: ethical obligations exist, and they compel you to do far more than you currently think. If you think they're unfair, well, fair point, but they are also sometimes unfair in your favor. If you don't follow them when they work in others' favor, why should others (who very much can harm you and take your shit) follow them when they work in yours?


What if I told you I don't believe in moral obligations?

OK, I believe you're morally obliged to do things that you have explicitly agreed to do -- if you make a deal with someone then you're obliged to hold up your end of it. But I don't believe you have any moral obligations to random strangers simply by virtue of having been born on the same planet and of the same species.

If you pass a drowning child, you are not morally obliged to save it. It's a very good thing to save it, and I reserve the right to criticise you if you don't, but you're not obliged to.

If you say you _are_ morally obliged to save the drowning child, then it follows by arguments presented earlier, that you're obliged to save millions of others who need help. And you can't possibly save them all. And it's unreasonable to tell people that they're obliged to do things that they can't possibly do. Therefore you're not obliged to save anyone.

Expand full comment

One... interesting (to use a polite word) consequence of your views is that all crimes are morally A-OK. I have never explicitly agreed, in spoken or written words, not to steal, not to kill, not to rape. I live in a city of 20 million inhabitants; what do I owe all those random strangers? According to you, nothing. Why shouldn't I rape a woman just because I happened to be born in the same "country", an even more fictional and arbitrary entity than "Planet" or "Species"?

Maybe you will say something like: rape or murder or theft is forbidden by my country's law, and I explicitly agreed to that law whenever I signed any government paperwork? I would disagree, because that's not how "explicitly" works, but you know what? I can simply kill and rape people from other countries. There, I don't have any obligations towards *those* randos, right? Including the obligation not to kill or rape them.

Do you think this is a self-consistent way to live through life? Do you want to live according to it? Do you want *me* to live according to it in close proximity to you?

>you're obliged to save millions of others who need help

Chad-Yes.jpeg

>it's unreasonable to tell people that they're obliged to do things that they can't possibly do.

Which is why all humans are morally guilty of every terrible thing they could have changed but didn't. Seriously, that's the far simpler way of confronting the horribleness of life: just accept that you're a flawed being who's committing thousands or millions of crimes each day by inaction and move on, trying to fix what you can. This, while not trivial, is a far more defensible and self-consistent position than your "Akshually, moral obligations are not a thing unless I agree to them".

Oh, it's not fair to be guilty without doing anything? But nothing is fair. A person who worked as an Atlantic ship's cook in 1750 is a profoundly guilty human, merely because they cooked food for slavers. Is that fair? No. Is the solution to declare slavery OK because nobody actually agreed not to enslave Africans (including modern people)? No.

Expand full comment

I think this is semantics; I was talking about positive obligations (the obligation to do something) rather than negative obligations (the obligation not to do something). Obviously I believe that there are things you should _not_ do, I wouldn't use the word "obligation" here but you can if you like.

Here's a fun question: by your standards, is a slave who chooses to have children, knowing that they too will be born into slavery, profoundly guilty?

Expand full comment

If your moral system finds caring about your "outgroup" to be repugnant, are you generous towards your in-group?

I think it's fair to say that if you're morally opposed to caring about strangers you're not going to like EA.

Expand full comment

Haha I think the favorite excuse I've used to not give to charity has been:

Q: "If I give now, while I'm relatively poor myself, then I'll basically be giving hardly any money away at all/ helping hardly anyone! But if I just focus on myself *for now* and try to get rich, then really help people AFTER I've accumulated a large amount of capital, then I could help those in need so much MORE!!!"

A: But you're obviously never going to get rich so this is just an excuse to do nothing. Also, if everyone assumed that fame and fortune were prerequisites for doing anything "truly good", then our world would be a pretty selfish/ lonely place. Oh wait...

So yeah, what I mean to say is that your arguments are basically just rock solid, and I feel pretty called out. It's so much easier to "wish that we did more to help the poor" when this means "someone else doing something that I tell them to do down the road when I'm dictator" rather than "me parting with a chunk of my paycheck right now."

Expand full comment
Comment deleted
Expand full comment

Well, the general assumption is that well-directed charity has a "social compounding" aspect. e.g. if you donate $30 to buy a malaria net some sickly Sudanese 2-year-old doesn't get malaria and die, and then grows up to invent a cure for cancer or at least become a sainted leader that transforms Sudan into a peaceful prosperous democracy.

I mean, people are generally pretty negative about charitable giving that has *no* compounding, only some immediate effect, like you give $100 to the street person and he buys himself a nice dinner for $25 and spends the rest on a fresh supply of meth, so that your donation has only a short-lived temporary effect. People hate that. So they usually assume there's a fair amount of social "compounding" going on (the bum buys himself a nice dinner for $25, a shower for $5, and a used but clean suit for $60, and with that he gets a job, sticks with it, moves into an apartment...invents a cure for cancer...).

Expand full comment

Yeah, I'm very familiar with these compounding arguments, because I've used them :). And I mean, I don't think they're all wrong. In general, I really don't think we should all feel obligated to save the world anyways. Unless we're super rich and powerful, then we should. But I do think we should feel obligated to do *something* "good" now, rather than just focusing on accumulating wealth and power so we can eventually do good. But I don't think that good needs to be something grandiose or even in the realm of charities/ "altruism." I think we should just all treat each other better/ be more kind and thoughtful. I think we should try to have honest careers that at the very least don't make the world worse. And if, within our tiny spheres of influence, we're giving back rather than being parasitic, I think that's probably enough to make us decent human beings. Giving ten percent might be a bonus, but first off I think we should just try to not be parasites. If we achieve that, we're already pretty awesome.

Expand full comment

If you're very early in your career and you expect to make a lot more money later (even if you don't "get rich") then it probably does make more sense to focus on how you can optimize your career than on donating right now. I agree you should do *something* good now, not least because it will help you get in the habit.

Expand full comment

That's completely reasonable. It's a lot easier to be broke and to say "hey, people with money should be giving more, I just happen to not have any" than to actually build a successful career. And building a successful career that's honest/ positive could be seen in some ways as a "good deed" in and of itself as well, there are more ways to do good in this world than dishing up soup for the homeless. I think "different stages of life" fits into all this somehow as well.

Expand full comment

If it helps, I did that. Waited a long time before donating more than trivial amounts, realized I'm actually in a life moment where I have extra cash, made a sizeable donation to a charity I considered effective.

Now I'm back in no-donate mode, but I'm somewhat more chill because I know that next time I have considerable extra funds, I'll most likely do it again. Which btw is one of the best reasons to donate regularly regardless of income - it's building a habit.

Why don't I do it? Because I just don't like donating - the act itself. I'm selfish and stingy and it doesn't feel good to give my money to anything other than possibly causes I feel something about. This is a personality trait and has nothing to do with my goals and values, so making money and donating more occasionally suits me just fine.

Expand full comment

Whoever your friends are who are telling you not to post this spicy essay, you should know that they are being very unaltruistic. They have had the chance to behold it in all its glory, and are refusing to share that same transcendence with the world.

Put those friends aside. Listen to your soul. You want to post it.

It’s what the world needs.

It’s what the people demand.

Expand full comment

+1.

I come here exactly for this kind of writing.

And if that's too-spicy-for-real-life... I dunno... I think we should then pray for an x-risk to kill us all and let nature rebuild with something better than us.

Expand full comment

Ask someone who can take the reputational hit (a Joe Rogan-like figure) - to post it under their name and see what happens. Link to it...with a comment like, "Man, I wish I could have written something so spicy like this!"

Expand full comment

What was that, about utilitarism or, basically, any logical system/philosophy leading to absurd conclusions if it was taken to extremes/if you were looking for edge cases?

Expand full comment

There are many ways kidney donation becomes problematic if all people were required to donate a kidney to random strangers. If donated kidneys are cheap, we *would* get stuck in an inadequate equilibrium with less effort to come up with a better, permanent solution (such as a replacement kidney grown from the patient's own cells, which requires no immunological medication and deprives no one of their spare kidney). Like we have become stuck with everything we have cheap availability of.

Some limited number of philanthropists donating money or their kidneys to random strangers -- and a few more donating to their close relatives -- probably isn't too problematic. A universal rule would be.

One could defend oneself by saying that one isn't suggesting it to everyone, but that doesn't apply here: your Q/A is targeted at the generic you, that is, the public at large, that is, everyone.

Expand full comment

Q: "Well, if hypothetically, every human being got together to cure all the world's ills, the marginal value of charity would decline to zero!"

A: "Cool, I would love to teleport to this imaginary universe you are talking about! Over in this one I just donated ~10% of my net worth after failing to altruistically donate a kidney (ruled out by the hospital due to previous kidney stones), while approximately 99.99% of the population failed to do the latter (situation unclear on the former). The situation seems highly resistant to change, too."

Once we are in the universe where we're all ants chasing each other's asses in endless charity, I promise we can have this discussion. Hell, even once we get, I dunno, halfway there, we can start talking about slowing down. But here in the actual, real world, this is such a ridiculous criticism it belongs in the same category as Pascal's mugging.

Expand full comment

I really don't understand this type of argument; it would be very obvious if we were getting anywhere near the point of absurdity

Signed, another failed kidney donator (they're (for good reasons) quite picky!)

Expand full comment

I don't understand how your reply is related to my comment at all.

Expand full comment

I literally can't understand how you can not understand this? Your comment appears very straightforwardly to be the Q. If you intended something completely different then you did not communicate this.

Expand full comment
Sep 1, 2022·edited Sep 1, 2022

My argument was that a donated kidney is a suboptimal way to fix a broken kidney, but if donated kidneys become "cheap" by demanding that everyone with a healthy one give it away for free, citing ethical rules (or frankly, probably even by demanding everyone sell at the going market rate, because some people are desperate), we are stuck in an inadequate equilibrium:

- sick people get subpar replacement technology and become immunologically compromised

- donors are worse off

- if donated kidneys are cheap, there is less investment in better technologies

I don't believe kidney donation is a step towards curing the world's ills, except maybe in some very local context that wouldn't generalize. Moreover, the generalized solution would not be described as "everyone coming together" but more like "after a sizeable vocal minority comes together, they can bully the rest of us". (Both are part of your Q, but both are assumptions on your part.)

As another example, volunteer work can achieve some good locally. However, demanding that everyone do volunteer work is effectively demanding universal conscription. Such a system can be a grossly inefficient way of using the affected individuals' time.

In general, universally enforced ethical demands to give valuable stuff away for free are a form of subsidy. All of this was to argue that any serious claim of the form "people should do stuff" should come with an off-ramp (I admit I didn't state that conclusion upfront).

Expand full comment

Kidney donation isn’t an effective refutation of charity’s ineffectiveness. Physically weakening the most altruistic among us to strengthen the median needed kidney recipient has a very obvious utilitarian downside: kidney donation comes with many risks and the decrease in the donor’s overall life utility output that comes with it could very well outweigh the median kidney recipient’s gain in utility. In a model where overall societal progress is disproportionately driven by a small subset of individuals(a view that I think you endorse), it's just a matter of tweaking your model parameters until promotion of kidney donation is no longer a net gain in utilitarian terms.

On a less devil's-advocate note, this is my obligatory criticism of EA: they don't commit to the bit. Where's the utilitarian evaluation of the average gains in the Sub-Saharan countries receiving donations of bed nets, compared to countries that aren't? Where's the EA research paper on the net effects of cash transfers to the poor, and on how much of that support ends up in the pockets of local warlords? Why isn't EA grappling with the fact that Sub-Saharan Africa is actively regressing in GDP despite their attempts at alleviating poverty in those countries?

If you're gonna commit to utilitarian principles, then commit damn it! Don't stop at assuming that every human life is equally valuable just because it's morally unpalatable to consider otherwise, that defeats the entire purpose of utilitarianism! Clearly EA is comfortable being Utilitarian absolutists about their weird woo-woo stuff like AGI and longtermism, but they shy away from using the same logic on their more conventional pursuits.

Expand full comment
Comment deleted
Expand full comment

What COVID? According to worldometers data, almost every sub-Saharan country other than South Africa (which is a weird case in many ways) has had less than 5000 COVID deaths total, in the past two years. That's noise on their typical mortality counts.

Expand full comment

I had literally this objection the other day FROM AN ECONOMIST.

Expand full comment

That's actually a good point. I would consider donating a kidney to a relative or a dear friend; but I would not donate a kidney to a random child in Africa or to some unspecified number of future humans. In other words, my kidney would be a personal gift, not systematic charity.

I don't agree with the viewpoint that organized charity always makes people worse off, but answering that with "what about kidneys" is IMO not going to be persuasive for people who do subscribe to this view.

Expand full comment

We should be funding research into genetically modifying humans to grow multiple redundant kidneys which are easily detachable for donation.

Expand full comment

We're already growing human-compatible kidneys in pigs, this seems like a neater solution https://www.nature.com/articles/d41586-022-01418-3

(unless of course you believe that pig lives should be treated as equivalent to human lives in which case... bad luck I guess?)

Expand full comment

I'm signing up for an altruistic kidney donation, does that free me of my 10% cost obligation?

Expand full comment

Technically, kidney donation is one of the standard examples where relying on donations makes recipients worse off - having a heavily regulated but paid market for organs would be a hell of a lot better. Probably even paid organ harvesting from dead people would raise the numbers by some factor.

With free donation only, we keep wallowing in a local maximum of charity and serendipity.

Adding some TDT and coordination to fill the gap from the individual decision to the final outcome is left as an exercise to the reader.

Expand full comment

Hi Scott, could we translate this essay into Portuguese for our EA Website? Here's something about our work (still a draft: https://docs.google.com/document/d/11lZZyewfCDiaSL8yGjYbu394bJQeq-_hDGYBsKPbNiw/edit?usp=sharing) Feel free to use my email.

Expand full comment

My problem might be similar to this, or deeper: I don’t even know how to tell what’s right, what goals to work toward. Different people say different things about questions like whether to give aid to (potentially) dangerous people. That doesn’t mean there are no good forms of charity, it’s just that _I_ can’t be sure enough that any particular form won’t be bad. When (over a decade ago) I asked questions about François-René Rideau’s post “Why indiscriminate charity is immoral” (https://fare.livejournal.com/104397.html), he said “Giving without discrimination is worse than not giving”.

He also said “Regarding stem-cell donations to anonymous recipient, I think it's a great idea […]”, and I hope his logic also applies to kidneys. But others might oppose giving kidneys to strangers, so much so that it’s not even possible where I live. Anyway, I hope the “human-compatible kidneys in pigs” that Melvin mentioned will make such questions irrelevant in a few years.

Expand full comment

Just re-reading this as I ruminate on the FTX situation; this comment makes it seem like you probably missed KidneyGate, aka "this white woman donated a kidney and wanted her friends, who are women of color, to care about it, what a bitch". So for at least some types, kidney donation is problematic white-saviorism now

Expand full comment

A: Are you donating 10% of your income to anti-charity?

Expand full comment

I'm donating 100% of my income on my needs & goals, so...yes?

Expand full comment

That’s not anti-charity. You should be devoting 10% of your income to efforts to subvert charity (e.g., politicians who want to revoke the 501(c)(3) status of churches) and remove/defund welfare programs. If you actually believe that charity and handouts are bad for people, then you have a corresponding moral obligation to work to stop the handouts.

Expand full comment

Totally missing the framework here.

It isn’t his argument that handouts alone are bad. It’s that giving other people money, unless it’s for things you want, is on net just distorting the world and adding noise.

Not saying I agree but I can see where this comes from.

Expand full comment

It appears though that his argument *is* that most handouts are bad because they are mostly targeting and advancing the causes of his outgroup and therefore things he does not want.

I don’t think there’s a way to reconcile this without discarding the idea of moral obligations, which really discards the idea of ethics at all.

Expand full comment

Revoking the special tax status of churches - and much more, defunding all government "welfare" programs plus a major part of the other "welfarers" (Red Cross!, Oxfam, Greenpeace!, Amnesty...) - should be your aim in any case. As an EA, because they are ineffective. As any kind of altruist, because their overall results are usually negative. As Uncle Scrooge, because your money is taken and no value produced. Even if you have a plush job at these misleading burners of charity money. (A good school system might be an "investment", not welfare. The one you and we have is: sadism. The one Indian parents pay a few bucks for privately: affordable. Education ministry: "idi na chui!" - 24.8.2022 Slava Ukraini!)

Expand full comment

I know it's not a democracy, but this gets my nomination for highlighted comment!

Expand full comment

I found this hilarious and laughed really hard, for what it’s worth.

Expand full comment

Basically everyone investing in a company exploiting humans or harming the environment is doing so. I wouldn't be surprised if there is a relevant overlap with people donating to charities.

Expand full comment

Yea, I'm not sure why rationalists struggle to see the abject failure of the non-profit incentive structure. The kidney example below is obviously not effective altruism - lots of people in kidney donation are responding to financial incentives! And you'd get even more kidney donors if financial compensation were allowed.

The real question is whether it's easier to fix the non profit complex or to fix capitalism where it's failing.

Expand full comment

> The real question is whether it's easier to fix the non profit complex or to fix capitalism where it's failing.

I see it as the Profit/Non-Profit incentive structure, since there is an interplay between the two and they do not necessarily exist in a vacuum without the other.

Particularly true in healthcare, where a transition from non-profit to profit has resulted in a myriad of issues (from higher costs to decreased access to care). Others can note that the transition to profit has spurred even more advanced drug technology (e.g. in cancer care). Weighing these pros and cons is subjective, though the conversation tends to end at "is healthcare a human right?" If it is, a non-profit incentive structure is more aligned and a profit one is less aligned. However, since profit incentives have a stronger short term reward function and humans are humans, there would still be a degree of bias in favor of profit structures by a subset.

To wrap it up, there are some areas where non-profit structures could be more favorable (e.g. healthcare delivery) and others for profit structures (e.g. drug development), and they tend to coexist in a larger ecosystem together (e.g. healthcare). Despite whatever balance there is, I think there is an inherent bias towards responding to profit incentives even when non-profit may be preferable.

Expand full comment

I have a tough time disentangling the immense role of the government in your healthcare and health-science examples. Of course the government exists in every industry, but healthcare is just overwhelmingly dictated by regulation, to the point where prices for identical goods and services can vary by hundreds of percent.

And this is going to sound like a nit pick but it honestly isn't: Non-Profit isn't the same thing as "Not for profit". I'm guessing your examples of high quality healthcare delivery are "not for profits", as in, they run a business that's generally cash flow neutral. Maybe you're thinking of Providence or Intermountain, both of whom have done really well. That business model is super duper distinctly different than a foundation which relies on donations to fund their activities rather than customers who willingly cough up money for the services they receive. The donation:customer earned revenue ratio matters a lot in how an organization behaves.

Markets are just really good at allocating resources, aligning incentives, and validating what consumers actually want. Non profits really struggle to do that and I've never seen a counterexample - though I'd be interested in hearing about one!

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I would argue that "Effective Altruism" as a distinct entity only starts at maybe the "Cause Prioritizations" level of your tower of assumptions. The two layers below are shared by so many others -- ranging from for-profit insurance companies to government bureaucracies to religions -- that they can't be fairly claimed to be part of EA. "Cause Prioritizations" is also the level of the tower at which people start to raise serious objections to EA. That is not a coincidence.

If all you want is for people to do more from the lowest two levels of the tower, that's fine. But that is not EA, and claiming it is so is very close to a motte and bailey fallacy. People can donate 10% or do charity work in less developed countries because they are observant Christians or the government gives tax incentives or whatever without touching anything resembling EA at all.

If you want my spicy take on this, I think you added those lowest two levels purely as a defensive measure to protect EA. Why not add layers underneath? There are even more foundational assumptions like "Suffering exists", "Cause and effect exist and we have free will to affect it" or "We exist". These are also necessary. But you stopped there because the motte would be too obvious then.

Expand full comment
Comment deleted
Expand full comment

Charities and charitable giving often don't go beyond the "we should help other people" bullet on the bottom. Seriously considering how much effort to put in, and actually thinking about the opportunity cost, is not common. Imagine asking almost anyone in your life detailed, probing questions about how much effort/resource they expend for other people, or tell them that St Jude, while good in isolation, is not a good recipient of their donations compared to other opportunities. Do they get mad?

Expand full comment
Comment deleted
Expand full comment

You guys/gals crack me up. Why does everyone nowadays need a cause? Why do You all need a movement to feel self-important?

I live on $29 or $30K social security because I inherited enough money to buy a house. Since that time I've given the money I was spending on a condo to charity. That's about 20%. Actually, it's 20% this year because I'm cutting *back*.

This is nothing new. I've been giving 10% since I started making good money. When I wasn't making good money, I cut back. Then I inherited the money and went forward.

You guys and gals that live by "shoulds" are, unfortunately, ruled by "shoulds." Do what You can and feel good about Yourself. No movement required.

Expand full comment

"Do what you can and feel good about yourself" is a "should," jt, and you're advocating it on a public forum, pointing to yourself as an example. I'm not knocking what you do, which certainly seems good to me; I just think it's a little ironic that you chose to be snarky in describing it.

Expand full comment

It's the prioritisation (or at least equal prioritisation) of people far away versus people close to you. And this intuitively doesn't feel right to a lot of people.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Which major religion does not have a version of what Christians call the Golden Rule? "Love thy neighbour as thyself" "Do unto others as you would have others do unto you" What allows EA to claim this idea as theirs when it is so ancient and widespread?

Expand full comment

Their idea (still not new) is that everyone is your neighbour to the same extent.

Expand full comment

I'd say that it's not the idea itself that's particularly new; rather, it just recently became plausible (though still dubious) that your donation of $5k (or whatever) can save a life in some African hellhole. The more radical idea is that saving 20 of those lives is obviously better than a donation of $100k to a museum/university/library/animal shelter near you, which people would likely agree with if asked directly but don't tend to think of on their own. Ironically, now that AI alignment non-profits are supposedly even higher priority than that, EA's potential appeal to normies is even less straightforward.
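The arithmetic behind that comparison can be made explicit. A minimal sketch, assuming the rough $5k-per-life figure mentioned above (the constant and function name are illustrative, not measured values):

```python
# Assumed, rough cost to save one life via a top global-health charity,
# per the ~$5k figure cited in the comment above. Not a measured constant.
COST_PER_LIFE_USD = 5_000

def lives_saved(donation_usd: float, cost_per_life: float = COST_PER_LIFE_USD) -> float:
    """Expected lives saved by a donation, under a constant cost-per-life assumption."""
    return donation_usd / cost_per_life

# The comparison in the comment: $100k to a museum vs. the same sum to a
# top charity, which under the assumption above buys roughly 20 lives.
print(lives_saved(100_000))  # 20.0
```

The "constant cost" assumption is itself doing real work here: at scale, marginal cost-per-life rises as the cheapest opportunities are exhausted, which is the point made elsewhere in the thread about thinking on the margin.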

Expand full comment

“ The more radical idea is that saving 20 of those lives is obviously better than a donation of $100k to a museum/university/library/animal shelter near you, which people would likely agree with if asked directly but don't tend to think of on their own.”

I don’t know enough about EA to know what the answer is to the following: does EA think that nobody should donate to museums at all, or is the assumption that there will always be people donating to museums and this is more of a “hey, you donate $X to museums and symphony orchestras and schools, have you considered this other alternative” type of thing?

If it’s the former, then that does give me pause. To very badly paraphrase Mr. Keating from Dead Poets Society: “Medicine, bed nets, AI risk management: these are all worthy pursuits, and necessary to sustain life. But literature, music, education, art: these are the things we stay alive *for*.”

Expand full comment

I think "Nobody is Perfect, Everything is Commensurable" touches on this some, but the basic outline as I understand it is kind of "all of the above:"

-Realistically people are still going to donate to museums and so on.

-If everyone got onboard with EA enough that there was serious risk of running out of funding for museums, etc, then there would be so much money available that all the more urgent priorities would be fully funded and there would still be some left over for museums, etc.

-And finally in the least convenient possible world where there really is a hard choice between museums and saving people from malaria, it seems at least to me impossible to defend leaving children to die so that we can have a nice museum- even if it would be a sad, drab world without nice museums and libraries it's still better than one with children dying painfully.

Expand full comment

You need to think on the margin. In the current world, an additional donation to buy malaria nets will do much more good than one to an art museum, even if in some other world the situation might be reversed.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

The main EA idea here is probably best put by Yudkowsky as "purchase fuzzies and utilons separately" https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately

In practice, nobody spends 100% of their "spare" money on the most efficient charity that they can find, and it's clearly more virtuous to donate some part of your non-charity budget to that museum than spending it on ice cream or whatever, as long as you understand that this shouldn't cut into your primary charity spending.

Expand full comment

If you want to throw 10% of your income at charity with provable practical results and then, say, another 20% at supporting the arts, I for one would have no objections.

Expand full comment

> but it just recently became plausible that your donation of $5k (or whatever) can save a life in some African hellhole

That doesn't sound all that recent. Western missionaries have been doing this kind of work in Africa for centuries, and people have been donating to them for just as long.

The cost per life saved has presumably only gone up with time as the super-low-hanging fruit has been taken. As recently as the early 1980s you could probably save a life for the cost of a few bags of flour during the Ethiopian famine.

Expand full comment

IMO, there's a *level* where that idea is true.

Most people in the real world have concentric rings of closeness, right? It's just easier for people who have no sense of community to pretend that everyone is their neighbor.

Expand full comment
Comment deleted
Expand full comment

To each his own. But I think there's a difference between how You prioritize Your charitable giving and considering some random guy in Africa Your neighbor, right?

Expand full comment

Luke 10:25-37 - The Good Samaritan

Jesus specifically tells a parable where the despised outgroup (Samaritans) can be a good neighbor, even while the ingroup (priest and Levite, the two highest social classes) failed to be. It's not new, or even remotely created by EAs.

Expand full comment

I'd agree that EA shares premises with at least some institutions up to the "cause prioritization" level. It specifically takes the ten percent of income benchmark from the preexisting practice of tithing, so that's one other institution right there which offers the same degree of commitment to giving for the sake of others.

The thing that sets it apart is that among institutions that do encourage a similar degree of donation, the targeting tends to be really, *really* bad. Tithes are mostly used for maintaining the infrastructure and administration of the church, and when there's an excess of tithes, it tends to lead to fancier church buildings and richer church officials. In that context, the targeting level of the chain is actually really important. Under some pretty reasonable assumptions, it's entirely feasible to make donations where almost 100% of the value of your donation is margin of improvement on how useful it would have been if you'd donated it to a church.

Expand full comment

The 10% threshold is also very characteristic of EA, and most non-EAs (apart from a few observant religious people) don't meet it. It's entirely legitimate to include those layers, and historically they have been key to the movement since the beginning.

Expand full comment

I don't agree that others don't meet it. Governments in many countries offer tax incentives for donations up to an even larger percentage of income than 10%. Are government bureaucrats therefore effective altruists? No. This belief in 10%, or taking actions to get people to donate 10%, is in no way unique to EA. This is mythologizing.

Expand full comment

Right, but do you give 10 percent of your money to causes you think are good?

I don't, and maybe I should. And perhaps a group of secular people doing so is helpful social proof to encourage me.

Expand full comment

So when an evangelical Christian donates 10% of his income to his church to get gay marriage banned (because going to hell is infinite negative utility, so saving even one person from hell is worth any finite sacrifice, even the lives of quadrillions of happy people existing millions of years into the future), they are an effective altruist? Surely not! The definition of EA must be far more specific than "Donate 10%" or "Maximize benefit based on utilitarian calculation" to make any sense.

Expand full comment

I dunno, that sounds like EA reasoning to me.

Expand full comment

So you are willing to state, in public, that evangelicals working to get gay marriage banned are effective altruists? You consider them to coherently fall into that set?

Expand full comment

I quite seriously would love to see pro-life EA become a thing.

It’s quite likely that there are more cost effective ways to prevent abortions than current practice!

Expand full comment

Pro-life EA would have surprisingly similar outcomes to regular EA, because the truly effective pro-life strategy is to prevent unwanted conception in the first place - so investing in sex ed and contraception.

In practice, I'd expect this looks like giving to orgs that do sex ed and contraception that don't also do abortions.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Internet porn, Holmes, internet porn.

If official stats are to be believed, abortion rates have been steadily decreasing in recent years. And while statistics concerning sex and sexual activity are notoriously unreliable, people seem to be having less sex, and with fewer partners.

I suspect, without firm evidence, that the rise and availability of internet porn plays a major role here. Can't get pregnant if nobody is having sex. You're probably not having sex if your Friday night is consumed in hand-to-gland combat whilst scoping out monkeyporn.

Expand full comment

I think it _could_ count as EA, it's no sillier than a lot of things that EAs spend money on.

But I'd need them to show their maths. Show me a set of internally consistent theological assumptions that leads to "political campaigns against gay marriage" having the greatest souls-saved-per-dollar-spent ratio (as opposed to, say, handing out bibles in Nigeria or putting drive-through confession booths in Brazil), and then convince me that these were honestly the assumptions you started with (and definitely didn't arrive at backwards to support your preordained conclusion), and yes, I'm willing to call it Effective Altruism.

Expand full comment

I'm a Christian; as far as I know those numbers don't exist (at least not in this world), and I'd very much like to see them if they do.

Expand full comment
founding

I feel like somewhere towards the base of the EA tower is the assumption, "science rather than revelation is how humans can make sense of the world"

Expand full comment

The L in QALYs is "life"; afterlife is out of scope (/s).

Expand full comment

I never said it was unique - AIUI the 10% figure was taken from the religious practice of tithing. I said it was *characteristic*, and it is - the Giving What We Can pledge has been part of EA culture since it was created in 2009. And it's rare (not unknown, but rare) outside EA and some observant religious communities: I'm having trouble finding good figures, but https://www.charitychoices.com/page/how-much-given-whom-what says that the average American gives 2.1% of their disposable income to charity.

Expand full comment

While "some methods of helping people are more effective than others" is probably universally accepted, it's shockingly underconsidered by even pretty intelligent people when donating to charity or spending their time altruistically. The emphasis on it is definitely something EA-specific.

Expand full comment

Exactly. It's a common rhetorical trick--grab something (almost) everyone agrees with and name your organization after it. Then, when someone says they don't like Helping People (the organization), claim that they're really against helping people (the idea). It's utterly dishonest and reprehensible and this essay makes me think much less of Scott for posting it.

Expand full comment

Scott does not say or imply you should give a cent to "Helping People"/any EA org. He says: if you do not like them, commit to dedicating a small but regular part of your income and help people the way you think best (plus: think about how to do more help with your buck). What's "utterly dishonest and reprehensible" about that?

Expand full comment

No, you've completely misunderstood the argument. The point is that most people say they disagree with EA for something at one of the higher levels of the tower, but their actions reveal that their real disagreement is at the lower levels - else they'd donate significant sums to some charity they have reason to believe is effective. Note that *very few people actually do this*, so their revealed preferences are that they do not, in fact, believe in effective altruism (small E and A) enough to act on it. If someone says "I don't agree with you about the importance of animal welfare, but I already donate 10% of my income to [some other charity]" then great, you can have an honest discussion about giving priorities. If someone says "I disagree that it is necessary to ensure one's charitable donations are effective" (this would have been a common position in Victorian England, for instance) then OK, you can argue about that. If someone says "I disagree with the whole concept of charity" (as some people do in this comment thread!) then your value systems are so different that you're probably not going to get anywhere and you can skip the whole thing. But if someone says "I think AI risk is silly, but I don't donate significant amounts of money to a charity that I prefer" then they're being dishonest with either themselves or you about their real reasons for rejecting EA reasoning.

Expand full comment

EA has a list of somewhat hidden assumptions, which I think relate to the specific circumstances of their base (wealthy working people in the Bay area).

For one thing, that there are only two real ways for a person to be effective in charity: monetary giving, or working in a specifically altruistic field. This ignores the vast number of people for whom giving 10% would be a very small absolute amount, such as people who do not work for cash, or who barely make ends meet. I want to point out the women in many, especially poorer, areas of the US who donate their time as unpaid daycare. What would EA say about them? I would hope it would recognize significant value in what they do, but all of the written and spoken assumptions of EA would relegate these people to "not doing enough" based on the metrics we see. They're not giving 10% of their income, and they're not part of an altruistic organization. They're just watching their neighbors' kids so those neighbors can go to work. Making the further claim that such care is inefficient also tries to sneak in a bunch of other assumptions, for instance that group daycare is "better" than a smaller arrangement. I could go on with more examples if I'm not being clear here.

Expand full comment

I think most EAs (including Scott) would say "if you don't have an income, or earn so little that you can't spare any of it, you're off the hook."

Expand full comment

Is that a satisfying answer, or a necessary dodge of the implied assumptions? The next obvious question is how much someone needs to make before they are "on the hook." Can someone who makes $25,000/year be on the hook if that's far more than they need to live? Can someone making $200,000 be off the hook because they live in an expensive city and have lots of kids? There's certainly a line of argument (and I feel that EAs would make this argument) that if you make [large income, even in expensive cities], then you should find a way to donate it. Perhaps by having fewer kids, or living in a smaller home.

EA comes with huge moral implications. Giving people moral permission to just ignore them seems insanely evil, based on the arguments put forward by EAs. Should the takeaway be that the vast majority of people really don't matter to EAs and the goals of EA (as 80,000 hours has pretty much said in the past)? If so, how is that justified in the moral framework as presented? The arguments intended to bring high earners and exceptional people into the EA world make no such distinctions, so adding them in after is problematic at best.

Expand full comment

These are good questions, and I don't have good answers. As someone who's gone through an intensely guilt-ridden religious phase, I think it's important to set a threshold so scrupulous people can tap out of "I should be doing more" spirals. Some people here have donated kidneys, and that's praiseworthy, but a movement that *demanded* adherents donated kidneys would be a hellscape. And as a practical matter, community building can be valuable even if most of the people it brings in don't contribute much in dollar terms. It's interesting to compare it to startup investing: IIRC Paul Graham once said that ~every startup he's invested in is noise on his balance sheet compared to Dropbox and AirBnB, and that when choosing who to invest in he has to consciously ignore the chance a startup will succeed at all in order to focus on the chance that they'll succeed spectacularly. But EA is not bandwidth-limited in the same way, and being more welcoming is probably a better strategy.

Expand full comment

If you want a criterion that handles the tails better than "give 10%", check out the Giving What We Can pledge, which scales based on income.

Approximately every moral philosophy comes with "huge moral implications", and has to deal with the fact that ~no one will act on them perfectly. "give 10%" or GWWC are both social standards aimed at trying to convince people to do more to help the world, not to get people to achieve moral perfection. Relevant blog post: https://slatestarcodex.com/2018/11/16/the-economic-perspective-on-moral-standards/

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

My objection is to the argumentation style. Which is entirely rhetorical sleight of hand, demanding perfection from your opponent before they can criticize anything else.

And you can believe in effective altruism (lower case) while still thinking that Effective Altruism (the entire tower other than the banal base principles) is utter garbage. *That's* the rhetorical trick here--by naming your movement the same as a principle people mostly believe in, you've made yourself immune to attack because you can effortlessly (and fraudulently) divert any criticism to being criticism of the underlying principle. The number of such things is legion.

I'll take an example from a non-culture-war topic (although the culture war versions of this are, if anything, more blatant). MMOs. Developers publish a Balance Patch. People disagree that it actually, in fact, balances things. Stans come back with "well, then since you don't agree with this Balance Patch, you must oppose balance in general". Which is just garbage.

I don't think that last sentence holds. Lots of people believe things and don't follow through, for many reasons, without being dishonest with themselves or about their real reasons. I think AI risk is utterly silly. That is true or false, and EA (the movement) is absurd (or not) for spending most of its (perceived) effort on such things *no matter what I give to charity.* Diverting to my personal actions *has no bearing at all on the value of EA, the movement*. It's effectively an ad hominem argument - "you can't *actually* criticize EA because you're not perfect either." And that's well beneath Scott. The interlocutor could simply say "yes, I should. Doesn't change that EA is total wack." Or "you don't know my situation, so go take your high-horse moralizing and tell it to someone who needs to hear it" (avoiding the more forthright responses to such blather).

Believing and doing are separate things. And you can disagree with any number of the "lower level" assumptions in detail, while believing something arguably similar. For instance, the whole idea that we can objectively measure effectiveness for charity is epistemically on very shaky grounds. As is utilitarianism, which is smuggled in as a major premise here. I can believe in being "effective" with my charity while

* not giving 10% to any organized charity[1]

* believing that utilitarianism is abhorrent and evil

* believing that helping people near me is much more effective than people far away

* believing that EA (the movement) is an utter waste of everyone's time and money

That is, the agreed-on basis is very small and has nothing to do with EA (the movement) OR effective altruism in general. It's just...one of those principles most of us agree with. The devil is in the details, and that's what's being intentionally conflated here.

I can agree with ends (we should help people) while utterly disagreeing on all the means *including giving any fixed amount of money to organized charity endeavors*.

Edit: the post also comes off as unimaginably smug, in the same way that "checkmate atheists" or the similar atheist versions come across. No attempt to actually understand *why* people don't give as much as they (claim) to want to. Nothing but a high-school debating tactic, more focused on *winning* than actually persuading anyone.

[1] EA focuses on this. Which is stupid IMO, since most of the actually-effective charity involves

* time, not money (and no, the two are not fungible in any meaningful way)

* people with whom you already have a bond of trust, which necessarily involves proximity

* changing lives. Cash drops *look* effective, as do things like well-digging in Africa. But...they're not. Like water into sand, the wells aren't maintained, the cash is frittered away. No lives changed whatsoever.

Expand full comment

I don't give 10% of my income to charity and I think AI risk is silly.

AI risk being a silly venture that consumes billions of charity dollars to no end is not dependent on me giving 10% of my income to charity. EA is a very active statement that EA causes are the most Effective ways to be Altruistic - that is the point. I can engage with that intellectually and be skeptical of EA's position without being hypocritical.

Expand full comment

Where do you get the “billions of dollars” figure from? In 2019 EA spent something like 40M on AI safety I believe. It’s definitely more now but probably an order of magnitude less than 2B+.

Expand full comment

The idea is that the value of reducing existential risk is very high because the entire future is at stake. Any non-trivial probability of reducing existential risk therefore has high expected value.

Analogy: if the jackpot is large enough, a lottery ticket only needs some low minimum chance of winning (e.g. 1%) to be worth buying.

Expand full comment

By this logic, wouldn't the best EA cause be to pray to God that he somehow solve all issues? Low odds, sure, but very high payoff if your marginal prayer brings forth the Messiah. I never considered EA as a Pascal's Wager, but now that I have I can't unsee it, and it doesn't cast EA in a good light.

Expand full comment

- He did say "non-trivial"

- EAs tend to be consequentialists, and as such are looking to maximize expected value, not to do figurative or literal "Hail Marys" whose expected value is lower. A cause with a 1-in-100 chance of preventing human extinction is better than one with a 1-in-1,000,000,000 chance of doing so, even if you spend a million times less money on the latter (and prayers count as a cost; time is money)

- Even believers wouldn't expect God to prevent catastrophe just because one more guy prayed for it. Also, believers tend not to be terribly concerned with existential risk because they believe that no such thing exists: existence will continue in heaven and hell even if terrorists engineer a supervirus that kills us all, or even if a random 17-year-old invents an AGI that (much to the kid's chagrin) kills us all.

- Most people who work on AI risk believe the chance of catastrophe is much higher than 1%. And I myself believe the chance is higher than 1%.

Expand full comment

Personally, I'm not even convinced that the bottom two layers are solid. On the lowest level, "we should help other people" is an awfully nebulous statement. Who are "we", and who says what we should do, and to whom? I am not against helping people in general, but "I would prefer it if people did not suffer" is a much more defensible statement.

Move one layer up, and you're already in the weeds (pardon the mixed metaphor). Who says that Utilitarianism is true or useful? I mean, Scott does obviously, but it's a matter of heated debate. Who says that donating to 3rd-world countries is the most "effective" move? I mean, yes, my dollar will stretch a lot further in the 3rd world; but if I have a 90% chance of my money reaching someone closer to home, or a 1% chance of it reaching someone in Africa, I'd go with the home team, just as a matter of expected value. And how did you arrive at that 10% figure, anyway? Why not 5%, or 13%?

Move up from there, and you basically just have ideology, not effectiveness. The EA movement had already decided that x-risk and animal welfare and other sci-fi causes are the best use of your money; now they're trying to construct this elaborate tower to support the conclusion. Like most such edifices, it is built out of sand.

Expand full comment

> Who says that donating to 3rd-world countries is the most "effective" move? I mean, yes, my dollar will stretch a lot further in the 3rd world; but if I have a 90% chance of my money reaching someone closer to home, or a 1% chance of it reaching someone in Africa, I'd go with the home team, just as a matter of expected value.

GiveWell (https://givewell.org) does (if we're comparing these donations to domestic poverty charities as you do). They provide their reasoning and evidence, so if you're interested in these questions, skimming that would be a great place to start.

10% is explicitly arbitrary. EAs who think you should give 10% will essentially always think you should give at least 10%, and pick 10% as both a Schelling Point and a compromise with psychological realism.

Expand full comment

Animal welfare as a top-level goal is going to hurt EAs with the vast majority of people in the world. It may poll well in the Bay area, but it's essentially a signal for "we disagree with your core values" and comes across very poorly in the same space as "why aren't you giving 10%"?

In a world where EA means an identifiable group of people, many of whom live in a particular geographic area and who believe in a fairly steady set of core ideologies, it's not realistic to conflate that with lowercase "effective" "altruism" in the common use of those words.

Expand full comment

Exactly. Conflating EA (the movement of rich, weird people in San Francisco, mostly) with being altruistic is rhetorically fraudulent. It's the same game played with all sorts of bill titles in Congress--how can you oppose the "Helping Pretty Puppies and Kittens" act (nevermind the fact that it's mostly pork-barrel spending on the same tired litany of Team-X causes *at best*)? If you do, you must oppose helping pretty puppies and kittens and are being dishonest about it! What's dishonest is the argumentation style.

Expand full comment

The interesting question is whether Effective Altruism as a movement is good or bad for effective altruism as an idea. Are the weird and dumb parts scaring people away from the sensible principles? Can I tell people that I'm into effective altruism without making it sound like I'm into Effective Altruism?

To co-opt the tower metaphor: if you take a perfectly good foundation and build really ugly upper storeys on it, you're hogging a foundation which someone else could have used to build something much better.

Expand full comment

I personally give 10% of my income to global health charities *because of EA*, and I usually give through prominent EA organization GiveWell. That may not be the only place to discover the foundational & less basic assumptions, but spreading those assumptions is a key goal of the movement and it worked on me.

Saying "they can't be fairly claimed to be part of EA" because they also exist elsewhere seems very silly to me. Lots of religions posit the existence of a god(s). Does that mean belief in god "can't be fairly claimed to be part of" Christianity?

If you disagree with EA, but you still give 10% of your income to effective charity, then fine! You're not the target of this essay.

Expand full comment

I don't understand in what sense you are claiming that "for-profit insurance companies" share premises like "we should donate 10%" and "we should try to make our donations as effective as possible".

(Or, rather, I'd believe that you can find at least 1 insurance company somewhere that shares these premises, but I'm skeptical that _typical_ insurance companies share such premises.)

Expand full comment
founding

Strongly disagree. Maybe I started hearing about it earlier, but for me 90% of the value of EA is in the base level only. Pretty much all the rest is stuff I'm far from being convinced about.

And yes, it's quite unique. When "regular" charities are offering you a value proposition, they never try to compare themselves with alternatives. They're considering your choice of a cause to be a strictly personal decision, and only try to persuade on the level of how much to give.

In theory, all governmental causes should do EA's basement layer, because they're not spending their own money. In practice: ha.

EA is quite unique in saying that some methods are better than others, and the act of choosing them is actually important.

Expand full comment

This captures much of my thinking toward EA as well. A lot of the criticisms of EA start with a bit of throat-clearing: "The good parts of EA are banal..." But they don't seem all that banal to me. Perhaps they are in some sense obvious or hard to argue with, but that doesn't prevent them from being both profound and underappreciated.

That said! The fact that so many criticisms of EA take this form does suggest that the stack is itself a problem, because it makes EA less, well, effective than it otherwise could be. That is, even if the idea of giving what you can is logically unrelated to, say, esoteric calculations of the QALYs of ems living in the Horsehead Nebula, these ideas are de facto linked by the community itself. As a practical matter, the movement may need to wrestle with this as it grows up.

Expand full comment

Wouldn’t this be true of any religion?

How many people think Christianity has some good stuff about being loving and such, but all that hating the gays and obsession with immeasurable things is a real turn off?

Same can be said for many other faiths.

Expand full comment

I would say the basic tenet of “give charity to help the poor” is banal, because it’s widely held across the US and world. Also “give 10% of your income to charity” is also banal and widely held. Let’s put it this way - 99%+ of ppl giving 10% or more of their money to charity have never even heard of EA.

Expand full comment

The "donate your money to the most cost-effective causes you can find" tenet is novel and not widely-held, though.

Expand full comment

Absolutely. And that is where the first point of departure or "ground floor" of EA really starts.

Expand full comment

Yes, but I think it's worth including the lower levels of the tower too: as this comment section shows, not everyone accepts them, and many more people pay lip service to them but don't act on them.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

That's true. But ppl pay lip service to this concept b/c it's such a widely held human value.

So when something is a value shared by all the Abrahamic religions, the US govt, most moral systems, most humans on the planet, etc - it isn't really the thing that defines EA, and EA proponents should not claim it is the ground floor of their particular philosophy. And when someone is critiquing a unique claim of EA, it seems probable that is their actual critique, and isn't just a ruse for them to critique an otherwise widely accepted principle.

A more accurate tower would seem to be: "It is a widely held belief that it is good or a moral obligation to give to help those who have less, either of money or time or both. EA agrees with this generally held principle, and adds, blah blah blah. It is also a tenet of Abrahamic religions to give 10% of one's income. EA agrees this is a good number b/c xyz." - and then to be prepared to defend the actual claims of EA. Rather than Scott's tower/article, which is basically: "this super new movement EA is founded on this amazing principle no one had thought of before known as "help others," and "tithe," and you're not allowed to critique anything we say unless you're tithing, and if you critique anything we say it's probably just a ruse b/c ur too lazy/immoral to help others or tithe."

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I'd add to the below - to me, this article reads like: "Oh, you don't like this or that particular and bizarre claim of my religion? Well, it must be because you are immoral or less righteous!" I've heard versions of these arguments from rabbis throughout the years. How about... defending the particular and bizarre claim you're making?

Expand full comment

I don't agree with that. The United Way had a scandal a while back when it came out that a much smaller percentage of donations than donors assumed went to actual charity, and that their top positions paid mid-six-figures. When people become aware that their charity of choice isn't effective/efficient, they abandon it.

I believe what you have identified is that most people, when giving to charity, spend less time determining if the charity uses the money well than they should. That's not a priority problem, but an attention problem. If more people became (capital letter) EAs, this problem would not go away. They would substitute the opinions of Give Well or some other organization instead of whatever method they used in the past. If it turns out that 20 years from now Give Well was headed up by someone skimming from the top and doing a poor job of picking good charitable causes, people would still be giving to them until/unless that was pointed out to them.

Expand full comment

There's a crucial distinction: many people care about *overhead* when donating to charity, but far fewer care about *cost-effectiveness*. They're not the same, and a focus on minimising overhead can be counterproductive: it incentivises charities to underspend on activities that aren't directly Doing The Thing but which would make them more effective, like admin staff or fundraising.

Expand full comment

No disagreement there. I think the core of our disagreement is recognizing how little time and effort most people have available to actually determine effectiveness. We are bound to outsource the determination to someone else, and hope they are not lying. In a very real sense, EAs are simply saying that Give Well is the appropriate place to make that determination, rather than the Red Cross or whatever.

To get ahead of the next layer of discussion, I'll add this. No matter how much we read about GiveWell (or really any other non-local charity), we will not truly know how effective we're being. When Scott mentions that some malaria nets are being used as fishing nets, that's recognizing that some portion of the donated money is not being used as effectively as the donor thought. Imagine if you were the person who donated the specific money for that net (or worse, the net(s) thrown away for inscrutable reasons). If you knew that your money got wasted, or was far less effective than you thought, is the proper response to never give to malaria nets again? Or is it to realize that there's some waste beyond our control, even with the best of intentions? Both are real answers to that question, taken by different people for different reasons. I would not say that people don't care about "cost-effectiveness" when they still give through GiveWell, knowing that some gets wasted.

Expand full comment

Yes, this is fair; I've read some of GiveWell's output and been impressed with their thoroughness, so I'm generally happy to accept that they know what they're doing (though I did decide that the biggest deworming successes were specific to a particular set of local conditions and probably wouldn't generalise: I should maybe take another look at that). Charity assessments are always going to be somewhat uncertain, and some money will end up wasted, but you can still try to keep that amount low, possibly by switching to a better intervention. BTW, I understand that the fishing-net thing isn't a significant problem in practice, and that assessments of bednet charities take it into account: https://www.vox.com/future-perfect/2018/10/18/17984040/bednets-tools-fight-mosquitoes-malaria-myths-fishing

Expand full comment

On the alternate-use issue... how about simply including a few purpose-built fishing nets alongside the bed nets, taking advantage of the same basic textile manufacturing capabilities, but removing features specific to stopping mosquitos and adding highly visible features (e.g. ergonomic reinforced handles) to make them more useful for catching and carrying fish? Saturate the local market for fishing nets, then fishermen won't need to plunder bedrooms for a second-rate substitute.

Or having overseas donors fund an initial "minimum viable production batch" of bed nets, distributing those as free samples to demonstrate benefits, then offering any interested recipient an apprenticeship in the manufacturing process, with further nets possibly sold on a for-profit basis? That way locals get a clearer firsthand experience of the difficulty and value-added (rather than potentially assuming anything being given away for free must be trash), as well as technical skills and supply-chain access which might provide yet undiscovered cross-domain benefits.

Expand full comment

> The fact that so many criticisms of EA take this form does suggest that the stack is itself a problem because it makes EA less, well, effective than it otherwise could be.

I resonate with this 100%. I've mentioned it before and didn't get much interesting conversation out of it. I think communication outside of intellectual bubbles is a great limitation of EA endeavors.

Expand full comment

What exactly are you(r friends) afraid of if you posted the full thing? IIRC you already get a bunch of hatemail, and you made the point back in Why Do I Suck that people can give you money but they can't easily take it away from you.

If it's wrong, sure, probably not worth posting. But otherwise...

Expand full comment

It's (probably) not a real scenario, it's (probably) a literary trope to couch the spicy essay that has in fact been released.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Yeah, exactly. From another of Scott's posts (otherwise unrelated):

"B is using my second-favorite rhetorical device, apophasis, the practice of bringing up something by denying that it will be brought up."

Expand full comment

This is a really important essay.

If you’re involved in EA it is very easy to forget just how *weird* many of the current debates in EA are. This makes it harder for new people to join the community as the core ideas, which are much more appealing to a beginner, get lost.

See this piece for a recent example: https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/

PS please post the spicy essay

Expand full comment

> This makes it harder for new people to join the community as the core ideas, which are much more appealing to a beginner, get lost.

do people need to join "the community"? if i wanted to go donate a bunch of money isn't there just some EA web page that tells me the top 10 best things to do with it? do i have to hop in forums and argue about it first?

Expand full comment

Yes there is (Givewell), and no you do not need to “join the community.”

Expand full comment

I think most people agree that we *should* give more and maximize the effectiveness. The next question is how to convince people to do so. EA may be good at scrutinizing various charities, but as far as I can tell they are incredibly ineffective at promotion (compared to, say, Mormonism).

Another way to think of it is to ask: is it better for, say, 50% of people to give 2% of their income to a charity that is 50% effective, or for 5% of people to give 10% of their income to a charity that is 100% effective?

If EA turns off people from giving at all because they engage in "weird" conversations about killing carnivores or helping aliens or whatever, maybe less weird approaches that motivate more people in the habit of giving (even if to somewhat less effective charities) is more...effective. The social response to the movement, such as it is, is really important and EA is a real turnoff.

Expand full comment

My problem, and I'm sure you have an "A:" for me, is that I don't trust telescopic philanthropy. And I don't trust calculations that equate all humans and all lives however distant they are from you, and sometimes make a virtue out of the distance (e.g. valuing 100bn future humans).

Look after your family, your friends, your town, and your colleagues. Donate to good projects you can see with your own eyes. Help your friends because you know better what they need than a stranger. This may not be _effective_ in the EA sense, but if everyone did this we'd usher in the kingdom of heaven, and if everyone did EA it's not obvious to me that we don't accidentally paperclip maximiser ourselves even without the AI!

But I can't deny I make these arguments post-hoc because I don't like the smell of EA.

Expand full comment

I think the relevant answer is the first one:

A: Are you donating 10% of your income to normal, down-to-earth charities?

Expand full comment

But I think actually Jack's comment is a response to the 'you should donate more effectively' mic drop at the end. He's not objecting to charity, he's objecting to the fairly EA-specific claim that it's basically immoral to consider proximity/relation when it comes to giving, which is itself based on the highly contestable notion that 'telescopic charity' supervised by NGOs is just as effective as local charity supervised by your own senses.

Expand full comment

Oh, certainly, but I think the response still works. If you want, adapt it to

A: Are you donating 10% of your income to the most effective charities you can personally observe?

Expand full comment

Well sure, it works in the sense that exhortations to virtue always work, but it's ceased to be a defense of the community/philosophy known as effective altruism

Expand full comment

You're close!

I'm not as extreme as some commenters in saying "charity is outright bad," but it's certainly not got a good rep at the moment. I donate to my church and a local greenspace. I donate my time to helping my friends and family, finding them jobs and sometimes even employing them.

Hitting some 10% on-the-books donation target is nowhere close to a qualification for being a moral person. I know at least one total scumbag who hits his 10%. It's a bad metric.

Expand full comment
Comment deleted
Expand full comment

Me? More than 10%, like I "said" above.

Why is everything quantized these days? You think there's something magic about 10%. There isn't. To me anyway, it's like Jack Johnson "said."

And the joke of it is to make a competition outta it. Who is THE MOST MORAL PERSON HERE!! Sheesh. You can't buy morality, right?

Expand full comment

Would you prefer to get $10k from a scumbag, or to have your child die from malaria?

Is it better to be a scumbag and give 10%, or to be a scumbag and give 0%?

Expand full comment

What if they're not independent? What if the scumbag, granted moral superiority by his 10%, gets worse and worse?

Expand full comment

Sure am, because that's myself. For all I know everyone else is a p-zombie, so it makes sense to spend to maximise the happiness of the only person whose subjective experiences definitely exist.

Expand full comment

And this, folks, is why it was worth Scott including the bottom layer of assumptions. Not everyone who agrees with them is an EA (for any reasonable definition of the term), but if you're talking to a solipsist then there's no point in quibbling over anything at a higher level.

Expand full comment

It's a quick way to flush out the psychopaths!

Expand full comment

> I'm sure you have an "A:" for me is that I don't trust telescopic philanthropy

He does, it's right there in the essay:

> Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country. If you’re not doing this, your beef with effective altruism isn’t “the culture around Open Philanthropy Foundation devalues such and such a form of change”, your beef is whatever’s preventing you from doing that.

Personal aside: I discovered the idea of EA nearly a decade ago, back when it was just this nascent idea about doing good better because the opportunity cost of not doing so is terrible and because it's easy to improve do-gooding from most people's low baselines. That appealed both to my STEM-y process improvement-orientation and utilitarian-flavored religious upbringing. The more I read about EA, however, the more uncomfortable I got about specifics; but that original selling point never waned. I'm guessing a lot of people who get attracted to the idea are like young me nearly a decade prior.

Expand full comment

What do you think of David Khoo's argument elsewhere in the thread, that the A Scott gives is effectively a motte and bailey?

Expand full comment

Because a motte and bailey effectively asks you to accept the greater claim on the basis of the lesser. The point of structuring it as a tower is that if you don't accept the higher levels, that's fine, go to the lowest level you're comfortable with. That's still better than what the vast majority of people do for others (almost nothing).

I'm still very happy for someone to go from donating nothing to donating a meaningful amount (10% being a reasonable number for most people) to their local food bank or whatever, even though I personally think that I could have a larger impact on the lives of those in very poor countries (and that the lives of those in very poor countries are essentially of equal moral value to those in rich countries)

Expand full comment

The lesson of the paperclip maximizer experiment is: utilitarians are scared of devils made in their image.

Expand full comment

You do have more knowledge and power to help those close to you in time, space, and relationship. This is true even if you accept the idea that all humans have equal moral value regardless of time and space. So as a practical matter, I agree with you that most actions for most people should focus on closer lives. But I'm glad some people are putting effort into looking further away and trying to figure out what can be done. Shouldn't be 100% of effort. Maybe it shouldn't even be 1%. I don't know the appropriate number, but it's still an absolutely tiny amount of global effort, so from my perspective, I'm happy to see it growing and becoming more popular.

Expand full comment

How does one distinguish the Tower of Assumptions from the Motte and Bailey? Is there a distinction within the arguments themselves, or is it entirely a question of some combination of social attitudes present among those involved? How do we know which one is more relevant here? (I am not sure where my own biases lie here.)

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

In this case, Scott is explicitly saying "if you don't want to join me in the motte, that's fine, but please at least join me in the bailey." A true motte-and-bailey argument would deny that there's a difference.

Edit: Goddamnit I mixed them up, didn't I?

Expand full comment

Ok but he shouldn't insist that the bailey is 'effective altruism' like he does right here

> Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country. If you’re not doing this, your beef with effective altruism isn’t “the culture around Open Philanthropy Foundation devalues such and such a form of change”, your beef is whatever’s preventing you from doing that.

Expand full comment

Oh, I see. I don't think he's doing that, though. Scott's saying that people who disagree with EA cite disagreement with the higher levels, but their actions usually reveal that their true disagreement is with the more foundational, and supposedly less controversial, layers. He's not interested in drawing a line around a particular set of beliefs and saying "this is EA", he wants to get everyone as far up the tower as they'll go.

Expand full comment

Why do I have to pick a single "true" disagreement? Why can't I truly disagree with parts of both the higher and lower levels?

Expand full comment

You can, but it's more productive to focus on your disagreement with the lower-level assumption.

Expand full comment

I think the idea is that if you disagree with the lower-level assumptions you almost certainly disagree with every assumption at a higher level than the lowest one you disagree with, so you might as well save everyone some time and just argue the lower level disagreement.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Because otherwise you are wasting people's time, as they are not arguing against the thing that would change your mind

It's a bit like an atheist going up to a Christian and complaining about the problems in the New International Version of the Bible

You are giving the Christian a false perception that, if he shows you the King James version, you will become a christian, because your objections have been answered. But it wasn't your real objection.

see https://www.lesswrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection

Expand full comment

Worth remembering that the motte and the bailey are often deployed by different people:

“So it is, perhaps, noting the common deployment of such rhetorical trickeries that has led many people using the concept to speak of it in terms of a Motte and Bailey fallacy. Nevertheless, I think it is clearly worth distinguishing the Motte and Bailey Doctrine from a particular fallacious exploitation of it. For example, in some discussions using this concept for analysis a defence has been offered that since different people advance the Motte and the Bailey it is unfair to accuse them of a Motte and Bailey fallacy, or of Motte and Baileying. That would be true if the concept was a concept of a fallacy, because a single argument needs to be before us for such a criticism to be made. Different things said by different people are not fairly described as constituting a fallacy. However, when we get clear that we are speaking of a doctrine, different people who declare their adherence to that doctrine can be criticised in this way. Hence we need to distinguish the doctrine from fallacies exploiting it to expose the strategy of true believers advancing the Bailey under the cover provided by others who defend the Motte.”

https://blog.practicalethics.ox.ac.uk/2014/09/motte-and-bailey-doctrines/

Expand full comment

Whether or not it's a true motte-and-bailey, I think Scott four years ago wrote an effective rebuttal (or pre-buttal?) to the "Tower" model in "The Tails Coming Apart as Metaphor for Life":

https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/

The basic idea is that beliefs that are utterly at odds foundationally may nonetheless generate the same answers to moral questions at the level of ordinary experience where most practical problems arise. The fact that the people converging on those mid-level applications are deriving them from completely different foundational principles only becomes apparent when they're confronted with a more exotic scenario and arrive at diametrically opposing results.

Scott uses the metaphor of a subway system with different "lines" defined by their unique endpoints, but that run together for most of their length:

"Mediocristan is like the route from Balboa Park to West Oakland, where it doesn’t matter what line you’re on because they’re all going to the same place. Then suddenly you enter Extremistan, where if you took the Red Line you’ll end up in Richmond, and if you took the Green Line you’ll end up in Warm Springs, on totally opposite sides of the map."

Maybe where the Red Line is headed after diverging from the Green Line is the notion that stopping lions from eating antelope is a plausibly good idea. I suppose I could ride the Red Line as far as "being eaten is generally undesirable from the standpoint of the one being eaten" and then hop off and wait for the next Green train. But really it's much simpler and clearer for me just to say the Green Line is the one I want to take to begin with.

Expand full comment

my thoughts exactly

Expand full comment

The Tower makes it very clear which consequences you have to accept for any given set of assumptions; M&B is all about blurring and covering this.

An M&B enjoyer would say something equivalent to this: "if you accept that children in Sri Lanka deserve better, and that the human brain is flawed and forgetful and incapable of properly imagining the suffering of those far away, then you must accept Peter Singer as your Lord and Saviour and start reading What We Owe 5 times a day while kneeling". It wouldn't be said this obviously of course; the M&B enjoyer would simply define EA to be each of those 3 separate statements at the same time, then use "EA" in all 3 senses interchangeably without even acknowledging it.

It's the same way religious zealots define their God to simultaneously be "That angry father figure who gets mad when I do <random thing>" and also "The transcendental First Cause which started the entirety of being" and also "That which is Beauty and Truth itself" and several other tangled conceptions, then use whichever definitions happen to be the most favorable when arguing.

But a Tower enjoyer would very clearly say that, if you only accept that Sri Lankan children deserve better and that the human brain is ill-equipped to compute ethics across arbitrary space and time, then you need only commit to helping those kids in a way that circumvents your brain's prejudices. You're free to do that in any way you imagine; one particular way of doing it is accepting Peter Singer as your Lord and Saviour and reading What We Owe 5 times a day while kneeling, but you can refuse this at any time and still not be an asshole who thinks children in Sri Lanka are morally irrelevant.

This, by the way, is also how good software is built: as a series of flexible, modular layers, each layer only assuming the bare necessary minimum about the one beneath it and the one above it. The Internet is built as a "Stack" (i.e. a Tower) of open protocols; the Java programming language is specified in 2 layers, the JVM and the compiler. A company like Facebook, on the other hand, stipulates that wanting to use their social network necessarily implies that you want to use their terrible site and their terrible app and their terrible recommendations engine and so on, even though all of these things are conceptually separate and are in fact developed as different layers or components inside Facebook itself; FB doesn't want you to pick and choose which of their services you want to use. They're "Motte&Bailey"ing their own software, the exact opposite of what good software should be.
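The layering discipline described here can be sketched in a few lines of toy Python (all the layer names below are invented for illustration): each layer depends only on the abstract interface of the one beneath it, so any layer can be swapped out without rebuilding the rest of the tower.

```python
# Minimal sketch of a layered "stack": each layer knows only the
# interface of the layer beneath it, never its internals.
from abc import ABC, abstractmethod

class Transport(ABC):            # bottom layer: how bytes move
    @abstractmethod
    def send(self, data: bytes) -> None: ...

class Session:                   # middle layer: framing; knows only Transport
    def __init__(self, transport: Transport):
        self.transport = transport
    def send_message(self, text: str) -> None:
        self.transport.send(text.encode() + b"\n")

class App:                       # top layer: knows only Session
    def __init__(self, session: Session):
        self.session = session
    def greet(self) -> None:
        self.session.send_message("hello")

# Any Transport can be swapped in without touching Session or App:
class InMemoryTransport(Transport):
    def __init__(self):
        self.sent = []
    def send(self, data: bytes) -> None:
        self.sent.append(data)

t = InMemoryTransport()
App(Session(t)).greet()
print(t.sent)  # [b'hello\n']
```

Rejecting the top layer (App) costs you nothing at the layers below it; that's the Tower, as opposed to a monolith where taking anything means taking everything.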

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Sorry, no, he is blurring (especially across the last three or four EA-related essays) by making a loose implication: if you agreed with the bottom layer of the tower, you would probably, *by logic*, agree with many of the higher layers, and because you all are loudly not agreeing with the top layers, I will challenge your motivations by grilling your practical commitment to the bottom layer.

It's like a motte and bailey with a more-than-usual appeal to logic, like there is a logical bridge from the motte to the bailey, and if you don't cross it, you probably don't like logic, and if the bridge is broken, it's fine, I just need to fix it (by fixing the logic). There's a coded assumption that EAs are on the side of logic.

Which just isn't true. These thought experiments are frightening in their lack of rigor. They are claiming so much more rigor than is actually there.

You are completely free to walk away from Singer, MacAskill, and the entire EA movement, and maybe even a lot of Scott's spicy Q&As, simply on the basis that "nah, bro, a human isn't a spherical object, and you haven't really proven anything".

It's just like the skeleton picture in the last post. You can leave. But unlike Scott, I think I can leave with my rational sense intact. MacAskill didn't prove shit.

Expand full comment

The thing about Logic is that it's a branching son of a bitch. People talk about Logic and Logical Proofs as if they're just straight lines from premises to conclusions, but they're actually trees (in the computer science sense) or, more generally, graphs (also in the CS sense). From each premise or fact, you can derive a whole lot of conclusions. A Logical derivation is a complete path in this graph, but there are a lot of other possible paths. Logic under-specifies reality.
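To make the "graph" point concrete, here's a toy sketch (the premises and rules below are invented placeholders, not anyone's actual argument) showing that a single premise supports many distinct derivation paths, so agreeing with the root doesn't force you down any one branch:

```python
# Toy illustration: inference rules as edges in a directed graph.
# A "derivation" is a path from a premise to a conclusion; the same
# premise typically supports many distinct paths and endpoints.
rules = {
    "people matter": ["help people", "don't harm people"],
    "help people": ["give locally", "give globally"],
    "give globally": ["fund bednets", "fund deworming"],
}

def derivations(start, graph):
    """Enumerate every root-to-leaf derivation path starting at `start`."""
    children = graph.get(start, [])
    if not children:
        return [[start]]
    paths = []
    for child in children:
        for tail in derivations(child, graph):
            paths.append([start] + tail)
    return paths

for path in derivations("people matter", rules):
    print(" -> ".join(path))
```

Even this tiny rule set yields four distinct complete derivations from the single root premise, which is the under-specification being described.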

Scott is not making an M&B because he's not really telling you that *the only logical way to go from X to Y is EA*; hell, he's not really even implying it. Read the tower again: is "10% of your income is a good place to start if you want to help people" a good way to convince someone your way is the Only One True Way? It sounds more like a policy maker making a tentative suggestion, a place to start from and revisit later.

And that's exactly the point being made, the tower is not uniformly strong, each layer added makes it weaker and more specific. "We should help people in need" is a hell of a lot harder to disagree with (although, to my horror, some people here are actually doing just that) than "<random person> is a good person to lead EA". Each additional layer puts more assumptions and more constraints and has more alternatives. If you disagree with any layer, you're perfectly free to destroy it and replace it with a layer of your choosing, but then you would have to trust yourself to rebuild the entire upper part of the tower from the layer you destroyed upwards, and to do so effectively. The entire point of EA is that this is hard, the brain's intuitions weren't developed or evolved to reason about humans or morality halfway across the globe. If you dislike EA, fine, find yourself another social movement that you can trust to keep you honest and focused.

In fact, if anyone is doing M&B, I would honestly say it's the kind of EA opponents who use its more tentative and ridiculous outgrowths (AI Alignment) to discredit its more foundational - and much more defensible - assumptions (people everywhere matter exactly as much). M&B can be done in assault as well as defense, it's fundamentally a trick of mixing-and-matching the assumptions in a sneaky way to paint yourself more favorably or paint your opponent more unfavorably.

Expand full comment

I agree with you that upon reflection, what Scott's doing here isn't quite the same thing as M&B. But what I think you're missing with your concept of offensive M&B is the argumentative move of reductio ad absurdum.

Suppose I say to an EA proponent, "Okay, let's presume that every logical move you're making is perfectly valid: the premises at Level 1 fully compel the conclusions at Level 2, and so on." Then I point out that the object-level conclusions at Level 5 are in my view utterly idiotic, so much so that an intelligent person's believing them could only be explained by some cognitive misfire at a much more fundamental level.

I may not have a clear sense of exactly where in the EA person's chain of reasoning something is going terribly wrong. But the more absurd the result at Level 5, the stronger the inference that the problem isn't just creeping in with the transition from Level 4, but actually starts way farther back.

Put another way, if someone says, "Really, we agree on everything right up to the narrowly fact-bound judgment of whether Kamala Harris or Pete Buttigieg would be a better leader for the Democratic Party in the 2020s," that seems like a reasonable statement to agree with, or to counter that our disagreement is maybe one step back of that.

If someone says, "Really, we agree on everything right up to the narrowly fact-bound judgment of whether Kamala Harris or Pol Pot would be a better leader for the Democratic Party in the 2020s," I'm going to reply that our apparent agreement on the premises leading up to that supposedly fact-bound judgment must've been wrong.

Expand full comment

I still don't think that you get what I'm saying.

What I'm saying is that Scott doesn't really say or imply the "Fully Compel" part. Each new layer in the tower is more specific and more vulnerable to alternatives. There is a vast branching tree of possible moral systems starting from "We Owe Each Other Things", and Scott is only showing you a single path through that tree. Furthermore, Scott is *admitting* he's showing you a single path through the tree! He says you can destroy any of the upper layers and re-build them differently. That is, explore another path in the morality tree. He's fully admitting that his particular moral design decisions are not entirely justified by the problem at hand, and he is challenging you to come up with a different system to solve the same problem.

Imagine Scott as an automotive engineer telling you "Look, maybe a Toyota isn't the single best possible land vehicle, but you are entirely free to design another car! And you need not refuse basic tenets of car design like 4-wheels-per-vehicle and driver-seat-at-the-front - those are very old and widespread and it's very unlikely you will do better than them - you can just refuse Toyota-specific design tenets built on top of them. Show me how much better you can be!". Instead of "car design", substitute "morality"; instead of "Toyota", substitute "EA".

It seems like one of your points is that the tower is under-specified to some extent? That there are certain additional, invisible layers interspersed between the visible layers Scott is showing? Maybe; the tower is just an illustration. Nobody can be reasonably expected to illustrate the entire Web of Beliefs in his mind as a neat, accurate tower. Maybe there are hidden steps in Scott's journey from "We Owe Each Other Things" to "We Should Worry A Lot About Hypothetical AI And Future People", but his point still stands, doesn't it?

His main point is that his beliefs can be made into a hierarchy, and that the entire hierarchy isn't uniformly strong. If you think the hierarchy is incomplete, why, just fill in the missing layers, why don't you? Then continue examining the hierarchy for the "cut-off" layer that you will refuse EA starting from.

If you think Scott is comparing Kamala Harris and Pol Pot, the bad premises need not have arisen in the very bottom layer (which, I would like to believe, is so trivially true. We Owe Each Other Things, is that so hard? Can it possibly hide bad assumptions?); maybe the bad premises sneaked in through hidden layers that are not shown. You can still think hard and identify or sketch those layers, then refuse anything which relies on them. Scott's point is the whole "Beliefs are layered" thing; the actual layers shown are just an example, not to be taken literally.

Expand full comment

So, there's an ultimately unresolvable problem here: "Who are you gonna believe? Me, Mr. Syllogism? Or your own lying eyes?"

The pre-Socratics looked at something like Zeno's Paradox and said, "Hmm, there must be something wrong with our intuitive belief that objects can move, even though it's hard to pinpoint exactly what." Modern philosophers generally look at Zeno's Paradox and say, "Hmm, there must be something wrong with this chain of reasoning insofar as it leads to the conclusion that objects never move, even though it's hard to pinpoint exactly what."

If EA leads to apparently absurd results, one could conclude that our assumption that there's something wrong with those results is false. Or one could conclude that something in the chain of reasoning that led to those results, even though it seemed prima facie uncontroversial, is actually -- surprisingly! -- false.

The only thing I'm adding here is that, if you're taking the second path, then the greater the absurdity of the top-level conclusion, the more likely it is that the place you'll eventually find the fallacious premise is way back near the beginning of the chain.

Expand full comment

Sorry for the double post, but I realized there was another dimension of this that I also wanted to reply to.

> Imagine Scott as an automotive engineer telling you "Look, maybe a Toyota isn't the single best possible land vehicle, but you are entirely free to design another car! and you need not refuse basic tenets of car design like 4-wheels-per-vehicle and driver-seat-at-the-front - those are very old and widespread and it's very unlikely you will do better than them - you can just refuse Toyota-specific design tenets built on top of them, show me how much better you can be !". Instead of "car design", substitute "morality"; Instead of "Toyota", substitute "EA". <

See, I don't understand this analogy at all. Which suggests to me that it may point to some deep level at which our assumptions differ. (A nice illustration of the ad absurdum principle, but then again I would say that, wouldn't I?) The way I think of it, the analogous conversation is like this.

Q: I think Toyota builds shitty cars.

A: Do you agree that four is the optimal number of wheels for an owner-operated, internal-combustion-powered vehicle?

Q: Yes.

A: Well, then, you agree with at least part of Toyota's design principles!

Q: I suppose so? But every automotive manufacturer agrees about that, and within the universe of such manufacturers, I find that Toyota does a bad job of designing four-wheeled vehicles. I'm not really comparing them to people who think cars should have 3 or 5 or 7 wheels.

A: I see. And the vehicles you have personally designed within the four-wheeled paradigm, how successful have they been?

Q: What the fuck

A: I suspected as much. Not quite as good as Toyotas, are they? Are you even making any effort to get better at car design?

Q: I AM NOT A FUCKING AUTOMOTIVE ENGINEER. I AM A CONSUMER OF AUTOMOBILES, AND IT'S IN THAT CAPACITY THAT I'VE REACHED THE CONCLUSION THAT TOYOTAS SUCK.

A: You should reach your car-sucking-related conclusions more efficiently.

Expand full comment

The motte-and-bailey is distributed across Scott's last few pieces, not contained to this one. Here he's saying "look, we probably agree on motte, and the bailey may be shaky, but you're so focused on the bailey that I doubt your motives. Do you really even believe in the motte? Let's focus on getting you to the motte."

But in the last few pieces, he's been arguing that there is a convincing *logical* bridge from the motte, the base of the tower, to the bailey, the top. It's core to EA to make this point. Even when Scott disagrees with MacAskill in the end, he doesn't disagree that there *is* such a bridge or that it's *right*, just that he doesn't want to cross it so he's going to carve out an exception to being reasonable.

It's a motte and bailey because Scott and other EAs are shifting back to the "everyone can agree here" principle, and pointedly putting people on the defensive with "do you donate money, bro?" while buying time to fix a logical bridge they're pretty well convinced will be fixed; in the end, we're all to listen to William MacAskill and march straight for the bailey, because how can you really argue with cleaning up bottles of glass in the forest, bro? And the rest is just corollaries.

Building on what Jacobethan says, a lot of what's going wrong here is that the chain of reasoning under discussion is *faux logic*. Sorry, I'm an ex-mathematician, and I have a professional interest in the space, so I'm going to rant a bit here.

This is a Wittgensteinian sort of thing. The reasoned arguments of e.g. Peter Singer and William MacAskill, like the reasoned arguments of any moral philosopher, are making implicit assumptions which amount to "assume a perfect spherical human, independent of other humans", and they're using syllogisms without any rigor. They look like airtight arguments, but they're essentially meaningless symbols when applied to impossibly messy and complex life. It's no wonder the conclusions are so bizarre.

This is all well and good when it's just intellectual bullshit over a cup of coffee or in an obscure journal, and it *can* be great as scouting work for new morality or other new philosophical ideas, pointing us in important directions...

... but when this kind of intellectual bullshit gets used to browbeat real ordinary people into behavior changes, I feel a pretty strong need to call "intellectual bullshit!" and protect the people from the behavior changes.

You can walk away. Not in spite of EA's powerful arguments. *Because* EA doesn't have powerful arguments, just very shaky ones.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

I don't really understand why you're so militant about this whole thing. EA is not substantially worse than most reasonable moral philosophies, and much much better than some.

>he's been arguing that there is a convincing *logical* bridge from the motte, the base of the tower, to the bailey, the top.

Of course he does; everyone believes that their premises justify their conclusions. And just like I said earlier, logical paths are not mutually exclusive. There could be, very well and very reasonably, tons of logical paths from "We Owe Other People A Lot Of Things" to a whole lot of conclusions and moral systems; EA is just one path among many. Nobody is holding you from exploring a different path, and the existence of Scott's path doesn't invalidate other paths starting from the same premise. Scott never said EA is the only reasonable moral system derived from basic moral principles like "We Owe Each Other Things"; he's just challenging you in good faith to show other systems.

>The reasoned arguments of eg Peter Singer and William McAskill, like the reasoned arguments of any moral philosopher, are making implicit assumptions [...] and they're using syllogisms without any rigor

To reiterate Scott's Spicy Piece yet one more time, What Do *You* Suggest? How should we reason about morality, especially in situations we weren't evolved for? Do you have better alternatives? I have to say it's a bit bizarre to criticize the rigor of a theory without offering a competing theory, it's like criticizing Peano axioms then defining the natural numbers in terms of your fingers.

>This is a Wittgensteinian sort of thing.

What is your Wittgensteinian way of reasoning about ethics and morality that isn't just your 100000 year old monkey brain dressing up its ancient flawed tribe-optimized intuitions as logical theories?

> intellectual bullshit gets used to browbeat real ordinary people into behavior changes

Well, if you really think this is bullshit, you're welcome to say why or link to somebody who says why in detail. It's probably the least interesting kind of criticism to say "no lol" and leave. If you don't have another formulation of natural numbers besides your fingers, why are you so upset about Peano axioms, regardless of how supposedly lacking in rigor they are? They can't possibly be worse than your fingers, right?

And I don't think anybody is being intimidated or beaten into conclusions they don't like by logic alone. People can *always* say "no lol" to themselves and just refuse to follow the logic into reality. People routinely live with enormous contradictions and inconsistencies in their life, they can afford to ignore Singer's logic. But if you want to actually object publicly in a way that will convince anyone who hasn't already made up their mind, "no lol" isn't enough, you have to either offer a competitor or at least say in detail what's wrong with what you're criticizing, at a level a bit more useful and detailed than "life is complicated maaaan".

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Goodness, no thank you. I don't think this really addresses any of my points. It pretty much just says "okay, okay, so?"

So, my points. That's all I have. I want a better system, but I feel fine criticizing without having one. This system is bad. That's both true and a necessary starting point for seeking something better.

And while I don't have better, here's a start: https://www.princeton.edu/~ppettit/papers/1984/Satisficing%20Consequentialism.pdf

Expand full comment

This is a genuine question: is "We owe each other a lot of things" what you'd identify as a central axiom of EA, in the same structural position where Scott has "We should help other people"?

Maybe these are ultimately the same; I'm pretty doubtful that they're ultimately the same or even very similar; but I also don't want to assume one way or another that you were intending them as rough equivalents (fine!) or intending to substitute for Scott's formulation what you'd see as a stronger one (also fine!).

Are these -- the "should help" theory and the "owe" theory -- two distinct branches of the EA tree, where you're on one and Scott's on the other? Or are we at a base level of intuition where EA proponents might interchangeably say "owe" or "should help" without there being much explicit discussion of which it really is?

Expand full comment

He LITERALLY ADMITS that he's M&B-ing. He just doesn't care because the end justifies the means.

Expand full comment

The tower is vertical. The motte and bailey is horizontal.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I basically agree with this, but you could steelman the argument to, "No one outside the effective altruist movement (other than religious groups) is committed to giving x% of your income to charity, so that's the real defining feature and how we should define EA for all purposes." I think that's wrong though, because EA is so thoroughly shot through with utilitarianism that Giving What We Can is basically a Benthamite front group.

This shows up in Scott's points above. Question 4 starts to approach:

Q: "I think utilitarianism is wrong."

A: "How good a utilitarian are you being?"

Presumably if you were considering being a utilitarian, utilitarianism potentially being wrong should be something you want cleared up before you start squandering all your money on mosquito nets or hornet sanctuaries or whatever.

Expand full comment

I love this question, because it made me realize that Scott's presentation of EA as a stack is exactly a way to clear him of the charge of motte-and-baileying and protect his readers from the motte and bailey fallacy. As a stack, it explicitly recognizes that you might accept some of the items and deny others, while somebody using the motte and bailey is trying to trick you into accepting some item because you accepted some other.

When you believe something, it's often hard to separate it out into separate items, like a stack, because everything you believe all seems so causally interconnected. True believers often accidentally motte and bailey, because it seems obvious to them that the Foundational Assumptions necessitate the Specific Projects. You'll deny their Specific Projects and they'll be aghast that you're so evil as to deny the Foundational Assumptions, which maybe you _don't_ deny. But because, for a believer, it's all a mutually-reinforcing constellation of items, it's hard to imagine denying one without denying all. And then they get accused of motte and bailey trickery and it all goes downhill from there.

So the believer can protect against the appearance of motte-and-baileying by expending the intellectual effort to separate out the different items and explicitly note that a good faith interlocutor might accept some but deny others.

Expand full comment

I guess the lowest levels of the tower are so uncontroversial that nobody really disagrees with them (or if they do, they can't articulate why). The higher levels are where the discussion and debate occurs.

Consequently, this makes it look like EA only consists of the high levels, because that's all anyone seems to discuss.

Imagine a race of aliens watching humanity during the Cold War. "All the fighting is happening in places like Vietnam and Afghanistan, and never in Washington and Moscow. I guess Vietnam and Afghanistan are the most important states in the world..."

No. They're not. But Washington and Moscow were so heavily defended that the fighting could only take place elsewhere.

Expand full comment

I like this analogy, but the bottom layers are *not* uncontroversial, since almost nobody acts as if they were true. And it's easy to see why: 10% of your income is a lot, and donating to effective charities that work in far-off parts of the world doesn't give you as many warm fuzzies for your buck as donating to something local or with which you have a personal connection.

(I should probably mention at this point that I donate regularly to AMF, but less than 10% of my income.)

Expand full comment

But nobody acting on them doesn't mean they disagree.

It can mean they still live in a "negative sum game" headspace where anything that doesn't help themselves is wasted.

And I think most people would agree that arguing for that POV is nihilistic and undesirable, but that doesn't make them automatically switch their beliefs at a more visceral level.

Expand full comment

> It can mean they still live in a "negative sum game" headspace where anything that doesn't help themselves is wasted.

That's a very fundamental disagreement with EA's foundational assumptions! If someone thinks that, there's no point trying to convince them of the virtues of AI-alignment research or bednets.

Expand full comment

I don't think anybody thinks that.

I think everyone to some small extent always thinks that.

And how often you fall into that mode of thinking, and which parts of concept-space tend to be actively thought about while doing so, might greatly influence the prevalence of donations.

Expand full comment

I disagree that donating closer to home is purely about the warm and fuzzies (disclaimer - I only donate closer to home).

I think I agree with the part about not liking that a further place takes more thinking and careful planning and seeking to understand the local situation to actually improve things. Like if I could find something straightforward like one off earthquake or hurricane relief, I often donate to those, since it's straightforward.

Closer to home it's easier to tell if I'm making an impact. I'm still helping the disadvantaged (homeless shelters, asylum seeker legal funds), and as far as I can tell the fact that I've deliberately chosen these orgs already puts me above the average monthly giver in Australia (most people give monthly when they get ambushed by someone at the mall - usually the cancer foundation)

Expand full comment

Yes, these are all fair points.

Expand full comment

I disagree that it is easier to tell you are making an impact by choosing local charities.

If you gave a single homeless person enough to get housing, you could verify they get housing for a while, but not whether that fixes their root problem and protects them from becoming homeless again; or, if you gave money to 1000 homeless people, how many of them would realise the benefits you had in mind for that donation, or how different their life would be without it. Maybe they would have found a bed elsewhere without you the next day.

You would be nowhere near the level of scrutiny required to work out how many QALYs your donation gained.

While you don't get to see who gets the money when you give it to an EA global health charity, you can expect them to be doing randomized controlled trials to get answers about how much impact they are having.

(and even if a particular charity isn't doing RCTs, I don't think that invalidates the point, as some are, and those allow you to be more sure of impact than most local causes)

Whether you are giving more or less than your peers is a little irrelevant. Given you are giving some money don't you want that money to have the maximum impact?

I don't want to sound like I disapprove of your choices. That you are giving anything at all is great. If you want to give locally that's fine. I just don't agree with the chain of logic that says local causes are more easily measured.

Expand full comment

My local homeless shelter has metrics - the amount of money it costs to run the shelter and the number of people they supported and fed.

Do we hold a global health org to the same standard? Are we also going to address colonialism leading to reduced access to education and opportunity, poorer infrastructure, etc.? If I follow your logic, I should be funding an initiative for the government of a West African nation to sue France to get back their independence reparations, and maybe that money will go towards fixing their health issues.

What I think I should do is to donate to both causes - distributing antiparasitics in Africa and also the local homeless shelter. But to argue that local charities don't have metrics (or should try to fix the systemic issues) seems counterproductive. I'm happy to track the number of beds provided, number of nights open, and number of visas issued.

And for me personally, my locals are tax deductible and helps me increase my dollar impact. A lot of recs for global health charities are American, which aren't tax deductible for me.

Expand full comment

It's the middle layers that are controversial. Everyone agrees that it's good to do good. No one thinks mosquito nets are bad. What's controversial is utilitarianism, telescopic ethics and longtermism.

Expand full comment

Utilitarianism and telescopic ethics are on layer 2 of 5; I think it's reasonable to describe that as a lower layer.

Expand full comment

I think quite a lot of people do give 10% of their income to effective charities. A minority, sure, but not "almost nobody".

Expand full comment

>I guess the lowest levels of the tower are so uncontroversial that nobody really disagrees with them (or if they do, they can't articulate why).

The very lowest level does indeed seem pretty uncontroversial. So much so that even calling them "assumptions of EA" as though non-EAs disagree with them comes off like deflection. It's the same kind of disingenuous rhetoric Scott complains about when his opponents say things like "feminism is fundamentally about the belief that women are people" or "religion is really just about appreciating the beauty in the Universe".

The second-level stuff is all quite controversial, however.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

You can find people disagreeing with the bottom level in *this very comment section*. But yes, most people would agree with those assumptions. This is fine, though: the point is not to draw a line about EA, but to establish where your interlocutor first disagrees with the EA movement, and bring them along as far as they'll go.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> So much so that even calling them "assumptions of EA" as though non-EAs disagree with them comes off like deflection.

Well, more people nod along to these assumptions than "agree" with them in the sense of actually acting on them in a way consistent with being a good person. Is the thing. You could rephrase the foundational assumptions as the more confrontational: "There is always a lot of good to be done in the world through charitable donations/work. If you agree with this, and if you want to be a good person, then to be self-consistent you need to ACTUALLY TAKE THOSE OPPORTUNITIES month after month that you know to be there. Saying 'yes, it would be good if more people donated to charity' and then donating five hundred bucks to a random group once in a blue moon is not enough. If you know in your heart of hearts you could save more lives/do more good by donating X amount to Y charity every month, DO THAT, or else forfeit your right to say you are a person who really cares about charity".

Note that this doesn't just mean getting off *your* behind and accepting that, by your own principles, you should be doing more than you're doing now. It also means two other highly unpleasant things:

• Accepting that people you admire as charitable are actually probably inefficient and not the moral paragons you've always treated them as. That guy who always gives ten bucks to beggars whenever he meets them in the street is… doing *some* good, but could and (if he seriously espoused the principles he claims to hold) should do more good than that. The performative nature of what he does isn't an excuse.

• Your own past self has potentially been, through inaction, responsible for hundreds of preventable deaths — not even for a cool reason, but because you were trying really hard not to think about it.

Expand full comment

> more people nod along to these assumptions than "agree" with them in the sense of actually acting on them

I don't see how that refutes the claim that the assumptions themselves are uncontroversial, though. I can't even begin to tell you the number of things I believe I *should* do, or refrain from doing, that I nonetheless consistently fail to do, or fail to refrain from doing.

Moreover, I don't see why a belief like "it would be good if more people donated to charity" entails in a strictly logical sense anything about what *I* ought to do. Insofar as EA proponents think the one does imply the other, and insofar as they think my obligations are measured by the criterion of maximum efficiency as EA defines it, *those* are some of the higher-order EA beliefs that are, in fact, quite controversial. "It would be good if more people donated to charity" is not.

Expand full comment

This highlights how the word "should" is eliding a lot of nuance in the bottom tier. Most people agree that "giving money to charities is broadly a good thing." Very few people agree that "giving money to charity is a moral obligation, and not doing so is morally wrong." EA is wholly dependent on people accepting the second version of this, while assuming people will round it off to the first.

Expand full comment

>This highlights how the word "should" is eliding a lot of nuance in the bottom tier.

As well as the words "help" and "other people". "We should do something to help other people in some way" is not the same as "we should help as many other people as possible, as much as possible".

Expand full comment

FWIW I disagree with the lowest levels, as I'd said in my other comment:

https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of/comment/8596608

Expand full comment

I feel like this post was at least effective at helping me figure out where I depart from the EA community. It's not that I disagree with the base premise. It's that I already believed the base premise, and was already donating >10%, before I heard about them.

Maybe for some people EA at least helps them focus on the idea of altruism, but for me the movement has nothing to offer. It just feels like a distraction. Which is fine. Not every movement is for everyone, and I'm sure EA is doing some good work. But my path lies elsewhere.

Expand full comment

I'm curious -- what's your path?

Sometimes I come across talented former EAs who left to pursue their own non-EA vision of how to do good right; they always make me question assumptions I didn't even know I held, which is great.

Expand full comment

One of my biggest concerns with EA discussions like Scott's last two posts is a sense of precision that I don't think is reflected in the development literature. Confounders abound, such that even the sometimes-complicated analyses can give me the impression of a precision that's just not possible.

For example, should you have children? A lot of EA discussions break it down by extrapolating ad infinitum, like the 500 million years comment from the last post on "What Do We Owe the Future?" Maybe your having a child affects future humans 1% in one direction or another, and that easily means more than all humans who have ever lived in the past.

Really? If we're talking about humans, then we're not talking about a species that will last 500 million years. A few million at best, assuming we don't mutate away because of genetic engineering, altering our environment, or adopting new environments through space-faring. So what impact do my actions today have on humans in 500 million years? EA says there's uncounted masses of people. In reality, probably not so much.

Okay, let's take it down a notch, then, as Scott suggests, and go down the assumption ladder a rung. We should be interested in populations, say, 500 years from now, right? Clearly still human, and still impacted by what we do right now.

Really? What gives us any certainty that actions taken today will have an outsize influence on populations in 500 years? Scott and EA point us to slavery and abolitionism. I'm not convinced. There's a huge debate among historians about whether the old Great Man theory or the Trends and Forces theory is more of a driving principle.

A traditional EA approach might be to split the difference: let's say there's a 1% probability that donating to this or that cause alleviates the suffering of 100 billion people over 500 years. That amortizes out to a billion people! But just like Scott's last article, I feel like I'm getting mugged again. I'm not going to save a billion people. I'm signing up for a lottery where I have a 99% chance of helping nobody, and a 1% chance of helping a bunch of people - according to another complicated set of assumptions. One of those sets of assumptions is that my actions today will 1. persist into the future by hundreds of years, and 2. not be counteracted.

In reality, economics is a dismal accounting of all the ways these attempts to extend the graph out to the right just don't work in real life. There's too much noise. Can you think of a single person whose actions 500 years ago, if cancelled, would result in a dramatically different world today? Most people could probably be replaced by the next available person to take that position or do that job. Maybe slightly poorer, but the situation would likely be a wash in the final calculus of centuries.

Maybe Columbus decides not to sail West, and it takes Europe a few more decades - even a century - longer for Europeans to find the Americas. Or maybe not, there was a lot of trade pressure to find another way to East Asia. Maybe the languages and history of the people in S. America would be different?

But then, Columbus didn't get to the Americas by calculating QALYs of uncountable future generations. He got there by arguing for a shorter route to E. Asia, a 'Calling from God', and a shitty misreading of ancient calculations of the circumference of the Earth. Pursuing some grand strategy for a future completely unlike ours feels like playing that same lottery game from above, but with a few more orders of magnitude stacked against my odds of having any measurable effect whatsoever.

So in one sense, I'm not convinced EA's methods are aligned with my priorities. They seem all about solving the future, but I'm not convinced that's even possible. If it were, I think I'm less interested in speculative ventures like that.

Maybe I also depart from EA at Cause Prioritizations? Maybe earlier? I'll be honest that I don't think pandemics are in any way an X-risk. Existential means a >99.999% kill rate, which is just not something pathogens can deliver. How about nuclear? Again, not an X-risk. Horrific. But existential? Probably not. What about AI? I don't know, but my sense is that this, too, is probably overstated in the same way all the others are. AI is certainly accelerating by leaps and bounds, in new and interesting ways. Meanwhile, I look at what Tesla and Waymo are doing with driverless cars. Everyone agrees that the first 95% of the problem is the EASY part (not that it was easy), the next 4% is wicked hard, and the final 1% is debatable whether/when it will even be possible. (Surprise me, though! Recent videos of what FSD 10.69 can do are impressive.) With AGI people are getting excited, assuming the last 10% will be as straightforward as the first 10%. I'm not so sure.

Where do I go from there? I've always liked the idea of building a better future through means other than your work-a-day job, since long before I heard about EA. So yeah, I'm still at +10%, but I'm wary about throwing my small lot into an organization that's focused too far off into the future, or on abstract/unproven ideas. Mostly I'm worried about getting deep into speculative charitable ventures. To extend the mic drop line at the end of Scott's article:

Q: I’m donating >10% of my income to charity.

A: You should donate more effectively.

Q: How certain are you that your methodology is more effective at promoting the kind of human flourishing that got me into donating >10% in the first place? How do I know the signal isn't going to get swallowed up in the noise, the trends and forces, etc.? Honestly, I just want to make sure fewer children starve to death or are beaten or otherwise suffer. Very often, the EA community gets into abstract philosophizing, which isn't why I participate in altruism. I want to help create a better world. I believe EA does, too, but a lot of the abstract stuff feels like a distraction from my personal approach. I'm watching on the periphery to see if they get something right - like figuring out how to reliably help 3rd world nations ascend the development tree - but otherwise I'm taking my own road to a better world. I hope we're both successful.

Expand full comment

What do you donate to? How do you make that choice and update it as time passes?

Expand full comment

This is a good question. One heuristic I use, in business and in charitable giving, is that the people closest to a problem should be empowered to solve the problem. A lot of the development literature has a frustrating habit of interventions that initially appear effective, but after getting implemented writ large they later fail to have a significant impact.

I agree with EA that a person far away is as important as a person close to me. But I don't agree that my certainty of having an impact on that far away person is the same as the person close to me. I know when I save the drowning child that I did what I intended to do - save a life. Once I start paying people to go watch for drowning children, my certainty goes down significantly. Maybe I just paid people to fish children out of streams who were otherwise catching fish and now their families will starve. Or maybe I just paid a warlord's wages to enforce kidnapping of village children who were cleaning their clothes in the river.

I try to be very careful that the person who receives aid is close enough to the problem that they won't make those mistakes (and also aligned with me enough not to interpret 'repression of the villagers' as equivalent to 'aiding the cause of human flourishing').

Expand full comment

But what do you actually donate to? GiveDirectly?

Expand full comment

Nuclear war is not much of an X-risk. Would take over 20 degrees of cooling for many years to actually-in-real-life kill the species - you need the equatorial seas to freeze solid.

Pandemics are flashy, but I think the focus on disease-causing organisms when assessing bio X-risk is a mistake (it's very relevant to bio GCR, though). For X-risk I'd be more worried about Green Goo - a synthetic alga that doesn't need phosphate, fixes nitrogen, is more efficient than RuBisCO, and isn't digestible would kill us all by starvation.

Expand full comment

Yes, since I read Bean's posts on nuclear war and nuclear winter, I wince every time I see them listed as serious x-risks.

Expand full comment

Who is Bean and where are these posts?

Expand full comment

Bean is a regular SSC/ACX commenter who writes an excellent blog about military (and particularly naval) history. The relevant posts are https://www.navalgazing.net/Nuclear-Weapon-Destructiveness and https://www.navalgazing.net/Nuclear-Winter.

Expand full comment

Have you considered that bean is not infallible?

Expand full comment

I have, but his posts are pretty convincing. What part(s) of them do you disagree with?

Expand full comment

The part where he ignores the fact that there are reasonable urban priority targets requiring ground bursts resulting in substantially more illness and fatality.

Expand full comment

1. I know that those things exist, but they're going to be in the minority of urban targets.

2. Changes nothing related to the X-risk aspect.

Expand full comment

X-risk does not mean humanity is immediately destroyed, otherwise nuclear war being an x-risk would be obviously wrong.

Enough of a technological and population setback would itself be an x-risk. How many resources do we have to sustain getting back on our feet after a two century setback? How many times can humanity bear that kind of setback? I don’t know, but that’s the question to ask.

Expand full comment

Like what? You don't use ground bursts to destroy a city, you use an airburst. You use groundburst on hardened military targets that are underground. So, yes, there will be a bunch of these in North Dakota and Montana, centered on Minuteman silos and control stations. But nobody would waste one on New York. What for?

Expand full comment

Railway marshalling yards are the most likely candidate. Vital transportation infrastructure, and very hard to destroy.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

There are similar amounts of nuclear-winter and no-nuclear-winter papers published. Nuclear winter is pro-narrative for academia given the association with the anti-nuclear movement, no nuclear winter is anti-narrative. Discount the pro-narrative side accordingly and you get a quite-small probability of nuclear-winter doomerism being true.

Note also that it would have to be a really, really bad winter to actually achieve X-catastrophe. You basically have to glaciate every continent (to block hunting) and freeze the equatorial ocean surface solid (to block fishing). (Gathering I can accept as being hard, because you need lots of deadly retries to work out what's poison and not.) Complete collapse of agriculture drops the supportable population density, but nowhere near enough to break viability without hunting/fishing collapsing as well. Now, Snowball Earth requires a temperature ~20 degrees below modern, and that's assuming there are tens of millennia for the water to slowly cool and freeze (water has a high heat of fusion and heat capacity, even on the scale of the Earth system) - obviously, soot in the atmosphere doesn't stay there for 10,000 years, so you'd need much more cooling than that 20 degrees in order for a nuclear winter to achieve the full freeze (I'm not a climatologist, so I can't say exactly how much more).

You need really, really spectacular effects for any climate mechanism to be plausible as an X-risk. Nuclear warfare is a GCR, but we knew that already.

Expand full comment

And as Bean points out in his post, most of the pro-nuclear-winter papers have serious methodological errors and haven't updated sufficiently to changes in the on-the-ground situation, strongly suggesting that their conclusions are ideologically motivated rather than derived from the science.

Expand full comment

I think bombing us back to pre-industrial times would be enough, even if humanity doesn't literally go extinct. We can't sustain 7 billion people on this planet without modern science and industry. Averting a death toll of a few billion would probably still count as one of the most effective EA interventions you can do.

(And if you're taking the long view, we probably can't just rebuild ourselves and try again a few hundred years later, as we've used up the readily accessible fossil fuels we used to advance the first time around.)

Expand full comment

Oh yes, it would be catastrophically bad, but not an existential risk. As for the fossil fuels thing, see Scott's recent review of What We Owe The Future.

Expand full comment

Exactly. People are updating too hard on the "nuclear war won't literally kill everyone" argument.

Expand full comment

There aren't enough nuclear weapons to really do that, either. Yes, it would be bad, but we're talking being knocked back 100-150 years, not 1000.

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Ord disagrees with you regarding re-establishment of civilisation being impossible. The Precipice:

"Even if civilization did collapse, it is likely that it could be reestablished. As we have seen, civilization has already been independently established at least seven times by isolated peoples. While one might think resource depletion could make this harder, it is more likely that it has become substantially easier. Most disasters short of human extinction would leave our domesticated animals and plants, as well as copious material resources in the ruins of our cities—it is much easier to re-forge iron from old railings than to smelt it from ore. Even expendable resources such as coal would be much easier to access, via abandoned reserves and mines, than they ever were in the eighteenth century. Moreover, evidence that civilization is possible, and the tools and knowledge to help rebuild, would be scattered across the world."

(Also, most conceivable near-future nuclear wars would leave the Southern Hemisphere basically untouched aside from maybe a few hits on Australia, and even nuclear winter doesn't cross the Equator well. You just don't get a worldwide civilisational collapse unless you take the silliest of the nuclear winter calcs seriously.)

Expand full comment

is it about existential risks to humanity, to civilization in general or to our current one?

Are there any existential risks to humanity that can be influenced at all willingly?

Most of the things we are talking about concern our current civilization, or are even limited to our society and lifestyle. Trying to influence outcomes beyond that is pure gambling and impossible to control.

Expand full comment

> is it about existential risks to humanity, to civilization in general or to our current one?

Just the first one - the others are called "global catastrophic risks" (which magic9mushroom abbreviates to "GCR").

> Are there any existential risks to humanity that can be influenced at all willingly?

Yes, absolutely. Comet or asteroid impacts can be influenced by cataloguing potentially-dangerous objects and if necessary deflecting their courses (we are already doing the first part and working on the second, see https://www.chroniclelive.co.uk/whats-on/kielder-astronomer-dont-look-up-22643484). Human-caused x-risks (unfriendly AGI, engineered pandemics) can be avoided by collectively choosing not to do the dangerous thing (this is of course quite a difficult problem). Supervolcanoes can potentially be cooled down (see https://www.bbc.com/future/article/20170817-nasas-ambitious-plan-to-save-earth-from-a-supervolcano), but failing that can be transformed from x-risks to GCRs by establishing self-sufficient populations on other planets (this is of course also a huge challenge, requiring decades to centuries). Pandemics can be made less of a threat by increasing monitoring, developing better PPE, investing in measures like ventilation and far-UV disinfection, strengthening controls against bioweapon development, developing broad-spectrum vaccines, stamping out the wild animal trade, etc; see https://www.slowboring.com/p/we-should-expect-more-and-worse-pandemics for some ideas.

Expand full comment

The "spicy" essay doesn't seem great to me. Do I need to be giving as much to charity as you before I can critique your choice of charities? Morally maybe, but intellectually no.

I believe that giving lots of money to charity is a good thing. I also don't do it. If you do then fine, I bow down to you as my moral superior. But I also have an issue with the fact that your charitable giving is 100% devoted to buying raincoats for ducks, because if you're going to be making that sacrifice anyway I'd really prefer that it was spent, yknow, effectively.

Expand full comment

Agreed, this seems less like a "spicy essay" and more like "an angry/smug essay".

Bluntly, reading that essay reminded me a lot of dealing with evangelists in my area growing up- constantly take the moral high-ground with your interlocutor, try and guilt-trip them, and beat on the dogma over and over once people start asking questions you don't like. I suspect Scott's friends aren't letting him publish the article because it would give a bit TOO much ammo to people like me who argue that EA has become increasingly dogmatic and sectarian.

Expand full comment

Yeah, seems oddly out of step with the general quality of his thought.

Expand full comment

I mean, if you're utterly convinced you have found the One True Way and you keep hearing people challenge it (especially in ways you find as provocative/senseless bomb-chucking), eventually you're going to stop thinking of them as innocent fools who know not what they do and start thinking of them as heathens arrayed against the Truth in infinite malice. If you're lucky, you'll even be aware of this change in thought.

God knows (to use the phrase) I can get this way when people start asking glib questions about Buddhism like "but isn't wanting to escape the realm of desire ALSO a desire?"

Expand full comment

Not be excessively annoying but that objection has genuinely bothered me about Buddhism and I have not found a satisfying sounding answer. What *is* the answer to that?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

That the early translators of the Tripitaka into English should have been beaten with Zen staves until they used more precise language.

Buddhism breaks down the all-encompassing word "desire" we use to describe all internal motives and mental states that manifest actions into three categories:

-Taṇhā, meaning "avarice, lust, thirst, greed" is unbounded egocentric desire. It is the kind of desire that has the immediate relation to suffering- the urge to want what we cannot have which clouds the mind and causes pain both directly (through resentment and frustration at our non-possession of these things) and indirectly (through driving us to heartless behavior towards others). Whenever you see the phrase "liberation from desire", it refers to this mental state.

-Chanda, meaning "will to do (something)" are bounded, egocentric desires directed towards specific goals and dissolving upon the attainment of those goals. These are seen as normal and natural, and although Buddhas do not experience chanda that is merely because Buddhahood is transcendent and egoless, and thus does not experience hunger or thirst or a need for sleep and so on.

-Maitrī, meaning "benevolence, compassion, goodwill" is the egoless desire to alleviate the suffering of other living creatures, untainted by thoughts of self-aggrandizement or the expectation of a reward- starting in the bounded form of charitable work and virtuous deeds and culminating in the unbounded and pure desire for all living beings (including oneself) to attain salvation characterized as Buddha-nature. This is the kind of "desire" wanting to attain liberation is, and is seen not merely as a wholesome mental state but (in most schools) as the mental state that must be most directly cultivated to attain Nirvana. You will not find a single Buddhist text condemning the cultivation of benevolence as an obstacle to Enlightenment.

Working a field expecting nothing but the harvest and the landlord's pay is chanda- working a field while thinking "I should be the landlord, and he the farmer! How dare he have all that wealth while I have so little!" is taṇhā. Working a field eagerly without pay, asking not even for a meal to feed yourself, is maitrī. Hopefully this clarifies your question.

If you're familiar with the eschatology of certain Buddhist schools, then you'll notice a similarity between the word Maitrī and prophesied restorer of the dharma Maitreya. This is not a coincidence, as the name Maitreya essentially means "lord/master of benevolence".

Expand full comment

Very interesting information, thanks for explaining it so clearly.

Expand full comment

Agreed.

Expand full comment

> If you do then fine, I bow down to you as my moral superior.

I'm starting to think there's a deep inferential gap here between people who respond to morality as a motivating force towards taking action, and who treat it as another dimension for status games. In the first view: nobody asked you to bow, and choosing to do so does nothing to help.

Expand full comment

I think using a rhetorical device to call out someone browbeating you with their sanctimony (which absolutely is what Scott is doing here by responding to basically every critique with "Why aren't you giving 10% of your income to charity?"- he can argue that he is RIGHT, and that in fact he DOES have the moral high-ground, but he is absolutely insinuating that HE, unlike you, is doing the right thing and he thus has more weight in the discussion) is acceptable and characterizing its usage as seeing "morality as another dimension for status games" is an uncharitable reading.

Are you giving 10% of your rhetorical brain-cells to charitable readings?

Expand full comment

> (which absolutely is what Scott is doing here by responding to basically every critique with "Why aren't you giving 10% of your income to charity?"- he can argue that he is RIGHT, and that in fact he DOES have the moral high-ground, but he is absolutely insinuating that HE, unlike you, is doing the right thing and he thus has more weight in the discussion)

This is what it looks like when there is an inferential gap - that is *not* what Scott is arguing or insinuating. He is specifically finishing the Q&As with a response to the Q posed: "...to normal, down-to-earth charities?" "...today?" "...to produce systemic change?" "...to poor people who aren’t in those exotic philosophical scenarios?"

This is the most important paragraph of the post:

>>>>Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country, in some kind of systematic considered way beyond “I saw an ad for March of Dimes at the supermarket so I guess I’ll give them my spare change”. If you’re not doing this, your beef with effective altruism isn’t “the culture around Open Philanthropy Foundation devalues such and such a form of change”, your beef is whatever’s preventing you from doing that. You may additionally have an interesting intellectual point about the culture around Open Phil, much as you have an interesting intellectual point about which Bible translations you might prefer if you were a Christian, but don’t mistake it for a real crux.

If the quantity donated really is your crux and your engagement on that point is focused around who "has more weight in the discussion" and not a moral disagreement that can be interrogated... well. Inferential gap, I can't really relate.

Expand full comment

> Inferential gap, I can't really relate.

I can't relate to the intellectual framework in which "but what are you, personally, doing?" is considered a meaningful rejoinder to someone's critique of your account of what people in general ought to do.

Suppose the person making the critique is a brilliant academic philosopher who's also a dissolute alcoholic who focuses 97 percent of his energy on shagging hot grad students and coasting on reputation. At some point his EA-aligned colleagues annoy him enough to rouse himself to write a devastating takedown.

What does it matter whether he personally is doing anything at all to help others, or whether one would approve in a general sense of how he's living his life? The thing to evaluate is the argument, right, not the personal circumstances of the individual making it?

Expand full comment

> I can't relate to the intellectual framework in which "but what are you, personally, doing?" is considered a meaningful rejoinder to someone's critique of your account of what people in general ought to do.

First off: if the critic has an ethical theory that they are successfully implementing, there are any number of ways the conversation can go from there. The post's stinger of "You should donate more effectively" isn't a gotcha, *it's the second brick in the tower*. Others in the comments have made the point that logic is less a chain than a tree, and there are plenty of places to go from there.

But that doesn't seem to be what this is about. The only-sporadically-questioned assumption seems to be that the critic is failing to meet their own ethical standards as well as the EA's. And to put it bluntly, a moral philosopher who fails their own criteria cannot pretend not to be fucking it up either one way or another. I have great sympathy for the fallibility of mortal men (and IMO, any ethical system that doesn't account for it is a non-starter) but akrasia is not a virtue. Nonetheless, there might be something useful here - hypothetical arguments don't have nearly the weight of hypothetical observations, but I'd be interested in what your academic wastrel has to say. Sounds like he's on the far side of that inferential gap anyway, might be illuminating.

But this is the internet, and thinly-veiled bad faith is *always* a third option. Having convictions of one's own takes effort, and keeping them even vaguely consistent in a complex world is unending labor. Trying to achieve them is difficult, and assessing where one comes up short is uncomfortable. And when everyone is following their own path anyway, who will call you out? Far easier to sit back and just be reactive to anyone who might criticize you. Far easier to throw rocks at monkeys that try to climb the ladder.

So: "do *you* practice what *you* preach?" "Yes, and let's swap notes." "No, but here's the standard and here's how one could try." "No, and fuck you for asking." Not even close to a trilemma, but it forces the critic to show if they have at least a little skin in the game. Very useful, as a rejoinder.

Expand full comment

1. Akrasia is not a virtue -- that much we can agree on! But it's still unclear to me why you care whether your interlocutors are themselves morally virtuous. Or at least sufficiently non-akratic that their own conduct can be seen as a function of an internally held moral theory that's consistent with the position they're outwardly arguing for. What difference does it make?

2. Moreover, the way matters to you seems like the opposite of what I'd logically expect. I'll grant that a moral philosopher who acts immorally is "fucking it up either one way or another." But you seem to be saying that you'd take a critique by someone who's fucking up at the level of moral reasoning more seriously than a critique by someone who's fucking up at the level of practical implementation. Am I right in inferring, then, that you're more interested in engaging with weakly reasoned critiques of EA than with strongly reasoned critiques that the critic happens to act inconsistently with? Am I wrong to think that sounds backwards?

3. If the game you want to know whether I have skin in is the game of developing an explicitly worked-out, internally coherent moral system and optimizing for conduct consistent with that system, I'll readily confess to not having skin in that game. But I don't understand why that makes me a presumptively bad-faith critic. In light of what you've said above, it seems like it should be the opposite. My position is that this is not a worthwhile game to be played, and I'm successfully implementing that position by not playing it. Shouldn't that make me one of the good guys, on the terms you've defined?

Expand full comment

At bottom it all rests on this normative assumption that one 'should' help 'other' people. Personally, I have yet to hear a convincing argument for why I should help anyone else if I don't want to. But then again, that may just reflect my 'lived experience' as they say these days (whatever that means - is there an unlived experience?). Now if effective altruism is about how you can be effective once you accept the notion that altruism is something you should do, then you have to keep going and answer the questions of effective for whom, and effective in what way? Can these be answered without falling into 'should's?

Without the full essay, I'll hold off writing more or actually trying to make my musings clearer.

Expand full comment
Comment deleted
Expand full comment

EA philanthropy is ironically the worst kind of charity to achieve these feelings for most people.

Expand full comment

Lived experience is as opposed to professional experience, which is why it has a qualifier.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Well, obviously if helping others is neither your instrumental nor your terminal value, then no argument would convince you. EA takes it as given that people want to do good, and their working framework for doing good is total utilitarianism.

Expand full comment

I don't think you should help other people if you don't want to.

It's worth observing that others often find great meaning and satisfaction from helping others, and consider that, if you tried it, you might like it. You could start by helping people around you, if only by being generally pleasant, and expand slowly from there.

But it has to come from wanting to, really, or it's a bit pointless. There's little benefit for anybody in grudgingly helping while feeling obligated and cross. It definitely won't work for you and is also less likely to be effective help. It's like children sharing their toys - if they are told they have to, then it's not real sharing anymore. Their right to keep their own toys has to be recognized before they can experience actual sharing, which comes from deciding to.

Expand full comment

I dunno man, if someone begrudgingly donated $5000 to the Against Malaria Foundation every year, there would be 40 fewer dead children when they retire. I don't really care whether they loved the experience or hated it, the help would have done a lot of good!

Expand full comment

Yeah, as other people have said, if you honestly don't have "do good in the world/save lives" in your utility function, EA isn't for you. (You should be aware that some people would say that if you're analysing your own feelings correctly this makes you, er, well, ontologically evil. Though to be clear I think that's unfair.)

EA addresses people who tell themselves and others that they want to do good, but only act on these morals in incoherent ways.

Expand full comment

What if your utility function says you should sabotage EA to make competitors waste more resources and potentially make it easier to eliminate them in the future?

Expand full comment

Your question is basically the whole field of metaethics. You can't derive a normative claim from an empirical one. But most people have some existing moral intuitions that you can build on.

Expand full comment

The Drowning Child scenario is that argument. Should you help that drowning child even at cost to yourself? Most people would say yes. If you think you shouldn't, you may be considered a heartless monster, but at least you won't be a hypocritical heartless monster if you also don't donate to charity.

Expand full comment

The drowning child scenario doesn’t actually work though.

Expand full comment

Whenever I see this sort of glib dismissal, I wonder if it's in https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world

Expand full comment

So the argument is what exactly?

A: You would feel TERRIBLE if you didn't spend 5,10,20? minutes (or whatever) of your day to save a drowning child you came across.

B: Ok sure lets say that I would (not clear to me this is true for all people, but fine).

A: Ok so now you clearly should feel TERRIBLE about not spending 5,10,20 minutes of your labor to save some child in Africa.

B: Umm why?

A: Well because the situations are the same!

B: They are not the same.

A: They are exactly the same! A child is going to die in both cases and it only takes an arbitrary sacrifice on your part to save them.

B: No they are not exactly the same at all.

1) I have much less visibility into whether my assistance is actually needed.

2) I have much less visibility into whether it actually arrives.

3) I don't have nearly the emotional relationship to a child who is on the other side of the world that I do to one right in front of me. So psychologically they are not the same.

4) the social impacts are not the same, on the one hand members of my community are very likely to reward and respect me for this behavior, on the other my contribution is anonymous.

5) I also likely don't have nearly the actual relationship (the child on the other side of the world is unlikely to be mine or one of someone I know).

6) My action saving a child can be part of a general social understanding in my community that I want people to reciprocate *I want to live in a community where people rescue random children so my children are rescued if I am not at hand*, but Africa is not part of my community and has no ability to reciprocate with me.

7) There is only one specific child drowning, meanwhile the people I am saving overseas number in the millions. If there were a million children in the river, would you care to attempt to save any of them?

8) To some extent the aid to children in terrible environments incentivizes more children and more need for rescues. Say I jump in to save a drowning child, and get out, and then there is another child in the river drowning, so I save that one, and then there are two more. At this point I might stop, and I also might be kind of resentful about whoever is throwing all these children into the river upstream.

I could go on but you get the idea. Utilitarianism is really stripping out a huge part of how actual humans make actual decisions and actually feel about ethics, and instead replaces it with "pretend you are an autistic god sitting on the moon, what would you do?".

And on the one hand the answer is probably be a utilitarian because the normal connections and communities and social arrangements that drive most ethical and day to day decision making are now gone.

But the answer might also easily be "I do nothing because from that perspective people are ants and without much individual value". It is a delicate balancing act and one the utilitarians are generally not very honest about. I think in most cases they are fundamentally just pretty good people who get myopically focused on one tiny element/perspective on ethics (much like Marxists).

Expand full comment
Comment deleted
Expand full comment

Saving a drowning child? Probably not much different, but less likely. Hell, I am likely to be less certain the child is drowning. Context matters, and when you take people out of their context, some epistemic humility is required. Otherwise you end up intervening in situations where you are not needed/wanted.

If I am in my local park and see a kid struggling, I am a lot more likely to jump in and try and save them than if I am at some beach in FL. I just understand the situation and stakes better, and likely the other resources available and the set of reasons a child might be in the water.

Expand full comment

I think a confusion slips in here. I often see the question posed as, "Would you rescue a drowning child, even though you were wearing an expensive suit?" People usually say "yes" (although for myself, I would doubt my ability to execute the rescue, weighed down by all that wet wool). But just because you *would* rescue the child doesn't mean that you're obliged to. I mean: what creates such an obligation?

A sleight of hand is being committed, because in the scenario described a sort of quasi-parental instinct kicks in: one dives in to rescue the youngling without giving a thought to one's attire or the ethical imperatives of the situation. Then Singer tries to persuade us that in other situations where we don't feel the same instinct, we ought nevertheless to act in the same way ... for consistency? Either there's an argument that we're bound to alleviate human suffering wherever it occurs or there isn't: it can't follow from a choice (or even an obligation) to alleviate some particular suffering.

Since it appears to bear a weight in this argument I had not anticipated, I should make clear that I do still donate 10% of my income to effective charities.

Expand full comment

I think we all understand the concept of a moral obligation: even though you could argue that you are not obliged to take a certain action, I'm sure your moral intuitions tell you that neglecting to act to save a child's life (as long as it doesn't come with serious risk to your own) is unacceptable. Instinct or impulse has nothing to do with it: you can imagine yourself in that situation right now, and take as much time as you need to weigh the arguments. But it should be obvious that the child's life is more valuable than whatever you're wearing in this hypothetical, and unless you subscribe to some sort of weird non-interventionist ethic that forbids you from acting to fulfill your values, it follows that you have a moral obligation to save the child's life. If you don't agree with at least that, then your moral system is already completely incompatible with that of most people.

Singer's argument is then that you can perform the same value calculation for a child who dies of cholera at the other side of the world. There is nothing about these two situations - distance, familiarity, or cause - that would mean the same moral obligation doesn't apply. If you accept that you have an obligation to save a drowning child in front of you, then you must also accept that this obligation extends to other situations in which you could save a life at a very, very small cost to yourself.

What you call instinct and what I'd call moral intuition simply serves to take the highest estimate of that value calculation. If you wanted to intuit the value of a cholera-wracked child on the other side of the world relative to your own bank account, you'd probably put a much lower value on that life. But the point is not to claim that one intuition is wrong and the other one is right; it's that they both appear to result from value calculations, but if you actually ask yourself how much you would sacrifice to save someone's life in the most pressing case, you can see that the moral intuition was irrelevant to begin with.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

>There is nothing about these two situations - distance, familiarity, or cause - that would mean the same moral obligation doesn't apply.

This is exactly the point at issue and a point most people disagree with intellectually, and a truly huge number of people (possibly all) disagree with if we take their practical behavior as any indication of their beliefs.

That step is the entire argument. Positing it as an axiom doesn't work.

Fuck, I suspect people are more likely to help their own children if the children are right in front of them versus over the phone or in another country.

Expand full comment

This, though expressed intemperately, is basically correct.

In the first place, why should distance not be morally relevant? Alita will not stand by in the *presence* of evil, but she can eat chocolate while evil is being committed elsewhere.

In the second place, this argument proves too much. The conclusion can't be: therefore you should donate 10% of your income to effective charities. The conclusion ought to be: whenever you have any resources which could save any child's life, you must do so. But not even Peter Singer himself actually acts so, so he must recognise that the argument doesn't go through as written.

Expand full comment

My moral intuitions tell me that it's not my concern. It's not my child and the fact that it is drowning is not my fault. I have no obligation to act. Although one never really knows until the situation arises, I believe I *would* act, but I do all sorts of things without being morally obliged to do so.

I would have an obligation to act if it were my child, or if in my haste to get to my office I had accidentally knocked the child into the canal.

Of course my intuitions may be wrong, and I'm open to persuasion, but I don't believe they are unusual in this respect. And I think the example is encouraging people to think less rather than more clearly about what their moral intuitions actually are by positing an emotive scenario.

Expand full comment

The framing of the situation implies that you are the only one who could save the child. Like, you're there, the child is there drowning, and if YOU don't do something the child will die. In that case I think you do have a moral obligation to help.

But when someone tries to portray charitable giving in the same light it completely falls apart. In fact, if you think about it, it sounds psychopathic. Imagine you see a drowning child. You don't save him. Instead you pull out your phone and start streaming. "Guys, now is your chance to be a hero. Paypal me $5000 and I will literally save this child's life. But it's really YOU who are saving the child's life!!! Isn't that an amazing opportunity?! And if you don't donate, YOU are killing this child, so you know. Wouldn't you save the child's life if you were here, even if it ruined your fancy suit?"

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Or suppose you can't swim, but somebody else is present who could rescue the child, but won't, because it would ruin their $5,000 suit. After you remonstrate, they indicate a willingness to rescue the child if you pay them the $5,000 (and you happen to be carrying that much cash). Are you obliged to do so?

Expand full comment

At first, I thought I was just nitpicking, but on reflection I'm not so sure.

What you're wearing need not affect the process of saving a drowning child; actually getting into the water with a drowning person is not usually the most effective way to save them.

It's been years and the state of the art has likely improved, but I was always taught that was a last resort. First, you want to try to get the person to float on their back; that gets their head above water, gives them a chance to catch their breath, and lets you make a plan. Next, you want to try to throw something to them, reach out with a pole or branch, or get low and reach out with your leg; this lets you pull them to safety from a position of stability.

And if you do need to jump into the water, you'd be wise to take the time to at least take off any shoes, jeans, sweatshirts, jackets, etc. — even if they're under water, taking 15 seconds to disrobe might save you more than 15 seconds getting them to the surface.

Maybe the conclusion is something like: even if you're just trying to save a drowning child, there are more and less effective ways to go about it, and the way you approach the situation to optimize the outcome may not optimize for other considerations. I'm sure jumping in and dragging someone to safety feels (and looks) more heroic than calmly coaching someone to float on their back, but feeling (or looking) heroic shouldn't be the point.

Expand full comment

It's interesting that in the linked article "The Drowning Child and the Expanding Circle" (dated 5 April 1997), Singer believed the cost of saving a life to be negligible: "We can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at restaurant or concert, can be the difference between life and death to more than one person somewhere in the world."

Therefore in his phrasing, the cost of intervening is similarly small: "Your clothes get wet and muddy, and by the time you go home and change you will have missed your first class."

Because we are now modelling the cost of saving a life at $5,000, we amend the scenario to say that your $5,000 suit will be ruined, but in doing so we remove it from everyday experience, since most people don't wear $5,000 suits. And no doubt you're correct that rescuing the child wouldn't require the suit to be destroyed in practice.

It does seem odd to me that being wrong by a factor of about 200 on the empirical question of how much it would cost to save a life has apparently no effect on the validity of the thought experiment.

Expand full comment

Thanks for sharing that observation. It does seem like two orders of magnitude should be sufficient to cause an update of some sort.

Expand full comment

The gigantic flaw in so-called "effective altruism" is that it ignores the extreme and often grievous harm done to society and the planet in earning the income, 10% of which is donated to charities.

Most people only get an income if they do something for rich people. Most of the things rich people want done enough to pay for them to be done are extremely damaging both to the world and to the economically disenfranchised majority. If you have a corporate job, you're probably a hit man--the system is just designed to prevent you from seeing it.

Expand full comment
Author
Aug 24, 2022·edited Aug 24, 2022

A: Are you earning an income? If so, are you donating 10% of it to the poorest people in the world?

Aug 24, 2022·edited Aug 24, 2022

But a lot of EA people argue for earning to give, which is an argument for Moar Income!, in which case Michael's concern remains relevant.


Quote:

“I do regret the way I presented earning to give in relation to 80,000 Hours in the early days. I hadn’t appreciated how sticky an idea it would be relative to 80,000 Hours’ other ideas, nor how often people would conflate 80,000 Hours with ‘earning to give promotion’. I certainly hadn’t appreciated that we’d have to actively push journalists (etc) away from leading with earning to give in order to give the public an accurate representation of our views. But – to be clear – we weren’t deliberately setting out to mislead people; my regret is that I wish I’d understood better the way in which ideas spread and stick.”

https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/


Thanks for the link. I'll be honest, I was probably thinking of Sam Bankman-Fried when I made the comment, but it's good to see that the 80,000 Hours people walked that back a little.


His concern wasn't substantiated in any way. It's basically at the level of a narrative as presented here.


Life experience suffices. Capitalism may have been necessary and good in the 20th century, but these days it's making life worse, and you see objective evidence everywhere you turn. Even programmers aren't immune; it's become commonplace for them to have to interview for their own jobs ("daily Scrum"). Americans are getting poorer, and while the narrative is that their jobs and money are being sacrificed to lift the developing world "out of extreme poverty", analysis shows this doesn't hold up (the CEOs are taking everything, no surprise).

What's actually happening when the WEF types congratulate themselves on "ending extreme poverty" is that someone who used to fish a stream, live in a shack, and have $2/day of cash income, is now being told he needs "fishing rights" (at a price of $5/day) to acquire food the same way his parents and grandparents did. He can't afford this, and of course these "fishing rights" he no longer has are enforced by men with guns (state violence) so he has no choice but to take the "generous" offer of a job, now making $6/day from his fish, less $5 for the fishing rights. This is the sort of change that happens when capitalism supposedly lifts people out of poverty: they're still in the same poverty, but more money is being churned so the numbers look better.

The people running the current system are evil, psychopathic fuckers who have turned the world into an even shittier place than it was when they found it... and I am afraid they will not give up what they have peacefully, thereby forcing us into a conflict that could become very ugly.


What evidence is there that the recorded decrease in extreme poverty is of this nature, in the sense of "in most or nearly all cases leaving people worse off than before"?

The increase in life expectancy and reduction in infant mortality worldwide seem to point to it being genuine.


It's the developed world (especially the US) where things are getting worse and will continue to do so until the Davos men have us eating bedbugs and living in pods.

In the poorer countries, the desperately poor are remaining desperately poor: no real change. Capitalism isn't fucking them over more today than it was 50-150 years ago in a more explicitly imperialist form, but it's also not helping them.


You think scrum is a job interview? What?

founding

You're literally just making things up and claiming that your fictional scenarios are worse than starving to death. The world is not a shittier place than it was 100 years ago, and we have immense amounts of written and statistical evidence that it's gotten better for almost everyone.


But genuinely: yes and yes

Okay, now what? Full disclosure, I work in fossil fuels. I donate to homeless shelters, an asylum seeker resource fund, and an organisation for improving the legal standing of indigenous people (and the occasional land buyback).

Should I continue to work in fossil fuels and donate 10% of my income, or should I quit and take up financial counselling (directly helping people affected by predatory lending or gambling, etc.)? I know which one would probably make me feel better, but I'm not sure which one genuinely has a bigger impact.

(Current game plan: wait for a redundancy, invest redundo package to generate passive income, retrain in financial counselling - although, doesn't the process of generating passive income necessarily involve creating more of the exploited folk I'm seeking to help in financial counselling?)


If you were an EA, I'd say keep working in fossil fuels; the harm done by your job is going to be much smaller than the good done by donating.

But the charities you listed don't seem to me to be super effective, so idk.


Tbh, I diverge significantly from the givedirectly recommendations because they're very America-centric (most of their recs aren't tax deductible for me; the charities do work overseas but are ultimately based in the US) and also because I think it's better to solve short term issues than long term ones. Systemic change is hard and complicated. Mitigating x-risk is mostly pointless because I do not care about unborn hypothetical people at all, I only care about people from the moment of birth to the moment of death existing at the same time as me - mostly because I don't think my decisions need to be influenced by hypothetical nonexistent people.

So by my own metrics, I think I should try to find a good climate fund to donate to (but I haven't found one I'm happy with, either because I don't understand how they're proposing to help or because I don't think it'll work) and local humanitarian stuff focusing on now.


Clean Air Task Force is the charity recommended by William MacAskill. He estimated 1 ton of CO2 prevented per $ donated.


Really, really interesting recommendation, appreciate it!

Given the choice between continuing to work for a fossil fuel producer and funneling my income to the Clean Air Task Force, or joining the Clean Air Task Force instead, which do you think has a larger impact? Note that I think I might be especially valuable in the CAT if they ever start an Australian chapter, because I have energy industry experience and contacts.


> I diverge significantly from the givedirectly recommendations because... I think it's better to solve short term issues than long term ones

I'm confused by this part. GiveDirectly doesn't recommend anything, it just transfers cash to poor people. It's a charity, not a charity evaluator. It's also as short-term as it gets. What's the divergence?


I think he means GiveWell and wrote givedirectly instead, though I don't really see how "give people in Africa insecticidal nets" is a long-term issue compared to homeless shelters.


FYI, to the extent that tax deductibility is the sticking point for you, you can make tax deductible donations to GiveWell-recommended charities through EA Australia: https://effectivealtruism.org.au/donate/


Thank you! That genuinely helps!


One possible answer is, "no, I'm living under a bridge and posting this from a public library"; another is, "no, I'm working as hard as I can to destroy capitalism". There are people doing both at this very moment. Not me, grant you (since I disagree with the premises), but still.


I think that's unfair. If somebody with his worldview spends 10% of their resources (not necessarily income. Possibly an amount of time and energy that funges with 10% of their income) doing what they think is the optimal way to overthrow capitalism worldwide without regard for how fun/addictive those actions are, then I would say they hold themselves to much the same standard that you do.


There was no question here, and you have provided no answer. You believe you did some thinking here, Scott, but you did not. Not even rhetorically.


Source: Trust me. No data necessary.


I don't see how you can get past the fact that industrialization happened, and that it plus freer world trade has been a net benefit for humanity, despite the fact that it's easy to find ways in which any particular aspect of either brings harm somewhere. The human condition is always going to be about trade-offs, opportunity costs, taking advantage of asymmetric information, etc.

There's also the question of how you donate your money where you live: do you tip well? Do you give handouts on the street? Do you support a local endeavor whose impact in your town is obvious? Do you give actual hands-on time on your weekends to a local initiative? Aren't those just as valid a way to be EA? They won't solve world hunger, malaria, homelessness, drug addiction, or the big sexy problems of humanity, but why isn't making the life of the people around you - **just for today** - a little bit better a valid aspect of EA?

Lastly, if your hitman work effort ladders up to the CEO's fat stacks, and the CEO then donates to solve world poverty, is that donation not coming from your daily effort as well? Such that you are complicit in both the problem and the solution. But with the hitman analogy, idk, the only way to unwind that is to bemoan industrialization and freer trade, and the horse is just long gone from the barn for that to be a battle worth fighting.


Apparently, "something for rich people" includes the entire economy and infrastructure supporting your life and your ability to post stupid shit on the internet.


I just want to say I appreciate this reply. It was true and necessary, though not kind. Most importantly, it made me chuckle.

Aug 24, 2022·edited Aug 24, 2022

It was untrue and unnecessary, but it did also make me chuckle, so in that sense, it was good.

Tech is infested with pro-rich, pro-employer sorts who propagate false consciousness, people who often legitimately have 130+ IQs, but who are -2, -3 sigma in situational and social awareness, and who therefore still believe in capitalism and even corporate meritocracy (which is like believing in Santa Claus).


> "Most people only get an income if they do something for rich people"

Tech includes people who work on hardware, operating systems, and the web browser you use to write this. It's not just ads and cryptocurrency - that's a small slice of the pie. And you weren't talking about the tech sector, you were talking about "most people".


"Most people only get an income if they do something for rich people"

If that's true, it means that most people are employed in producing luxury goods or services. Which, considering the relative lack of subsistence farming seems like it's both true, and A Good Thing.


I agree with the general sentiment that serious, infrastructural tech work is generally better for the world than scammy ad/crypto stuff… but there's no way to prevent the tech you build from being used by authoritarian governments and employers.

Sadly, almost all money comes from the corporate system, which is objectively morally illegitimate and must be torn down ASAP—nonviolently if possible, but violently if necessary. Too many people in the so-called “effective altruism” community ignore the ways in which their work serves to maintain power relationships and a socioeconomic system for which the only morally acceptable choice is total obliteration.


It's used for all sorts of things, most of which support human flourishing. Get busy lobbying for regulating corporations or whatever it is you want to do, and leave people building useful things alone.

Total obliteration of a load-bearing part of the economic system is mass starvation and death from preventable causes - genocide by proxy.


This is why labels are dangerous. As you say, EA has become a movement. For many critics, that movement (and perhaps leaders, perhaps projects) *is* EA, and it's what they critique.

To me, EA is just the set of 4-5ish of the bullets near the bottom of the assumptions tower.

The whole trope of EA criticism is almost entirely confusion around this point. *Which* bullets should or shouldn't be in or out would be useful criticism; forgiving critiques of the label would be more useful in the defense.


This is a central point. EA isn't a set of ideas, it's a group of people pushing a set of ideas. If EA advances, these people become more powerful and begin to influence politics and society. If the people running the show are mostly arguing about killer robots and whether mosquitos have rights then that's the danger to society. Frankly, EA is mostly an argument against giving money to charity, so these sorts of people aren't strengthened by their ideals being normalised; doubly so if you'd be giving money to the "effective" charities that they're presumably running.


"EA is mostly an argument against giving money to charity" ???


In the sense that the existence of EA is a reason to not give money to charity.


Well this is an easy one. If we have any altruistic obligations, they are far less expansive than EA would allow. Certainly not crazy burdens like 10% of my income or my whole choice of career.

Yes, I would save a drowning child. No, I don't have the obligation to live a significant part of my life for others. Attempting to scale up the hypo is unpersuasive. In fact, I recall a recent blog post arguing convincingly that philosophical arguments scaling up normal-sounding hypotheticals to counter-intuitive conclusions are generically unpersuasive.

Besides, systemic change really is more important than charity. And yes, I work for systemic change, by donating and volunteering for the mainstream centre-right political party in my country.

Comment deleted
Aug 24, 2022·edited Aug 24, 2022

Absolutely not 1%. Absolutely not 0.1%, for that matter. Absolutely I won't deprive my children of their inheritance to benefit random strangers.

Systemic change is important not because it commands more resources, but because it solves the problem. If you help people become wealthy, they won't need your charity. That's a better solution all round.

EDIT: Just to clarify, I'm not saying we have any moral obligation to help people become wealthy. But there are ways to do that that will help you too.

Comment deleted

This is ludicrous. I'm not required to live my life for strangers, but it doesn't follow that I have no obligations to other people. In the very comment you are responding to, I told you that I won't leave my money to the dubious benefit of GiveWell NGOs, *precisely because* of my obligations to other people.

If you see no middle-ground between living (part of) your life for strangers, and moral nihilism, that says a lot about the poverty of your imagination, but nothing about me. Reciprocity, skin in the game, negative duties? Never 'eard of 'em, guv.

Stop speculating about my moral compass, it's beneath you.


Why save lives at all? What is so great about lives? The world has too many people.

Especially lives nearly completely disconnected from me?

If we suddenly discover a civilization of incredibly poor Intelligent creatures living under the Antarctic ice does that suddenly create new demands on my time and income?

Comment deleted

Plenty of charity is needed for people right in your communities. Hell, I supported my meth addict sister and her child extensively for a while.

Also work in a field that would be pretty high on the "EA impact metrics".

But I am sure you knew all that. Responses like this are why people don't like EA.


> Why save lives at all? What is so great about lives? The world has too many people.

Well, yes, if you think that small children should die miserably because "the world has too many people," then you're not going to get anything out of EA.

However, also, if you think like that, you are, in fact, evil, and have embraced evil in the depths of your heart.


Who says I think "small children should die miserably"? I am not harming any children. Them dying miserably is on their communities and parents, not me.

You didn't answer my follow-up question. If we discover a tribe of utility monster creatures who are very, very sad, living very, very terrible lives in Antarctica, how much of your current activities and resources do you need to divert to rescue them?

Comment deleted
Aug 24, 2022·edited Aug 24, 2022

>where rich countries did forcibly exploit poor countries for centuries

How long and how severely depends on where, and also doesn't map very well onto where people are needy, so the causation argument is fairly weak. For instance, I know there is pretty convincing work that the longer a place was under UK rule, the better it did and is doing today (yes, there are confounding variables like more attractive places being colonized early, but it is all extremely fraught and complicated, as all "historical data" is, due to lack of sample size, quality or controls).

I honestly think there is a strong argument decolonization was something that substantially increased suffering (if that is what you value).

It also doesn't necessarily have a lot to do with my ancestors, who were from a non-colonizing country and migrated to the US in the early 1900s to be miners.

>But I feel some guilt at having the massive unearned advantage of being born in a wealthy country.

I think I probably feel less of this because I grew up in the projects with a single mother who was passed out drunk every day in a rust belt town with very high unemployment.

>EA provides a good framework for how to spread that unearned advantage around to others in the most effective way.

I don't necessarily disagree with that at all. If you are just dispassionately like "how do I best spend resources to eliminate human suffering", EA is the framework for you!

I just disagree with the idea that this is predominantly what "ethics" or "doing good" is about, or that people with different priorities are wrong/bad.


> You didn't answer my follow-up question. If we discover a tribe of utility monster creatures who are very, very sad, living very, very terrible lives in Antarctica, how much of your current activities and resources do you need to divert to rescue them?

How about instead there's a giant asteroid currently headed towards Earth, which will kill you and destroy everything and everyone you love, but some aliens who could trivially stop it declare that there are too many sentients in the universe anyway? And besides, what if in Alpha Centauri there were eighty quintillion sentients about to be genocided too?

This is such a fucking inane argument.


>but some aliens who could trivially stop it declare that there are too many sentients in the universe anyway?

Well are they right? Seems relevant?

Also define "trivial". If I could press a button right now next to me and some child gets a plate of food I might press the button, I might even press it many times a day. If instead you are expecting me to divert resources and aid from my family and relatives and friends to people on the other side of the world simply because those people could use those resources more?

Well frankly who cares? I am not an infinite pile of resources, I don't have enough for all my projects, or remotely enough for all the projects of people I deeply care about. Why should I divert some to people with no relationship to me whose plight is totally unrelated to me?

Also, the Antarctic example isn't inane; it is exactly the situation you are proposing. The mere fact that someone had a child somewhere suddenly creates in me obligations to make sure that child has some minimum amount of thriving.

You can always retreat back to "oh if the person was suffering right in front of you you wouldn't stand for it you monster". Except the person isn't right in front of me, that is the whole point. You are trying to build this crystal palace of moral reason on top of one or two of our moral intuitions and when people object by saying "what about these other moral intuitions" you say "oh those particular ones don't matter".


You did write "the world has too many people", so you probably shouldn't be surprised if you're taken for a misanthrope.

I think the answer to your question must either be (i) none, we only care about humans or (ii) we'll redirect whatever amount of our resources we're currently donating to alleviate human suffering. That is to say, I don't see any force in the counterfactual, when we do in fact live in a world where a large number of poor intelligent creatures live sad lives.


You don't think the world has too many people?

So many problems are so much easier with 2 billion people instead of 10.


Looking at their brains, their social behaviour and their skill in solving problems, it is not unlikely that dolphins are just as intelligent as us. Looking at how dolphins manage to live their lives without having a devastating impact on the planet, shouldn’t we focus on maximising living conditions for dolphins?

Aug 24, 2022·edited Aug 24, 2022

So reject the 10% figure. The key idea, and the thing that ordinary people seem to do extraordinary mental limbo to avoid, is simply this: however much good you think you should be doing in the world, you should be actually trying to fulfill that much, not nodding along to "yes, it would be good" and then only doing good things once in a blue moon when the stars align.

Fundamentally EA addresses people who already think donating to charity is good in the abstract, and who maybe give to charity *sometimes*, but aren't consistent with their own stated principles.

If your belief is, truly and deeply, that you only have a duty to save a drowning child when you're physically faced with one… well, sure, EA isn't for you. But if you're just saying that 10% is too much, and the amount of your energy/life/wealth you feel you have a duty to devote to Doing Good In The World is more like, I dunno, one hour's worth of work a month… sure. Whatever. The point is just that you should think long and hard about what you think that figure is and make every penny of it count, rather than not giving some months, impulsively giving it all to some low-impact animal shelter or vagrant other months, and feeling like you've fulfilled your Duty To Be a Good Person just as much as if you'd spent it in a way that you know would have done far more good.


I'm not sure there's any such thing as the Duty To Be a Good Person. The way I see it, one has a certain set of duties that it is good for one to carry out. These will differ for everyone based on each person's own situation: a decade ago I didn't have the duties of a father; today I do.

There may be some duties that arise from our personhood alone and not from any more specific status or relationship. But there's no obvious reason to think that kind of obligation is as a general matter more important than any other. Nor am I convinced that there's any sort of freestanding obligation to be good, apart from the goodness of fulfilling whatever particular obligations we might happen to have.


What you're saying resonates somewhat with me and then I was caught off guard by your conclusion so I have to ask, what's wrong with giving money to vagrants? That's something I do sporadically and I've never felt like it was ineffective or a waste in any way.


I'm curious to hear what reply you get. From where I stand, EA pointing to the conclusion that impulsively giving money to vagrants is bad, and treating this as not even particularly controversial, is an immediate telltale sign that the movement is getting something terribly wrong.

Sep 1, 2022·edited Sep 1, 2022

Not sure if Substack already notifies you in this configuration of replies, but I've replied to @jonbbbb above! You'll be glad to know that by my lights, the basic nugget of EA *doesn't* inescapably lead to "you should stop impulsively giving money to vagrants" at all.

(In pyramid terms, that is. Doubtless some currents of thought in the movement say things like that. Though even then I'd guess that most EAs would say that you shouldn't let giving money to vagrants *replace* giving money to more "Effective" causes, but there's no moral prohibition in additionally giving money to vagrants if you want to and were going to spend that extra cash on yourself otherwise.)


(Sorry for the later reply.)

Oh, there's nothing wrong with giving money to vagrants in itself! The snag is in that "sporadically".

It sounds like — much like most people — you give money to vagrants based on the random happenstance of when you meet them and what cash you have on you when you do, so the amount will vary from month to month. Some months, if you happen not to meet very many vagrants, you will donate significantly less than in months with higher numbers of randomized vagrant encounters.

And I would guess you don't feel like you've been a "worse person" on months where you met relatively few vagrants to give money to. After all, like always, you gave money to whichever vagrants you encountered.

But, says EA thinking… this is kind of ridiculous! However much cash you are comfortable giving away to the needy per month, there are many months a year where you're not actually hitting that target; where you have cash left over which, if you want to be consistent about your principles, you have effectively conceded you "should" be giving away to charitable causes.

So the basic 'EA' thing to do here is to ask yourself how much money you would be comfortable giving to beggars per month before you stopped (presumably there is a line *somewhere*; you wouldn't just give away your entire life savings if faced with a freakishly high-vagrant month). Keep a tally of the money you actually give to vagrants over the course of the month. Then at the end of the month, subtract the amount you gave directly to vagrants from that "charitable budget", and donate the rest to other charitable causes, rather than let it sink back into your ordinary budget.

(Some Effective Altruists would, of course, say that money given directly to vagrants saves fewer lives than blah blah blah more complicated way you could be donating. But that's on higher levels of the pyramid. You don't have to accept this sort of comparing-different-kinds-of-suffering-and-help at all. The basic insight is "if you have a maximum amount you are comfortable with giving away for moral reasons, then you should not let random, changeable whims direct how much of it you spend, but, rather, make a conscious effort to keep hitting that target.")


If I loved a Bible study because it involved lots of good discussion of context and translation details and delving into the nitty gritty, but over the course of a few years it got bogged down in Revelation trying to resolve the premillennialism vs postmillennialism debate and became almost exclusively about that, I would be deeply irritated.

The deep debates around the upper levels in your tower are great! They're discussions worth having for those heavily engaged in the topic. But those discussions belong internally, not public-facing front-and-center. Every normie that associates EA with AI risk and animal welfare rather than "seriously, just give ~10% to some indisputably useful things" is a failure by EA, not because AI risk or animal welfare are *bad* causes, but because convincing more people to give more to a range of actually useful things is much more valuable than getting people already bought into EA to reallocate from one set of useful things to another.


Some EA ideas are hard to deny in the abstract (of course more effective charities are better than less effective ones), but the intellectual virtue of the movement is that it takes those ideas seriously and tries to apply them, even if it leads to weird conclusions that are out of step with broader cultural norms, etc.

There are many cases where "We should do X" is a socially desirable statement to endorse, while actually trying to do X is controversial.


Thank you. This is a really helpful overview. I've encountered EA primarily from the AI side, so it's good context to see some of the foundational assumptions laid out like this.

Something I've been wondering re assumptions is how fundamental the assumption of infinite economic growth is to the EA movement - because this seems to be a blind spot.


Wait, what? Where does EA assume infinite economic growth?


Exactly, yes. That's my question. Unending economic growth is a common assumption amongst technofuturists, and there is crossover between technofuturism and EA, but I'm not sure how fundamental the unquestioned assumption of infinite economic growth is to EA. It's a genuine question.

I read a discussion once where EA proponents were discussing - and I'm probably paraphrasing badly here - the best way that society can prepare for a zero marginal cost future where the majority of work was automated and human labour was economically irrelevant - and the conclusion seemed to be that the best thing was to accumulate capital, which would still have value when human labour was worthless.

Comment deleted

I think Thomas Piketty would tell you to zoom your timeline out a few centuries: https://slatestarcodex.com/2018/06/24/book-review-capital-in-the-twenty-first-century/


Ah, right. I think the answer is "it's not required except perhaps for some detailed arguments at higher levels".


I think (though am unsure here) that many EAs would agree that short-term growth is possible, desirable, and can be unlocked by interventions like deworming, malaria nets, or microfinance.

Aug 24, 2022·edited Aug 24, 2022

I don't think that many EA interventions need to assume infinite growth in order to be effective, though. Do anti-malaria bed nets rely on the percent of GDP growth in 2200? Do cage-free campaigns? Does AI risk (given that there's a pretty good chance of us inventing AGI in the next century or two)? Edit: On further reflection, I actually can't think of a single "mainstream EA" intervention that assumes infinite growth.

Moreover, as Scott mentioned in his recent review of WWOTF, an eventual cap on growth actually makes some EA arguments stronger. If there's a low cap on how big the economy can get, yet the economy is growing at an exponential rate right now, it can't be all that much longer (in the grand scheme of things) until growth levels out. That suggests we're living in a very unusual period of human history involving brief, explosive growth--which implies that now might be the most impactful time for long-term interventions.

Expand full comment

?

One of the most well-known pieces in EA (well-known because it clearly articulates what good thinkers already believe, not because of dashing originality) pretty convincingly argues against infinite economic growth.

https://www.cold-takes.com/this-cant-go-on/
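The core of that argument is a one-line calculation. Here is a minimal sketch with illustrative numbers (a ~$10^14 world economy, 2% real growth, and a cap of ~10^70, roughly the number of atoms in our galaxy - these figures are assumptions for illustration, not taken verbatim from the linked post):

```python
import math

# Back-of-the-envelope "This Can't Go On" arithmetic (illustrative numbers):
# a ~$1e14 world economy growing ~2%/year, capped at one dollar-equivalent
# of value per atom for the ~1e70 atoms in our galaxy.
current_economy = 1e14
cap = 1e70
growth_rate = 1.02

# Solve current_economy * growth_rate**t = cap for t.
years_until_cap = math.log(cap / current_economy) / math.log(growth_rate)
print(round(years_until_cap))  # on the order of 6,500 years
```

A few thousand years is a blink on cosmological timescales, which is the post's point: steady exponential growth cannot run for even a tiny fraction of humanity's potential future.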

Expand full comment

I'm an admirer of Effective Altruism--including its willingness to creatively consider some truly unconventional cause areas!--who thinks it's often unfairly maligned.

But, to me, this argument seems motte-and-bailey-ish.

How does one distinguish cases where one should judge a movement by its generally-uncontroversial broad foundational ideals, and where one should judge it by its more specific and controversial claims, policies, institutional culture, and so forth?

E.g., the general idea that one is morally obligated to donate a significant portion of one's income to the poor was not invented by modern Effective Altruists--see zakat, tithing, the Buddhist 'perfection of giving', the arguments of the Church Fathers that *all* one's surplus wealth belongs to the poor as a matter of justice, etc. (Which is not to deny that in contemporary society Effective Altruists are often quite unusually good at *living out* this ancient moral ideal.)

But, "if you think people ought to donate 10% of their income to the poor, you're an Effective Altruist" (and therefore shouldn't criticize the actually-existing EA movement and institutions) sounds a bit like "if you think women should have equal legal rights, you're a feminist" (and therefore shouldn't criticize the actually-existing mainstream self-declared feminist movement and institutions).

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Agreed; of course you may and should criticize the movement but as long as it's making things happen for its cause better than existing alternatives, it's silly to reject the movement as a whole.

DeBoer's criticism of EA - that the ideas either aren't novel or that the EA movement isn't needed to make them happen - reminds me of a criticism levelled at practically every movement that people find distasteful or harmful for any valid reason. E.g. "Supporting the rights of women isn't novel, and there are movements other than feminism that do this. Feminism flirts with these dangerous ideas and suffers from these excesses. Therefore I'm against feminism and in favour of an alternative that does the same for women's rights." To which the answer is: show me a movement that has people organised to work for the rights of women, and I might advocate for that one instead! (You can replace feminism with any movement here; it's just an example!)

Maybe there's some kind of definitional confusion here: the Enlightenment gave us the ideas about the equality of the sexes, but feminism is the actual movement that drives them. Same with EA: utilitarian moral philosophy gave us the ideas about donating effectively, but EA is the actual movement that drives that. If you want to criticize what the movement does, your criticisms might be totally valid and acceptable, but as long as the movement does what it's meant to do, and does it better than the alternatives, then turning down the movement as a whole while agreeing with its cause is sort of, well, meh?

Expand full comment

> Therefore I'm against feminism and in favour of an alternative that does the same for women's rights." To which the answer is: Show me a movement that has people organised to work for the rights of women, and I might advocate for that one instead!

No, I think an answer in the spirit of Scott's would be "Are you currently working or donating to improve the rights of women?"

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I mean, sure! But Morgan was saying that donating or working to improve women's rights doesn't automatically make you a member of the feminist movement; likewise, donating 10% to the world's poorest doesn't automatically make you a member of EA.

My grandmother is the only person I know who actively donates part of her income as a matter of private morality, and she's certainly never heard of EA, although she's read a bunch of moral philosophy and is generally a utilitarian; she would be a prime candidate for the movement if she were connected to it. I guess the basic idea is that like-minded people, who are aware of and espouse utilitarian principles in their private lives, join together to work for a cause, making that endeavour a public one rather than a private one (utilitarians of the world, unite!).

So there's a difference between "donating money" and "donating money AND being part of EA", and the question is whether being part of EA helps the cause of donating money to the world's poorest effectively. What maybe the discussion is missing is that a lot of people who would donate anyway just join the bandwagon because they already think it's a good idea and vibe with the EA movement (this could be a way to steelman deBoer's criticism); but still, I think it'd be totally remiss not to consider the fact that EA (from what I've heard!) improves donating or encourages more people to donate more.

EDIT: And to add, I do get that Scott's idea here involves being an EA fanatic who isn't interested in the EA movement itself but rather in donating, like a Jehovah's Witness who is solely interested in saving someone's soul from eternal damnation and the whole Jehovah biz is just a side hustle and totally secondary to saving actual souls.

Expand full comment

> same as donating 10% to the world's poorest doesn't automatically make you a member of the EA.

I really don't get why people think this is about policing the boundaries of the EA movement. That's not the point! The point is that most people should donate more and more effectively! If you do that, you're in agreement with the *important* parts of EA, whether you consider yourself one or not! Your grandmother is probably doing great!

> I think it'd be totally remiss not to consider the fact that the EA (from what I've heard!) improves donating or encourages more people to donate more.

Yes, absolutely. This is clearest in the case of charity evaluators like GiveWell, who direct donations to more effective recipients, but the existence of the EA movement has certainly encouraged me to donate more than I would have done otherwise even though I'm not a card-carrying GWWC member.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> I really don't get why people think this is about policing the boundaries of the EA movement. That's not the point!

I do get that and totally agree with it, but the criticism of EA is a criticism of the movement, and EA as a movement exists to advocate things like donating more and better. The whole point of deBoer's criticism is that you can do what EA does without EA, to which EA (here embodied in Scott's altruistic interrogation) responds by trying to get deBoer or any other critic to just donate more money! If we didn't care about the movement at all, we would just say "Yeah, you're totally right. EA sucks and there are some dangerous vagaries. We can donate money better without EA!" But we don't want to concede the point, because EA as a movement can make people donate better, because EA is a way for people to organise to advocate more and better donating. Right? Or am I missing something?

EDIT: So I'm not trying to police boundaries; I'm just trying to point out that EA is important and worthwhile as a movement, and that there's a crucial distinction between "individuals who do good privately" and "individuals who group together to do good publicly", which EA effectively is. So membership isn't important in itself (it's the donating that counts!) but the movement is, which entails a degree of membership, as otherwise it wouldn't be a movement!

Expand full comment

No, that makes sense. Thanks!

Expand full comment

And "are you donating at least 10% of your income to the world's poorest?" is IMHO an excellent response to anyone claiming that we can do better without a movement. If they answer "no", then they're obviously wrong; if they answer "yes", we can talk about the benefits of community-building and shared standards.

Expand full comment

It's interesting that, in your experience, the only people who feel morally obligated to give to charity are utilitarians. In my experience as well, utilitarian arguments often seem to be the most compelling here.

But the automatic assumption or insistence that utilitarianism is the *only* moral philosophy that can get you here is one of the things I find most annoying about the Effective Altruist movement.

I'm *not* a utilitarian--but I still feel the moral obligation to donate. I'm pretty sure there are a lot of other people like me out there! All the examples I gave of pre-modern religious teachings about the obligation to give from one's surplus income are deontological (justice-based) or virtue-ethics-based.

Ozy Frantz had a trio of great blog posts about how to make EA welcoming to more groups. Aside from political conservatives and religious people, *non-utilitarians* was the third group. https://thingofthings.wordpress.com/2016/09/13/you-dont-have-to-be-a-utilitarian-to-be-an-ea/

Expand full comment

Oh, you're right; I didn't mean to say that all donors are utilitarians or that donating is an essentially utilitarian practice, but that EA as a movement is driven by utilitarian ethics - or at least that its core principles are derived from basic utilitarian concepts, which led it to the practice of donating - and that such like-minded, utilitarianism-driven donors exist outside the sphere of EA. People can naturally take various moral-philosophical paths to the practice, and I'd guess on a hunch that most donors are deontologically motivated (even in the OP, Scott proposes the 10% rule as a deontological argument, albeit derived from a utilitarian place).

Thanks for the link. To me it seems obvious that "donating better is better than donating worse" is a very broad ethical guideline that doesn't require you to be a self-avowed utilitarian to agree with (and I sort of get why deBoer is disgruntled about how trivial it is).

Expand full comment

> But, "if you think people ought to donate 10% of their income to the poor, you're an Effective Altruist" (and therefore shouldn't criticize the actually-existing EA movement and institutions)

I don't think he's saying this. I think Scott's saying that people who disagree with EA cite disagreement with the higher levels, but their actions usually reveal that their true disagreement is with the more foundational, and supposedly less controversial, layers. He's not interested in drawing a line around a particular set of beliefs and saying "this is EA", he wants to get everyone as far up the tower as they'll go. It doesn't matter whether you're an Official EA Member, it matters whether you donate money effectively.

Expand full comment

As far as I can see there is long-termism and short-termism in effective altruism, but I see very little mid-termism. I have never seen any effective altruist seriously addressing the question: What will the children saved from malaria and intestinal worms do in 30 years?

1. Will they eradicate malaria and intestinal worms?

2. Will they cure their own children's malaria and worms?

3. Will they have children who get malaria and worms that need to be treated by tomorrow's effective altruists?

If the answer of question 1 is yes, basic effective altruism is excellent. If the answer of question 2 is yes, effective altruism at least fulfills its purpose. If the answer of question 3 is yes, effective altruists need to make sure that there is a steady supply of effective altruists in 30 years. Otherwise the result of their actions will be even more suffering a few decades into the future.

Expand full comment
Comment deleted
Expand full comment

Then I guess the question is: Are the effects on IQ big enough to change society? In a study like this one they don't look that dramatic.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144834/

Expand full comment
Comment deleted
Expand full comment

OK, let's hope for a positive feedback loop then. But is it not still just a hope? I can't quite understand how Effective Altruists can be close to one hundred percent certain of such a positive feedback loop.

For example, to many people the absence of such a positive feedback loop sounds much more plausible than an AI apocalypse. Still, the risk of AI apocalypse is much more discussed in the EA community than the risk that the feedback loops will not be positive enough.

Expand full comment

Regarding the effect of IQ on society, Garett Jones' "Hive Mind" covers things that study could not pick up on.

Expand full comment

Thank you for the tip! Seems like a very interesting book.

Expand full comment

If you scroll back like, perhaps even less than a page, through the archives, you'll see the book review.

Expand full comment

Thank you, I must have missed that. I will look if the book or the book review is the easiest to read.

Expand full comment

This is really the crux of it, and why I've never found the EA arguments I've been exposed to particularly compelling. Surely priority number one should be inculcating more people who share your memes and will carry on your ideas, and priority two is accumulating power, whether financial, political, or otherwise. Saving destitutes in foreign countries (who will go on to produce more destitutes, or perhaps become refugees and burden other countries) does not seem particularly effective, except perhaps as a marketing strategy or as a calculated effort to gain "legitimacy" so you can further your actual goals. How many recipients of malaria nets go on to become effective altruists themselves?

In the worst case, I think some EA interventions (like malaria nets!) are actually undermining future EA supporters, because if they're *actually effective*, i.e., actually saving a massive number of people, well, EA has neither a plan nor the ability to capture any of the value they're "creating": it's a transfer of capital to parties who have no interest in EA or the countries/environments where EA developed, and in the worst case, are actively hostile to them. Most people are not EA, and probably most people can never be converted to become EA -- it really takes a very specific flavor of progressive liberalism to even entertain the thought. And this is without getting into any HBD arguments, which can by themselves make a very compelling case against third world EA interventions.

If the master plan is to wow donors with amazing, unambiguous metrics (like "lives saved") and then turn around and dump their money into actual worthy causes like x-risk mitigation, then all's well and good, and this possibility is the only reason I'm not anti-EA but simply neutral.

Expand full comment

> Saving destitutes in foreign countries (who will go on to produce more destitutes, or perhaps become refugees and burden other countries)

I think this is unlikely. Reducing child mortality will accelerate the demographic transition to small, high-investment families, and thus economic growth in their home countries.

Expand full comment

> Surely priority number one should be inculcating more people who share your memes and to carry on your ideas, and priority two is accumulating power, whether financial, political, or otherwise.

*smashes AGI fire alarm*

Expand full comment

> Surely priority number one should be inculcating more people who share your memes and to carry on your ideas, and priority two is accumulating power, whether financial, political, or otherwise.

Hmmm, will Steve Omohundro next have to write on the "Basic EA Drives" (!?)

Expand full comment

Human biodiversity is downstream from natural selection in response to things like parasites. Reduce the parasite burden and you change what's being selected.

Expand full comment

Seems a bit weird to say that dying children in poor countries need to justify the value of their existence, given that's not the standard we apply to most people we deal with day to day - and you certainly wouldn't say it to a dying child in front of you.

Though empirically, reduced disease and poverty leads to economic growth and therefore better healthcare. These problems aren't fixed

Expand full comment

I'm not talking about justification, but about effectiveness. Saving dying children in poor countries is certainly altruistic, and also effective in the short term. But is it effective in the medium term, or will the effect instead be more suffering?

Expand full comment

A lot of them won't have been born in the first place (or rather their siblings won't have been). Parents in the developing world have more children when infant mortality rates are higher.

Expand full comment

I'm pretty optimistic neither malaria nor intestinal worms will be significant problems in 30 years' time, so taking your points literally, none of 1-3 will apply, although perhaps you intend "malaria and intestinal worms" to stand in for whatever problems may afflict that generation.

Expand full comment

I think you might be Making Up The Wrong Guy To Get Mad At On Here. The IRL circles I run in are progressive, and that's where I've met the most passionate critics of EA. They donate their income/time/labor to progressive causes, they disagree with efforts to redirect attention to other causes, and they absolutely believe they're agents of systemic change. Rightly or wrongly, these wouldn't feel like gotcha questions to them.

I think "keep going down an assumption floor until we don't disagree anymore" is unpersuasive, because if you go down enough floors of any moral or political philosophy, you get to uncontroversial tenets like "happiness is good," and it's in the distinctions around prioritization that most people actually decide whether their desire to promote happiness leads them to pursue a progressive, conservative, EA, whatever, agenda. It's in that last prescription - "more effectively" - that there's room for lots of aspiring altruists, and not just hypocrites, to disagree.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

"Happiness is good" isn't an uncontroversial tenet, especially once you attempt to measure it... Most of the critique of EA that I run across is that QALYs are an intrinsically disablist concept, and more generally that you can't capture human flourishing in numbers.

(So I point those people at GiveDirectly, but still, it's pretty foundational to EA that you can measure outcomes quantitatively, and that is not an uncontroversial statement in my circles.)

Expand full comment

+1

Expand full comment

But do they attempt to estimate the effectiveness of particular efforts within their cause area? Are they evaluating whether they should mitigate climate change by working for a NGO that's lobbying politicians to adopt more nuclear power vs working for a company that produces sustainable soap?

Expand full comment

Generally only through personal experience of whether funds are obviously wasted or misdirected - so they can't evaluate efforts they have no personal connection to, mostly limiting them to relatively ineffective rich country options (especially since actively visiting less well off areas of the world became unfashionable due to travel carbon expenditure and White saviour syndrome).

Expand full comment

My recollection is that liberals tend to donate less of their income, though that may be the result of the most popular target of donations being churches.

https://finance.yahoo.com/news/more-generous-liberals-conservatives-081500677.html

Expand full comment

Yes, I would point out that those numbers are both more distorted and less virtuous if you remember that "Betty Smith dumping $10k into God-song-light-hill Fellowship so that the pastor can buy a new yach- so that the forces of GOD can TRIUMPH against the ARMIES OF THE DEVIL (liberals) and her granny will get HEALED of her DEMONIC CANCER" is classed as a charitable contribution.

Expand full comment

In fairness, I’m hedging a bit with the income/labor/time thing, but I do think any numbers before the liberal mass panic of 2016 are probably meaningfully outdated.

Expand full comment

Pff, the Oxford Study Bible is the best Holy Bible. Everybody Knows.

One lingering question I have about EA is...does it matter what poor people do? If one has no realistic hope of attaining an altruistic career, nor can easily afford 10% income donations without significant QoL loss...is there still a moral obligation? I know I'm still in the top 1% of global income just by virtue of living in the USA, and yet...10% of my __annual__ income still wouldn't total $5k. That means even a full year's donations wouldn't be enough to save a single life! Do such rounding-error amounts even matter, morally?

I do fully agree many EA criticisms are aimed at branches rather than roots, though. That doesn't seem like a spicy assertion at all. A sort of...Weak Men Are Superweapons, at best.

Expand full comment

Rounding error amounts matter because there are more people in your position than there are rich people, so if everyone did it, the numbers would add up to a significant amount.

On the other hand, if you are doing direct work or building career capital, the opportunity costs of donating might be high enough to make it the wrong choice - the 10% advice is more aimed at people who are not doing anything else.

Expand full comment

Certainly...there are far more people in my position than in the "middle class", however defined. Said position being: stuck in a dead-end retail job, with negative net worth (I'll only break even in another year or so...the only nice thing about hitting life's 3rd decade), and using that discretionary 10% to fuel a very meagre retirement as a Pascalian wager. It's not like I literally could not spare another dollar, more like...life isn't *that* fun at this economic stratum? There are limits to how many creature comforts I'm willing to sacrifice so as to keep putting up with my shitty job, which consumes roughly five-sevenths of all my time, plus second-order costs in terms of physical and mental injuries.

I'd rather spend more on myself (chronically neglected/hard-living), or my friends, since not one of us lives a low-pain life. Giving What I Can is for when I've, I don't know..."made it". It seems prudent to ensure one's own continued income-earning capacity first, so as to enable future-donations, even at the cost of some present-donations. (I'd be far more comfortable donating if I didn't live in rental housing; when landlords start making noises about selling out to developers, that does not bode well for future finances. Really don't wanna be caught pants-down there.)

Of course, a perfect utilitarian would still donate, since the potential utils I'd forfeit are more than made up by some even poorer 3rd world person's utility gain. (And that recipient should then also thus donate to someone even poorer, and so on...trickle-down altruism.) Nobody except maybe Peter Singer is that perfect though.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

'to ensure one's own continued income earning capacity, so as to enable future-donations' is the same thing as 'building career capital' - the reasons for donating now rather than later in that case are mostly about habit formation, setting a good example for others and hedging against value drift, and most of that can be done with a much lower amount than the full 10%.

Expand full comment

Put on your own oxygen mask first. Long term, you're of no use to anybody if you don't take care of yourself.

Expand full comment

Personally I don't think you should feel an obligation to donate, if that's what you're wondering. I more think it should be advocated for people who already have more than enough money to maintain a decent life.

Expand full comment

I am with the people who say that this is a pretty basic Motte and Bailey.

EA doesn't even start with a foundation of "give". Or even, "help" (because "give" is pretty oriented towards people with dough.)

EA's foundation is "be right" and EA adherents are willing to spend a tremendous amount of energy disagreeing on what "be right" means. It's as if the main thing is the effective part, and not the altruism.

And this is oh so well exemplified by the niche ideas, the emphasis on 'spend it now, give to charity later', and frankly the piss poor record of EAs doing anything but giving pocket change.

As a Catholic - and one going through a struggling phase - I know very well what this looks like in my faith and in my own life. I moved last year and discontinued a bunch of local charities. Now it's a year later and I realize that I am down to 3% of my income in tracked giving. My faith has a lot of fancy churches, a lot of opportunities for people to dress up, and a ton of self-righteousness. It's still easier to put money in the collection plate than to work on/with the neighbor with a Netflix subscription and no car insurance. And that's nothing compared to individually living one's life like a disciple of the Man, day over day.

So I get that doing right is a struggle. I still think EA is not doing the Effective part well, and the Altruism part even worse.

Maybe this is a part of the human condition, to be forever part of groups with high ideas and lousy, delayed, distracted execution.

Expand full comment

What do you mean by "frankly the piss poor record of EAs doing anything but giving pocket change?"

https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data

Expand full comment

Based on prior surveys of SSC readers and giving percentages. I would be glad to update my priors if you have more data.

Expand full comment

Using SSC readers as a ballpark seems a bit like criticizing a Christian who runs a blog about evolutionary biology because too many of his readers don't attend church regularly. Well, yeah. The blog isn't about Christianity, it's about evolutionary biology. Scott's most popular posts are about politics, and politics is lamentably mostly ineffective egoism, the exact opposite of effective altruism.

Expand full comment

You miss my point - the self-identified EA rationalist readers reported that they gave far less of their income to any charity than the theist non-EA readers did.

This finding replicated over several years. I do not know what the most recent year findings are.

Expand full comment

That seems like a nakedly unfair comparison. How do EA-identifying atheists compare to lukewarm or non-EA-identifying atheists in charitable donations? You can't just pretend like you're controlling for EA when that obviously is controlling for atheism.

Expand full comment

If I understand you correctly, your thesis is that atheists are less generous in their giving than theists, regardless of EA affiliation.

While I tend to agree with this, it also indicates that EA itself is a waste of time and invested people should be focusing on converting atheists rather than spreading EA practices.

At any rate, if one got the latest SSC/ACX survey data, we could figure this out easy enough.

Expand full comment

Enjoyed this enormously. Not least because I can’t wait for contra contra Freddie deBoer!

Expand full comment

I'm gonna make a fresh and more explicit thread riffing off yesterday's post and a lot of today's comments.

Are you sure you're not going to burn down all the museums to get African population up to 5 billion?! Seems a little repugnant to me...

Expand full comment
Comment deleted
Expand full comment

It's that population *growth* is reduced, not that population is reduced! The population of the UK isn't lower than it was before the industrial revolution, nor do I think it would be higher if the industrial revolution had never happened!

"EA donations to Africans would reduce African population" is an incredibly spicy take, I will need a few hours to process it.

Expand full comment
Comment deleted
Expand full comment

That's fine, I tried to consider that possibility with the industrial revolution point.

Crucially I still disagree!

Expand full comment

Once the fertility rate falls below replacement levels you do get outright declines in population--see South Korea, Japan, Russia, most of Europe, etc.

Expand full comment

That's a disagreement with a lot of the upper levels, but not the base levels. If you truly value the continued existence of museums as a moral good, donate to protect them! Sure! The crux is whether you do so effectively and systematically.

Expand full comment

There's two parts here.

Whether I think it's good to make sure my donations are effective. I do, great.

But I'm also still allowed to criticise what the movement is doing. Every response is "YOU should do X, YOU should do Y." EA is a philosophy but it's also a movement, and I'm allowed to disapprove of the movement's actions.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Yes? I feel like that's just Scott's point: "don't confuse criticism of the movement, which can be valid, with criticism of the basic idea, which has yet to be successfully refuted to my (Scott's) satisfaction". If you agree with the basic idea but not the movement, and say as much, great! Doubtless some in the EA movement would criticise you, but you're fine by Scott, or at least by this post's Scott.

Expand full comment

It depends. Scott claims the baseline is _Effectiveness_; I think the baseline is _Telescopy_, or whatever word you want to use for valuing future people or total strangers over people in your town/circle.

If it's fine for me to be local, great - but I can also think it's bad for other people to be telescopic.

Expand full comment

When I look at the tower graphic, what I see is a heavily-fortified Motte at the bottom, followed by increasingly daring Baileys. :)

More seriously, I think it reveals one thing about the Motte-and-Bailey trick pulled by other ideologies: the harmful part is not the structure itself, but the *deception* involved. If the other "isms" simply laid out their assumption structure in a similar way, and accepted that many people only go for the first rung - there would be no problem.

Expand full comment

Anyone want to try making the equivalent tower graphic for other beliefs or theories? Will that end up being too controversial precisely because it suggests that some people might accept parts without accepting the whole?

Expand full comment

> This is also how I feel about these kinds of critiques of effective altruism.

Well, how I feel about critiques of effective altruism at this point is that I've been terribly rude. I thought I was talking to people who were disinterestedly seeking the maximum way to do the most utilitarian good in the world. But in fact I walked into a church and started critiquing their religion. And that's unconscionably tone deaf and rude in my opinion so I've stopped doing it.

My point was, and is, and has always been and will always be how the blindspots in the movement mean that even by its own values it is not living up to its full potential. But I now see that the irrationalities in the movement are load bearing. They aren't points of failure but necessary irrationalities and shibboleths to maintain the community. And at that point I'm left with the stark question of whether I want the EA community to exist at all. And that's never been a hard question for me: Yes, I do want it to exist. Maybe even my definition of "full potential" is unsustainable. Maybe every such movement requires some faith.

So it goes. So it goes.

Expand full comment

> But in fact I walked into a church and started critiquing their religion

I sometimes feel I'm doing the same thing, which I'm also not comfortable about. But part of what keeps me coming back to making the same criticisms is because I see something akin to religious faith but which is utterly denied by the proponents.

There is the taking of a single step backwards which EAs can never do. And in this they are very similar to the faithful of whatever religion. The book review that Scott was reluctant to publish about charitable failures is a good example - the effects of the NGO's intervention in Lesotho were clearly and unarguably negative, but an EA can never update their priors in the direction of "Maybe my 'donating to poor people' is itself a negative, pernicious activity and maybe I should stop doing it," because almost the whole of the person's sense of self is invested in the activity. So Scott can, for ever and a day, retreat to the question "Are you donating 10% of your income to poor people?" as if it were self-evident that this was a good thing to do. That is where I realise I have walked into someone's church. Not only am I wasting my time, but yes, it is probably also rude.

And my excuse for persisting is because that question of Scott's sounds to me like "Are you spending 10% of your time punching someone in the face?"

Expand full comment

> And my excuse for persisting is because that question of Scott's sounds to me like "Are you spending 10% of your time punching someone in the face?"

Please provide me with evidence that paying for insecticide-treated bed nets is the moral equivalent of punching someone in the face, and no, citing something vaguely similar that happened decades ago is not meaningful evidence compared to people actually studying the consequences of giving people insecticide-treated bed nets.

Expand full comment

I feel like I'm going to regret this, but can you point me to somewhere you've written up your strongest critique?

Expand full comment

Before I reply to this with a broad critique: Why do you feel you're going to regret this? I ask because I frankly feel like this is an insular, hostile community. And the fact that even the idea of coming into contact with criticism causes you to feel regret makes me think my evaluation is valid.

Expand full comment

Based on your description of EA as a religion, I think there's a high probability that I'll find your critique is condescending and misses the point, and that I'll regret spending time on reading it. But I'm willing to take that risk in case you have valid points. Please note that I'm *not* asking you to write anything new, because I don't want to waste your time either. But if you can point me to something suitable that you've already written, I would be grateful.

Expand full comment

Indeed. Well, I have written at least three critiques of EA on the comments of this blog and at least a dozen generally. I invite you to do the opposite and write out your argument to convince me of EA's premises so that I might more accurately target my critiques. Or be convinced if I find your case convincing.

As it is, I expect to find your defenses childish and insular to the point that you don't realize you've drunk the Kool-Aid of a cult. But I am fully willing to accept that I may be wrong and would be grateful to you for your strongest arguments in favor of your religion. After all, there is no shame in following the one true faith so long as it is the one true faith.

Expand full comment

OK. I think this is unlikely to be a productive conversation, so I'll have a brief look for your previous comments and respond here if I find anything interesting.

Expand full comment

I agree it's unlikely to be productive. Less because I have nothing to offer and you have nothing to offer and more because you come across as a partisan rather than a truth seeker.

That said, and I understand why that might not be a pleasant claim for you, I am not inherently writing off your critiques as invalid. I just don't feel like your arguments are particularly colorable at this point. And if you put together a solid critique I will take it seriously. But you haven't done that so far.

Expand full comment

So, in what ways do you think they're failing to do the most utilitarian good? And what about that are they not willing to discuss? It's hard to assess the credibility of your claims without specifics.

Expand full comment

First off, thank you for not insulting me with a snide remark in the opening. It's a low bar of politeness but one many others have not cleared. As for the rest: I have just said I'm declining to criticize the movement. If I oblige your request with a broad based critique then I would be hypocritical. But I offer you, as I offered others, a chance to make a positive defense that I will then comment on. (I do not guarantee a critique because I might agree with you.)

If that makes me less credible, well, I am explicitly not asking you to believe my claims. So that's no real issue to my point. This isn't a pose. My genuine conclusion is that I'm being rude by pointing out the articles of faith are wrong and that I'm not doing anyone any good by pointing out what are probably socially necessary, if untrue, beliefs. It is not that EA is evil or we should abandon it. And indeed, it never was.

Expand full comment

Thanks for this post. It's a great way of describing the issue.

Expand full comment

> Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

> A: Are you donating 10% of your income to normal, down-to-earth charities?

> .....

> Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

> A: You should donate more effectively.

Is this going to end with me being morally required to let a seagull eat my eyeballs? Because in that case, I'm going to follow some good advice I encountered recently, and bail out at step 1.

Expand full comment

Are you sure the seagull wouldn't derive more utility from the nutrients in your eyeballs than you derive from having functioning eyes?

(j/k, but I do recommend signing up to donate your corneas and other organs after death).

Expand full comment

Scott has basically written that argument down earlier and the “donate 10% and then stop worrying about everything else” is explicitly the strategy to not get your eyes eaten out by seagulls: https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

Expand full comment

I would politely point out that is a post from 8 years ago and that Scott's opinions on things have evolved since 8 years ago.

Expand full comment

Do you say that since you think the views in that specific post are likely to have changed? They seem consistent with everything he’s currently saying to me.

Expand full comment

Seems inherent to *any* defense of the tower levels above... level 2, right?

Scott's being *incredibly* smug throughout the piece, which is almost certainly influencing my interpretation negatively, but that last line of "you should donate more effectively" is pretty clearly NOT "don't worry about the details, if you're donating."

Expand full comment

Is the argument "if you don't give 10 percent of your income, then you have no right to an opinion" valid for all nationalities?

In America, charity fulfills many functions that taxes are supposed to fulfill in European welfare states. So for Americans, giving 10 percent can be seen as a form of self-taxation: Americans who give 10 percent of their income to charity give themselves a tax rate that approaches continental European levels.

Are people who pay a lot of taxes allowed to answer "No, but I give 3 percent of my income to charity" and continue the discussion? Or are they always out of the game?

Expand full comment

To this I would add tax deductions, which most European countries don't have, making the effective cost of the 10% donation in the US lower. (How much lower depends on an arcane ruleset that includes your state, income, and other deductions you had that year, so it's hard to put a number on it.)

Expand full comment

Usually it’s taken as 10% of pre-tax income if you can deduct it, and for whatever you can’t deduct, post-tax income.

Expand full comment

GiveWell.org mentions that donations to AMF and potentially other charities are tax-deductible in the UK, Ireland, the Netherlands, Germany, Switzerland, Denmark, Norway, Sweden, Italy, and Spain. I suppose that technically does not include most European countries but given that basically all the richest countries are on this list, that seems like a pointless nitpick.

Expand full comment

I think the intent here is partly to argue that it's much more important that people make substantial donations to charity than that they reach broad agreement on an optimal philosophy for identifying worthy causes to give to, and partly to push back on people whom Scott believes to be disingenuous in their criticisms of EA.

On the former, the "are you giving 10% of your income to your preferred causes" response is a pivot from arguments over philosophy of optimal giving to the more fundamental question of giving, to push back against the tendency for the perfect to be the enemy of the good.

On the latter, it seems like an attempt to force the interlocutor to engage with the core criticism, either affirming that they agree with EA on the lower levels before quibbling over details, or else biting the bullet and engaging directly with the "give 10% or pursue an altruistic career" argument.

Expand full comment

If “functional” EA just boils down to donating 10% of your income to make the world a better place, why is this a significant movement that deserves media and intellectual attention? That’s called tithing and it’s been around for millennia.

Expand full comment

In fact, Scott wrote back on SSC that the choice of 10% that he recommends was clearly directly inspired by tithing.

Expand full comment

Two answers:

1) Because most people don't do it, and the world would be a better place if they did.

2) The focus on *effectiveness* is new. Tithes went to your church, who spent them on the local poor, or maybe some more fancy gold-plated art in the Vatican. But if we focus on effectiveness we can do much more good for the same amount of money.

Expand full comment

I do think the focus on *optimizing your donations for effectiveness* is genuinely new, and it bothered me that the original post didn't emphasize *that* as the core idea of Effective Altruism.

Well before I encountered EA, I was perfectly capable of feeling guilty for not giving more, but the idea that I had an obligation to carefully consider where the donation would do the *most* good was one I hadn't encountered before.

Expand full comment

If the lowest level is just plain old charity--which is thousands of years old, see religious tithing--what's the big deal with EA? There's going to be endless and unproductive debate about what the most effective charities are because that is inevitable given that individuals will have completely subjective criteria for what counts as "effective". Maybe an individual would prefer that some percentage of his donation goes towards religious instruction/proselytizing in addition to mosquito nets and medical treatment.

Frankly, I don't have a problem with that. Just giving some amount of money to charity is, in my book, good enough given the obvious alternative.

Expand full comment

> If the lowest level is just plain old charity--which is thousands of years old, see religious tithing--what's the big deal with EA?

This is what Scott's comical "are you donating 10% of your income?" repeating answer is about. Lots of people are in favour of charity in the abstract, but the big deal with EA is that it says "donating to randomly-chosen charities whenever it makes you feel warm butterflies is bad! you need to be consistently, consciously charitable!". I think most common wisdom about charities, and about when it is moral to donate to charitable causes, gives as many brownie-points to the man who gives a hundred dollars to a beggar he happened to meet on the street as to the man who gives fifty dollars to a "save the whales" charity, or to the man who gives two hundred dollars to an anti-malaria charity twelve months in a row.

EA's point is: this is crazy! Charity isn't an art-form where, all principles being equal, everyone is entitled to doing it lackadaisically according to their whims and momentary aesthetic impulses! Or, rather, you *can* do that, sure, but that shouldn't be considered especially "ethical behaviour". If your moral system tells you that it would do more good to always donate to XYZ efficient charity, but you don't do that, if you don't even *attempt* to make yourself do that, if you just make irregular donations to whatever cause tugs your attention now and then — you don't get moral brownie points. Your charitable giving is just a weird hobby you're doing for aesthetic reasons, albeit one with positive externalities. And if you truly want to improve the world, rather than get "Sort Of Person Who Improves The World" self-esteem brownie-points, you should do better.

That's the EA assumption, and, as Scott says, most people don't donate 10% of their income to charity, so it seems demonstrably to not be very widely-shared.

Expand full comment

The 10% threshold (or any threshold) is most definitely not an EA thing. It shows up a lot in the context of religious giving, specifically what percentage of your income should go to the church, and it is literally thousands of years old.

So, given our devout Mormon or Baptist who is conscientiously handing over 10% (or 20% or 25% or whatever) of his income to the church, what's the objection? 1) A person who tithes X% of their income consistently is most definitely not a dilettante. 2) Given that the criteria for "excellence" in a charity will be largely subjective, what's the real basis for criticizing anybody's choice of which organization should benefit from their donation?

Expand full comment

To me, the foundational insight of EA is:

1) I have value X.

2) Some charities satisfy X better than others.

3) If I can figure out which charities those are, I can maximize X.

This doesn't require me to share a value system with anyone! If someone seriously investigates several causes to find where their donations will go, and ultimately decides that a church is the best option for their values, then I'd say that's effective altruism just as much as someone who puts in the same effort and lands on bednets.

Expand full comment

Which is better? A charity that spends $100k on bed nets or one that spends $90k on bed nets and $10k on bible class?

Expand full comment

Surely that depends on whether the Bible class ends up helping people more than $10k worth of bed nets, right?

That might depend on things like whether the class is well-taught, whether it causes students to change their behavior, whether Christianity is true, ... you know, nice simple and straightforward things for a charity evaluator to assess reliably. :-)

Expand full comment

My point is that "better" and "effective" are relative terms. How could they not be? One of the clearest illustrations of this is the disconnect between the religious and the non-religious.

There are other examples as well. Does soldiering count as an altruistic career? It's somebody putting literally everything on the line in service of the common good. The obvious next question is which side do you volunteer for, the Ukrainians or the Russians? Which side seems "better"?

Expand full comment

Yes, and there's sometimes a confusion between effective altruism and Effective Altruism, right? Like EA organizations' reviews and recommendations customarily reflect priorities and values that are common in certain circles, but not universal. That's partly what Scott is getting at here with the top parts of the tower.

At the lower parts of the tower, I may donate to certain causes or organizations out of affinity or habit, but I probably ought to recognize that (as with everything else!) that comes with opportunity costs. An effective altruism insight could be that donating to something you think is good may feel very positive—and may be very positive—yet that doesn't mean it's exempt from opportunity costs.

I know I certainly don't like to think about opportunity cost in my own charitable donations! Yet that doesn't mean it isn't there.

Expand full comment

These people aren't the primary target of EA, which is geared more at secular, inconsistently-giving agnostics/atheists. The EA *movement* probably disagrees with the Mormon givers' criteria for picking good charities, but as far as the basic ideas are concerned, they… probably count as base-of-the-tower Effective Altruists.

Expand full comment

Or, given historical precedent, it would be more accurate to describe EA adherents as base-of-the-tower charitable givers in the long tradition of religious charitable givers.

Expand full comment

What EA adds isn't recognising that these things (altruism, effectiveness) are true or good, but actually acting systematically on that recognition. To take one example, two decades ago there was no GiveWell, only an abysmal Charity Navigator. Since then, GiveWell has directed >1B dollars (not all from EAs, to be sure), which in expectation "will save over 150,000 lives and provide cash grants of over $175 million to the global poor".

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

There exist moral systems in which donation is morally bad, and the kind of donations that are "efficient" from a utilitarian PoV specifically so.

I don't think anyone here subscribes to them, but in the same way no one here subscribes to racism - i.e., there are "residuals" of a system that surrounds you which remain with you at a subconscious level.

In that sense donating to less effective charities might be an easier barrier to cross.

Not to mention less effective charities usually trade back status, so you might even be able to start donating that way and trick your brain into thinking it's something else.

---

As an unrelated point of interest: how much is 10% of income once you include the US tax deduction for charitable giving?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

As a child, I would put a dollar of my pocket money into the collection can, and it would make me feel good.

As a teen, I recognised that a dollar was not a lot of money, so I would put $10 to $20 into the collection can, and it would make me feel good.

As a young adult, I had read a bit about effective altruism. I recognised that per Yudkowsky the developing world is a pit of suffering that we still have an obligation to fix, that per Peter Singer standing by and doing nothing is as bad as letting a child drown, that giving in a suboptimal way is also as bad as letting a child drown, that every QALY could theoretically be quantified and measured and weighted against me in the final moral balance, that I had a strong moral obligation to do as much as possible, and that falling short was literally killing children. Whenever I saw a collection can, I put a socially appropriate amount of change into it (LessWrong had told me about virtue signalling) and felt slightly guilty. Nothing was good enough and everything was a reminder of the crushing obligation. I paradoxically gave less than I ever had.

Now I just ignore EA and do whatever I want. I no longer believe we should give a hypothetical stranger from another galaxy, planet, continent, or city as much moral weight as the people around us. This has somehow led to taking a high-leverage job helping developing countries, which is what EA apparently wanted anyway.

Expand full comment

Have you read https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/ ? That's how I learned about EA, and funnily enough it got me *out* of the mindset you're describing rather than *into* it. Like, sure, maybe consistently applying the logic behind EA implies that I'm a monster for every cent I choose not to donate. But it turns out I can just... inconsistently apply it, and I'll still end up doing pretty good!

Expand full comment

I'd be interested in what people think of philosopher Larry Temkin's critique - he used to be very EA but now is not so sure about certain parts, although I think he'd still advocate that we should do more, e.g. 10%+, today. (So I guess the upstream argument for helping that you make is still intact.) He has many more disanalogies with Singer's pond now.

https://www.thendobetter.com/arts/2022/7/24/larry-temkin-transitivity-critiques-of-effective-altruism-international-aid-pluralism-podcast

Temkin argues:

in favor of a pluralistic approach to aiding the needy, according to which there are a host of normative reasons that have a bearing on the nature and extent of our obligations to the needy, including outcome-based reasons, virtue-based reasons, and deontological-based reasons.

Rather than a narrow “do most good” EA approach, a decent person should be open to a wider range of moral consideration such as being virtuous, acting right, acting within permissible bounds, and in promoting as much good as one can. Many of these ideas are compatible with wider EA thinking.

To elaborate on Singer's drowning child analogy, Temkin argues we could consider whether those needing help are members of one's own community or family, how many intervening agents there are between oneself and the target needy (in the pond, there are no intervening agents), and whether one is actually saving lives as opposed to defraying the legitimate costs of intervening agents.

Temkin further considers internal corruption in intervening agents, and external corruption in the environment and countries where the needy are.

The case for aid may differ depending on the innocence of the needy, and on who else benefits from intervention, both directly and indirectly, and to the extent that warlords, tyrants and corrupt regimes may benefit or fail to change in response to their people's needs. There is also an incentive problem, in that aid agencies have reasons to cover up such corrupt behaviour. There is a further problem of aid agencies displacing local talent, and the difficulty of identifying successful projects that will replicate in different contexts.

Foreign interventions may invoke morally problematic psychological attitudes, show insufficient respect to local people and customs and undermine the interests and autonomy of local people.

Taken together these weaken the case for giving to international aid agencies, and Temkin highlights a real world example in the case of Goma.

Expand full comment

Q: Why don't you ignore those friends, and post the spicy stuff? I want to read it!

A: Are /you/ writing spicy essays?

(sigh)

Expand full comment

This seems ridiculous to me; Effective Altruism (the movement) is obviously located further up the tower. This is the kind of argument you'd ridicule from other ideologies, a combination of 'political correctness is just being nice to people' and 'how dare you criticise this art when you couldn't even paint it.' It feels defensive, in the way that in-group defenses whose hackles get raised are often written much more to reassure the in-group than to persuade the out-group.

For the record, since it's apparently necessary to argue from authority here, I've devoted my whole career to doing 'altruism effectively', gone through a lot of trial and error on what actually works, and am currently working exclusively on projects in public health in the developing world; I genuinely believe my current work saves lives and makes good use of my skills. I donate to charity, but less than 10%. My life's contribution to altruism is stably somewhere between 0% and 100% as you suggest.

Effective Altruism, at least when I see bits of the community, almost never struggles with the kind of questions that matter on the ground – what's the effective way to deal with a semi-corrupt government, or with office politics even when the people involved mean well? Effective Altruists always seem to conclude that the Important Questions are the ones they were interested in anyway, the same way opera lovers direct their charity to the opera, or Harvard alums direct it to Harvard.

EA spaces often feel to me like people who are really into dragons, so they ask themselves 'what kind of dragon would be most dangerous?' and start preparing anti-dragon shields and arguing over why other people's anti-dragon shields aren't optimal. Point out that dragons don't exist, and they say 'well what monsters are YOU fighting against?'. And hey, maybe some discoveries come out of this, but don't pretend you're not just doing what you enjoy and making a weird subculture out of it.

There have been a few instances where EA has been useful, like getting the debate about cash transfers in the mainstream, and God knows plenty of institutionalised aid is useless as a chocolate teapot. But I don't see the point of engaging with a community where you need three levels of sanewashing to get to something useful. If you want to look at my work and say 'Ha! Gotcha! You're actually doing EA,' fine, but claiming members as adherents when they themselves refused the label is more than a little odd.

Expand full comment

I feel that EA does the thing in your last sentence a frustrating amount of the time, and it rather humorously reminds me of the Rastafarians appointing Haile Selassie as the Second Coming entirely against his will, despite his repeated, strenuous denials of divinity and his disdain for Rastafarianism as a heretical Christian sect - which, of course, was merely taken as proof of his divinity, as the Son of God would be too humble to allow himself to be worshipped.

Expand full comment

When I encounter EA in the wild (i.e. not here where we're talking about whatever nonsense gibberish of the week), I have encountered it in exactly two contexts:

1. Reading Strangers Drowning, which has a chapter on EA, mostly focused on like, charity for Africans, earning to give, and the like; and

2. Going to GiveWell's site to make sure the AMF is still the top-performing charity for saving lives before dumping $4.5k into it.

I feel like the only time I ever see anything about how EA means we should all cannibalize our left leg to save a duck 2000 years in the future from malevolent AI is on this blog, especially in the comments section. Perhaps there are some web forums where people endlessly discuss the matter under the EA label, but as I am only interested in helping others, and not in endlessly discussing helping others, I only interact with this kind of thing.

Expand full comment

I give to charity but I don't actually have the expectation that my charitable giving will make the world a better place. I think that in terms of improving human lives the single greatest contributing factor of the last 40 years has been foreign investment. In the 1970s South Korea's largest export was human hair. Then along came Phil Knight and Nike. Shoe factories not only employ the locals, they also require investment in local infrastructure to transport raw materials to the factories and finished goods out, plus the resources required to install and maintain industrial machinery and so on. Now of course South Korea is largely known as the source of Samsung phones and Hyundai cars, a story that is being repeated in places like China and India. From that standpoint charitable giving is irrelevant compared to the willingness of millions of first world consumers to buy cheap goods produced in poor third world economies.

So why do I give to charity? To tread water. Capitalism and technology will eventually transform third world economies into something that looks more like the developed world. How many people starve to death in first world economies like the United States or Germany? Until then however there are still people who face starvation in the third world. My completely arbitrary hope is that anything I donate to charity will a) feed them and b) not do too much damage to the local economy while we wait for these transformative forces to do their work.

Expand full comment

Hi Slaw,

I've got ten pages left in "Basic Economics." What You "said" comes straight outta that. I agree 110%. Except I would point out that there are still children in the U.S. that don't have food security. IOW, they may not starve to death, but they don't get enough food to reach their full potential.

And, according to.. arrrrgh.. https://developingchild.harvard.edu/ *any* kind-a stress in a young person's life has *permanent* negative consequences on learning and life outcomes.

Whose needs should be prioritized is a question of values, not a quantity any number can give You. (Last I looked, the group at Harvard doesn't accept individual contributions, so there is that.)

Expand full comment

Isn't starvation a more urgent need than simple malnutrition however? Given somebody who is drowning versus somebody with a broken leg who gets helped first, and more urgently?

Expand full comment

It wasn't foreign investment that made South Korea rich; it was the state sponsoring local industrial champions.

Expand full comment

I think you could argue that there is a chicken and egg component there, in that domestic conditions on the ground made S. Korea an attractive target for foreign investment. But the role of exports in securing the economic prosperity of places like Japan, S. Korea, China--surely that's not controversial.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think the problem is that most people don't agree with the second level of the tower but won't admit it even to themselves because it's politically incorrect to be that parochial. If I'm honest I would probably value each QALY at a discount of around x10 for each of these "circles" (rough estimates based on introspection ofc, especially the lower levels are really hard to estimate):

Children, wife

Parents and brother: 1/10

Other close relatives and friends: 1/100

Acquaintances, people I speak to, distant relatives: 1/1000

Distant acquaintances: 1/10 000

People I know of: 1/100 000

People I have something in common with (nationality, profession, some other kind of community I feel attached to): 1/1M

Strangers in a society similar to mine: 1/10M

Total strangers in a different society: 1/100M

So for me to forgo a new $50 toy that gives my daughter an hour of quality-adjusted life, the money would have to buy about 10,000 QALYs for total strangers. Just knowing their story would reduce that number to 10 QALYs, which may be in reach in some cases but maybe not that scalable.

So if this is true for many people I would advise effective altruists to calibrate the shaming to a level where people prefer to donate some considerable amount rather than question the lower levels of the tower.
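The 10x-per-circle arithmetic above can be sketched numerically. This is my own toy illustration, not the commenter's code; the circle names, the `weights` table, and the `equivalent_qalys` helper are just hypothetical restatements of the estimates listed in the comment:

```python
# How many QALYs in an outer circle equal one hour of quality-adjusted
# life in the innermost circle, given a 10x discount per circle?

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours, so 1 hour = 1/8766 QALY

# Relative moral weights per circle, as the commenter estimates them.
weights = {
    "children_spouse": 1.0,
    "parents_siblings": 1e-1,
    "close_relatives_friends": 1e-2,
    "acquaintances": 1e-3,
    "distant_acquaintances": 1e-4,
    "people_known_of": 1e-5,
    "shared_community": 1e-6,
    "similar_society_strangers": 1e-7,
    "total_strangers": 1e-8,
}

def equivalent_qalys(hours_for_inner_circle, outer_weight):
    """QALYs needed in an outer circle to match the given number of
    hours of quality-adjusted life in the innermost circle."""
    inner_qalys = hours_for_inner_circle / HOURS_PER_YEAR
    return inner_qalys / outer_weight

# One hour for a daughter vs. total strangers (weight 1/100M):
print(round(equivalent_qalys(1, weights["total_strangers"])))  # prints 11408

# "Just knowing their story" moves them to people_known_of (1/100k),
# a factor-of-1000 reduction:
print(round(equivalent_qalys(1, weights["people_known_of"]), 1))  # prints 11.4
```

So the "10,000 QALYs" figure in the comment checks out as an order-of-magnitude rounding of roughly 11,400.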

Expand full comment

Even people who are philosophically clear that they don't believe they have literally equal responsibility to everyone in the world (and that they're allowed to care about proximity for some purposes) might still be embarrassed to think about the details too much or talk about them too openly.

I appreciate your thinking this through in public!

Expand full comment

Oh shit you're right, this is in public! ;)

Of course I would never talk openly about how much I value different particular people or even different groups, but my hope is that no one I know will read this, and if so, that the categories are sufficiently vague. I think it is clear from how people react to different types of tragedies that most people place an incredibly steep premium on proximity when it comes to caring about others.

In general I'm curious about why utilitarianism is packaged the way it is. To me the three legs - consequentialism, the stance that happiness is the only value, and the stance that all people should be valued equally - seem pretty much orthogonal, and can be embraced or rejected independently of each other.

Expand full comment

As I said in another thread, Dickens mocked the EAs of the 19C for concentrating on fixing Africa rather than the obvious grinding poverty around them. Back then the EAs were liberal imperialists - fixing Africa via colonial rule. While nobody would accuse today's EAs of being colonial, there's a taint of the Western man's burden.

Expand full comment

Yeah. The blunt reality is that our moral instincts are triggered by what we directly see. A sad puppy in a cartoon triggers emotions more than a thousand dead children you don't see. The question is whether you say "that's a flaw in the way our brains work, which I will try to mitigate" or act to fulfill those feelings instead.

Expand full comment

Obviously sad emotion is not the yardstick; I don't think the most effective charity would be to shield myself from sad fiction. But I think there is some real way in which I care much more about the well-being of people close to me than of strangers, separate from their effects on me or their value to me personally.

I think my answer to your question depends on whether I think of it politically or in terms of people's individual actions. I prefer institutions and societies to be impartial and to treat all their members equally, but as individuals I'm not sure there is any reason to question our intuition to be parochial when distributing compassion and charity. Corruption and officials using their positions to enrich themselves and people close to them feels morally wrong, but prioritizing family and friends when helping people in private does not.

Expand full comment

I find arguments for an obligation to universal benevolence compelling albeit depressingly impossible to live up to.

But it's noteworthy that utilitarianism in particular was originally developed as a political philosophy, setting out principles for how *governments* should act--not a system of ethics for individuals. Mohism, the ancient Chinese philosophy of universal equal benevolence that has been claimed by some as a precursor of utilitarianism, was also primarily a political philosophy to guide rulers.

Expand full comment

Total strangers in a society hostile to mine: -1

Expand full comment

That seems dangerous. You value shortening or immiserating their lives *just as much as* you value preserving your own life or your loved ones' lives? You would find killing one of those strangers exactly as valuable and worthwhile as saving your own life?

This would, other things being equal, mean that going and attacking a group of such strangers in a violent suicide attack would become instrumentally rational. Regardless of any details such as what those particular strangers thought about you or what they personally did or didn't do to threaten you or your own way of life.

Expand full comment

I realized that implication well after posting and considered editing it to <0 instead, but was lazy.

Expand full comment

Oh wow, reading this, this is so obviously it. Why has nobody said it so clearly before?

(Scott also jokingly wrote something similar about a literal 'distance law of ethics' way back, even backing it up with numbers: https://slatestarcodex.com/2013/05/17/newtonian-ethics/)
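For fun, the linked post's joke can be sketched in a few lines. The inverse-square form and the example distances here are illustrative, not the post's exact figures:

```python
# Toy version of the joking "Newtonian ethics" from the linked post:
# felt moral obligation falls off with the square of the distance.
# Form and numbers here are illustrative, not the post's exact figures.

def felt_obligation(need: float, distance_m: float) -> float:
    """Joke formula: obligation proportional to need / distance^2."""
    return need / distance_m ** 2

# A drowning child 10 meters away vs. the same need 10,000 km away.
near = felt_obligation(1.0, 10)
far = felt_obligation(1.0, 10_000_000)
print(near / far)  # the nearby child "weighs" a trillion times more
```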

Expand full comment

This reminds me a lot of the trolley problem in the difference between action and inaction. Because I doubt you would equate the $50 toy to 10k years of life of total strangers if it meant you have to take some action. Like if someone said hey press this button and it will kill 120 children who are total strangers, but it will give your daughter a $50 toy, I would guess you don't press the button.

Another thing this highlights for me is how difficult it is to calculate QALYs. Surely they don't scale linearly with time. For instance, in your list it looks like a factor of 100 between your own children and some friends. So I could see that one hour of your child being unhappy is equal to 10 of your friends being happy for 10 hours. (Forgive me if I'm wrong; I'm new to all this and not sure whether things like "happiness" even factor into QALYs.) That's actually a situation that could come up. Not a big deal. But if I had to choose between my child being unhappy for a solid year so that 10 friends could be happy for 10 years, obviously not a chance. It's a case of quantity having a quality all its own, I guess.

Expand full comment

All good points. You are right, it seems very evil even to me to actively commit mass murder to give my daughter a toy! I guess by actively hurting another person you in some way automatically enter into some kind of relationship and climb a few levels on the ladder, even if you don't personally know the person or see the damage you do. I don't know if this explains the whole difference in intuition between action and inaction, but it seems true to me.

I also think it's tricky to think about hour-to-hour or day-to-day happiness in this way, and more so when it comes to children. Even if I love my daughters very much and value their well-being orders of magnitude above that of strangers, it gets weird if I start to put great moral weight to every tantrum and daily struggle, even if it involves real suffering in the moment.

Expand full comment

"it gets weird if I start to put great moral weight to every tantrum and daily struggle"

Yes, there has to be some understanding that tantrums and daily struggles are normal and perhaps even necessary, not indicative of a bad quality of life. On the other hand, a tantrum that lasted for a year would be unimaginably damaging. Like I said, I don't know much about all this, and I'm not sure short-term happiness even factors into QALYs. Another question would be how suffering compares with death in terms of QALYs. How does the person's age figure into the QALY points? Etc.

Expand full comment

"When people say things like “I think AI risk is stupid, so I’m against effective altruism”, the two halves of that sentence might both be true, but the “so” joining them isn’t."

As stated, this is definitely wrong. While those people's reasoning might not be logical, the effect is still real. As we know, people aren't rational, and having a negative emotional impression of the higher tenets of EA can absolutely dissuade them from the basic tenets as well.

I know many people who got into EA, discovered that it disagrees with them about e.g. the importance of systematic oppression, and then stopped thinking about QALYs or donating 10 percent of their income. In a way it's emotionally all or nothing, because there is no "EA but with systematic change".

While the post doesn't outright say it, it strongly insinuates that people just bring up these objections as an excuse because they don't want to donate 10 percent of their income.

In many cases this is strictly not the case, and it would be very toxic for our movement to treat criticism like this.

Despite this, I still think it's important to finally acknowledge that some people genuinely don't want to donate 10 percent of their income, and that is at least part of the reason they are looking for more palatable criticisms. Nevertheless, their objections can still be at least partly honest.

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment

How well do the responses in the spicy essay work in practice? Doesn't the other party usually respond with a variation on "I am too weak, poor, and busy for altruism because rich people stole my money and labor, and EA helps them do it, and the good image of EA is vital to the operation, which is why I'm trying to undermine it"?

Expand full comment

I am weak, poor, and busy. I made <$20k last year, plasma donations make up a significant portion of my income, and I am concerned I will fall under the poverty line this year, which, because the ACA is written in a deranged way, will result in me owing the USG $5,000.

I also donated $4.5k to the AMF, after trying to altruistically donate a kidney (barred for medical reasons; I have been trying to cut back on sodas and have set an alarm for ten years from now to try again). I certainly identify more with EA than with the people whose plan is to cut EA's throat.

So I would say, no, the other party doesn't usually respond that way.

Expand full comment

I deeply admire your actions.

May I ask what inspired you to go so far beyond the recommended 10% of your income?

As I recall, Peter Singer himself has said that low-earning people may be obligated to donate even less than 10%; 5% or 2.5% might be sufficient.

Do you feel like your donations have significantly reduced your own happiness and quality of life?

I've read an account by a person with an income of c. $38,000 who donates 22% of her income without it affecting her happiness; she's genuinely satisfied with her frugal way of life. But your situation seems much more precarious.

Expand full comment

> May I ask what inspired you to go so far beyond the recommended 10% of your income?

Lack of having done so earlier, plus the amount ($4.5k) is the cost of saving one life.

In general, poor people give more, percentage-wise, to charity than well-off people. Perhaps it's a psychological thing about, "well, 10% of my income is worthless marginal value, but 20% is pretty significant!" Perhaps it's about paying it forward/reciprocation since the poor generally receive substantial assistance from others (in their communities and from the state) in a way the well-off don't. Or having a more developed sense of community, since it's necessary for survival.

> Do you feel like your donations have significantly reduced your own happiness and quality of life?

Nah. I am extremely frugal and don't buy much in the way of luxury goods - the closest is buying a medium pizza once or twice a week ($14-28). Most of my time is spent on my computer, my largest luxury good purchase was a cheapo laptop last month in anticipation of surgery (wouldn't have bought without that). The only way my happiness/QoL is influenced by my financial situation is when tax season comes around and I scream at my computer for two hours for how fucking annoying this shit is.

Expand full comment

Interesting! Thank you for responding! It's inspirational to know that it's possible to be happy while living such an altruistic life.

Your reference to resorting to plasma donation as a significant source of income made your lifestyle sound quite unpleasant to me, but maybe you find it less aversive than it's generally portrayed as being. (I don't find blood donation particularly unpleasant.)

When I read accounts by people living at or below the poverty line in contemporary America, they definitely seem unhappy. In particular, they seem constantly stressed by the *precarity* of their existence: the idea that any unexpected expense, like a car breakdown, could mean complete ruin (losing their job, eviction, etc.).

I guess savings would eliminate most of this stress. The EA blogger's budget included a substantial element set aside to save. Are you able to save any of your income, or are you living paycheck to paycheck?

Do you plan to have children? Raising a family on that income level seems extremely challenging: although the EA blogger did continue to donate a significant fraction of her income once she had children, it was no longer 22%.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think helping, directly or indirectly, to grow the population in the poorest countries in the world has a strongly net negative impact on the world long term. I think EA is bad for supporting projects that help accomplish this.

I'm not obligated to donate any amount of my income to any charity in order to make this criticism (except perhaps to some sort of organization that endeavors to reduce population in poor countries??), as I don't think the money is wasted or is needed more somewhere else. I think the charity is doing harm and should be stopped.

Expand full comment

It’s far from obvious that reducing child mortality rates (which is what many EA interventions try to do) increases TFR. Often poor people have a lot of kids because they’re less likely to survive to adulthood.

https://blog.givewell.org/2008/08/03/infant-mortality-and-overpopulation/

Expand full comment

Fitting with the essay, if you think that's true, the question would be: "Q: Are you working to kill more poor children?" Or, less confrontationally, if you consider overpopulation an issue, you could donate to charities that provide contraceptives, etc.

Expand full comment

Conveniently, I have a different problem with EA that happens to point into a similar direction for different reasons.

Basically, [the range of human experiences follows a long tail distribution](https://www.youtube.com/watch?v=IeD3nZX1Sr4) and thus the most effective interventions focusing on the present are about preventing extreme suffering rather than something like malaria, which in terms of the intensity of pain is comparatively harmless.

So if you wanted to give effectively but not helping people in poor countries, you could support something like [Clusterbusters](https://clusterbusters.org/) which will probably mostly help people in the western world.

Expand full comment

"I have an essay that my friends won’t let me post because it’s too spicy."

This is why SSC isn't what it used to be.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I suspect that Scott is using a literary device to distance himself from the supposed essay, and that what he has is a collection of these Q/As that expands at need to provide apposite responses to every new excuse for not being an EA. I am left wondering whether that collection expresses Scott's actual views any more than does his essay on using units of dead babies to measure money spent on anything but charity.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Jenga tower, motte and bailey, same thing, different metaphor. The thing itself is not changed by changing the metaphor, although it may change what you notice about the thing.

I want to introduce another metaphor, a road. Scott's image can be read as that road, starting at the foot. This road is broad and easy, easier still as, unlike the diagram, it gently inclines downwards. The road is paved with good intentions. They're written right there in the diagram. This makes it very slippery. Each step taken cannot be taken back, for Singer's Drowning Child stands always at the start, and the further you go the larger she looms. To turn back is to hold her under the water with your own hands.

The farther along the road, the narrower it becomes and the steeper it slopes downhill, until it meets the Altruism Event Horizon which rips minds apart.

Scott's diagram covers the outward activities. Here is my list of stations on the road describing the internal arena.

1. You must prefer good to evil.

2. You must prefer a great good to a small good.

3. You must prefer the greater good to the lesser good.

4. You must always do the very best thing you possibly can, limited only by your ability to discern it.

5. It's a theorem! You can't argue with a theorem!

6. This world is a bottomless pit of suffering! What are you doing about it? Right now?

7. While you're sleeping, people are dying!

8. New car? How many dead babies did it cost?

9. Resting? Revealed preference!

10. Recreation? “I sometimes hear people say, as an excuse for professors going to doubtful places of amusement, ‘You know, they must have some recreation.’ Yes, I know, but the re-creation which the Altruist experienced when he was born-again has so completely made all things new to him, that the vile rubbish called recreation by the world is so dull to him that he might as well try to fill himself with fog as to satisfy his soul with such utter vanity! No, the Altruist finds happiness in Altruism—and when he needs pleasure, he does not depart from it.”

11. "O feet happily chained which are walking in the way of salvation!"

12. "You yourselves are the sacrifice!"

Some notes to these, to indicate that I am not piling straw:

(1)-(4) That's what the words "good" and "evil" mean. You can argue, and some do, that they do not mean anything, but if you believe that, then this comment is not addressed to you. You have immunity to all forms of moral suasion. More moderately, you can argue that it is right to give more concern to the drowning child in front of you than the remote child dying of malaria, either on practical grounds (see comments elsethread on "telescopic charity") or moral grounds (it is right and proper to attend first to one's own circle). However, the first of these leaves you squarely on this road, while the second is one that, strangely enough, I have never seen anyone put up a reasoned argument for.

(5) refers to the theorems of utility theory.

(6) references Scott's essay on Bottomless Pits of Suffering: https://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/

(7) is about scrupulosity, a dysfunctionally obsessive concern that one is not doing enough good, which has been much talked of in EA and adjacent circles. I have not heard of anyone solving this problem.

(8) references Scott's suggestion of the currency of dead babies: https://web.archive.org/web/20161019200116/http://squid314.livejournal.com/2008/11/29/

(9) See Robin Hanson, passim.

(10) is adapted (he did not use the word "Altruist" but another) from a sermon by Charles Haddon Spurgeon, a renowned Calvinistic preacher of the 19th century, still read by those of that faith. Also a prolific one — this is from volume 82 of his collected writings. Have some more: "The fact is that man is a reeking mass of corruption. His whole soul is by nature so debased and so depraved, that no description which can be given of him even by Inspired tongues can fully tell how base and vile a thing he is!"

(11) and (12) are from the words of St. Cyprian, just as ferocious in his day (3rd century AD) as Spurgeon was nearer to ours.

It seems to me that the stratospheric heights of the Tower, the farthest reach of the Bailey, and the end of the Road, differ from these religious sources only deep down in the Foundations, at the innermost sanctum of the Motte, and on the first step through the Gate onto the Road. Whether that step is pulled by the salvation promised by God, or pushed by the Singerian Utility Basilisk, it leads to all the rest. That is hinted at by the Q/A that concludes Scott's post.

Expand full comment

Effective Altruism is just another -ism, and like all other -isms, when it gains sufficient momentum it just becomes another version of Animal Farm. It's only a matter of time before the UN and similar institutions claim the EA mantle and push all sorts of nightmarish top-down authoritarian measures to create this supposed EA utopia. I used to donate to EA funds until the Covid response revealed how corrupt, reductive, and authoritarian "using the Science and Data for the benefit of humanity" can be. So now I'll continue my efforts to support my family and immediate community, rather than funneling money through ideological institutions to remote, unseen, corrupt, violent places, thank you very much.

Expand full comment

I mentally classify my tax payments as charitable contributions and call it a day, although I will admit I have taken a recent philanthropic interest in turning Russians into pork rinds.

Expand full comment

Well barked. A happy Ukraine Independence Day! My students sang their anthem at the start of our German class this morning. (Lousy singers, good learners - love them.)

Yeah, my last 3 donations went to U24 - the military parts. The most effective charity around for now. https://u24.gov.ua/ Stop war. Kill invaders! Slava Ukraina!

Expand full comment

The main point of contention that is not on your chart is:

- (agree) "we should help other people"

- (agree) "helping 3rd world is more effective than helping 1st"

- (disagree) "we care the same about people everywhere no matter how removed from us, so we should help the 3rd world more than the guy next door"

A huge human heuristic is concentric circles of concern, where you care more about your family than your friends, more about your friends than your friends-of-friends. You care even less about your city or ethnic group, then about your country, then about the world. A subculture such as rats may fit on that spectrum somewhere, or just "people who think like me and I'd like their company".

This is adaptive in a number of ways. You know the needs of those close to you. It's better for everyone to have a few fierce advocates of their well-being instead of a faceless bureaucracy for which they're just numbers in a spreadsheet. Those who feel kinship with you will likely reciprocate your gesture (and you both know it - it's a coordination mechanism). You're resistant to counterfactual mugging. And best of all - seeing the immediate results of your help builds social ties and reinforces your will to do good, thus developing virtue.

Expand full comment

> A huge human heuristic is concentric circles of concern, where you care more about your family than your friends, more about your friends than your friends-of-friends. You care even less about your city or ethnic group, then about your country, then about the world. A subculture such as rats may fit on that spectrum somewhere, or just "people who think like me and I'd like their company".

This sort of reminds me of a post I was reading recently, about how relatively few people have a fear of flying nowadays compared to social anxiety, even though being afraid of heights is much more evolutionarily sane than social anxiety. The argument was that this is significantly because we say that a fear of flying is silly but social anxiety is relatable, normal, and perpetual.

Similarly, you have said that being this good is just how good everybody is, and that it's insane to be any more good than that. But, on the other hand, for most of human existence the morality water line included something like a 25% homicide rate, and presumably plenty of rape too. Somehow we managed to stop doing that, and it definitely wasn't by declaring, "this is as good as it gets." So I am dubious on this entire line of argument.

Expand full comment

Totally orthogonal to your point, but I noticed my flight last week didn't have any barf bags. At some point they seem to have decided that airsickness kind of isn't a thing anymore. I wonder when that changeover happened.

Expand full comment

It's not insane to be any more good than that, but it's against human nature and easy to get wrong. Perhaps some people can manage to do that without becoming twisted (cf. Mrs. Jellyby). Most of them, I don't think so.

Expand full comment

> it's against human nature and easy to get wrong.

Going to my refrigerator to grab a water bottle is "against human nature"; that doesn't make it easy to get wrong.

> mrs. Jellyby

Mrs. Jellyby is a fictional character.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

There's a long history of western civilization exporting bad ideas that were not adopted at home.

1. Communism.

2. Population control (There's a great book, "Fatal Misconception.")

3. All manner of dumb economic ideas, from import-substitution to state socialism short of communism.

4. State agriculture boards in former colonies.

etc. I could go on.

It's not completely nuts to be skeptical of educated westerners pushing ideas that might appeal to a foreign dictator with the power to put those ideas into action, to the detriment of those he has power over and possibly the whole damn world.

Ideas matter, and the worst consequences of bad ideas tend to be avoided by rich western countries.

That being said, it's not my business how other people spend their money. If it's illegal, that's the government's problem. The amount of money at stake here isn't really very much, and it's possible someone could stumble on something really effective. The more monolithic the movement becomes, the less likely that will happen.

If some cause is immoral, I guess I have some obligation to say something, but I don't matter very much, and no one is going to care what I think.

Human beings are suggestible and listen to high status people and ignore low status people. It's a waste of my time to fight it. The end.

Expand full comment

Is there a charity that doesn't even indirectly help people procreate by feeding or treating them, but only suppresses population increase through different forms of birth control?

Expand full comment

I agree with this "defense" of EA. Wrestling with EA has made me think far more analytically about my philanthropy. Being charitable to the best of your capacities, in whatever form that charity takes, is table stakes for entering this argument with a clear conscience.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

No, because if you reject egalitarianism then effective altruism becomes nonsense. Why donate to help others if they are not in a group, or fulfilling a function, that you care about? Who the drowning child is matters. None of your criticisms of criticisms matter, as the rejection of the value of those being helped invalidates them.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

>If you destroy the foundation, the whole tower falls. But if you destroy the top floor, all the other floors are still standing

I can't resist pointing out that the WTC towers collapsed from the upper parts downwards, destroying them even to their foundations, and an entirely new structure had to be built after clearing away the rubble. I do not know what metaphorical moral might be drawn from this.

Expand full comment

>I do not know what metaphorical moral might be drawn from this.

I'd say it's pretty direct. Refusing to accept the fragile pinnacle [extreme conclusion] in turn necessitates demolishing it all the way down, or else you won't escape Scott's chain.

Seems like I've read many religious de-conversion stories that follow a similar trajectory, where it wasn't some question like the theory of a deity that tripped them up, but tugging on some relatively-minor theological thread ends up unraveling the whole tapestry of their faith.

Expand full comment

When I read the title, I assumed that was going to be the point of Scott's analogy: that EA has a hard time persuading people because the whole thing could crumble if an attack successfully lands on any of its many levels.

Expand full comment

Perhaps that is an intended subtext. What level of 5-dimensional chess is Scott playing?

Expand full comment

"The things EAs do aren't actually effective to help others, they should instead do [...]. My reasoning is [argument] and [evidence]!!!"

Have heard this over and over again. It's logically equivalent to saying "the state of the art of science is wrong, let me do a study and publish about it to demonstrate this!".

You aren't disproving science, you are literally applying the scientific method and joining the collective effort.

Expand full comment

The tower of assumptions sounds like a motte and bailey from the inside.

A: “The best thing we can do is build giant cages of crickets orbiting mars.”

B: “that seems kind of crazy”

A: “how come you just don’t want to help people?”

The state takes close to 50% of my paycheck. Am I to support a wife and children on the other 40%?

I totally agree with giving and want to give more than I can. I’d be all over EA if they said “end the welfare state and lower taxes so people can give more to charities that actually do good.” But for some reason that one never comes up.

Expand full comment

“The state takes close to 50% of my pay check. Am I to support a wife and children on the other 40%?”

If you are actually earning enough that your average tax rate (rather than marginal tax rate) is indeed close to 50%, first of all, congratulations on your financial success! Second of all, yes, if you are earning that much you absolutely can support a wife and children with the other 40%.

Expand full comment

Ok.

Why don’t you convince my wife of this?

I’ll wait patiently while you report back to me how that goes.

Expand full comment

I’d be delighted to! Can you give me her contact details?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I asked her just now. She dropped the topic and went back to our conversation about how to resolve the underlying emotional baggage we are both bringing to parenting.

From our perspective, this is the most important thing we can do with our lives. But I guess that’s selfish of us and we should instead put our kids in daycare 10 hours a day, work longer at stochastic reward factories, and donate to a nonprofit exploring fun ideas without any empirical feedback loop to reality.

PS: Federal taxes 35%, state taxes 8%, and local taxes 7% make 50% total. It doesn't take a massive income to get to 50%. How many Bay Area families can afford to have more than a few kids?

Expand full comment

You make enough to be taxed 35% federally *total* on your income? Then your family is pulling in two million dollars *every year*. That qualifies as massive even in San Francisco.

Expand full comment

Just went and checked this. It's not 35%; it's 27%.

So, federal: 27%

State: 8.7%

Local sales tax: 9.1%

So the total tax rate is 27 + 8.7 + 9.1 = 44.8%. Not quite 50%, but not way off from that either.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

First, I have run into someone in the commentariat here saying they just don't pay taxes; the motivation was precisely that they think they can spend the money more effectively than the government will. So, it's not a completely unheard-of position in EA circles. More broadly but less concretely, I think there's a fair amount of overlap between EA and small-government libertarians, so your position of "I would like to give less of my money to the government to misspend" is not particularly obscure. (Whether that's a change that one can push for cost-effectively is a separate question; I agree I haven't seen much discussion of it, but then I wouldn't seek it out.)

Second, do you endorse the view that all nonprofits are "exploring fun ideas without any empirical feedback loop to reality"? If not, are you donating to ones that are more effective than that? My own introduction to EA was GiveWell, whose whole point is to obsessively track the quantifiable impact of the interventions they support, so this particular angle of attack seems odd.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I would suggest your wife start by studying the Lotus Sutra and realizing that the voice in her that says "this is not enough" wouldn't fall silent even if she possessed every jewel, every coin, and every wonderful treasure that existed in the entire world.

EDIT: Upon reading further, now that I know you have three kids- I think that the best thing you can do to help others is being there for your three children. You've already decided to walk the path of a father, so the most virtuous thing you can do is to be a kind, loving father to your children, cultivating benevolence in them that they might go out and do good deeds in the world. That is also a way to express benevolence to strangers.

Expand full comment

Yeah, this is totally fair. Prioritizing kids in Africa over your own is difficult to justify when there are stakeholders other than you involved (spouse, kids, etc.). It really is hard to convince some people to love you if you demand they accept your obligations to people they don't care about.

Expand full comment

> Prioritizing kids in Africa over your own is difficult to justify when there are stakeholders other than you involved

But is this even reasonable?

It seems insane to me to be less present around my kids so I can give more money to help distant strangers.

And if you start to ask why, you start to raise the question 'how much does a good parent matter anyhow', which is basically impossible to answer.

"Sorry kids, daddy doesn't spend time with you because that would be selfish of me, the literature suggests that my presence doesn't seem to matter here since genetics are a thing and nobody has found a way to accurately measure the influence of a dad at home without controlling for other variables like poverty, cheerio"

Scott once complained about lots of social messaging saying "YOU ARE BEING TOO AGGRESSIVE" which might be true for lots of people but certainly not everyone. "YOU SHOULD GIVE MORE TO OTHER PEOPLE THAT YOU DON'T KNOW DIRECTLY" is probably similar.

Expand full comment

I think some of the memes within effective altruism are a fantastic basic template to get one to think about one's own ability to contribute to the good in the world.

Psychologically speaking, it's easy to get annoyed by calls to charity if you never thought about your own limits on it, because it feels like you can only lose - either you give in and lose materially, or you refuse and lose morally, so there's a way in which it can feel like a sneaky trick (maybe? I don't understand psychology well enough to speculate on this, this is my armchair theory). On the other hand, if you have a threshold like "I donate 10% of my income to charity" (or literally ANY OTHER threshold that works for you), then the annoyance (largely!) goes away, because either the prompt falls into your budget for charity or it doesn't, you don't need to decide then and there what the moral/material trade-off is.

Just to make it clear how different to EA some of EA's general principles can be: One of my rules in life is the perfectly mundane "try and keep a 2EUR piece in your wallet and give it to quiet beggars in the street, at the rate of 1 piece per work week". This isn't some shocking amount (I wouldn't even normally bring this one up at all because it's so insignificant, but it's still a nice mundane example of the principle at work), but it's a great way to make me help others on a regular basis without triggering annoyance reflexes or some primal fear of getting exploited / manipulated (or whatever happens in the deep recesses of our minds when humans get defensively annoyed; as mentioned, I don't *really* know what exactly causes it).

Furthermore, if you have a *lot* of money to play with and you want to help people, having a whole community to help you decide where it goes is pretty nice, because choosing something can be pretty anxiety-inducing - if you get it wrong, you're wasting a ton of money that could have gone to good use. But also, you don't always and constantly need to ultimately agree with the community about it to reap this kind of psychological benefit.

Expand full comment

Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

A: You should donate more effectively.

Isn't this the crux, really? I have some sympathy with the critique that lots of effective altruist ideas are actually not surprising or interesting or really 'effective altruism' at all. Donating 10% (plus!) - check; working in an altruistic career - check. All this is taught by, for instance, mainstream Christianity... While acknowledging that not everyone can do these things, these basic ideas have been mainstream Christianity (and therefore a core Western idea) for centuries. The idea of the drowning child is successfully selling a much older idea to a public that hasn't had the ideas given to them otherwise.

So, as you say in the essay, you ask - why aren't more people doing it, then? The answer to *that* surely lies not in whether the ideas are around but in the question 'why don't people do good things they know are good'. This problem is everywhere, in all our lives, after all!

Carefully assessing the effectiveness of individual interventions with studies and statistics, though, isn't that the original contribution of effective altruism? And isn't that, alone, an extremely valuable contribution?

Expand full comment

I think the reason many people don't do this is that they're worried, e.g. they're living from paycheck to paycheck and something like "10% of your income" sounds very scary to them. Maybe they could do it this month, maybe not the next, but if there's enough uncertainty, even doing it for one month seems dangerous (what if you would have needed those savings the next month, because of an unexpected, large expense?). Those instincts can persist even if you're in a comfortable place where the chance of a financial problem actually arising becomes negligible.

It's pretty hard to feel like you're in a financially perfectly stable position. Heck, I'm on a Google engineer paycheck and I worry sometimes! I have no sane reason to. Granted, at the moment I'm living frugally compared to usual because I'm gearing up to support a partner who recently lost her job and who is still paying off a mortgage, and I'm cutting back on charitable efforts for that reason; but it's also pretty emotionally difficult to back out of the charitable efforts, and I'm not doing that as much as I should given the current situation.

Expand full comment

I have a wife, three kids, and two parents with end of life issues. I’m also a software engineer who is extroverted and charismatic while also being strategic and conscientious. If I focus more on work I could probably get myself to a role where I make several million a year at a big tech company. But my wife and kids and parents are pretty needy as far as my time.

It seems to me that EA says I should divorce my wife, let her raise our kids on her own, and then I should go whole hog in my career so I can donate even more more to charity.

What’s the EA argument that I should stay with my wife and be a good dad to my kids?

Expand full comment

The EA argument is "you should donate 10%, or consider applying to a software engineering role at an EA nonprofit, they pay pretty well these days." There's a deep moral philosophy question you're asking there, but EA has settled on the 10% threshold as a reasonable compromise that lots of people can probably do.

Expand full comment

I’m not sure how this works exactly. As a single guy with my income, sure. I could do that. But man, kids are expensive. I also have to account for my wife’s concerns about our finances. She gave up her (considerable) income to have more time with our family. A lot of EA writing strikes me as coming from single people who are, perhaps, out of touch with the time and financial demands of parenting. What if my kid had intense special needs? Should I just … drop them off for adoption so I can donate more to people studying AGI risk?

If my taxes were lower, it’d be a very different scenario. The government takes something like 50% of my income. Giving 10% would mean I need to pay all my family’s expenses on just 40% of my total income. How come this doesn’t seem to come up in EA circles? Has anyone done research into, say, figuring out what causes other people to donate more?

If religious people typically donate more to charities that help the poor, and educated people typically donate more to their schools, should EA encourage people to be less educated and more religious?

I think helping my kids by setting a good example as a dad “pays out” over multiple generations. It’s just not the kind of thing you can measure. I agree that we should try to do good in the world, and orient our lives around that.

But EA seems like it takes a punt on the central philosophical problem that most thinkers considered seriously - “what does it mean to live a good life” - and instead uses the easiest metric it can come up with, since “you’ve gotta have a metric.”

I reject that last premise, and think that people will naturally do the most good if they manage to come to peace with themselves, internally. But since most people are balls of stress and desire and worry and want, they are constrained in their total capacity to give.

In short, EA looks like a religion for rationalists. And like most religions, it’s a tower of assumptions with some good stuff at the base, and all the thought leaders hanging out at the top of the tower debating how to effectively get more angels on the head of a pin, because that’s what status competition does to intellectuals.

Expand full comment

Good news! Charitable donations are tax deductible!

And since you’re rich enough to be paying the top marginal income tax rate, that means the cost *to you* of donating is only about half of what the group you’re donating receives! You get to improve the world at a big discount *and* stop the government from taking so much of your money, all in one fell swoop!
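The arithmetic behind that "big discount" can be made concrete. A minimal sketch, assuming a flat 50% marginal tax rate and a fully deductible gift (both simplifications: real deductibility depends on itemizing and on local tax law, and `net_cost_of_donation` is just an illustrative name):

```python
# Hypothetical illustration: the out-of-pocket cost of a tax-deductible
# donation, assuming a flat marginal tax rate and a fully deductible gift.
def net_cost_of_donation(donation, marginal_rate=0.50):
    """The charity receives `donation` in full; the donor's taxable income
    drops by the same amount, saving donation * marginal_rate in tax."""
    tax_saved = donation * marginal_rate
    return donation - tax_saved

# A $10,000 gift at a 50% marginal rate costs the donor only $5,000.
print(net_cost_of_donation(10_000))
```

So at top marginal rates, the charity receives roughly twice what the donation actually costs the donor.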

Expand full comment

Yes, I am aware of this fact.

How about this: why you don’t you have the conversation with my wife, then report back to me how that goes.

Knowing nothing about our obligations or responsibilities or constraints, you’re telling me what you think I should do differently with my life. And then wondering perplexedly why people maybe don’t like your movement.

Expand full comment

Oh, I’m not telling you to do anything. Your money, your life, more power to you.

I’m just making the point that someone on the kind of income you’ve indicated you have absolutely can afford to make substantial charitable donations and also support a family. Even after tax and donations, the vast majority of families get by on way less than yours.

If you don’t want to give away your cash that’s your business. But you can.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

What kind of income do you think I’m indicating?

The vast majority of families do not live in the Bay Area of California, so your point on income is moot. Propose a budget for a family with three kids in Santa Clara, CA, with parents that need live-in help.

Do you count the money spent on help for my parents or taking care of my kids as charity? Or does it only count if it goes towards a bunch of nerds who have made a religion out of the orthogonality thesis?

This all ignores the latter point about having a partner with different values and beliefs about what is prudent. You either aren't married, or are married to someone who shares your values.

And to top this all off, you then retreat to a motte which says, sure, “I’m not telling you what to do,” just saying you COULD do things differently if you wanted to. What’s the difference? Why not tell me directly that you think I ought to give more?

I have to say your whole approach does not seem effective at persuading me of much. I still want to give more than I do, and I’ve written about EA before, in particular deworming charities. But your communication here reminds me of both EA and a priest telling me that I should go to confession more often. You have no idea about the complexity of the life I live and yet feel confident making claims about what I’m capable of.

I might say that you are capable of choosing to empathize with people who give less than 10% and think EA is silly - you simply don’t want to. But I’m not telling you who to empathize with, that’s your business!

Expand full comment

> How about this: why you don’t you have the conversation with my wife, then report back to me how that goes.

I dunno, why don't you have the conversation with your wife? How do you imagine the conversation going?

You: "Honey, I'd like to do something genuinely good, but it would be pretty significant drain on our income - I'm thinking of donating 10% of my income to a good charity. What do you think?"

Her: "I think I hate you, you son of a whore! How dare you do this to me! Now we'll starve to death on the streets, you fool!!!!"

OR maybe she'd just say, "Hm, well, let's do the math on that." Or "That sounds like a wonderful idea." It's pretty easy to reject something when you shove off responsibility for actually rejecting it on somebody else who isn't present.

Expand full comment

I’ve had the conversation with her. The “let’s do the math” on it makes it clear that this is nontrivial, and it also reinforces her belief that I am too idealistic and naive. I want to leave big tips at restaurants and help friends and family who are in trouble with gifts.

She tends to see me as overestimating how much money we have. Which is probably fair. Any advice?

Expand full comment

Dude, anyone who has a partner they love understands; don't worry about it. You're donating 50% of your income to the government and maybe half of your remaining energy or more on helping other people. You're a massive benefit to society. Thank you for existing. Be happy and proud.

If that doesn't do it for you,

Marriage is an implicit agreement (when done right) to share values and try not to change them too much. If after you've explained your feelings to your wife and she doesn't share them, it's fine to accept that as invalidating your moral obligation. Heck ask her what she'd feel okay with. 1% of income? Donate that effectively and leave it at that.

Expand full comment

Wow, thank you. I'm trying to avoid getting too wrapped up in what strangers on the internet think but this is honestly quite helpful.

Expand full comment

I am 79 and live in the Dominican Republic.

I give away about 80% of my modest (Social Security) income. I do this for selfish reasons.

The biggest part of my "charity" is paying the bills and providing cash for a Haitian family that I live with. In return they take care of me every day.

I also provide $40 per month for three other families that are friends of my primary family.

I'm very happy doing this, but I would like to have more income.

When are you going to examine my serious improvement on Neom?

Rodes.pub/LineLoop

Peter Rodes Robinson

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think Freddie's critique is more that EA isn't really all that different from any other approach to doing good. Those of us who want to do good (when that is indeed what we want to do) would like to do good as effectively as possible.

Even your ad-hominem-y Q and A's gesture to that point. You're criticizing your presumed interlocutor for not doing good as effectively as possible [please see my ETA below], with the understanding that that standard is what the interlocutor already agrees with, and with the further understanding that you and the interlocutor really do have the same foundation. But if we have the same foundation, the "commentary" you mention (i.e., the upper floors of the tower) is all there is to discuss about EA.

Okay, I'm making a few assumptions about your intention and the "understanding" you're working under. But I must ask, what makes EA distinct?

I suspect what makes it distinct is how EA'ers approach the problem of deciding WHAT is most effective or HOW to devote limited resources to good ends. It's probably also the CONTENT of what they as a group advocate or keep open for discussion. So if a very significant number of EA'ers really do promote killing all predatory species, or if the EA culture and approach are peculiarly friendly to entertaining that argument, then that's a legitimate criticism. (That said, I'll happily concede that those are big if's. I have no idea if they are correct. Again, I'm not at all familiar enough with EA.)

Maybe I'm misunderstanding things. I'm not at all well-read on EA. From what little I have read, I don't think EA'ers are wrong or bad. And I can think of a lot worse things to do than spending resources to prevent malaria in Africa. And while I donate to a local charity, I don't donate a full 10%, even though I could well afford to. And even though I'm fairly optimistic that charity does what it claims to do and does so as efficiently as possible, I haven't bothered to educate myself on what I need to know to do the assessment.

ETA: I said above that you're criticizing your interlocutor for not doing good as "effectively as possible." But I guess I realize you're not critiquing their effectiveness, just whether they make some sort of effort to do good. I'm not sure that changes the point I was trying to make, but I now realize I was misconstruing your point.

Expand full comment

I think the critics have huge egos and/or believe strongly in the Adam Smith argument of the selfish baker, and they think it's better to invest in themselves, morally. If the Wright Brothers had donated more, the world would have gotten fewer airplanes, later, and thus less total utility. A lot of people think they're Wright.

Expand full comment

I think that one of the assumptions that you are hiding is the assumption that charitable donations do more long-term good than market investment. This seems wrong in theory, because profit-seeking companies are more strongly accountable to the people they are supposed to serve. And it seems wrong empirically--if I look at the record, most of human improvement seems to come from profit-seeking investment. And if you ask me whether I can behave consistently with my view that market investment is a better use of money than charitable donation--I can! And I also donate blood.

Expand full comment

I absolutely agree that most human improvement comes from profit-seeking investment rather than charitable giving. However, I do feel the need to point out, there’s also an awful lot more of it.

I think it would be a lot worse for the world to lose all our businesses than to lose all our charities. But that’s a different question to whether I with my marginal dollar can do more for the world by trying to maximise the benefit to the world, or by maximising the benefit to myself.

Expand full comment

Right. It's hard to know what works best on the margin. But I think that because charities are not accountable to their intended beneficiaries, this reduces the chance that on the margin they are the right place to invest your money.

Expand full comment

What do you think of Give Directly?

Expand full comment

I prefer charities with a deep understanding of their community. Enough to know who is both needy in a particular way and who is likely to use charity to help address their needs. But I could be wrong about that--it could be that Give Directly is no less helpful to needy people than is supposedly targeted assistance from a local charity.

Expand full comment

Interesting! Not really the take I would have expected from a libertarian perspective.

Personally I generally trust people to know how to address their own needs better than a charity will. So as long as you’re getting the cash to people who don’t have much cash, it seems to me like a good way to avoid the “charity not responsive to its beneficiaries” problem.

And hey! Maybe they even use the cash to invest in a business!

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

You can never truly care about more than Dunbar's number of people, and attempting to do so is nothing more than an act of self-delusion.

Expand full comment

Maybe, but it's an act of self-delusion that can help a lot of people! If I end up in urgent care with a broken ankle or something, it hardly matters to me if the doctor who treats me really cares about me, thinks she cares about me but is deluded, or doesn't care--as long as I get treatment.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I don't expect the doctor to care about me, I expect her to care about my money, today and in the future, and about other people's money even under the circumstance that I have the freedom to speak the truth about her and thus affect her reputation. It is the beauty of free trade and free speech.

> Love—more generally, the sharing of a common end—works well, but only for a limited range of problems. It is difficult to know very many people well enough to love them. Love can provide cooperation on complicated things among very small groups of people, such as families. It also works among large numbers of people for very simple ends—ends so simple that many different people can completely agree on them. But for a complicated end involving a large number of people—producing this book, for instance—love will not work. I cannot expect all the people whose cooperation I need—typesetters, editors, bookstore owners, loggers, pulpmill workers, and a thousand more—to know and love me well enough to want to publish this book for my sake. Nor can I expect them all to agree with my political views closely enough to view the publication of this book as an end in itself. Nor can I expect them all to be people who want to read the book and who therefore are willing to help produce it. I fall back on the second method: trade.

> I contribute the time and effort to produce the manuscript. I get, in exchange, a chance to spread my views, a satisfying boost to my ego, and a little money. The people who want to read the book get the book. In exchange, they give money. The publishing firm and its employees, the editors, give the time, effort, and skill necessary to coordinate the rest of us; they get money and reputation. Loggers, printers, and the like give their effort and skill and get money in return. Thousands of people, perhaps millions, cooperate in a single task, each seeking his own ends.

> -- *The Machinery of Freedom*, David Friedman

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I don’t think you need to get very exotic before utilitarianism gives bad answers.

Even the base level “a random foreign villager’s happiness is a ‘better’ use of resources than your close kin’s” is already a bad answer.

Or rather it is a good answer if you are a disembodied spirit equally interested in all mankind. But that isn’t what anyone or any organization actually is.

Expand full comment

Since this is a somewhat spicy post, I hope you are up for a somewhat spicy retort...what’s the difference between the “tower of assumptions” model and “motte-and-bailey” model?

Expand full comment

It takes reading comprehension to notice that Scott isn’t claiming that the lower areas on the tower are also EA, but that the lower the level at which you disagree, the more productive discussions tend to go.

Expand full comment

The tower of assumptions says $SPECIFIC_CLAIM isn't what $ETHICAL_SYSTEM is really about, so let's talk about $LOWER_LEVEL, which we can agree on. It seems like the recipient would feel like they've been motte-and-bailey'd. How can the recipient tell the difference between "they're contributing to a more productive discussion" and "they're moving the goalposts on me"?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I would assume the lines about the Atheist saying they disagree with a translation of the Bible, when the problem is that they don’t believe in God, would have answered this question, as

1. The atheist clearly isn’t Christian even if they go to lower levels of assumption

2. In that example, implicitly it’s the Atheist who plans to move the goalposts later, since if the hypothetical Christian comes back and proves the translation was not in error, the atheist would say “Well, that’s fine and all, but I also don’t find the existence of god/historical veracity of the Bible/Christian doctrines on morality plausible”

Similarly, it’s pointless to say that EA as a movement asks too much at 10% when your actions reveal that implicitly you believe any donation to charity is too much; claiming so is misleading and leads to a lot of wasted effort.

I’ll also add that the mechanism by which Motte and Bailey works is to strategically equivocate between different definitions of a word / concept whenever it’s convenient, instead of, you know, making a large blog post about how these are different levels at which you can engage with a position.

Expand full comment

"I don’t think “kill predatory animals” is an especially common EA belief, but if it were, fine, retreat back to the next-lowest level of the tower!"

To me, this reads a lot like classic motte-and-bailey. Let's unpack. "EA" is the word/concept. "but if it were, fine" seems to indicate a strategic decision based on convenience. "retreat back" is the equivocation. I think this phrase is a decent representation of the whole, or at least is good enough that my initial, intentionally provocative, assertion is worth thinking about because even if they aren't the same, it sure feels like they rhyme. I don't think your comments are doing a good job thinking about or engaging with the point, so I'll respond to myself with the kind of feedback that might be helpful.

The main difference that you are highlighting is that m&b thinks you will agree at the retreat back position, but TOA thinks you won't (or maybe just isn't sure). But then again, if "we all" agree with the drowning child core, and if disagreement on intermediate levels is ultimately irrelevant (which it probably is), then ex ante expectations about agreement are not a distinction between the two and we are back to struggling to find daylight between TOA and m&b.

Expand full comment

It’s hard to argue about vibes, but I now see how someone can come to this conclusion.

The “ultimately irrelevant” part is the rub. In the motte and bailey, it’s ultimately irrelevant to the Christian / EA / feminist, but in the article, it’s ultimately irrelevant to the Atheist / EA critic / woman-respecting non-feminist. The people in both camps want to stay as high as possible because of impressiveness, and I agree this is reason enough to draw a comparison! Where I disagree is that you can distinguish between these two cases.

I’m sorry that my original comment wasn’t useful and I apologize for the initially aggressive tone of my reply.

Expand full comment

I’m definitely on the fence about the “ultimately uncertain” caveat (alluded to in the parenthetical above), and need to think about that some more. I want to have a bright line distinction but at the very least it is hard to come up with one. Appreciate the genuine dialog. Cheers

Expand full comment

I didn't go there in my own comment above, but I, too, thought it resembled a motte and bailey. Or maybe. I don't know enough about EA.

Expand full comment

Sometimes I have a hard time even believing that EA is actually a unified or consistent thing.

Why 10%, why not 10.5%? My amp goes to 11, and I might have heard of the Catholic social justice concept of the universal destination of goods long before EA became a "brand".

Didn't Swift's "A Modest Proposal" put an end to serious consideration of utilitarianism?

Who doesn't want to be more "effective"? Even the soup kitchen worker will choose a ladle rather than a quarter teaspoon as a soup distribution tool.

Expand full comment

One of the things I like about this blog is how you share intelligent and well-written criticism of yourself and your ideas. That level of intellectual honesty is rare.

Meanwhile, why is there so much criticism of the idea that altruism should be effective? It says a lot about the critics.

Altruism as such an important, deliberate part of life is a very Christian idea. It has spread everywhere, as being an idea that should not be questioned. I think if my Hindu ancestors from some time ago returned, they'd be shocked by this uncritical obsession their descendants have for altruism. I mean, it is probably ok, but is it this important?

Why can't it be questioned or at least be improved?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

"Meanwhile, why is there so much criticism of the idea that altruism should be effective?"

Because much of the current emphasis of EA comes across as "Forget your stupid pet causes which are ineffective and worse than useless, donate to *our* pet causes instead!"

Pretty much everyone can agree that mosquito nets are a good idea, because this is not that far removed from charitable work for the Third World conventional church and secular charities do. When it drifts into "animal suffering - are insects sentient?" and "AI is the biggest threat to our survival", then we're getting into "donate to my Charity For Small Fluffy Dogs (totally coincidentally, I have six small fluffy dogs and this charity will help me)" territory.

EA can be open to the Mrs Jellyby Factor: so busy organising letter-writing campaigns to help the poor Africans that they don't notice their own child falling down the stairs and splitting his head open:

"But no, they knew nothing whatever about Mrs. Jellyby. "In-deed! Mrs. Jellyby," said Mr. Kenge, standing with his back to the fire and casting his eyes over the dusty hearth-rug as if it were Mrs. Jellyby's biography, "is a lady of very remarkable strength of character who devotes herself entirely to the public. She has devoted herself to an extensive variety of public subjects at various times and is at present (until something else attracts her) devoted to the subject of Africa, with a view to the general cultivation of the coffee berry—AND the natives—and the happy settlement, on the banks of the African rivers, of our superabundant home population. Mr. Jarndyce, who is desirous to aid any work that is considered likely to be a good work and who is much sought after by philanthropists, has, I believe, a very high opinion of Mrs. Jellyby."

…We passed several more children on the way up, whom it was difficult to avoid treading on in the dark; and as we came into Mrs. Jellyby's presence, one of the poor little things fell downstairs—down a whole flight (as it sounded to me), with a great noise.

Mrs. Jellyby, whose face reflected none of the uneasiness which we could not help showing in our own faces as the dear child's head recorded its passage with a bump on every stair—Richard afterwards said he counted seven, besides one for the landing—received us with perfect equanimity. She was a pretty, very diminutive, plump woman of from forty to fifty, with handsome eyes, though they had a curious habit of seeming to look a long way off. As if—I am quoting Richard again—they could see nothing nearer than Africa!"

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

My maternal grandfather came from a poor family. His parents still saved hard and sent him to London to take the ICS exam. The British had made it that hard for Indians to take the Civil services exam. Once he returned, having passed it, his parents and 5 siblings (and eventually their spouses and kids over time) moved in with him. He was now making more money and this was normal. He took care of them all, thoughtfully. His wife, my grandmother, never complained. It was duty.

That is how society was structured. Your family and then extended family first. There was no concept of sending money to any causes. ANY causes. The Hindu temple doesn't demand anything. You're not guilted into donating to it.

Today things are more Westernized. They try to donate to causes. I don't think that has been a positive change. Your own parents and extended family should come first, in my view, for a healthy society.

Expand full comment

Altruism is hardly (only) a Christian idea. The tradition of donating a fixed percentage of your wealth to the poor is thousands of years old, and has manifested in Islam, among other religions.

Donating a small percentage to charity also does not preclude the fact that your parents and family should come first. I know many people who donate a small percentage, and do not see any change in their lifestyles or their ability to help their extended families.

Expand full comment

Well, resources are finite. Committing 10% of your salary to a church is non-trivial. You could put that into retirement savings instead, or into the account of a poor relative for their future.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Littlewood's law suggests that the news should be ignored: in a world of 8 billion people, all with their hot takes, and with the news media's bias towards interesting, controversial, and dramatic stories, we should discount news coverage of virtually anything, and that includes EA.

Link here: https://www.gwern.net/Littlewood#:~:text=At%20a%20global%20scale%2C%20anything,%E2%80%8Bnetworked%20global%20media%20covering

PS: EA is to Charity what Capitalism is to Economies or Democracy is to Politics: a flawed system of humans that nevertheless outperforms everything else by orders of magnitude. Or: "the system is imperfect and always will be, and flaws can be removed, but it's still more useful than any system yet designed for charity."

Expand full comment

I do a lot of volunteer stuff with career development to help people do resumes and prep for job interviews. Have a pretty good track record of getting people on career tracks, although my reach there is in the low dozens. Embarrassed to say that at present I only give a few percent of my income to normal down to earth charities. Posting mostly just to say: it feels good to do something good for someone right in front of you that helps them beyond the few hours you spend on the effort. In case anyone else is similarly inspired.

I like the EA movement overall, but somewhat align with DeBoer on it. Although I also have my own science-fictional/fantastical ideas that I’m pretty sure give other people eye-rolls so I don’t judge anyone too harshly. It seems the highest utility thing you could do to help the world would be to “Fix the things that fix things” or in other words inspect, study, and enhance the fundamental social mechanisms that formalize problems, create solutions, and assign resources, so I think I’m pretty strongly aligned with EA on that front.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Glad you're addressing this but still missing pieces.

"Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country, in some kind of systematic considered way"

-I am giving way more than 10%. It ends up in a savings account so I can start my own project in the future. Why are future poor people discounted more than present poor people?

-What if the poor person in the future I want to help is myself?

-----------------

"Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country, in some kind of systematic considered way"

-I am giving way more than 10% to help poor people. The poor people in question happen to be my own children. But helping those poor people doesn't count because the unwritten assumption is that the help must not overlap with existing expectations, right? EA is BETTER than everyone else doing the "minimum". Some QALY are better than others, the ones that reduce your guilt and/or give you feelings of social status are better.

-------------------------

"Helping people"

-I am helping. My wirehead project will remove all feelings of suffering, the side effect is that no one has kids anymore. The QALY of future potential people can't be measured.

-Wait, helping is removing suffering AND helping them reproduce or not reducing reproduction? How many kids are good enough? Help that reduces 4 kids to 2 kids is okay, 2 kids to 0 kids is not?

-Now you are saying there are considerations BEYOND QALY?

---------------------

"Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer.

A: Are you donating 10% of your income to poor people who aren’t in those exotic philosophical scenarios?"

-If one rejects the basis of utilitarianism, then it always gives the wrong answer. It isn't "less basis", it came after rejecting all moral theories that came before. "Every philosopher that came before us just didn't have the common sense that we do."

-----------------------

"We should...." is just "foundational".

-This isn't foundational, this rests on unspoken assumptions. Just be more honest.

You can't get an ought from an is. Using "we should" is either:

1. A claim to objective morality (which needs grounding).

2. Dishonest shaming language used to modify the behavior of others.

3. An inaccurate statement about one's own subjective feelings.

Be honest enough to say that you just subjectively feel compelled, there is no such thing as should without objective morality.

Expand full comment

Surely rationalists have a name for the fallacy where one dismisses valid criticism because the critic fails to meet arbitrary criteria?

Expand full comment

I think either "ad hominem" or "tu quoque" would be applicable, depending on the circumstance.

Expand full comment

I was going to make a joke about being so low on the tower I am only helping myself, and that you're welcome. But, and I suspect most of your readers don't have to worry about this, the invisible foundation is really, seriously yourself and obviously you should take care of yourself first, like putting on your oxygen mask first in an airplane crash. It might be obvious, but because it's invisible I feel too many people miss it.

Expand full comment

Yes. I also think this is a "first world problem".

Expand full comment

LOL, the last one made me laugh. People can always find an excuse. Someone told me this once: “Feel universally, think globally, act locally.” Give money or time to your local soup kitchen or no-kill animal rescue. Get a job with the ARK or whatever local organization serves the developmentally disabled in your area. Or get a job at Hospice. Get on the local library board, or school board, or zoning board or water board - wherever your expertise and energy can help counteract the forces of greed and denial that are holding us back from dealing effectively with the very real problems the human race is facing here. Give money to Planned Parenthood. Volunteer at Habitat for Humanity. Deliver Meals on Wheels to shut-ins. Put together a crew and go around winterizing the houses of the people who use Meals on Wheels, for free. Get together with your neighbors and spend a day each week picking up all the trash on your block. Lots to do. Lots to do…

Expand full comment

More effective than giving to poor people is giving to institutions that are trying to change the institutional frameworks. The main obstacle to poor people getting richer is almost always government policy.

We should be trying to make more governments like Hong Kong in the 20th century: strong rule of law, strong property rights, low taxes. That is all that is required for poor people to pull themselves out of poverty.

Expand full comment

You are right. The problem absolutely is with the foundations. We should help some people, sometimes. Typically, those we are actually responsible for (our children, family, and, say, patients, students, soldiers, depending on our role), and occasionally, those that come across our path, like, a drowning child. Nothing follows from this about some kind of mandate to "strongly consider how much effort we devote to this". This obligation does not exist.

Expand full comment

The old phrase goes, "one man's modus ponens is another man's modus tollens".

For what it's worth, I agree with you that if the tower were logically sound, it would be better to tear down the foundations than climb to the top. Charity is good. It is not obligatory.

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment
Aug 24, 2022·edited Aug 24, 2022

"But beyond that, you might wonder why the atheist didn’t think of these things. Are the translation errors his real objection to Christianity, or is he just seizing on them as an excuse? And if he’s just seizing on them as an excuse, what’s his real objection? And why isn’t he trying to convince you of that?

This is also how I feel about these kinds of critiques of effective altruism."

Having seen some atheist arguments which are exactly this ("that word is translated X but it should be translated Y!"), I see you.

Actually, what this makes me think is that Effective Altruism is going through its own version of the Donation of Constantine. Just as every critic of the Church/Christianity likes to blame Constantine for Ruining It All and turning Christianity into just another state body:

https://en.wikipedia.org/wiki/Constantine_the_Great_and_Christianity

So EA is going through the same growth phase. It's no longer a bunch of scrappy nobodies following a weird philosopher, it's putting institutions in place. Heck, it's *got* institutions to put into place. Rich and influential people are getting involved. EA is even throwing money at political campaigns. It's becoming mainstreamed into general society.

Speaking as a Catholic, welcome to the consolidation phase. And be prepared for ten tons more of the same kind of criticism about selling out, about "but you're not doing anything new, and what new stuff you are doing is weird and strange", and "I liked you better when you were a bunch of scrappy nobodies challenging the status quo".

Be prepared for the accusations, which do seem to be covered by the "Q: Come on, effective altruism doesn’t even emphasize the “donate 10% of your income to effective charities” thing anymore! Now it emphasizes searching for an altruistic career!" objection, about having lost the way and become tied up in administering the institutions:

"Woe to you, scribes and Pharisees, hypocrites! For you tithe mint and dill and cumin, and have neglected the weightier matters of the law: justice and mercy and faithfulness. These you ought to have done, without neglecting the others."

Expand full comment

I feel like I've got to stand up for the atheists a little bit. The reason these translation errors come up is usually not just pedantry (though there's lots of that too, don't get me wrong). Rather, it's because the Christian says, "this Bible verse says X, which clearly means Y, which is why we should stone gay people to death, Amen". One possible response to this statement is, "actually the Bible is not a moral or legal authority, it's just a book of historical fiction, so you should maybe take its proclamations with a grain of salt" -- but most Christians would not even understand how to parse that sentence. To them, it is *obvious* that the Bible is the first and final word on Life, Universe, and Everything. However, while such faith is impervious to criticism, it could be argued that the Bible was written down and translated by humans, and humans make mistakes -- and so, maybe the verses in it do not mean what one thinks they mean.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

You're really talking about evangelicals here. Catholics (among others) have a considerably more nuanced view of the Bible, and the question of how much is metaphor, and how context shades meaning, is not only well understood, but often discussed and pondered.

Expand full comment

As the quote attributed to St. Augustine goes: "The Scriptures teach us how to go to Heaven, not how the heavens go".

Expand full comment

You guys sure seem to disagree a lot on how to go to Heaven according to scripture, though :-)

Expand full comment

Well, no more I think than atheists disagree on the correct governing ethical principles, or constitutional lawyers disagree on what exactly the Second Amendment says. It's kind of human nature, no? We're a disputatious species.

Expand full comment

Agreed, but we atheists do not have the benefit of being guided by an (allegedly) infallible God...

Expand full comment

There are thoughtful atheists out there, and genuine problems of translation. The thing is, this goes back to something Scott posted ages ago about "a whale is not a fish, checkmate theists" type of argument.

That only works if you are talking to a hard-boiled literalist inerrantist. Most other Christians will go "Dude, the point of the book of Jonah is forgiveness, mercy, and not being a dick about 'but I really want those Ninevites to perish since you told me this would happen'." Arguing over "but if this really is God's word, why didn't God tell the writers that whales were mammals? hence no God!" doesn't cut any ice unless you are the type to be cast down by "what, you mean that word should be A instead of B? Well, now I'm going out to purchase slaves and crush my enemies under the wheels of my chariot, instead of all that dumb 'all men are created equal' stuff you just disproved for me!"

This kind of "that word does not mean what you think it means" parsing gets brought up a *lot* in inter-denominational disputes, especially between the more liberal and more traditional wings both inside the same denominations and between different denominations. Take a gander at the Isaiah Question and The Translation of Almah 😁

https://en.wikipedia.org/wiki/Almah

Generally this goes along the lines of "first, that verse doesn't apply where you claim it applies and second, it doesn't mean Mary and third, even if it does, it just means Mary was a young unmarried woman, not a virgin SO NO VIRGIN BIRTH" (and depending how far they want to go, e.g. Unitarians or whatever, NO DIVINITY OF JESUS, NO SON OF GOD).

Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Well yes, in general, I agree. In my experience, many Christians -- arguably, most -- treat the Bible as sort of a collection of metaphorical and allegorical guidelines. Their version of God is some sort of a nebulous abstract entity that exists outside of the Universe in some ineffable way, and whose presence will mostly be felt only after you die. But these kinds of Christians rarely get into debates with atheists, because they understand that faith cannot be communicated -- you either have it, or you don't.

Rather, the kind of Christians who tend to engage atheists are either professional apologists (thus, charlatans out to make a buck), or considerably closer to the fundamentalist end of the spectrum. They are likely to believe (or claim to believe) that the Bible is literally true and "God-breathed"; in which case, asking "in that case, how come there are so many contradictory translations of it out there?" is a valid retort.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I don’t know if this is an uncharitable take, but I’ve always thought that the Real EA Response to any sort of philosophical questions etc is that “you are not really virtuous enough to be qualified to even have an opinion on this, but I guess we can humor you sometimes, we do enjoy philosophy,” and this essay feels along those lines?

In any case, I think donating 10% of your income to down-to-earth charities that help poor people right now is a commendable serious and virtuous act.

I also have what seem to me to be genuine philosophical confusions about what is good to do etc, which feel sort of EA-adjacent , but I don’t think EA (at least online EA) is the right space to think those things through, and maybe it’s not supposed to be (though I do think it sometimes tries to be?).

My own EA actions are nonzero but definitely not 10% level.

Expand full comment

If your foundation has "should" in it, you are nowhere near the bottom of your tower.

Expand full comment

From your last article, because I think you should pay attention to what you've said, and realize that you're engaging in this article from the opposing perspective:

"I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice."

Expand full comment

I’m trying to imagine the personal situations Scott has found himself in recently that have led to this. Pure speculation but it feels to me like he is having trouble getting the amount of intellectual distance he likes to have about issues, partly due to being personally IRL immersed in a group of people who are thinking about it and talking about it frequently.

He’s good at recognizing the role of intuition and emotion in individual decision-making. Medical professionals should be. But he’ll do medical arguments with the medical people who all already buy in to needing to recognize the role of emotion. With EA he now has to argue individual decision-making with more machine-oriented people (perhaps-engineers etc) who are used to analyzing nonliving systems instead. He may get boxed in to having to argue for an irrational territory of the individual and I think that will seem weird to him, being on the more “rational” end of the medical spectrum. Also it wouldn’t be fun in the context of people whose work does not demand recognition of it.

I liked the seagulls post a lot. The sketches of his decision processes are valuable.

Expand full comment

There’s an argument as to whether local donations are “charity” in the same sense as donations to distant causes. If I want my community to have a Museum of Z I can’t buy it off a shelf, I have to give time/money/both and probably involve other people to make it happen. It may look like charity in that money changes hands and I don’t get money back, but another model has it as simply a more abstract level of community participation. Me buying vegetables at the store (giving the store money and getting vegetables back) is not charity, it’s an individual transaction, and the museum is a group transaction.

If the museum to me really is a charitable appeal equal in some way to international hunger then I could approach it as charity - but it isn’t necessarily so.

I think local philanthropy in some communities is a clique. There’s a lot of sociology to be done around who is “supposed” to be involved or give, and where and why. EA interrupts assumptions about wealthy techies being awful technocrats. It also politely asserts that maybe (your/my/etc) pet projects aren’t all that logical (which people love to hear) and asserts that wealthy techies are going to explode the system yet again and be charitable THEIR way. It pokes the bear of society’s resentment toward wealthy techies. Then people have to reach for reasons.

Also international charity developed a bad reputation as paternalistic, ineffective and probably racist when done by NGOs and governments. Some people will object to new, energetic, non-NGO/govt efforts as again being paternalistic etc. Think globally act locally.

The reputation is not always deserved but that’s complexity which is irritating to some.

Expand full comment

I think this has changed my mind about EA. Well done!

Have you heard of PURPLE crying? (Stick with me.) When I had my kids it was talked about in every new parent class; there were posters all over every healthcare facility we visited; we got flyers in the mail. I mostly rolled my eyes. Then I had a chance to chat with a nurse that was involved in the campaign. “Can you explain to me how this is different from normal colic?” “Oh, it isn’t. But we found that by making it sound like some new research finding, people took more seriously that this is a normal phase and not something to fix and so were less likely to shake their babies.” Ah. This moment feels the same to me.

Am I reading this right? Is it reasonable to view EA as a PR campaign for the ‘obvious’ need to give and to strive to give well?

Expand full comment

That's a great way of looking at it!

Expand full comment

If that's all EA is (or most of what it is), then Freddie's criticism is right....and also besides the point.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

The comparisons with motte and bailey don't seem to make any sense to me. Motte and bailey is when you defend X and then, if someone challenges you, switch to defending Y while making it sound like a defense of X. Scott is saying that if you disagree with the higher parts, you should give up on the higher parts and do something else that is consistent with the lower parts. That's not motte and bailey. That's letting the attacker live in the motte indefinitely, which makes it no longer a fallacy.

Expand full comment

Scott is explicitly not allowing his interlocutor to "live in the motte indefinitely." The motte, the lowest level, is "we should help other people." Scott is only content to let people stop at "donating 10% of your income is obligatory", which is a level up. Plenty of people have stopped climbing by then (correctly, imo; it's good to do, but not obligatory).

And honestly, he's not even content to let people stop there. Look at the final question again, where his interlocutor commits to giving 10%. His words to that person are "you should donate more effectively". He's trying to drag them up the tower again.

This is a *textbook* M&B.

Expand full comment

No?

If I say feminism is about women being people, then you say "but I believe that," and I say, "okay, you should also believe in widespread access to contraception," then that doesn't mean that I'm hiding in a motte and tricking you. I am presenting a base-level belief, which we are in agreement on, and am attempting to argue you into the next step of that belief. This is part of attempting to change one's mind.

Motte-and-bailey is the most abused fallacy ever. It's just used for a million fallacy fallacies. "Well, you said A, but you also said B, motte and bailey discovered, argument destroyed!"

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Take the following statements:

1. "Feminists believe women should be in control of their bodies, rather than their husbands or other men."

2. "Feminists believe marital rape is bad."

3. "Feminists believe abortions should be legal through at least the second trimester."

Taken in isolation, each of these is, I think, an accurate description of a movement. A motte-and-bailey is a form of equivocation which occurs when someone shifts between different levels, attacking someone who disagrees with one of the higher levels as if they were disagreeing with one of the lower ones. Here's an example:

Person A: "I think an abortion at the 24th week is too late; at that point in development, there is another person involved, with their own body, whose life is being ended."

Person B: (based on #3 above) "A feminist wouldn't say that." (equivocating with #2 and #1 above) "So you don't believe in women's bodily autonomy. Is marital rape fine to you, then?"

Person A: "Wait, what? First of all, no, and second, how is that responsive to my point?"

Is that happening here? Well, let's take the very first Q-and-A in the article.

Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

A: Are you donating 10% of your income to normal, down-to-earth charities?

I think it's totally fair to unpack this as:

1. "Effective altruists believe it is good to give to the needy."

2. "Effective altruists believe people should give 10% of their income to charity."

3. "Effective altruists believe people should give as much as they can, principally towards animal welfare and X-risk from AI."

Person A: "I think donating to 'AI X-risk causes' isn't good, I don't buy that we're in for fast takeoff and I don't think the existing causes are making meaningful progress on the issue."

Person B: (based on #3 above) "Clearly you're not an effective altruist." (equivocating with #2 and #1 above) "So you must disagree that altruism is good. Are you even giving 10% of your income to normal, down-to-earth charities?"

Person A: "Wait, what? First of all, yes, and second, how is that responsive to my point?"

These look pretty parallel to me.

Expand full comment

He is not saying that, because you are not an effective altruist, you are therefore not giving money to charity. He is asking whether you are giving money to charity, to establish where the point of contention is.

Equivalently, let's imagine an interlocutor saying, "Hey, I don't agree with feminism on X." "Do you agree with feminism on Y?" This isn't saying X = Y. It's just asking if you think Y thing. You have to ask if the other person thinks Y thing, so that we can determine where our disagreement begins. Here, I have to determine if you actually care about charity at all. And yes, I actually *do* have to determine that, because a bunch of people in this very comments section actually think charity is evil! Probably the largest concentration of them, ever!

Expand full comment

"He is asking if you are not giving money to charity, to establish where the point of contention is."

Except he's not actually responding to anyone, because it's an FAQ aimed at a hypothetical reader.

Expand full comment

Asking what your interlocutor *does* and what they *believe* are two different matters. All that matters here is what they *believe*. You can believe that charity is important without giving a dime...as long as one of the following is true (with possibly other reasons):

* You believe that there are other things more important that must be satisfied first (and haven't been)

* You are giving time and effort to charity (not money)--many people believe that giving money to an organization is the laziest, least effective form of giving. They may be wrong or right depending on your definitions of effective (which may differ), but it doesn't mean a darn thing about their belief.

* You don't see how it's relevant at all and so refuse to answer.

It just comes across as judgmental and hostile.

Furthermore, if "gives 10% of your income to charity" is a requirement for being an effective altruist, that's *way* higher up the assumption tower than simply "helping the poor is good." So it's an inapposite response to begin with.

Expand full comment

While I don't agree with the commenters who suggest Scott should post the full 'spicy' essay, I'm very glad he posted this, both because it's really helped me to crystallise my thoughts on EA and the nature of ideological movements in general, and also because it's convinced me to get off my arse and donate 10% of my income to charity.

As someone who agrees strongly with the bottom 2 boxes, and a lot of what's in the middle one, I remain pretty suspicious of EA (as the movement which generally presents itself in reality under the EA banner - which I think is pretty reasonable, since as others have pointed out, it's a motte-and-bailey argument to defend the upper floors of the tower by resorting to support for the lower, not that that's generally what Scott is doing), for 2 main reasons:

1. EA generally seems to regard Peter Singer's philosophy as gospel truth, in a pseudo-religious way that goes beyond the basic utilitarian-ish thinking that motivates the bottom pillars. This is off-puttingly dogmatic in itself, but it also leads to both a focus on animal welfare which comes at the expense of human welfare (which, as what Singer would call a 'speciesist', I strongly reject in a society which already contains so much human suffering), and also the demand that all people regardless of physical or temporal distance should be equally considered in personal ethical calculations, which sounds good but ends up leading to weird and potentially dangerous repugnant-conclusion-esque ideas. I agree that people should be more utilitarian in their thinking, but allowing a *small* proximity decay factor in utilitarian calculations leaves you with ~95% of the ethical fruit while largely proofing you against most of the wackiness (note that concern over x-risk is still very much valid here).

2. I think much of the 'bad smell' the EA movement has among ordinary people, including many commenters here, is down to the fact that the movers and shakers of the movement all tend to be not just very wealthy and educated, but a weirdly specific sort of Western, largely anglophone, socially privileged, futurist, wealthy and educated person (I have resisted just saying 'Bay Area people' here, but it's emblematic for a reason). There's no sin in being such a person, but when a movement like EA - which ought, by its foundational principles, to be a broad-ish church - becomes monopolised by such people, their culture takes hold in a way that both restricts the movement's perspective and makes it alienating to everyone else, compounding the problem. One example of a negative outcome of this is the experience most ordinary, curious and moderately lower-pillar-EA-inclined people will have after encountering 80,000 Hours' promotional material, which claims to want to help everyone align their career with more effective utility, but in practice it becomes obvious pretty quickly that they're mainly interested in recruiting Ivy League/Oxbridge graduates to work on causes célèbres within the 'EA set' (one cannot help but notice this helps perpetuate the socially elevated and particular nature of that set), while us mere peasants who make up most of the population get 'oh, give to the LTFF I guess'. I get that some people are higher-impact targets than others, but this feels like lost utility to me. Lots of ordinary people, like myself, would love to work in a way which benefits society more if only there were better frameworks to help us do it.
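The 'small proximity decay factor' from point 1 can be sketched concretely. This is purely a hypothetical illustration: the `weighted_utility` function, the exponential form, the decay rate, and the distances are all invented numbers, not anything proposed by EA or by Singer.

```python
# Hypothetical sketch: discount the weight given to someone's welfare
# exponentially with their distance from the actor. All parameters here
# are made up for illustration.
import math

def weighted_utility(utility, distance_km, decay_per_1000km=0.05):
    """Return utility discounted by an exponential proximity decay."""
    return utility * math.exp(-decay_per_1000km * distance_km / 1000)

# The same benefit, next door vs. 10,000 km away:
near = weighted_utility(100, 1)       # ~100.0 (essentially undiscounted)
far = weighted_utility(100, 10_000)   # ~60.7 (still most of the value)
```

The point of the small decay rate is visible in the numbers: distant strangers still count for most of their undiscounted value, so famine relief abroad remains compelling, while pathological conclusions that depend on treating arbitrarily remote (or future) people as exactly interchangeable with neighbors lose their force.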

Expand full comment

"Lots of ordinary people, like myself, would love to work in a way which benefits society more if only there were better frameworks to help us do it."

Well, for one job application I saw a while back on the 80,000 hours website, you can always go work in the staff canteen for one of these organisations. Even lowly worker ants like us can contribute by making sure The Important Ones are fed and watered so they can work harder at their Big Important High-Paying Jobs 😁

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think Freddie is correct in that EA can come across like "we are the first people ever to think of doing good, and how to do it right".

Yes indeed, if only there had been some earlier statement of principle about helping those not in your in-group:

https://en.wikipedia.org/wiki/Parable_of_the_Good_Samaritan

https://www.biblegateway.com/passage/?search=Matthew%2025%3A31-46&version=ESV

I think what ruffles people's feathers *is* the "You are already donating? You should donate more effectively", and the implication that EA is the only game in town when it comes to knowing how to donate effectively. 'Sure, maybe your group/organisation/church has been running missions to the Third World for the past hundred and sixty years, but *we* know better than you how to help those people because we are Smart and Use Science and Maths'.

Maybe you *are* better at knowing how to help most effectively, but sounding as if your movement believes that it is the first group ever to figure out the best way to do good and what the most important principles are *is* annoying. Especially when the emphasis pivots from "feed the hungry, clothe the naked, visit the sick and imprisoned" to things like "Existential Risk! If we don't solve AI now, the utilons of the future quadrillions of possible inhabitants of our light cone are under threat!" while the hungry go unfed and the naked go unclothed because this is *more* important than trivial local present needs.

Expand full comment

Every generation thinks they've discovered/invented everything and autodidacts are the worst about it.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

As a (not at all religious, BTW) Jew, it’s highly amusing seeing Christians snark about “those newcomers acting like they invented charity” and then quoting the New Testament rather than the religion that literally introduced the concept of tithing (“maaser” - giving one tenth). I would let this one pass but for the snark, and for this not at all being a rare occurrence.


Certainly Judaism and Islam have strictures regarding charity, and other beliefs such as Buddhism also incorporate similar principles. I'm not trying to claim "Christians invented charity".

But because I'm Catholic, I'm sticking to my own last, not taking it upon myself to say what the Jewish tradition does or doesn't say about any one practice. The broader point was the EA movement can sound like it's claiming that it did, indeed, invent charity (except under a different name, and doing it better).


As someone who isn't really an EA supporter or critic, who has never heard anyone mention EA in real life and has barely encountered it on the Internet outside of Scott's writings, my first thought is: are there really people THAT critical of EA? Is anyone out there saying "EA delenda est" or otherwise spending much time on the subject? Or are we really just talking about missionaries for EA trying to convert people to the cause and receiving these sorts of explanations for their rejection?

I suppose as I think about it, you could probably break subjective assessments of charitable causes down like this:

1. Causes that are actually a great use of resources -- your personal favorite charities.

2. Causes that are a pretty good use of resources but not the best. E.g. supporting the opera if you think having an opera in your city is important but not important the way saving lives is.

3. Causes that you see as beneficial, but barely so. E.g. supporting political campaigns of the party that you see as narrowly the lesser of two evils. Or donating to charities that are highly ineffective at their supposed mission. Perhaps making the lives of factory-farmed chickens better if, all else equal, you'd rather chickens suffer less but you don't actually care at all about chickens.

4. Causes that have basically zero value but at least don't make the world worse. E.g. "sci-fi" causes that you think are meaningless. Or donating money to your alma mater to buy out the contract of a football coach so that your team can maybe win more games.

5. Causes that actually make the world worse. Like donating to the wrong political party or to the wrong side of the culture war.

My sense would be that critics of EA are mostly arguing that it does too much in categories 3-4, not enough in 1-2. But is anyone even arguing it doesn't do ANYTHING in categories 1-2, or that it's doing much in category 5? I'm not going to count basically Randian arguments of the form "all charity is counterproductive", which aren't REALLY criticizing EA per se.

Any of us could find tons of causes that fall into all of these categories. Why does EA get highlighted by some people? Maybe it's just that within a certain subcultural bubble, a large percentage of their peer group is actively involved, so they feel the need to develop strong opinions about why they themselves aren't involved.


It is fair to ask whether EA critics are sufficiently altruistic, but I suspect it is the ‘effective’ bit which attracts most of the criticism.

The Unorganized might have valid criticism of organized *religion*. City folks may object to aspects of *organic* farming. I think that’s ok and probably a good thing.


This post seems to just be endorsing motte and bailey arguments as applied to EA?


This. Thank you for helping me remember that term, which was escaping me, because I was feeling the same way.


Personally I prefer Field and Fortress, since it's hard to remember what motte and bailey mean. Plus it is alliterative.


I hadn't thought of that. Yes, I always get confused as to which is the motte and which is the bailey. I guess "motte" resembles the word "moat" (maybe they're related), and it's easier to defend if you're behind a moat. But that's a lot to ask of my (aging) mind.


Having contributed my share of criticism, let me say that EA is correct on this ( or our EA friend in the dialogues above, anyway):

"Then are you donating 10% of your income to charity?"

Stop criticising, if criticising is all you are doing, and go do good yourself. Or as it was put elsewhere:

"41 “Then he will say to those on his left, ‘Depart from me, you cursed, into the eternal fire prepared for the devil and his angels. 42 For I was hungry and you gave me no food, I was thirsty and you gave me no drink, 43 I was a stranger and you did not welcome me, naked and you did not clothe me, sick and in prison and you did not visit me.’ 44 Then they also will answer, saying, ‘Lord, when did we see you hungry or thirsty or a stranger or naked or sick or in prison, and did not minister to you?’ 45 Then he will answer them, saying, ‘Truly, I say to you, as you did not do it to one of the least of these, you did not do it to me.’ "

Aug 24, 2022·edited Aug 24, 2022

This is going to come across as curmudgeonly, but the mere fact that effective altruists spend so much of their time arguing about effective altruism and defending it from criticisms makes me feel like a) the altruism isn't really the point, and possibly b) the altruism can't be all that effective, either, because if it were, why would you need to spend so much time arguing about it? Listening to this stuff makes me want to take my money and spend it all on vacations that involve lots of fossil fuel, ripping holes in mosquito nets for sport, and gratuitous rainforest destruction.


By tower, I'm going to assume you mean something like Minas Tirith: a Motte and several concentric Baileys, right?

Aug 24, 2022·edited Aug 24, 2022

Judaism originated tithing - not EA. It’s in the Torah. Many religious Americans (Jews, Christians, Mormons, etc) tithe. The Torah prescribed other incarnations too, like Pe-ah, where farmers were required to leave portions of their fields for the poor. So EA can’t claim tithing, or systematic charity-giving, they just reincarnated it for atheists. And most ppl already give or aspire to give charity, and those who do are widely praised.

Where EA differentiates from these widely held beliefs and practices, is in its strict utilitarianism, and the specifics of where to give. That’s also what goes against most ppl’s moral intuitions and preferences. So THAT’s why the conversation starts at EA’s stranger components - bc the tenets you’ve laid out as the ground floor are so commonly held that they don’t define EA. The conversation/debate about EA starts with the particulars (malaria nets or whatever) b/c THAT’s where the EA branch breaks off from the “what everyone already believed for centuries” tree, so that is a better way to define EA.

Just like when ppl question religions, they don’t question the basic concept of charity - they question the religion’s particulars, such as “give it to the church.”


A mere quibble: I dunno who says EA "originated the concept of tithing," but the Giving What We Can FAQ says this: "Religious traditions across the world also ask their followers to not just give, but to give generously. In Judaism and Christianity this is made more explicit as a contribution of 10% of income, referred to as tithe.

The Giving What We Can Pledge also uses 10% as a benchmark number for giving generously."

So it ain't them making the claim, at least. Could there even be someone in EA who doesn't know this already?


It was Scott who implicitly claimed that EA owns these ideas when he put them at the ground floors of the EA ideology. I’m not accusing EA of plagiarism, I’m trying to explain why ppl’s questioning of it starts where it starts. And that’s bc no one associates the ideas of “give charity to help the poor” or “give 10% to charity” with EA. Probably 99%+ of ppl who tithe to charity in America have never even heard of EA. So the reason (imo) is the opposite of Scott’s argument - it’s not that they disagree w giving charity but use malaria nets arguments as a diversion tactic. It’s that they already assume ppl should give charity, have no issue with tithing, and the only interesting part is the particulars.


Addition to my below comment: It’s the same if I were to question Christianity on giving charity. I would start w something like “why is it moral to give money to the church instead of xyz?” - and that would not be a deflection tactic to try and back into me not having to give charity. It would be bc I already accept ppl should give charity, I just don’t accept they should give it to the Pope or something. It’s the same w EA. Like I just don’t know that he’s correct on why ppl question what they question ab EA.


Speaking personally, what got my goat was the impression I received from early accounts that EA was saying, in effect, "So you are feeding the hungry in your city? Ha ha, what a maroon! Don't you know you should be worried about the vast biomass which greatly outnumbers humans of insects, so instead sign up to fully support our 'don't swat that fly!' campaign instead!"

Unfair characterisation, yes, but the blithe "effectiveness means we're doing it RIGHT, unlike all those dumb-bells doing it wrong" attitude was very grating. Since EA was addressing an audience of young, secular, bright, educated liberals, then it had no idea that the tone came across as implied (or not so implied) condescension; why should the movement be aware of that? It was focussed on a group of "our grandparents never attended church, chapel or temple; our parents never attended church, chapel or temple; and we sure don't attend church, chapel or temple" people so comparisons with religious organisations never entered their minds.

They were just back-patting themselves about being smarter, because Science And Maths Approach, look at our graphs and surveys!, when it came to 'most bang for your buck' than the mass of their ordinary fellow citizens who just put money in a collection box for good causes. Still damn patronising and condescending, but not meant in a sectarian way 😀


I don't donate as much as I could. I feel a bit guilty about this since I could, with minimal inconvenience, be a larger help to the world (near or far) than I am, and the only reason I don't is I choose not to (or rather, choose to just let things lie). That others made the adjustment (and often more) I was too lazy to make causes me to feel morally inferior, sometimes triggering reflexive defensive notions.

80K Hours and Give What We Can are founded by the same guy - comments that posit a dichotomy between EA jobs and EA giving seem silly.


Never underestimate the value of woe. I think woe is far more powerful than the confections of word puzzlers. William Blake says much about this in his work on Job.


Did you not write the biggest takedown of motte and bailey arguments that exist on the internet? And yet your idea of tiers here seems to take the motte and bailey fallacy and turn it into virtue. Which, you know, fair enough, but it's worth mentioning.

Aug 24, 2022·edited Aug 24, 2022

I think that accusations of motte and bailey here have misunderstood the piece entirely. He’s not retreating to the motte: he’s pointing out that critics must either accept or reject the motte, not say that their disagreements with the bailey excuse them from considering the motte. (Or ignore the motte altogether since it’s inconvenient.) I think this is entirely fair and nonfallacious and establishing where the first link of disagreement is would be beneficial to any debate.


Other than the last line, which says "if you agree with me on anything, you're really an EA, just doing a shitty job of it."


POST THE SPICE. HOLD NOTHING BACK. CALL ME DUKE ATREIDES CAUSE I NEED THAT SPICE.


Objections:

1. The government already takes about 40% of my income "for the better good" - go get it from them.

2. I'm unconvinced that the work involved in determining which charities are the most effective in practice doesn't overwhelm the differences between charities with simple paperwork requirements. That is, the cost of determining how effective a bunch of charities are at scale is greater than the benefit accrued from donating to one charity over the next most effective. The cost of data production is non-zero.

3. Misery loves company.


1. "I was robbed, therefore I have no obligation to help anyone else ever again" doesn't ring quite as true as I think you'd like it to. (Though I expect there's a fair bit of support for smaller governments in EA circles.)

2. GiveWell's claim is that the difference in charity effectiveness is very large; is there a specific reason why you're unconvinced by their argument?


On #2 (which I agree with)--the burden also falls *most* on the smaller, more localized charities, which are frequently the most nimble and in tune with the local needs (and thus theoretically the most effective at actually changing things). It incentivizes the really big organizations that have the overhead necessary to handle the metricization demands.

It's just like adding red tape/compliance costs doesn't actually burden (relatively) large companies, which is why they're just fine with it in many cases. Because they already have the infrastructure to handle it, while it chokes off any possible competition. If you can't get much money because you don't have a proven (by the metrics) track record, and you can't get a (proven by the metrics) track record without a huge chunk of change just to do the record keeping...yeah. There's an issue.

Aug 24, 2022·edited Aug 24, 2022

OK, you got me. I'm going to stop helping people in any way, or at least commit to doing it as ineffectively as possible (maybe harm a person more every time I help one?), so you can't tar me with your incredibly wide EA brush.

(What I actually want is to be able to say "I don't agree with EA/I am not an EA", referring to the social movement that calls itself that. But if I think that it's possible to have goals, some of which may be altruistic, and that it's possible to be more or less effective in pursuing them, then you want to claim I'm an EA, which is not true in any sense that makes the term useful.)


> Do we hold a global health org to the same standard.

I believe EA absolutely does. GiveWell is a leading EA organisation, and their list of top charities mentions multiple times that they create this list from empirical data, not marketing. They aren't recommending giving generally to global health orgs; they are specifically recommending effective organisations that provide demonstrably high-impact-per-dollar interventions.

https://www.givewell.org/charities/top-charities

Let's agree that good metrics are table stakes and neither of us is considering donating to anything that doesn't provide excellent metrics of how effective it is. If you are relying on the organisation's metrics (rather than first-hand evidence gathering), then what advantage does the cause being local to you give in understanding the metrics the organisation provides?

I believe GiveWell's recommended charities provide excellent information about what they have done, how much it cost, and how effective it was. If I haven't convinced you of that but you are still interested, have a look at the report (a well-laid-out webpage, not a dense PDF) GiveWell produced on how effective they believe the Malaria Consortium charity to be.

https://www.givewell.org/charities/malaria-consortium#CEA

If we both agree that global and local charities can have their impact monitored, and we want to get the most bang for our buck, then I think we should be donating to global causes. The best anti-malaria charities can save a life for approximately ~$5000. That figure isn't saying that giving one child one bed net will save that child's life. It's baking in all the uncertainty about how many nets get used effectively, how many of those children would have been fine without one, and so on. Getting one bed net to one child costs $5, so they are estimating that saving one full lifetime of healthy life requires distributing about 1000 nets.

I don't think that any first-world cause is going to have anywhere near that level of impact. Assuming your tax incentives turn $5000 into $10000 and you give $1000 each to 10 homeless people, I highly doubt that will result in one full lifetime of healthy life being saved.
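The bed-net arithmetic above can be sketched as a one-line calculation. The figures ($5 per net, ~1000 nets per life saved) are the commenter's rough numbers, not GiveWell's actual model:

```python
# Back-of-the-envelope version of the comment's bed-net arithmetic.
# The figures are the commenter's rough estimates, not official ones.

COST_PER_NET = 5            # dollars to deliver one net to one child
NETS_PER_LIFE_SAVED = 1000  # estimated nets distributed per life saved,
                            # after discounting unused nets, children who
                            # would have been fine anyway, etc.

def cost_per_life_saved(cost_per_net: float, nets_per_life: float) -> float:
    """Implied dollars per life saved under the stated assumptions."""
    return cost_per_net * nets_per_life

# Matches the ~$5000/life figure quoted in the comment.
IMPLIED_COST = cost_per_life_saved(COST_PER_NET, NETS_PER_LIFE_SAVED)
```

The point of the calculation is that the headline cost-per-life already bakes the uncertainty into the nets-per-life term, rather than assuming every net saves a life.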

Summary:

Individual EA charities are chosen for being efficient, well studied, and working in a high-impact area, and I believe they are held to a much higher standard than most.

Assuming all other impact-measuring concerns to be equal, you can get much better bang for your buck with global health charities, because they are much cheaper problems to fix.

> If I follow your logic, I should be funding an initiative for the government of a West African nation to sue France

I don't see how my logic leads there. That is not a well-studied area that has been proven to have an impact and could benefit from greater investment. The purpose isn't to choose the project that has some chance of getting the most money into the hands of someone who will try to improve 'health in West Africa'. That could fail in so many different ways, and so is a bad investment. It's to give money to a specific organisation that has a proven track record of making an impact.


Donating helps assuage wealth guilt. Instead of spending money to reduce factory farming, invest in sustainable lab grown meat, so everyone can enjoy a nice juicy steak. Instead of attacking an issue directly, look for a method of creating a countervail to the issue which previously didn't exist. This is how you maximize Good.


This reminds me of the Christian video by LutheranSatire (which I won't link to) that has a similar style of rebuttal (but in song form), which I'll summarize as

Q: Jesus is OK with gay people and told us to affirm and accept them as they are.

A: Do you believe that Jesus is God?

Q: No...

A: Then I don't care.

(I disagree with LutheranSatire on LGBT issues but agree with this particular rebuttal.)


Are we taking it as a given that outside critique is invalid because it comes from outside? Because that's what this reads like.


I wouldn't say necessarily, no. But if you think Jesus is just some itinerant rabbi from 2000 years ago, then why care what he has to say in the first place?


I care because the person I'm arguing with cares. It's a relevant point in their moral system, even if it isn't in mine.


The question is, do you understand their moral system well enough to integrate it into your argument? My observation is that often when a person is debating one who has a different moral system, the person only knows enough about the moral system to score cheap talking points, but ignore the wider context, which their opponent uses to obliterate their arguments.

Aug 24, 2022·edited Aug 24, 2022

That's a fair point, but it doesn't mean it can't be done. The exchange above could continue-

Q: But you do believe Jesus is God, even if I don't. If Jesus wants you to affirm and accept gay people and you believe Jesus is God, you should accept that. Just because I don't believe Jesus is God and arrived at these beliefs for a different reason doesn't mean you can ignore the implication of your own beliefs.

PA: You're misinterpreting Jesus's words, what he actually meant was (...)

At least, this is how the discussion could continue in good faith, otherwise we're at the level of denying basic predicate logic because you don't like your opponent. There's always "I'm sick of this argument" to fall back on, but that's not really a counterargument, just a way of saying "I'm not willing to debate at this time."


The assumption by the Christian in this scenario will almost certainly be that the questioner is arguing in bad faith, and really only cares about normalizing homosexuality in all corners of society. If the Christian accepted homosexuality for some internal Christian reason that was logically incoherent e.g. "Jesus cursed a fig tree for not bearing fruit out of season, but gay people aren't trees, so he wouldn't curse them, I guess gays are awesome!", the questioner wouldn't correct the Christian's logic, he'd just be happy there was one more person who normalized homosexuality.

Likewise, if some EA advocate decided, for reasons of faulty logic that were nonetheless internally rooted in the belief system of Scott's EA tower, that EA actually required everyone to donate to organizations fighting for social justice in the specific way its critics thought was best, the critics would happily take their money.

You can test the outsider by suggesting that you accept the argument but moving a different direction than you suspect the outsider wants. Maybe the Christian says "ok, so I can love the sinner but still hate the sin, so homosexual acts are wrong but the people who do them are still loved by Jesus" and the outsider gives a "no not like that!" reply. I suspect EA proponents who said "ok sure, program X or person Z is kinda stupid, lets pivot to program A by person B" would continue to get flack if program A did not advance the critic's pet cause.


Sort of a postscript here, but in the example above, that's the hypothetical strawman platform constructed by the initial arguer in the first place. The person speaking for themself is the one who defends their position with a thought-terminating cliche, as if it was the best defense they had available. It doesn't speak well of their belief system if the hypothetical point they're defending against doesn't have a real refutation.


Is it less a motte-and-bailey, and more a-la-carte? EA may prefer some specific set of beliefs, but they also have suggestions for those who are less committed to the finickier tenets, which seems in line with the "effective"part. Per OP, donate or do SOMETHING.


I think you are conflating "effective altruism", the concept; with Effective Altruism, the movement. The concept is all well and good; obviously, I want my altruism to be effective ! But the movement itself is more than just a methodology that purports to achieve that goal; it is a prescription for specific causes you should donate to in order to be "effective". In other words, the EA movement doesn't just say, "we make our records transparent to help you optimize your donations"; instead, they say, "we ran this mathematical formula that is based on mumble mumble, and determined that the most effective way to spend your money is on AI risk and animal welfare, so that's what we'll be doing".

Additionally, the whole "10%" thing sounds like a setup for moving goalposts. If I say "yes, I donate 10%", the obvious answers are, "why not 20% ?" and "why are you wasting money on mosquito nets when you could be saving all of humanity from AI ?". Basically, the bottom line has already been written -- it says "donate to EA" -- and you're just working backwards from there.


Your friends are right. This is lazy and bad.

Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer.

A: Are you donating 10% of your income to poor people who aren’t in those exotic philosophical scenarios?

If consequentialism is false, that undermines the idea of an obligation to donate to poor people at all, even ones outside those specific situations. You are smart enough to get that. You are atypically uncharitable here.

You can think EA is overcriticized, and too deferential to its critics, and you can hate that. But “how much are you donating?” just isn’t a categorical counterargument.


I feel like most systems of virtue ethics and deontology also support charity to the poor. I don't see how consequentialism is required for that.


They support it as *good*, but not *obligatory*. It's the difference between "it is good to do this" and "it is a failing not to".


I feel like most consequentialist systems don't view good/obligatory as a meaningful distinction. Since 'duty' isn't a privileged concept in util, you're not obligated to do anything. Some actions are just better than others.

I think some folks used to deontological ethical frameworks interpret this to mean that util implies you have a duty to maximize utility, but it doesn't really say that. Util just says that that's the best thing you can do (for boring definitional reasons).

You could interpret util to mean that you have an obligation to maximize utility, but that isn't the *obligatory* interpretation ;)

Aug 24, 2022·edited Aug 24, 2022

Let’s say we’re both utilitarians. After talking it out carefully, we determine which actions are more welfare-maximizing than others and by how much. We agree actions are good to the degree they are welfare-maximizing. Then I buy myself a gold-plated yacht and a mountain of cocaine, occasionally sailing by starving children to honk the horn at them.

“I’m not doing anything wrong”, I say; “our moral theory doesn’t privilege the concept of ‘wrong’. I am, we agree, not doing very much ‘good’. But—so what? What then?”

Is this view really utilitarian? If so, utilitarianism is pretty empty. I think utilitarianism needs to come equipped with a concept of duty/obligation in order to have any bite.


I view util as more of a wrapper for deciding what things are good and bad. Where to draw the line on how hard you optimize is a hard question in its own right. I don't see a ton of value in saying that there is a particular line which is privileged as a duty.

It's all just blurry continuous piles of more or less goodness. Buying the yacht is less good than donating that money.

Did you violate your duty to humanity by buying the yacht? I dunno, that isn't a framing that seems helpful to me.

Would it be morally praise-worthy if you donated to an effective charity instead of buying the yacht? IMO, totally.

What do you get from the duty/praise-worthy distinction? I get some value in it from a legal society perspective of shaming people who don't satisfy what we decide as the minimum duties, but if we're already past that minimum bar I don't see the distinction as very relevant. That said, if the duty distinction helps you have a bigger positive impact, go for it! Who am I to judge?


I'm in the opposite camp. I reject completely the idea that I have a personal moral obligation to help other people (unless my own individuated voluntary action was, foreseeably, both the proximate and sufficient cause of the unjust predicament). However, since I know that most other people disagree with me and think we DO have such an obligation, I would prefer that those people who choose to donate to things aren't giving their money to international grifters with huge overhead or to political issue lobbyists who wear $10K suits and eat $500 lunches with powerful jerks.

In the tower metaphor, I think the foundation is shaky as hell, but that if you are absolutely positively convinced that you must build on it, build the best thing you can.


As far as I can tell, there are few comments here that directly reject the "we should help other people" part. I suppose that's predictable given the overlap between readers here and EA as a whole. There are a few, all of which sound somewhat libertarian. I'd like to go further and point out that it is philosophically possible to go further than zero on the obligation scale, down into the negatives where statements like "the statement 'we should help other people' is actively bad, misleading, and possibly evil."

I'm not a philosopher and won't make such arguments, but I feel it necessary to point out the range of objections. (Samples: "people are bad there should be fewer of them"; "people individually are ok but we are over-observing the universe and will cause it to collapse if we have more people observing it"; "neither animals nor people are intrinsically good, but clearly the existence of people has damaged the biosphere and everything would be better if we were trending towards zero humans -- helping people is insane and feeling good when you do so is just an evolved mechanism like feeling good when you have sex"; "the world needs more of the kind of people who help, not the kind of people who need help, therefore helping the helpless makes humanity worse and the best way to increase per-capita altruism would be to sterilize or genocide the selfish and un-altruistic"; "helping others is dysgenic by nature"; "our only hope of avoiding AI eliminating us entirely is to spend however much time we have left pruning down humanity in the hopes of proving that we are ultimately harmless and can be confined to this planet like a big zoo for hairless pandas -- we should be euthanizing anyone not on board with this, as the 'indomitable human will!' people are precisely the ones that will get us all turned into paperclips"; "existence itself is suffering, a child dying of malaria is better off, and the only reason we pretend otherwise is because the pretense allows us to flatter our inherent animal fears instead of just committing suicide the way we all secretly want to"; "we are obviously a simulation and already we appear to be reaching compute limits -- we really need to get this under control before someone notices and turns us off")

Personally, my response the tower is "don't should on yourself".


Glibly: I like altruism, but I'm not so sure about the effective part.

Less Glibly: There's some nasty buried even in the foundational assumptions, and the only one I can't think of a problem with is "Some methods of helping are more effective than others".

"We should help other people" has issues with pretty much every word in the sentence, to start with. Who is the "We" in this? Lots of people have more effective resources to help with, should they be the ones doing proportionately more helping? "Should" implies obligation of an unfixed level, but I don't personally feel obligated to do more than help those in eyesight. Everything else is at the level of a want rather than an obligation, and no amount of hypothetical or -actual- suffering of other unseen people is going to convince the emotional part of my brain that this is suddenly a pressing need on the level of "Drink water, you're dying of dehydration".

"Help" is particularly odd; perhaps the kind of "Help" I think is best is taking the capital from the highest individual concentration and redistributing it, through bloodshed if necessary, weighing that the suffering caused to a few would be outweighed by the utility to the many. This would imply that I'm okay causing suffering, possibly extreme suffering, to a few people in the interests of many others, which is at odds with anyone who has "avoid causing suffering entirely" as a function. Perhaps I'm an anti-natalist, and I figure the best kind of "help" is to render all of humanity sterile to prevent further suffering, having some kind of function that weights suffering magnitudes of order higher than pleasure. These are kind of extreme examples, but they're also things a decent number of people actually believe, so they can't really be treated as absurd hypotheticals either.

"Other people" is a funny phrase. Which other people? Friends, family members, inner circle, outer circle, people in my community, people in my nation, the world? Disproportionately I'm going to weight who I should help in favor of some of those, and I'm not sure everyone will agree with me on the exact weighting. The extremes I can think of is the completely selfish (who can be excluded easily I think) and the eusocial (seeing themself as truly no different from any other person and allocating resources appropriately). This also doesn't take into account "future" or "hypothetical" people, which is something I have a hard time working up any kind of moral enthusiasm for. Until said people exist, to me, they might as well be "fictional".

What I'm trying to say in this scattershot approach is this: Effective is a means to an end, and I'm not sure we share the same end.

Expand full comment

I heavily endorse the tower of assumptions framework. It's much healthier for debates than eternal accusations of motte-and-bailey. Of course, it should work for every set of beliefs, not just EA, and especially for your outgroup. Figuring out which part of the tower you actually disagree with should be a necessary step in any thoughtful and reasonable critique.

Expand full comment

Yet depressingly, lots of people here can't tell the difference between a tower and a motte-and-bailey :-(

Expand full comment

I like the tower analogy, and I find it helpful because it shows what I've gained from utilitarianism (more Peter Singer than the "EA Movement") and it shows where I get off the bus.

The first daring claim of EA is that we should all be doing something meaningful to help strangers. Giving 10% of your income isn't some brand new idea, as many people have pointed out, but it's something that very few people in rich countries actually do. I think putting an explicit target in place and encouraging people to hit it is really valuable, and I have gained from this.

The second daring claim is that when we try to help others, we should think critically about how to be as effective as possible. Again, this isn't exactly novel and EA aren't the only people doing this, but it genuinely differs from how most people give.

The third claim is where I get off the bus, which is that the way to think critically about how to be as effective as possible is to do explicit utilitarian calculus in order to identify the highest value-added causes and strategies. My problem with this is threefold. First, I think that this style of discussion is inaccessible to most people. That's not a problem in and of itself, but it has the consequence of restricting the world of EA thinking to a small group of highly educated people who like quantitative and analytical thinking. Second, decisions about how to prioritize animal welfare, AI safety, and global public health end up coming down to beliefs about speculative probabilities and moral weights. This is, again, not necessarily a bad thing, because identifying what speculative assumptions matter is really helpful. But it creates a problem when combined with the first issue, because discussions about these speculative probabilities and moral weights end up happening among a small, heavily selected slice of the population. I can't help thinking that the steady movement of EA towards "sci-fi" charities is in part due to what seems to be a huge overlap between people involved in EA and people who like sci-fi. And I can't help thinking that the reason global public health has slipped down the list of EA activities is because, from a utilitarian calculus perspective, it's boring.

And third, this style of identifying effectiveness ends up putting a ton of effort into these big-picture moral tradeoff questions and relatively little into nuts-and-bolts questions of how to actually effectively administer programs. I love GiveWell, and give 10% of my income through GiveWell. However, most of what GiveWell ends up doing is giving money each year to the Against Malaria Foundation, Evidence Action, and a handful of other very well-run organizations doing crucial work. Those organizations are highly cost-effective because they are very well run, and they had been well-run for decades before EA was a thing. It's great that EA is funneling more cash toward these organizations, but I really wonder whether encouraging people interested in EA to get really good at debating the relative value of X-risk vs. public health interventions is more useful than encouraging them to learn how effective organizations are managed.

I think this is what someone like Freddie De Boer is getting at when he says that it's a combo of trivial and stupid. He sees "care about others and make substantial personal sacrifices to help them" and "care about distant people as much as close people" and "try to be effective in how you help" as trivial. He sees the utilitarian calculus as stupid. I agree with him, except that I think he greatly underestimates the value of pushing trivial, banal arguments.

Expand full comment

I agree with pretty much everything you're saying here.

Expand full comment

I don't disagree with Scott's post and its intent, but was triggered to digress by "Helping 3rd world is more effective than helping 1st (eg bednets vs. alumni donations)". (Leaving aside for the moment that "alumni donations" is a very unfair example; alumni donations IMHO do more harm than good.)

EA became a movement largely because mainstream charity is surprisingly ineffective. Most of the reasons I've heard given for why it's ineffective amount to "because it's aimed at the 3rd world" (I would say "4th world", those parts of the 3rd world which are having the worst problems at the moment). Corruption, war and instability, cultural resistance, lack of infrastructure for maintenance, lack of social trust due to endemic desperation, massive tragedy-of-the-commons problems, insufficient education to transfer control to the locals, violent religious extremism, etc.

"2nd world" originally meant "the communist bloc", but that's not a useful meaning anymore. I think we should redefine it as "economically in-between 1st and 3rd world." Countries like Mexico, maybe. Using that definition, I think charity should focus on 2nd-world countries, or lagging areas of 1st-world nations.

As an American, I say Mexico and some Caribbean nations in particular should be our focus, because they're our next-door neighbors, we can help them more efficiently than we can help people in Africa, we understand them better, and because helping them also helps ourselves.

A gaping wound heals from the outside in, not from the inside out. Your body heals the area around the edge of the hole until it's whole enough to begin healing the area inside it, shrinking the hole as it goes. That should be the model: not to dump resources into wastelands ruled by warlords, but to assist nations that aren't quite able to do much global altruism themselves, to get them to the point where they, also, can assist other nations.

Expand full comment

I donated 1k to the EAIF because I lost a weight loss bet but I don’t have a systematic tithing plan.

I think I can get a higher ROI from professional gambling than a charity would earn on its endowment. So patient philanthropy is my excuse.

Expand full comment

Your recent posts about EA just encouraged me to donate to the World Land Trust, which someone in the EA community rated as a highly effective charity.

I donated to WLT the first time in 2021 after listening to a Sam Harris podcast about the EA movement.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I will always be a supporter of effective altruism, but I will say that I found 80,000 Hours to be utterly repellent when I started listening to them maybe 7-9ish years ago. Unless they've changed, their close association with the movement is surely going to leave a bad taste. Their whole schtick seemed to be that you should be a privileged, wealthy, charismatic genius whose parents could afford an Ivy League university or you should please stop blighting the movement with the crime that is your existence. This was what I got from the material in their podcast and their articles, anyway - basically, anyone who is not a genius and/or went to an elite school isn't valuable, and should be actively gatekept out of any participation in effective altruism. This bothers me especially because if I know anything about the people who get involved in these kinds of movements, their biggest vice isn't necessarily money or power, but access, because access is their conduit to those things - so gatekeeping the movement to future golf partners certainly appears self-serving. IMO it would almost be better if those kinds of pharisaical people weren't trying to hitch their wagon to good maximization because of the collateral damage they could potentially cause.

Maybe my experience was some kind of freak outlier where I happened to select the podcasts and articles where they said the most repellent possible things, IDK.

Expand full comment

Gwaaaaaaaaaaaaaaaaaaan ya tease just post it. What are they going to do, write a NYT article that says you're racist again? You can take it.

Expand full comment

Cool. May I translate it to Portuguese and publish it on <80000horas.com.br>?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> To me, the core of effective altruism is the Drowning Child scenario.

The Drowning Child scenario is a bad source of intuition for ethics. The scenario, as depicted, relies for its intuitive force on the fact that one has actually come upon the child, and so that child is now immediately proximate. It then seeks to transfer any conclusions drawn to cases in which the persons to be aided are not proximate. But proximity (not merely in a geographic or temporal sense, but most of all in a relational sense) is fundamental to ethics. To give but one example, one has ethical responsibilities to one's own child that one does not have to unrelated children halfway around the world.

My rejection of EA is at the level of what Scott, in his "tower," calls its "fundamental assumptions." By the time we get to its "less basic assumptions," I think it's a dumpster fire.

Expand full comment

> But proximity (not merely in a geographic or temporal sense, but most of all in a relational sense), is fundamental to ethics. To give but one example, one has ethical responsibilities to one's own child that one does not have to unrelated children halfway around the world.

This example does not provide good evidence for your claim. You stand in a special relationship to your child as a parent, and that's true no matter the proximity.

If your child moved to the other side of the world tomorrow, would you have less of a duty to help them flourish?

If you want to give a friend of yours a bit of money for a life-saving surgery, would learning that they just last week moved abroad change anything? How about learning that the friend grew up abroad?

Expand full comment

The relationship of parent to child is *itself* the relevant "proximity." That's *exactly* what I was emphasizing when I said I was speaking of proximity "not merely in a geographic or temporal sense, but most of all in a relational sense."

So all the bits about moving to the other side of the world or growing up abroad are irrelevant. They are not counterexamples to what I'm talking about.

Expand full comment

Right, so I think the drowning child experiment doesn't take you from "I should care about my child" to "I should care about some stranger's child in a distant country". I think it's more meant to take you from "I should care about a stranger's child that's right in front of me" to "I should care about some stranger's child in a distant country".

Expand full comment

I agree that is the intention of the scenario. I don't think that it successfully argues for that transition.

Expand full comment

If you think that proximity is of critical moral relevance, then yep, agreed that typical EA is unlikely to be a good fit for you.

I think the Drowning Child scenario does a great job of asking people the question "Is proximity morally relevant?" Lots of people feel like the answer to that should be no, it's not relevant (I'm one of those people).

I think if your response to the Drowning Child is that you really don't care much about what happens to folks distant in time or space from you, it's likely that you won't find most EA charities a good fit. In particular, longtermism and global health & poverty will probably not mesh with you.

I would bet that the majority of people believe that the life of a person in their community is no more than twice as important as that of a random person somewhere in the world. Given that, I think that what you're saying about the importance of proximity is more extreme than most folks' views. Happy to learn that I'm wrong if there are surveys or other quantitative data on this.

Expand full comment

I agree with most of your summary, descriptively.

One caveat: I strongly endorse a particular form of longtermism consistent with "being a good ancestor," but that's because I think one's descendants are "proximate" in the relevant sense, which is not merely a simple function of space and time.

Expand full comment

>I would bet that the majority of people believe that the life of a person in their community is no more than twice as important as a random person somewhere in the world.

I would argue most people who claim they believe this are lying to themselves about their actual feelings because it makes them feel better about themselves.

Expand full comment

Isn't this literally just the Motte and Bailey/Field and Fortress argument, except with five levels instead of two?

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

My only specific argument against EA (I have many other arguments, mostly related to not feeling bad about being selfish, but those aren't specific to EA; they're about altruism as a whole) is that EA, by doing sometimes convoluted computations on costs and benefits, abstracts the decision about whom you should give to away from you, the giver; it introduces an adviser. And one with sophisticated arguments.

This is one step (often the major step) to being scammed.

And I believe the natural psychological opposite of generosity is scam aversion (I feel, for example, that being stingy is very highly correlated with fear of being scammed, and with the effort you typically spend detecting and avoiding scams).

So while most EA is certainly not a scam, it is more scam-like than "just feel like it" altruism for many people (certainly people who do not spend most of their time doing analytical tasks), and this will trigger the stop-altruism-scam-alert switch in many heads.

Expand full comment

> So while most EA is certainly not scam, it is more scam-like than "just feel like it" altruism for many people (certainly people who do not spend most of their time doing analytical tasks) , and this will trigger the stop-altruism-scam-alert switch in many heads.

I'd wager that EAs give their charitable giving a great deal more thought than most non-EAs, and are therefore less liable to get scammed. For example, GiveWell make all their cost-effectiveness analyses available online (https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models) and other EAs evaluate them (https://forum.effectivealtruism.org/posts/ycLhq4Bmep8ssr4wR/quantifying-uncertainty-in-givewell-s-givedirectly-cost).

I find it more likely that someone who gives after seeing some ad or after an acquaintance asks is susceptible to getting scammed.

Expand full comment

Sure, but I think I didn't make my point as clearly as I could. I was not speaking about the minority of people who will do EA analysis to choose their charity. They are already convinced.

I'm talking about deciding to donate to a cause that you believe in vs. donating to an EA organisation that will decide how to allocate your money as effectively as possible. It's the loss of direct decision-making that feels scam-like.

In fact, I'd even guess that giving decision power to your ingroup (for example, a Catholic charity doing a lot of different stuff, if you are Catholic) feels less scam-like than giving it to an EA organisation, for many people, at least when they don't easily follow EA benefit computations...

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

The too-spicy-to-publish Q&A gives the wrong impression, and the implicit charge of hypocrisy levelled at the person disagreeing with EA rings hollow, because when you ask most EAs similar questions you'd get the following answer:

Q: Do you, an EA, donate 10% of your income to effective non-profits?

A: No.

One piece of evidence here being the EA Survey (https://rethinkpriorities.org/publications/eas2020-donation-data):

"The median percentage of income donated in 2019 was 2.96%. [...] 20% of EAs who answered the donation question reported donating 10% or more of their income in 2019."

And people who fill out the EA survey and the subset who answer the donation question will be far more likely to have donated than the typical person who identifies as an Effective Altruist.

Expand full comment

1. I give some 2-6% of my income to poor people, as I have some very poor relatives (and they are ok-people). I would help them much, much more if I could get them a work-permit for my country. (Yeah, Bryan Caplan may be the most effective altruist around. Maybe after Bill Gates.) And if one of them turns out really smart - just as Scott wrote - I have an extra 10K stuffed under my mattress just for her.

2. I think all commenters should first answer the question. Do you do 10%? - Then go on with their smarty ramblings. Why: I am stunned by the HUGE amount of comments on this post (after just 10 hours) - and the LOW percentage of those who address the one, simple, huge and important question the post raises.

2. b) Obviously, I feel: The reason they comment, and the reason they don't tell - is: They don't "tithe" (I don't, really). And they do feel guilty about it (who could not!). And they absolutely hate feeling guilty about it. (I claim I don't, too. But I guess I am bs-ing myself. Too.) And so we are back to: "It's Cognitive Bias - all the way down". - Scott is the psychiatrist. He knew what he was up to.

3. I am contra the tithe. I understand it was meant also as an upper bound. Still, it feels like: "Less is kinda stingy." I say: It's too high. As Scott himself (on SSC) once wrote - I paraphrase -, the really bad poverty in this world could be alleviated by less than 1% of world GDP. And I guess people here would get much less defensive about 1% or 0.2% than about 10%. Start small. It may be enough. - 10% does matter less for Scott and me. But hey, there are people who really like their car and their vacation. And their pizza-delivery - plus a hundred other things that make me blink. And God bless them.

4. The taxes we pay are more than 10%. Closer to 50%. - And as we no longer pay it to a king just to not get killed for not paying (and get protection from other "kings" plus maybe some irrigation works), but as we are told we pay for "fairness"/"equality"/"health"/"war on poverty"/"education"/"valuable infrastructure", we are kind of entitled to feel we pay those taxes to "do good". Charity, really. (Ok, the USofA also puts a bit more than 2% of GDP into defense - we do a bit less. Trump claims this is "charity for world-peace". And Germany free-loading. And he may not even be that wrong there. Anyway, the biggest parts of the budget are "social" - all over the first world, probably in all 190+ countries.) - Asking for another 10% on top - is over the top, imho.

5. Two fine pieces bashing high-flying EA. Had us feeling smart, safe and cozy. Now you hit the brakes, turn around, look me in the eyes and ask: What about you, yes, you? - Dr. Alexander: I love the rides you take me on!

6. I am into Bible quotes, too, so here it goes, Matthew 19, obviously:

16 And behold, a man came up to him, saying, “Teacher, what good deed must I do to have eternal life?” 17 And he said to him, “Why do you ask me about what is good? There is only one who is good. If you would enter life, keep the commandments.” (...) 20 The young man said to him, “All these I have kept. What do I still lack?” 21 Jesus said to him, “If you would be perfect, go, sell what you possess and give to the poor, and you will have treasure in heaven; and come, follow me.” 22 When the young man heard this he went away sorrowful, for he had great possessions. - end of quote (just before the tow-goes-through-eye-of-needle-part. Or camel: misquote, but silly stuff can be so much more memorable.) - Here we are: asked for 10% not 100%, and we still scream, run and hide.

7. The time I gave over 10% to charities, I had just a stipend, was 19, and near suicidal - cuz "life" seemed a fucking grey goo of pointlessness. If you feel like donating the full tithe or your kidney, et al. to complete strangers, I very much hope you are ok. I doubt you are. But I love you. Jesus loves you. ;)

Please, please get a life. There is one out there. For you. Not sure a poor me even wants your dimes, if you are sad inside: "losing love

Is like a window in your heart

Everybody sees you're blown apart

Everybody sees the wind blow"

Expand full comment

I was taught that the give-all-you-can Bible verse was Jesus dialing in on the young man's material selfishness as that man's primary sin. People who aren't very wealthy do have other, possibly mortal, sins, but for this one it was his materialism.

Otoh, taxes aren't charity. They are collected at gunpoint, are not voluntary, and the taxpayer is advised that 'government is what we do together' and that the taxpayer will get the benefit of those taxes. (Via more aggressive tax collection, or so we are told.)

(I did like this post, ty.)

Expand full comment

I am glad to hear that you are in a better place.

I want you to consider that you can care deeply for yourself and also care deeply about others, to the point where you make your life one of humble service. I also want you to consider that you may be projecting on others when you so easily and effortlessly proclaim all criticism of EA as projection.

Expand full comment

Q: 100% of my work time is spent in a chronically and sometimes acutely personally hazardous job that supports a system to save humans. When was the last time you saved the life of a person you could name, risked your own existence to help another, or shadowed someone who had to triage resources to help save lives?

I fly search and rescue helicopters for a living. When flying a search pattern for 2+ hours (about $9k USD per hour that taxpayers fund) that is based solely on an individual saying that "they saw a distress flare in the distance", I wonder whether all of these reports should instead get directed to a hotline that says "thank you for your call, instead of responding we are purchasing $20k worth of life vests and distributing them."

I also think about the 2 hours spent hovering over a beach looking for a young man who went swimming, began to struggle, and was seen going underwater. I think about finally finding that body, and how the moment we had originally arrived on scene and weren't able to see anyone swimming, we knew that we weren't trying to save a life anymore, we were just going to be trying to find a body. Should my boss have called that family, and told them "we are standing down the helicopter, and are going to direct those $18k towards more signage that your family member ignored when they went swimming on a red flag day?" I'm not sure, but I also know that I am not the person who has to have those conversations with a grieving family member, which means it's easier for me to abstract. It is really, really difficult to be a person in the middle of these situations, and figure out how limited resources should be allocated.

What I struggle with for Effective Altruism is that it seems the intellectual firepower of the movement is not directed towards supporting the people who have to make these difficult decisions, and instead towards increasingly esoteric abstract ideas. It seems to have been hijacked by thinkers who lean very heavily on big calculations that negate a lot of the human element of charity and humanitarian work. These same thinkers seem to be now used, intentionally or not, as a mechanism for tech billionaires to try to launder their reputations about doing good while avoiding anything that would challenge the structures that created their wealth or would move the needle on helping anyone who is alive today. I originally got interested in Effective Altruism because I felt that if you wanted to effect change in the world, and help people, you should be willing to open yourself up to daring ideas that were testable and had a way to measure impact on humanity. That seems to no longer be the banner around which EA rallies.

And that makes me sad.

Expand full comment

> What I struggle with for Effective Altruism is that it seems the intellectual firepower of the movement is not directed towards supporting the people who have to make these difficult decisions, and instead towards increasingly esoteric abstract ideas. It seems to have been hijacked by thinkers who lean very heavily on big calculations that negate a lot of the human element of charity and humanitarian work.

For what it's worth, as of one year ago, EA funding was directed like this:

- 44% to global health and poverty

- 13% to animal welfare

- 10% to biosecurity (including pandemic prevention)

- 10% to AI

etc.

I think you're right in the sense that a lot of conversations right now are about more "frontier" issues like longtermism and AI. But to some degree at least that's because things like global health and animal welfare are pretty settled and uncontroversial within EA: the moral arguments have been made, there's funding infrastructure, orgs have been spun up, etc.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think I found the same source(1) as you regarding funding. I am having trouble, not being in the field, understanding how AI risk should be anywhere near 10% of the money under the EA banner. Why is that considered a greater risk than an asteroid hitting Earth? One we know for a fact has happened, the other is complete conjecture. If we are going to throw gobs of money at a fanciful project based on a technology that doesn't exist but is supposedly right around the corner, why not spend 10% of the money on research for how to "ensure that fusion technology isn't controlled by capitalist powers who will find some way to maintain resource scarcity to preserve class structure?"

If I were to pose a spicy take, it would be that people who have a computer science background find AI interesting, and they desperately want both moral legitimacy and funding, and the way to find that is to say you are working on AI risk. Extra spicy take - pushing people towards focusing on AI risk as a career is a deliberate attempt to lock up smart people with little world experience or social skills and prevent them from focusing their cognitive power towards taking on embedded power structures. If someone wants to research AI safety, by all means go ahead. But its status as one of the highest-priority areas for people to pursue as a career on 80,000 Hours is problematic and I think symbolic of why EA has started to take repeated criticism.

(1) https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> I think I found the same source(1) as you regarding funding.

Yes, sorry, I forgot to share the source.

> I am having trouble, not being in the field, of understanding how AI risk should be anywhere near 10% of the money under the EA banner. Why is that considered a greater risk than stopping an asteroid from hitting Earth? One we know for a fact has happened, the other is complete conjecture.

Well, in short, people prioritise AI above asteroid risks because they think it's more important and more neglected. E.g. Toby Ord puts the existential risk of an asteroid impact this century at 0.0001% (this one is fairly solid since we have base rates and asteroid-tracking systems) and the risk of AI at 10% (as you note, this one is much less certain).

I know AI risks seem really weird and out there, and I sympathise. What made me take it seriously is a combination of:

1. Seeing the incredibly fast progress being made in the last few years, including on making general AIs (e.g. https://www.infoq.com/news/2022/05/deepmind-gato-ai-agent/). This makes me think expert forecasts and estimates like Ajeya Cotra's (https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might) that give a median estimate of transformative AI at 2050 or so may not be wildly off.

2. So that suggests we may get transformative human-level AI in our lifetimes. If that happens, it's going to have an enormous impact on society.

3. Already today, AIs are doing things we didn't intend them to do in unexpected and inscrutable ways. We don't really understand what goes on inside them, nor do we have any idea how to make them "aligned" with our values. If we give a very powerful AI a bunch of responsibility and it fails in a hard-to-predict way, that seems potentially very bad.

Of course the actual arguments (and counter-arguments) are more nuanced, but imo that's enough to feel that it isn't some totally crazy and impossible thing to be concerned about.

Personally, I feel more of an emotional connection to global health/poverty and especially animal welfare. But I now think AI is important too, and that there's room for EA to care about more than just those two.

> But its status as one of the highest priority areas for people to pursue as a career on 80,000 hours is problematic and I think symbolic of why EA has started to take repeated criticism.

Well, 80,000 Hours is a longtermist org. There's also Animal Advocacy Careers (https://www.animaladvocacycareers.org/) and Probably Good (https://www.probablygood.org/) that have more of a neartermist focus. Fwiw, I think a focus on careers tends to skew longtermist, since it's more talent-constrained than e.g. global health and poverty (where we have good interventions that just need more money).

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

The tower metaphor does a good job reminding me that, much like Christianity or Linux, it's okay to fork.

I feel optimistic that there could, in my lifetime, be another group of people who say, "let's do good, let's prioritize good, let's give intelligently, let's help encourage the willing to give 10%, and also, nah bro, I'm not a utilitarian, and I'm definitely not an EA, those guys are weird".

If anyone else on this comment thread is interested in taking a few steps toward such a movement, let's talk. I don't think I'd put 10% towards it yet, but I do keep finding myself wishing for somewhere to put 10k that will do actual good, and "Non-Creepy Alternative to EA" seems pretty good to me. There are a lot of cush software people here. Maybe we can do something good together.

[Note: I wrote "get as many people as possible to give 10%", then I changed it to "help encourage the willing to give 10%", because of my public position that evangelism is not best.]

Expand full comment

...another group of people who say, "let's do good, let's prioritize good, let's give intelligently, let's help encourage the willing to give 10%, and also, nah bro, I'm not a utilitarian, and I'm definitely not an EA, those guys are weird".

One thing I've seen in other comments - and which I've found insightful - is that many church communities (or other movements) match all of these descriptors, and the arrogance of EA advocacy in thinking that one specific community is the only one currently fitting the bill is part of the turn-off. Consider existing movements too, even if they don't match your aesthetic.

Expand full comment

Point taken. I very much will.

Expand full comment

I like your terminology of "fork", and I think it works with the mental model of what is happening in the EA community, at least as an outsider. I think what advocates of EA have to decide is whether their branch includes the folks who want to use the EA framework to justify spending gobs of money researching AI risk as the most effective utilitarian application of giving for good. If not, then it seems like they need to defend against this adversarial attempt to take over the EA banner. I would be inclined to say that if I were a big-name EA entity, I would try to separate my brand and identify the "longtermism" folks as a separate fork so they don't get to launder their reputations using the goodwill that EA has developed.

Expand full comment

A tricky problem with maximizing flavors of utilitarianism is there will always technically be something "best", and the difference between best and second best, at the limit of time, will be unbounded. People will argue about which thing is best and which thing is second best, but in the long run they seem much more likely to split into separate camps than to tolerate giving some to one cause and some to another.

Expand full comment

Let me begin by saying that I am a strong believer in donating to charities. I contribute more in a year than the average American family household income. I do not gauge it off my income as I am retired and living on my savings. In my situation, my 1040 income is just a phantom of accounting conventions.

My contributions are primarily directed to supporting institutions in my communities and to the relief of poverty in those communities. I minimize contributions to universities, hospitals, museums, and "arts" (performing, plastic, or otherwise) on the theory that the money that goes to them winds up in the pockets of the professional managerial classes who are overfed in our society.

I feel good about what I am doing. I believe that I am discharging my obligation in full.

At the same time, I am not impressed by what I have read about "Effective Altruism". It strikes me as an attempt to recreate the obligational structure of revealed religion without recourse to scripture or tradition. As such, and as with all such efforts that I have yet seen, it feels like all of its foundational axioms are prestidigitated from midair.

I am not opposed to these axioms as I find them to be consonant with those derived from revealed religion in western civilization, but, I don't find the reasoning made on their basis compelling.

Expand full comment

> At the same time, I am not impressed by what I have read about "Effective Altruism". It strikes me as an attempt to recreate the obligational structure of revealed religion without recourse to scripture or tradition. As such, and as with all such efforts that I have yet seen, it feels like all of its foundational axioms are prestidigitated from midair.

I agree with this. I'm personally religious and have participated in religious proselyting. And man do I see echoes. Except...without the whole transcendent "truths" and eternal salvation involved. "Having the form of godliness but denying the power thereof" comes to mind.

Expand full comment

Words have meanings, as Scott clearly thinks. Given the way language is used, I think it's highly unclear whether "Effective Altruism" refers to the minimal core of action-guiding ideas as he describes them, or (as he denies) to the actually existing movement.

This is partly because most people describing themselves as EAs don't donate 10% of their income to effective charities (evidence: https://rethinkpriorities.org/publications/eas2020-donation-data ). And are far more likely to accept the ideas that Scott treats as being in the bailey. As an empirical fact, someone can be accepted as an EA without ever donating anything, but not if they depart too far intellectually.

I do personally use EA in Scott's sense to describe myself, but I feel the need to spell that sense out to avoid unclarity. E.g. I say "I believe in effective altruism in the sense of donating more and more effectively, which for *me personally* captures the core ideas. And I'm giving 10% of my lifetime income."

My impression is that Scott doesn't think that "feminism" in practice means "thinking men and women are equal". The same considerations apply to what "Effective Altruism" means.

Expand full comment

I think the best criticisms of Effective Altruism are at the foundational assumption level. For example, Effective Altruism seems to believe in helping people indiscriminately, regardless of whether they are our allies or enemies. I don't think that's a good thing; in fact, I think it's naive and stupid. We should only be helping our allies and be indifferent to the suffering of our enemies, if not actively working to kill them. In the "drowning child" scenario, nobody ever says "Hold up. Is this child a *good* or a *bad* person?"

If you believe that helping good people has the same value as helping bad people then it's only a short step from there to "Good things are the same as bad things" (post dumb kitten picture here) and then nobody will ever take you seriously.

We need to ask ourselves: "What kind of world do we want to create? Which people are helping move that project forwards and which are holding it back?" That way we can draw up a list of our friends and our enemies, and start applying Effective Altruism to our friends and Effective Nihilism to our enemies.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Nobody asks if a drowning child is a good or bad person because children are understood by most to be too young to hold real moral responsibility for their deeds (outside a few notable exceptions).

You also clearly don't mean "good or bad", given that you've suggested some things in the past that fail virtually every single moral framework imaginable besides your own philosophy of one (like this, your 100th incarnation of the argument "why can't we just kill everyone who disagrees with us and their entire families so that nobody will ever fuck with us again?")

Given that you also insist that you're a supergenius capable of creating sci-fi infohazards with your mastery of meme warfare, however, I also don't really believe you can be a good or bad person for similar reasons as a child cannot and once again urge you to seek treatment.

To new readers: the above person is so far outside the common psychic framework of humanity he's only visible as a distant speck from the sanity waterline. Conduct interactions accordingly.

Expand full comment

This debate reminds me of Scott's remark in the old SSC about two very different conceptions of government: some people believe that governments are basically good and effective, with conflicts and inefficiencies a regrettable but fixable issue - or basically bad, horse-trading and pork-barrel machines where, if any good happens, it is by accident.

I think something similar happens here as well - are charities/NGOs/nonprofits basically good, where inefficiencies can be solved by data/planning (where EA proponents seem to stand), or are they jobs programs for naive Beltway college grads where any good that happens is despite, not due to their best efforts? If you hold the second opinion, the effectiveness numbers/framework is suspicious - even if NGOs are not actively cheating, they know the rules and the score, and will focus on the programs and the metrics that will attract even more funding, and the cycle begins anew - with doing good getting lost somewhere in the Q3 targets or the RCT results. Focusing too closely on metrics will take you nowhere if everyone has incentives to game them.

(Q - A: yes, I used to volunteer at an NGO doing gender equality stuff for African teenagers, promising across all the right metrics, yadda yadda - had such an awful experience that, paraphrasing Scott's review of San Fransicko, I sacrificed all my principles on the altar of "this isn't worth my dignity, time or effort". Now squarely on board with the second opinion.)

Expand full comment

I agree with that. I'm suspicious of any systematic set of metrics attempting to quantify effectiveness, because they inevitably get gamed *even if people aren't being malicious*. And there are many malicious actors in this and any similar field involving money.

And opaque metrics crunched by some organization with its own internal structure and opaque hierarchy of values? Yeah, no thanks. That's blind faith.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

1. "your friends won't let you post"?

Really, you let your friends decide what you can and cannot post? I'd say you need a bit more autonomy.

2. "Is there some more systematic way to commit yourself to some amount between 0% and 100% of your effort (traditionally 10%)? And once you’ve done that, how do you make those resources go as far as possible? This is effective altruism, the rest is just commentary.

...

Foundational Assumptions

We should help other people.

We should strongly consider how much effort we devote to this

Some methods of helping people are more effective than others."

If this is what EA is, then it really isn't much of anything and certainly not new.

Consider the history of hospitals as a more efficient and effective means of the charitable activity of caring for the sick. The acceptance of Christianity as a legitimate religion in the Roman Empire drove an expansion of the provision of care. Following the First Council of Nicaea in AD 325, construction of a hospital in every cathedral town was begun, including, among the earliest, the hospitals founded by Saint Sampson in Constantinople and by Basil, bishop of Caesarea.

How about Populorum progressio (1967)?

The New Economics, W. E. Deming (1994) which is certainly applicable to the management of philanthropic activities.

It would seem to me that the invention of taxation might flow from the same foundational principles.

Does "civilization", which is rooted in the essential solidarity that is part of the human condition, also flow from the ostensible EA foundational principles?

Weak EA cannot be anything new. Strong EA, if it really exists, must be something else, and here is where some of the hard critique of Singer et al., the gimmicks of 10% (why not 9% or 11%, why not progressivity in giving), and "ranking" and other pseudoscientific methods used as evaluative regimes perhaps come into play.

3. Does "longtermism" suffer from the immanentization of the eschaton? There would seem to be some risks of creeping "gnosticism" as Voegelin used that term.

Expand full comment

I don't think EA is bad in the sense that any one individual shouldn't do the things it proposes. Donating 10% to an effective charity is a good action to take right now. I do have an issue with the idea that EA or something like EA is or ever can be *the* solution to most global problems. Maintaining a system that creates vast amounts of wealth, funnels almost all of that wealth upwards in a very steep pyramid and then motivating those at the upper layers to voluntarily donate some of their personal wealth to alleviate the suffering that is caused, in large parts, by the very system that enables those people to have that option is....not super convincing to me.

Expand full comment

Also worth noting that collapse of a certain layer doesn't even necessitate collapse of the higher layers; you might have *other reasons* for connecting them than the specific EA ones.

E.g. a Christian might agree with the foundational assumptions, disagree with some of the less basic assumptions (like utilitarianism), but still conclude—because of the foundational assumptions, and the Christian's own intermediate assumptions—that it would be good to support many of the specific charities GiveWell recommends. Utilitarianism isn't the only reason for thinking that it's a good idea to give a real chunk of your money to the global poor and that you should try to do this in an effective, data-based way.

Something I wrote about why Christians and other non-utilitarians should still support GiveWell charities: https://jecs.substack.com/p/notes-on-effective-altruism

Expand full comment

This is a great essay! I've often wished more Catholics and other Christians would become involved with Effective Altruism.

Expand full comment

So long as EA remains primarily defined by people like Yud, who are not merely atheist but anti-theist (in that they make the abolishment of religion a key part of their personal morality and ingrain it as an unspoken assumption into the EA and rationalist community- see the common chauvinism around here that the religious are unthinking dogmatic anti-science zealots almost to the man) that is likely to remain a wish. Most people can take the hint they aren't welcome somewhere.

Expand full comment

Seems unlikely. At least Catholics generally reject utilitarianism, root and branch. The thesis is that we already have a guide to what is good and right, and it is not subject to a majority vote. If coveting your neighbor's wife were to be considered by 98% of modern adult voters to be A-OK, the discipline of marriage just another form of oppression and enslavement, and giving free rein to sexual frolics with whomever whenever were shown beyond quibble to maximize the utilon count worldwide, the Church would nevertheless consider it a grave sin.

Indeed, the idea that the greatest good is achieved by maximizing human pleasure is probably pretty offensive to Christian ideology. We're here (the Christian would say) to do God's will and execute on a number of duties we are given by virtue of our stewardship -- of the natural world, of each other, of our children and the future generally -- and whether we have fun doing it is somewhat beside the point. I mean, in principle we surely should, there are a lot of ways to do so, but if for some unfortunate random reason we cannot, we are called to nevertheless persist in doing what is right and needful even if it deprives us -- or others, even many others -- of pleasure.

Expand full comment

The replace-predators-with-herbivores argument goes back to a 2010 New York Times opinion blog post [0] by Jeff McMahan, who does indeed seem to be active in the EA community [1].

From the NY blog:

> Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.

> I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.

I totally agree with that.

> Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations. Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

Obviously. I would assume that most mammals have a finite life span, so every one of them has to die eventually. Getting killed by a predator is certainly not nice, but it clearly beats most other forms of death nature might have in store for them.

> There is therefore one reason to think that it would be instrumentally good if predatory animal species were to become extinct and be replaced by new herbivorous species, provided that this could occur without ecological upheaval involving more harm than would be prevented by the end of predation.

While I get the argument, I think it is extremely theoretical. It feels like discussing the merits of turning our sun into a black hole. Granted, we have no idea how we could accomplish it and clearly don't have the tech that would result in a net positive from doing so at the moment, but perhaps there is some theoretical argument being made that feeding a black hole interstellar gas would result in more usable energy than just capturing Sol in a Dyson sphere. Meanwhile, we are still burning coal in power plants.

[0] https://archive.nytimes.com/opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/ (Article is paywalled, but viewing the source code, then copying the meat of the article in a new html file and viewing that with a browser works reasonably well.)

[1] https://forum.effectivealtruism.org/topics/jeff-mcmahan

Expand full comment

Oh lordy, I appreciate your spiciness.

I have a similar spice level when some of my very smart friends claim that utilitarianism is incoherent because there is no universal morality or utility function. FFS, everyone wants their infant to live, and thus reduction in infant mortality is effectively a universal good. Oh? There's some people who would be happier if their infants died? Oh damn you got me there!! Well f*ck it, I guess there is no good that can possibly be done in the world because for every ostensible good, there is some tragic, broken-brained individual (real or imagined) who wants the opposite 🙄🙄🙄

It's good for us to challenge our moral intuitions, but this is ridiculous. And it pisses me off because it's about something that actually matters.

Also, I get a little crazy when lovely brilliant wise humans roll out the intuitively-barbarous utilitarian thought experiments. While the *first order* consequences of murdering a random person to distribute their organs to 5 others are a net good, the *second order* consequences are predictably awful with very high confidence (living in a society in which we must fear that the Mandatory Altruistic Giving Police are coming for us next).

Of course, utilitarianism is challenging. If we *had* a DeepThought, we might commit seeming moral atrocities because over the course of the next 10,000 years it would save a trillion lives. ... in that universe, shit would get seriously weird. And similarly, it's possible that lowering infant mortality has a 4th, 5th or 6th order effect that is net bad... and I don't know what to do about such "footbridge" (https://www.themantic-education.com/ibpsych/2016/10/27/moral-dilemmas-the-trolley-and-the-footbridge/) problems. In that sense, I am kinda glad we don't have a DeepThought, and even that such future prediction is likely to be intractably hard. And this does make the idea imprecise and messy. But that's fine. Maximizing the flourishing of conscious creatures is a good North Star even if our loss function is heuristic-based and messy.

Expand full comment

"FFS, everyone wants their infant to live"

There was a large legal kerfuffle in the US suggesting this isn't true.

Expand full comment

I chose the word with care! :-)

"An infant (from the Latin word infans, meaning 'unable to speak' or 'speechless') is a formal or specialised synonym for the common term baby, meaning the very young offspring of human beings. The term may also be used to refer to juveniles of other organisms. A newborn is, in colloquial use, an infant who is only hours, days, or up to one month old. In medical contexts, newborn or neonate (from Latin, neonatus, newborn) refers to an infant in the first 28 days after birth;[1] the term applies to premature, full term, and postmature infants.

Before birth, the term fetus is used. The term infant is typically applied to very young children under one year of age; however, definitions may vary and may include children up to two years of age. When a human child learns to walk, the term toddler may be used instead."

https://en.wikipedia.org/wiki/Infant

Expand full comment

WP is written and edited by people on the left.

Fetus is used by people who don't think of babies as humans until they are born.

People who value 'fetuses' call them babies.

Expand full comment

Is the meaning of 'infant' actually controversial? Regardless, this is just semantics. I mean a child who has been birthed, and now is outside of their mum's womb. Post-natal. Post getting nutrients via the umbilical cord and now likely getting it from a boob or bottle. Whatever that is called. Because yes, there is controversy over when the entity within the womb should be considered a person. But only a small percentage of people support an arbitrarily late-term abortion, and vanishingly few support a post-birth abortion (unless the child is suffering and has little hope of surviving).

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I don't think the meaning of the word is unclear. But I think your argument that "of course everyone wants her baby to live!" is much less obvious than you think it is, given that there are clearly people -- apparently a very large number of people -- who want the right to decide that maybe it shouldn't, well past the point when it is clearly alive, e.g. kicking and such.

To accept your axiom, we would have to imagine that there is a fair number of people who (1) are sufficiently negative on the life of that creature (fetus/baby/whatever) that they want the option to kill it, and yet (2) at some undetermined point in its growth, when nothing particularly special or unique happens, all of a sudden they switch over emotionally from "I want this thing dead" to "Nothing is more important than the life of this thing!"

That's....hard to credit. Doesn't sound like normal human beings. Id est, if a woman who really wanted to abort her fetus were, for some random reason, compelled to carry it to term and give birth to it, it would be more plausible to imagine that she still carries significant reservations about whether this creature should be alive or not. Indeed, is this not one of the standard pro-choice arguments? That if you compel someone to be a mother to an unwanted child, the child will end up not being well-parented? That it is, in fact, better for that particular child to not to be born to that particular mother (which implies a pretty drastic reduction in quality of life if it is)?

Expand full comment

Thank you for the thoughtful comment.

Might part of your argument be that this concern about the life of a post-natal baby is a social construct, because it seems likely that only by *convention* would someone want to preserve the life of a born baby at all costs, but be willing to kill an unborn baby kicking in the womb? And therefore it isn't universal?

If so, I agree that social convention influences where this line is drawn, a social convention that differs left from right. That being said, late-term abortions are rare and vastly less popular than early-term abortions. If there is a poll about infanticide, wouldn't you agree that 97%+ would likely say that it is morally unacceptable (and that the <3% were probably mostly trolls or people who didn't understand the question)? Our moral intuitions align at some point despite differing in early stages of fetal development. And the Left isn't nearly so extreme in their views on abortion as the Right makes them out to be (and probably vice versa). We think it's complicated, and generally speaking, very sad.

Relatedly, factory farming of animals is morally abhorrent. I suspect most people would experience extreme distress if faced with the horrific living conditions of those poor creatures. But social convention says "it's acceptable." By social convention some truly horrific things are deemed acceptable (as we all know). But it still has constraints. Importantly, a great deal of "acceptable" atrocities are committed "out of sight, out of mind" for the majority of the population (whether it be concentration camps, factory farms, etc.), or are otherwise subject to powerful rationalization ("animals aren't conscious" or "those people are evil and subhuman"). The need for the atrocity to be out of sight and/or rationalized points, IMO, at a more fundamental shared intuition that the atrocity is an atrocity.

And the unborn are "out of sight" compared to a born baby. I will bet you that a large percent of (the small percent of) mothers who'd have considered a late-term abortion yet didn't get one still are flooded with feelings of love and care for the post-natal baby... because we humans have a harder time denying a vivid reality than one behind obscuring layers.

But some mothers still won't. Some poor girls will give birth to a baby and then throw it in a dumpster. But this is an act of desperation. Animals do it too, of course - a mother rabbit (a doe) will kill her babies if she feels sufficiently unsafe (I saw this a couple times raising rabbits... it breaks my heart). But I don't think this detracts from the universality. Sure, it means that reduced desperation is ALSO an extremely important measure of a society's well-being, and an important, valid and effectively universal utilitarian goal. The rate of non-desperate, non-mentally-ill humans wanting to commit infanticide has got to be nearly non-existent, wouldn't you say? (And I would count "one child policy" parents as desperate, though that could be a longer discussion.)

That all being said, I could take another angle entirely, which is to refine my statement to: "Most people want their post-natal baby to live and be well. Those that do (99+%) desire interventions that can preserve their post-natal baby's well-being if threatened, preferring (of course) interventions with as few tradeoffs as possible. Therefore, reducing infant mortality via low-tradeoff interventions improves the first-order flourishing of mankind in an important, non-trivial and effectively universal way." In other words: reducing the mortality of infants born to *parents who want them* is an effectively universal goal, and thus an appropriate target for utilitarian focus.

This could potentially generalize to: "increase the likelihood that people can achieve their preferences over a long period of time in such a manner that minimizes negative externalities." It just so happens that putting resources toward near-universal preferences is more cost effective (vs weird rare preferences).

Expand full comment

I feel like the strongest objection to EA is something along the lines of "Objector claims to not value human lives or prefer non-suffering of others, except insofar as receiving direct (non-moral) benefits from that person such that they have value as a tool." Fortunately, people who seem to honestly feel this way are fairly rare or I think we would be stuck in a much worse Nash equilibrium. As it is, such people must be negotiated with differently from typical humans as they have a different value set.

Expand full comment

"Q: I refuse to positively contribute to any form of EA (despite my very significant capacity to do so) until Scott publishes his spicy essay, therefore doing the most good through an emotional blackmail"

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

A: Are you donating 10% of your income to normal, down-to-earth charities?

A2: I don't have philosophical beliefs that imply that I must donate to charities at all, so even if I donate nothing, I'm fine according to my own beliefs. I object to your weird sci-fi charities because I believe that that weirdness is a symptom of bad epistemic hygiene. And yes, I do try to keep proper epistemic hygiene myself, thank you for asking.

(And if I add up the amount of money from my taxes that goes to, or purports to go to, helping other people, it would exceed 10% anyway.)

Also, I think the drowning child argument is deceptive. Surely you know about central and non-central examples. I'd save a central example of a drowning child. I might not save a noncentral example of a drowning child. If someone tried to exploit me by creating an endless series of portals to locations in the world where there were drowning children, at some point I would refuse to save any more. I'm reminded of the Superman story where some guy wants to romance Lois Lane, so he creates a radio that automatically scans the world and keeps playing whatever disaster it can find, so Superman hears about and keeps having to go off and save people.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> If someone tried to exploit me by creating an endless series of portals to locations in the world where there were drowning children, at some point I would refuse to save any more.

Sure. Now in this hypothetical, roughly how many children would you save before calling it a day? Is it zero?

Expand full comment

Answering that question is opening yourself up to an exploit.

Expand full comment

It's opening you up to a *response*, as all dialogue does. I can think of plenty of principled tradeoffs where a significant finite answer would be perfectly defensible, given of course healthy error bars for physical ability, opportunity cost, etc.

What would be less defensible would be a regression of non-answers specifically serving as a smokescreen for an answer the speaker themselves believes to be unacceptable, which would certainly be something given the diversity of opinions already in this comments section. The fallible have my sympathy, the depleted my respect, and even the egoist my attention. The coward has my scorn.

Expand full comment

Nobody owes you an argument, or an opportunity to convince them that you, not they, are correct.

Expand full comment

I tend to agree. Likewise, one is not obligated to refrain from judgement of one who presents the easy part of an argument and declines to continue right as it might become personally inconvenient - in fact, the more someone doubles down on their right to *not* be convinced the easier it becomes. People drop out of online conversations all the time, but when there's a demonstrated willingness to engage but a seeming allergy to the object level it doesn't leave many alternatives.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

The answer is that 1) it would depend on how many resources it would take to save the children and on how generous I felt that day, and 2) the answer to the question "how many children would I be *obligated* to save before calling it a day" would be zero.

Expand full comment

> 2) the answer to the question "how many children would I be obligated to save before calling it a day" would be zero.

We're probably not going to solve the interplay between morality and obligation this late in a comment thread, but is it justifiable to make inferences about a person who chooses to save zero when they were perfectly capable of more?

Expand full comment
founding
Aug 24, 2022·edited Aug 24, 2022

> For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation.

I find the question of moral obligation to be the *least* interesting part of the discussion, because I'm a non-cognitivist and don't think most of the participants are using "should" in any way that I recognize as coherent. I contribute to GiveWell simply because I find their empirical analysis compelling and have decided saving 10-15 statistical lives per year is a more appealing use for that portion of my income than any other I can think of, and I don't fuss about anyone else making different spending decisions. Likewise, I've taken the GWWC pledge for pragmatic reasons rather than moral ones. The community has (wisely) agreed on a Schelling point that if you contribute 10% then nobody will give you shit about not doing more, so by taking the pledge and holding to it now I can get involved in the much more interesting debates around cause prioritization without anyone being able to brush me off with answers like the ones in the "spicy" version of this post.

Basically, I don't think this tower model is valid, because I reject some of the fundamental assumptions and yet I'm still on board with most of the higher stuff.

Expand full comment

I'll be a bit more blunt. Responding to criticism with "well, you're a hypocrite or lying about your real concern" is insulting and bad-faith. Belief and action are separate--lots of people agree that "helping people is good" and even that 10% is a reasonable number for resources to spend *without actually doing either in any systematic way*. Telling them that they have to go be better about that before they can criticize y'all for wasting time and money on AI-risk (or whatever the criticism is) is both non-responsive to the criticism and downright bad-faith argumentation.

And this style of "question from fake person + real response" is a rhetorical trick to smuggle in assumptions. Just like it was for Socrates. For example, the first appropriate response way back up there would really be more like

Q: I don’t approve of how [E]ffective [A]ltruists[1] keep donating to weird sci-fi charities.

A: Are you donating 10% of your income to normal, down-to-earth charities?

R: Why does that matter? That wasn't even a question. It's a statement of fact. And now you're questioning my virtue? <walks off justifiably more convinced that EA, everything associated with it, and anyone who uses that label for anything is cranky and obnoxious>

[1] Note the sleight of hand Scott pulled here? He conflated "people who are effective about their altruism" with the real criticism, which is of "people who identify with Effective Altruism, the movement". I've put it back to be more true to what's actually going on.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I think Scott's question wasn't ideally stated. The relevant question imo isn't "I don’t approve of how EA keep donating to weird sci-fi charities", it's "I don’t approve of EA because it's donating to weird sci-fi charities".

The point is that the second statement may not be completely justified if donating to "weird sci-fi charities" isn't all of what EA is. It would be enough to reject the higher levels of the tower, but not the foundation.

Expand full comment

But it's all of what EA, in practice, *does*. Or enough to suspect that the roots are also messed up. Because by their fruits ye shall know them...and donating to weird sci-fi charities instead of actually helping real people here and now isn't (to the presumed interlocutor) a good fruit.

And that doesn't solve the problem that responding to "why are you doing X" with "well, you're not doing anything, this is something, so shut up" (which is what he's doing) is really bad faith. Absolving yourself from criticism because of something that the critic is or isn't doing is just plain wrong. No matter what. Criticism stands or falls on its own merits.

Expand full comment

> But it's all of what EA, in practice, *does*. Or enough to suspect that the roots are also messed up. Because by their fruits ye shall know them...and donating to weird sci-fi charities instead of actually helping real people here and now isn't (to the presumed interlocutor) a good fruit.

Here are the top recipients of EA funding last year:

1. Global health and poverty, 44%

2. Farmed animal welfare, 13%

3. Biosecurity, 10%

4. AI risks, 10%

5. Near-term US policy, 8%

6. Effective altruism/rationality/cause prioritisation, 6%

etc.

Granted, longtermism is likely a higher % now that the FTX Future Fund has started doling out cash, but global health and poverty still gets huge chunks of EA money.

https://forum.effectivealtruism.org/posts/nws5pai9AB6dCQqxq/how-are-resources-in-ea-allocated-across-issues

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

AI risks + Farmed Animal Welfare + Biosecurity (all things that are *highly* debatable) end up being 33%. That's enough to *rationally* wonder if the tree really is bringing forth good fruit. Especially since their priors on all of those things are, by the lights of most people, *really wacky*. I'd say that anyone who claims to focus on being effective yet spends *even 0.1%* of a significant fraction of their money on AI risk is well into "it smells" territory (because my priors for both AI risk and the ability of spending money to do anything about it are both...really really low).

Effectively, people can disagree about what counts as being effective. Saying "well, you're not giving enough money to charity" when challenged about what you're spending it on (and hence why you claim to be being effective) isn't an answer to that. It's a deflection. And one that *justifiably* tells people you're spouting dogma, not actually interested in serious discussion. Especially when you insist that spending that flows directly from your core premises.

That is, if "AI Risk is the 4th most important thing to spend money on" is a direct outcome of your core assumptions, for most people, they're going to say that the core assumptions are wack somewhere. And if you insist that the person *really* doesn't actually like charity if they demur...that's just bad faith.

Edit: And it doesn't really matter. The core of my point is the last sentence--criticism stands or falls on its own. Talking about what the critic is doing (or isn't doing) is *entirely and always* an irrelevant deflection and a bad-faith argument technique.

Expand full comment

Sure, but the bottom levels of the tower are not in serious question by anybody (except for a few habitual contrarians [including myself] in this thread). The bottom levels are strong and popular and well fortified, they almost resemble (dare I say it) a motte.

Here's the way I see this conversation:

"I don't like the top of your tower"

"But you have to admit the bottom levels are pretty great"

"Oh yeah, the bottom levels are great, but I don't like the top of your tower"

"But you have to admit the bottom levels are pretty great"

Where is this conversation going? Do we ever reach the point where we can discuss the top of the tower?

I would say that if Scott wants to say "Hey, people should do more to help others, whether within the EA paradigm or outside it" then that's fine, but it should be a separate thing, not a deflection of criticism of either effective altruism or Effective Altruism.

Expand full comment

> Sure, but the bottom levels of the tower are not in serious question by anybody (except for a few habitual contrarians [including myself] in this thread). The bottom levels are strong and popular and well fortified, they almost resemble (dare I say it) a motte.

So I think I only mildly disagree. I think in one sense they are pretty obvious, but then the vast majority of charitable donations seem to be made with little to no consideration for effectiveness, many people don't bother to do things like donating at all, before GiveWell there was ... what ... the utterly inadequate Charity Navigator, etc.

I guess while I agree the bottom levels aren't anything fundamentally new, and most people are happy to pay lip service to them, EA systematically acts on that foundation in a way that's not typical (though also not unheard-of).

> I would say that if Scott wants to say "Hey, people should do more to help others, whether within the EA paradigm or outside it" then that's fine, but it should be a separate thing, not a deflection of criticism of either effective altruism or Effective Altruism.

I think Scott and many other EAs (myself included) would be thrilled to see more people act on that foundation, with their own angle on it. See e.g. Richard Yetter-Chappell (https://rychappell.substack.com/p/the-strange-shortage-of-moral-optimizers):

> [T]he absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way. If a large number of genuinely beneficent people believed that actually-existing-EA was going about this all wrong, I’m surprised that they haven’t set up an alternative movement that better pursues these goals while avoiding the shortcomings they associate with traditional EA. (Perhaps they’d prefer different branding. That’s fine; I’m not concerned here with the label, but with the underlying values and ideas.)

Expand full comment

agreed with this for sure. Getting burned in response to criticism doesn't erase your criticism. We're all hypocrites, but that doesn't mean all of our ideas are nonsense.

Expand full comment

"Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

A: You should donate more effectively."

This sets up a sort of tyranny of the rocket equation, charitable giving edition. First, you agree to donate 10% of your income/time/energy. Now, you need to research charities to make sure you are not just giving alumni donations to a wealthy university that doesn't need it. But this takes up more time and effort, amounting to X% of your remaining income/time/energy. You also need to make sure that your sources are trustworthy, so this requires another y% of the income/time/energy that's left.

I think that this goes against the message of https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/, which makes no more demands than "10% of income to charity". So where do we actually draw the line? If 10% of income is no longer good enough because it is donated to one's wealthy alma mater, then it seems like Effective Altruism was never simply about "10% of income to charity", and controversial statements about which causes are more worthy (and which charities are more effective in promoting their stated causes) sit lower down on the tower of assumptions than Scott proposes. I think this is what makes this article feel so much like motte-and-bailey.

Another way of stating it: the Effective Altruism movement needs to stop pretending to be solely about technical "is" problems when a core part of its mission is adjudicating "ought" problems. The "is" problems include giving 10% and calculating charity effectiveness. The "ought" questions are exactly those Scott dismissed as the equivalent of denouncing Christianity because of Bible translation errors: whether one cause is more or less worthy than another cause. Once you are there, what principle is stopping arguments about alumni donations vs. feeding starving children to end up as arguments about feeding starving children vs. donating to weird sci-fi charities?

Expand full comment
Comment deleted
Expand full comment

I was using alumni donations as a stand-in for ineffective giving. It works just as well as the option of "keep the expensive suit" in Peter Singer's scenario of a drowning child, or any other thing generally considered ineffective from a charity standpoint.

Expand full comment

Post the essay. If The NY Times can't end you, what chance does anyone else have?

Expand full comment

I think that for Effective Altruism to add any value, you need to be able to get from the second level of your tower all the way to the top.

The bottom two layers are just "Altruism", with no "Effective" about them; they're pretty much universal values (even the choice of 10% rather than 5 or 15 is shared with a lot of Christian tithes and Muslim Zakat, because fingers!).

One way from the second layer to the top is via the third and fourth layers, but I think those are both highly questionable - I think it's hard to imagine a less effective form of altruism than AI safety, and I'm dubious about how efficiently individual effort can mitigate the risks of pandemics or nuclear war.

But a much better one is via sites like Givewell.org, which try to identify specific, well-grounded, short-termist projects ABC and XYZ that do more good per pound spent than other similar projects.

When I started describing myself as an effective altruist, those seemed like quite a large part of the movement. Nowadays, they largely seem to have been eclipsed by "long-termist" projects, whose effectiveness looks much lower to me, and I think that's a shame.

I still donate to malaria prevention, because that's who Givewell recommend, but I'd like to see more EA effort going on competing projects trying to solve the same efficient-short-term-charity-identification problem with different methodologies, to test whether their results replicate, and less focus on high-speculative moonshots.

Expand full comment

> When I started describing myself as an effective altruist, those seemed like quite a large part of the movement. Nowadays, they largely seem to have been eclipsed by "long-termist" projects, whose effectiveness looks much lower to me, and I think that's a shame.

You might know this, but as of last year at least global health and development still got close to 50% of EA funding. Now it's likely less (as a %) due to the FTX Future Fund increasing longtermist funding, but I think in absolute numbers it keeps going up.

Expand full comment

I didn't, and "close to 50%" is depressingly low.

I think this reinforces my conviction that effective-altruism-the-philosophy should, if followed correctly, lead to extreme scepticism about an awful lot of Effective-Altruism-the-movement, on the grounds that it's diverting funds from effective short-termist causes to ineffective long-termist ones.

Or, in Tower terms, something has gone horribly wrong between level 2 and level 4.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> I think this reinforces my conviction that effective-altruism-the-philosophy should, if followed correctly, lead to extreme scepticism about an awful lot of Effective-Altruism-the-movement, on the grounds that it's diverting funds from effective short-termist causes to ineffective long-termist ones.

Fwiw, I don't think this is true. The amount of money EA directs to global health and poverty in absolute terms is trending up significantly. See e.g. the plot of money moved by GiveWell annually here: https://forum.effectivealtruism.org/posts/KNMRojuJgSjGth7m2/givewell-s-money-moved-in-2020

Expand full comment

Is it reasonable to assume the most effective use of charity during my lifetime isn't known yet, and I could put 10% in a piggy bank until my death or when I recognize it as I get older/wiser/the world changes?

Are there any very rich effective altruist people or organizations who think the answer is no who would insure my donations? I make 100k, donate 10k now, but if I change my mind and wish to use that cash a different way they give me 10k in future and don't donate the next 10k they receive or generate.

Expand full comment

> Is it reasonable to assume the most effective use of charity during my lifetime isn't known yet, and I could put 10% in a piggy bank until my death or when I recognize it as I get older/wiser/the world changes?

Yeah, this is called patient philanthropy or patient altruism.

https://forum.effectivealtruism.org/topics/patient-altruism

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

I do not buy the "You should donate more effectively" answer. Does it imply that in the "drowning child scenario" if my suit costs more than average cost of saving a life via some charity, I should let the child drown but commit to increase my donation to that charity by the cost of my suit? This contradicts my intuitive answer.

And furthermore, I am not convinced that QALYs are the metric I want to optimize for. If I were trying to rationalize my choice of charities, I would probably say I am optimizing the future scientific/cultural output of society, but even here I do not truly optimize; rather, I choose things that I personally consider important by gut feeling.

Expand full comment

Well no, "donate more effectively" is separate from the issues of how much to donate. In the drowning child scenario, all "donate more effectively" would say is that if there were two children drowning in one pond and one child drowning in another, and you could only reach one pond in time, you would want to make sure you were running toward the two-child pond.

Expand full comment
Aug 24, 2022·edited Aug 25, 2022

Well, my story kind of implies that I am committed to spending the cost of my suit on saving children, and unfortunately there is no second pond with two drowning children in it.

So my choice is to either save the one I see, or sell my suit and donate to charity that will, on average, save 2.1 children's lives. (Even better than the second pond I do not have!).

My instinctive answer is that I should be saving the child I see. Which implies that the decision I make is not purely utilitarian.

Which means that while this story does convince me that spending money on lives is generally a good thing, I do not believe in arithmetical optimization of lives per dollar spent.

Expand full comment

I think you're sort of mangling the intent of the thought experiment when you bring in the whole concept of selling your suit and donating the proceeds. Rescuing the child in the pond is precisely meant as a metaphor for saving lives through charitable donation, so what you're doing is comparing the real thing to the metaphor of the thing - which will certainly get you some odd results!

This is why I bring in the version of the two ponds - instead of comparing saving the child in the pond vs giving your suit for >1-childen's-worth of charity money, it's a more sensible metaphor to compare saving one drowning child than two. (This is, yes, different from Singer's original formulation - but so is your formulation where the suit-wearer has access to GiveWell.)

Singer's point with his thought experiment is to say that seeing the child drowning, even though it makes it obviously more motivating to save that child, doesn't actually carry moral weight - a dying child is just as bad if you see it or not. If you accept that in the two-pond version of the story I gave earlier you would strongly prefer to save the two children rather than the one, I think you can back-convert from the metaphor in the same way to show how you would be (or at least, should be) equally motivated to pursue "arithmetical optimization of lives per dollar spent."

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Well, "my" experiment explicitly shows me, that I do not agree that dying child right here is as important as dying child elsewhere, because I will choose saving one child right here over saving two children elsewhere via charitable donation of same amount of resources, and even find it abhorrent not to do so.

Also, I am not sure I can mangle the intent of my thought experiment, because I am the one deciding what the intent is, am I not? Obviously you might find my thought experiment not convincing, and that's OK, I have nothing against you pursuing effectiveness in your donations in whatever way you choose.

Expand full comment

5 years ago, most people outside of the LessWrong/rationalist space had never heard of Effective Altruism. You would meet people and tell them “We should give to charity effectively” and they would be like “cool, great idea, I want to join your movement”, and EA really did mean “try to be effective with your charity”. (And yes, I do give to charity, and yes, I do try to make it effective.)

But since then something changed. Effective Altruism, the movement, got much larger. The idea that “we should give charity effectively” is also more prevalent. So prevalent that people have it without even thinking to name it.

They don’t think “I’m going to think about where my money is most effective, so I am therefore an EA”. But people who give to charity are more primed to think about its effectiveness just from being in a world where that kind of question is in the zeitgeist.

You might say it’s “in the water supply.”

(Just like CBT:

https://slatestarcodex.com/2015/07/16/cbt-in-the-water-supply/

)

It reminds me of this article: https://slatestarcodex.com/2013/04/11/read-history-of-philosophy-backwards/

Where the point is made that once a philosopher “won” they are no longer associated with their “winning” ideology, just their controversial ones.

In fact, a movement is never defined by its ideology

(https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/)

And especially not when its ideology is just the most obvious thing: “we should be effective with our money.” I mean, Republicans don’t donate to the Democratic party, since they think Democrats are not as effective at governing as Republicans - so they instead donate to the Republican party, which to them is “more effective” - does this make them “effective altruists”?

Almost anyone who donates anything thinks at least somewhat about “where would this money go best”.

So when people talk about EA, they are usually talking about EA “the movement” not “EA the idea that is already obvious”

EA has gone the way of feminism where it means different things at different times, but so far this duality has not been weaponized like the other words in the most famous SSC article of all time:

https://slatestarcodex.com/2014/07/07/social-justice-and-words-words-words/

Expand full comment

> Where the point is made that once a philosopher “won” they are no longer associated with their “winning” ideology, just their controversial ones.

There is a very important distinction between a philosophy that has "won", and one that has merely become popular to talk about without people actually internalizing its principles. I think one rewrite of this post would be to say that some have made the shift to critiquing the controversial parts of EA, while more or less ignoring the parts that would be inconvenient to either disagree with or actively follow. (In fairness, not a new pattern at all. See also: religion.)

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

A metaphor.

One of my recreations is long bicycle rides: 50 miles, 100 miles, more. The moment I set out on one of these, my sole concern is to cover the distance with as much dispatch as possible. That does not mean going as fast as I possibly can at each moment. That would just wear me out and result in abandoning the effort without getting half way through, or even getting injured. For an endurance activity, the effective way is to maintain the highest power output that I can sustain for the whole distance. That is the target. The effort I can make (to riff on Peter Singer's book titles) is the effort I must make (to riff on the contents of those books). I may briefly exceed it to get up a hill, but I can recover when I go down the other side. Stops to eat, drink, or pee are no longer than they need to be, and those will be my only rest until the trip is done. Even during those stops, I am not "resting". I am preparing myself to take up the work again. Everything, including "rest", is in service of reaching the finish in the shortest time that is possible for me. Rest, properly conducted, is part of the work.

The exegesis:

a long bicycle ride: doing good.

minimising the time: doing as much good as possible.

attempting to go as fast as possible all the time: scrupulosity, burnout, nervous breakdown.

actually minimising the time: actually doing as much good as possible.

rest only in service of the work: rest only in service of the work.

until the trip is done: the rest of your life.

Expand full comment

This would imply that you are morally obligated to give as much as you can and still remain alive, sane, and employed; it doesn't justify a limit of 10%.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Indeed it would. That is where I see the EA argument inexorably leading. (To be clear, I reject the whole thing down to its foundations.) The pledge is called “Giving What We Can”, not “Giving 10%”. Supererogation, limited duty, tithing and no more: that is for the laity, on the pragmatic grounds that if you demand everything, you will get nothing, but if you demand a little, you may get that little. A spiritual Laffer curve. But for those called to the priesthood, or to sainthood, everything is demanded and everything is given, and they serve as an example to the laity to show them that the good they can do will never be enough.

Expand full comment

Sorry charities and the overall betterment of humanity...you talking about that makes me feel uncomfortable with the choices I've made in my life, so my feelings are MORE important than saving or improving hundreds of millions of lives?

Though this does bring up a point for EA...could it not be more about psychology and helping avoid ruffling feathers? The chimp primate hierarchy holds humanity back in many many ways. How to ease in the fragile egos of people who at least hold some good values? Or do we focus on the super EA types alone to shunt them into charity work while alienating the bros of the world? Could a splinter movement of 'Easing into Altruism' be of value to capture people who are not used to giving anything to charities or causes?

There is an argument about the EA movement being a portmanteau of obvious ideas or well known ideas with deep historical roots...

And the answer to that is...'so what?' Every idea has historical roots and other people doing similar things or coming up with combinations which are local or historical repeats to some extent.

Certainly wanting to do things efficiently is indeed a long-standing idea. In no way was the idea, concept, or desire to do things in ways that have a greater impact with fewer resources invented by the EA movement. From guerrilla warfare, to the evolutionary pressure on organisms across time to be effective in their environments, to the very physical processes of osmosis or Brownian motion of particles...efficiency appears to be a foundational principle that can be derived from observation of the natural world. Therefore it is not attributable to EA.

Also certainly, wanting to or organising to donate tithings or 10% to charity or in some other way to a cause greater than yourself outside your household in a way which is gifting or giving away resources has deep historical roots in many disparate cultures from donations of food to buddhist monks to church tithings to leaving the edges of your field unharvested in the torah and not gleaning for dropped seeds in the harvest. These ideas are 3,000 plus years old at least.

Indeed, indeed. And yet as a secular movement to inspire more interest in and actual instances of giving to charities it has certainly sparked a lot of conversations on the topic with some people donating to charity who otherwise would not have since the more traditional religious charities lack the reach or authenticity to convince certain audiences or demographics.

Of course EA is also off-putting in its message, style of communication, or concerns to some people. Just like the Catholic church's old messaging about saving your soul through donations to their organisation was off-putting to many people like Martin Luther, which led to a Reformation now affecting over a billion people. EA seems to lack the potential to cause an impact as huge as that, and yet it is the focus of anyone's criticism?

Would not some EA folks getting into some big church charity organisations and making them operate even 5% better than they were be a boon to the world? What could possibly be wrong with that? It is insecurity and ruffled feathers to note how one's own life did not and likely will not go down such a pathway. But what the heck is wrong with getting even a handful of smart people into that kind of work? Sorry charities and the overall betterment of humanity...you talking about that makes me feel uncomfortable with the choices I've made in life, so my feelings are MORE important than saving or improving hundreds of millions of lives?

It shows an odd lack of self-reflection for someone deeply engaged in a debate on charity, who objects to EA, to talk about negative utility when they are...thinking about charity more than they otherwise would have!

Expand full comment

Lots of ad hominem attacks to defend Effective Altruism here... I believe a movement should be able to stand on its ideas, not on shaming anyone who criticizes it for not giving more to charity. But maybe I'm just a horrible person because I don't give 10% of my income to charity, and therefore I'm not even morally qualified to comment...

I think people should just be nice to each other/ look out for one another's needs. I generally have had super positive experiences with giving (and receiving, when I was living on the road) in very simple, personal ways. Handing cash to some hobo or service worker just feels really nice. I don't know how much it helps, but it feels nice (and it definitely helped me in the past). Giving money to some charity and then getting harassed by them for the rest of my life for more money feels less nice.

Also, charities aren't actually guaranteed to be moral arbiters for our communities. Giving money to a charity is very similar to giving money to a church or political party, and has similar potential downsides. Having your primary source of revenue for your organization be donations does not guarantee that your organization is deserving of such donations.

So yeah, I might find a charity I really believe in in the future that overcomes my general concerns here. But for now I'll just play it safe and keep giving in personal ways as often as I can. I think this is the opposite of the ideal of effective altruism, which is about giving to organizations with grandiose goals to save the world a million years from now rather than simply giving to some random poor person to brighten their day today. But I just think building up good vibes in my community is nice, I don't really feel like I have the power to effect bigger things in the present or the future (I'm very dubious about the effectiveness of my donations to stop something like, say, nuclear war). I'll just do what I can to make life more pleasant for those within my tiny sphere of influence and call that enough.

Expand full comment
Aug 24, 2022·edited Aug 24, 2022

> Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

> A: Are you donating 10% of your income to normal, down-to-earth charities?

This seems like a pretty serious mistake on the part of the answerer; if I think you're doing a lot of work to make the world worse, you should obviously stop doing that regardless of whether I'm doing any work to make it better.

> A: Are you helping people today?

Literally everyone will say yes to this. They'll be right. Again, the challenge makes no sense.

> Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer.

Utilitarianism gives the wrong answer in every scenario that involves measuring the amount of whining people do about something to determine how to respond. It gives the wrong answer in many, though not necessarily all, scenarios where two people want one thing. But it is guaranteed to make those scenarios worse over time rather than better.

I don't think either is an especially exotic class of scenario.

Expand full comment

I was reminded of this by earlier comment on this post, but fairly deep in thread. If only there was another blogger who four years ago wrote a post about why agreeing with previous steps in the tower has nothing to do with disagreeing with the end state of an ideology. I'll let Scott take it from here:

"The further we go toward the tails, the more extreme the divergences become. Utilitarianism agrees that we should give to charity and shouldn’t steal from the poor, because Utility, but take it far enough to the tails and we should tile the universe with rats on heroin. Religious morality agrees that we should give to charity and shouldn’t steal from the poor, because God, but take it far enough to the tails and we should spend all our time in giant cubes made of semiprecious stones singing songs of praise. Deontology agrees that we should give to charity and shouldn’t steal from the poor, because Rules, but take it far enough to the tails and we all have to be libertarians.

I have to admit, I don’t know if the tails coming apart is even the right metaphor anymore. People with great grip strength still had pretty good arm strength. But I doubt these moral systems form an ellipse; converting the mass of the universe into nervous tissue experiencing euphoria isn’t just the second-best outcome from a religious perspective, it’s completely abominable. I don’t know how to describe this mathematically, but the terrain looks less like tails coming apart and more like the Bay Area transit system:" ...

"But it’s even worse than that, because even within myself, my moral intuitions are something like “Do the thing which follows the Red Line, and the Green Line, and the Yellow Line…you know, that thing!” And so when I’m faced with something that perfectly follows the Red Line, but goes the opposite directions as the Green Line, it seems repugnant even to me, as does the opposite tactic of following the Green Line. As long as creating and destroying people is hard, utilitarianism works fine, but make it easier, and suddenly your Standard Utilitarian Path diverges into Pronatal Total Utilitarianism vs. Antinatalist Utilitarianism and they both seem awful. If our degree of moral repugnance is the degree to which we’re violating our moral principles, and my moral principle is “Follow both the Red Line and the Green Line”, then after passing West Oakland I either have to end up in Richmond (and feel awful because of how distant I am from Green), or in Warm Springs (and feel awful because of how distant I am from Red)."

But are you giving 10%?

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I think your conversation is inherently unfair to most ordinary people, because you are only considering as altruistic charitable work something that you can put down on your 1040 as a charitable donation. But that's not so much how ordinary people live. Decent people act charitably a dozen times a day, every day -- holding the door for a stranger with arms full of packages, helping a child lost in a store find his parent, listening patiently to an old person tell the same old boring fish story he's forgotten he told you umpteen times already, telling the waitress who looks glum to keep the change from a $20 for a $3 cup of coffee, taking a meal to someone from the church recovering from surgery -- and on and on. But nobody keeps track of all this, or writes it down in some Journal Of Charity from which they could quote during such a conversation. Indeed, they probably don't even remember most of it, because they don't think very much about it; it's just the way decent people behave.

Does it add up to 10% of their working hours? Who knows? Many people who do the most in this area -- students and retired people, who are the backbone of volunteer networks at hospitals, polling stations, grand juries, soup kitchens, shelters -- aren't working at all, so the calculation would be difficult.

Expand full comment

"nobody keeps track of all this, writes it down in some Journal Of Charity"

In late imperial China, there was a popular religious movement encouraging people to do exactly that:

"The ledgers of merit and demerit were a type of morality book that achieved sudden and widespread popularity in China during the sixteenth and seventeenth centuries. Consisting of lists of good and bad deeds, each assigned a certain number of merit or demerit points, the ledgers offered the hope of divine reward to users 'good' enough to accumulate a substantial sum of merits."
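The ledger system described above amounts to simple signed point accounting: deeds carry merit or demerit values and the balance is their sum. A toy sketch (the deed names and point values here are invented for illustration, not historical):

```python
# Toy model of a "ledger of merit and demerit": deeds carry signed
# point values, and the net merit is their running sum. All entries
# and point values below are invented for illustration.

from typing import List, Tuple

# Each entry: (deed description, merit points; negative = demerit)
Ledger = List[Tuple[str, int]]

def balance(ledger: Ledger) -> int:
    """Net merit: sum of all merit and demerit points."""
    return sum(points for _, points in ledger)

entries: Ledger = [
    ("helped a stranger carry packages", 1),
    ("donated to a soup kitchen", 5),
    ("told a self-serving lie", -3),
]

print(balance(entries))  # 3
```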

Expand full comment

OK, Scott. Good point, but what would you say IS the best argument against EA? Stop arguing against the bad arguments.

If it matters, I generally support EA.

Expand full comment

act on the margins of your current behavior and stop weighing in on everything else

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

Since I’m in the middle of my Nth ‘reread all SSC articles chronologically from the oldest one up’ streak, and am around the 2014 mark…

Following this framework, I'm curious what you would currently say modern feminism is ‘about’, or whether you think it could charitably be modeled as a similar tower of assumptions.

(‘Q: I don’t approve of how feminists keep aggressively shaming nerds online for just wanting affection like every human being! A: what are you actively doing in your personal and professional life to resist unconscious bias and call others out on it when you spot it?’)

Expand full comment

+1 to this.

Expand full comment

I think the real objection is something like "I don't want Effective Altruism the movement to be high status, because that will raise the individual status of people I don't like who are doing bad things and will help propagandize for bad ideas".

I don't agree with this objection. But surely it's what is really going on.

Expand full comment

I love the "things I will regret writing" category. As far as I'm concerned, turn the spice level up to 11.

Expand full comment

Hoo boy! So much to address!

Altruism is required for the Good Society. Yes!

And there is a great force multiplier for directing altruist dollars to poor countries.

I have done a bit of Effective Altruism. But not 10% of my income. And I probably never will. More local efforts get a higher priority even if they aren't Effective.

My excuses:

1. Jesus called for Loving thy Neighbor, not loving Humanity. Maybe this is about soul growth. The spiritual experience of helping a person you actually meet and get to know is extremely different from giving to fix some statistic. But there are other reasons for focusing on neighbors, as I will get to below.

2. There is a limit to altruism. If the prosperous give all their surplus to the needy, there is no incentive to be prosperous. Indeed, even if one wants to prosper just for the purpose of giving more, too much giving now eats into giving more later. (Do note: many of the needy in Africa and other traditional societies are needy BECAUSE of the excessive effective marginal tax rate imposed on the productive in said societies.)

3. Most of the good works in the world are motivated by self-interest. The farmer farms to make money. The burger flipper expects to get paid for serving me a Big Mac, etc. The non-spiritual reasons for altruism are that no market system is free of externalities, public goods, needy unproductive people, etc. Taking some wealth out of the productive market system for taking care of other needs is a double plus good thing. But at some point excess takings kill the goose that lays the golden eggs.

4. Gifts given locally are a mix of True Altruism and Rational Self Interest. If I pay for adequate homeless shelters, I can without guilt say "fuck off" to the bums who accost me on the sidewalk for beer money. Indeed, that is the Rational Self Interest case for paying taxes for a welfare system. Better to be half as rich and not live in an armed compound to avoid kidnappings and worse. Donating to some cause 5000 miles away does not meet this criterion.

5. Good government is a public good. Some altruist dollars and efforts need to go into maintaining the legal infrastructure that makes First World nations like the US First World. If I donate 1 percent to poor people in Kenya and 9% to keeping the American Dream alive, in the long run more dollars go to Kenya than if I put all my altruist efforts to Kenya.

6. There is a general limit to altruism: if you ameliorate all the effects of sloth and/or stupidity, you get a whole lot of sloth and stupidity. Utility increases in the short run, but falls off a cliff in the long run. Recipients need to do their part.

7. And this applies to nations as well as individuals. If I donate to save African kids from malaria and/or starvation, and they go on to start having their first of 12 kids at 14, I am creating more need than I fix. Helping Africa make the transition to modern standards of living is a worthy cause, but a first order look at the effects of charity can be incredibly misleading.

8. In any giving situation there are going to be grifters and unworthy needy who squander their gifts. They are easier to detect if they are local.

9. If a church or similar organization shows preference for its own, there is a supervision effect that isn't trivial. Real churches have Standards. If you have to give up sin, to stop being a drunkard, a druggie, or a player in order to get alms, there is a global increase in productivity.

10. Dividing your charity dollars is very inefficient. If you give $20/year to a charity, all you are doing is paying for their mailings to you. Choose a small number of causes.

-----

In conclusion, if you want to be an Effective Altruist, go for it! With my blessing.

But cut some slack to us Ineffective Altruists. Our efforts might be more important than you realize sometimes.

Expand full comment

After making several more superficial comments about the essay, I have thoughts about the tower/branching tree structure. I think Scott is on to something when he talks about disagreements manifesting at levels not representative of where the actual disagreement is.

I think the foundation assumptions are not quite the true root of the tree, and that there are significant disagreements prior to the 'foundation assumptions' listed there.

I would list the following as more foundational (largely based on an American pov):

It is morally justified to intervene in the lives of other humans. Sometimes this is an obligation, in other cases this is optional.

Interference can be positive or negative in aspect (aka intent).  If negative, it is obligatory to be narrowly focused in terms of person or actions acted upon, and narrowly focused in degree of action (in other words, proportional).

If positive, the obligation to be narrowly focused is not present, and may in fact be forbidden to be too narrow in focus. However, particularism of optional interventions is still permitted and in many cases celebrated.

The obligatory or optional nature of intervention is sometimes encoded in law but is more often moral in nature. Where encoded in law, obligatory negative interventions are even more narrowly focused and positive interventions are generally held to be more universal.

It is broadly understood that morality is an individual choice within a larger group of people who tend to share similar preferences. In other words, there may be a common understanding that intervention X is morally required but intervention Y is not, and individual opinions differ.

Obligations are bounded by physical reality, limited resources, and the rights of man. In other words, there is no magic money tree, and charitable interventions do not negate the rights of private property, self-defense, and self-agency.

It is almost always acceptable to intervene via attempting to persuade other individuals to change their moral stance on other interventions.

***

These are the foundation-level assumptions, and they are not universal, and by my read of both Christian tradition and the rationalist community, there are bodies of thought which hold overt physical (or even internal mental) intervention or even commentary on the actions of others to be largely forbidden.  An example: cloistered contemplative nuns who spend their days in the maintenance of the convent and prayer.  Or the zen student living in a hut pondering the universe.

It is only once we get past the hurdle of deciding that it's okay to intervene in the lives of other people that we start debating how, when, and why to do so.  There are multiple different takes there.  A lot of people have put a lot of thought into best options.  Effective altruism is a group of some thought bundles - bundles with internal contradictions, varying opinions of importance of different goals, and varying opinions on the acceptability of different means, motivations, and intents.  Also tons of math.

If I had to pick one particular way that I think EA differs from Christian charity, it's that ideal EA has a lot of math and ideal Christian charity has a lot of love. Despite many similarities in action, the two traveled apart deep into the tree roots/down the tower. Lumping them together because 'both give 10%' mistakes form for substance.

Expand full comment

As a disinterested observer: it sounds like the EA community is now a large enough (moral) community, with sufficiently many insufferable members, that people like to troll it? (Like CrossFit, maybe?)

So sounds like that’s the real objection here

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

If you start in a reasonable-seeming place and follow arguments where they lead and end up in an unreasonable-seeming place, you should suspect you made a mistake somewhere, and that it might have been a long way up the chain. It is valid and good to consider reasonable-seeming assumptions to be refuted by reductio ad absurdum, and to dismiss reasonable-seeming arguments when they prove too much.

I think this holds even when the chain of reasoning from the assumptions to the absurd/harmful conclusions is a lot fuzzier than a mathematical proof-by-contradiction. For example, I consider the mass murder by Stalin and Mao to be evidence against the core ideological claims of communism. They could still be right, but they empirically seem to lead to bad places, which is more likely in worlds where they're wrong than in worlds where they're right. So while I think the EA critics that you are referring to here are mostly wrong on the object level, the argumentative approach that you are complaining about them taking is basically legitimate.

[I currently donate 10% of my income (or consumption, whichever is higher) to AMF and seasonal malaria chemoprevention, and nonzero amounts to more speculative EA projects.]

Expand full comment

The obvious answer in that case is: OK, granted. Let's go down the tower and see how many levels we can make crumble until we finally reach granite-level solidity.

I'd argue the first levels: "we should help other people", "how much we should help should be a subject of discussion" are pretty solid, and "some ways of helping are more effective than others" introduces the first chink in the armour, since it strongly reminds me of "some things are more beautiful than others", i.e. while seemingly obvious, actually introduces infinite discussions.

The second level already seems extremely fragile, since

- "give 10% of your income" is already debated within the community, and one of the answers arrived at is "actually, focus on making as much money as you can, give none of it away while you can invest all of it and use compound interest to add to the pile, then give the whole of it away at your death in one big money blast".

- "giving to third world countries is more effective than giving to first world countries" only works if you think all progress won't be wiped out from the lack of robustness of societies in third world countries in the mid-term future. Best example to date: all utility gained from helping Sri Lankans develop has effectively been wiped out by government mismanagement. If one expects global shocks to markets and trade to continue, it does not seem best to help distant populations and effectively get negative total utility instead of helping locally and betting on better relative stability ensuring at least some positive utility in the long term.

- "utilitarian calculus (QALY)"... well, I'd refer you to a better writer than I who wrote https://slatestarcodex.com/2013/04/08/whose-utilitarianism/
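The give-now-versus-invest-then-give point in the first bullet comes down to simple compound growth. A minimal sketch (the $10,000 principal, 5% annual return, and 30-year horizon are illustrative assumptions, and this ignores any compounding of the charity's own impact from money given earlier):

```python
# Compare donating a lump sum now against investing it and donating
# the grown sum at death. All figures are illustrative assumptions,
# not claims about real returns.

def future_value(principal: float, rate: float, years: int) -> float:
    """Value of `principal` compounded annually at `rate` for `years` years."""
    return principal * (1 + rate) ** years

donate_now = 10_000.0
donate_later = future_value(donate_now, rate=0.05, years=30)

print(f"Give now:   ${donate_now:,.0f}")
print(f"Give later: ${donate_later:,.0f}")  # roughly $43,219
```

The catch, and what makes the patient-philanthropy debate non-trivial, is that money given now may also compound: a child saved from malaria today produces decades of downstream value that the investment account never captures.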

As an aside, and to anticipate the obvious response from the spicy essay, while we are not currently giving 10% of our income to charity, my wife and I are sacrificing a portion of our finances and time to charities, most generally local, and also Ukrainian.

Expand full comment

A look at the specific projects will tell you something about their more basic assumptions lower down on the tree. If you don’t like their projects, their values might not be a good match for your values, and you should look elsewhere to donate.

Expand full comment

If you force someone to logic their way through their moral intuitions you will turn most people into nihilists/egoists/deontologists, and they will stop being interested in charity entirely, except in whatever narrowly defined way their deontology may prescribe. Utilitarianism is fundamentally incompatible with 99% of people's moral intuitions; not in complicated exotic philosophical ways, but in ways people don't want to admit yet feel deeply, like "I enjoy giving money more to people who look like me."

Expand full comment

So your friends told you "you can't publish this essay, it's too spicy", so you went out and published it in the form of quotes surrounded by some commentary and suddenly it's okay?

Expand full comment

The simplest way I've found to convey the thrust of your essay is this: "EA is a question, not an answer."

The question is "how can we help others as much as possible?"

There are many possible answers. It's fine to disagree - for well thought out reasons - with those pursued by most self-described EAs. But so long as someone at least agrees this is the right question to ask, and then actually seeks out, reflects on, and lives by their own answer, in my book they're an EA and their critiques fall within the umbrella of EA discourse.

Where I disagree slightly with your post is this: answers can fall into two broad categories, giving more and giving better. Until the very last line, your essay emphasizes "giving more." But I don't think the call to give more is actually the crux of most EA critics' objections.

Nor is the call to give more at all unique to EA. Every nonprofit or volunteer group in the country - from church groups to Planned Parenthood to the local PTA - makes moral appeals to contribute or volunteer to help out in some way. And religious tithes go back centuries, as others have noted. In fact, EA is probably less aggressive in prodding people to give more than many traditional charities, which commonly take out TV ads and mail thousands of pamphlet solicitations, etc.

Given this, I don't think "you should donate at least 10% of your income to charity" is what provokes the commonest criticisms of EA in particular. I suspect many people who do not themselves give 10% of their income to charity would quickly concede that they SHOULD, especially those on the left. (I sometimes joke that EAs need to convince conservatives about the altruism, and progressives about the effectiveness...)

The more common objections regard what it means to "give better". Give more *to what?* Many, and probably most EAs answer this question in a way its critics don't like. And too often they confuse this disagreement with a rejection of EA as a whole - with a rejection of the question, rather than of one particular answer.

So in my experience, a better way to respond to criticisms of EA is to do three things.

1. Focus on the giving better. This severs EA from much broader ideological baggage about how generous we're obligated to be, and narrows the question to "how can we help others as much as possible, *with whatever resources we're willing to devote to that purpose*?" It also makes people less defensive about how selfish they are.

2. Help the critic clarify, in their own head, whether they're really rejecting the question as a whole. There are some people who do! Some people think they have no obligation to help others; or that they have stronger obligations to people in their nation or region; or that "the good" cannot be quantified because they're virtue ethicists or something. These people are not EAs. But if they think our question is the right question to ask, then...

3. Press them to explicitly articulate their own answer, ideally in writing; to subject it to public scrutiny and personal reflection and modification; and then to take their own ideas seriously enough to take action on that basis. Unlike in your post, this needn't necessarily involve giving 10% to charity. But it will be the very essence of EA, and then you can congratulate them for participating in the community they set out to criticize.

Expand full comment

Side note: I'm a little surprised by how frequently kidney donations seem to come up in EA discourse.

I thought one of the main points of *Effective* Altruism was that I, as a wealthy Westerner, can do a great deal of good at a relatively tiny cost to myself. I can easily spare $50 (I realize that isn't true of everyone reading this blog!), which can purchase, say, 100 doses of polio vaccine or 200 deworming pills. I just saved 100-200 children from polio or parasitic worms, at a trivial cost to myself! What a fantastic deal!

Kidney donation is the exact opposite of this. I have only two kidneys, so I can donate at most one if I want to live. Donating a kidney entails major surgery, which always carries risks such as infection. Even if the surgery goes perfectly, I'll be in pain and physically weakened for a certain amount of time, and I'll always be down one kidney, so if my one remaining kidney ever malfunctions, I am screwed.

For all this, the benefit is to a single recipient (can't cut my kidney in half and give it to two people), who will have to take immunosuppressive drugs for the rest of their life so that their body doesn't reject the transplant. So, it's not even a case of "a sacrifice on my part restores one person to full health," it's "a sacrifice on my part saves another person from death/lifelong dialysis, but does not restore them to full health."

The cost-benefit ratio is drastically different between kidney donations and financial donations to poor people. Why does EA focus so often on kidney donations? To speak EA language, is it *effective* for EAs to push for kidney donations rather than getting middle class and rich people to donate more money?

Expand full comment

Kidney donations are not about a single recipient. They are about unlocking chains of donations. A single undirected donation can result in dozens of people getting a kidney.

Expand full comment

Very late on this ball, so I'm going to just make a short point and apologies if it was already made:

"Then are you donating 10% of your income to charity?"

From what I understand of my tax bill, I'm coerced into donating 40% of my income to charity. Your moral hectoring doesn't convince me to give even more, especially when it comes from adherents of the crankiest, most absurd moral system ever devised by human beings.

Expand full comment
Comment deleted
Expand full comment

And I'd rather contribute to that than to fighting AI risk!

Expand full comment

Loved it. It's one of my personal strategies. Being of a faith (almost universally viewed as conservative), and donating 10% of my income to the charitable institutions of that faith, as well as having a quasi-altruistic career (career preparation/development, specifically serving folks leaving incarceration), it is mildly fun to ask stereotypically "progressive/liberal" friends what *THEY* do to help their fellows of their own free will. Provided the "charity" you choose to support isn't actively doing harm, I'm not going to quibble that you're 1) trying to do good and 2) doing good of your own volition.

Expand full comment

Or maybe the whole charity donation thing is just a scam to avoid taxes.

The beauty of being a billionaire with a "charitable trust" is that you pay 0 taxes on the money you put in it. You can control the charity forever - which also pays no taxes forever. The only requirement is that you donate 5% a year - which a moderately competent family office can achieve.

Now add in agenda promotion: not only do you get the lovely lovely tax breaks, you can now throw money to promote your own agenda.

Win win.

Expand full comment
Aug 25, 2022·edited Aug 25, 2022

I don't agree with "We should help other people" as a rule. The drowning child scenario cuts out for me just as soon as the drowning child is out of sight. I can say this with confidence because:

1. I'm fairly confident in most circumstances I would rescue a drowning child I came across

2. I do not attempt to save the lives of other children who may die

3. I'm on the internet, so I can engage with logic that makes me look bad

I don't argue that EA is "bad" or will not help people, even possibly including myself (eg. your kidney donation retort). I simply recognize that my brain will feel rewarded highly by one action and will not be nearly as highly rewarded by the other, especially as it becomes more vague than directly saving a life that will appreciate me as the savior. Surely if "the core of EA" is extrapolating from how people feel about X to the superficially similar Y.... then it becomes fair to respond that most people obviously feel quite differently about Y.

Presumably the retort to this is that my world devolves into chaos? It's basically the world we live in. As long as people don't give it much thought, it functions. It would be pretty funny if the real x-risk was that telling everyone about EA made them realize that they don't give a fig about the out group (emperor has no clothes!), resulting in some incel killing everyone with a pandemic.

Ironically(?) this is also the solution to AI (assuming you can prevent it from modifying its own reward function, which is the real problem): ensure that the reward function includes a steep drop-off for both repetition and distance from goal, and you approximate humanity.

Expand full comment
deletedAug 26, 2022·edited Aug 26, 2022
Comment deleted
Expand full comment

Isn't morality just an extrapolation of our internal motivations? That's what the "drowning child" scenario seems to say. Otherwise where does it come from?

Expand full comment
Comment deleted
Expand full comment

Sure, you can define morality in terms of game theory too. It still doesn't give you EA does it? It gives you the drowning child, but then game theoretically the lives of the other children are surely worth less?

Expand full comment
Comment deleted
Expand full comment
Aug 26, 2022·edited Aug 26, 2022

Ah. Great comment, I like your moral framework proposal, so minimal argument from me there (I replied).

My argument was only against morality/EA as founded on the drowning child.

"Morality" and "moral obligation" are context-heavy words like "god" and "love" that must be used carefully.

Expand full comment

Not arguing with the rest of it, but I am not fond of the implication that only people who can afford to donate 10% of their income should be allowed to criticize EA. The more people get cut off from criticizing a group, the bigger the chance of collective blind spots.

I am really not trying to be difficult here - this does not come from a place of blindly assuming I couldn't possibly because wow, that's a lot of money. I know my budget, and I do a yearly assessment of how much I can afford to give. I am clinging to a rather painful 1%, because not much of my income is disposable. (To compare, my main hobby gets 0.5%.)

Expand full comment

Thanks. May I publish a version of this post in Portuguese on <80000horas.com.br> or <altruismoeficaz.com.br>?

Expand full comment

It's hard to imagine how this essay is too spicy to publish. Is this really all you could share of it?

Expand full comment
Aug 27, 2022·edited Aug 27, 2022

I wouldn't want A as a friend.

Expand full comment

I consider this to be essential context for the FdB article: https://freddiedeboer.substack.com/p/behavior-is-the-product-of-incentives

On the EA Forum, Linch commented, "[...] from the article I can't tell if Freddie considers e.g. South Asian air pollution, or rodent birth control, or interpretability for AI safety, or far-UVC light inactivating viruses, or pandemic-proof shelters, as done by reasonable, professional people, in reasonable, professional organizations, to be part of the problem or part of the solution. [...]"

I think FdB has very weak incentives to write helpful answers to those questions, and we ought to keep this fact in frame.

EA doesn't have a novelty problem; the entire modern world has a Twitter problem. As you said in Theses on the Current Moment, "There's an oversupply of tweeting and an undersupply of everything else." There are people (both EAs and not) whom I find highly thoughtful in person and in long-form...but *so* obtuse in their tweets. So I give attention and consideration to the former and not to the latter. I can't help but imagine an alternate world where this posture is normal--in such a world, I think FdB wouldn't have had much incentive to write the "Novelty Problem" article, and (correct me if I'm wrong but) I think you would not have seen as much need to write this post.

Expand full comment

Enjoy Scott Alexander instructing you how to (self) motte-and-bailey as well as whataboutism any criticisms of the EA movement.

Expand full comment

Speaking fully within a USA context at this point

> Of course the government exists in every industry but healthcare is just overwhelmingly dictated by regulation to the point where the prices for identical goods and services can vary about 100's of %.

Is it a lack of regulation which allows for this? Free market... I'll come back to this later.

> Non-Profit isn't the same thing as "Not for profit".

It is worth nitpicking the two. There are non-profits like healthcare providers, who are normally non-profit due to historical, legal, and/or tax reasons.

https://en.wikipedia.org/wiki/Non-profit_hospital

Not-for-profit companies that could be for-profit are less common, and the for-profit slice comprises a large amount of software, technology and drug vendors which are among the greatest expenditures next to labor. Most major insurers are also for-profit.

As I noted earlier, the lack of government regulation around these contrary economic incentives and structures is a huge problem. Prime present-day examples are insulin, oncology drugs, and nursing labor.

> Markets are just really good at allocating resources, aligning incentives, and validating what consumers actually want

Providers want software that works, and they don't get it, so they hire more people to handle what the software should.

Providers want insurers to pay on time and with less hassle, insurers keep making it more and more difficult.

Patients want to see their providers on time and with their full attention, providers are often too busy or too distracted to do this. Often the patient believes they're getting this level of service when they're not.

Patients want to know what they'll have to pay for and if insurance will cover it up front, providers can talk to insurance and ask but nothing is binding until after the services are rendered, even if the patient is under anesthesia.

Providers are burning out. The average life span is decreasing. People don't get the preventative care they need. Chronic care costs get milked. Acute care costs get spiked. Some companies get wealthier, some politicians get lobbied, and then it takes a decade to rein in prices on insulin (and just insulin), whose inventor essentially granted free patent access in order to make sure insulin was always available. Non-profits get roped into things and have to pass along prices to consumers, since they normally have only a few months' worth of capital to burn.

So I don't find your statement on markets to be true in general, e.g. look at the world, but for healthcare in particular I find it to be quite quite wrong.

Expand full comment

I feel disgust towards "effective altruists" for the exact opposite reason; they might fly to a "very remarkable conference" in San Francisco from Europe with free money without caring about doing good in smaller situations.

Expand full comment

That prevented me from donating for a while, especially because I was a contractor (paid by project) with kind of shitty insurance and no PTO. But thanks to lobbying in the US (some funded by EA orgs), that has recently changed.

I donated through a program that paid for travel and expenses as well as matching pay up to $2000 per week for taking time off work. It also gave me vouchers that will put friends/family members on the top of the kidney donor list if they need a kidney in the future, plus prioritization and free medical care if I ever need a kidney or have complications, and they arranged everything. This is the program https://www.donor-shield.org/kidney-donors/

Expand full comment

I don't donate 10%, because I simply don't care about anyone who isn't a personal friend or a blood relation (or who isn't my cat).

Everyone else can burst into flame and die screaming, and I care only as it affects me and mine as defined above.(*)

As you may have gathered, I'm not any sort of a utilitarian(**)

(*) I'm not speaking theoretically here. I stepped over dying famine victims in the street as a child and IIRC it didn't bother me one little bit.

(**) To be technical, I'm an ethical nihilist, the most relaxing and logical of personal worldviews.

Expand full comment

I think the argument over charitable giving can be broken into two, almost independent questions: (1) how charitable should you be, and (2) given a fixed answer for #1, how should you allocate your donations? The Q/A's seem to mix these two questions, and use people's answers to #1 as a pretext to dismiss their arguments on #2. It is liable to work as an argument ad baculum, because many people are ashamed of how little they give -- but the A's leave the substance of the Q's unrebutted, to the extent that the Q's aren't made of straw.

Expand full comment

> I have an essay that my friends won’t let me post because it’s too spicy.

The committee that decides what you post has only one voting member. Also, in this context, "spicy" is a euphemism for "uncomfortable, expensive, and/or dangerous", and the euphemism is designed to obscure the fact that some true things take more courage to say than others.

Expand full comment

Is condom supply to Africa (against unplanned pregnancies and STDs) effective altruism?

Expand full comment