
Charity is bad, ergo Effective Altruism is a net negative compared to the counterfactual.

Aug 24, 2022·edited Aug 24, 2022

I would argue that "Effective Altruism" as a distinct entity only starts at maybe the "Cause Prioritizations" level of your tower of assumptions. The two layers below are shared by so many others -- ranging from for-profit insurance companies to government bureaucracies to religions -- that they can't be fairly claimed to be part of EA. "Cause Prioritizations" is also the level of the tower at which people start to raise serious objections to EA. That is not a coincidence.

If all you want is for people to do more from the lowest two levels of the tower, that's fine. But that is not EA, and claiming it is so is very close to a motte and bailey fallacy. People can donate 10% or do charity work in less developed countries because they are observant Christians or the government gives tax incentives or whatever without touching anything resembling EA at all.

If you want my spicy take on this, I think you added those lowest two levels purely as a defensive measure to protect EA. Why not add layers underneath? There are even more foundational assumptions like "Suffering exists", "Cause and effect exist and we have free will to affect it" or "We exist". These are also necessary. But you stopped there because the motte would be too obvious then.


This captures much of my thinking toward EA as well. A lot of the criticisms of EA start with a bit of throat-clearing: "The good parts of EA are banal..." But they don't seem all that banal to me. Perhaps they are in some sense obvious or hard to argue with, but that doesn't prevent them from being both profound and underappreciated.

That said! The fact that so many criticisms of EA take this form does suggest that the stack is itself a problem, because it makes EA less, well, effective than it otherwise could be. That is, even if the idea of giving what you can is logically unrelated to, say, esoteric calculations of the QALYs of ems living in the Horsehead Nebula, these ideas are de facto linked by the community itself. As a practical matter, the movement may need to wrestle with this as it grows up.


What exactly are you(r friends) afraid of if you posted the full thing? IIRC you already get a bunch of hatemail, and you made the point back in Why Do I Suck that people can give you money but they can't easily take it away from you.

If it's wrong, sure, probably not worth posting. But otherwise...


This is a really important essay.

If you’re involved in EA it is very easy to forget just how *weird* many of the current debates in EA are. This makes it harder for new people to join the community as the core ideas, which are much more appealing to a beginner, get lost.

See this piece for a recent example: https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/

PS please post the spicy essay


My problem, and I'm sure you have an "A:" for me, is that I don't trust telescopic philanthropy. And I don't trust calculations that equate all humans and all lives however distant they are from you, and sometimes make a virtue out of the distance (e.g. valuing 100bn future humans).

Look after your family, your friends, your town, and your colleagues. Donate to good projects you can see with your own eyes. Help your friends because you know better what they need than a stranger. This may not be _effective_ in the EA sense, but if everyone did this we'd usher in the kingdom of heaven, and if everyone did EA it's not obvious to me that we don't accidentally paperclip maximiser ourselves even without the AI!

But I can't deny I make these arguments post-hoc because I don't like the smell of EA.


How does one distinguish the Tower of Assumptions from the Motte and Bailey? Is there a distinction within the arguments themselves, or is it entirely a question of some combination of social attitudes present among those involved? How do we know which one is more relevant here? (I am not sure where my own biases lie here.)


I guess the lowest levels of the tower are so uncontroversial that nobody really disagrees with them (or if they do, they can't articulate why). The higher levels are where the discussion and debate occurs.

Consequently, this makes it look like EA only consists of the high levels, because that's all anyone seems to discuss.

Imagine a race of aliens watching humanity during the Cold War. "All the fighting is happening in places like Vietnam and Afghanistan, and never in Washington and Moscow. I guess Vietnam and Afghanistan are the most important states in the world..."

No. They're not. But Washington and Moscow were so heavily defended that the fighting could only take place elsewhere.


I feel like this post was at least effective at helping me figure out where I depart from the EA community. It's not that I disagree with the base premise. It's that I already believed the base premise, and was already donating >10%, before I heard about them.

Maybe for some people EA at least helps them focus on the idea of altruism, but for me the movement has nothing to offer. It just feels like a distraction. Which is fine. Not every movement is for everyone, and I'm sure EA is doing some good work. But my path lies elsewhere.


Nuclear war is not much of an X-risk. Would take over 20 degrees of cooling for many years to actually-in-real-life kill the species - you need the equatorial seas to freeze solid.

Pandemics are flashy, but I think the focus on disease-causing organisms when assessing bio X-risk is a mistake (it's very relevant to bio GCR, though). For X-risk I'd be more worried about Green Goo - a synthetic alga that doesn't need phosphate, fixes nitrogen, is more efficient than RuBisCO, and isn't digestible would kill us all by starvation.


The "spicy" essay doesn't seem great to me. Do I need to be giving as much to charity as you before I can critique your choice of charities? Morally maybe, but intellectually no.

I believe that giving lots of money to charity is a good thing. I also don't do it. If you do then fine, I bow down to you as my moral superior. But I also have an issue with the fact that your charitable giving is 100% devoted to buying raincoats for ducks, because if you're going to be making that sacrifice anyway I'd really prefer that it was spent, yknow, effectively.


At bottom it all rests on this normative assumption that one 'should' help 'other' people. Personally, I have yet to hear a convincing argument for why I should help anyone else if I don't want to. But then again, that may just reflect my 'lived experience' as they say these days (whatever that means - is there an unlived experience?). Now, if effective altruism is about how you can be effective once you accept the notion that altruism is something you should do, then you have to keep going and answer the questions of effective for whom, and effective in what way? Can these be answered without falling into 'should's?

Without the full essay, I'll hold off writing more or actually trying to make my musings clearer.


The gigantic flaw in so-called "effective altruism" is that it ignores the extreme and often grievous harm done to society and the planet in earning the income, 10% of which is donated to charities.

Most people only get an income if they do something for rich people. Most of the things rich people want done enough to pay for them to be done are extremely damaging both to the world and to the economically disenfranchised majority. If you have a corporate job, you're probably a hit man--the system is just designed to prevent you from seeing it.


This is why labels are dangerous. As you say, EA has become a movement. For many critics, that movement (and perhaps leaders, perhaps projects) *is* EA, and it's what they critique.

To me, EA is just the 4-5ish bullets near the bottom of the assumptions tower.

The whole trope of EA criticism is almost entirely confusion around this point. *Which* bullets should or shouldn't be in or out would be useful criticism; forgiving critiques of the label would be more useful in the defense.


Well this is an easy one. If we have any altruistic obligations, they are far less expansive than EA would allow. Certainly not crazy burdens like 10% of my income or my whole choice of career.

Yes, I would save a drowning child. No, I don't have the obligation to live a significant part of my life for others. Attempting to scale up the hypo is unpersuasive. In fact, I recall a recent blog post arguing convincingly how philosophical arguments scaling up normal-sounding hypotheticals to counter-intuitive conclusions are generically unpersuasive.

Besides, systemic change really is more important than charity. And yes, I work for systemic change, by donating and volunteering for the mainstream centre-right political party in my country.


If I loved a Bible study because it involved lots of good discussion of context and translation details and delving into the nitty gritty, but over the course of a few years it got bogged down in Revelation trying to resolve the premillennialism vs postmillennialism debate and became almost exclusively about that, I would be deeply irritated.

The deep debates around the upper levels in your tower are great! They're discussions worth having for those heavily engaged in the topic. But those discussions belong internally, not public-facing front-and-center. Every normie that associates EA with AI risk and animal welfare rather than "seriously, just give ~10% to some indisputably useful things" is a failure by EA, not because AI risk or animal welfare are *bad* causes, but because convincing more people to give more to a range of actually useful things is much more valuable than getting people already bought into EA to reallocate from one set of useful things to another.


Some EA ideas are hard to deny in the abstract (of course more effective charities are better than less effective ones), but the intellectual virtue of the movement is that it takes those ideas seriously and tries to apply them, even if it leads to weird conclusions that are out of step with broader cultural norms, etc.

There are many cases where "We should do X" is a socially desirable statement to endorse, while actually trying to do X is controversial.


Thank you. This is a really helpful overview. I've encountered EA primarily from the AI side, so it's good context to see some of the foundational assumptions laid out like this.

Something I've been wondering re assumptions is how fundamental the assumption of infinite economic growth is to the EA movement - because this seems to be a blind spot.


I'm an admirer of Effective Altruism--including its willingness to creatively consider some truly unconventional cause areas!--who thinks it's often unfairly maligned.

But, to me, this argument seems motte-and-bailey-ish.

How does one distinguish cases where one should judge a movement by its generally-uncontroversial broad foundational ideals, and where one should judge it by its more specific and controversial claims, policies, institutional culture, and so forth?

E.g., the general idea that one is morally obligated to donate a significant portion of one's income to the poor was not invented by modern Effective Altruists--see zakat, tithing, the Buddhist 'perfection of giving', the arguments of the Church Fathers that *all* one's surplus wealth belongs to the poor as a matter of justice, etc. (Which is not to deny that in contemporary society Effective Altruists are often quite unusually good at *living out* this ancient moral ideal.)

But, "if you think people ought to donate 10% of their income to the poor, you're an Effective Altruist" (and therefore shouldn't criticize the actually-existing EA movement and institutions) sounds a bit like "if you think women should have equal legal rights, you're a feminist" (and therefore shouldn't criticize the actually-existing mainstream self-declared feminist movement and institutions).


As far as I can see there is long-termism and short-termism in effective altruism, but I see very little mid-termism. I have never seen any effective altruist seriously addressing the question: What will the children saved from malaria and intestinal worms do in 30 years?

1. Will they eradicate malaria and intestinal worms?

2. Will they cure their own children's malaria and worms?

3. Will they have children who get malaria and worms that need to be treated by tomorrow's effective altruists?

If the answer to question 1 is yes, basic effective altruism is excellent. If the answer to question 2 is yes, effective altruism at least fulfills its purpose. If the answer to question 3 is yes, effective altruists need to make sure that there is a steady supply of effective altruists in 30 years. Otherwise the result of their actions will be even more suffering a few decades into the future.


I think you might be Making Up The Wrong Guy To Get Mad At On Here. The IRL circles I run in are progressive, and that's where I've met the most passionate critics of EA. They donate their income/time/labor to progressive causes, they disagree with efforts to redirect attention to other causes, and they absolutely believe they're agents of systemic change. Rightly or wrongly, these wouldn't feel like gotcha questions to them.

I think "keep going down an assumption floor until we don't disagree anymore" is unpersuasive, because if you go down enough floors of any moral or political philosophy, you get to uncontroversial tenets like "happiness is good," and it's in the distinctions around prioritization that most people actually decide whether their desire to promote happiness leads them to pursue a progressive, conservative, EA, whatever, agenda. It's in that last prescription - "more effectively" - that there's room for lots of aspiring altruists, and not just hypocrites, to disagree.


Pff, Oxford Study Bible is best Holy Buble. Everybody Knows.

One lingering question I have about EA is...does it matter what poor people do? If one has no realistic hope of attaining an altruistic career, nor can easily afford 10% income donations without significant QoL loss...is there still a moral obligation? I know I'm still in the top 1% of global income just by virtue of living in the USA, and yet...10% of my __annual__ income still wouldn't total $5k. That's more than a year's donations to not even possibly save one life! Do such rounding error amounts even matter, morally?

I do fully agree many EA criticisms are aimed at branches rather than roots, though. That doesn't seem like a spicy assertion at all. A sort of...Weak Men Are Superweapons, at best.


I am with the people who say that this is a pretty basic Motte and Bailey.

EA doesn't even start with a foundation of "give". Or even, "help" (because "give" is pretty oriented towards people with dough.)

EA's foundation is "be right" and EA adherents are willing to spend a tremendous amount of energy disagreeing on what "be right" means. It's as if the main thing is the effective part, and not the altruism.

And this is oh so well exemplified by the niche ideas, the emphasis on 'spend it now, give to charity later', and frankly the piss poor record of EAs doing anything but giving pocket change.

As a Catholic - and one going through a struggling phase - I know very well what this looks like in my faith and in my own life. I moved last year and discontinued a bunch of local charities. Now it's a year later and I realize that I am down at 3% of my income in tracked giving. My faith has a lot of fancy churches, a lot of opportunities for people to dress up, and a ton of self righteousness. It's still easier to put money in the collection plate than to work on/with the neighbor with a Netflix subscription and no car insurance. And that's nothing compared to individually living one's life like a disciple of the Man, day over day.

So I get that doing right is a struggle. I still think EA is not doing the Effective part well, and the Altruism even worse.

Maybe this is a part of the human condition, to be forever part of groups with high ideas and lousy, delayed, distracted execution.


Enjoyed this enormously. Not least because I can’t wait for contra contra Freddie deBoer!


I'm gonna make a fresh and more explicit thread riffing off yesterday's post and a lot of today's comments.

Are you sure you're not going to burn down all the museums to get African population up to 5 billion?! Seems a little repugnant to me...


When I look at the tower graphic, what I see is a heavily-fortified Motte at the bottom, followed by increasingly daring Baileys. :)

More seriously, I think it reveals one thing about the Motte-and-Bailey trick pulled by other ideologies: that the harmful part is not the structure itself, but rather just in the *deception* involved. If the other "isms" simply laid out their assumption structure in a similar way, and accepted that many people only go for the first rung -- there would be no problem.


> This is also how I feel about these kinds of critiques of effective altruism.

Well, how I feel about critiques of effective altruism at this point is that I've been terribly rude. I thought I was talking to people who were disinterestedly seeking the maximum way to do the most utilitarian good in the world. But in fact I walked into a church and started critiquing their religion. And that's unconscionably tone deaf and rude in my opinion so I've stopped doing it.

My point was, and is, and has always been and will always be how the blindspots in the movement mean that even by its own values it is not living up to its full potential. But I now see that the irrationalities in the movement are load bearing. They aren't points of failure but necessary irrationalities and shibboleths to maintain the community. And at that point I'm left with the stark question of whether I want the EA community to exist at all. And that's never been a hard question for me: Yes, I do want it to exist. Maybe even my definition of "full potential" is unsustainable. Maybe every such movement requires some faith.

So it goes. So it goes.


> Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

> A: Are you donating 10% of your income to normal, down-to-earth charities?

> .....

> Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

> A: You should donate more effectively.

Is this going to end with me being morally required to let a seagull eat my eyeballs? Because in that case, I'm going to follow some good advice I encountered recently, and bail out at step 1.


Is the argument "if you don't give 10 percent of your income, then you have no right to an opinion" valid for all nationalities?

In America, charity fulfills many functions that taxes are supposed to fulfill in European welfare states. So for Americans, giving 10 percent can be seen as a form of self-taxation: Americans who give 10 percent of their income to charity give themselves a tax rate that approaches continental European levels.

Are people who pay a lot of taxes allowed to answer "No, but I give 3 percent of my income to charity" and continue the discussion? Or are they always out of the game?


If “functional” EA just boils down to donating 10% of your income to make the world a better place, why is this a significant movement that deserves media and intellectual attention? That’s called tithing and it’s been around for millennia.


If the lowest level is just plain old charity--which is thousands of years old, see religious tithing--what's the big deal with EA? There's going to be endless and unproductive debate about what the most effective charities are because that is inevitable given that individuals will have completely subjective criteria for what counts as "effective". Maybe an individual would prefer that some percentage of his donation goes towards religious instruction/proselytizing in addition to mosquito nets and medical treatment.

Frankly, I don't have a problem with that. Just giving some amount of money to charity is, in my book, good enough given the obvious alternative.

Aug 24, 2022·edited Aug 24, 2022

There exist moral systems in which donation is morally bad, and the kind of donations that are "efficient" from a utilitarian PoV specifically so.

I don't think anyone here subscribes to them, but in the same way nobody here subscribes to racism: there are 'residuals' of a system that surrounds you which remain with you at a subconscious level.

In that sense donating to less effective charities might be an easier barrier to cross.

Not to mention less effective charities usually trade back status, so you might even be able to start donating that way and trick your brain into thinking it's something else.


As an unrelated point of interest: how much does 10% of income actually cost once you include the US tax deduction for charitable giving?

Aug 24, 2022·edited Aug 24, 2022

As a child, I would put a dollar of my pocket money into the collection can, and it would make me feel good.

As a teen, I recognised that a dollar was not a lot of money, so I would put $10 to $20 into the collection can, and it would make me feel good.

As a young adult, I had read a bit about effective altruism. I recognised that per Yudkowsky the developing world is a pit of suffering that we still have an obligation to fix, that per Peter Singer standing by and doing nothing is as bad as letting a child drown, that giving in a suboptimal way is also as bad as letting a child drown, that every QALY could theoretically be quantified and measured and weighted against me in the final moral balance, that I had a strong moral obligation to do as much as possible, and that falling short was literally killing children. Whenever I saw a collection can, I put a socially appropriate amount of change into it (LessWrong had told me about virtue signalling) and felt slightly guilty. Nothing was good enough and everything was a reminder of the crushing obligation. I paradoxically gave less than I ever had.

Now I just ignore EA and do whatever I want. I no longer believe we should give a hypothetical stranger from another galaxy, planet, continent, or city as much moral weight as the people around us. This has somehow led to taking a high-leverage job helping developing countries, which is what EA apparently wanted anyway.


I’d be interested in what people think of philosopher Larry Temkin’s critique. He used to be very pro-EA but now is not so sure about certain parts, although I think he’d still advocate that we should do more, e.g. 10%+, today. (So I guess the upstream argument for helping that you make is still intact.) He has many more disanalogies with Singer’s pond now.


Temkin argues:

in favor of a pluralistic approach to aiding the needy, according to which there are a host of normative reasons that have a bearing on the nature and extent of our obligations to the needy, including outcome-based reasons, virtue-based reasons, and deontological-based reasons.

Rather than a narrow “do most good” EA approach, a decent person should be open to a wider range of moral consideration such as being virtuous, acting right, acting within permissible bounds, and in promoting as much good as one can. Many of these ideas are compatible with wider EA thinking.

To elaborate on Singer’s drowning child analogy, Temkin argues we could consider whether those needing help are members of one’s own community or family, how many intervening agents there are between oneself and the target needy (in the pond, there are no intervening agents), and whether one is actually saving lives as opposed to defraying the legitimate costs of intervening agents.

Temkin further considers internal corruption in intervening agents, and external corruption in the environment and countries where the needy are.

The case for aid may differ depending on the innocence of the needy, and on who else benefits from intervention both directly and indirectly; and to the extent warlords, tyrants and corrupt regimes may benefit or fail to change in response to their people’s needs. There is also an incentive problem, in that aid agencies have reasons to cover up such corrupt behaviour. There is a further problem in aid agencies displacing local talent; and the difficulty in identifying successful projects that will replicate in different contexts.

Foreign interventions may invoke morally problematic psychological attitudes, show insufficient respect to local people and customs and undermine the interests and autonomy of local people.

Taken together these weaken the case for giving to international aid agencies, and Temkin highlights a real world example in the case of Goma.


Q: Why don't you ignore those friends, and post the spicy stuff? I want to read it!

A: Are /you/ writing spicy essays?



This seems ridiculous to me, Effective Altruism (the movement) obviously seems located further up the tower. This is the kind of argument you'd ridicule from other ideologies, a combination 'political correctness is just being nice to people,' and 'how dare you criticise this art when you couldn't even paint it.' It feels defensive, in the way that in-group defenses whose hackles get raised are often written much more to reassure the in-group than to persuade the out-group.

For the record, since it's apparently necessary to argue from authority here, I've devoted my whole career to doing 'altruism effectively', gone through a lot of trial and error on what actually works, and am currently working exclusively on projects in public health in the developing world; I genuinely believe my current work saves lives and makes good use of my skills. I donate to charity, but less than 10%. My life's contribution to altruism is stably somewhere between 0% and 100% as you suggest.

Effective Altruism, at least when I see bits of the community, almost never struggles with the kind of questions that matter on the ground – what's the effective way to deal with a semi-corrupt government or office politics even when the people involved mean well? Effective Altruists always seem to conclude that the Important Questions are the ones they were interested in anyway, the same way opera lovers direct their charity to the opera, or Harvard alums direct it to Harvard.

EA spaces often feel to me like people who are really into dragons, so they ask themselves 'what kind of dragon would be most dangerous?' and start preparing anti-dragon shields and arguing over why other people's anti-dragon shields aren't optimal. Point out that dragons don't exist, and they say 'well what monsters are YOU fighting against?'. And hey, maybe some discoveries come out of this, but don't pretend you're not just doing what you enjoy and making a weird subculture out of it.

There have been a few instances where EA has been useful, like getting the debate about cash transfers in the mainstream, and God knows plenty of institutionalised aid is useless as a chocolate teapot. But I don't see the point of engaging with a community where you need three levels of sanewashing to get to something useful. If you want to look at my work and say 'Ha! Gotcha! You're actually doing EA,' fine, but claiming members as adherents when they themselves refused the label is more than a little odd.


I give to charity but I don't actually have the expectation that my charitable giving will actually make the world a better place. I think that in terms of improving human lives the single greatest contributing factor of the last 40 years has been foreign investment. In the 1970s South Korea's largest foreign export was human hair. Then along came Phil Knight and Nike. Shoe factories not only employ the locals, they also require investment in local infrastructure to transport raw materials to the factories and finished goods out, plus the resources required to install and maintain industrial machinery and so on. Now of course South Korea is largely known as the source of Samsung phones and Hyundai cars, a story that is being repeated in places like China and India. From that standpoint charitable giving is irrelevant compared to the willingness of millions of first world consumers to buy cheap goods produced in poor third world economies.

So why do I give to charity? To tread water. Capitalism and technology will eventually transform third world economies into something that looks more like the developed world. How many people starve to death in first world economies like the United States or Germany? Until then however there are still people who face starvation in the third world. My completely arbitrary hope is that anything I donate to charity will a) feed them and b) not do too much damage to the local economy while we wait for these transformative forces to do their work.

Aug 24, 2022·edited Aug 24, 2022

I think the problem is that most people don't agree with the second level of the tower but won't admit it even to themselves because it's politically incorrect to be that parochial. If I'm honest I would probably value each QALY at a discount of around x10 for each of these "circles" (rough estimates based on introspection ofc, especially the lower levels are really hard to estimate):

Children, wife: 1

Parents and brother: 1/10

Other close relatives and friends: 1/100

Acquaintances, people I speak to, distant relatives: 1/1000

Distant acquaintances: 1/10 000

People I know of: 1/100 000

People I have something in common with (nationality, profession, some other kind of community I feel attached to): 1/1M

Strangers in a society similar to mine: 1/10M

Total strangers in a different society: 1/100M

So to forego a new $50 toy that gives my daughter an hour of quality-adjusted life, it would have to be worth 10,000 QALYs for total strangers. Just knowing their story would reduce that number to 10 QALYs, which may be in reach in some cases but maybe not that scalable.
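A quick back-of-the-envelope check of that 10,000 figure, taking the 1/100M stranger discount above at face value (a sketch; the hours-per-year constant and variable names are my own assumptions):

```python
HOURS_PER_YEAR = 24 * 365.25  # roughly 8766 hours in a year

# Rough introspective discount for "total strangers in a different society"
stranger_discount = 1 / 100_000_000

# Forgoing the $50 toy costs the daughter about one quality-adjusted hour,
# i.e. a small fraction of a QALY at full (undiscounted) weight.
daughter_cost_qalys = 1 / HOURS_PER_YEAR

# Stranger benefit needed to break even under the discount
breakeven_stranger_qalys = daughter_cost_qalys / stranger_discount
print(round(breakeven_stranger_qalys))  # prints 11408, on the order of 10,000 QALYs
```

The same arithmetic with the 1/100,000 "people I know of" discount gives about 11 QALYs, matching the "knowing their story" figure above.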

So if this is true for many people I would advise effective altruists to calibrate the shaming to a level where people prefer to donate some considerable amount rather than question the lower levels of the tower.


"When people say things like “I think AI risk is stupid, so I’m against effective altruism”, the two halves of that sentence might both be true, but the “so” joining them isn’t."

As stated, this is definitely wrong. While those people's sentiment might not be logical, it can still be true. As we know, people aren't rational, and having a negative emotional impression of the higher tenets of EA can absolutely dissuade them from the basic tenets as well.

I know many people who got into EA, discovered that it disagrees with them about e.g. the importance of systemic oppression, and have then stopped thinking about QALYs or donating 10 percent of their income. In a way it's emotionally all or nothing, because there is no "EA but with systemic change".

While the post doesn't outright say it, it strongly insinuates that people just bring up these objections as an excuse because they don't want to donate 10 percent of their income.

In many cases this is simply not the case, and it would be very toxic for our movement to treat criticism like this.

Despite this, I still think it's important to finally acknowledge that some people genuinely don't want to donate 10 percent of their income, and that is at least part of the reason they are looking for more palatable criticisms. Nevertheless, their objections can still be at least partly honest.

User was banned for this comment.

How well do the responses in the spicy essay work in practice? Doesn't the other party usually respond with a variation on "I am too weak, poor and busy for altruism because rich people stole my money and labor, and EA helps them to do it, and the good image of EA is vital to the operation, which is why I'm trying to undermine it"?

Aug 24, 2022·edited Aug 24, 2022

I think helping, directly or indirectly, to grow the population in the poorest countries in the world has a strongly net negative long-term impact on the world. I think EA is bad for supporting projects that help accomplish this.

I'm not obligated to donate any amount of my income to any charity in order to make this criticism (except perhaps to some sort of organization that endeavors to reduce population in poor countries??), as I don't think the money is wasted or needed more somewhere else. I think the charity is doing harm and should be stopped.


"I have an essay that my friends won’t let me post because it’s too spicy."

This is why SSC isn't what it used to be.

Aug 24, 2022·edited Aug 24, 2022

Jenga tower, motte and bailey, same thing, different metaphor. The thing itself is not changed by changing the metaphor, although it may change what you notice about the thing.

I want to introduce another metaphor, a road. Scott's image can be read as that road, starting at the foot. This road is broad and easy, easier still as, unlike the diagram, it gently inclines downwards. The road is paved with good intentions. They're written right there in the diagram. This makes it very slippery. Each step taken cannot be taken back, for Singer's Drowning Child stands always at the start, and the further you go the larger she looms. To turn back is to hold her under the water with your own hands.

The farther along the road, the narrower it becomes and the steeper it slopes downhill, until it meets the Altruism Event Horizon which rips minds apart.

Scott's diagram covers the outward activities. Here is my list of stations on the road describing the internal arena.

1. You must prefer good to evil.

2. You must prefer a great good to a small good.

3. You must prefer the greater good to the lesser good.

4. You must always do the very best thing you possibly can, limited only by your ability to discern it.

5. It's a theorem! You can't argue with a theorem!

6. This world is a bottomless pit of suffering! What are you doing about it? Right now?

7. While you're sleeping, people are dying!

8. New car? How many dead babies did it cost?

9. Resting? Revealed preference!

10. Recreation? “I sometimes hear people say, as an excuse for professors going to doubtful places of amusement, ‘You know, they must have some recreation.’ Yes, I know, but the re-creation which the Altruist experienced when he was born-again has so completely made all things new to him, that the vile rubbish called recreation by the world is so dull to him that he might as well try to fill himself with fog as to satisfy his soul with such utter vanity! No, the Altruist finds happiness in Altruism—and when he needs pleasure, he does not depart from it.”

11. "O feet happily chained which are walking in the way of salvation!"

12. "You yourselves are the sacrifice!"

Some notes to these, to indicate that I am not piling straw:

(1)-(4) That's what the words "good" and "evil" mean. You can argue, and some do, that they do not mean anything, but if you believe that, then this comment is not addressed to you. You have immunity to all forms of moral suasion. More moderately, you can argue that it is right to give more concern to the drowning child in front of you than the remote child dying of malaria, either on practical grounds (see comments elsethread on "telescopic charity") or moral grounds (it is right and proper to attend first to one's own circle). However, the first of these leaves you squarely on this road, while the second is one that, strangely enough, I have never seen anyone put up a reasoned argument for.

(5) refers to the theorems of utility theory.

(6) references Scott's essay on Bottomless Pits of Suffering: https://slatestarcodex.com/2014/09/27/bottomless-pits-of-suffering/

(7) is about scrupulosity, a dysfunctionally obsessive concern that one is not doing enough good, which has been much talked of in EA and adjacent circles. I have not heard of anyone solving this problem.

(8) references Scott's suggestion of the currency of dead babies: https://web.archive.org/web/20161019200116/http://squid314.livejournal.com/2008/11/29/

(9) See Robin Hanson, passim.

(10) is adapted (he did not use the word "Altruist" but another) from a sermon by Charles Haddon Spurgeon, a renowned Calvinistic preacher of the 19th century, still read by those of that faith. Also a prolific one — this is from volume 82 of his collected writings. Have some more: "The fact is that man is a reeking mass of corruption. His whole soul is by nature so debased and so depraved, that no description which can be given of him even by Inspired tongues can fully tell how base and vile a thing he is!"

(11) and (12) are from the words of St. Cyprian, just as ferocious in his day (3rd century AD) as Spurgeon was nearer to ours.

It seems to me that the stratospheric heights of the Tower, the farthest reach of the Bailey, and the end of the Road, differ from these religious sources only deep down in the Foundations, at the innermost sanctum of the Motte, and on the first step through the Gate onto the Road. Whether that step is pulled by the salvation promised by God, or pushed by the Singerian Utility Basilisk, it leads to all the rest. That is hinted at by the Q/A that concludes Scott's post.


Effective Altruism is just another -ism, and like all other -isms, when it gains sufficient momentum, it just becomes another version of Animal Farm. It's only a matter of time before the UN and similar institutions claim the EA mantle and push all sorts of nightmarish top-down authoritarian measures to create this supposed EA utopia. I used to donate to EA funds until the Covid response revealed how corrupt, reductive, and authoritarian "using the Science and Data for the benefit of humanity" can be. So now I'll continue my efforts to support my family and immediate community, rather than funneling money through ideological institutions to remote, unseen, corrupt, violent places, thank you very much.


I mentally classify my tax payments as charitable contributions and call it a day, although I will admit I have taken a recent philanthropic interest in turning Russians into pork rinds.


The main point of contention that is not on your chart is:

- (agree) "we should help other people"

- (agree) "helping 3rd world is more effective than helping 1st"

- (disagree) "we care the same about people everywhere no matter how removed from us, so we should help the 3rd world more than the guy next door"

A huge human heuristic is concentric circles of concern, where you care more about your family than your friends, more about your friends than your friends-of-friends. You care even less about your city or ethnic group, then about your country, then about the world. A subculture such as rats may fit on that spectrum somewhere, or just "people who think like me and I'd like their company".

This is adaptive in a number of ways. You know the needs of those close to you. It's better for everyone to have a few fierce advocates of their well-being instead of a faceless bureaucracy for which they're just numbers in a spreadsheet. Those who feel kinship with you will likely reciprocate your gesture (and you both know it - it's a coordination mechanism). You're resistant to counterfactual mugging. And best of all - seeing the immediate results of your help builds social ties and reinforces your will to do good, thus developing virtue.

Aug 24, 2022·edited Aug 24, 2022

There's a long history of western civilization exporting bad ideas that were not adopted at home.

1. Communism.

2. Population control (There's a great book, "Fatal Misconception.")

3. All manner of dumb economic ideas, from import-substitution to state socialism short of communism.

4. State agriculture boards in former colonies.

etc. I could go on.

It's not completely nuts to be skeptical of educated westerners pushing ideas that might appeal to a foreign dictator with the power to put those ideas into action, to the detriment of those he has power over and possibly the whole damn world.

Ideas matter, and the worst consequences of bad ideas tend to be avoided by rich western countries.

That being said, it's not my business how other people spend their money. If it's illegal, that's the government's problem. The amount of money at stake here isn't really very much, and it's possible someone could stumble on something really effective. The more monolithic the movement becomes, the less likely that will happen.

If some cause is immoral, I guess I have some obligation to say something, but I don't matter very much, and no one is going to care what I think.

Human beings are suggestible and listen to high status people and ignore low status people. It's a waste of my time to fight it. The end.


Is there a charity that doesn't even indirectly help people procreate by feeding or treating them, but instead only suppresses population growth through various forms of birth control?


I agree with this "defense" of EA. Wrestling with EA has made me think far more analytically about my philanthropy. Being charitable to the best of your capacities, in whatever form that charity takes, is table stakes for entering this argument with a clear conscience.

Aug 24, 2022·edited Aug 24, 2022

No, because if you reject egalitarianism then effective altruism becomes nonsense. Why donate to help others if they are not in a group, or fulfilling a function, that you care about? Who the drowning child is matters. None of your criticisms of criticisms matter, as the rejection of the value of those being helped invalidates them.

Aug 24, 2022·edited Aug 24, 2022

>If you destroy the foundation, the whole tower falls. But if you destroy the top floor, all the other floors are still standing

I can't resist pointing out that the WTC towers collapsed from the upper parts downwards, destroying them even to their foundations, and an entirely new structure had to be built after clearing away the rubble. I do not know what metaphorical moral might be drawn from this.


"The things EAs do aren't actually effective to help others, they should instead do [...]. My reasoning is [argument] and [evidence]!!!"

Have heard this over and over again. It's logically equivalent to saying "the state of the art of science is wrong, let me do a study and publish about it to demonstrate this!".

You aren't disproving science, you are literally applying the scientific method and joining the collective effort.


The tower of assumptions sounds like a motte and bailey from the inside.

A: “The best thing we can do is build giant cages of crickets orbiting Mars.”

B: “that seems kind of crazy”

A: “how come you just don’t want to help people?”

The state takes close to 50% of my paycheck. Am I to support a wife and children on what remains?

I totally agree with giving and want to give more than I can. I’d be all over EA if they said “end the welfare state and lower taxes so people can give more to charities that actually do good.” But for some reason that one never comes up.


I think some of the memes within effective altruism are a fantastic basic template to get one to think about one's own ability to contribute to the good in the world.

Psychologically speaking, it's easy to get annoyed by calls to charity if you never thought about your own limits on it, because it feels like you can only lose - either you give in and lose materially, or you refuse and lose morally, so there's a way in which it can feel like a sneaky trick (maybe? I don't understand psychology well enough to speculate on this, this is my armchair theory). On the other hand, if you have a threshold like "I donate 10% of my income to charity" (or literally ANY OTHER threshold that works for you), then the annoyance (largely!) goes away, because either the prompt falls into your budget for charity or it doesn't, you don't need to decide then and there what the moral/material trade-off is.

Just to make it clear how different to EA some of EA's general principles can be: One of my rules in life is the perfectly mundane "try and keep a 2EUR piece in your wallet and give it to quiet beggars in the street, at the rate of 1 piece per work week". This isn't some shocking amount (I wouldn't even normally bring this one up at all because it's so insignificant, but it's still a nice mundane example of the principle at work), but it's a great way to make me help others on a regular basis without triggering annoyance reflexes or some primal fear of getting exploited / manipulated (or whatever happens in the deep recesses of our minds when humans get defensively annoyed; as mentioned, I don't *really* know what exactly causes it).

Furthermore, if you have a *lot* of money to play with and you want to help people, having a whole community to help you decide where it goes is pretty nice, because choosing something can be pretty anxiety-inducing - if you get it wrong, you're wasting a ton of money that could have gone to good use. But also, you don't always and constantly need to ultimately agree with the community about it to reap this kind of psychological benefit.


I think I’d feel more likely to donate a percentage of my income if I was part of a community that I met with in person regularly where that’s what everyone did. And maybe someone stood up on stage and gave uplifting talks about why it’s important for us to be good to each other. I’ve been an atheist all my life and never went to church except when grandparents used to make us go on Christmas, but I sometimes wonder what it’s like to be a regular churchgoer, and I suspect the rest of us are missing out on something important. Anyway, I’m about to start a family after just starting a new career without a penny to my name at middle age, so that’s my excuse. But I’ll continue to consider it in the future.


Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

A: You should donate more effectively.

Isn't this the crux, really? I have some sympathy with the critique that lots of effective altruist ideas are actually not surprising or interesting or really 'effective altruism' at all. Donating 10% (plus!) - check; working in an altruistic career - check. While acknowledging that not everyone can do these things, these basic ideas have been taught by mainstream Christianity (and have therefore been a core western idea) for centuries. The drowning child is successfully selling a much older idea to a public that hasn't encountered it otherwise.

So, as you say in the essay, you ask - why aren't more people doing it, then? The answer to *that* surely lies not in whether the ideas are around but in the question 'why don't people do good things they know are good'. This problem is everywhere, in all our lives, after all!

Carefully assessing the effectiveness of individual interventions with studies and statistics, though, isn't that the original contribution of effective altruism? And isn't that, alone, an extremely valuable contribution?


I have a wife, three kids, and two parents with end of life issues. I’m also a software engineer who is extroverted and charismatic while also being strategic and conscientious. If I focus more on work I could probably get myself to a role where I make several million a year at a big tech company. But my wife and kids and parents are pretty needy as far as my time.

It seems to me that EA says I should divorce my wife, let her raise our kids on her own, and then go whole hog in my career so I can donate even more to charity.

What’s the EA argument that I should stay with my wife and be a good dad to my kids?


I am 79 and live in the Dominican Republic.

I give away about 80% of my modest (Social Security) income. I do this for selfish reasons.

The biggest part of my "charity" is paying the bills and providing cash for a Haitian family that I live with. In return they take care of me every day.

I also provide $40 per month for three other families that are friends of my primary family.

I'm very happy doing this, but I would like to have more income.

When are you going to examine my serious improvement on Neom?


Peter Rodes Robinson

Aug 24, 2022·edited Aug 24, 2022

I think Freddie's critique is more that EA isn't really all that different from any other approach to doing good. Those of us who want to do good (when that is indeed what we want to do) would like to do good as effectively as possible.

Even your ad-hominem-y Q and A's gesture to that point. You're criticizing your presumed interlocutor for not doing good as effectively as possible [please see my ETA below], with the understanding that that standard is what the interlocutor already agrees with, and with the further understanding that you and the interlocutor really do have the same foundation. But if we have the same foundation, the "commentary" you mention (i.e., the upper floors of the tower) is all there is to discuss about EA.

Okay, I'm making a few assumptions about your intention and the "understanding" you're working under. But I must ask, what makes EA distinct?

I suspect what makes it distinct is how EA'ers approach the problem of deciding WHAT is most effective or HOW to devote limited resources to good ends. It's probably also the CONTENT of what they as a group advocate or keep open for discussion. So if a very significant number of EA'ers really do promote killing all predatory species, or if the EA culture and approach are peculiarly friendly to entertaining that argument, then that's a legitimate criticism. (That said, I'll happily concede that those are big if's. I have no idea if they are correct. Again, I'm not at all familiar enough with EA.)

Maybe I'm misunderstanding things. I'm not at all well-read on EA. From what little I have read, I don't think EA'ers are wrong or bad. And I can think of a lot worse things to do than spending resources to prevent malaria in Africa. And while I donate to a local charity, I don't donate a full 10%, even though I could well afford to. And even though I'm fairly optimistic that charity does what it claims to do and does so as efficiently as possible, I haven't bothered to educate myself on what I need to know to do the assessment.

ETA: I said above that you're criticizing your interlocutor for not doing good as "effectively as possible." But I guess I realize you're not critiquing their effectiveness, just whether they make some sort of effort to do good. I'm not sure that changes the point I was trying to make, but I now realize I was misconstruing your point.


I think the critics have huge egos and/or believe strongly in the Adam Smith argument of the selfish baker, and they think it's better to invest in themselves, morally. If the Wright Brothers had donated more, the world would have gotten fewer airplanes, later, and thus less total utility. A lot of people think they're Wright.


I think that one of the assumptions that you are hiding is the assumption that charitable donations do more long-term good than market investment. This seems wrong in theory, because profit-seeking companies are more strongly accountable to the people they are supposed to serve. And it seems wrong empirically--if I look at the record, most of human improvement seems to come from profit-seeking investment. And if you ask me whether I can behave consistently with my view that market investment is a better use of money than charitable donation--I can! And I also donate blood.

Aug 24, 2022·edited Aug 24, 2022

You can never truly care about more than Dunbar's number of people, and attempting to do so is nothing more than an act of self-delusion.

Aug 24, 2022·edited Aug 24, 2022

I don’t think you need to get very exotic before utilitarianism gives bad answers.

Even the base level “a random foreign villager's happiness is a ‘better’ use of resources than your close kin's” is already a bad answer.

Or rather it is a good answer if you are a disembodied spirit equally interested in all mankind. But that isn’t what anyone or any organization actually is.


Since this is a somewhat spicy post, I hope you are up for a somewhat spicy retort...what’s the difference between the “tower of assumptions” model and “motte-and-bailey” model?


Sometimes I have a hard time even believing that EA is actually a unified or consistent thing.

Why 10%, why not 10.5%? My amp goes to 11, and I might have heard of the Catholic social justice concept of the universal destination of goods long before EA became a "brand".

Didn't Swift's "A Modest Proposal" put an end to serious consideration of utilitarianism?

Who doesn't want to be more "effective"? Even the soup kitchen worker will choose a ladle rather than a quarter teaspoon as a soup distribution tool.


One of the things I like about this blog is how you share intelligent and well-written criticism of yourself and your ideas. That level of intellectual honesty is rare.

Meanwhile, why is there so much criticism of the idea that altruism should be effective? It says a lot about the critics.

Altruism as such an important, deliberate part of life is a very Christian idea. It has spread everywhere as an idea that should not be questioned. I think if my Hindu ancestors from some time ago returned, they'd be shocked by this uncritical obsession their descendants have with altruism. I mean, it is probably ok, but is it this important?

Why can't it be questioned, or at least improved?

Aug 24, 2022·edited Aug 24, 2022

Littlewood's law suggests that the news should be ignored: in a world of 8 billion people, all with their hot takes, and a news media biased toward interesting, controversial, and dramatic stories, we should ignore the news on virtually any topic, and that includes EA.

Link here: https://www.gwern.net/Littlewood#:~:text=At%20a%20global%20scale%2C%20anything,%E2%80%8Bnetworked%20global%20media%20covering

PS: EA is to Charity what Capitalism is to Economies or Democracy is to Politics. A flawed system of humans that nevertheless outperforms every alternative by orders of magnitude. Or: "the system is imperfect and always will be, and flaws can be removed, but it's still more useful than any system yet designed for charity."


I do a lot of volunteer stuff with career development to help people do resumes and prep for job interviews. Have a pretty good track record of getting people on career tracks, although my reach there is in the low dozens. Embarrassed to say that at present I only give a few percent of my income to normal down to earth charities. Posting mostly just to say: it feels good to do something good for someone right in front of you that helps them beyond the few hours you spend on the effort. In case anyone else is similarly inspired.

I like the EA movement overall, but somewhat align with DeBoer on it. Although I also have my own science-fictional/fantastical ideas that I’m pretty sure give other people eye-rolls so I don’t judge anyone too harshly. It seems the highest utility thing you could do to help the world would be to “Fix the things that fix things” or in other words inspect, study, and enhance the fundamental social mechanisms that formalize problems, create solutions, and assign resources, so I think I’m pretty strongly aligned with EA on that front.

Aug 24, 2022·edited Aug 24, 2022

Glad you're addressing this but still missing pieces.

"Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country, in some kind of systematic considered way"

-I am giving way more than 10%. It ends up in a savings account so I can start my own project in the future. Why are future poor people discounted more than present poor people?

-What if the poor person in the future I want to help is myself?


"Think that 10% is the wrong number, and you should be helping people closer to home? Fine, then go even lower on the tower, and donate . . . some amount of your time, money, something, to poor people in your home country, in some kind of systematic considered way"

-I am giving way more than 10% to help poor people. The poor people in question happen to be my own children. But helping those poor people doesn't count, because the unwritten assumption is that the help must not overlap with existing expectations, right? EA is BETTER than everyone else doing the "minimum". Some QALYs are better than others: the ones that reduce your guilt and/or give you feelings of social status are better.


"Helping people"

-I am helping. My wirehead project will remove all feelings of suffering; the side effect is that no one has kids anymore. The QALYs of future potential people can't be measured.

-Wait, helping means removing suffering AND helping them reproduce, or at least not reducing reproduction? How many kids are enough? Help that reduces 4 kids to 2 is okay, but 2 kids to 0 is not?

-Now you are saying there are considerations BEYOND QALY?


"Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer.

A: Are you donating 10% of your income to poor people who aren’t in those exotic philosophical scenarios?"

-If one rejects the basis of utilitarianism, then it always gives the wrong answer. It isn't "more basic"; it came after, and rejected, all the moral theories that came before. "Every philosopher who came before us just didn't have the common sense that we do."


"We should...." is just "foundational".

-This isn't foundational, this rests on unspoken assumptions. Just be more honest.

You can't get an ought from an is. Using "we should" is either:

1. A claim to objective morality (which needs grounding).

2. Dishonest shaming language used to modify the behavior of others.

3. An inaccurate statement about one's own subjective feelings.

Be honest enough to say that you just subjectively feel compelled, there is no such thing as should without objective morality.


Surely rationalists have a name for the fallacy where one dismisses valid criticism because the critic fails to meet arbitrary criteria?


I was going to make a joke about being so low on the tower I am only helping myself, and that you're welcome. But, and I suspect most of your readers don't have to worry about this, the invisible foundation really, seriously is yourself, and obviously you should take care of yourself first, like putting on your own oxygen mask first on an airplane. It might be obvious, but because it's invisible I feel too many people miss it.


LOL, the last one made me laugh. People can always find an excuse. Someone told me this once: “Feel universally, think globally, act locally.” Give money or time to your local soup kitchen or no-kill animal rescue. Get a job with the ARK or whatever local organization serves the developmentally disabled in your area. Or get a job at Hospice. Get on the local library board, or school board, or zoning board or water board - wherever your expertise and energy can help counteract the forces of greed and denial that are holding us back from dealing effectively with the very real problems the human race is facing here. Give money to Planned Parenthood. Volunteer at Habitat for Humanity. Deliver Meals on Wheels to shut-ins. Put together a crew and go around winterizing the houses of the people who use Meals on Wheels, for free. Get together with your neighbors and spend a day each week picking up all the trash on your block. Lots to do. Lots to do…


More effective than giving to poor people is giving to institutions that are trying to change the institutional frameworks. The main obstacle to poor people getting richer is almost always government policy.

We should be trying to make more governments like Hong Kong in the 20th century: strong rule of law, strong property rights, low taxes. That is all that is required for poor people to pull themselves out of poverty.


You are right. The problem absolutely is with the foundations. We should help some people, sometimes. Typically, those we are actually responsible for (our children, family, and, say, patients, students, soldiers, depending on our role), and occasionally, those that come across our path, like, a drowning child. Nothing follows from this about some kind of mandate to "strongly consider how much effort we devote to this". This obligation does not exist.

User was banned for this comment.
Aug 24, 2022·edited Aug 24, 2022

"But beyond that, you might wonder why the atheist didn’t think of these things. Are the translation errors his real objection to Christianity, or is he just seizing on them as an excuse? And if he’s just seizing on them as an excuse, what’s his real objection? And why isn’t he trying to convince you of that?

This is also how I feel about these kinds of critiques of effective altruism."

Having seen some atheist arguments which are exactly this ("that word is translated X but it should be translated Y!"), I see you.

Actually, what this makes me think is that Effective Altruism is going through its own version of the Donation of Constantine. Just as every critic of the Church/Christianity likes to blame Constantine for Ruining It All and turning Christianity into just another state body.


So EA is going through the same growth phase. It's no longer a bunch of scrappy nobodies following a weird philosopher, it's putting institutions in place. Heck, it's *got* institutions to put into place. Rich and influential people are getting involved. EA is even throwing money at political campaigns. It's becoming mainstreamed into general society.

Speaking as a Catholic, welcome to the consolidation phase. And be prepared for ten tons more of the same kind of criticism about selling out, about "but you're not doing anything new, and what new stuff you are doing is weird and strange", and "I liked you better when you were a bunch of scrappy nobodies challenging the status quo".

Be prepared for the accusations, which do seem to be covered by the "Q: Come on, effective altruism doesn’t even emphasize the “donate 10% of your income to effective charities” thing anymore! Now it emphasizes searching for an altruistic career!" objection, about having lost the way and become tied up in administering the institutions:

"Woe to you, scribes and Pharisees, hypocrites! For you tithe mint and dill and cumin, and have neglected the weightier matters of the law: justice and mercy and faithfulness. These you ought to have done, without neglecting the others."

Aug 24, 2022·edited Aug 24, 2022

I don’t know if this is an uncharitable take, but I’ve always thought that the Real EA Response to any sort of philosophical questions etc is that “you are not really virtuous enough to be qualified to even have an opinion on this, but I guess we can humor you sometimes, we do enjoy philosophy,” and this essay feels along those lines?

In any case, I think donating 10% of your income to down-to-earth charities that help poor people right now is a commendable serious and virtuous act.

I also have what seem to me to be genuine philosophical confusions about what is good to do etc, which feel sort of EA-adjacent, but I don't think EA (at least online EA) is the right space to think those things through, and maybe it's not supposed to be (though I do think it sometimes tries to be?).

My own EA actions are nonzero but definitely not 10% level.


If your foundation has "should" in it, you are nowhere near the bottom of your tower.


From your last article, because I think you should pay attention to what you've said, and realize that you're engaging in this article from the opposing perspective:

"I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity. I can always just keep World A with its 5 billion extremely happy people! I like that one! When the friendly AI asks me if I want to switch from World A to something superficially better, I can ask it “tell me the truth, is this eventually going to result in my eyes being pecked out by seagulls?” and if it answers “yes, I have a series of twenty-eight switches, and each one is obviously better than the one before, and the twenty-eighth is this world except your eyes are getting pecked out by seagulls”, then I will just avoid the first switch. I realize that will intuitively feel like leaving some utility on the table - the first step in the chain just looks so much obviously better than the starting point - but I’m willing to make that sacrifice."


There’s an argument as to whether local donations are “charity” in the same sense as donations to distant causes. If I want my community to have a Museum of Z I can’t buy it off a shelf, I have to give time/money/both and probably involve other people to make it happen. It may look like charity in that money changes hands and I don’t get money back, but another model has it as simply a more abstract level of community participation. Me buying vegetables at the store (giving the store money and getting vegetables back) is not charity, it’s an individual transaction, and the museum is a group transaction.

If the museum to me really is a charitable appeal equal in some way to international hunger then I could approach it as charity - but it isn’t necessarily so.

I think local philanthropy in some communities is a clique. There’s a lot of sociology to be done around who is “supposed” to be involved or give, and where and why. EA interrupts assumptions about wealthy techies being awful technocrats. It also politely asserts that maybe (your/my/etc) pet projects aren’t all that logical (which people love to hear) and asserts that wealthy techies are going to explode the system yet again and be charitable THEIR way. It pokes the bear of society’s resentment toward wealthy techies. Then people have to reach for reasons.

Also international charity developed a bad reputation as paternalistic, ineffective and probably racist when done by NGOs and governments. Some people will object to new, energetic, non-NGO/govt efforts as again being paternalistic etc. Think globally act locally.

The reputation is not always deserved but that’s complexity which is irritating to some.


I think this has changed my mind about EA. Well done!

Have you heard of PURPLE crying? (Stick with me.) When I had my kids it was talked about in every new parent class; there were posters all over every healthcare facility we visited; we got flyers in the mail. I mostly rolled my eyes. Then I had a chance to chat with a nurse that was involved in the campaign. “Can you explain to me how this is different from normal colic?” “Oh, it isn’t. But we found that by making it sound like some new research finding, people took more seriously that this is a normal phase and not something to fix and so were less likely to shake their babies.” Ah. This moment feels the same to me.

Am I reading this right? Is it reasonable to view EA as a PR campaign for the ‘obvious’ need to give and to strive to give well?

Aug 24, 2022·edited Aug 24, 2022

The comparisons with motte and bailey don't seem to make any sense to me. Motte and Bailey is if you defend X, and if someone challenges you, change to defending Y while making it sound like a defense of X. Scott is saying that if you disagree with the higher parts, you should give up on the higher parts and do something else that is consistent with the lower parts. That's not Motte and Bailey. That's letting the attacker live in the Motte indefinitely, which makes it no longer a fallacy.


While I don't agree with the commenters who suggest Scott should post the full 'spicy' essay, I'm very glad he posted this, both because it's really helped me to crystallise my thoughts on EA and the nature of ideological movements in general, and also because it's convinced me to get off my arse and donate 10% of my income to charity.

As someone who agrees strongly with the bottom 2 boxes, and a lot of what's in the middle one, I remain pretty suspicious of EA (as the movement which generally presents itself in reality under the EA banner - which I think is pretty reasonable, since as others have pointed out, it's a motte-and-bailey argument to defend the upper floors of the tower by resorting to support for the lower, not that that's generally what Scott is doing), for 2 main reasons:

1. EA generally seems to regard Peter Singer's philosophy as gospel truth, in a pseudo-religious way that goes beyond the basic utilitarian-ish thinking that motivates the bottom pillars. This is offputtingly dogmatic in itself, but it also leads to both a focus on animal welfare which comes at the expense of human welfare (which, as what Singer would call a 'speciesist', I strongly reject in a society which already contains so much human suffering), and also the demand that all people regardless of physical or temporal distance should be equally considered in personal ethical calculations, which sounds good but ends up leading to weird and potentially dangerous repugnant-conclusion-esque ideas. I agree that people should be more utilitarian in their thinking, but allowing a *small* proximity decay factor in utilitarian calculations leaves you with ~95% of the ethical fruit while largely proofing you against most of the wackiness (note that concern over x-risk is still very much valid here).

2. I think much of the 'bad smell' the EA movement has among ordinary people, including many commenters here, is down to the fact that the movers and shakers of the movement all tend to be not just very wealthy and educated, but a weirdly specific sort of western, largely anglophone, socially privileged, futurist, wealthy and educated person (I have resisted just saying 'bay area people' here, but it's emblematic for a reason). There's no sin in being such a person, but when a movement like EA - which ought, by its foundational principles, to be a broad-ish church - becomes monopolised by such people, their culture takes hold in a way that both restricts the movement's perspective and makes it alienating to everyone else, compounding the problem. One example of a negative outcome of this is the experience most ordinary, curious and moderately lower-pillar-EA-inclined people will have after encountering 80,000 Hours' promotional material, which claims to want to help everyone align their career with more effective utility, but in practice it becomes obvious pretty quickly that they're mainly interested in recruiting Ivy League/Oxbridge graduates to work on causes célèbres within the 'EA set' (one cannot help but notice this helps perpetuate the socially elevated and particular nature of that set) while us mere peasants who make up most of the population get 'oh, give to the LTFF I guess'. I get that some people are higher-impact targets than others, but this feels like lost utility to me. Lots of ordinary people, like myself, would love to work in a way which benefits society more if only there were better frameworks to help us do it.

Aug 24, 2022·edited Aug 24, 2022

I think Freddie is correct in that EA can come across like "we are the first people ever to think of doing good, and how to do it right".

Yes indeed, if only there had been some earlier statement of principle about helping those not in your in-group:



I think what ruffles people's feathers *is* the "You are already donating? You should donate more effectively" and that EA is the only game in town when it comes to knowing how to donate effectively. 'Sure, maybe your group/organisation/church has been running missions to the Third World for the past hundred and sixty years, but *we* know better than you how to help those people because we are Smart and Use Science and Maths'.

Maybe you *are* better at knowing how to help most effectively, but sounding as if your movement believes that it is the first group ever to figure out the best way to do good and what the most important principles are *is* annoying. Especially when the emphasis pivots from "feed the hungry, clothe the naked, visit the sick and imprisoned" to things like "Existential Risk! If we don't solve AI now, the utilons of the future quadrillions of possible inhabitants of our light cone are under threat!" while the hungry go unfed and the naked go unclothed because this is *more* important than trivial local present needs.


As someone who isn't really an EA supporter or critic, who has never heard anyone mention EA in real life and has barely encountered it on the Internet outside of Scott's writings, my first thought is: are there really people THAT critical of EA? Is anyone out there saying "EA delenda est" or otherwise spending much time on the subject? Or are we really just talking about missionaries for EA trying to convert people to the cause and receiving these sorts of explanations for their rejection?

I suppose as I think about it, you could probably break subjective assessments of charitable causes down like this:

1. Causes that are actually a great use of resources -- your personal favorite charities.

2. Causes that are a pretty good use of resources but not the best. E.g. supporting the opera if you think having an opera in your city is important but not important the way saving lives is.

3. Causes that you see as beneficial, but barely so. E.g. supporting political campaigns of the party that you see as narrowly the lesser of two evils. Or donating to charities that are highly ineffective at their supposed mission. Perhaps making the lives of factory-farmed chickens better if, all else equal, you'd rather chickens suffer less but you don't actually care at all about chickens.

4. Causes that have basically zero value but at least don't make the world worse. E.g. "sci-fi" causes that you think are meaningless. Or donating money to your alma mater to buy out the contract of a football coach so that your team can maybe win more games.

5. Causes that actually make the world worse. Like donating to the wrong political party or to the wrong side of the culture war.

My sense would be that critics of EA are mostly arguing that it does too much in categories 3-4, not enough in 1-2. But is anyone even arguing it doesn't do ANYTHING in categories 1-2, or that it's doing much in category 5? I'm not going to count basically Randian arguments of the form "all charity is counterproductive", which aren't REALLY criticizing EA per se.

Any of us could find tons of causes that fall into all of these categories. Why does EA get highlighted by some people? Maybe it's just that within a certain subcultural bubble, a large percentage of their peer group is actively involved, so they feel the need to develop strong opinions about why they themselves aren't involved.


It is fair to ask whether EA critics are sufficiently altruistic, but I suspect it is the ‘effective’ bit which attracts most of the criticism.

The Unorganized might have valid criticism of organized *religion*. City folks may object to aspects of *organic* farming. I think that’s ok and probably a good thing.


This post seems to just be endorsing motte and bailey arguments as applied to EA?


Having contributed my share of criticism, let me say that EA is correct on this ( or our EA friend in the dialogues above, anyway):

"Then are you donating 10% of your income to charity?"

Stop criticising, if criticising is all you are doing, and go do good yourself. Or as it was put elsewhere:

"41 “Then he will say to those on his left, ‘Depart from me, you cursed, into the eternal fire prepared for the devil and his angels. 42 For I was hungry and you gave me no food, I was thirsty and you gave me no drink, 43 I was a stranger and you did not welcome me, naked and you did not clothe me, sick and in prison and you did not visit me.’ 44 Then they also will answer, saying, ‘Lord, when did we see you hungry or thirsty or a stranger or naked or sick or in prison, and did not minister to you?’ 45 Then he will answer them, saying, ‘Truly, I say to you, as you did not do it to one of the least of these, you did not do it to me.’ "

Aug 24, 2022·edited Aug 24, 2022

This is going to come across as curmudgeonly, but the mere fact that effective altruists spend so much of their time arguing about effective altruism and defending it from criticisms makes me feel like a) the altruism isn't really the point, and possibly b) the altruism can't be all that effective, either, because if it was why would you need to spend so much time arguing about it? Listening to this stuff makes me want to take my money and spend it all on vacations that involve lots of fossil fuel, ripping holes in mosquito nets for sport, and gratuitous rainforest destruction.


By tower, I'm going to assume you mean something like Minas Tirith: a Motte and several concentric Baileys, right?

Aug 24, 2022·edited Aug 24, 2022

Judaism originated tithing - not EA. It's in the Torah. Many religious Americans (Jews, Christians, Mormons, etc) tithe. The Torah prescribed other forms of giving too, like Pe'ah, where farmers were required to leave portions of their fields for the poor. So EA can't claim tithing, or systematic charity-giving; they just reincarnated it for atheists. And most ppl already give or aspire to give charity, and those who do are widely praised.

Where EA differentiates from these widely held beliefs and practices, is in its strict utilitarianism, and the specifics of where to give. That’s also what goes against most ppl’s moral intuitions and preferences. So THAT’s why the conversation starts at EA’s stranger components - bc the tenets you’ve laid out as the ground floor are so commonly held that they don’t define EA. The conversation/debate about EA starts with the particulars (malaria nets or whatever) b/c THAT’s where the EA branch breaks off from the “what everyone already believed for centuries” tree, so that is a better way to define EA.

Just like when ppl question religions, they don’t question the basic concept of charity - they question the religion’s particulars, such as “give it to the church.”


I don't donate as much as I could. I feel a bit guilty about this since I could, with minimal inconvenience, be a larger help to the world (near or far) than I am, and the only reason I don't is I choose not to (or rather, choose to just let things lie). That others made the adjustment (and often more) I was too lazy to make causes me to feel morally inferior, sometimes triggering reflexive defensive notions.

80,000 Hours and Giving What We Can were founded by the same guy - comments that posit a dichotomy between EA jobs and EA giving seem silly.


Never underestimate the value of woe. I think woe is far more powerful than the confections of word puzzlers. William Blake says much about this in his work on Job.


Did you not write the biggest takedown of motte and bailey arguments that exists on the internet? And yet your idea of tiers here seems to take the motte and bailey fallacy and turn it into a virtue. Which, you know, fair enough, but it's worth mentioning.


I have no beef with EA, and donating 10% sounds great and commendable to me. No, I'm not doing it myself. I might if I felt I was in a better position to, and maybe at some point in the future I will. But I'm starting from nothing financially and about to start a family at 40. That's my priority. If you don't think that's a good enough excuse, well...too bad. I don't care.

As several commenters have alluded to, I think the foundational assumptions are contestable. "We should help other people." Sure I agree it's nice to help other people if you can, but that's completely different from, "you are obligated to sacrifice to help strangers whose plight you did nothing to cause."

I think you can even make a utilitarian case against indiscriminate universal altruism: It would destroy itself. Altruism could not have evolved if it were indiscriminate, nor would it be able to sustain itself.

A person who genuinely values all lives equally by definition places no special value on the lives of his or her friends, family, spouse, children. We're right to view such a person as a horrible person, because they would not be a good friend or parent etc. If people are good in their particular context to the people who are important to them, it makes the world a better place and trickles outward.

Nothing wrong with being generous to strangers if you want to and are in a position to do so... But saying that I have an "obligation" to do so is just someone trying to manipulate me into doing what he wants me to. He's not my master, and he's not doing it in my interest or in the interests of my family, so why should I listen?

I've come to accept that I'm not God; the world's problems aren't my problems. I just want to be a good husband and father, and that's good enough for me.

"The Morality of Everyday Life" by Thomas Fleming was an interesting book that went into these ideas in detail and presented an alternative to liberal universalism. Not saying it's the best take or a definitive refutation, but I found it interesting and compelling.



1. The government already takes about 40% of my income "for the greater good" - go get it from them.

2. I'm unconvinced that the work involved in determining which are the most effective charities in-practice doesn't overwhelm the differences in charities with simple paperwork requirements. That is, the cost of determining how effective a bunch of charities are at-scale is greater than the benefits accrued from donating to one charity over the next most effective. The cost of data production is non-zero.

3. Misery loves company.

Aug 24, 2022·edited Aug 24, 2022

OK, you got me. I'm going to stop helping people in any way, or at least commit to doing it as ineffectively as possible (maybe harm a person more every time I help one?), so you can't tar me with your incredibly wide EA brush.

(What I actually want is to be able to say "I don't agree with EA/I am not an EA", referring to the social movement that calls itself that. But if I think that it's possible to have goals, some of which may be altruistic, and that it's possible to be more or less effective in pursuing them, then you want to claim I'm an EA, which is not true in any sense that makes the term useful.)


> Do we hold a global health org to the same standard.

I believe EA absolutely does. GiveWell is a leading EA organisation, and their list of top charities mentions multiple times that they create this list based on empirical data, not marketing. They aren't recommending giving generally to global health orgs. They are specifically recommending effective organisations that are providing demonstrably high-impact-per-dollar interventions.


Let's agree that good metrics are table stakes and neither of us is considering donating to anything that doesn't provide excellent metrics of how effective it is. If you are relying on the organisation's metrics (rather than first-hand evidence gathering), then what advantage does the cause being local to you give in understanding the metrics the organisation provides?

I believe GiveWell's recommended charities provide excellent information about what they have done, how much it cost and how effective it was. If I haven't convinced you of that but you are still interested, have a look at the report (a well-laid-out webpage, not a dense PDF) GiveWell produced on how effective they believe the Malaria Consortium charity to be.


If we both agree that global and local charities can have their impact monitored, and we want to get the most bang for our buck, then I think we should be donating to global causes. The best anti-malaria charities can save a life for approximately ~$5,000. That figure isn't saying that giving one child one bed net will save that child's life. It's baking in all the uncertainty about how many nets get used effectively, how many of those children would have been fine without one, and so on. Getting one bed net to one child costs $5, so they are estimating that saving one full lifetime of healthy life requires distributing 1,000 nets.

I don't think that any 1st-world cause is going to have anywhere near that level of impact. Assuming tax incentives turn your $5,000 into $10,000 and you give $1,000 each to 10 homeless people, I highly doubt that will result in one full lifetime of healthy life being saved.
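The back-of-envelope arithmetic above can be sketched in a few lines (illustrative only; the $5-per-net and $5,000-per-life figures are the commenter's assumptions, not verified data):

```python
# Rough sketch of the cost-per-life arithmetic in the comment above.
# All figures are the commenter's assumptions, not verified data.
cost_per_net = 5          # dollars to deliver one bed net to one child
cost_per_life = 5_000     # estimated dollars per full healthy lifetime saved

# Implied number of nets distributed per statistical life saved,
# folding in all the uncertainty about unused nets etc.
nets_per_life = cost_per_life // cost_per_net
print(nets_per_life)  # 1000
```

The point of the estimate is that the $5,000 figure already prices in all the failure modes, rather than assuming one net equals one life.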


Individual EA charities are chosen for being efficient, well studied, and working in a high-impact area, and I believe they are held to a much higher standard than most.

Assuming all other impact-measuring concerns to be equal, you can get much better bang for your buck with global health charities because they are much cheaper problems to fix.

> If I follow your logic, I should be funding an initiative for the government of a West African nation to sue France

I don't see how my logic leads there. That is not a well-studied area that has proven to have an impact and could benefit from greater investment. The purpose isn't to choose the project that has some chance of getting the most money into the hands of someone who will try to improve 'health in West Africa'. That could fail in so many different ways, and so is a bad investment. It's to give money to a specific organisation that has a proven track record of making an impact.


Donating helps assuage wealth guilt. Instead of spending money to reduce factory farming, invest in sustainable lab grown meat, so everyone can enjoy a nice juicy steak. Instead of attacking an issue directly, look for a method of creating a countervail to the issue which previously didn't exist. This is how you maximize Good.


This reminds me of the Christian video by LutheranSatire (which I won't link to) that has a similar style of rebuttal (but in song form), which I'll summarize as

Q: Jesus is OK with gay people and told us to affirm and accept them as they are.

A: Do you believe that Jesus is God?

Q: No...

A: Then I don't care.

(I disagree with LutheranSatire on LGBT issues but agree with this particular rebuttal.)


Is it less a motte-and-bailey, and more à la carte? EA may prefer some specific set of beliefs, but they also have suggestions for those who are less committed to the finickier tenets, which seems in line with the "effective" part. Per OP, donate or do SOMETHING.


I think you are conflating "effective altruism", the concept, with Effective Altruism, the movement. The concept is all well and good; obviously, I want my altruism to be effective! But the movement itself is more than just a methodology that purports to achieve that goal; it is a prescription for specific causes you should donate to in order to be "effective". In other words, the EA movement doesn't just say, "we make our records transparent to help you optimize your donations"; instead, they say, "we ran this mathematical formula that is based on mumble mumble, and determined that the most effective way to spend your money is on AI risk and animal welfare, so that's what we'll be doing".

Additionally, the whole "10%" thing sounds like a setup for moving goalposts. If I say "yes, I donate 10%", the obvious answers are "why not 20%?" and "why are you wasting money on mosquito nets when you could be saving all of humanity from AI?". Basically, the bottom line has already been written -- it says "donate to EA" -- and you're just working backwards from there.


Your friends are right. This is lazy and bad.

Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer.

A: Are you donating 10% of your income to poor people who aren’t in those exotic philosophical scenarios?

If consequentialism is false, that undermines the idea of an obligation to donate to poor people at all, even ones outside those specific situations. You are smart enough to get that. You are atypically uncharitable here.

You can think EA is overcriticized, and too deferential to its critics, and you can hate that. But "how much are you donating?" just isn't a categorical counterargument.


I'm in the opposite camp. I reject completely the idea that I have a personal moral obligation to help other people (unless my own individuated voluntary action was, foreseeably, both the proximate and sufficient cause of the unjust predicament). However, since I know that most other people disagree with me and think we DO have such an obligation, I would prefer that those people who choose to donate to things aren't giving their money to international grifters with huge overhead or to political issue lobbyists who wear $10K suits and eat $500 lunches with powerful jerks.

In the tower metaphor, I think the foundation is shaky as hell, but that if you are absolutely positively convinced that you must build on it, build the best thing you can.


Glibly: I like altruism, but I'm not so sure about the effective part.

Less Glibly: There's some nasty buried even in the foundational assumptions, and the only one I can't think of a problem with is "Some methods of helping are more effective than others".

"We should help other people" has issues with pretty much every word in the sentence, to start with. Who is the "We" in this? Lots of people have more effective resources to help with, should they be the ones doing proportionately more helping? "Should" implies obligation of an unfixed level, but I don't personally feel obligated to do more than help those in eyesight. Everything else is at the level of a want rather than an obligation, and no amount of hypothetical or -actual- suffering of other unseen people is going to convince the emotional part of my brain that this is suddenly a pressing need on the level of "Drink water, you're dying of dehydration".

"Help" is particularly odd; perhaps the kind of "Help" I think is best is taking the capital from the highest individual concentration and redistributing it, through bloodshed if necessary, weighing that the suffering caused to a few would be outweighed by the utility to the many. This would imply that I'm okay causing suffering, possibly extreme suffering, to a few people in the interests of many others, which is at odds with anyone who has "avoid causing suffering entirely" as a function. Perhaps I'm an anti-natalist, and I figure the best kind of "help" is to render all of humanity sterile to prevent further suffering, having some kind of function that weights suffering magnitudes of order higher than pleasure. These are kind of extreme examples, but they're also things a decent number of people actually believe, so they can't really be treated as absurd hypotheticals either.

"Other people" is a funny phrase. Which other people? Friends, family members, inner circle, outer circle, people in my community, people in my nation, the world? Disproportionately I'm going to weight who I should help in favor of some of those, and I'm not sure everyone will agree with me on the exact weighting. The extremes I can think of is the completely selfish (who can be excluded easily I think) and the eusocial (seeing themself as truly no different from any other person and allocating resources appropriately). This also doesn't take into account "future" or "hypothetical" people, which is something I have a hard time working up any kind of moral enthusiasm for. Until said people exist, to me, they might as well be "fictional".

What I'm trying to say in this scattershot approach is this: Effective is a means to an end, and I'm not sure we share the same end.


I heavily endorse the tower-of-assumptions framework. It's much healthier for debates than eternal accusations of motte-and-bailey. Of course it should work for every set of beliefs, not just EA, and especially your outgroup. Figuring out which part of the tower you actually disagree with should be necessary for any thoughtful and reasonable critique.


I like the tower analogy, and I find it helpful because it shows what I've gained from utilitarianism (more Peter Singer than the "EA Movement") and it shows where I get off the bus.

The first daring claim of EA is that we should all be doing something meaningful to help strangers. Giving 10% of your income isn't some brand new idea, as many people have pointed out, but it's something that very few people in rich countries actually do. I think putting an explicit target in place and encouraging people to hit it is really valuable, and I have gained from this.

The second daring claim is that when we try to help others, we should think critically about how to be as effective as possible. Again, this isn't exactly novel and EA aren't the only people doing this, but it genuinely differs from how most people give.

The third claim is where I get off the bus, which is that the way to think critically about how to be as effective as possible is to do explicit utilitarian calculus in order to identify the highest value-added causes and strategies. My problem with this is threefold. First, I think that this style of discussion is inaccessible to most people. That's not a problem in and of itself, but it has the consequence of restricting the world of EA thinking to a small group of highly educated people who like quantitative and analytical thinking. Second, decisions about how to prioritize animal welfare, AI safety, and global public health end up coming down to beliefs about speculative probabilities and moral weights. This is, again, not necessarily a bad thing, because identifying what speculative assumptions matter is really helpful. But it creates a problem when combined with the first issue, because discussions about these speculative probabilities and moral weights end up happening among a small, heavily selected slice of the population. I can't help thinking that the steady movement of EA towards "sci-fi" charities is in part due to what seems to be a huge overlap between people involved in EA and people who like sci-fi. And I can't help thinking that the reason global public health has slipped down the list of EA activities is because, from a utilitarian calculus perspective, it's boring.

And third, this style of identifying effectiveness ends up putting a ton of effort into these big-picture moral tradeoff questions and relatively little into nuts-and-bolts questions of how to actually effectively administer programs. I love GiveWell, and give 10% of my income through GiveWell. However, most of what GiveWell ends up doing is giving money each year to the Against Malaria Foundation, Evidence Action, and a handful of other very well run organizations doing crucial work. Those organizations are highly cost-effective because they are very well run, and they had been well run for decades before EA was a thing. It's great that EA is funneling more cash toward these organizations, but I really wonder whether encouraging people interested in EA to get really good at debating the relative value of X-risk vs public health interventions is more useful than encouraging them to learn how effective organizations are managed.

I think this is what someone like Freddie deBoer is getting at when he says that it's a combo of trivial and stupid. He sees "care about others and make substantial personal sacrifices to help them" and "care about distant people as much as close people" and "try to be effective in how you help" as trivial. He sees the utilitarian calculus as stupid. I agree with him, except that I think he greatly underestimates the value of pushing trivial, banal arguments.

Aug 24, 2022·edited Aug 24, 2022

I don't disagree with Scott's post and its intent, but was triggered to digress by "Helping 3rd world is more effective than helping 1st (eg bednets vs. alumni donations)". (Leaving aside for the moment that "alumni donations" is a very unfair example; alumni donations IMHO do more harm than good.)

EA became a movement largely because mainstream charity is surprisingly ineffective. Most of the reasons I've heard given for why it's ineffective amount to "because it's aimed at the 3rd world" (I would say "4th world", those parts of the 3rd world which are having the worst problems at the moment). Corruption, war and instability, cultural resistance, lack of infrastructure for maintenance, lack of social trust due to endemic desperation, massive tragedy-of-the-commons problems, insufficient education to transfer control to the locals, violent religious extremism, etc.

"2nd world" originally meant "the communist bloc", but that's not a useful meaning anymore. I think we should redefine it as "economically in-between 1st and 3rd world." Countries like Mexico, maybe. Using that definition, I think charity should focus on 2nd-world countries, or lagging areas of 1st-world nations.

As an American, I say Mexico and some Caribbean nations in particular should be our focus, because they're our next-door neighbors, we can help them more efficiently than we can help people in Africa, we understand them better, and because helping them also helps ourselves.

A gaping wound heals from the outside in, not from the inside out. Your body heals the area around the edge of the hole until it's whole enough to begin healing the area inside it, shrinking the hole as it goes. That should be the model: not to dump resources into wastelands ruled by warlords, but to assist nations that aren't quite able to do much global altruism themselves, to get them to the point where they, also, can assist other nations.


I donated 1k to the EAIF because I lost a weight loss bet but I don’t have a systematic tithing plan.

I think I can get a higher ROI from professional gambling than a charity would earn on its endowment. So patient philanthropy is my excuse.


Your recent posts about EA just encouraged me to donate to the World Land Trust, which someone in the EA community rated as a highly effective charity.

I donated to WLT the first time in 2021 after listening to a Sam Harris podcast about the EA movement.

Aug 24, 2022·edited Aug 24, 2022

I will always be a supporter of effective altruism, but I will say that I found 80,000 Hours to be utterly repellent when I started listening to them maybe 7-9ish years ago. Unless they've changed, their close association with the movement is surely going to leave a bad taste. Their whole schtick seemed to be that you should be a privileged, wealthy, charismatic genius whose parents could afford an Ivy League university, or you should please stop blighting the movement with the crime that is your existence. This was what I got from the material in their podcast and their articles, anyway - basically, anyone who is not a genius and/or went to an elite school isn't valuable, and should be actively gatekept out of any participation in effective altruism. This bothers me especially because if I know anything about the people who get involved in these kinds of movements, their biggest vice isn't necessarily money or power, but access, because access is their conduit to those things - so gatekeeping the movement to future golf partners certainly appears self-serving. IMO it would almost be better if those kinds of pharisaical people weren't trying to hitch their wagon to good maximization, because of the collateral damage they could potentially cause.

Maybe my experience was some kind of freak outlier where I happened to select the podcasts and articles where they said the most repellent possible things, IDK.


Gwaaaaaaaaaaaaaaaaaaan ya tease just post it. What are they going to do, write a NYT article that says you're racist again? You can take it.


Cool. May I translate it to Portuguese and publish it on <80000horas.com.br>?

Aug 24, 2022·edited Aug 24, 2022

> To me, the core of effective altruism is the Drowning Child scenario.

The Drowning Child scenario is a bad source of intuition for ethics. The scenario, as depicted, relies for its intuitive force on the fact that one has actually come upon the child, and so that child is now immediately proximate. It then seeks to transfer any conclusions drawn to cases in which the persons to be aided are not proximate. But proximity (not merely in a geographic or temporal sense, but most of all in a relational sense) is fundamental to ethics. To give but one example, one has ethical responsibilities to one's own child that one does not have to unrelated children halfway around the world.

My rejection of EA is at the level of what Scott, in his "tower," calls its "fundamental assumptions." By the time we get to its "less basic assumptions," I think it's a dumpster fire.


Isn't this literally just the Motte and Bailey/Field and Fortress argument, except with five levels instead of two?

Aug 24, 2022·edited Aug 24, 2022

My only specific argument against EA (I have many other arguments, mostly related to not feeling bad about being selfish, but those aren't specific to EA; they're about altruism as a whole) is that EA, by doing sometimes convoluted computations of costs and benefits, abstracts the decision about whom you should give to away from you, the giver. It introduces an adviser, and one with sophisticated arguments.

This is one step (often the major step) to being scammed.

And I believe the natural psychological opposite of generosity is scam aversion (I feel, for example, that being stingy is very highly correlated with fear of being scammed, and with the effort you typically spend detecting/avoiding scams).

So while most EA is certainly not a scam, it is more scam-like than "just feel like it" altruism for many people (certainly people who do not spend most of their time doing analytical tasks), and this will trigger the stop-altruism-scam-alert switch in many heads.

Aug 24, 2022·edited Aug 24, 2022

The too-spicy-to-publish Q&A gives the wrong impression, and the implicit charge of hypocrisy levelled at the person disagreeing with EA rings hollow, because when you ask most EAs similar questions you'd get the following answer:

Q: Do you, an EA, donate 10% of your income to effective non-profits?

A: No.

One piece of evidence here being the EA Survey (https://rethinkpriorities.org/publications/eas2020-donation-data):

"The median percentage of income donated in 2019 was 2.96%. [...] 20% of EAs who answered the donation question reported donating 10% or more of their income in 2019."

And people who fill out the EA survey and the subset who answer the donation question will be far more likely to have donated than the typical person who identifies as an Effective Altruist.


1. I give some 2-6% of my income to poor people, as I have some very poor relatives (and they are ok-people). I would help them much, much more if I could get them a work permit for my country. (Yeah, Bryan Caplan may be the most effective altruist around. Maybe after Bill Gates.) And if one of them turns out really smart - just as Scott wrote - I have an extra 10K stuffed under my mattress just for her.

2. I think all commenters should first answer the question. Do you do 10%? - Then go on with their smarty ramblings. Why: I am stunned by the HUGE amount of comments on this post (after just 10 hours) - and the LOW percentage of those who address the one, simple, huge and important question the post raises.

2. b) Obviously, I feel: The reason they comment, and the reason they don't tell - is: They don't "tithe"( I don't really). And they do feel guilty about it (who could not!). And they absolutely hate feeling guilty about it. (I claim I don't, too. But I guess I am bs-ing myself. Too.) And so we are back to: "It's Cognitive Bias - all the way down". - Scott is the psychiatrist. He knew what he was up to.

3. I am contra the tithe. I understand it was meant also as an upper bound. Still, it feels like: "Less is kinda stingy." I say: It's too high. As Scott himself (on SSC) once wrote - I paraphrase -, the really bad poverty in this world could be alleviated by less than 1% of world GDP. And I guess people here would get much less defensive about 1% or 0.2% than about 10%. Start small. It may be enough. - 10% does matter less for Scott and me. But hey, there are people who really like their car and their vacation. And their pizza-delivery - plus a hundred other things that make me blink. And God bless them.

4. The taxes we pay are more than 10%. Closer to 50%. - And as we no longer pay them to a king just to not get killed for not paying (and to get protection from other "kings" plus maybe some irrigation works), but are told we pay for "fairness"/"equality"/"health"/"war on poverty"/"education"/"valuable infrastructure", we are kind of entitled to feel we pay those taxes to "do good". Charity, really. (Ok, the USofA also puts a bit more than 2% of GDP into defense - we do a bit less. Trump claims this is "charity for world-peace". And Germany free-loading. And he may not even be that wrong there. Anyway, the biggest parts of the budget are "social" - all over the first world, probably in all 190+ countries.) - Asking for another 10% on top - is over the top, imho.

5. Two fine pieces bashing high-flying EA. Had us feeling smart, safe and cozy. Now you hit the brakes, turn around, look me in the eyes and ask: What about you, yes, you? - Dr. Alexander: I love the rides you take me on!

6. I am into bible quotes, too, so here it goes, Matthew 19, obviously:

16 And behold, a man came up to him, saying, “Teacher, what good deed must I do to have eternal life?” 17 And he said to him, “Why do you ask me about what is good? There is only one who is good. If you would enter life, keep the commandments.” (...) 20 The young man said to him, “All these I have kept. What do I still lack?” 21 Jesus said to him, “If you would be perfect, go, sell what you possess and give to the poor, and you will have treasure in heaven; and come, follow me.” 22 When the young man heard this he went away sorrowful, for he had great possessions. - end of quote (just before the tow-goes-through-eye-of-needle-part. Or camel: misquote, but silly stuff can be so much more memorable.) - Here we are: asked for 10% not 100%, and we still scream, run and hide.

7. The time I gave over 10% to charities I had just a stipend, was 19 and near suicidal - cuz "life" seemed a fucking grey goo of pointlessness. If you feel like donating the full tithe or your kidney, et al to complete strangers, I very much hope, you are ok. I doubt you are. But I love you. Jesus loves you. ;)

Please, please get a life. There is one out there. For you. Not sure a poor me even wants your dimes, if you are sad inside: "losing love

Is like a window in your heart

Everybody sees you're blown apart

Everybody sees the wind blow"


Q: 100% of my work time is spent in a chronically and sometimes acutely personally hazardous job that supports a system to save humans. When was the last time you saved the life of a person you could name, risked your own existence to help another, or shadowed someone who had to triage resources to help save lives?

I fly search and rescue helicopters for a living. When flying a search pattern for 2+ hours (about $9k USD per hour that taxpayers fund) based solely on an individual saying that "they saw a distress flare in the distance", I wonder whether all of these reports should instead get directed to a hotline that says "thank you for your call; instead of responding, we are purchasing $20k worth of life vests and distributing them."

I also think about the 2 hours spent hovering over a beach looking for a young man who went swimming, began to struggle, and was seen going underwater. I think about finally finding that body, and how the moment we originally arrived on scene and weren't able to see anyone swimming, we knew that we weren't trying to save a life anymore; we were just going to be trying to find a body. Should my boss have called that family and told them "we are standing down the helicopter, and are going to direct those $18k towards more signage that your family member ignored when they went swimming on a red flag day"? I'm not sure, but I also know that I am not the person who has to have those conversations with a grieving family member, which means it's easier for me to abstract. It is really, really difficult to be a person in the middle of these situations and figure out how limited resources should be allocated.

What I struggle with for Effective Altruism is that it seems the intellectual firepower of the movement is not directed towards supporting the people who have to make these difficult decisions, and instead towards increasingly esoteric abstract ideas. It seems to have been hijacked by thinkers who lean very heavily on big calculations that negate a lot of the human element of charity and humanitarian work. These same thinkers seem to be now used, intentionally or not, as a mechanism for tech billionaires to try to launder their reputations about doing good while avoiding anything that would challenge the structures that created their wealth or would move the needle on helping anyone who is alive today. I originally got interested in Effective Altruism because I felt that if you wanted to effect change in the world, and help people, you should be willing to open yourself up to daring ideas that were testable and had a way to measure impact on humanity. That seems to no longer be the banner around which EA rallies.

And that makes me sad.

Aug 24, 2022·edited Aug 24, 2022

The tower metaphor does a good job reminding me that, much like Christianity or Linux, it's okay to fork.

I feel optimistic that there could, in my lifetime, be another group of people who say, "let's do good, let's prioritize good, let's give intelligently, let's help encourage the willing to give 10%, and also, nah bro, I'm not a utilitarian, and I'm definitely not an EA, those guys are weird".

If anyone else on this comment thread is interested in taking a few steps toward such a movement, let's talk. I don't think I'd put 10% towards it yet, but I do keep finding myself wishing for somewhere to put 10k that will do actual good, and "Non-Creepy Alternative to EA" seems pretty good to me. There are a lot of cush software people here. Maybe we can do something good together.

[Note: I wrote "get as many people as possible to give 10%", then I changed it to "help encourage the willing to give 10%", because of my public position that evangelism is not best.]


Let me begin by saying that I am a strong believer in donating to charities. I contribute more in a year than the average American household's income. I do not gauge it off my income, as I am retired and living on my savings. In my situation, my 1040 income is just a phantom of accounting conventions.

My contributions are primarily directed to supporting institutions in my communities and to the relief of poverty in those communities. I minimize contributions to universities, hospitals, museums, and "arts" (performing, plastic, or otherwise) on the theory that the money that goes to them winds up in the pockets of the professional managerial classes, who are overfed in our society.

I feel good about what I am doing. I believe that I am discharging my obligation in full.

At the same time, I am not impressed by what I have read about "Effective Altruism". It strikes me as an attempt to recreate the obligational structure of revealed religion without recourse to scripture or tradition. As such, and as with all such efforts that I have yet seen, it feels like all of its foundational axioms are prestidigitated from midair.

I am not opposed to these axioms as I find them to be consonant with those derived from revealed religion in western civilization, but, I don't find the reasoning made on their basis compelling.


Words have meanings, as Scott clearly thinks. Given the way language is used, I think it's highly unclear whether "Effective Altruism" refers to the minimal core of action-guiding ideas as he describes them, or (as he denies) to the actually existing movement.

This is partly because most people describing themselves as EAs don't donate 10% of their income to effective charities (evidence: https://rethinkpriorities.org/publications/eas2020-donation-data ). And are far more likely to accept the ideas that Scott treats as being in the bailey. As an empirical fact, someone can be accepted as an EA without ever donating anything, but not if they depart too far intellectually.

I do personally use EA in Scott's sense to describe myself, but I feel the need to spell that sense out to avoid unclarity. E.g. I say "I believe in effective altruism in the sense of donating more and more effectively, which for *me personally* captures the core ideas. And I'm giving 10% of my lifetime income."

My impression is that Scott doesn't think that "feminism" in practice means "thinking men and women are equal". The same considerations apply to what "Effective Altruism" means.


I think the best criticisms of Effective Altruism are at the foundational-assumption level. For example, Effective Altruism seems to believe in helping people indiscriminately, regardless of whether they are our allies or enemies. I don't think that's a good thing; in fact, I think it's naive and stupid. We should only be helping our allies and be indifferent to the suffering of our enemies, if not actively working to kill them. In the "drowning child" scenario, nobody ever says "Hold up. Is this child a *good* or a *bad* person?"

If you believe that helping good people has the same value as helping bad people then it's only a short step from there to "Good things are the same as bad things" (post dumb kitten picture here) and then nobody will ever take you seriously.

We need to ask ourselves "What kind of world do we want to create? Which people are helping move that project forwards and which are holding it back?" That way we can draw up a list of our friends and our enemies, and start applying Effective Altruism to our friends and Effective Nihilism to our enemies.


This debate reminds me of Scott's remark in the old SSC about two very different conceptions of government: some people believe that governments are basically good and effective, with conflicts and inefficiencies a regrettable but fixable issue - others that they are basically bad, horse-trading and pork-barrel machines where if any good happens, it is by accident.

I think something similar happens here as well - are charities/NGOs/nonprofits basically good, where inefficiencies can be solved by data/planning (where EA proponents seem to stand), or are they jobs programs for naive Beltway college grads where any good that happens is despite, not due to their best efforts? If you hold the second opinion, the effectiveness numbers/framework is suspicious - even if NGOs are not actively cheating, they know the rules and the score, and will focus on the programs and the metrics that will attract even more funding, and the cycle begins anew - with doing good getting lost somewhere in the Q3 targets or the RCT results. Focusing too closely on metrics will take you nowhere if everyone has incentives to game them.

(Q - A: yes, I used to volunteer at an NGO doing gender equality stuff for African teenagers, promising across all the right metrics, yadda yadda - had such an awful experience that, paraphrasing Scott's review of San Fransicko, I sacrificed all my principles on the altar of "this isn't worth my dignity, time or effort". Now squarely on board with the second opinion.)

Aug 24, 2022·edited Aug 24, 2022

1. "your friends won't let you post"?

Really, you let your friends decide what you can and cannot post? I'd say you need a bit more autonomy.

2. "Is there some more systematic way to commit yourself to some amount between 0% and 100% of your effort (traditionally 10%)? And once you’ve done that, how do you make those resources go as far as possible? This is effective altruism, the rest is just commentary.


Foundational Assumptions

We should help other people.

We should strongly consider how much effort we devote to this

Some methods of helping people are more effective than others."

If this is what EA is, then it really isn't much of anything and certainly not new.

Consider the history of hospitals as a more efficient and effective means of the charitable activity of caring for sick. The acceptance of Christianity as a legitimate religion in the Roman Empire drove an expansion of the provision of care. Following the First Council of Nicaea in AD 325 construction of a hospital in every cathedral town was begun, including among the earliest hospitals by Saint Sampson in Constantinople and by Basil, bishop of Caesarea.

How about Populorum progressio (1967)?

The New Economics, W. E. Deming (1994) which is certainly applicable to the management of philanthropic activities.

It would seem to me that the invention of taxation might flow from the same foundational principles.

Does "civilization", which is rooted in the essential solidarity that is part of the human condition, also flow from the ostensible EA foundational principles?

Weak EA cannot be anything new. Strong EA, if it really exists, must be something else, and here is where some of the hard critique of Singer et al., the gimmicks of 10% (why not 9% or 11%, why not progressivity in giving), and "ranking" and other pseudoscientific methods used as evaluative regimes perhaps come into play.

3. Does "longtermism" suffer from the immanentization of the eschaton? There would seem to be some risk of creeping "gnosticism" as Voegelin used that term.


I don't think EA is bad in the sense that any one individual shouldn't do the things it proposes. Donating 10% to an effective charity is a good action to take right now. I do have an issue with the idea that EA or something like EA is or ever can be *the* solution to most global problems. Maintaining a system that creates vast amounts of wealth, funnels almost all of that wealth upwards in a very steep pyramid, and then motivating those at the upper layers to voluntarily donate some of their personal wealth to alleviate the suffering that is caused, in large part, by the very system that enables those people to have that option is... not super convincing to me.


Also worth noting that collapse of a certain layer doesn't even necessitate collapse of the higher layers; you might have *other reasons* for connecting them than the specific EA ones.

E.g. a Christian might agree with the foundational assumptions, disagree with some of the less basic assumptions (like utilitarianism), but still conclude—because of the foundational assumptions, and the Christian's own intermediate assumptions—that it would be good to support many of the specific charities GiveWell recommends. Utilitarianism isn't the only reason for thinking that it's a good idea to give a real chunk of your money to the global poor and that you should try to do this in an effective, data-based way.

Something I wrote about why Christians and other non-utilitarians should still support GiveWell charities: https://jecs.substack.com/p/notes-on-effective-altruism


The replace-predators-with-herbivores argument goes back to a 2010 NY Times opinion blog post [0] by one Jeff McMahan, who does indeed seem to be active in the EA community [1].

From the NY blog:

> Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.

> I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.

I totally agree with that.

> Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations. Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

Obviously. I would assume that most mammals have a finite life span, so every one of them has to die eventually. Getting killed by a predator is certainly not nice, but it clearly beats most other forms of death nature might have in store for them.

> There is therefore one reason to think that it would be instrumentally good if predatory animal species were to become extinct and be replaced by new herbivorous species, provided that this could occur without ecological upheaval involving more harm than would be prevented by the end of predation.

While I get the argument, I think it is extremely theoretical. It feels like discussing the merits of turning our sun into a black hole. Granted, we have no idea how we could accomplish it and clearly don't have the tech that would result in a net positive from doing so at the moment, but perhaps there is some theoretical argument being made that feeding a black hole interstellar gas would result in more usable energy than just capturing Sol in a Dyson sphere. Meanwhile, we are still burning coal in power plants.

[0] https://archive.nytimes.com/opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/ (Article is paywalled, but viewing the source code, then copying the meat of the article in a new html file and viewing that with a browser works reasonably well.)

[1] https://forum.effectivealtruism.org/topics/jeff-mcmahan


Oh lordy, I appreciate your spiciness.

I have a similar spice level when some of my very smart friends claim that utilitarianism is incoherent because there is no universal morality or utility function. FFS, everyone wants their infant to live, and thus reduction in infant mortality is effectively a universal good. Oh? There's some people who would be happier if their infants died? Oh damn you got me there!! Well f*ck it, I guess there is no good that can possibly be done in the world because for every ostensible good, there is some tragic, broken-brained individual (real or imagined) who wants the opposite 🙄🙄🙄

It's good for us to challenge our moral intuitions, but this is ridiculous. And it pisses me off because it's about something that actually matters.

Also, I get a little crazy when lovely brilliant wise humans roll out the intuitively-barbarous utilitarian thought experiments. While the *first order* consequences of murdering a random person to distribute their organs to 5 others is a net good, the *second order* consequences are predictably awful with very high confidence (living in a society in which we must fear that the Mandatory Altrustic Giving Police are coming for us next).

Of course, utilitarianism is challenging. If we *had* a DeepThought, we might commit seeming moral atrocities because over the course of the next 10,000 years it would save a trillion lives. ... in that universe, shit would get seriously weird. And similarly, it's possible that lowering infant mortality has a 4th, 5th or 6th order effect that is net bad... and I don't know what to do about such "footbridge" (https://www.themantic-education.com/ibpsych/2016/10/27/moral-dilemmas-the-trolley-and-the-footbridge/) problems. In that sense, I am kinda glad we don't have a DeepThought, and even that such future prediction is likely to be intractably hard. And this does make the idea imprecise and messy. But that's fine. Maximizing the flourishing of conscious creatures is a good North Star even if our loss function is heuristic-based and messy.


I feel like the strongest objection to EA is something along the lines of "Objector claims to not value human lives or prefer non-suffering of others, except insofar as receiving direct (non-moral) benefits from that person such that they have value as a tool." Fortunately, people who seem to honestly feel this way are fairly rare or I think we would be stuck in a much worse Nash equilibrium. As it is, such people must be negotiated with differently from typical humans as they have a different value set.


"Q: I refuse to positively contribute to any form of EA (despite my very significant capacity to do so) until Scott publishes his spicy essay, therefore doing the most good through an emotional blackmail"

Aug 24, 2022·edited Aug 24, 2022

Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities.

A: Are you donating 10% of your income to normal, down-to-earth charities?

A2: I don't have philosophical beliefs that imply that I must donate to charities at all, so even if I donate nothing, I'm fine according to my own beliefs. I object to your weird sci-fi charities because I believe that that weirdness is a symptom of bad epistemic hygiene. And yes, I do try to keep proper epistemic hygiene myself, thank you for asking.

(And if I add up the amount of money from my taxes that goes to, or purports to go to, helping other people, it would exceed 10% anyway.)

Also, I think the drowning child argument is deceptive. Surely you know about central and non-central examples. I'd save a central example of a drowning child. I might not save a noncentral example of a drowning child. If someone tried to exploit me by creating an endless series of portals to locations in the world where there were drowning children, at some point I would refuse to save any more. I'm reminded of the Superman story where some guy wants to romance Lois Lane, so he creates a radio that automatically scans the world and keeps playing whatever disaster it can find, so Superman hears about and keeps having to go off and save people.

Aug 24, 2022·edited Aug 24, 2022

> For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation.

I find the question of moral obligation to be the *least* interesting part of the discussion, because I'm a non-cognitivist and don't think most of the participants are using "should" in any way that I recognize as coherent. I contribute to GiveWell simply because I find their empirical analysis compelling and have decided saving 10-15 statistical lives per year is a more appealing use for that portion of my income than any other I can think of, and I don't fuss about anyone else making different spending decisions. Likewise, I've taken the GWWC pledge for pragmatic reasons rather than moral ones. The community has (wisely) agreed on a Schelling point that if you contribute 10% then nobody will give you shit about not doing more, so by taking the pledge and holding to it now I can get involved in the much more interesting debates around cause prioritization without anyone being able to brush me off with answers like the ones in the "spicy" version of this post.

Basically, I don't think this tower model is valid, because I reject some of the fundamental assumptions and yet I'm still on board with most of the higher stuff.


I'll be a bit more blunt. Responding to criticism with "well, you're a hypocrite or lying about your real concern" is insulting and bad-faith. Belief and action are separate--lots of people agree that "helping people is good" and even that 10% is a reasonable number for resources to spend *without actually doing either in any systematic way*. Telling them that they have to go be better about that before they can criticize y'all for wasting time and money on AI-risk (or whatever the criticism is) is both non-responsive to the criticism and downright bad-faith argumentation.

And this style of "question from fake person + real response" is a rhetorical trick to smuggle in assumptions. Just like it was for Socrates. For example, the first appropriate response way back up there would really be more like

Q: I don’t approve of how [E]ffective [A]ltruists[1] keep donating to weird sci-fi charities.

A: Are you donating 10% of your income to normal, down-to-earth charities?

R: Why does that matter? That wasn't even a question. It's a statement of fact. And now you're questioning my virtue? <walks off justifiably more convinced that EA and everything associated with it, anyone who uses that label for anything is a crank and an obnoxious person>

[1] Note the sleight of hand Scott pulled here? He conflated "people who are effective about their altruism" with the real criticism, which is of "people who identify with Effective Altruism, the movement". I've put it back to be more true to what's actually going on.


"Q: FINE. YOU WIN. Now I’m donating 10% of my income to charity.

A: You should donate more effectively."

This sets up a sort of tyranny of the rocket equation, charitable giving edition. First, you agree to donate 10% of your income/time/energy. Now, you need to research charities to make sure you are not just giving alumni donations to a wealthy university that doesn't need it. But this takes up more time and effort, amounting to X% of your remaining income/time/energy. You also need to make sure that your sources are trustworthy, so this requires another y% of the income/time/energy that's left.

I think that this goes against the message of https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/, which makes no more demands than "10% of income to charity". So where do we actually draw the line? If 10% of income is no longer good enough because it is donated to one's wealthy alma mater, then it seems like Effective Altruism was never just about "10% of income to charity", and controversial statements about which causes are more worthy (and which charities are more effective in promoting their stated causes) are lower down on the tower of assumptions than Scott proposes. I think this is what makes this article feel so much like motte-and-bailey.

Another way of stating it: the Effective Altruism movement needs to stop pretending to be solely about technical "is" problems when a core part of its mission is adjudicating "ought" problems. The "is" problems include giving 10% and calculating charity effectiveness. The "ought" questions are exactly those Scott dismissed as the equivalent of denouncing Christianity because of Bible translation errors: whether one cause is more or less worthy than another cause. Once you are there, what principle is stopping arguments about alumni donations vs. feeding starving children to end up as arguments about feeding starving children vs. donating to weird sci-fi charities?


Post the essay, if The NY Times can’t end you what chance does anyone else have?


I think that for Effective Altruism to add any value, you need to be able to get from the second level of your tower all the way to the top.

The bottom two layers are just "Altruism", with no "Effective" about them; they're pretty much universal values (even the choice of 10% rather than 5 or 15 is shared with a lot of Christian tithes and Muslim Zakat, because fingers!).

One way from the second layer to the top is via the third and fourth layers, but I think those are both highly questionable - I think it's hard to imagine a less effective form of altruism than AI safety, and I'm dubious about how efficiently individual effort can mitigate the risks of pandemics or nuclear war.

But a much better one is via sites like Givewell.org, which try to identify specific, well-grounded, short-termist projects ABC and XYZ that do more good per pound spent than other similar projects.

When I started describing myself as an effective altruist, those seemed like quite a large part of the movement. Nowadays, they largely seem to have been eclipsed by "long-termist" projects, whose effectiveness looks much lower to me, and I think that's a shame.

I still donate to malaria prevention, because that's who Givewell recommend, but I'd like to see more EA effort going into competing projects trying to solve the same efficient-short-term-charity-identification problem with different methodologies, to test whether their results replicate, and less focus on highly speculative moonshots.


Is it reasonable to assume the most effective use of charity during my lifetime isn't known yet, and I could put 10% in a piggy bank until my death or when I recognize it as I get older/wiser/the world changes?

Are there any very rich effective altruist people or organizations who think the answer is no, and who would insure my donations? Say I make 100k and donate 10k now; if I later change my mind and wish to use that cash a different way, they give me 10k back in the future and withhold the next 10k they receive or generate.

Aug 24, 2022·edited Aug 24, 2022

I do not buy the "You should donate more effectively" answer. Does it imply that in the "drowning child scenario", if my suit costs more than the average cost of saving a life via some charity, I should let the child drown but commit to increase my donation to that charity by the cost of my suit? This contradicts my intuitive answer.

And furthermore, I am not convinced that QALY is the metric I want to optimize for. If I were trying to rationalize my choice of charities, I would probably say I am optimizing future scientific/cultural output of society, but even here I do not truly optimize; rather, I choose things that I personally consider important by gut feeling.


5 years ago most people outside of LessWrong/rationalist space had never heard of Effective Altruism. You would meet people and tell them “We should give charity effectively” and they would be like “cool, great idea, I want to join your movement” and EA really did mean “try to be effective with your charity”. (and yes, I do give charity and yes I do try to make it effective).

But since then something changed. Effective Altruism, the movement, got much larger. The idea that “we should give charity effectively” is also more prevalent. So prevalent that people have it without even thinking to name it.

They don’t think “I’m going to think about where my money is most effective so I am therefore an EA”. But people who give charity are more primed to think about its effectiveness just from being in a world where that kind of question is in the zeitgeist.

You might say it’s “in the water supply.”

(Just like CBT.)



It reminds me of this article: https://slatestarcodex.com/2013/04/11/read-history-of-philosophy-backwards/

Where the point is made that once a philosopher “won” they are no longer associated with their “winning” ideology, just their controversial ones.

In fact, a movement is never defined by its ideology.


And especially not when its ideology is just the most obvious thing: that “we should be effective with our money”. I mean, Republicans don’t donate to the Democratic Party, since they think Democrats are not as effective at governing as Republicans - so they instead donate to the Republican Party, which to them is “more effective” - does this make them “effective altruists”?

Almost anyone who donates anything thinks at least somewhat about “where would this money go best”.

So when people talk about EA, they are usually talking about EA “the movement”, not “EA the idea that is already obvious”.

EA has gone the way of feminism where it means different things at different times, but so far this duality has not been weaponized like the other words in the most famous SSC article of all time:


Aug 24, 2022·edited Aug 24, 2022

A metaphor.

One of my recreations is long bicycle rides: 50 miles, 100 miles, more. The moment I set out on one of these, my sole concern is to cover the distance with as much dispatch as possible. That does not mean going as fast as I possibly can at each moment. That would just wear me out and result in abandoning the effort without getting halfway through, or even getting injured. For an endurance activity, the effective way is to maintain the highest power output that I can sustain for the whole distance. That is the target. The effort I can make (to riff on Peter Singer's book titles) is the effort I must make (to riff on the contents of those books). I may briefly exceed it to get up a hill, but I can recover when I go down the other side. Stops to eat, drink, or pee are no longer than they need to be, and those will be my only rest until the trip is done. Even during those stops, I am not "resting". I am preparing myself to take up the work again. Everything, including "rest", is in service of reaching the finish in the shortest time that is possible for me. Rest, properly conducted, is part of the work.

The exegesis:

a long bicycle ride: doing good.

minimising the time: doing as much good as possible.

attempting to go as fast as possible all the time: scrupulosity, burnout, nervous breakdown.

actually minimising the time: actually doing as much good as possible.

rest only in service of the work: rest only in service of the work.

until the trip is done: the rest of your life.


Sorry charities and the overall betterment of humanity...you talking about that makes me feel uncomfortable with the choices I've made in my life, so my feelings are MORE important than saving or improving hundreds of millions of lives?

Though this does bring up a point for EA...could it not be more about psychology and helping avoid ruffling feathers? The chimp primate hierarchy holds humanity back in many many ways. How to ease in the fragile egos of people who at least hold some good values? Or do we focus on the super EA types alone to shunt them into charity work while alienating the bros of the world? Could a splinter movement of 'Easing into Altruism' be of value to capture people who are not used to giving anything to charities or causes?

There is an argument about the EA movement being a portmanteau of obvious ideas or well known ideas with deep historical roots...

And the answer to that is...'so what?' Every idea has historical roots and other people doing similar things or coming up with combinations which are local or historical repeats to some extent.

Certainly wanting to do things efficiently is indeed a long-standing idea. In no way was the idea, concept, or desire to do things with greater impact using fewer resources invented by the EA movement. From guerrilla warfare, to the evolutionary trend of organisms becoming more effective in their environments across time, to the very physical acts of osmosis or the Brownian motion of particles... it appears efficiency is a foundational principle which can be derived from observation of the natural world, and is therefore not attributable to EA.

Also certainly, wanting to donate tithes or 10% to charity, or to organise to give away resources outside your household to a cause greater than yourself, has deep historical roots in many disparate cultures - from donations of food to Buddhist monks, to church tithes, to leaving the edges of your field unharvested and not gleaning for dropped seeds as commanded in the Torah. These ideas are 3,000-plus years old at least.

Indeed, indeed. And yet as a secular movement to inspire more interest in and actual instances of giving to charities it has certainly sparked a lot of conversations on the topic with some people donating to charity who otherwise would not have since the more traditional religious charities lack the reach or authenticity to convince certain audiences or demographics.

Of course EA is also off-putting in its message, style of communication, or concerns to some people. Just like the Catholic church's old messaging about saving your soul through donations to their organisation was off-putting to many people like Martin Luther, which led to a Reformation now affecting over a billion people. EA seems to lack the potential to cause such a huge impact as that, so why is it the focus of anyone's criticism?

Would not some EA folks getting into some big church charity organisations and making them operate even 5% better than they were operating be a boon to the world? What could possibly be wrong with that? It is insecurity and ruffled feathers to note how one's own life did not and likely will not go down such a pathway. But what the heck is wrong with getting even a handful of smart people into that kind of work? Sorry charities and the overall betterment of humanity...you talking about that makes me feel uncomfortable with the choices I've made in life, so my feelings are MORE important than saving or improving hundreds of millions of lives?

It shows an odd lack of self-reflection for someone deeply engaged in a debate about charity to object to EA by invoking negative utility when, thanks to EA, they are...thinking about charity more than they otherwise would have!


Lots of ad hominem attacks to defend Effective Altruism here... I believe a movement should be able to stand on its ideas, not on shaming anyone who criticizes it for not giving more to charity. But maybe I'm just a horrible person because I don't give 10% of my income to charity, and therefore I'm not even morally qualified to comment...

I think people should just be nice to each other/ look out for one another's needs. I generally have had super positive experiences with giving (and receiving, when I was living on the road) in very simple, personal ways. Handing cash to some hobo or service worker just feels really nice. I don't know how much it helps, but it feels nice (and it definitely helped me in the past). Giving money to some charity and then getting harassed by them for the rest of my life for more money feels less nice.

Also, charities aren't actually guaranteed to be moral arbiters for our communities. Giving money to a charity is very similar to giving money to a church or political party, and has similar potential downsides. Having your primary source of revenue for your organization be donations does not guarantee that your organization is deserving of such donations.

So yeah, I might find a charity I really believe in in the future that overcomes my general concerns here. But for now I'll just play it safe and keep giving in personal ways as often as I can. I think this is the opposite of the ideal of effective altruism, which is about giving to organizations with grandiose goals to save the world a million years from now rather than simply giving to some random poor person to brighten their day today. But I just think building up good vibes in my community is nice, I don't really feel like I have the power to effect bigger things in the present or the future (I'm very dubious about the effectiveness of my donations to stop something like, say, nuclear war). I'll just do what I can to make life more pleasant for those within my tiny sphere of influence and call that enough.
