
The argument I'd make is that the vast majority of people don't donate to charities that buy mosquito nets, and the vast majority of charitable dollars in the US don't go to those sorts of charities. So clearly, if we judge people's beliefs by how they act on them, most Americans do not support the basic principles of Effective Altruism.


I basically like EA. I went to university with Rob Wiblin. I used to donate a large portion of my income to SCI. "You'll never meet a nicer group of nerds."

I also am basically a progressive. I almost always vote for the leftists, and a couple times when I was in university I even voted far left (for the Australian Greens.)

Recently though I've become disenchanted with both. I dunno, something about any reasonable philosophy pursued dogmatically to its conclusion becoming increasingly ludicrous. Lately you end up with the people who think we should prioritise donating money effectively to those in need deciding to fund ivory-tower AI risk researchers, and the people who think we should eliminate racism deciding it's OK to yell at Jewish people.

I will probably (in the next year or so) get baptised and become a Christian. My long term partner was raised in the Catholic Church, I've been going for the past year or so, and this community is really, really nice.

EDIT - I apologise to any in the AI risk community I have offended with my flippant characterisation. You guys do you! I fully expect to get away with my flippant characterisation of the woke community, given this blog's readership, lol


Agree with Robert Stadler that step (2) is pretty nontrivial and this should be emphasized more -- given how charitable donations are in fact distributed, it seems that "think about effectiveness" *is* in fact doing significant work and is not just something everyone practices. (Maybe something everyone agrees with in some sense, but I guess you covered this in point 3.)

But I have to point out -- even deBoer's supposedly uncontroversial summary is *not*, in fact, uncontroversial, and the reason why is right in his own piece!

Because on the one hand he writes:

> Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do.

But on the other hand he later writes:

> Ultimately EA most often functions as a Trojan horse for utilitarianism, a hoary old moral philosophy that has received sustained and damning criticism for centuries.

And that's the thing -- "generating the most human good" *isn't* a tautological statement of what all humans who try to act morally do; it is specifically a consequentialist (arguably specifically utilitarian) goal. A deontologist does *not* try to generate the most human good with their moral action! That's not what they believe moral action is for! So even if we say the rest is trivial and that EA is just utilitarianism or consequentialism, that part is still pretty dang nontrivial!


“And I’m a terrible vegetarian. If there’s meat in front of me, I’ll eat it.”

If this happens somewhat regularly, I don’t think that you’re a bad vegetarian as much as not a vegetarian.

On the other hand, this is super common:

“A poll conducted by CNN surveyed 10,000 Americans about their eating habits, and roughly 6% of the respondents self-identified as vegetarians. The researchers then asked individuals to describe their eating habits, and 60% of the "vegetarians" reported having eaten meat within the last 24 hours.

Okay, that could've been a fluke (or just a really, really dumb sample group). Then the U.S. Department of Agriculture conducted a similar study. This time, they telephoned approximately 13,000 Americans, and 3% claimed to be vegetarians. When they followed up a week later, 66% of the self-proclaimed veggie-lovers had eaten meat the day before.”

https://www.businessinsider.com/survey-60-of-self-proclaimed-vegetarians-ate-meat-yesterday-2013-6?r=US&IR=T


EA to me is just one aspect of the ongoing global "mindset war." Not everyone sees the world as westerners do. Their sense of being an individual, embedded in a culture, may be quite different. One issue of the Western mindset, with its focus on individuality and fairness, is that an underlying sense of guilt tends to accrue over time.

The advocate of EA has accepted the guilty feeling but does not want to look deeper at the mindset itself. So they elect to try to align themselves with things that are "good" according to the value system of the mindset. This helps them to perpetuate their mindset and simultaneously alleviate the sense of guilt. In a sense, it's colonialism, because they would like the mindset to occupy more minds globally but are trying to find a way to make this workable.

To me, the deeper question is... Do we actually want a world full of guilty-feeling people obsessed with fairness?


I think your argument only really works because you are assuming that you are responding to a person who doesn't do charity work. This may be applicable to a lot of situations you've encountered in practice, but DeBoer's argument repeatedly and explicitly references beliefs that "are shared by literally everyone who sincerely tries to act charitably", rather than "are shared by literally everyone".

I would propose the following model: there are various charity communities, with Effective Altruism being one of them. They each discuss how to do good, and some non-EA charities come up with arguments against EA such as "EAs have an excessive focus on existential risk". These arguments then get disseminated into the broader society, including among people who don't do charity at all, and this is where you notice it because all the charitable people near you work for EA.

Your counter then essentially boils down to "these anti-EA arguments have been adopted by non-charitable people". But... that's a fully-general counterargument against reasoning about how to evaluate big charities? Like assuming most people are non-charitable and information diffuses broadly, any argument about how to run charities would be adopted by non-charitable people.


Copy-pasting from a comment I left on Freddie's post:

'You argue that no one actually disagrees with the basic premise of EA (that we should try to ensure our charitable donations go to high-impact charities and not elsewhere).

Counterpoint: this guy wrote an article (https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because) criticising Bankman-Fried, EA and utilitarianism more generally. He sums up his position on how to be good at the end of the article:

"2. We rely too much on numerical measures and reducing things to formulas—the most valuable things in human life resist quantification.

3. I’m referring to core human values—such as love, compassion, forgiveness, trust, kindness, fidelity, decency, hope, etc. These are practices not arguments, and hence require no appeal to a larger context. Anyone who claims they require arguments (for falling in love or acting compassionately, for example) should be treated with extreme caution.

4. Above all, beware of people who won’t do a good deed until they have calculated the long-term consequences and expect certain desired results in return. Maybe you can do a business deal with them (or maybe not), but never get into a close personal relationship with them.

5. Gratuitous actions of generosity and kindness are best done without calculating rewards or consequences (which, by the way, is the reason I’ve celebrated gift giving and the bonds of trust and love it creates in my writings)."

In other words, good intentions and positive vibes are the only things that matter when it comes to doing the right thing, and there's something intrinsically suspect about someone who thinks that policies should pass a cost-benefit analysis. 750 people liked the article.

(I left some comments criticising his arguments and stylistic touches. He deleted the comments and banned me from commenting. Freddie: I do appreciate your thick skin.)

So no, I don't think everyone already basically accepts the principles underpinning EA. Not when this guy is arguing that you do good by looking in your heart and being compassionate, and there's something majorly sus about someone who actually wants to quantify how much good one charity is doing relative to another.

You can see the same essential dynamic play out whenever (and I'm sure you've had this experience personally on plenty of occasions) someone proposes a policy ostensibly intended to combat rape/racism/the spread of Covid/child porn/drug overdoses/whatever, you argue that the policy will have no impact on the problem in question, and someone immediately retorts "what, so you're saying rape/racism/Covid etc. isn't a problem?" This exact exchange has played out for me so many times that I have to assume there's a significant proportion of the population who really do believe that the intentions behind a policy or action are all that matter, and how effective that policy or action is at achieving its stated goal is just the fine print. This is the whole reason the politician's fallacy (1. We must do something 2. X is something 3. Therefore, we must do X) is so seductive.'


1. I think you vastly underrate non-EA charity. There are legions of non-EAs who 1. donate significantly and consistently to charity or make it their life's work, 2. make good faith attempts to do so as well as they can (even if it's according to e.g. scripture or trusted authority, rather than rationalist calculation), and 3. actually follow through on them. This is common in most parts of the world, and the fact that it's common is at the heart of DeBoer's objection. Maybe it's not common where you live or in your general circles. I guess you're not likely to see troupes of monks in the Bay Area. I don't think that in the US you get a form with your payroll, with facts about various charities and retirement account options, letting you choose which to automatically send money to each month.

2. You gloss over the actually unusual differences between EA and non-EA in this essay. You blithely assume consequentialism is the best approach to determining effectiveness in point 2. You offhandedly say that non-X-risk and X-risk charitable efforts are actually really similar, with "a lot of assumptions". It's precisely EA's applying a radical form of consequentialism with an unusual set of assumptions that is being criticised, yet you just skim past them and fail to address the issue.


On point 2, I think the "consequentialist reasoning" part, along with some of your other assumptions for caring about x-risk -- valuing foreign lives as equal to local lives, valuing some amount of animal lives as equal to human lives, valuing potential future people at all -- is an important identifier for the EA position. Many people don't share those assumptions, and so animal welfare charities and x-risk charities are "obviously" worse than global health charities.

I used to think that donations to x-risk charities were bad, because I saw them as taking money away from helping people today. But over the years, I've seen people post about leaving the movement and not donating any of their money to charities at all, and now I think if the x-risk stuff keeps people in and donating, that's a good thing. Going by the graph in the last post, more than half of the money goes to stuff I'd agree with, and not all my yearly donations go to global health, either.


"Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. "

I think a lot hinges here on whether you use an intertemporal discount rate. And you should.
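
As a sketch of how much hinges on that choice, here is a minimal illustration of exponential discounting; the 3% rate and the time horizons are illustrative assumptions of mine, not figures from the post or the comment:

```python
# Present value of one life saved t years from now, relative to one
# saved today, under exponential discounting at annual rate r.
def discount_factor(r: float, t: float) -> float:
    return 1.0 / (1.0 + r) ** t

for years in (10, 100, 500, 1000):
    print(f"{years:>4} years out: weight {discount_factor(0.03, years):.2e}")

# At r = 3%: 10 years ~ 7.4e-01, 500 years ~ 3.8e-07 -- far-future
# lives round to nearly zero, and x-risk loses to present-day giving.
# At r = 0 every horizon weighs 1.0 and the conclusion flips back.
```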


Not directly related to the post, but: one of my copouts is that anything outside a really tiny time horizon is Knightian uncertainty. You can talk about probabilities, but you can't meaningfully calibrate on them, so they are meaningless.

Some cherry-picked examples:

- Veganism is not as good as semaglutide at reducing animal suffering. Maybe I should have saved money spent on Impossible burgers and donated the difference to Novo Nordisk.

- Raising the profile of MIRI resulted in the best and brightest minds working on AI. Maybe if Eliezer had finished high school and gone on to become a decision theorist in academia, rather than an AI x-risk popularizer, AI would still be at the level of Alexa.

- Fighting global warming may end up net-negative for life on Earth: think how much richer the biosphere was 55 million years ago, compared to now.


I think the AA analogy is interesting here, because AA makes a big point of being run in ways that are almost opposite to EA in some respects. I'm thinking of the Twelve Traditions in particular:

https://www.aa.org/the-twelve-traditions

For example, tradition 11: "Our relations with the general public should be characterized by personal anonymity. We think A.A. ought to avoid sensational advertising. Our names and pictures as A.A. members ought not be broadcast, filmed, or publicly printed. Our public relations should be guided by the principle of attraction rather than promotion. There is never need to praise ourselves. We feel it better to let our friends recommend us."

There is a big contrast with Will MacAskill going on the Daily Show, or the many campus groups trying to hit KPIs for new recruits...

Or tradition 8: " Alcoholics Anonymous should remain forever non-professional. We define professionalism as the occupation of counseling alcoholics for fees or hire. But we may employ alcoholics where they are going to perform those services for which we may otherwise have to engage nonalcoholics. Such special services may be well recompensed. But our usual A.A. "12th Step" work is never to be paid for."

Big contrast with the much-discussed shift from "earning to give" to "EA jobs".

Of course it would make no sense for EA to try to adopt the Twelve Traditions. They only really make sense if you think your primary job is not to actually do anything, but just to make space for a Higher Power to work, which is not the EA philosophy at all.

Still, AA has been extremely successful over almost a century at keeping itself running and maintaining its core purpose, without any major PR blowups and without turning into a cult, despite serious dangers of both when running substance abuse groups. So there may be something to learn.


At this point I think EA is a pretty broad movement. It consists of good people doing sensible things, good people doing silly things, bad people doing silly things, and bad people doing sensible things.

I wish to say hooray for the good people and the sensible things, and boo for the bad people and the silly things, but it seems like people get caught up in saying hooray or boo for the movement as a whole.

All of this is also true for most things, but it's even more true for Effective Altruism, which has one of the broadest good-bad and sensible-silly ranges of any movement I know.


While I felt this article from FDB was a piece of crap, I think it reflects the (aware) public's attitude about EA and is therefore worth addressing. Of course, the point of essayists like FDB is to be correct, not just to reflect (describing is fine) the public's vibe.

Based on both talking to people and the helpful responses to my comment on the last piece about this, my conclusion is that people feel that EA is a way for a certain sort of person not only to look down on them for being dumb but also to excuse themselves from having to follow the usual rules. Sometimes those rules might be pretty literal (SBF ignoring the law), but more often it's just the rules about what kind of justification is needed before they get praise for what appear to be self-serving and unlikely notions of what doing good consists of.

And some of this is inevitable. EA starts with two strikes against it in that it's a view championed by weird people that asks us to be open to the possibility that the most good might be done via counterintuitive actions (indeed, I think that's how I'd distinguish EA from merely the idea that ROI on charity matters).

However, I think another part of this is the common problem that movements/ideologies face in accurately portraying themselves to the public. Indeed, I think these problems aren't unique to EA but are pretty much the same problem much of the left faces in presenting itself to the right.

The problem is that, in any tribe or movement, there is a tendency to stop talking as much about the boring shared ideas and talk more about ones that are more controversial in the movement -- and reporters only make this worse, since you can only write one short article saying the boring centrist (e.g. GiveWell) thing is still broadly supported.

On its own I don't think this would be that big an issue except for the tendency to give fellow tribe members the benefit of the doubt and thus to avoid calling them out to the same extent one might do for non-members. For the left in the US this happens in how it responds to people who get the science wrong in arguing for environmentalism or for some racial/gender justice position even while jumping on the people who do the same in the other direction.

Especially when the people doing the criticism are in some fashion better educated or equipped to make such criticisms, it leads to an impression among outsiders that it's all just a sham -- heads I win, tails you lose. The Google bro's summary of the science wasn't quite up to the standard of a peer reviewed journal, so he gets ravaged even while the same academics say nothing about the much more egregious errors of the people on their own side. With EA it's the sense that the skepticism that might be applied to the same arguments from a traditional religious or common sense moral view doesn't seem as apparent/strong when it's someone pushing a theory of AI x-risk or some longtermist option.

What makes this a difficult problem is that it doesn't necessarily reflect any kind of hypocrisy within any individual. There is just a tendency not to want to be an asshole towards, or just not to expend your limited time yelling at, the person you see as fundamentally having the right idea but making some mistakes, so the very real disagreement just isn't voiced (e.g. when pushed, the vast majority of lefty academics do disagree with bad environmentalism or bad social science in the service of social justice, but don't feel the same need to make a fuss). With respect to EA the issue comes about because, from the inside, even those of us who are very skeptical of many AI x-risk claims or longtermist propositions still see those people as engaged in the right sort of approach even if they've made a mistake. Yet, from the outside, what it feels like is that there is one rule for you and another rule for me.

I don't think there are any easy fixes but I do think there are some strategies we might try (these are just guesses). We can try to do more to present EA as less of a lecture at outsiders about what you should do and more of an invitation to consider what they might think are surprising or counterintuitive conclusions about how charity should be done. So create the sense that we are asking (and thus valuing) them about what conclusions they would reach, not just telling them what we've decided.

A more fraught option is to be less collegial in public facing interviews and portrayals. We could talk up more how the people who reach the wrong conclusion about AI x-risk or whatever are effectively killing people (one way or the other), rather than treating their views as serious arguments (basically, try to hide the visibility of the EA internal Overton window). But the problem with this is that it has real costs to the movement in terms of internal cohesion and our ability to calmly engage with each other.

But given that highly paid motivated people in politics haven't really solved the issue it seems like a hard problem.


This is just another Bay Area land grab, same as they think they invented and now own sourdough bread baking. Almsgiving is as old as the Book of Proverbs and is one of the pillars of Islam, among other things. Spending money in a way which gives maximum value for money is a concept as old as money, and if you want to talk about doing something then actually doing it and doing it effectively really are pleonasms, in that not doing it is not doing it, and doing it ineffectively is also not really doing it.

I don't know what the worldwide charitable spend is vs the EA spend, but probably 1000x as much? Does e.g. Gates regard himself as EA?

I comment here in my own name in the hope it keeps me polite, but then there's lots of people with my name so I am happy to out myself as someone who consistently gives over 3% p.a. and less than 10% to charity, and why would I call myself EA? Any more than taking the odd precaution to avoid dying prematurely makes me a Thielite or Johnsonist.


Yay for Scott not defending utilitarianism. Freddie is basically right (if I interpret him correctly) in saying EA is fine if it doesn't bring along this dubious philosophical baggage.


Personally, this recent kerfuffle has made me re-think my objection to 'Wokeism' (to which Scott alludes in similar terms). I seem to have been focusing on the headline-grabbing voices from the fringe, then directing my ire at the broader cohort and every principle involved. This may be a naive perspective, but I can't shake the feeling that most of this argument is people talking past each other. One side over-emphasises the fringe people/ideas and the other remains focused on the core principles.

At least this simple reading might help account for why EA critics don't seem to grasp what seems obviously distinctive about a charitable approach driven from the head rather than the heart. They're focused on the people involved. Which is why Bankman-Fried is constantly invoked.


"Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture more powerful."

I don't think it need be about what they *would do* with an 'extra unit of power'. That's more speculative and hypothetical than it need be.

Rather, it can simply be about what they are doing currently. This allows you to observe that anti-racists might have historically done x, y and z but are currently doing [whatever low value or harmful thing you would say they are doing now], while their historic achievement of ending slavery remains unopposed and isn't in any sense something they are currently maintaining. Meanwhile, effective altruists are still currently saving lives from malaria, advancing their campaigns for AW etc., which wouldn't be done otherwise.


Personally, I strongly agree with deBoer here.

Also, I believe your 3 points can make sense only to Americans (at least according to my own stereotype of Americans, as people who hold some variation of the "anarcho-capitalist" view as a default and transparent ideology).

1. "donate some fixed and considered amount of your income to charity"? You mean, pay taxes? And by the way, 10%? Those are rookie numbers.

2. "Think really hard about what charities are most important"? Like a hedge-fund would? This is just ridiculously naive. This kind of "hard thinking" requires training and expertise that takes years to obtain, and to do it properly I'll have to dedicate most of my time just for that, and I'll have to secure the backup and support of other professionals and administrative assistants. In other words: I'll have to get a career in the civil service.


In your comment thread on Freddie's post, you mentioned a hypothetical person who "tried really hard to figure out the best charity and donated to the endowment for a performing arts center or something. I would want to talk to them and see where our assumptions differed."

I'm not exactly that person, but I'll take that as an invitation anyway. The part of that comment that really jumped out to me was the idea that the performing-arts-center donor might be motivated by "some kind of galaxy-brained idea for how the plays at that performing arts center would inspire a revolution in human consciousness which would save millions of lives."

This jumped out to me because my own thoughts on the value of charitable giving are almost diametrically opposed. I like the idea of donating to performing-arts-center-type charities, since at least some of these concretely embody what I value most about humanity. The main motivation I would have for donating to a charity that tries to save lives is the "galaxy-brained" thought that just maybe, the people whose lives I save might go on to build even more performing arts centers than I would have been able to fund with my own money directly. But it's pretty hard to convince myself that this is actually true, so if I really were to rationally reflect on quantifying the impact of my charitable giving (in relation to my actual values), I would be less rather than more inclined to donate the money to mosquito nets.

I do wonder whether something along these lines might be behind the aversive reaction that so many people have to EA discourse, even in its "mosquito nets" form. Many people have an uneasy sense that maybe they ought to care more about saving lives than about donating to their old school, a performing arts center, the local community sports club, etc. But they don't really care more about saving those lives, and they don't like having to think about that fact. EA discourse forces people to reflect on the discrepancy between what they actually value and what they think they are supposed to value, which is a deeply uncomfortable thing to reflect on.


I tried to become part of the EA community: from about 2020 I gave part of my income, first to Evidence Action, then to GiveWell. And then in 2022 sanctions were imposed against Russia, which made any transfer of money from Russia abroad as difficult as possible. And I gave up.

In some ways, this situation only increased my sympathy for EA, because it reminded me of how stupid government decision makers are compared to effective altruists. But partly it also made me think about how EA is a movement for the First World, totally not designed for similar situations that happen outside of it.


> Freddie deBoer says effective altruism is “a shell game”

If his claim amounts to saying it is a vacuous concept, because it is shared by practically everyone one way or another, then I guess the main clue to rebutting it is in the word "effective"!

I must say I've never understood why someone can feel guilty about the worse luck or predicaments of others, unless of course that person had a role (by commission or omission) in unfairly bringing those misfortunes about! Guilt in that context seems a backhanded way of putting themselves on a pedestal by the conceit of taking upon themselves responsibilities beyond what they truly own.


I feel like I basically agree with effective altruism on its own merits (donate to charities that do a lot of good!). I think the cultural part that can be grating is the expectation that you show your napkin math to someone else and have to listen to an argument about how your math undervalues s-risks. Moral persuasion, in that sense, can feel a little bit like a cult or a religion.

Peer pressure to take the Giving What We Can pledge (something most people probably agree with in principle, to FDB's point) is a bit different, and I accept it's more foundational than the result of the napkin math; it just gets less attention.


My take:

* FDB, EA, and a handful of associated communities live in a bubble, and share values that are very rare in the world outside it. EA tries to be big-tent within that bubble, and fails, because people like to loudly disagree about all sorts of things.

* EA basically functions as the charity arm of the group, the brand, the sub-community.

* FDB, as a helpful contrarian, dislikes many aspects of the values and behavior of the people in the group. He likes the expressed principles behind EA and general style of reasoning, but incorrectly assumes they are universal, because bubble. It's too "obvious" to be relevant.

* Having dismissed the expressed values, FDB focuses on the group's actual actions and priorities.

* The people in EA are "weird" (meant in a nice way, sorry). The kinds of things EA supports are, surprise surprise, the kinds of things that these kinds of people conclude are valuable.

* If there were no EA brand, these people would not suddenly find themselves valuing different things and giving charity towards those instead, they would find themselves either giving less charity or giving to those same causes regardless, possibly in a less effective manner.

* FDB seems to miss something critical: EA did not make these people into weird people. They started out as weird people. Removing the brand will not eliminate all their "mistakes" (divergences from what FDB considers sane behavior) and turn them into EA-but-less-incorrect.

* How might one judge EA? In absolute impact of the group, it's pretty positive. But: If you start with the assumption "there are this group of smart people who are willing to donate lots of money to charity" and then consider the "EA" part to be the "...and then they donated to areas that are less than ideal, from the perspective of my own values", then EA looks like a problem.

I myself have plenty of disagreements with EA, but I recognize that they're not people who would suddenly have better values without it. EA causes these people to give more charity. More charity is good.


There's this joke everyone makes when they first hear about "Evidence-based medicine." One version is, "wait, as opposed to what!?"

But eventually you have to admit this was a rallying cry for a reason.

The "criticism" section of EBM has a spooky symmetry here:

https://en.wikipedia.org/wiki/Evidence-based_medicine#Limitations_and_criticism

Maybe this is a sort of ring cycle of epistemology.

A: "Hey we need to more intelligently do X."

B: "Well of course we should do X smartly, everybody thinks so!"

A: "Ok but there's still this problem..."


> Systematic Altruism

Oh, I see. You want the movement to share your initials, thus kabbalistically making you its natural leader! We're onto you!

(Good name though.)


The big problem I have with EA is that, as a movement, it has the Silicon Valley mindset, and I'm instinctively drawn to the DC mindset (and FdB, being a Brooklynite, has a different mindset again).

The Silicon Valley mindset is about building organisations that can do things, whether those are charitable or businesses. The DC mindset is about taking relatively small amounts of money and using them to leverage huge amounts of government money and power.

Climate change is an x-risk, but it's not one where modest amounts of charitable money have leverage. If OpenPhilanthropy could buy 20% of the land area of Arizona and cover it in solar panels, then that would make a difference, but that's spectacularly outside the budget envelope of any charitable entity. There might be things that can be done in research (industrial heat, energy storage, flight, shipping) but that doesn't seem to be a place where relatively modest charitable funds could make a big difference, and there looks to be plenty of commercial investment in that research anyway.

But: you all live in California, there's a massive project that would have a big impact on carbon emissions if it was successful (CAHSR), and there is zero effort going into making that spending cost-effective. A few million dollars of lobbying budget aimed at specific decisions (like the massively overbuilt elevated sections that could be ground level) could have saved billions of dollars from the budget. It is always politically easier to spend money inefficiently and buy off all potential political opposition (if you build it on the ground, it splits farms; the cheap solution is to force farmers into land-swaps so they end up with a farm all on one side of the line, but that annoys those farmers and turns them into political opponents); if there's effective lobbying to spend money efficiently, then officials have to pay a price either way, and will usually choose to do the "right thing", i.e. spend the money efficiently.

There's also lots of federal lobbying (easier permitting for solar panels, easier permitting for long-distance electrical transmission cables), but the present EA institutions have shown no particular facility for lobbying. I would love to support an "effective green policy lobbying" think-tank that did assessments of how much difference individual policies would make to carbon emissions and recommended things like "accepting a compromise where you trade-off building a couple of oil pipelines for a bunch of HVDC electrical transmission lines" because the oil pipelines don't increase oil consumption by much but the HVDC increases the possible solar share of the electricity mix significantly. The research on this is done, but there's no lobbying organisation; the current green organisations are oriented around symbolic wins (like blocking a pipeline) and activist thinking, not around pragmatic cost-benefit analyses and lobbyist thinking. I think that the cost-benefit analyses are very much an EA-mindset thing, and climate change is a clear x-risk, so should be right in the EA wheelhouse. But the Silicon Valley approach is absolutely the wrong one for climate change: the DC approach of building a "highly respected think-tank" and having lots of semi-associated lobbyists is what works for this sort of issue.


Once I got really upset here about how come I’m not donating 10% of my income. I had reasons, felt like I’m supporting all these people and paying all these taxes and just ... how? It took growing seriously in my faith to realize my enormous capacity for self deception and self delusion. Realizing that atheists followed God’s will better than I did was a serious wake up call.

Having said all this, the X-risk stuff is where I start getting off the boat, because all that matters there is, “do you have the right consequentialist model?” I understand we have lots of reasons for thinking we do, but I think this falls prey to the trap of “doing the most legible good while ignoring illegible problems.” If we spent all the money currently aimed at guarding against AGI on lobbying to bring regulatory clarity to prediction markets instead, that might do more good against AGI risk than focusing on the AGI risk directly.

I like the idea of thinking seriously about good. But does EA _really_ do this? It never tries to define or reason about the nature of good, and instead assumes that goodness is a property that obtains to various degrees for some world states and not others, but the world states come about through some intricately branching process which has zero relation to good. Yet the thousands of years of prior work on this topic - what is good - produced numerous independent groups of researchers concluding something like, “the physical mechanism is so self-regulating that evil doesn’t last, so don’t worry about the future and do the best you can where you are.”

In short

- I think EA haters who aren’t donating 10% should seriously question whether they are being honest with themselves

- I think part number 2 in your analysis is THE ENTIRE GAME and being wrong there isn’t a small deal, and

- EA totally ignores the philosophical and ontological foundations of good, then focuses on the easiest-to-measure things, uses a basis that I think is wrong, and ultimately limits the group’s effectiveness


It’s not really historically accurate to say slavery was ended by “anti-racists”.


> wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves

This is plainly false. It is not "anti-racism" that ended slavery. Not in the world at large, not even in the particular context of the US.

"Anti-racism" doesn't get that credit. Opposition to slavery stemmed from an application of the Golden Rule, which is indeed ages-old. Devarim (Deuteronomy) speaks of setting your slaves free, because "remember that you were slaves in Egypt".

Lincoln was firmly opposed to the concept of slavery, while also being a firm believer in what would today be called "differences in statistical distribution curves". He did end slavery in the US, freeing 4 million slaves at the cost of 0.6 million lives. But he also wanted to relocate all blacks to Africa (Liberia) or Central America (Chiriqui) or *anywhere* that isn't US. He was ready to spend any amount of the federal budget to achieve this relocation, back in times when federal budgets had been spent very sparingly. Lincoln's message to blacks was: "Your race suffer from living among us, while ours suffer from your presence. It is better for us both, therefore, to be separated."

I really appreciate your writings, but the claim that *anti-racism* ended slavery, is laughably false.


> In other words, everyone agrees with doing good, so effective altruism can’t be judged on that. Presumably everyone agrees with supporting charities that cure malaria or whatever, so effective altruism can’t be judged on that. So you have to go to its non-widely-held beliefs to judge it, and those are things like animal suffering, existential risk, and AI. And (Freddie thinks) those beliefs are dumb. Therefore, effective altruism is bad.

Wow, this is a really uncharitable reading of Freddie's point. The problem is not that "everyone agrees with doing good" just like EA; it's that the EA movement wants you to believe that "doing good" effectively is really only possible if you subscribe to their entire ideology -- which is obviously untrue, given that other charities exist, have existed for a long time, and have been reasonably effective (though obviously ineffective charities have also always existed). But EA wants to appropriate their achievements for itself. Saying "if you want to do good, and to do it in the most efficient way possible, then you're basically a member of EA" is as sleazy as saying "if you want to love your neighbour and care for the downtrodden then you're basically a Christian".

Yes, I understand that technically the "social technology" of EA is separate from its rather specific culture; but it is only separate in the same way that believing in the divinity of Christ is separate from going to Church on Sundays, reading the Bible, talking to other Christians in "Christianese", praising the Lord, etc. That is to say, while the concepts are distinct in some academic/philosophical sense, they are not distinct in practice, and pretending like they are is borderline dishonest.


When you have firehoses of money going to "charity", asking people to a) give *more* money and b) devote more time and mental effort to charity is a hard ask.

Yes, there are firehoses. Look at the largest line items in both federal and state (or provincial in my case) budgets. They are charity (I am excluding defense in the federal case because it is obviously actually the job of government). You are already giving far more than that 10%, involuntarily (I assume you are not receiving government money, although it's hard to tell these days).

Now, agitating to redirect the firehoses might do some good.

Also: valuing foreigners' lives more than those of your fellow citizens is not a good idea. Just so you know where I'm coming from.


Perhaps the best way to put my position is "The basic rubric of trying to find charities that are most efficient, and of demanding evidence-based charitable action, is really good and should be uncontroversial; however, EA also generates a ton of esoteric stuff and an attachment to developing more, and this has obvious less-than-ideal effects when it comes to spreading the philosophy. So EA advocates should work to minimize those aspects and fixate on the bed net/kidney donation stuff that has the most concrete impact and best optics."

I would argue that, for example, while Dylan Matthews has been a good popularizer, he's also tended to front the crazier stuff, which as an EA advocate is not ideal.

And look, I'm not saying that this has no risks or costs - maybe the more esoteric stuff really would prove in the long run to have the most positive impact. What I'm advising could be a mistake. But as EA recovers from the fallout of the SBF scandal, and given that public opinion is so essential to donations, I think EA leaders and organizations should do their best to center the conversation on the mundane but essential stuff. And maybe have a critical conversation about whether longtermism should be spun off as a separate enterprise that EAs can get involved with or not.


I would question the premise that a rationalist approach to altruism (and, more broadly, to ethics) is necessarily a good thing.

1. Certain tools help with some problems at every margin: if we need to chop down a lot of trees, adding more properly managed people with axes will probably help.

Other tools for other problems help at some margins, but are pretty harmful at other margins. This happens in many areas, see, e.g. the "uncanny valley" discussions in CGI.

Scientistic (aka "rationalist", "systematic", "effective", "with a spreadsheet") approaches proved to be a great tool in many areas, but they are certainly prone to "uncanny valley" issues when applied at some margins to some problems. For example, many cities are still blighted by the disasters of "rational" urban planning of the 20th century. These disasters are the result of the work of well-educated and well-intentioned people with access to vast resources, but they happened to work at a margin where their approach was harmful, even though the same approach is indispensable at the lower margin (you need to put some calculation into the erection of structures) and possibly useful at a higher margin.

Ethical problems are a class of problems that are obviously prone to the similar issues for some tools. For example, if I have a question about mathematics/zoology/ancient history, approaching a Berkeley professor of the relevant discipline is probably a good idea. If I have a question about ethics, asking an ethics professor is probably a very bad idea - they are a person who made their career researching quirky and edgy ethical questions and not giving correct answers to simpler questions.

So it is not unreasonable to worry that a rationalist approach to altruism might be currently at the same uncanny valley where applying more of a good thing actually leads to a worse outcome. Is it?

2. One of the signs that one is in the uncanny valley is the emergence of paradoxes nearby. Paradoxes such as Pascal's mugging or the Utility monster can be viewed as cliffs in this landscape, where applying logical reasoning leads to a disastrous conclusion. If there are cliffs nearby, we know that the landscape is complex and not a gently upwards sloping plain. Of the multiple known ethics paradoxes, Pascal's mugging is the most relevant, as x-risk discussions are obviously quite vulnerable to the same exploit. Also, a version of the Utility monster can easily be constructed around animal welfare problems.
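
For readers who haven't met Pascal's mugging, a toy expected-value calculation shows the cliff this comment is pointing at; every probability and payoff below is invented purely for illustration:

```python
# Naive expected value with unbounded utilities: the payoff can grow
# faster than the probability shrinks, so the least believable claim
# ends up dominating the calculation.
claims = [
    (1e-6,  1e9),   # one-in-a-million claim of saving a billion lives
    (1e-12, 1e18),  # far less plausible, but the payoff grew faster
    (1e-20, 1e30),  # ever less credible, ever higher expected value
]
for p, payoff in claims:
    print(f"p = {p:.0e}, payoff = {payoff:.0e} -> EV = {p * payoff:.0e}")

# EV rises 1e+03 -> 1e+06 -> 1e+10 as credibility collapses, which is
# why unbounded expected value needs some patch (bounded utilities,
# leverage penalties, etc.) to avoid walking off this cliff.
```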

Another worrying sign of the decreased marginal utility of your tools in ethics is the obviously unethical deeds that get done when justified by greater goals. You dismiss EA's association with SBF in your previous essay as insignificant compared to EA's achievements, but fail to address the issue that SBF was not a random crook who happened to donate to EA. He was driven by a pretty rationalist but crazy flavour of EA (basically, he explained that since he would eventually donate all his wealth to very good causes, he was obliged to take even-odds risks at the maximum scale, because in some universe all his bets would pay off and he would solve all problems there).

So a real worry about EA is that it is a bunch of well-intentioned and clever people who have wandered far enough into the uncanny valley of applying spreadsheets to ethics to be actually moving in the wrong direction.


Your definition of EA still falls prey to FDB's criticism: nobody would disagree that *actually* donating a fixed amount of your income to an effective charity is a good thing to do.

The distinctive part of EA is that it corrects what counts as effective charitable giving. The ordinary person who agrees with your definition is anthropocentric: they believe that only human well-being matters, so any charitable cause that goes toward animals (save the cute puppies on ASPCA commercials!) doesn't make sense to them. EA is distinctive in denying that only humans and cute puppies matter.

EA is also distinctive in that it cares not just about currently-existing animals but ones that will exist well into the future. Ordinary people agree with this, so they sometimes feel guilty when polluting the environment that future people will have to live with. But EA takes this worry much more seriously, so you get all the dorky sci-fi platforms that FDB thinks are dumb. EA advocates must bite the bullet on this: their commitments entail these weird platforms.

That's either a reductio for people like FDB or an indication that morality demands weird things.


The woke are literally trying to re-segregate schools. I object to any concession that calls them “anti-racist”, or lumps them in with the civil rights movement or the Civil War-era Radical Republicans. They can only be considered anti-racist if you accept their own redefinition of “racism” to mean something almost diametrically opposite of what every American who wasn’t a studies professor understood the word to mean prior to 2013.


> Why should people judge effective altruism on its big successes, but anti-racism on its small failures?

> Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture more powerful.

I notice that the reasons I support progressive causes are pretty similar to the reasons I support EA and I do think there is somewhat of a symmetry here.

Consider a person who really dislikes spending money on AI alignment and is anti-EA for this reason. You tell them that EA will not spend their money on AI alignment if they donate it for mosquito nets, so there is no reason to oppose EA. The person, however, isn't persuaded. They feel that not opposing EA will generally make it more powerful and bring more attention to the whole cluster of memes associated with it. And that you know it. And the reason why you are okay with that is that you are at least somewhat fine with the idea of financing AI alignment research.

Likewise, I may try to persuade you that you shouldn't oppose "wokeness", as there is lots of good it is doing and has done, and that if you support some of their other causes they will not transfer the money to the "strengthening of cancel culture fund". I don't think there even is any fund like that. Will this be persuasive for you? Or will you immediately think that the reason I'm coming with this argument is that I'm at least neutral towards deplatforming?


"Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. But it’s a temptation you run into. "

That is the big problem. There are plenty of people willing to argue about the future value of a dollar, and so conclude: don't give money now, give it later.

The problem with that is that 'tomorrow never comes'. After all, if the putative value of my dollar is going to be greater in 2030 (and so do more good/save more lives), then I hang on to my money now and don't donate until 2030. However, when 2030 rolls around, the same argument applies: hang on to my money until 2035. Rinse and repeat until I die or spend all my money on myself. And EA has a lot of this type of number-crunching philosophy that tangles people up.
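
To put numbers on that regress, here is a minimal sketch; the 7% return, 3% cost inflation, and $5,000 cost-per-life baseline are all assumptions of mine for illustration:

```python
# Donate $1,000 now vs. invest it and donate later, assuming the money
# compounds at GROWTH while the cost of saving a life inflates at
# COST_INFLATION from an assumed $5,000 baseline.
GROWTH, COST_INFLATION, COST_NOW = 0.07, 0.03, 5_000.0

def lives_saved(donation: float, wait_years: int) -> float:
    pot = donation * (1 + GROWTH) ** wait_years
    cost = COST_NOW * (1 + COST_INFLATION) ** wait_years
    return pot / cost

for wait in (0, 5, 10, 20):
    print(f"wait {wait:>2} years -> {lives_saved(1_000.0, wait):.3f} lives")

# 0.200, 0.242, 0.293, 0.429: whenever GROWTH > COST_INFLATION the
# model says "wait", and it says the same thing again at every future
# date -- the 'tomorrow never comes' loop described above.
```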

Other charities are out there with equally big and vague aims about 'ending hunger' or 'global poverty' or the rest of it. However, if they collect donations for flood relief, as well as talking about long-term aims, they do actually use that money (or a good lump of it) on flood relief. They don't sit on the donations, giving the explanation that by this economic theory or that calculation, hanging on to the money is even better for the poor in the long run, so that people die while the money to relieve them is being endlessly held in reserve for the better return.

EA's public perception problem comes when you go on about long-termism: why should I care about people not yet born, who may never come into existence five hundred years from now, instead of doing something to feed the hungry and clothe the naked who are suffering right this minute? It does sound like "jam yesterday and jam tomorrow but never jam today".

I'm glad most EAs are the types who *will* donate to mosquito nets and immediate relief of need, as well as the AI risk/X-risk stuff. But I think that's *despite*, not *because of*, the philosophy.


I'm curious who has made a strong case for why giving to charities is good in the first place, with a strong steelman case that it's better than the Elon Musk way of building businesses that solve problems in a sustainable (i.e. profitable) way, or at least investing the money in such businesses.

Magatte Wade e.g. makes a strong case for the latter as a solution to Africa's problems.

If there is no such strong defence, the problem with EA seems to me that it's just not effective.

I'm personally invested in, and do entrepreneurship in, developing countries. There is an utter abundance of problems that can be solved, an utter demand for help from talented foreigners who bring useful skills (with a bit of humility that they're not a saviour, just doing business), and an utter absence of supposed EAs, who prefer cozy Bay Area circles (even though many areas in SF are far worse than conditions in developing countries) over actually working in a foreign culture and tinkering to do things that effectively and visibly do good, and get direct feedback from customers.


I agree with you and disagree with Freddie for the reason you give here, which I think could be more usefully articulated in one sentence: EA is primarily a set of practices, not beliefs, and those practices are actually quite unusual and highly laudable.

With that out of the way: I do also think that, to the extent EA also promulgates a set of unusual beliefs, many of those beliefs are wrong. In particular, the way EAs frame AI doomerism and existential risk more generally is so straightforwardly a restatement of Pascal’s wager that I’m shocked they can’t see it. If my favourite book posits an infinitely bad outcome that must be avoided at all costs, it’s worth taking the precautions prescribed by my favourite book against that threat, even if I acknowledge that the risk may be infinitesimally remote. This logic breaks down, of course, when you consider that many people have a different favourite book, and that there are an infinite number of these theoretically possible but infinitesimally unlikely risks, and that society would grind to a halt if we took all the precautions required to ward them all off. Furthermore, when one considers the officious and counterproductive behaviour of those who have embraced the “x-risk” worldview (church ladies who take the threat of eternal damnation seriously, AI regulators who take the threat of superintelligent AI seriously), it becomes clear that there are severe social costs associated with these beliefs. And also, that adopting them makes you act like an asshole.

Perhaps you’ll tell me that, unlike the Bible, your favorite book (or Hollywood film franchise) is an accurate guide to how the future will actually play out, because you and your friends are “rationalists” who have carefully plotted out the probabilities and incanted all the proper spells and clutched all the proper talismans to dispel all of your cognitive biases. To that I can only sigh and roll my eyes.


Isn't there a deeper philosophical basis and way of thinking to EA that identifies it?

For instance, to take your first 3 points of definition - 1. 10% giving; 2. thinking hard about which charities are most important; 3. actually doing these things - they certainly separate it from 'universally held beliefs', but not from, say, serious Christian communities. The churches I have been part of certainly have done these things - and perhaps have helped people from a broader range of social and intellectual backgrounds do them!

Isn't the variety of 'consequentialist reasoning' used in number 2 actually the defining feature of EA? It's where EA has very obviously made a deep contribution, especially in encouraging people to use serious data-driven analysis of the impact of giving.

If we dig down into the philosophical underpinnings of that - a certain flavour of utilitarianism, for instance - doesn't that both explain what EA does badly, and explain why it is controversial? E.g. why AI risk or animal suffering seem like primary concerns to those in EA, much higher up the chain of priorities than they are for Communists or Christians or whatever?


The biggest problem I have with EA is its blatant attempt to convert altruism into a Veblen good.

Old-fashioned altruism always had its "holier than thou" people - it's not the least bit clear that it is beneficial to have the "be seen as rich" people added in.


This is helping address some of my objections to this movement, so thanks. I still have four basic problems.

1. A more minor point, but...in a world with increasing polarisation and toxic extremism, do you really think the attitude "[a]ny group with any toolbox has earned the right to call themselves meaningfully distinct from the masses of vague-endorsers" is a great one to champion? You're basically saying here, unless I misunderstand, that ideologues are better than centrist normal people, because they do things. (Obviously you're not saying *all* ideologues, but on the whole.) Should we really be encouraging more highly-organised, highly-subculturish movement-forming in the current world? Wouldn't the best, most productive and constructive path be to try to spread general moral principles like effective giving *as broadly as possible*, and as *least* tethered to both social subcultures and controversial social agendas as is practically possible?

2. More concretely, the elephant in the room for me is always the utilitarian dogma. I really think Effective Altruism *needs* to split into two different names for this reason. Effective giving and utilitarianism have as much to do with each other as methodological naturalism and philosophical naturalism do, despite the conceptual overlap in both cases. Conflating "scientist" and "atheist" would, I think most would agree, be grossly offensive to the thousands of scientists who believe in God. You'd be telling them that it doesn't matter how rigorously naturalistic their scientific work is; if they don't agree with the view that scientific natural laws are literally all that exists, they're not real scientists! Similarly, much of EA comes dangerously close, in a motte-and-bailey related way, to implying that one cannot be called charitably effective, or promoting good outcomes, unless they believe that outcomes are literally the only morally relevant thing in the universe! And that's the attitude that can be reasonably called cult-like.

3. I appreciate you explaining why you don't think this same reasoning applies as a defence of wokeness. But I still see a huge potential motte-and-bailey here. Regardless of how often it happens, it's in principle very easy to say to your friend one week "you should become an effective altruist, it's *just* the attitude that giving should be effective!" and then the next week "what? you don't believe in AI-risk? you said you were an effective altruist! traitor!". I find it hard to believe this doesn't happen often in practice. And while I get the point about difficulty with the ambiguous nature of language, don't you think this conceptual problem should at least be clearly acknowledged and guarded against?

4. All of the above were very theoretical. My main practical objection to EA is the longtermist thinking, and I utterly reject the idea that this in any way naturally follows from thinking logically about impacts. Consider the trolley problem. I am generally against turning the trolley in the original case, but I can certainly see why someone might, and I wouldn't condemn them for it. But imagine we change it so that the trolley merely has a *25% chance* of killing the five people on the original track (otherwise no one dies), and you can turn it to remove that chance and with certainty kill the one person on the other track. At this point, I struggle not to think of someone who would turn it as a sociopath. They'd be an unusual kind of *ethical* sociopath who is following a moral code, but one who still seems to completely lack the fundamental moral intuition that a certainty of death is incomprehensibly worse than an unlikely possibility of it. Saying "well, 1 life lost for sure, or an average expected loss of 1.25 lives, clearly the first is better!" is horrific reasoning, literally treating that one person as nothing but a statistic to be weighed against another statistic. To allow (or worse, cause) a person who exists now to actually suffer or die, in order to reduce a small probability of harm to everyone, is just morally reprehensible. And EA longtermism is entirely built on this thinking.
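(To spell out the arithmetic I'm objecting to - a minimal sketch using the numbers from the modified trolley problem above:)

```python
# Expected-deaths comparison in the modified trolley problem.
p_kill = 0.25                  # chance the trolley kills the five if not turned
expected_if_stay = p_kill * 5  # 0.25 * 5 = 1.25 expected deaths
expected_if_turn = 1.0         # one certain death on the side track

print(expected_if_stay, expected_if_turn)  # 1.25 1.0
# Naive expectation-maximizing says turn the trolley (1.0 < 1.25); my
# intuition says the certain death is incomprehensibly worse than the
# unlikely possibility of deaths, so the comparison itself is the error.
```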

I don't claim my moral intuitions about this can be proven correct, or even that they are correct. I do claim that the longtermist moral conclusions are about as far from "obvious if you think about it and look at the math" as it's possible to be.

Expand full comment

Yesterday you had me fairly convinced that I was being weird and annoying because I was judging EA on their vibes and not their substantive contribution. This post has reconvinced me that the vibes are freaking weird man.

"I think most of the people who do all three of these would self-identify as effective altruists (maybe adjusted for EA being too small to fully capture any demographic?) and most of the people who don’t, wouldn’t."

Not a chance in the world. Maybe "the people who do all three and are also in Scott's Bay Area social circle" are EA, but I cannot imagine being so wildly ignorant of the rest of the world that I assume anyone who does charity does it within the principles of my personal, admittedly small, charitable movement. This is literally like saying "nearly everyone I know who actually follows through on their charitable intent is a Southern Baptist, and therefore I must conclude that Southern Baptists are the only charitable people." I live in the Bible Belt! Of course most charity takes place through the church.

I know tons of people who regularly give to causes they genuinely believe are worthwhile, based on rigorous analysis of whether those charities actually match to those values. Not one of them would be caught dead identifying with the EA social movement.

Expand full comment

As someone who believes both in conventional effective charity and the value of attempting to mitigate AI-based x-risk, counterintuitive as it may seem given their consistent underlying logic, I think it would actually benefit both movements to become less publicly associated (while still overlapping in practice). My argument is as follows:

1. People are broadly in favour of conventional effective charity, even if they don't care enough or realise how important it is.

2. People are broadly worried about AI and happy about safety research, even if they don't really understand the risk categories.

3. When people advocating explicitly for effective charity start saying the best use of funds is ivory-tower research of exactly the sort their pals do, people understandably get very suspicious very quickly. Our political enemies can then very easily make out that the EA-AI-Silicon Valley cluster is a sinister monolithic cabal and discredit both causes associated with it. We should avoid this as much as possible by not co-branding them. AI safety research is very important and people should advocate for it, just not while wearing the 'EA hat'.

Expand full comment

> Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation.

Where can I find some of this analysis?

I don’t expect the average EA skeptic to find it convincing, because good analysis offers many jumping-off points. A motivated critic has their pick of objections.

But there is a Straw EA out there who basically relies on Pascal’s Mugging. Next time I see it deployed, I would like to be able to gesture at a more defensible set of assumptions.

Expand full comment

1. Christians do this via tithes (to maintain local churches and fund the church's charity programs) and often additional giving on top of that. Hell is also the ultimate x-risk, if you think about it. They don't choose careers based on it as much, though, but they probably personally volunteer more. EA just has a different aim, but both it and even rationalism sometimes mirror religion a bit unconsciously.

3 is true, but unfortunately people don't. SBF is just Jim and Tammy Faye Bakker; televangelism was kind of "effective evangelization" in using centralized modern tech to reach nationwide or worldwide audiences, but it proved to be extremely vulnerable to individual empire-building, hucksterism, and more. And Christians have to deal with that despite it being unfair to tar all of them.

Honestly, though, the riffing on EA due to OpenAI is weird to me. The other side is venture capitalists and Microsoft, lol; how on earth is EA worse than them? And for the past 10 years we've been grousing about internet technologies' unexpected negative effects: Facebook radicalization, Twitter mobs and cancel culture, YouTube and dancing to the algorithm, PayPal's arbitrariness in freezing financial support, crypto as e-waste and Ponzi schemes, etc.

Why are we all suddenly "trust techies with AI completely, full steam ahead!"?

Expand full comment

The motte is "do the most good according to whatever moral philosophy you hold" and the bailey is "do the most good according to consequentialist calculation." The motte is nearly tautological, but then the EA movement is nothing like unique in following it. For example, Mormons really do donate a tenth of their earnings and take their charity very seriously. If you want to promote EA specifically, that requires defending consequentialism specifically. In that context, it's totally fair to bring up things like x-risk or the fringes of animal welfare that really do seem to follow from consequentialist principles but that most people find strongly counterintuitive, to put it diplomatically.

Expand full comment

I have no beef with Effective Altruists - but I do think the concept of "Effective Altruism" becomes less useful the more you widen it to include more abstract things like AI safety.

Like, is donating to a Christian Mission "effective altruism" if I frame it as a rational calculation based on my percentage belief that hell is real, the expected Quality of Years Lived by a soul not in hell (this is incidentally a large number), and the number of souls a particular donation is likely to save?

(And to be honest: while 100% your average Christian is not going to frame it in those terms, I actually don't think it's too far from how many Christians would defend how they donate their money, in broad strokes)
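(For concreteness, a minimal sketch of that calculation in code - every number here is invented purely for illustration:)

```python
# Expected value of a donation to a mission, under the hypothetical
# soul-saving framing above. All inputs are made up for illustration.
p_hell_real = 0.10             # my percentage belief that hell is real
qalys_per_saved_soul = 1e6     # quality years for a soul not in hell (large!)
souls_saved_per_dollar = 1e-4  # souls one marginal donated dollar saves

qalys_per_dollar = p_hell_real * qalys_per_saved_soul * souls_saved_per_dollar
print(qalys_per_dollar)  # 10.0 QALYs per dollar, which would dwarf bed nets
```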

You can argue, yes, this is Effective Altruism, and I feel like if your definition of EA is "it's just about the mindset in picking causes", you kind of have to. It's hard for me to draw a fine distinction between that sort of argument and other fairly abstract "EA causes" like AI risk.

And I think that's an okay definition of EA, but maybe not a useful one? Because the result is that instead of focusing on malaria nets, you end up focusing on convincing everyone else of the assumptions that you're making (convincing people that hell is real, convincing people that AI is dangerous), and ultimately, it just ends up a lot like any other charity: you end up funding things like "awareness campaigns" to convince more people to join the cause rather than the more concrete stuff.

Whereas I feel like if you define Effective Altruism as specifically the concrete, measurable, short-term impact stuff, it becomes a lot more useful as a concept. The focus becomes on the effectiveness, on doing the most concrete measurable good.

That's not to say an Effective Altruist can't care about saving souls or safe AI, but I feel like that work should be considered separate: it's fine to be a Christian and an Effective Altruist, it's fine to be an AI Risk Proponent and an Effective Altruist, but it's probably not helpful to the concept of Effective Altruism if I frame my almsgiving or my support of AI alignment efforts as "effective altruism".

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

At this point, it sorta feels as if EA is like fantasy sports for people who don't like sports. You guys don't have long-winded debates about the merits and shortcomings of WAR ratings for pitchers vs position players, or how much money is too much to spend on Jalen Hurts in an auction draft. Instead, you argue about the relative value of non-profits and their various activities. Just like with sports, arguing about EA has become an end in and of itself; it is as much a part of EA as the 10% giving pledge.

Expand full comment

Sorry, I am not very convinced. When I first encountered EA, it was an attempt to use resources more efficiently, recognizing that much of the world of charity and philanthropy was corrupt, misguided, and truly ineffectual given the ratio of admin expenses to real help in a cost/benefit-assessed way. All well and good; taking to heart the realization that most charities were scamming people's desire to do good while having no operational way to effectuate this desire was a good thing. But how did this go from a nerdy intellectual way to do altruism effectively to tithing and 80,000 Hours and animal rights and global AI risk, etc.?

I respect the level of intelligence (or should I say rationalistic tendencies) and commitment that many in the EA movement have, but this is hardly protection from this inflationary hubris. Saving chickens counts, but thwarting the purpose of mosquitoes doesn't. Once you start down the slope of animal rights, it becomes hard to understand which animals to save and which should be destroyed. Local and farm-to-table is great as a kind of plaything to improve certain cooking outcomes, but when it needs a “philosophy” and turns into a movement with grandiose dreams of saving the planet from some existential risk, things start looking a lot less like EA and a lot more like a pseudo-religion, complete with attempts to acquire power by hook or by crook. The capacity to flip means and ends becomes easier and easier as you couch things in terms of religious metaphors of preventing the apocalypse or ushering in the eschaton. Maybe it is just easier to think of this as: nerds need a pseudo-religion too, and nerds like to think of themselves (like all religious folks) as the good guys in concert with the arrow of history.

My hope is that EA stops thinking bigger and starts thinking much smaller. Out with the idea that you can do more as a movement or as powerful folks placed to alter global policy, and in with recognizing that doing “good” and acting ethically in the world is an extraordinarily difficult thing, but that careful thinking in a default EA mode can help folks forward in their own journeys to, if that is their wont, effectuate change in the world - by charity or not. The minute acting ethically in the world turns into “how do we accrete power to turn the world to our vision,” you have become a religion or political movement, even if you think that the best way to change the world is to put your “people” into place and “grow” your movement. A group of smart, or even super-smart, nerds can no more escape the knowledge problem and Knightian uncertainty than a bunch of rabid zealots.

The great thing to me about EA, as I see it, is that it takes a world fraught with such uncertainty and impossible knowledge requirements and tries to turn some of that existential reality into a more manageable space using the tools of a rationalist utilitarian. An excellent balm for the world, no doubt. There is much to learn from those using that lens, but the need to have or create a movement seems misguided. Movements very quickly move from persuasion to force and the reversal of ends and means, which is why FTX and many other EA folks with power find it easy to set ethics aside for the “greater good.” Is the greater good a planet with no humans, or one with 1 billion people, or one with 20 billion people?

Rejecting the idea of a movement, or the idea that EA must try techniques to conquer institutions or create counter-institutions or capture either private or public monies and prestige, would be a first step. Apologies for the long post.

Expand full comment

> I checked to see if I was being a giant hypocrite, and came up with the following: wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc. But people (including me) mostly criticize wokeness for its comparatively-small failures, like academics getting unfairly cancelled. Why should people judge effective altruism on its big successes, but anti-racism on its small failures?

I don't agree with the premise. Wokeness is *not* "a modern intensification of age-old anti-racism," but an outright repudiation thereof. It's precisely what you alluded to above with the words "are these people just virtue-signaling? Is it bad for their coalition to appropriate something everyone believes?"

This article is largely organized around the ancient wisdom, "by their fruits ye shall know them": that by looking at what people *actually do,* rather than what they say they support or want to do, you can get a good picture of their character. So let's compare today's woke movement to those who fought against racism in the past.

Abraham Lincoln spent significant time and effort fighting for equal rights. In an 1855 letter, years before he was elected to the Presidency, he wrote, "Our progress in degeneracy appears to me to be pretty rapid. As a nation we began by declaring that 'all men are created equal.' We now practically read it 'all men are created equal, except negroes.' When the Know-nothings get control, it will read 'all men are created equal, except negroes and foreigners and Catholics.' When it comes to this, I should prefer emigrating to some country where they make no pretence of loving liberty,—to Russia, for instance, where despotism can be taken pure, and without the base alloy of hypocrisy." (Surprisingly modern rhetoric for a 19th century statesman!) As President, he used his power repeatedly in the pursuit of equality, through the Civil War, the Emancipation Proclamation, and then pushing for the Thirteenth Amendment. He even ended up giving his life for the cause; after winning the war, he gave a speech in which he mentioned that one of the next things on his agenda was to pursue some degree of political equality for black people, including voting rights. In the audience was an actor named John Wilkes Booth, who was so infuriated by this idea that he vowed this would be the last speech Lincoln ever gave. Three days later he followed through on it.

Frederick Douglass, a contemporary of Lincoln's and one of the most influential black voices on the subject of the abolition of slavery, called for strict equality, nothing more, nothing less. He famously proclaimed, "Everybody has asked the question... 'What shall we do with the Negro?' I have had but one answer from the beginning. Do nothing with us! Your doing with us has already played the mischief with us. Do nothing with us! If the apples will not remain on the tree of their own strength, if they are wormeaten at the core, if they are early ripe and disposed to fall, let them fall! I am not for tying or fastening them on the tree in any way, except by nature's plan, and if they will not stay there, let them fall. And if the Negro cannot stand on his own legs, let him fall also. All I ask is, give him a chance to stand on his own legs! Let him alone!"

Martin Luther King Jr. famously called for a time when race would not matter, when people would be judged "not by the color of their skin but by the content of their character." He (much less famously!) also admonished his own people to meet whites halfway, telling them that they needed to shape up their own conduct if they wanted to be taken seriously. He condemned high rates of crime, sexual misconduct, and other societal improprieties among black people and told them that they needed to do better, that equality meant not only the power to do all the same things as everyone else but also the responsibility for the choices made with that power.

Today's wokesters are nothing like these past figures. Woke thought-leader Ibram X. Kendi wrote in "How To Be An Antiracist" that "The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination." (This sounds like nothing so much as Governor George Wallace's proclamation of "segregation now, segregation tomorrow, segregation forever!")

People who speak of equality and race-neutral policies are condemned as "racist" by the woke. *Black* people who follow in the footsteps of King and Douglass are treated even worse. (For example, my wife once lived in the same town as Bill Cosby. She tells me that "all the women" knew what he was like and that they should be wary of him. His character flaws were no secret, but it wasn't until he started telling black people that they needed to clean up their act that he started getting in trouble for it.) What we end up with is a system that is "anti-racist" not in the sense of being opposed to racism, but in the sense of anti-matter: exactly like the original in every way except for a few specific properties, which are oriented in the opposite direction.

Meanwhile, the results they have produced, the fruits by which we are to know them, are not just cancel culture, but riots, pro-crime policies at both local and national levels, and lots of misery for the people they claim to be helping. (One of the most obvious examples being the 2008 financial crisis. They weren't using the term "woke" back then, but the ideas in the 1990s that led to government policy pressuring banks into making subprime mortgages more available to minorities and low-income people, who ended up hit by far the hardest in the crash because of it, are easily recognizable as woke policy.)

They've appropriated the names of virtues everyone believes in, and used those names to shield themselves from well-deserved criticism when the things they do cause very real harm, and all too often exacerbate the problems they claim to be fighting rather than alleviating them! And for that, they deserve all the criticism they receive and more.

Expand full comment

I remain convinced that EA is unpopular because it shows up a key inconsistency in others, not because of anything it is, itself. I suspect that most people don't actually think that charitable work as such is worth doing, and resent efforts to systematize making it more worth doing. That is, likely for most folks charity is just a de-mythologized form of tithing, and calling attention to the efficacy of this is grotesque and rude. Like asking if God really enjoys the smell you get when you incinerate an animal's corpse on an altar. Look bud this is a religious thing we're doing here, don't blaspheme it.

Statement of conflict of interest: I don't give to charities of any kind.

Expand full comment

A decent response. Two things come to mind. First, as a member of AA who credits it with saving my life, that argument has some appeal to me. However, the vast majority of people quit drinking without it, and in many of its specifics it's inarguably weird. More to the point, it's never been proven to work. But most importantly, it operates with a philosophy of "attraction rather than promotion," which avoids many of the pitfalls EA falls into. If you're not shouting it to the rooftops, you're not pissing people off with your smugness. Second, the focus on malaria prevention would seem to argue against the success of the rationalist, consequentialist approach. Cases are rising worldwide due to factors like global warming and a new urban-dwelling mosquito ravaging cities in Africa. Wouldn't money have been better spent on preventing those things? Well, you can't predict them. Because nobody can predict anything as well as EAs, convinced of their own intellectual superiority, seem to believe they can.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

So, I've believed in the theory of utilitarianism for 12+ years at this point, and aiming to do altruism effectively is a natural extension of that. I've been around the EA community for 6+ years at this point, and I will say most of the people I have met in the community are very smart and genuinely good people.

But I do have some problems with the movement, starting with the concept. The concept is just so broad, it's not meaningful. As Freddie points out, I think it is akin to a movement that said “do politics good” or “effectively make the world a better place.” It's the kind of shit Silicon Valley made fun of in its first season, where they showed all these small startups doing random stupid shit and saying it was all to make the world a better place. Yeah, EA as a community has some central themes that Scott points out, but the concept itself is still vague and broad in a way that's a turn-off to me and many others (it feels unnecessarily elitist, I think?). I do wish it was called systematic altruism or something else a little more pointed.

Moving on, another thing I have a big problem with in the EA sphere is the “math”, the “evidence”, and the “consequentialism”. All in quotes because I don't know a better way to say that this stuff doesn't really have evidence in the way the term is typically used, it doesn't use math in the factual way you'd expect of a hard science, and the consequentialism is just whatever someone conjures up rather than anything else. What does saving 200k lives today do for the future 500,000 years from now? Who says donating that money to charities deemed less effective by EA (like research, or education) wouldn't have a much stronger effect in the far future? The error bars on this stuff are just so high, it just isn't that convincing. That's why you can have SBF justifying everything he did, and MacAskill spending millions (maybe just a rumor) on promoting his book, because all this stuff is just whatever people feel like rather than something you can actually look at the evidence for.

It reminds me of an EA meeting where a high-up member of USAID, with 20+ years of experience in global development, came to talk. Someone asked him, “In your experience, what is the most effective intervention you've seen?” And he kinda scoffed at the question: “What do you mean, most effective? Most effective for what?? How do you compare a deworming program in one area of the world with educational support in another?”

EA would break this down into some type of metric and purport to have an answer, to a degree that I just don’t find appropriate. EA kinda feels like the wide-eyed kid that dreams big but doesn’t understand how the world works.

I probably can't describe this correctly, but it also feels weird to me that a CEO of a tech conglomerate can potentially do more for the world than all of EA could, yet they wouldn't be an EA unless they explicitly chose that career due to something like an EA-based career evaluation. (And if they would be considered an EA despite no interaction with the community, that's not meaningful.)

I kinda wish there was a movement that was more about being the best version of yourself, for yourself and for others. And I wish it didn't explicitly tell me how to do that, but gave me tips and tricks, personal stories, classes, training, whatever. I think that's something that would resonate much more strongly with me, and many others.

In short, I’m glad EA exists. I’m glad organizations like Givewell exist. I’m glad there are people out there genuinely trying to make the world a better place. I just hope the movement matures, maybe with a renaming, maybe with a split (or both). I hope the degree of confidence in their evidence and what they recommend lowers. I hope they expand the acceptable ways they consider effective altruism. I hope they broaden their messaging to reflect more with the average person. But I will always commend anyone who truly tries to improve the world/do what they think is best for others, EA or not.

Expand full comment

I do not identify as EA, but I believe EA on balance is a positive force and does good things. For now, that is enough. Institutions often transmogrify; if EA is around in 100 years, it may be that it is no longer a force for good, but today it is.

Among the people I hang out with, I suspect less than 50% know the word "altruism". Probably way less than 50%. The EA label itself indicates intellectualism. This is not a bad thing, but it is certainly a barrier to entry.

Expand full comment

"4: It’s tautological that once you take out the parts of a movement everyone agrees with, you’re left with controversial parts that many people hate."

It can be the other way round -- most variations of X have a feature no one likes, such as politicians lying. Then the unusual feature would be something everyone likes. Everyone would like honest politicians and efficient charities, but at the same time they're going to closely monitor them for being really honest and really efficient.

Expand full comment

I think one of the things about the EA movement that gives me pause is the low percentage (only about 20%) of adherents who are vegan, according to the Rethink Priorities survey in 2020. The easiest thing you can do is to not do something - namely, sending economic signals in support of slavery, r*pe, torture, and murder. If the vast majority are that bad at math and care for others so much less than they care for their own sensory pleasure or convenience, they're hard to trust.

Expand full comment

Have you responded to the (admittedly vague) comments of the Distributist and Moldbug that EA encouraging normal, busy people to think about problems far away from themselves is in itself bad?

Because far-away, abstract problems make people dumber and easier to lie to, and you end up supporting causes like wars where you don't know the virtue of either side and have not a drop of historical context.

Does EA even remotely have an argument about how to prevent fraud? "Yeah, I'm soooo great, look at me take 1 million dollars to a 3rd world country" - the next Sam Bankman-Fried?

Expand full comment

I'm going to say you're broadly a left-winger or Blue Tribe or whatever you want to call 'people who live in San Francisco are San Franciscans even if they're not 100% on board with the Democratic Party.'

The left is split into two philosophical factions that often pretend very hard they're the same. Utilitarians and idealists. Utilitarianism, I assume, you are familiar with. Idealists, who are mostly Hegelians, make up the farther left part of the party. Socialists, for example, are heavily influenced by Marx who was influenced by Hegel who was a famous idealist. Utilitarianism and idealism are completely incompatible as philosophical beliefs. They might agree on a political program but they will never agree on fundamental goals. They don't even share any philosophical heritage. They are really, really different.

You, and the entire EA movement, are utilitarians. Freddie, and the entire socialist movement, are idealists. You complained a few threads ago about how Freddie keeps showing up and saying, "Why don't you do more to support the revolution?" (more or less). And your response was something like: you refuse to define even what a revolution is or how it'd help.

But you're talking past each other. For a Hegelian idealist the revolution is the point. They believe in a series of succeeding world-historical spirits which it is their philosophical duty to advance. If you're not helping with the Hegelian unfolding, then what you're doing has no value. On the other hand, utilitarians basically think all of that is fake and kind of made up. There is no spirit of the age because it can't be observed or quantified.

A good example I like to use is Mill vs Marx on taxes. When confronted with the argument that taxing rich people more was unfair because rich people had done nothing morally wrong but were suffering an additional burden, Mill ultimately agreed it was not fair. He even added that it was disincentivizing a good thing, which was a concern. But he justified it by saying that it led to more good than bad so long as the money was spent wisely. In other words, it's not fair to the rich, but it increases net utils, so it's justified.

Marx meanwhile dismissed it by saying that all of it was class struggle and that whatever brings about the revolution was moral by definition. You had a moral duty to work to do that. In fact morality was defined by working to advance that.

You can see this today in the divide between neoliberal Democrats who want taxes in order to fund programs vs more left wing Democrats who say they would impose wealth destroying taxes because it'd be more fair or create a more just (in their opinion) society or reduce the influence of capitalism. The former is utilitarianism: taxes are justified because they create net utils. The latter is idealism: taxes are justified because they help bring about society wide change toward the ideal.

So you're both talking past each other because you have fundamentally incompatible worldviews. Marxists do not care about utils. EAs do not care about the weltgeist. The proper thing to do is to decide which one you are and then realize the criticism from another philosophical school will always be, at best, problematic.

PS: I suspect he's upset you're 'king of the nerds' because he's correctly identified that EA is the charitable arm of the rise of a new set of post-industrial elites, the 21st century equivalent of Carnegie libraries. A movement that will do a lot of good but doesn't subvert the existing system. The fact it produces net utils is actively bad because it prevents revolutionary consciousness.

Expand full comment

Social commentators like Freddie, and quite frankly his implacable critics (who are not the same, but are in the same profession), are not writing about the topics you spent most of your essay arguing about. You get to it at the end: criticizing EA is about taking away units of power from the social group that metaphorically goes to the libertarian meetings, not about taking donations away from GiveWell. Most people are implicitly scared that any social group that gains power will use that power to impose something on them. If you think the people who attend libertarian meetings are weird aliens, quite apart from their online essays, you would want to stop them from attracting impressionable new recruits, or from being able to define what's considered acceptable to say at an office party, even if you're generally in favor of deregulation or other libertarian ideas, lest you one day find yourself living in, and expected to conform to, a weird alien society.

Expand full comment

Thanks for posting this further analysis, you've helped me clarify my thoughts and my mix of admiration and unease with EA. At this point it boils down to:

1. "One answer: don’t have opinions on movements at all, judge each policy proposal individually. Then you can support freeing the slaves, but oppose cancel culture. This is correct and virtuous, but misses something."

I think this is mostly where I stand. I'll defend EA against most general accusations, because it's doing plenty of good in the world, simple as that. Hell, even for those of us who are not AI-doom-pilled, motivating people to research how to make AI useful for human goals sounds like useful work.

2. about point #2 in your definition, "think really hard about what charities are most important, using something like consequentialist reasoning"

And this is the bit that I end up disagreeing with, even more starkly now that you've isolated the idea so clearly. My model of doing good in the world is "spray good in all directions, or in whichever directions you find yourself connected with". So even if malaria nets end up saving more QALYs/$ than vaccinations, or work with the homeless, or endowments to public libraries and the arts, I still prefer a world where people give generously to any and all of these according to their feelings and connections.

Human flourishing is complicated, and I actually enjoy the fact that there are no simplistic or totalizing shortcuts to promoting it.

Expand full comment

I read FdB's post and thought, yeah, that seems right. And then I read your post and thought, yeah, that seems right. I also think you spend too much time worrying about what other people think. Screw them; you do you, and pay as little attention as possible to the outside critics.

On a different note: I'm not an EA. I don't make enough money to give 10% away (semi-retired, working 25-30 hrs/week). But your example has made me think more about volunteering ~10% of my time locally. So thanks for that.

Expand full comment

>Freddie has a piece complaining that woke SJWs get angry when people call them “woke” or “SJW”. He titles it Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes You Demand. His complaint, which I think is valid, is that if a group is obviously a cohesive unit that shares basic assumptions and pushes a unified program, people will want to talk about them. If you refuse to name yourself or admit you form a natural category, it’s annoying, and you lose the right to complain when other people nonconsensually name you just so they can talk about you at all.

Welcome to the inverse of the Euphemism Treadmill.

'Social Justice' and 'Woke' *were* both terms originally coined by the lefties in the movement, until the right took them over and strawmanned them to death and turned them into insults and slurs.

There's no new word that the left could suggest for themselves that the right won't apply the same treatment to.

So we're stuck in the situation where all popular labels are avoided, and any that emerge are quickly appropriated, corrupted, and rejected.

It's not fun for us, either.

Expand full comment

A long time ago, before EA, I came across either the Cochrane study, or somebody commenting on it, while searching out the most effective charity for saving lives. (This could have been as early as 2004, my memory isn't clear). The reason I was searching out the most effective charity for saving lives is that I was involved in an argument (somebody argued I should support some policy or other, or maybe that I should donate to some charity or other, and I found it absurdly expensive with regard to its benefits, and, long story short, the argument demanded I provide a more cost-effective way to save lives). And when I examined the data, holy shit. I had an ace argument against so many ineffectual policies and charities.

"If you really wanted to save lives, you would be donating to provide malaria nets in afflicted countries."

At some point later I came across GiveWell, which was a nice all-in-one resource for these arguments.

(There's a nonzero if small chance, given where I was arguing in this timeframe, that I may have played some part in inspiring the existence of EA; I was -really- fond of this argument. Out of a sense of curiosity I spent some time digging through old Overcoming Bias and Less Wrong archives to see if I could find a smoking gun, and failed, but wow, I've been arguing with some of you for a long time in different forums using different pseudonyms. Hi, Nancy! Also, it's kind of surreal how many relative nobodies from fifteen-twenty years ago I had dumb internet arguments with are either internet famous or famous-famous today.)

I'm not an EA, to be clear, because I'm not a utilitarian. Used to be something like utilitarian, then a virtue ethicist, now I'm something I sometimes call a relative moralist. Think of "It's a Wonderful Life" as a moral framework, kind of. A big part of it is that I think we, as humans, need a moral framework that tells us whether or not we are "good". Broadly, the basic idea is that morality is relative to a relevant average.

If you are in a society where people walk by the drowning child, and you walk by the drowning child, you're not good, or bad. You're average. Substitute the average member of your society in for you, and nothing changes. If you throw out a flotation device but don't make too much more effort, you're good. If you wade out into the water and ruin your suit and save the child, you're something like heroic.

If the average person would throw out a flotation device, and you walk by - well, the average member of your society would do better. You're bad/evil. If you throw out a flotation device, you're not good or evil. And if you ruin your suit to save the child, you're good.

It's kind of an unholy union of virtue ethics and utilitarianism, where moral value is subjective and hard to mathematically evaluate, but relatively easy for us as humans to evaluate. Good people, and good acts, make the world a better place than the status quo; bad people, and bad acts, make the world a worse place than the status quo. It has a place for heroism and villainy, and doesn't subject people, outside of weird edge cases, to too much moral luck (insofar as it has moral luck, it's "people who naturally want to do good/bad acts seem to get an advantage", which seems kind of okay, and "people born into societies where the average person will eventually set themselves on fire to protest to save the rainforests would seem to get screwed over", which seems alien enough that I wouldn't expect the moral framework to continue to operate anyways - it's intended for humans, not bizarro-humans).

Because I think what most people really want from their ethical system is a reasonably clear answer on how to be a good person. Utilitarianism almost gets there, except you never "win" - as long as you could improve utility somehow, there's another step you need to take, and if you don't take it, you're in some sense guilty of not taking it. I don't think it actually does a good job, as a moral framework, of separating out the basic human concepts of "evil", "bad", "neutral", "good", and "heroic". Make number go up, never stop making number go up. And I've met some people for whom this works, who don't seem to have meaningful internal moral categories; the idea of creating an entity that will make everybody in the world slightly happier, but at the cost of torturing one particular person quite terribly, is purely a mathematical question to them, and they'll do the math and continue on with their lives. The question of whether it is good or evil to do so is basically beside the point. They'll make their choice in the trolley problem and move on.

The average person, however, doesn't actually operate like that, and trolley problems will haunt them for their entire lives. They don't need to know "utility went up" - they need to know whether or not they are a good person.

EA provides some kind of answer - here, tithe 10% and you're a good person. And so far so good. But then the question becomes "What am I tithing 10% towards?" And if the answer looks like utility, rather than goodness, then you stop answering the question of whether or not they are a good person.

Insofar as EA focuses on AI alignment, it may or may not be maximizing utility. What it isn't doing is answering the far-more-important-to-people question of "Am I a good person?" And insofar as EA associates with supervillains, this can seriously outweigh any good EA does, when people ask themselves whether or not they are a good person.

Maybe you don't lose the core people you really care about, if you lose all the people who are concerned with the question of whether or not they are a good person, instead of whether or not they are maximizing utility. But you do lose something important there: An opportunity to make the world a better place. Imagine if half the people currently donating 10% of their income stopped.

I don't make the argument above anymore, for a variety of reasons. But a lot of it comes down to "I don't think that argument actually cleaves reality at its joints for most people."

Expand full comment

This is a really bad job of steelmanning.

The line between "agrees with the philosophy" and "actively participates in the movement", which you use to distinguish EA from commonly held beliefs, is not particular to EA, and the reason he calls it a "shell game" is EXACTLY that EA postures as "it's a movement", "no, it's a set of principles", as convenient ...

"Tell me what to call my movement" FdB does - UTILITARIANISM ! It's RIGHT THERE!

Maybe rename to "Utilitarians in action" or "Nerds relentlessly quantifying morality"

Expand full comment

> If I’m a YIMBY [sic] despite my policy preferences and because I’m considered outside of the YIMBY kaffeeklatsch, that means that it isn’t about policy and is about being a cool shitposter.

>I agree with Freddie: it’s better to define coalitions by what people believe than by social group.

Alternate framing: let's say that Freddie spends 360 days a year trashing the political movements whose politicians are most likely to actually pass YIMBY policies, and spends 5 days a year saying that he sure wishes the NIMBY politicians he actually helps to elect had more YIMBY policies.

In that case, does it make any sense to call him a YIMBY when his overall impact on the world is electing more NIMBY politicians?

These so-called 'social groups' are *voting blocs*. That makes them *very relevant* to the question of which policies get implemented.

If your whole deal is opposing the election of YIMBY politicians - even if you are opposing them on the basis of other issues and have no problem with their YIMBY policies - then yeah, people get to question your commitment to that movement.

Call it a revealed preference.

Expand full comment

I'm not an effective altruist, but if I wanted to briefly summarize EA to a random stranger, I'd say something like: it's a movement that helps donors systematically find effective charities and commit to generating more resources for them in an efficient manner.

The squishiest part of my description probably lies in what counts as “effective”. But while some people might not think EA staple charities like those that address AI risk or animal welfare or malaria (or pick one) are “effective” charities, I have a hard time seeing how they're ineffective enough to warrant such numerous and energetic attempts to refute the EA movement compared to other charitable movements.

Can someone who objects to EA and believes the movement is a net negative please summarize why the world would be better off without EA? Like in just a few words that an average person like me might understand?

Thanks for your help.

Expand full comment

The other two points missed here are:

1. Not everyone is a consequentialist

2. Not everyone is altruistic

I think either one would be necessary for agreeing with EA as a philosophy.

Example of #1: I know people who donate to religious groups; the majority of them are not doing it because their soteriology makes them think that converting more people will maximize utility. Telling them that mosquito nets will save more lives is not going to move them.

Example of #2: I know people who see donating to charity as a purely transactional act. They do it solely because it makes them look or feel good. Again the mosquito net example will only move them inasmuch as people have made it attractive.

Expand full comment

To me, responding to the scandals confronting the EA movement and the corresponding critiques by pointing to the scrupulosity of EA adherents would be like the Catholic Church responding to the sexual abuse crisis by pointing to Mother Teresa.

These scandals are a test. How do you respond to the test? By covering your ears and pointing to all the good that you do? Or by honestly taking a hard look at yourself, and considering whether these problems reveal a real vulnerability or blind spot?

My perception is that EA's fatal flaw is hubris and arrogance that leads it to believe that it will not be subject to the same problems that have infected every other organization with a noble mission as it scales up. This response confirms it for me.

Expand full comment

The whole "EA's do their research" really seems more like "I have OCD about my donation choices". All sorts of "rational" people raved about FTX. Show me an EA publicly calling out FTX beforehand and I'll show you the 1-3% of EAs who are rational researchers.

And for YIMBYs, I'm really tired of these unicorns. Unless you literally own your backyard, are willing to have yourself, family or friends displaced for a year or so, you're literally not "Yes In MY BACK YARD". Saying you support YIMBY policies in somebody else's backyard doesn't make you a YIMBY. It makes you an astroturfed or a hypocrite. I've never met a YIMBY who owned their back yard. I suppose they exist, and I'd bet they hover in the low percentiles also. How about a poll?

Expand full comment

Perhaps related question: Do you consider Bjorn Lomborg (https://en.m.wikipedia.org/wiki/Bj%C3%B8rn_Lomborg) an Effective Altruist and/or is he in any way formally associated with your movement?

Expand full comment

Feels like a lot of these debates come down to talking past one another due to a lack of agreed-upon, objective (as best as possible) principles by which to "evaluate" a social movement (EA, progressives, SJWs, Rationalists, etc.).

Would be very curious to read an attempt at an elucidation of those principles that reasonable people could agree to in advance of such an argument (not naive enough to think that this would actually stop the arguments, but would be interesting).

Things that might arise: the movement's best or worst actions, ideas, and adherents; the gap between professed beliefs and actions (at both the organizational and individual levels); naming conventions (including Capitalised Organizations and the lowercase generic case of belief); etc.

Anyone know of such an attempt that has been written already?

Expand full comment

I disagree with EA because consequentialism is wrong. 9/11 was worse than 3,000 people dying of natural causes, so it makes no sense to talk about how EA did the equivalent of preventing 9/11. The damage 9/11 did was mostly not from people dying. You can't quantify morality; it's about whether something makes you feel bad or not.

I think it's also wrong to suggest that consequentialist EA thinking is unpopular. It was popular enough that we had COVID restrictions that prioritized saving people over maintaining freedom and our cultural identity. I'm worried that EA will just empower this kind of homogenized, rote morality even more.

Also it's pointless to worry about AGI, since even if someone comes up with a way to "align" it, others will develop un-aligned AI anyway. It's not a technology that can be kept under control, like nuclear weapons.

Expand full comment

>I think this is the role of the wider community - as a sort of Alcoholics Anonymous, giving people a structure that makes doing the right thing easier than not doing it. Lots of alcoholics want to quit in principle, but only some join AA.

This is a good analogy. Some people use 12-step groups to help them recover from substance abuse. Others recover on their own. And those who recover on their own are gonna be extremely annoyed if you sound as if you don't know they exist. It is possible to do the thing without the group. And similarly, you don't need an all-encompassing social structure to donate to charity.

Expand full comment

"And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc."

Most of the people who ended slavery were racists & white supremacists, e.g., Abraham Lincoln. They just did not believe that white supremacy justified enslaving black people.

The people who ended legal segregation believed that individuals should be treated equally by the law. They did not believe that all racial disparities were the result of white racism.

The anti-racism of Kendi, DiAngelo, etc. should not be credited with ending slavery and legal segregation.

Expand full comment

"The best books, he perceived, are those that tell you what you know already." -Orwell, 1984

Expand full comment

I don't think you should feel bad about "free riding" off GiveWell. That's why they exist! Maybe if you don't trust them or their methodology or something, I don't know, but it probably isn't even a great use of your time to do the same work they're doing unless you have reason to believe you have a comparative advantage for that work that they don't (which in your case I could actually believe, but in my case I certainly don't).

Expand full comment

I think DeBoer gets it wrong when he implies there's general agreement with EA because lots of people view themselves as "shining a light on problems that are neglected". "Problems that are neglected" is often meant in the sense of things being overlooked. But a lot of what EA focuses on, on the other hand, is problems that are relatively obvious but nonetheless underfunded.

The ways that EA frames charitable giving also seem far from universally accepted. The way philanthropy is framed often puts "donating to GiveDirectly", "donating to Make-a-Wish", and "donating to Harvard University" in essentially the same category. EA disagrees. I think many people's concept of the effectiveness of nonprofits still focuses predominantly on internal organizational measures of efficiency, like administrative overhead. EA would say those matter only indirectly and may be misleading. EA's approach to encouraging people to donate also differs from another approach that I think is still popular, encouraging people to find a destination for their charitable giving that they particularly care about, that fits well with their affiliations and interests. EA instead focuses on arguments from marginal utility, that someone can do more good more easily than they might have expected.

Expand full comment

I feel like it's pretty easy to agree with the effective altruism point of view but think that their calculations are wrong and that donating to e.g. AI risk is totally ineffective. The criticism of the EA movement is then just that their heart is in the right place but they're bad at estimating the future impact of their actions, and if they wanted to do real EA they should all be trying to industrialise India or something.

Expand full comment

I don't think these arguments address the criticism that DeBoer is making. (It's funny, I never heard of DeBoer, but this is the second time in two days I've seen him mentioned in blogs I like. So, since I'm not too familiar with him let me admit I may be superimposing somewhat my own views.)

The criticism of EA is that it presents itself as an ethical system in itself rather than merely a methodology.

If EA were defined as Scott defines it in this post, I don't think it would be controversial. Other people might think that the EA-associated people have some weird priorities, but there is no fundamental problem with people spending their charitable money on things that other people think are silly.

However, when I see people discuss EA, they rarely mean it as a form of deciding how to effectively be altruistic. Rather, it is an ethical system formalizing a fundamentalist utilitarianism. This is the reason it ends up with kind of weird, but basically harmless, priorities (long-termism, animal stuff, AI). However, as DeBoer points out in the article, utilitarianism has a lot of known philosophical problems and contradictions. It is often a practical ethical system, but it fails to capture much of what most people inherently think of as moral.

This creates a danger that a conventionally amoral ethical system could supplant other more traditional ethical systems, which would have significant bad consequences for society. Writing this, it occurs to me that EA people should be more concerned about unintended side effects of too much EA!

Expand full comment

Nit: "Everyone says they want to be a good person and donate to charity and do the right thing."

Personally, I'm a counterexample. I neither have these as goals, nor say that I have these as goals.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I find myself questioning the wisdom of engaging so thoroughly with Freddie's posts about EA. I didn't even read the Shell Game post because by now I expect FdB's EA criticisms are not actually helpful or insightful, just combative and clickable.

https://freddiedeboer.substack.com/behavior-is-the-product-of-incentives now returns Page Not Found, but that was the post where he bravely described the perverse incentives that he and others write within. He said he gets paid a lot more to write brief, conflict-oriented diatribes than to write lengthy, researched analyses such as his post about the Nation of Islam. I took that post seriously, and decided that if I was too busy to read his books and effort posts, then I probably shouldn't read him at all.

So, real question: Aren't we getting clickbaited? Do y'all endorse reading FdB's Shell Game post? Isn't it less like his books and more like a tweetstorm, and shouldn't we treat it accordingly?

Maybe the mistake is all mine--maybe if I chose not to read FdB's clickbait, then I shouldn't have read Scott's reply to it. Well, too late. Scott, could you say something about how you think about the Shell Game post (and I guess your own reply as well) in light of https://www.astralcodexten.com/p/its-bad-on-purpose-to-make-you-click ?

I admit that I did enjoy reading Scott's reply, especially section 4. But I wonder if I should treat it as a guilty pleasure.

Expand full comment

I am confused. 1. When Scott's father went to Haiti to provide free medical aid - was that EA?

2. All here ridicule art-museum donations. But the richest family in my town donated big time to our art-museum. And I am glad we have it. Esp. Rodin's thinker. If 20 billion people live for 2 billion generations, but "no art": we might just as well all end our shallow existence NOW. And art is not always free and should not always be for sale - and not always end up in the mansions of the rich and locked up by art-investors. You make it available to the public: "Phew, donated to an art-museum! Silly!" (I hate seeing tax-dollars spent on most art.)

3. I will sound brutalski now, forgive me if you can: SSC/ACX is aware of the average IQ in parts of Africa heavily affected by curable/preventable diseases. I assume the smarter part is aware of cheap interventions - impregnated bed nets, deworming, condoms ... - and can afford it (smarter earn more). Thus the donated help gets to the less-smart population of countries with an average IQ under 80. Since when is EA dysgenic? (SE-Asia had it worse with Malaria; seems they were able to do sth. about it.)

"It is relatively easy for me to ignore the plight of the poor, as long as I assume they're dumb (and I start with the assumption that most people, foreign or otherwise, are). But smart people are for me what Americans are for those America-first people; I can't bring myself to not care about them. They matter to me, in the way that everyone should matter to me but can't because I don't have enough emotional resources. In fact, all day I found myself worrying about that bright poor kid hoping she gets to a decent school at some point, even though I haven't spared a thought all day for any of the multiple-amputees I've seen wandering around. Not sure how I feel about that." I feel that is: good. And altruism that considers each life of same value (no human does in real life), seems neither efficient nor very "altrui"stic - if one does not really care about the "other". Sad about Kissinger and Shane Macgowan. Bin Laden is dead? GREAT!

Expand full comment
Nov 30, 2023·edited Dec 1, 2023

I think a better critique of EA is that they fail to demonstrate that charity makes the world better. There's a very strong argument that the best way to improve the world is to maximize economic growth, and that all charity (at least all institutional charity) represents a misallocation of resources away from the economic engine. There are many more future humans than current humans, so they must be given greater weight in utilitarian calculations. If redistribution harms economic growth (an empirical question!), then charity offers a fixed one-time benefit at the expense of an exponentially greater future benefit.
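
To make the compounding arithmetic concrete, here's a toy sketch (Python, all numbers invented, purely illustrative):

```python
# Toy comparison: a one-time charitable transfer of $1,000 versus leaving
# that $1,000 compounding in the economy. All figures are made up.
transfer = 1_000   # dollars given away today (immediate, fixed benefit)
g = 0.03           # assumed annual growth rate of reinvested capital
T = 100            # time horizon in years

future_value = transfer * (1 + g) ** T
print(f"One-time benefit today:          ${transfer:,.0f}")
print(f"Same dollars compounded {T} yrs: ${future_value:,.0f}")  # ~$19,219
```

Of course, whether redistribution actually reduces growth, and whether compounded dollars translate into compounded welfare, are exactly the empirical questions flagged above.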

This problem is exacerbated by the Peter Singer-inspired notion that all lives are equally valuable. I think that's obviously false. The drowning child in front of me in the US is much more valuable than the drowning child in the Democratic Republic of Malariastan. The US child has a much higher chance of growing up to be a scientist or engineer who could discover something to benefit humanity. But failing that, it has a rational expectation of one day contributing $70k/year to the world economy. The expected contribution of the Malariastan child is essentially zero once you account for cost of living: people in places like that are living at the subsistence level. IMO that difference really, really matters. Having a philosophy that systematically reallocates resources from a functional post-industrial society to a dysfunctional Malthusian society seems pretty clearly suboptimal from a "what is objectively best for the world" perspective.

Whether or not this line of reasoning is actually correct isn't totally clear to me, but I think it's something that at least deserves serious consideration. I've tried to get EAs to respond to this argument on several subreddits, but I've yet to encounter a robust response. I would like to challenge someone to seriously engage with me on this.

Expand full comment

The ideology is not the movement, but it took only a decade from EA's inception for a sociopath to hijack it for his own motives. EA has a philosophical underpinning that naturally leads to this outcome. It's anti-localism in the extreme, saying that helping your neighborhood is counterproductive when you could be giving money halfway around the world. However, it is much harder to make a difference on a global scale than a local one. That naturally leads to the belief that you need to increase your wealth and power as much as possible. The mild version is the guy who joins a hedge fund to increase his earnings so he can give more money to GiveWell. The more radical version is the guy who uses shady means to obtain power to make changes. I'm sure that most EA people are sincere, but it's not surprising the movement would also attract the Sam Bankman-Frieds of the world.

Expand full comment

The people who go to Libertarian Party meetings *are* weird aliens. It's kind of a catch-all category for "doesn't want other people to stop them from doing X", which is so wide a net to cast that you can't help but draw in the cantina people.

Expand full comment

EA claims to be just its name (effective altruism, with lowercase letters) while in fact pushing a very specific moral ideology based on an extreme form of utilitarianism. In fact, it is not at all clear to most people that they should care about others who are far away more than about their immediate family. Charles Dickens satirized such characters in a way that makes it easy to understand why.

For many philosophers, utilitarianism faces difficult problems that deontology doesn't (such as Bentham's mugging). That's assuming you can even accept moral realism, which is itself a major subject of debate. To someone who isn't a utilitarian, EA encourages people to look at numbers, statistics, and books about faraway places while neglecting their immediate family and community. Point this out to an adherent and they will immediately protest, "That's not true! It's just effective altruism," with lowercase letters. However, if you read the materials and observe the behavior of actual EA adherents, the pattern of extreme utilitarianism is apparent. Utilitarianism also explains why there are broad-spectrum attacks on EA: since most people aren't utilitarians, people from all parts of the political spectrum will have a bone to pick with EA, though they will express that concern in different ways.

Expand full comment

I think movement evangelism contributes to issues discussed in section 6. Any group that perceives itself as doing good on some dimension would naturally conclude that increasing group membership is also a way to achieve that same end.

EA, due to its rationalist approach, ends up being more explicit about this than most other groups. And so movement promotion becomes a defining feature of EA ideology, at least as perceived by outsiders.

As you say, it's probably just a cost you have to eat while continuing your work. But I think something like this is animating Freddie's post. You are perceived as slapping a self-promoting brand on a pre-existing impulse to do charity well.

Expand full comment

"I find the sort of people who go to Libertarian Party meetings to be weird aliens." I feel this way. I also feel this way about rationalists.

Expand full comment

I remember, years and years ago, I read an essay (I can't remember where; maybe it was Paul Graham?) that said a lot of influential people ended up wasting much of their later lives responding to critics rather than getting actual work done. I seem to remember it saying that Newton spent the second half of his life responding to critics rather than doing any actual science. When I saw your second post in two days responding to what I thought was, frankly, bad-faith EA criticism, I was worried the same thing was happening here. But you completely proved me wrong: this is top-notch.

I remember many years ago you expressed frustration that writing about charity got some of the lowest views, while incendiary topics went super viral. I feel like you've finally cracked that nut (perhaps unintentionally); this article was a blast to read and I wanted to send it to all my friends. It also did a lot to push me in the direction of EA. Great stuff all around.

Expand full comment

I keep picturing people sidling up to you with plates of steaming hot meat now, lol, like a game people play, 'Make Scott Sin' (working title)

I think I'm pretty much on Freddie's side. It's understandable to frame things in terms of measurable good, but it's hubris to be so dogmatic or zealous that it obscures or excuses a lack of commitment to your own actual community. I know you can walk and chew gum, i.e., be both a good citizen and an effective altruist, but the framing always seems so stark, as if these causes are the only things that matter, the only "effective" altruism there is.

Expand full comment

I think Freddie made some good points. At some level, EA does make "motherhood" statements. Sure, ending homelessness, child hunger, etc. are undoubtedly good things... not unlike apple pie.

And to borrow loosely from Hitchens: "name one good deed an EA adherent can do or say that a non-EA adherent could not." (The other half doesn't apply; I can't think of anything EA would espouse that is inherently bad.)

I think your point about people who identify as EA being people who actually walk the walk (and donate their 10%, or roll up their sleeves, etc.) is the strongest testament to the value of EA as a movement, in-group, or life philosophy. I do wonder if there is a litmus test for EA "members" that establishes these individuals as doing more than merely paying lip service. If EA encourages or compels people to actually walk the walk and do something "good" (however defined), I would already consider that net "effective".

I am curious how you would respond to Freddie's characterization that EA boils down (or ought to boil down) to utilitarianism.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

"I think most of the people who do all three of these would self-identify as effective altruists"

Unless this means something like "once you describe the label, they would agree that, yes, what they do sounds a lot like EA", surely this is [citation needed]. Does it really seem intuitive that, globally, most such people have even heard the term "Effective Altruism"? Don't people who belong to churches regularly debate which charitable organisations to support, scrutinising financial reports and performing other forms of due diligence?

Expand full comment

Scott's post clarifies a confusion I'd had about EA. I had been thinking of it mostly as a philosophy and not a "social technology" or "social cluster." As a philosophy, it is, in my opinion, wanting, and the critiques Freddie lobs at it are pretty good. Or maybe "philosophy" is the wrong word, and I mean something like "ideology" or some other term. (I realize Scott addresses that very point in item #6 of his essay. I'm just saying I'm still working through what I think about it.)

EA, as a philosophy (again, maybe not the right word), just doesn't seem that distinct from already existing approaches to charity. Someone I read online last night chided Scott for being in a Silicon Valley bubble and assuming that people who do all three things listed in item #1 see themselves as EAers. That person pointed out that lots and lots of people follow, and have long followed, that approach to charity, since well before EA existed. (I'd link to what I read, but I'm not sure that person wants to be linked to, so I won't.)

Again, though, none of that is necessarily a criticism of EA as a "social technology."

Even on that front, it's not off the hook. At Freddie's blog, I accused EA of being "cult-like." I think that's probably the wrong term, and I regret using it. But I do think EA is "pre-cultish." By that I mean, it's benign now and may never become a cult, but if EA advocates aren't careful, it may start to become a cult. That, I should add, is true of many (most?) movements/organizations/philosophies/political parties.

Expand full comment

"When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are"

Good call. I just went and donated to AMF so I can own EAs now. (really, thanks for the kick in the pants)

Expand full comment

Really effective altruism would advocate for unbanning DDT or selective mosquito extinction rather than mosquito nets.

Like, the problem I have with EA is not that their intentions are wrong or bad, but that they mostly advocate ameliorations rather than solutions to problems. And they listen to Peter Singer too much: if a kid is drowning in a pond, just take off your shoes and save the kid.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

In arguing that EA is unique, you write:

"1. Aim to donate some fixed and considered amount of your income (traditionally 10%) to charity, or get a job in a charitable field."

This description does not distinguish EA at all from other charitable endeavors. The 10% donation is just a completely standard tithe (see https://en.wikipedia.org/wiki/Tithe), and "get a job in a charitable field" is just a slightly more general version of "or pursue missionary work / be involved directly with the charity".

To elaborate on tithes: if you are a Christian, then giving to the church counts as charity, and hence the church tithe is exactly what you describe. Tithes are such standard procedure that, for example, in Austria the church contribution is automatically collected like a tax (see https://en.wikipedia.org/wiki/Church_tax). I believe other religions, and likely even other (non-religious) charitable movements, have similar conventions.

This uniqueness mindset is one of my main qualms with EA: in fact, a lot of EA is just pretty standard charity/religion/cult/social-movement behavior, accompanied by a lot of noise about being special (which is itself pretty standard for a charity/religion/cult/social movement). To be clear, though, EA is a large group of people, and some people associated with it are doing great things I greatly respect, e.g., the writer of this blog!

Expand full comment

All charities (provided they are not corrupt) aim to get bang for their buck. E.g., a church wants to ensure the spread of its religion, and considers that charity work. Again, I fail to see how this is unique to EA.

Further, EA has shown it can be just as vulnerable to corrupt members as other movements. For example, Sam Bankman-Fried benefited from talking a big EA game: it helped him commit financial fraud that enriched him and funded, e.g., his socializing with celebrities.

Expand full comment

When I hear about charities that EA likes, it's mostly aid for extreme poverty (mosquito nets, etc.) or X-risk. What does the movement think about medical research donations? How does increasing the odds of curing lung cancer compare with reducing the risks from AI?

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

> I don’t think a movement our size is capable of rebranding.

Okay, I'm not entirely sold that we should rebrand, but I disagree that it would be as hard as many say. IMO, only CEA (and their projects like the forum and conferences) and student and city groups need to rebrand: anything that (1) has effective altruism in the name and (2) is public-facing. Then websites like 80K and GWWC that mention EA a lot can selectively swap that text out. I don't think this would be a big deal? Only a medium deal ;) Any rebrand just needs some PR note on what is changing in focus* enough to warrant a new name. Again, not totally sold, but I think it's doable.

*But actually, things wouldn't even have to change much! We could just clear up confusion so the untrue stuff stops being said, at least. A PR statement should simply clarify what the general priorities and tactics are, and publicize the many leadership changes that have happened since 2022, which make it a good time for reaffirming goals. A transparent rationale for rebranding just needs to be soothing enough to keep critics from spamming "Never forget that [New Name] is just Effective Altruism, evil rebranded!" every few months on Twitter. This kind of affirmation could breathe new life into the movement, or at least help many of us breathe a sigh of relief: "Ah, we are all on the same new page, with a fresh start."

Expand full comment

"When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are."

EA's critics are surprisingly similar to LLMs. They're just generating strings of text, for no reason other than that they were prompted to.

Expand full comment

I wrote a much shorter and less precise response before I read Scott's, and it makes me happy that we said some of the same things. I particularly like the focus on the "doing" of doing good better. That's what EA is about: getting people who already think a certain way to actually act in accordance with it.

Here's what I wrote. If this is considered inappropriate self-promotion, please let me know and sorry. If you think I got anything wrong, I'd also love to hear that.

https://fourofalltrades.substack.com/p/effective-altruism-has-good-consequences

Expand full comment

I've been giving 10% of my income to GiveWell for the past 4 years, starting a year after I read your Giving What We Can post on SSC. I agree with the quoted statement "I hate how EAs focus on AI stuff and not mosquito nets". So you've got at least one person in your supposedly empty category of non-armchair EA critics.

In fact, I kind of resent your suggestion that anyone who takes EA seriously will naturally end up "tempted" to fund x-risk mitigation. Off the top of my head I can think of three reasons that wouldn't happen:

A. You assign a high discount rate or high opportunity cost to charity with delayed impact. This could be because you care more about the present day, or because you believe future people will be vastly more capable of dealing with future problems.

B. You disagree that "avoiding extinction" == "saving a life" x 8 billion. Only the most basic form of positive utilitarianism would lead you to that result.

C. You are extremely skeptical about the methods used to measure x-risk, the solutions proposed, or both. You value your money too much to put more than a token amount into moonshot bets.

Expand full comment

You're assuming that "consequentialist reasoning (where e.g. donating to a fancy college endowment seems less good than saving the lives of starving children)" is some sort of well-defined thing. It ISN'T. Not even close.

Even if we think in utilitarian terms, most people (idiots, IMHO) want to maximize total utility; others (much less idiotic, IMHO, but sadly also much less common) want to maximize some sort of mean utility. Others might be concerned with median utility, and others with ensuring that the utility of the lowest 10% (of humans? of living things?) stays above some level.
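
To see how much the choice of aggregation rule matters, here's a toy contrast (all numbers invented):

```python
# Two hypothetical worlds, scored under the aggregation rules above.
import statistics

world_a = [5] * 1_000    # small population, high per-person utility
world_b = [1] * 10_000   # large population, low per-person utility

for name, w in [("A", world_a), ("B", world_b)]:
    print(name,
          "| total:", sum(w),                 # total-utilitarians prefer B
          "| mean:", statistics.mean(w),      # mean-utilitarians prefer A
          "| median:", statistics.median(w),
          "| floor:", min(w))                 # the "protect the worst off" view
```

A total utilitarian prefers world B (10,000 vs. 5,000 utility); a mean utilitarian prefers world A (5 vs. 1 per person). Same facts, opposite verdicts.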

And then of course there are completely different prime goals. For some, the prime goal is "have you heard the good word of <insert name>". For some, it's ensuring the survival (as I understand it) of my culture (as I understand it). For some, it's spreading life and/or intelligence through the universe.

EA insists, and insists loudly, that IT has the one true moral goal, and that everyone else is not serious about ethics. This is deeply insulting, and it goes down about as well as every other attempt in history to insist that your religion makes you holier than me.

And the EA population, much like all those other holier-than-thou populations, refuses to even see the point from the other side. It's all "I hear what you are saying, but the truth is what god REALLY wants is maximum total utility, so it just doesn't matter what your crazy mean-utility idol tells you, because there's only one way to get into heaven and it's our way".

Like every good idea in history, EA started off as a heuristic that was reasonable in many situations. But then it became weaponized as a way to condemn other people, and that's where we are today. The two kinds of crazy work together: you get the fundamentalists who treat the scriptures literally (what if animals could suffer? how about plants? how about electrons?) and are OK with the craziness that results, and you get the social climbers who hear the craziness and think "that would make a fine tool for cutting down other people".

Yes, yes, you plead for moderation and common sense. Normal people always plead for moderation and common sense; that's what makes them normal. BUT regardless, we end up burning the people who refuse to concede the particular craziness of the hour...

Expand full comment

Is it just me, or do most of the critiques of EA feel like rationalizations for disliking the kind of people who run or participate in EA activities? I do not understand the amount of effort being expended on debating the philosophical underpinnings of a charitable or philanthropic belief. If you object to how EA people approach charity or activities, the appropriate response is to shrug and move on to the next thing, right? Why get so worked up? The most dangerous thing EA might accomplish is slowing down research into AGI. Am I missing some nefarious activity?

DeBoer's critique is what, that EA is dumb? That they have the wrong approach and he can't take them seriously from a philosophical standpoint? My initial reaction to his article was: who cares? Much to my astonishment, many people seem to care deeply. I think Scott's main point is a good one: whatever you may think of the people involved, their motivations, and their priorities, you should at a minimum acknowledge that EA has done some good work. I'm sick of the endless effort to see the worst in everything and question people's motives. EA is just one of many approaches to giving and trying to make things better. If you don't like it, there are plenty of other ways to go about it. I'm sure the people being helped will be thankful regardless of how you decided to do it.

Expand full comment

You are what in Europe we call a Liberal. The term Libertarian, when used in the American sense to mean Minarchism, is associated with some rather questionable individuals and policies.

Expand full comment

I wonder if the crux of the argument is that "Effective Altruism" is a bit like a bistable Necker cube. Depending on how people encounter the message, one person might hear "effective ALTRUISM", as in "do ALTRUISTIC things, subject to also being more effective", and another might hear "EFFECTIVE altruism", as in "do EFFECTIVE things, subject to also being more altruistic". Theoretically, the order of main objective and constraint shouldn't matter, but in practice I think it does. One reading might lead someone to ideas like the best way to donate money, or termite suffering. The other might lead to ideas like earning to give, and potentially also, apparently, selling one's granny (https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because).

(Full thoughts here: https://aliceandbobinwanderland.substack.com/p/when-you-read-effective-altruism)

Expand full comment

I wanted to post this comment on de Boer's piece, but I'm not a subscriber, so I couldn't. So I did a restack, but I have 0 followers, so I will also put it here:

Quotes are from de Boer, and it's directed at him, not at Scott:

> It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.

This is where you make the mistake most outside writers make when discussing EA: you're only looking at the concepts and not reporting the facts on the ground.

Without the EA movement, the piles of money that have been given to philanthropies that you probably have no objection to would not have been given.

Why? Because EA is more than a concept. It's a functioning community where people hang out together and push each other to give. That creates accountability around giving, and so people give more.

Yes, they also have weird earnest conversations about the edge cases of what it means to do good. This is their idea of fun. Who cares? I also think watching football is pretty stupid and trivial but a lot more people do that and I don’t clutch my pearls about it. If they want to debate about termites to pass the time: who gives a shit?

MEANWHILE: while sustaining a very real in-person community in major cities all over the world, they also learn about and discuss giving, making them more sophisticated about it; a soft development of cultural capital that has real value for the world.

I am not an EA. But I have looked into it on the ground. There are a lot of people spending a lot of time together, committing to this movement and feeling part of something by doing so. This makes it bigger than a concept, and that's why all this endless prognostication on EA, such as yours, at the purely theoretical level is irresponsible.

It is a concept, but it is also a committed body of people, both famous and rank-and-file, which makes it also an institution. So if you want to talk about it in a way that is intelligent and doesn't do a disservice to your readers, you need to actually look at how EA is instantiated in the real world, and that requires more work than reading tweets.

You could be right that as a philosophy it's not actually all that interesting, but that only strengthens my point: it's not the ideas that matter but the actual engagement between people that EA has engendered.

It's a global, secular accountability project that convinces young people to share their wealth; I challenge you to find a comparable peer organization.

Addendum:

> This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor

👆This is just pure straw man garbage and you know it. STFU.

Expand full comment
Dec 3, 2023·edited Dec 3, 2023

While I think EA is generally dumb (see, for example, my comments elsewhere in this post) I think there's a better response to Freddie and that's that EA does have a unique take on charity: they reduce it to a single quantifiable measure (QALYs saved) and then optimize ruthlessly for that measure. IMO that's actually a significant innovation. Surfacing objective data allows that data to act as a price signal. That, in turn, unleashes the vast power of market forces. I have no idea if market forces have all that much power without some sort of rational selfish interest behind the price signal, but at the very least it provides a mechanism for large-scale coordination and social information processing.
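
For what "optimize ruthlessly for that measure" looks like in practice, here's a minimal sketch (all cost figures invented for illustration):

```python
# Toy cost-effectiveness ranking: dollars go wherever they buy the most QALYs.
charities = {
    "bednets":    {"cost_per_qaly": 80},      # hypothetical numbers
    "deworming":  {"cost_per_qaly": 150},
    "art_museum": {"cost_per_qaly": 40_000},
}
budget = 10_000  # dollars to allocate

best = min(charities, key=lambda c: charities[c]["cost_per_qaly"])
qalys = budget / charities[best]["cost_per_qaly"]
print(f"Give the full ${budget:,} to {best}: ~{qalys:.0f} QALYs")  # ~125
```

The single number is what does the work: once everything is priced in QALYs per dollar, the allocation problem collapses to a sort.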

Now, I don't think QALYs are a wise choice of Thing to Optimize For, but I also don't think you can argue it's not a legitimate innovation in the space.

Expand full comment

I thought the premise of EA was that nothing like it had existed before EA came along.

So to me it's obvious that Bill Gates was doing EA already, and I don't know what EA brought to the table that he didn't.

Could someone explain it to me? Like didn't we have charities that tried to use data and metrics and take on some basic utilitarianism before?

What's *groundbreaking*? Because I really feel like EAs sold themselves as groundbreaking. They didn't sell themselves as "the Gates Foundation with (maybe) slightly different napkin math and (maybe) slightly different demographics (younger?)."

Why should I put time and energy into EA that I didn't put into the Gates Foundation or things like it? If I already declined to donate to the Gates Foundation, why would I choose EA?

Expand full comment

I'm not a fan of EA. I'm not concerned with AI risk; it's the people who use the AIs that could be troublesome, but that is more a problem with human nature. I dislike the vegan focus in the EA movement. I know that EA is more than AI risk and veganism, but I hear so much about those topics from EA sources that it is hard to remember sometimes. Plus, there are other organizations that do good without focusing on things that I think are pointless or actively dislike.

Expand full comment

I think if you want to make the argument that EA is good because it's doing things everyone agrees are good, you need to count Bill Gates not just as an EA, but as more of an EA than, say, Yudkowsky.

Expand full comment

Well, I think the issue is not "oh, you aren't donating to the mosquito-net people" and more: hey, I volunteer at my local soup kitchen and donate to a local church, and I resent that a bunch of self-righteous a-holes who work in finance and tech, of all things, are now judging me for the good I do in the world.

Moreover, and this is something I stress: doing good has a deontological component as well as a utilitarian one, and focusing on the latter to the complete detriment of the former is ultimately a negative for charity overall.

Expand full comment