Contra DeBoer On Movement Shell Games
"Lots of alcoholics want to quit in principle, but only some join AA"
Followup to: In Continued Defense Of Effective Altruism
Freddie deBoer says effective altruism is “a shell game”:
Who could argue with that! But this summary also invites perhaps the most powerful critique: who could argue with that? That is to say, this sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all. The immediate response to such a definition, if you’re not particularly impressionable or invested in your status within certain obscure internet communities, should be to point out that this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably . . . Every do-gooder I have ever known has thought of themselves as shining a light on problems that are neglected. So what?
Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.
In other words, everyone agrees with doing good, so effective altruism can’t be judged on that. Presumably everyone agrees with supporting charities that cure malaria or whatever, so effective altruism can’t be judged on that. So you have to go to its non-widely-held beliefs to judge it, and those are things like animal suffering, existential risk, and AI. And (Freddie thinks) those beliefs are dumb. Therefore, effective altruism is bad.
(as always, I’ve tried to sum up the argument fairly, but read the original post to make sure.)
Here are some of my objections to Freddie’s point (I already posted some of this as comments on his post):
1: It’s actually very easy to define effective altruism in a way that separates it from universally-held beliefs.
For example (warning: I’m just mouthing off here, not citing some universally-recognized Constitution Of EA Principles):
1. Aim to donate some fixed and considered amount of your income (traditionally 10%) to charity, or get a job in a charitable field.
2. Think really hard about what charities are most important, using something like consequentialist reasoning (where eg donating to a fancy college endowment seems less good than saving the lives of starving children). Treat this problem with the level of seriousness that people use when they really care about something, like a hedge fundie deciding what stocks to buy, or a basketball coach making a draft pick. Preferably do some napkin math, just like the hedge fundie and basketball coach would (a toy sketch of what I mean follows this list). Check with other people to see if your assessments agree.
3. ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!
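To make step 2 concrete, here is a minimal sketch of the kind of napkin math I have in mind. Every number in it is invented for illustration - the two options are generic stand-ins, and real evaluations (GiveWell’s, say) are far more careful about every one of these inputs:

```python
# Toy "napkin math" for step 2: comparing two places a donation could go.
# Every number here is invented for illustration - real evaluations
# (GiveWell's, say) are far more careful about all of these inputs.

donation = 5_000  # dollars given this year

# Option A (hypothetical): insecticide-treated bed nets.
cost_per_net = 5                  # assumed dollars per net
deaths_averted_per_net = 1 / 600  # assumed lives saved per net distributed
lives_saved_nets = donation / cost_per_net * deaths_averted_per_net

# Option B (hypothetical): a fancy college endowment.
# Assume most marginal dollars just displace other donors, and that
# $1M of truly-additional money buys about one life's worth of benefit.
additionality = 0.05
lives_saved_endowment = donation * additionality / 1_000_000

print(f"Bed nets:  ~{lives_saved_nets:.2f} lives saved")       # ~1.67
print(f"Endowment: ~{lives_saved_endowment:.5f} lives saved")  # ~0.00025
```

The point isn’t the particular numbers; it’s that writing any numbers down at all forces you to notice when two options differ by orders of magnitude.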
I think less than a tenth of people do (1), less than a tenth of those people do (2), and less than a tenth of people who would hypothetically endorse both of those get to (3) - compounding to something like one person in a thousand. I think most of the people who do all three of these would self-identify as effective altruists (maybe adjusted for EA being too small to fully capture any demographic?) and most of the people who don’t, wouldn’t.
Step 2 is the interesting one. It might not fully capture what I mean: if someone tries to do the math, but values all foreigners’ lives at zero, maybe that’s so wide a gulf that they don’t belong in the same group. But otherwise I’m pretty ecumenical about “as long as you’re trying”.
This also explains why I’m less impressed by the global poverty / x-risk split than everyone else. Once you stop going off vibes and try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. But the temptation is there, and anyone who hasn’t felt it hasn’t tried the serious analysis.
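Here’s a stylized version of that calculation. Again, every number - the cost per life, the catastrophe probability, the risk reduction per grant - is a placeholder I’m inventing to show the shape of the argument, not an estimate I endorse:

```python
# A stylized x-risk vs. global-health comparison. Every number is an
# assumption chosen to show the shape of the argument, not an estimate.

grant = 1_000_000  # dollars

# Global health: assume roughly $5,000 per life saved (illustrative).
lives_saved_health = grant / 5_000  # -> 200

# Existential risk: assume 8 billion people, a 1% chance of an
# extinction-level catastrophe this century, and that the grant shaves
# one part in a million off that chance.
population = 8_000_000_000
p_catastrophe = 0.01
risk_reduction_from_grant = 1e-6
lives_saved_xrisk = population * p_catastrophe * risk_reduction_from_grant

print(lives_saved_health, lives_saved_xrisk)  # ~200 and ~80 (in expectation)

# Nudge risk_reduction_from_grant up to 1e-5 and x-risk "wins" 800 to 200.
# The answer is hostage to a parameter nobody can measure.
```

Whether x-risk “wins” turns entirely on an unmeasurable parameter - which is exactly why the temptation is real, and why avoiding the conclusion takes added assumptions.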
Real life keeps proving me right on this. When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are. When I talk to people who genuinely believe in the AI stuff, they’ll tell me about how they spent ten hours in front of a spreadsheet last month trying to decide whether to send their yearly donation to an x-risk charity or a malaria charity, but there were so many considerations that they gave up and donated to both.
2: Part of the role of EA is as a social technology for getting you to do the thing that everyone says they want to do in principle.
I talk a big talk about donating to charity. But I probably wouldn’t do it much if I hadn’t taken the Giving What We Can pledge (a vow to give 10% of your income per year) all those years ago. It never feels like the right time. There’s always something else I need the money for. Sometimes I get unexpected windfalls, donate them to charity while expecting to also make my usual end-of-year donation, and then - having fulfilled the letter of my pledge - come up with an excuse not to make my usual end-of-year donation too.
Cause evaluation works the same way. Every year, I feel bad free-riding off GiveWell. I tell myself I’m going to really look into charities, find the niche underexplored ones that are neglected even by other EAs. Every year (except when I announce ACX Grants and can’t get out of it), I remember on December 27th that I haven’t done any of that yet, grumble, and give to whoever GiveWell puts first (or sometimes EA Funds).
And I’m a terrible vegetarian. If there’s meat in front of me, I’ll eat it. Luckily I’ve cultivated an EA friend group full of vegetarians and pescetarians, and they usually don’t place meat in front of me. My friends will cook me delicious Swedish meatballs made with Impossible Burger, or tell me where to find the best fake turkey for Thanksgiving (it’s Quorn Meatless Roast). And the Good Food Institute (an EA-supported charity) helps ensure I get ever tastier fake meat every year.
Everyone says they want to be a good person and donate to charity and do the right thing. EAs say this too. But nobody stumbles into it by accident. You have to seek out the social technology, then use it.
I think this is the role of the wider community - as a sort of Alcoholics Anonymous, giving people a structure that makes doing the right thing easier than not doing it. Lots of alcoholics want to quit in principle, but only some join AA. I think there’s a similar level of difference between someone who vaguely endorses the idea of giving to charity, and someone who commits to a particular toolbox of social technology to make it happen.
(I admit other groups have their own toolboxes of social technology to encourage doing good, including religions and political groups. Any group with any toolbox has earned the right to call itself meaningfully distinct from the masses of vague-endorsers.)
3: It’s worthwhile to distinguish the people who focus on a belief from the people who hold it.
Everyone wants to end homelessness. But there’s a group near me called the Coalition To End Homelessness. Are these people just virtue-signaling? Is it bad for their coalition to appropriate something everyone believes?
Everyone wants to end homelessness. But I assume the Coalition does things - like run homeless shelters, hold donation drives, and talk to policy-makers - that not everyone does.
If the people in groups like that called themselves Homelessness Enders, and had Homelessness Ender conferences, and tried to convince you that you, too, should become a Homelessness Ender and go to their meetings and participate in their donation drives - this seems like a fine thing for them to do, even though everyone wants to end homelessness.
I want to end homelessness, but I don’t claim to be a Homelessness Ender. It’s not something I put much thought into, or work hard on. If the Homelessness Enders tried to recruit me, I would be facing a real choice about whether to become a different kind of person who prioritizes my desire to end homelessness above other things, and who applies social pressure to myself to become the kind of person who puts significant thought and effort into the problem.
4: It’s tautological that once you take out the parts of a movement everyone agrees with, you’re left with controversial parts that many people hate.
…
5: The “uselessness” of effective altruism as a category disappears when you zoom in and notice it’s made out of parts.
“Why do we need effective altruism? Everyone agrees you should do good charity!”
Effective altruism is composed of lots of organizations like GiveWell and Giving What We Can and 80,000 Hours and AI Impacts. Ask the question for each one of them:
Why do we need GiveWell? To help evaluate which charities are most effective. There’s no contradiction between universal support for charity and needing an organization like that.
Why do we need Giving What We Can? To encourage people to donate and help them commit. There’s no contradiction there either.
Why do we need 80,000 Hours? To help people figure out what jobs have the highest positive impact on the world. Still no contradiction.
Why do we need AI Impacts? To try to predict the future course of advanced AI. No contradiction there either.
Why do we need the average effective altruist who donates a little bit each year and tries to participate in discussion on EA Forum? Because they’re the foundation that supports everyone else, plus they give some money and occasionally make good comments.
You could imagine a world where all these same organizations and people exist, but none of them used the label “effective altruism”. But it would be a weird world. All these groups support each other, always in spirit but sometimes also financially. Staff move from one to another. There are conferences where they all meet and talk about their common interest of promoting effective charitable work. What are you supposed to call the conference? The Conference For The Extensional Set Consisting Of GiveWell, Giving What We Can, 80,000 Hours, AI Impacts, And A Few Dozen Other Groups We Won’t Bother Naming, But This Really Is An Extensional Definition, Trust Us?
Freddie has a piece complaining that woke SJWs get angry when people call them “woke” or “SJW”. He titles it Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes You Demand. His complaint, which I think is valid, is that if a group is obviously a cohesive unit that shares basic assumptions and pushes a unified program, people will want to talk about them. If you refuse to name yourself or admit you form a natural category, it’s annoying, and you lose the right to complain when other people nonconsensually name you just so they can talk about you at all.
I was tempted to call this post “Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes I Demand”.
6: The ideology is never the movement.
I admit there’s an awkwardness here, in that EA is both a philosophy and a social cluster. Bill Gates follows the philosophy, but doesn’t associate with the social cluster. Is he “an EA” or not? I lean towards “yes”, but it’s an awkward answer that would be misleading without more clarification.
But this isn’t EA’s fault. It’s an inevitable problem with all movements.
Camille Paglia calls herself a feminist and shares some foundational feminist beliefs, but she hates all the other feminists and vice versa. She thinks feminists should stop criticizing men, admit gender is mostly biological, stop talking about rape culture, and teach women to solve their own problems. She also has some random right-wing political beliefs like doubting global warming. So is she "a feminist" or not? I don't know. Marginally yes? She sure seems to think a lot about women, but probably wouldn't be welcome at the local NOW chapter dinner.
I sometimes describe myself as “quasi-libertarian”. On most political issues, I try to err on the side of more freedom, and I think markets are pretty great. But I really don’t care about taxes, I have only the faintest idea how guns work, I voted for Obama, Hillary, and Biden, and I find the sort of people who go to Libertarian Party meetings to be weird aliens. Am I a libertarian or not? This is why I just say “quasi-libertarian”.
Freddie deBoer thinks we need to build more housing. But he really doesn’t like most YIMBYs (1, 2, 3, 4). He writes:
I said awhile back that a lot of YIMBYs seem to define YIMBYism and NIMBYism in social terms, not political or policy terms - that they define allies not by who aligns with them in a policy sense but by who fights on their side online. On Reddit and Twitter some YIMBYs responded to that by calling me a NIMBY. In other words, despite my explicit policy beliefs, they think that I’m a NIMBY because I’m not part of their cool online social circle, which is a perfect illustration of the exact point I was making about how YIMBYism actually operates in practice. If I’m a YIMBY [sic] despite my policy preferences and because I’m considered outside of the YIMBY kaffeeklatsch, that means that it isn’t about policy and is about being a cool shitposter.
I agree with Freddie: it’s better to define coalitions by what people believe than by social group. If that’s true, Bill Gates is an EA. But I also agree with Freddie that this is hard, and the social group matters a lot in real life too. In that sense, Bill Gates isn’t an EA.
EA might have screwed this up worse than some other groups, but I don’t think a movement our size is capable of rebranding. We just have to eat the loss. If we were optimizing entirely for clarity and not for attractive-soundingness, I’d go for Systematic Altruism on the one side, and The Network Of People Who All Pursue Systematic Altruism Together In A Way Causally Downstream Of Toby Ord, Will MacAskill, And Nick Bostrom (TNOPWAPSATIAWCDOTOWMANB) on the other.
In real life I have no solution for these kinds of ambiguities; language is an imperfect medium of communication.
7: Maybe the solution is to look at the marginal effect of more vs. less of a movement.
Yesterday I argued that effective altruism had saved hundreds of thousands of lives, so people should celebrate its successes rather than focusing on SBF and a few other failures.
I checked to see if I was being a giant hypocrite, and came up with the following: wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc. But people (including me) mostly criticize wokeness for its comparatively-small failures, like academics getting unfairly cancelled. Why should people judge effective altruism on its big successes, but anti-racism on its small failures?
One answer: don’t have opinions on movements at all, judge each policy proposal individually. Then you can support freeing the slaves, but oppose cancel culture. This is correct and virtuous, but misses something. Most change is effected by big movements; a lot of your impact consists of which movements you join and support, vs. which movements you put down and oppose.
Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture worse.
(or maybe that should be “giving the anti-racist movement as a social cluster another unit of power…”)
I don’t know exactly what it means to give effective altruism another marginal unit of power (although if we hammered it out, I’d probably support it). Instead, I’ll make the weaker argument that you should, personally, think about how to make the world a better place, and if you notice you’re not doing as good a job as you want, consider using effective altruism’s tools. I think on the margin this is good, and EA’s past successes are a good guide to what another marginal unit of support would produce. The problems of the world are so vast that all of EA’s billions of dollars have barely budged the margin; an extra bed net still does almost as much good today as it did in 2013 when the movement was founded. A marginal AI safety researcher is worth less now than in 2013, but there are still only a few hundred (maybe a thousand now) in the world.
You get different answers if you apply the marginal unit of support to broadening the movement’s base or intensifying the true believers; maybe this is part of why all debates are bravery debates.