628 Comments

Regarding terrorism not working, that can't really be true: if the blowback effect is so large as to negate any benefit, then it's big enough to harness by engaging in terrorism in the name of a group it would be beneficial to create blowback against -- e.g., join Greenpeace and try to get caught trying to blow up a factory making golden rice.

I think the more accurate response is that terrorism is extremely psychologically difficult to effectuate in a way that achieves your ends, because people usually need to be hyped up to get themselves to do it, and that process tends to link them to their true cause. Also, if you did have both the kind of epistemic care to ensure you were doing good (w/o asking for advice that would leave a trail) and the self-control to avoid leaving a trail, you may have skills that could be more effectively leveraged elsewhere.

EDIT: In case it's not clear, I'm not suggesting terrorism is a useful way to achieve your goals. Rather, I'm pedantically insisting that the right analysis is that -- while in theory under the free agent paradigm we use to understand these questions you could harness that blowback -- in practice (thankfully!!) humans are subject to psychological constraints that make them unable to spend decades living as the enemy before sacrificing themselves in a horrific act apparently aimed at their friends. Or even to simply be able to pull the trigger on the kind of targets that might help rather than those they hate.

Author · May 30 (edited)

I think false flags would work better than regular terrorism, but that they're hard to pull off - Osama can't convincingly pretend to be an anti-Islam extremist. I also think most people smart enough to do them are smart enough not to become terrorists in the first place.


I agree with the first part of that response... but surely the second part is circular. Unless you presume ineffectiveness (or personal selfishness), you can't argue that smart people wouldn't do it. May be true, but it's not an argument that should convince someone.


After 9/11 I (young at the time) thought that if Al Qaeda were smart they should get some sympathetic person who's totally straitlaced and has no record to commit a terrorist act, because they would totally avoid suspicion. Since Al Qaeda is presumably smart enough to think of this but it hasn't happened in all this time, I conclude that it's nearly impossible to find someone who fits the bill and is willing to actually do it. Call it smarts or whatever you like.


Right, that's the part I said above about it being psychologically very difficult. People aren't robots, and the kind of stuff needed to work someone up to such an act requires being immersed in a community that supports said decision.

But that gets to an issue with how we construct our decision-making models and what we mean by "you should do X". Ultimately, you do whatever you do, but we idealize the situation by considering a model where you choose between various actions and we pretend psychological constraints don't exist.

In that model, it is true that in many situations some kind of terrorism or violence will be an effective tactic.

But this ends up confusing people, with frequently bad results, because they don't appreciate just how binding those constraints really are -- though more often it's national intelligence agencies, not EAs, making the mistake.


I agree. I've had a similar thought that if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Koch brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like. You probably don't even need to be a billionaire and could be a more moderate millionaire for smaller but still important elections on the state or municipal level. As far as I know, this never happens, despite being possible in every First Past the Post election. Since it doesn't, it must actually be rather difficult, psychologically or otherwise, to infiltrate and become a major member of a movement you hate.

May 30 (edited)

"if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Kock brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like"

The problem with that approach would seem to be "Yeah, but then you have to try and get Libertarians to all pull together". Seemingly there is/was a split within the Libertarian Party; one faction won and, after tireless scheming and hard work, got their guy into power, and then at the big Libertarian Party convention, he... couldn't keep it together long enough to abide by "Don't take candy from strangers":

https://sg.news.yahoo.com/libertarian-candidate-reveals-took-edible-161423546.html

"He went on to complain about “all the egregious offenses against our freedom” perpetrated by the Trump administration between 2017 and 2021, at which point a member of the audience shouted out: “How high are you?”

Dr Rectenwald grinned and called back: “Not high enough! Hey, I’m living liberty today!”

When he was later asked about that answer by The Washington Post journalist Meryl Kornfield, Dr Rectenwald confirmed that he wasn't joking: he had taken an edible before going on stage.

“This was not some sort of a major political scandal, okay. I wasn’t found in bed with Stormy Daniels. I’m at a Libertarian Party convention. Somebody offered me something,” he said."

'Somebody gave me something'? And if that was poisoned or laced with something dangerous? 🤦‍♀️ I have to say, Trump had a point when he fired back at them:

"The presumptive Republican presidential nominee had given a speech at the convention in Washington DC on Saturday evening and was booed and jeered by the audience when he urged them to vote for him.

“Maybe you don’t want to win,” the candidate hit back. “Only do that if you want to win. If you want to lose, don’t do that. Keep getting three per cent every four years.”


I think actually personally running as a spoiler candidate is inefficient and offers poor cover, but wealthy actors funding spoiler candidates to influence elections does appear to be a thing that happens.


Most people probably do not want to commit violent acts against innocent people. But attacking data centers isn't that. There were plenty of normal textile workers who attacked machinery that threatened their way of life in England; they weren't a bunch of extreme kooks, and so I expect we'll see the same here. And it did almost work for those textile folks: they won some sympathy from prominent people.

Ultimately there will need to be some threat of force to stop AI, and it may come to actually using it, and a lot of people will decide that rationally they are left with nothing but direct action. Gazans etc. can appeal to people and politics, but AI will be taking all those decisions out of human control and normal political processes will be useless, so there may not be an alternative strategy in the end.


I don't think there's any way scattered attacks on machines could have prevented the industrial revolution.


If there were only three plants in all the world that could machine pistons precisely enough to work in steam engines, it maybe could have. The key elements of the industrial revolution were a lot more decentralized than the key elements of any present AI revolution.

May 30 (edited)

Another answer is that even if people agree to do false flag stuff, efficient uses of it would, by definition, not be known to you unless you're a part of the conspiracy.


Also, I think that you kinda have to believe that certain acts like this would be super helpful if you take the Yudkowsky story about AGI risk seriously. Now plausibly humans can't figure out which ones they are, but if the AI can't exert control over society by engineering the right kind of events then it's much less dangerous.

So I think it's worth distinguishing the question of whether we can know that a certain instance of terrorism would have a certain outcome and whether an omniscient being would ever find a good way to put it to use. I think lots of the barriers here are epistemic in nature.


If Omega gives a friendly superintelligence the choice to commit various forms of terrorism, and no other options, yes the friendly superintelligence can probably use this capability to do some good.

But for a friendly ASI with limited resources in the real world, it would be surprising if terrorism was the most effective strategy.

An unfriendly ASI would have less reason to avoid terrorism, but still likely wouldn't do it.


I think it depends a lot on how you imagine the limitations working. For instance, how hard is it to trick people into engaging in attacks which are credited to groups you want to discredit?


Could a superintelligence find some plan, consisting solely of blowing stuff up, that was better than nothing? Sure (by choosing the specifics of where, when...). Could a superintelligence find some plan consisting solely of mailing fish to people that was better than doing nothing? Also yes.

If a superintelligence has a fixed budget (say in money, bandwidth, attention, whatever), is blowing stuff up likely to be the best use of that? Are bombs a better bargain than all the other things it could buy? I don't think so.


I think it depends on what your goals are. If you think the goal of Hamas is to help the Palestinian people, it has been remarkably anti-effective in the past year. But if you think the goal is to isolate and destroy the Israeli state, it has done a lot more toward that in the past year than anyone has accomplished in decades.


Well, the default scenario seems to be that Israel slowly strangles Palestine and Palestinians. If Hamas's actions cause Israel to collapse instead, after which Palestinians could flourish (as much as third-worlders can, anyway), then there's an argument for it being effective.


I don't think *instead* or *flourish* is obviously what they particularly care about. If they do, then it's less plausible that they've been effective, and if they don't, then it's more plausible.


Israel used to have multiple states warring against it at the same time (1948, 1967, 1973). Now it's just at war with Hamas, and further away from destruction than it was warring with actual states.


I was either not yet born or not yet paying attention during the period from 1973 to 2000. The main thing I have seen is that Israel is currently more isolated and unsupported than it has been at any other time since 2000 (though it's obviously not completely isolated and unsupported). In that sense, Oct. 7 may have been successful, compared to various other activities they and others tried over the past two decades.


Everything Hamas has tried has failed, so perhaps by that standard it doesn't seem that much worse. The US is still actually supporting Israel in its fight against Hamas, but it's not like Hamas was ever willing to do anything that could get the US to favor them instead.


I do not claim that Hamas has great odds at success. Luckily, their insane dream of driving the Jews back into the sea seems unlikely to come true.

But in mid-2023, the Israel-Palestine conflict had mostly disappeared from the news. Israel was normalizing its relationship with its Arab neighbors.

Today, the conflict is in the news all the time. Iran shot a few token rockets towards Israel. Several European countries have decided that they will recognize Palestine as a state. On US campuses, where the future elite of Israel's most important ally is raised, wokes are happily chanting Hamas slogans.

Hamas knows the Palestinians will never defeat Israel on their own. They do not have a military objective; they have a social objective. The product Hamas is producing is dead Palestinian kids, killed by the IDF.

From an instrumental rationality point of view, I can't really fault their strategy. Oct-7 was meant to goad the IDF into killing Gazans. If one's utility function only had a single term, which is "the destruction of Israel", this seems like a pretty optimal strategy, which raises the odds of Israel being destroyed this century (AI aside) by a factor of perhaps 1.5, say from 2% to 3% or so.

Of course anyone with a less monstrous utility function -- which contains terms for dead Israelis or dead Gazans, perhaps -- would conclude that Oct-7 is tremendously net negative, but there is no accounting for utility functions.


My view is that the governments of these neighboring Arab countries are just keeping their heads down, but haven't changed their minds at all about the desirability of siding with Israel against Iran. At the end of this current war, it will fade into the past just like everything else in the past of the Middle East that one might think would prevent such an alliance.


I think the presumption here was for EA goals. Trivially, some goals are achieved by terrorism -- for instance increasing the amount of terrorism.

Regarding Hamas, I don't think that's correct. Isolate, certainly, but states, especially religious ones, often pull together when they feel isolated. Indeed, I think they succeeded in heading off the possibility that Israel simply devolves into a multi-ethnic, multi-religion democratic country like the US -- indeed, maybe even one where Palestinians are the majority. It wasn't the most likely outcome by far, but it was much more possible before these events.

The reaction (and flagrant disregard of its own foundational document regarding jurisdiction) of the ICC and the ICJ has convinced Israel and many Jews in the diaspora that they can never trust the international system to give them a fair shake and that there is a truly pressing need to have a Jewish homeland.


How do you know he wasn't a closeted anti-Islam extremist all along?


It's hard to engage in terrorism in favor of big AI companies though because they're already approximately getting everything they want. The situation is inherently asymmetric. "Terrorism to create blowback" would have to look like "join OpenAI and advocate for really creepy project ideas." Stuff like the ScarJo voice drama.


Good thing I support them then.


I guess you could bomb Rationalist hubs and AI Risk orgs in the name of Effective Accelerationism. (Apologies to the e/acc people for even suggesting this, they're just the only ones to really put a name on it.)


It depends on what you're trying to accomplish. Did the US do itself serious damage by overreacting to 9/11? Maybe, but how much is hard to estimate, and my one sure prediction is that if the US goes down, people will be arguing about the causes.


Right, I think it's frequently very difficult to predict the effects of actions in the long term. And that's absolutely a practical concern here. But, if we are being consistent, we need to apply that to all such interventions and be equally worried about whether advocating for X will actually bring about X.

Indeed, I do think that epistemic limitations are a strong argument that political interventions tend to have relatively low expected benefit.


Scott speculates that 99% of terrorists get caught at the “your collaborator is an undercover fed” stage. If that’s accurate to within an order of magnitude, the headlines are much more likely to read “oil executive caught plotting to commit a terrorist attack and blame it on Greenpeace” than “Greenpeace commits terrorist attack.” So the blowback effect would likely help Greenpeace rather than hurt it.

The Reichstag fire worked well for Hitler because he was the head of government at that point and controlled the investigation into the fire.


Terrorism is also negative-sum. It's totally possible for both the terrorists' cause and the cause they're opposing to be worse off for it, e.g. maybe the backlash against Greenpeace damages the environmental movement, making climate change worse, and also the damage to the golden rice factory makes vitamin A deficiency in poor countries worse. Or consider how both Al Qaeda and the US would be better off (at least by many values; I'm not 100% sure what Al Qaeda's value function is here) if 9/11 hadn't happened.


The most successful terrorist attacks I can think of in history were ones by an extremist subgroup trying to provoke a war against the war-reluctant main group they were part of, such as Franz Ferdinand's assassination by the Black Hand or the series of acts by the Black Dragon Society and the other expansionist ultranationalists that pushed Japan into Asian expansion. All of these relied on harsh reactions from the targets as a source of additional motivation and a whip to goad their unwilling allies into the conflict.


To that I can add IRA-style terrorism as guerilla warfare, trying to make staying in your country so miserable the occupier leaves. That can work.


Causing war, chaos and mayhem seems to be something terrorism is good at, yes. Another example would be the assassination of Rabin, which likely did not help the Oslo Accords.


The assassination of Rabin did help the Oslo Accords - it put in power Peres who was more left-wing than Rabin, and discredited the right wing as political murderers.


In environmentalism, eco-terrorists are stunningly ineffective. I work in fossil fuels. Someone gassed the office building, causing a building-wide evacuation and getting themselves arrested -- but the impact on the company's actions and bottom line was... negligible.

I mean, I got to enjoy a nice walk outside rather than slave away on my computer. But a day later I was back at it.

The fossil fuels problem is a demand-side problem, not a supply-side problem. In 2024, the evil fossil fuel companies aren't actually trying all that hard to sell oil, because there will always be buyers as long as people need oil. In fact, they're aggressively trying to sell hydrogen, one of the no-carbon things that I think is kind of dumb logistically and has zero customers.

If you wanted to move the dial, you'd protest importing cars that run on petrol, and counter-protest the other environmentalists stopping the construction of new solar farms. Terrorism aimed at the supply side does nothing.


Yes, as I said, it is very hard to implement effective terrorism because the kind of actions that would make for effective terrorism aren't the kind of things that inspire people to terrorism. In practice, terrorism tends to attack symbolic targets and cause backlash, because people need to be emotionally hyped up to commit the act.

I'm just being pedantic that the correct account isn't that it's unworkable in theory, but that you can't really behave like the idealized actor who could live a full life pretending to be their enemy and then commit horrific acts in their name.

For instance, the ideal kind of terrorism that would achieve environmental ends is probably working your way up the hierarchy in an oil company, making sure you had heavy financial interests in that company's stock, and then assassinating Greta Thunberg in a way that tries -- but deliberately fails -- to look like an accident.


How on earth would a failed assassination false flag thing help??? It wouldn't cause any sanctions on a company, just on that specific executive/board which you can just replace with new people entirely. Also, some Boeing whistleblowers have died under suspicious circumstances and guess what, we're still flying on planes, aren't we?

For this specific industry, the problem is that you can't boycott fossil fuels in their current state. It's like trying to boycott Amazon Web Services. Too much vital infrastructure relies on them right now.

Any attempts to "boycott" fossil fuels tend to be geopolitics, e.g. China refusing to buy Australian coal for a bit back in 2020-2021, and all that did was cause blackouts over there. The attempted sanctions on Russian oil and gas went poorly because the timeframes are way too short.

But I suppose the only way you could force massive investment into switching over is... if you're using approximately all of it in a war and you can't get any for geopolitical reasons? Which would be a pretty crappy outcome that guarantees civilian suffering and unnecessary cost during the transition (energy rationing generally does not have good outcomes).


"Works" in the sense of "changing people's attitudes towards animals" doesn't happen. In that sense "terrorism doesn't work" is correct.

"Works" in the sense of "makes people think the authorities are ineffective" can work, but mostly through random violence.

Which is why, say, Lenin succeeded while PETA hasn't.

If your goal is "burn it all down", then attempting to burn it all down does work towards your goal. It is much easier to push Humpty-Dumpty off a wall than to put him back together again afterwards.

note: I am not in favor of terrorism at all.


Like neonazis bombing the Oktoberfest, of all places?

https://en.wikipedia.org/wiki/Oktoberfest_bombing


Any conception of the Good which asks us to reach some difficult standard will be attacked by critics who find the standard hard to reach and respond out of jealousy or anger.

In my opinion, EA belongs in the pantheon of great faiths of the world, which find themselves continually subjected to scorn and derision by persons who claim to be motivated by truth but in reality are animated more by petty jealousy than that magnanimous spirit which animates all men who aspire to greatness.

Welcome to the club.

Scott, your commitment to truth and generosity have been in part responsible for my own dedication to God, which I would see simply as a name for that which is ultimately true and real. Please see these people for what they are: animated by the spirit of Cain.


I like the phrase, "animated by the spirit of Cain," for the phenomenon.


"...and this is why we need to end Shrimp Suffering and stop Skynet!!!"

EA is a Religion - you said it yourself. When people reject it, they typically don't reject the concept of "The Good" but *your conception* of "The Good". To claim that anyone that rejects it is "animated by spirit of Cain and petty jealousy" doesn't help swing them to your side.


The statement was only intended for Scott. I think anyone who believes in the concept of The Good would be wiser to focus on their own capacity for foolishness rather than that of other people, which is both easy to do and useless unless it's to help better understand the human capacity for self-deception.


I'm not going to say the argument that if EA was good, it would improve the charitable giving in Boston and SF is the worst argument I've ever seen, because there's so much competition. It's up there, though.

Is there a term for straw man argument fantasia? This isn't just perfectionism, it's a gish gallop of invented impossible standards.


I actually think there’s something good about the argument. It’s probably looking at too short a timescale, but Effective Altruists really do hope that the movement will eventually have some effect on its goals above pre-existing trend, and noting that this hasn’t happened (either at the level of objective malaria case rates falling faster for more time, or at the level of more people giving more money) shows that the movement isn’t (yet) having the impact it wants.


But does his data even demonstrate it? Wouldn't a better measure be, like, donations to GiveWell or things like that?


Based on their own metrics, you wouldn't look at donations but at the effects of those donations. A billion dollars that doesn't change conditions in reality is no better than a million that makes no change.

It sounds like Stone's criticism might work either way - either there are no measurable downstream effects, in which case EAs are not very effective, or their numbers are too small to be effective, in which case they are not very effective.

In other words, at that point you've got a small number of people making an individually meaningful (maybe?) change that amounts to a rounding error on the problems they are trying to solve.


GiveWell, as I understand it, disburses money to multiple charities based on effectiveness principles, so it's kind of an EA clearinghouse. Is it true that there are no measurable downstream effects of the money they receive, which I believe has grown recently? In this post Scott suggests that they've saved thousands of lives: https://www.astralcodexten.com/p/in-continued-defense-of-effective


I'm aware of Scott's previous post, and don't have a particular argument against his numbers. On the other hand, 61 million people died last year. No number Scott shared comes close to a meaningful fraction of that.

Stone's argument that you can't see the effects of EA even looking at the two places where they are most influential says that EA is too small to be meaningful, even if everything worked as they hope. 200,000 people saved is nothing.

I'm not saying EAs do nothing useful, or that giving to charity is pointless. I'm saying that EAs do too little to justify their own propaganda and the way they speak about it, which seems to be more Stone's point than anything else.

I like that EA exists and want to see them succeed. I think they've done some positive good, and promoting GiveWell is great. But it's all small time stuff. They can't replace the world's charities, and my speculation is that if they tried they would have the same issues with overly large bureaucracies and disjointed priorities that all large organizations have. That's a big part of the criticism about shrimp charities and AI - even being a small and relatively nimble group, they are already veering into what most people would consider pet projects of those giving or working in the organizations, rather than objectively important life-saving approaches.


Well, I definitely agree that the Charity/Death matchup was another blowout last year. We'll get 'em this year!

More seriously, I'm inclined to reject in the strongest possible terms the idea that 200,000 people saved is nothing, and while I didn't read Stone's piece I have my doubts that he used that particular argument. But maybe I could be persuaded in that direction if I knew what alternative Stone was pointing to that's superior.


It feels like "Oh, they do mathy stuff? Well, I can do mathy stuff too, look!" which then fails because they don't have enough practice doing mathy stuff, or enough practice noticing the problems in their own arguments, so they end up making a very stupid argument.


I think Lyman is working on conflict theory and you're working on mistake theory. One simple question for Lyman: what would change his mind about EAs? I don't think anything would.


His social circle changing their minds about it. He wouldn't generate that answer himself, but would probably agree that it's true in principle, except impossible in reality.


Sounds about right


Where is the conflict theory in his argument?


Munecat (you know, that YouTube chick) has set her sights on EA. That can't be good for you, can it? Not the usual kind of opposition that you can afford to be excited to face. To what degree are you scared?


"Munecat (you know, that YouTube chick)"

I don't know. Who is this famous in her own backyard person?


I guess it doesn't really matter. She'll never be as famous or as EFFECTIVE (lol) as SBF (Sam Bankman-Fried)


>you know, that YouTube chick

I’m terminally online and I had no idea who this was until I looked her up.

From a quick glance, her channel seems targeted towards heavily left-leaning 20-somethings, the same demographic as Hasan. I can tell you from experience that most of that demographic already hates EA because of its association with Bay Area techbros.

If I were Scott, I'd be about as worried about her coming out against EA as I would be if Andrew Tate did; the overlap between people who are interested in EA and watch either of them is ~0.


So you agree EA is a cult


No, niche internet microcelebrities are more cultlike than people who don't care about them.


You're famous only for SBF

Also, I see that you're a different person, but you just admitted that you don't care, although you claim to want to make the world better. That makes you either evil, or extremely ineffective


Who do you think you're talking to? I'm not famous at all. When did I claim anything about wanting to make the world better?


I meant the EA community when I said "you". Assumed you were part of it given the context of the thread


You attributed to Nalthis the belief that EA is a cult, based on a tendentious reading of his words. (If two groups are disjoint, this doesn't tell you *which* group is the cult, therefore it's not reasonable to claim his words entail a belief that EA is a cult. But I think you know that.)

Now TGGP just "admitted" EAs "don't care", according to you. He "admitted" that EAs don't care about, in his words, "niche internet microcelebrities", which is true of almost everyone, and is a totally different kind of "not caring" than would hinder your effectiveness at large-scale goals. I'm sure you know that too, but still you generalize from one to the other.

A bit less sophistry please.

Jun 1 (edited Jun 2)

Accuse me of trolling and I might admit it. I'm not even gonna look up what you're accusing me of

A LOT of people watch Andrew Tate. You can call his followers a lot of things, but the usual definition of "cult" does not apply to such a large group. Who endorses EA these days? Post-SBF, even Elon and his followers have moved on from you. So yeah, picking one group and calling it the "cult" is appropriate

Is it a totally different kind of "not caring"? I don't think so. In my opinion (you're welcome to try to disagree), the only kind of "large scale goal" that is *not* unambiguously bad is spreading a message or your own version of the "good news" or your own version of "the truth". I think any individual who has any other "large scale goals" is either delusional or just plain evil. Hitler had large scale goals, for example

So if you don't care about convincing munecat or her followers, you're evil in one of 2 ways: you either think she (and/or her welfare) is worth sacrificing, or you're trying to be "altruistic" toward her without her consent (like force-feeding her). So yeah, both your meanings of not caring are pretty much synonymous to me


Philosophy Tube seems to make the same genre of content as Munecat and already did a video on EA. PT is a larger channel, and nothing came of that.

Speaking of which, has anyone seen that video? Is it any good?


It is not especially good. I'm generally a pretty big fan of PhilosophyTube but felt like that video really missed the mark. PT often takes a sort of soft marxist / anticapitalist conception of the good as a given, and instead of concluding that EA is people trying to do good by a different metric than she does, tries really hard to suggest that EA is more about status and prestige and signaling and personal power, and is intellectually dishonest because it isn't supporting the things she thinks it should.


Philosophy Tube is rather weak compared to munecat. But yeah, EA IS about status and prestige and signaling


For all of us ignorant of munecat, why is PT weak in comparison?


I don't think EA is about status, prestige, and signaling, but even if it is, I don't even care as long as they actually drive money to important charities, and actually help the world.

May 31 (edited)

How naive! Let's list some people who have endeavored to help the world: Sam Bankman-Fried, Adolf Hitler, Joseph Stalin, Mao Zedong, every single leader of the Church of Scientology, Elon Musk, the list goes on... Now that I've said something that I believe, let me ask you something: list some charities that you think are important


This is why you don't just look at the expressed intentions, nor at the actual intentions, but at what they are doing and why.

If you think they are making important mistakes in their analysis of what to do to help, you will have to debate those; you can't just say "it is just about status and prestige" and expect to make an important argument against it, for two reasons:

- Just stating stuff like that, without argument, will not convince anybody not already convinced.

- The reason something is done doesn't tell us whether it is a good thing to do (a lot of people honestly trying to help did a lot of wrong, and the opposite is also true).

It is quite easy to explain where each person on your list did/does make the world worse (even if some people are still delusional about Musk); just do the same thing with EA (but first inform yourself about what they are really doing -- I could be wrong, but I think it is possible you aren't really well-informed on it).


…what is the kind of opposition you *can* be excited to face?

Look, I’m glad that I’ve never had a YouTuber try to run me out of town. Or even go after my job, or an adjacent philosophical movement. But I don’t feel like “excited” or “scared” are really the right terms here.


> …what is the kind of opposition you *can* be excited to face?

Intellectually honest, creative, constructive, so on. Like, if you believe in debate / collaborative truthseeking / etc., opposition / disagreement is an engine to produce more knowledge.


I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kind of norms about when not to share their moral theory that we've developed for religion, vegetarianism etc...

I mean, yes, EA is full of a lot of people with half-assed philosophical views. Heck, I'm sure many would put me into that bucket, even though my PhD involved a substantial philosophy component. But that's much more true of the donors to regular charities, especially religious ones. The number of people who actually know what's in their own religious texts is shockingly low.

But thanks to centuries of conflict we have strong norms about not sharing those views in certain ways.


I think this is right. Across many arguments on the topic, something I've seen many EA critics say is, "to YOU donating to the local art museum or your alma mater may be less 'effective' than donating bed nets, but that's just your judgment. There's no objectively true measure of effectiveness." To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds.

If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me but it seems like it's based on some pretty durable norms.

Also, this is another case where the most important part may be the part everyone agrees on but lots of people don't do, namely donate to charity at all. It's not fun to argue about whether one should donate if one can, since almost everyone agrees they should. It's more fun to argue about what donations are "effective" or whether that's even measurable.


"To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds."

Compare that with all the objections about "imposing your religion" when it comes to the public square and topics such as abortion. Yes, if I could convert everyone to accepting the Catholic theology around sex and reproduction, then we could all agree on the moral value of embryos as human persons. But that ain't gonna happen. Ditto with "if everyone just accepts the EA measure of effectiveness".


Well, if the standard is "everyone," I agree that it ain't gonna happen. But is that an objection to trying to convince people on the margin? Because that does sometimes work!


Go forth and save souls, how can I object to that?


> If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me

I feel that's perfectly consistent - in the former case you are essentially appealing "hey, according to your moral norms (as far as you claim), you should donate", and then the person reflects on that and agrees (or disagrees); but in the latter case you'd be saying "according to your moral norms you think that you should donate to X, but according to my moral norms Y is better", which is... different. It is generally accepted to point out hypocrisy and align words with deeds, but it's generally not accepted to demand that someone change their reasonable-but-different moral priorities unless they're violating some taboos.


I think at this point it depends on how one does it. However, I don't think this necessarily entails pressuring someone to change their moral norms. I think there are very few people whose moral norms don't take saving lives as one of the highest causes one can contribute to. Suggesting that they can achieve that goal better is often taken as helpful rather than preachy; at any rate that's how I took it.


I think the issue is more that it's not really practical to do this well.

The problem is that we can either express approval or disapproval, and in an ideal situation we would approve of all charitable donations but just approve less of the less effective charity. Unfortunately, in practice, people don't really know how much you would have approved had they made the other donation, so often the only way to convey the message sounds like passive-aggressive criticism: "great, but you could have..."


The "ideology and movement" distinction and trying to be a big tent probably contributes to this issue IMO. EA has a distinct culture that is incredibly elitist and quite off-putting to "normies," but tries to maintain this whole thing about just meaning "doing good better".

So is EA simply "doing good better" by any means at all, or is it trying to claim that blowing hundreds of millions on criminal justice reform and X amount on shrimp suffering are among the most effective possible causes, and also maybe you should go vegan and donate a kidney? Scott showed the most self-awareness on this in his review of WWOTF (https://www.astralcodexten.com/p/book-review-what-we-owe-the-future), ctrl+f "seagulls", and has not returned to such clarity in any of his EA posts since. Clearly, EA isn't *just* an idea; there's a whole lot of cultural assumptions smuggled in.


It's not the elitism that bothers people. It's the lack of social skills that results in the impression of being looked down on.

People fucking love elites; big celebrities are always sitting on boards for charities or whatever, and people love it. But they are careful not to be seen as critical of people without that status.


I actually think the problem is not sufficiently distinguishing the movement from the multiple different related ideas in many of these later posts.

I agree the idea isn't merely "it's important to do good effectively"; I think that misses some key elements. I think the minimal EA thesis can be summarized as something like:

within the range of charitable interventions widely seen as good [1], the desirability of those interventions can be accurately compared by summing over the individual benefits, and when you actually do the math it often reveals huge benefits that would otherwise be overlooked.

That view is more controversial than one might think, but the real controversy comes from the other part of the standard EA view: the belief that we should therefore allocate social credit according to the efficacy of someone's charitable giving.

Unfortunately, EA types tend not to be great with social skills, so instead of actually conveying more approval for more effective giving, what they often manage to do is convey disapproval of ineffective giving, which upsets people. Not to mention many people just dislike the aesthetics of the movement, the same way many greens really dislike the aesthetics of nuclear (and vice versa), prior to any discussion of the policy.

Anyway, long story short, it's better to disentangle all these different aspects.

--

1: so consensual interventions which help currently living participants w/o direct/salient harm to anyone or other weird defeaters.


I do think those norms have developed because they are important for effectiveness. Vegetarians have learned that preachiness doesn’t actually work.


That's true, but at some point vegetarians do have to advocate for their ideas and I'm sure they find (as I used to) that just that by itself can be perceived as preachy by people who don't want to be confronted by the ideas no matter how they're packaged, and I think some of that is going on with EA too.


Sure, advocacy is admirable and useful. Vegans get in trouble when they badly misread the room and try a stunt like pledging to not ever to sit at the same table as non vegans. They tried something like that not long ago. It didn’t play well, as anyone outside their orbit could have easily predicted. You have to meet people where they are, not where your circle of close friends happen to be.


That's certainly true, but let's be honest: most EAs are in it for the discussion/feeling of being more consistent, not the altruism.

And it's wonderful we can convert the former into helping people, but I don't think it's possible for the social movement EA to ever act like vegetarians, because the motivation isn't deep emotional concern with suffering (for some it is, and the rest of us do care) but feeling good about ourselves for being consistent. And this does create extra irritation, because people feel they are being looked down on by individuals who are doing the same thing they are -- the few EA saints don't bother people.

Hence the need for a separate term like "efficient giving" or whatever to solicit donations from people who find the social movement unappealing.

--

And that's just the way of altruistic groups. I care about global warming but I find the aesthetics of most environmental groups repellent. Best you can often do is create a range of aesthetics that work for the same objectives.

May 31 (edited)

The quiet altruist is admirable in many ways, but there do need to be people who evangelize in some fashion. I don't have leadership skills and am prone to arrogance, so perhaps quiet altruism makes the most sense for me to aspire to. But there are people in EA with the right qualities for modest evangelizing.


Ohh absolutely, but most EAs don't have them and lots of us are into EA because we like the aesthetic and debating this kind of stuff and not everyone is going to realize when they need to turn that off.

May 31 (edited)

True. I've been thinking about a related issue regarding comparative advantage. For example, going vegan probably isn't worth it for people with high opportunity cost of time and energy, but may be for those with low OC. But that sort of reasoning is conspicuously vulnerable to abuse (and resentment) because it's basically saying the powerful people get to do fun stuff and the masses should eat bad-tasting food.


I think that's where it's important to separate the question of what is good to do from what is good to criticize/praise people for doing.

Also the powerful people likely have more impact, even per dollar, so likely have more obligations.


"Wokeness" / "social justice" gained a lot of ground through preachiness but also produced a backlash. I'd guess the milder forms of their preachiness were quite effective on net, but the extreme forms were very counterproductive.


Good point, though sometimes very mild preachiness/disapproval can be helpful ("ohh, you don't use cruelty-free eggs?"), but it's hard.

The bigger issue EA faces is that even when it's not trying to be preachy, it gets perceived as such. A vegetarian can just explain their POV when asked and won't be perceived as judgemental if they restrict themselves to I statements.

Now imagine the same convo w/ an EA. Something like the third question will be why they think their charities are more effective, and they have to give a statement that pretty explicitly compares what they do with what others are doing. Also, its internal deliberations get perceived as such.


I think this is definitely part of the puzzle. I think another part of the puzzle is that the EA culture is quite weird in a way that seems to drive a lot people to distraction. As Scott notes, EA is a piece of social technology among other things. It has an extremely distinct vibe. Some people are all-in on the vibe, some (like me) have what is mostly an affectionate tolerance for it, and some people seem to really, really loathe it.

Unfortunately, I think the notion that EA should avoid tainting itself by broadening its appeal is wrong on the merits, and that to be maximally effective EA absolutely should moderate and mainstream itself. The resistance to this idea feels mostly like a cope by people who couldn't mainstream themselves if they wanted to -- it's hard to choose not be weird if you are in fact weird. Every time I read the EA forums (which I basically stopped doing because they are exhausting), I find myself wondering if people are just using the phrase "epistemic status" at this point as a sort of normie-repellent.

If this sounds like an attack on EA, it's not meant to be. I find the vituperation in arguments like Stone's to be odd and unfortunate, but also worth understanding.


*Can* EA go mainstream without being philosophically and compositionally compromised? Organizations that alter their philosophy to appeal to more people tend to end up endorsing the same things as all other mainstream organizations. And gaining members faster than the culture can propagate is going to lead to problems of takeover.


I think so, absolutely, yes. Scott consistently presents a mainstreamed version of it here: donating 10% of your income and giving to charities that have a proven impact on people's lives are both somewhat radical and also reasonably unweird concepts at the core of EA.

Note also that I don't think EA has to jettison all of the weird bits, such as the esoteric cause evaluation. I just think they need to be willing to tailor their message to audience and -- this is probably the important bit -- tolerate the tailoring.


EA cannot go mainstream, but not for the reasons you listed.

It's already difficult to disburse the amount of money one Dustin Moskovitz has in ways that are consistent with how EAs analyze the evidence. Of the existing charities, I think there can probably be around 10x or at most 100x the current amount of donations before we are wildly out of distribution (and it's only that high because I'm treating GiveDirectly as a money hole).

It would certainly be nice if everyone donated to global poverty charities, but at that scale, the type of intervention you'd be thinking about has to start including things like "funding fundamental research" or "scale up developmental economics as a field".

This is something I've been worried about for years, that there aren't big enough money holes for more charity!


Maybe this is where the watering down part of mainstreaming EA comes in. Sure, we might tap out early on Dustin Moskovitz-grade charities. But here are two somewhat different questions: would it be a net benefit to the world if annual charitable giving was 10% higher than it is today? And is current giving inefficient enough that we could raise the good done by at least 10% if resources were better allocated? I pulled the 10% figures out of nowhere, but the point is just that if you believe the world would be a better place with more and better charity, then that is an argument for mainstreaming EA. Diehard EAists might call this a very weak form of EA, and they'd be right. But scale matters.


I think at scale, considerations I'd consider facile now, like "is charity really the most efficient thing", would become real live players. For example, I'm not sure throwing 100x the amount of money into, say, SF's homelessness problem would help; it might literally be better to burn the money or spend it on video games! If you believe the thesis of Bryan Caplan's The Myth of the Rational Voter, that self-interested voting would be better than what we have now because at least one person benefits for sure in that circumstance, as opposed to programs now that benefit no one, you can imagine other similar signaling dynamics starting to dominate. Not even going to start on the sheer amount of adversarial selection that would start happening, where honest charities would start losing out to outright fraudulent ones and so on.

I don't think I know when this would start happening, but I'd lower-bound it at 3rd world NGO scale. I'd be surprised if the upper bound were above 10% of 1st world disposable income.


Neither of these is radical! It's called a tithe! Giving 10% of your income to your Church (i.e. THE charity that has a proven impact on people's lives) has been the standard for ~1500 years!


I think they lose the vast majority of people at the "altruism" part (esp. with the way it's usually operationalized), and criticisms around "effectiveness" are post hoc.


If I understand what you mean, I think I partly agree. I think EA culture makes some assumptions about utilitarian consequentialism, atheism, animal qualia, and so forth, which aren't strictly necessary for the "effectiveness" side to be useful. If I agree with them about human suffering but not animal suffering, I can use their freely-provided resources to help effectively reduce human suffering, and not worry about the parts I don't agree with. And I think that's great.

There's a minor worry that by associating with kind and reasonable people who have a different set of values, I become more likely to adopt those values, but a) that's a risk I'm willing to take, and b) maybe they're right. (Or who knows, I might persuade some of them.)

I do think the "effectiveness" side actually rubs people the wrong way, though, and isn't simply a post hoc complaint. There's a sort of charitable-industrial complex (not really "industrial", but if I replace that word I don't think the reference gets through), which seems to derive, stereotypically, to me, from upper-class society ladies. It's about being seen to "do good", and spending an appropriate amount of time and money on "doing good", and it's all interwoven with class status and signaling and fitting in, and criticism of someone else's choice of charity is a social attack straight out of "Mean Girls". ("She's clearly not one of us, choosing that area is so déclassé.") And there's a bit of practicality there, in that membership numbers are used as a marker of success. But there's also an incentive to contribute little bits to lots of groups, covering everyone's pet causes, to reassure your peers that they have sufficient status that you're willing to adopt their cause as one of yours. And I think this is an upper-class habit that's trickled down to all us temporarily embarrassed millionaires. And when someone criticizes a charity as being ineffective, it's very much taken as a status play. Who are these nerds, to tell me what's important! It's a power move, demanding attention and effort, every bit as much as calling someone "privileged".


>"If I agree with them about human suffering..."

This is the actual crux for me: I don't. There's a quote from Serenity that conveys the sentiment perfectly: "I look out for me and mine. That don't include you 'less I conjure it does."

To the extent that the society ladies et al. behave as you describe, I think they're working from a similar foundation. Being *seen* to be charitable is necessary to maintain their position and therefore lifestyle but that's all they actually care about, not the cause itself. Criticism of altruism per se would be a defection from the shared fiction and therefore punished socially, but effectiveness is a safe target.


I've got some sort of scaling function, but I don't think it reaches 0. Human suffering elsewhere is still meaningful to me.


This is where distinctions between the idea and social movement are important.

If you notice, GiveWell doesn't mention EA on its main page, and I think it'd probably be good if we came up with a term like "efficient giving" to capture the narrow idea that we can and should sum over the benefits provided per dollar, and split it away from the social context.


Your theory doesn't seem to explain the overwhelming cultural success of "social justice"/wokeness. Plenty of people hate it, sure, but nobody becomes big without attracting haters.


Why would it? My theory correctly predicts that lots of people will feel intense dislike for both EA and wokeness because they feel negatively judged. Obviously, things that upset many people can be popular as well -- and I'm not advancing any theory about which movements/views become popular.


I'm mostly objecting to the part about "we have strong norms about not sharing those views in certain ways". Clearly the norms aren't strong enough to prevent some movements with aggressive approaches from succeeding.


Ok, more accurately, what I should have said is that we have strong norms that sharing those views in certain ways is seen as an attack on or disrespect towards those who don't have those views. I presumed that would be clear from context but that's usually a bad thing to assume.

May 30 (edited)

>I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kind of norms about when not to share their moral theory that we've developed for religion, vegetarianism etc...

Interesting! I'm outside the debate (not an altruist, not a utilitarian, not an egalitarian), but haven't negatively reacted to EA. I just view it as outside the areas I'm interested in. Perhaps this is because I've never bumped into EA "in the wild", so I've never been preached at nonconsensually by an EAer, only encountering EA when I choose to read about it (e.g. this post). So, basically I haven't run into the norm violations.

Expand full comment

Presumably the reasoning for the hypothetical anti-AI terrorism would be less that bombing a datacenter causes that much damage in itself, and more that causing a number of deaths would dissuade people from becoming AI researchers and so on, since it demonstrates there's a target on their back.

Of course, there have been a large number of terrorist movements (all of them?) that have made the same calculation - "Our opponents are weaklings and wussies; we'll just kill a few of them and the rest will scatter!" - and have turned out to be spectacularly wrong. But it does need to be noted that the associated rationalist movement, at least, tends to strongly signal that they really, really fear death more than the rest of the population and would go to great lengths to avoid it.

Expand full comment

The rationalist movement has established that they take existential risks seriously, however remote, and are willing to deal with them in a deliberate, rigorous, proactive manner. Someone who bombs datacenters, with the explicit subgoal of killing rationalist sympathizers, is effectively jumping up and down shouting "hey, look at me, I'm an existential threat to people and things you care about! What are you, chicken?" Historically speaking, that tends to harden the target group's resolve. https://acoup.blog/2022/10/21/collections-strategic-airpower-101/

Expand full comment

Yes, it would probably end up hardening it; I just mentioned one reason *why* our hypothetical terrorists might end up believing it works.

Expand full comment

Scott's quote of Stone includes the line "waiting until a time when workers have all gone" as part of his bomb plan, which seems to exclude your interpretation of his reasoning.

Expand full comment

What is it about EA that seems to attract scorn? People seem to generate spurious reasons to say it is somehow bad - not merely no more effective than general charitable giving, but actually bad - worse than not giving.

There appears to be some feature that makes it enemies in a way I don't get, even if I thought it was no better than giving to the college football team.

The nearest I can find is that people think it is smug and that it implies other altruism is not effective, but that would apply to any attempt to research what works best, in virtually any arena of activity.

Expand full comment

"3. ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!"

This is an example of the kind of thing that bothers people. If this isn't condescending, I don't know what is. I'll grant you, this isn't any more condescending than Protestant Sunday sermons or identity politics nagging. But there's a heavy backlash against those things, too.

Expand full comment

Does it? Bother people? What kind of people?

Imagine you have a friend who loves talking about exercise, reads all kinds of exercise advice, blogs about benefit of exercise, but, you know, doesn’t. Exercise.

And you like keep gently asking, so what is like your favorite exercise? And the friend talks about his favorite exercise that he read about last months and it’s awesome.

But he hasn’t. Done it.

At some point… you may get exasperated and tell your friend that maybe:

“ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!"

Expand full comment

The day I see EA boycotting the Met Gala, which is the most stupendously conspicuous waste of time and effort disguised as 'charity', then I'll take them seriously as doing things, rather than "hey what about lawfare against chicken farming?"

Expand full comment

More than one thing can be true at the same time. Yes, the Met Gala is stupid; yes, actually donating to charity, as opposed to only writing about donating to charity, is good.

Expand full comment

Do you mean boycotting or protesting?

I don't think there're EA'ers at the Met Gala, so I think they (we?) are already boycotting.

Expand full comment

So… mission accomplished? EA is already about maximally uninvolved with the Met Gala, to the point where even the term “boycotting” fits oddly due to there not having been any involvement to begin with.

Expand full comment

That's what I mean; as I said, to me the Met Gala is ludicrous in the amount of sheer wastefulness of money on gowns, 'themes' and publicity. If EA is going to lecture people about the best use of money, then getting out there in public and making statements and even showing up to protest would be *something*.

But no, "Jimmy-Bob and Lucy-Mae tithe to their local church, what maroons" is about the height of it with regard to ordinary people. There's no cost involved, of the type that "Uh, so maybe some of our deep pocket donors go to the Met Gala" would involve.

It's like Alexandria Ocasio-Cortez showing up with that "Tax The Rich" dress - pointless signalling where she is in no way, shape or form even the ghost of a threat to the status quo. I'd respect EA more if they worried less about consciousness in shrimp and more about "the humans in our town". Once they started hob-nobbing with the political establishment (both Republican and Democrat), they became the equivalent of AOC and her dress:

https://www.latimes.com/entertainment-arts/story/2021-09-14/met-gala-2021-aoc-tax-the-rich-dress

But that's because I'm old-school Jim Larkin type labour representation, not modern DSA type labour representation where they have little magazines of peerless ideological purity 😁

https://en.wikipedia.org/wiki/James_Larkin#/media/File:James_Larkin_O'Connell_Street.jpg

"Today a statue of "Big Jim" stands on O'Connell Street in Dublin. Completed by Oisín Kelly, and unveiled in 1979, the inscription on the front of the monument is an extract in French, Irish and English from one of his famous speeches:

Les grands ne sont grands que parce que nous sommes à genoux: Levons-nous.

Ní uasal aon uasal ach sinne bheith íseal: Éirímis.

The great appear great because we are on our knees: Let us rise."

Expand full comment

"Go protest this particular ineffective charity!" sounds like the kind of thing you'd do if you cared more about signaling effectiveness than actually being effective. There are much better uses of my time.

Expand full comment

Someone needs to do a study on the effectiveness of boycotting the Met Gala.

Expand full comment

I've never been to the Gala and I've also never donated to the Met, so boycotting is 100% effective as far as I'm concerned.

Expand full comment

But how do we know that studying the effectiveness of the boycott will be... effective? I think we need a study!

Expand full comment

The normal kind. People generally vaguely agree that charity is good, but don't consider themselves obliged to systematically do it to a definite standard, so strident proselytizing in this direction (by anybody, but even more so by dubious Bay Area types) makes them uncomfortable, because they have no principled objection, and yet don't want to contemplate being morally delinquent.

Expand full comment

I think this "actually do them" sounds super-strident when taken out of context. Yes, if one just yelled this at random people, it would be horrible. But here the context is that there's a system that actually cajoles people to actually donate, not just ruminate. As an explanation of what the system does, it's OK.

To be clear, I'm neither an EA nor particularly like the thing. But this particular thing is not why.

Expand full comment

Well, the very name "effective altruist" contains a not-particularly-veiled insult towards anybody who doesn't share its implied precepts. Which is a criticism that EA very much is on board with, but institutional inertia makes the name basically impossible to change, unfortunately for them.

Expand full comment

Yep, the name is... let's not go there :)

They should change it though. Of course the best time to change the name was, like, 10 years ago. But the second best time is now.

But - not my fight, I'm here for Scott's writing and exposure to interesting stuff.

Expand full comment

The thing to understand with that is that it's a self-directed psychological technique among rationalists to browbeat themselves into taking rational steps. Thinking deliberately and carefully, as well as acting on it, are very challenging things, and so this sudden flurry of rhetorical punches is an intentional method of disciplining their/our thought processes, where casual or even sophisticated intentionality just won't break through our everyday selfish incentives.

Expand full comment

Not strictly a rationalist thing. Protestants and Islamists also flagellate themselves for failing to meet some absurdly high standard.

Expand full comment

Yeah. Scrupulosity is a religious concept, but then some in the rationalsphere started talking as if they had invented the notion, or at least had just discovered this amazing new psychological term.

Some people have had these kind of ideas before, in the vast expanse of history!

Expand full comment

To me this makes them seem the opposite of condescending. People hate the smug intellectual elite who write essays from their ivory towers and won't get off their ass and actually help anyone. A movement that tells its members, "no, you put your money/effort where your mouth is and actually help people" feels a lot more down-to-Earth and humble.

Expand full comment

Because the people usually associated with it in the public consciousness are massive weirdos. SBF was a massive weirdo, in addition to being a criminal. Now it's often mentioned in connection with the Collinses, who are massive weirdos.

Expand full comment

EA as a principle is a natural idea. EA as a practice (finance/tech bros patting themselves on the back for putting 5% of their income into AI alignment research) is a different matter.

I think you're bang on w/r/t the smugness issue, but I think it's more than "trying to do better"; it's "convinced they're doing better". Why I sometimes roll my eyes: a) proponents' tendency to ignore/benefit from systemic issues ("take that Palantir job as long as you're donating to effective causes!"), and b) the movement's tendency to get mired in lowest-common-denominator quantitative discussions (the bikeshed problem for STEM nerds).

Expand full comment

People --normies anyway-- don't like status grabs. Or things that look like status grabs.

Expand full comment

I think because (1) it is very tied in, in its roots and foundation, to a small sub-set of people: the Bay Area rationalists, Oxford philosophy dons and rich Silicon Valley liberals; (2) when it was about things like bed nets, people could agree that it was indeed doing good, but then it moved on to (3) pet hobby horses, such as "do shrimp feel pain and thus should we ban fish farming" and AI, which, until the big commercial corporations got their mitts on it, was seen as an SF notion alone.

Things like the Virginia election (meddling in politics and falling on their face massively, in a way which even idiots like me forecast would happen) and of course He Who Should Not Be Named Save By His Initials certainly didn't help the public perception. It was taken as smug condescension to say (or seem to say) "You, poor ignorant fool, may *think* you are doing good by giving to charity, but *we* are doing it better and indeed doing it right, in the only way that's right, and we don't give a damn about your piddly little 'let's feed the poor people in my town' concerns. Our morality is so unimpeachably superior that we are only concerned with strangers hundreds and thousands of miles away".

People may be willing to accept "okay I do it my way, you do it your way" but they don't like "you're dumb and even bigoted for doing it your way, our way is the only right way".

Expand full comment

These are all the same problems religion suffers from, but the odd-ideas problem is even worse because there's no central authority to proclaim that shrimp welfare is out of bounds, please stop talking about it.

For what it's worth, I would take SBF over the Spanish Inquisition or religious wars. Obviously, these are different times, so it's not an apples-to-apples comparison.

But anyway, if you're against the Met Gala and for feeding poor people, you're already on board with trying to make sure your giving has a positive effect, which in my book is the main point.

Expand full comment

I personally think altruism, whether effective or not, is usually well intentioned. So I like EA. But I think EA attracts criticism more than other charitable causes because it appears to cast judgment on some other charitable giving as “ineffective.” When someone earns $40k a year and gives $40 to the local little league where their kid plays, they resent being told they made an efficiency error or they could’ve done better. Communities traditionally depend on this ineffective altruism and EA kind of indicates to typically blue-collar donors that what felt good and selfless was actually in some sense a dumb mistake. I’m sure EA doesn’t intend this.

Relatedly, I think names are important (many detractors of groups don’t know much about the group besides the name itself), and the name Effective Altruism unintentionally suggests that non-EA altruism is ineffective (dumb). If you belonged to a group who called themselves the Effective Christians, you’d probably receive pushback from the newly downgraded Ineffective Christians, regardless of your good intentions. A name such as GiveWell for the entire movement would’ve been a much more effective name choice (I think).

All that said, I think EA is a net positive and I appreciate what they’re trying to do and thank those who give such a large percentage of their income to others they think need the help.

Edit: removed BLM name comparison to avoid triggering people.

Expand full comment

I think a lot of it is a mix of do-gooder derogation and resentment from other parts of the nonprofit sector.

Expand full comment

It's uncomfortable to think that you could forgo some luxuries and thereby save lives and greatly improve the world. It's psychologically much easier to think that such sacrifices are fruitless and the people telling you to make them are sanctimonious fools.

Expand full comment

Yeah, this. And if you include in "luxury" also spending money on ideological or religious pet causes and status games disguised as charity, it becomes very obvious why some people cannot stand the idea of the kid screaming that the king is naked - or, at the very least, that his supposed clothes are not very effective.

Expand full comment

If the king is naked, clothe him. The problem is when the suggested solution is not "give clothes to the naked" but "donate to my foundation to publish a paper on setting up a committee to examine 'is clothing the naked the best use of our money?'"

Expand full comment

As this very post argues, people will not, ex nihilo, successfully come up with the best (according to their own values) places to donate their money, without _somebody_ doing the legwork of figuring out how effective various interventions are. I do wonder if you have some argument against this or are just taking random already-debunked potshots.

Expand full comment

Yes, I think this is one of the main causes; we have the same problem with veganism.

Another one is, I think, a question of aesthetics: they don't like the aesthetics of the EA community - they're weirdos or techbros or something - and so they don't like them, and don't want people to think they are doing something good.

(There are also fair criticisms, but I think most of the strong dislike comes from these two things)

Expand full comment

I mean, the obvious answer is 'EA is lethally bad at anything resembling PR/Public Outreach, as is indicated by the very name they chose.' I'm sure they didn't think 'effective altruism, because we're better than all those other idiotic/ineffective altruists,' but the reason they didn't think 'shit, that's a name that's going to make us look like arrogant twits to the very community we want on side' is because...they're incredibly bad at anything resembling PR.

Expand full comment

The name is not terrible in itself, it's the attitudes that crystallised around it that make it the PR disaster. "We're gonna do Do-Gooding Better! Even more, we're gonna do it Right, unlike the other rubes who just throw money at wasteful campaigns in order to feel good about themselves!"

Again, not everybody had that attitude, but it was very easy for writing on the topics to slide into that territory, intentionally or not. "Why do people give to charity? Well, because they're dumb and run on feelings". Ouch.

Expand full comment

I'd say the name is indicative of the broader problem, which is that they're shit at PR and totally unwilling/unable to hire people who are good at PR. You also see this in stuff like 'let's buy a castle, here's the cost benefit analysis showing it's really a good idea,' when 30 seconds with a PR specialist would get you 'no, that's dumb, the cost to your reputation, which you aren't factoring in, will be really large.'

Especially if your PR model is intended to be 'we're ruthlessly focused on helping people, not like giving to the opera, or the Met, or any of the big events/institutions that means you get a chunk of your money back in events/tickets/parties,' but they're really, really allergic to PR.

And in some ways, I find that charming and good. Like, GiveWell's 'our mistakes' page (https://www.givewell.org/about/our-mistakes) is the sort of thing I absolutely wish more organizations would do and is so anti-PR it almost becomes PR, if that makes any sense.

Expand full comment

Here's my pet theory.

I think it all started with SBF. Following that, the EA-is-a-doomsday-techbro-cult meme was born. Once the meme was born, a journalist wrote a vaguely sinister piece about EA to make people feel scared, enraged and taken advantage of. That gets clicks. Once this happened, EA became associated with fear, rage and lurking danger. Techbro cultists are taking over the White House RIGHT NOW! After this, anyone with a stake in media presence started capitalising on the meme, finding more and more things to be enraged about. And so the hatred cycle started.

It was so memetic because EA is the perfect media rage soup: techbro + rich + elite + entitled + cult!

EA is the easiest thing to straw-man as a cult. Anything with an organisation and philosophy can be branded a cult. The philosophy is especially easy to brand as a cult if it has something about making the world better and doomsdays. EA is a bingo!

It doesn't matter that most EAs are neither techbros, rich, elite nor entitled. They have enough tech people involved. They have rich supporters. They are mostly well educated, which is enough to straw-man them as elite. They are vocal about controversial opinions, which is enough to straw-man them as entitled.

The funny part is that in fact EAs are the opposite of entitled. God forbid people with normal incomes decided to help people with no income. Saving babies, but are they doing it sincerely? What a shameful display of privileged superiority! Meanwhile, the people writing the hit articles are totally entitled, judging baby savers from their moral high ground, where they convene to decide what good deeds are good enough.

I also always adore the proud-Aristotle-reader argument: "These guys don't even know philosophy! How dare they think things!" It's truly enraging that someone would go around saving babies without a formal education in philosophy. Worse yet if this person has not saved any babies yet, but is daring to write essays on the internet.

We, as a community of people that only think the correct philosophical things, should stand united against this baby-saving and essay-writing menace.

Expand full comment

"I also always adore the proud-Aristotle-reader argument: "These guys don't even know philosophy! How dare they think things!"

This made me laugh, because in one of the book review post threads I was raging about Sam Harris, described (whether self-described or not, I do not know) as a "philosopher" amongst his other accolades, yet in an excerpt from a book of his, he either was ignorant of Aristotle's views on the Unmoved Mover, or he knew the argument but chose to ignore it in order to punch a tissue paper man of his own (the thing wasn't even strong enough to be a strawman).

So, yeah. Not EA in general, but if you're gonna philosophise, then have some basic knowledge.

Expand full comment

It's a fair comment. I just think it's a poor angle to attack baby savers on this basis.

I also think "EAs should read more basics" doesn't contradict "it's cool that EAs are discussing weird things and trying to produce original ideas"; I can get behind both.

Expand full comment

"EA is a cult" was previously "rationalists are a cult," that's been around since Yud started the Sequences.

Expand full comment

Good point, it's a sequel!

Expand full comment

Wealthy nerds are a historically popular target.

Expand full comment

Once you're in the business of evaluating causes on an objective basis, you're unavoidably in the business of telling people that you think *their* cause isn't very good, which feels like an attack to anyone invested in that cause. The "standard" approach to charity, or non-profit work, is to treat it more like a matter of taste. Some people like their local church, some people like art galleries, some people like soup kitchens, and so on, but within a very broad range none of these are "right" or "wrong" choices any more than there are "right" and "wrong" movies to like. Writing essays about how malaria nets are way better than any of that is just rude and disrupts the nice circle of affirmation they had going on. You can see some of that at the end of this post, where Stone admits his real grievance is that EA isn't being a good member of the charity community by affirming everyone else is also doing a good job. "Admit you're not special and you're muddling through like everybody else, and then we can be friends again."

Also a lot of this comes from leftists who are upset that we're not communists, or tech people who are upset that we want to restrict AI development. Probably at some point we'll get backlash from ranchers who are upset at meat replacement products, but they don't seem to have noticed EA yet.

Expand full comment

>Probably at some point we'll get backlash from ranchers who are upset at meat replacement products, but they don't seem to have noticed EA yet.

nit: I don't know at whom this ban is aimed (I doubt EA - maybe woke???) but Florida banned lab-grown meat on May 1, 2024: https://www.usatoday.com/story/money/food/2024/05/05/florida-lab-grown-meat-ban/73569976007/

Expand full comment

Yes, I was thinking of that, but it seems to be aimed at "Leftists" and/or "Woke" rather than "Effective Altruists"

Expand full comment

Many Thanks! Yes, EA seems an unlikely target, and leftists and/or woke do indeed seem more likely. Or a couple of legislators had an unfortunate interaction with a particularly forceful vegan...

Expand full comment

EA is something like ~40% vegan, EA events approach 100% vegan catering, and EA almost certainly has fewer conservatives than Columbia University. It's not coextensive with woke, but the overlap is significant. And EA pushes for more lab-grown meat and pea-derived meat alternatives.

Also Florida has a huge cattle industry (for another odd element, a lot of the industry is Mormon owned); a lot of people only think of the beaches and theme parks. This ban is a pre-emptive strike in favor of a major lobbying industry.

Expand full comment

Many Thanks!

>This ban is a pre-emptive strike in favor of a major lobbying industry.

Plausible. So it could be more like whacking a competitor than anything else...

<evidence from fiction>

Quoth Corleone:

>It’s not personal. It’s just business.

</evidence from fiction>

Re the cattle industry - yes, that was a surprise to me.

>a lot of people only think of the beaches and theme parks.

Umm... Also NASA? ( Yeah, and retirement communities and citrus groves )

Expand full comment

Whoops, good points on NASA, retirement, and citrus.

My family is very into Florida beaches and theme parks, I'm the only space nerd of the bunch. Skewed perspective.

Expand full comment

Many Thanks!

Expand full comment

Years ago, there was an article about a couple that practiced "extreme altruism". It said they believed (I'm paraphrasing) that each person is equally valuable and they should not put themselves before others. They aimed to donate at least 50% of their income to effective charities. It didn't report them giving to any weird or controversial charities; it was something normal like Oxfam. It was a nice article that didn't criticize the couple for their altruism. It just told their story.

The amount of vitriol and disdain in the comment section, however, was enough to make you think these people must be the next Osama bin Laden. Whatever criticism there may be about extreme altruism, the couple did not deserve that pile on.

The article was written before the term "effective altruism" was widely known. There was no mention of Silicon Valley, or tech bros, or x-risk charities or anything like that. Just one couple who felt obligated to give half their income to charity.

Expand full comment

Interesting! I wonder why the vitriol. I'm not an altruist myself, but I'm not going to attack someone who is, as long as they don't attempt to force me into that role.

Expand full comment

My memory isn't perfect but a lot of the comments were saying stuff like they're crazy, they're narcissistic, they're moralizing, or they're mentally unhealthy. Some were trying to argue it's not moral to help people far away over your own friends and family. It wasn't the arguments alone that were bad. It was the tone. They weren't just debating points academically. They were mad.

Not all the comments were negative; there were supportive comments as well.

If I had to guess, I'd say some people react badly to the implications of the couple's philosophy. Even though they never asked anyone else to donate to charity, if they believe they have a moral obligation to give so much, it implies we all have a similar obligation. Imagine you and your spouse earn $100k income together and you have a lot of expenses and feel like you're struggling. You recently donated $500 to a cause you believe in and you're proud to have contributed. Then this other couple comes along and says they felt morally obligated to donate $50,000 per year of their $100k income. No matter how nice they are, it kind of puts your own efforts to shame.

Expand full comment

Yes, but people still write stories even though Shakespeare exists, and yet no one yells at someone for saying they like Shakespeare and think he's great on the grounds that this implicitly belittles your ability to write. Obviously it'd be gauche to bring up Shakespeare in a place where someone is writing, but these people are seeing an article in a non-private space and responding.

What is going on in the heads of people who hate stories about altruistic virtue that isn't happening in the heads of people who read praise of other virtues?

Expand full comment

Let me just start by saying I think the people spewing vitriol at this couple in internet comments are shitty people. I think the average person in real life is better than that.

I think part of it is that this was just a normal middle-class American couple doing what they felt they had a moral obligation to do. If you read a story about the pope performing a miracle, well, no one expects you to be the pope. There's no sense that you should be doing miracles too. Shakespeare was a legendary writer; it's okay if you're not Shakespeare. But if this couple's reasoning is right (you shouldn't put yourself before others and all that), that same moral reasoning could apply to any of us.

Expand full comment

Hmmm... this is partially satisfactory but not fully. If someone normal becomes a math genius, like Terence Tao, it seems like we construct a new category of "genius not at all like me" and then treat every single exceptional event in that person's life as further evidence that they are unusual mutants and that this has no bearing on me. You can imagine this also happening for someone like Mother Teresa or other saints (Gandhi, the Buddha, Chiune Sugihara). Maybe the nature of the original article prevents this? But that's even more confusing re: EAs, where people have ready-made otherizing excuses, like "they're elites" or "they earn six figures, I don't". Maybe we only accept the existence of saints if they are sufficiently different, and it's easy to construct excuses for why they are special and you aren't.

Expand full comment

Many Thanks!

>Even though they never asked anyone else to donate to charity, if they believe they have a moral obligation to give so much, it implies we all have a similar obligation.

Ah! Personally, as a nonaltruist, I can just quietly ignore them. They do them, I do me, so I see no need for vitriol myself. Now, if they were doing something like what Peter Singer did, and asserting that ethics _demanded_ that everyone, myself included, donate massively, that would put them in the never-darken-my-doorstep-again-and-take-your-entire-field-of-ethics-with-you category. But it is the _demand_, not the _example_, that drives that classification. Since

>they [this couple] never asked anyone else to donate to charity

I view them as harmless.

Re the light this sheds on hostility to EA:

This suggests that e.g. the mathematical analysis part of EA is at least not a _necessary_ part of the trigger for the hostility. I wonder if people outside of EA who donate more than 10% of their income trigger analogous hostility, and, if so, what the threshold is and how sharp it is.

Re the arguments:

>Some were trying to argue it's not moral to help people far away over your own friends and family.

There is actually a point to this, though I would phrase it a bit differently. Forgetting morals and viewing this in terms of alliances, if two people have the same "budget" (broadly speaking, including time and effort) for offering help, and person A "spends" it on a tightly circumscribed set of friends/family/close allies, while person B "spends" it spread thinly across the globe, then person A is a more reliable ally to their friends/family/close allies than person B is. Given a choice of who to pick as an ally, it is rational to pick person A over person B.
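
A toy calculation (with entirely invented numbers) makes the asymmetry concrete:

```python
# Toy model: the same helping "budget", concentrated vs. spread thin.
# All numbers are invented for illustration.
budget = 10_000.0        # hypothetical units of time/money/effort

allies_a = 10            # person A helps a tight circle
allies_b = 1_000_000     # person B spreads help across the globe

help_per_ally_a = budget / allies_a   # 1000.0 units per person
help_per_ally_b = budget / allies_b   # 0.01 units per person

# As a prospective ally, you can expect vastly more support from A.
print(help_per_ally_a / help_per_ally_b)   # 100000.0
```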

Expand full comment

> Now, if they were doing something like what Peter Singer did, and asserting that ethics _demanded_ that everyone, myself included, donate massively

Just to clarify, they didn't say everyone must donate in this particular article from a decade ago, as far as I remember. For all I know they might believe everyone has that moral obligation.

> There is actually a point to this [...]

What you're saying is true, and moreover, many of the points the commenters were making were plausible, or valid under some moral framework. But looking at the comments section as a whole, it's weird and kind of upsetting that people would be so hostile to a couple of people helping people in poor countries.

If there were an article about someone donating money to a local sports league, people could make the same type of critiques. They could say the person donating is narcissistic, the money is better spent elsewhere, this sport is violent, it excludes so-and-so, etc. But they wouldn't. The comment section would typically have just a few positive comments. (Unless the donor is someone widely hated like Donald Trump or Jeff Bezos; they'll attract criticism regardless of the article's content.)

> This suggests that e.g. the mathematical analysis part of EA is at least not a _necessary_ part of the trigger for the hostility.

Yes, this. All the other reasons people complain about EA may be contributing to the hostility, but people are hostile even without those other factors. Though keep in mind this was an article about "extreme altruism", in which the couple sacrifices and donates much more than typical for effective altruism.

Expand full comment

Many Thanks!

>Just to clarify, they didn't say everyone must donate in this particular article from a decade ago, as far as I remember. For all I know they might believe everyone has that moral obligation.

Yes, I draw a sharp distinction between people like the couple who are donating, but _not_ pressuring other people to do so and Peter Singer and people like him, who _do_ pressure other people. The former let me live my life in peace (even if they silently believed that everyone was obligated), the latter are _making demands_, and _not_ letting me live my life in peace.

>But looking at the comments section as a whole, it's weird and kind of upsetting that people would be so hostile to a couple of people helping people in poor countries.

The hostility seems unwarranted to me too. The tone in which comments like these are made can vary a lot; I hope that e.g. my phrasing of the consequence of spreading a help "budget" thinly does not come across as hostile. It is just a natural consequence of spreading a "budget" over a wide set of people - no judgement intended.

>All the other reasons people complain about EA may be contributing to the hostility, but people are hostile even without those other factors.

Yes, and that is valuable information! Thank you!

Expand full comment

Not sure I'd use Hamas as an example of terrorism not working. Hamas doesn't care about Gaza having functional buildings. Hamas cares about destroying the state of Israel. If half of Gaza gets destroyed, but Israel's reputation in the "international community" is meaningfully tarnished, then Hamas has "won" by the metrics of their own utility function. I don't *think* the October 7 attacks and the resulting fallout have net tarnished Israel's reputation, but it's remarkably close. IMO Hamas would be winning the PR war if not for Jewish billionaires pulling the strings to influence public opinion (not saying this to be antisemitic; I think Israel is in the right and I have deep respect for Jews, but factually, this is what is going on).

Expand full comment

I think the point is that terrorism against your enemies is generally bad for those who are ostensibly your friends.

Expand full comment

Perhaps Israel is net winning in the US, but internationally Hamas is probably winning the PR war.

1. The Abraham Accords could have developed further, to the point that eventually Saudi Arabia recognizes Israel too, but the Arab populace has seen so many images of Gaza that any Israeli-Arab deal is now untenable.

2. Due to a recent UN resolution (https://en.wikipedia.org/wiki/United_Nations_General_Assembly_Resolution_ES-10/23) Palestine now enjoys a greater degree of recognition by a wider group of countries than it ever has before.

Hamas is playing the long game: it is trying to maneuver Israel into the same diplomatic situation in which Apartheid South Africa found itself. And the way to do it is basically entrapment of Israel.

This strategy mostly works due to contingencies of the Israel-Palestine Conflict. EA probably cannot replicate it.

Expand full comment

Before 10/7, the future looked predictable and not good for Palestinians. Now variance has been introduced, which creates the possibility of Hamas winning, and also the possibility of Palestinians/Israel winning. So Hamas could lose much harder, but "at least" they have a chance of winning so in their minds it probably was a worthwhile gamble.

Expand full comment

The status quo is total Israeli domination. For Hamas, it's a total one-sided bet.

Expand full comment

Israeli domination, but Hamas survival and continuation. Now Hamas could well be eliminated.

Expand full comment

Hamas is an idea, which you can't really eliminate while Palestinians continue to hate Israel.

Expand full comment

It's not an idea at all, it's an organization. Palestinians can hate Israel without it existing, but they will probably hate Israel less once the weapons to Hamas are cut off and there is peace.

Expand full comment

Hamas is a branch of the Muslim Brotherhood. Syria had a branch of that which was destroyed by Assad Sr. Nothing impossible about it.

Expand full comment

My point is not that terrorism is some killer app that lets you solve every problem, but that there are in fact some circumstances where terrorism can be "effective". So without a robust theory of why exactly terrorism is inconsistent with EA thought, ad hoc condemnations of terrorism via the "historically, terrorism tends to be ineffective" argument will be seen through by bright-eyed young EAs who have thought deeply about [scenario X] and come to the conclusion that [scenario X] is particularly conducive to terrorist intervention.

Expand full comment

Terrorism is particularly ineffective when associated with maximalist goals. "Remove the English ruling class from Ireland" can work because the IRA made it clear they'd be satisfied if said ruling class simply ran off to maintain a similar standard of living for themselves by oppressing a different ethnic group somewhere else.

"Kill all the techbros" is a problem for anybody who'd innately keep being a techbro regardless of where they lived or what project they were applying those tendencies to, and "End all suffering, everywhere" is an intolerable proposition to anyone whose preferred lifestyle fundamentally depends on the ongoing existence of suffering.

Expand full comment

But they aren't at all following the strategy of the ANC, which was explicitly fighting for equal rights within South Africa rather than the destruction of South Africa. Hamas' stance is that they regard any deaths coming their way as good because martyrdom leads to paradise. Israel can just keep obliging them on that.

Expand full comment

That's correct

Expand full comment

Israel isn't destroyed at all. Nor would it having a bad reputation do that. North Korea has a terrible reputation, but keeps on trucking, undestroyed.

Expand full comment

I also think Hamas doesn't care about Gaza, but they aren't really consequentialist; I don't think you can really explain what they do with some clear goals and an analysis of how to meet them.

I think it is mostly the struggles of the leaders to stay in power, religious beliefs about what God wants, and hate.

I think the terrorism is mostly about hate, not some way to meet any particular political goal. They kill people because they hate them (or because they think they are expected to do something like that, and would be considered weak or something otherwise), and that's it - not some plan to hurt Israel's PR by how they will react.

I could be completely wrong, but I don't think they would end up with plans like that if they were really trying to meet a particular political goal, as opposed to mostly being insane.

Expand full comment

Yeah, OK, but I think you're missing the point by writing that it's OK for EAs to donate to a whole bunch of disparate causes because they're just less scattered than non-EAs. If they want to be -effective- altruists, a bunch of them ought to get together and decide, rationally or otherwise, which specific charities they want to support, and put their money there. It matters less which charities they are, as long as they're worthy ones among the set of worthy ones, than that the EAs make an -effective- contribution.

Expand full comment

The main problem is that philosophy is hard.

Like, most of the reasons to donate to one EA cause area versus another boil down to hard-to-resolve philosophical questions that don't really have an objectively correct answer. Is it better to save one person's life or to cure a hundred cases of blindness? How much is a chicken worth relative to a human? How important is bringing new people into existence? Is it better to save one life with certainty or save a billion lives with one-in-100-million odds? Is it better to focus on immediately solvable problems or on longer-term systemic change?
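
To illustrate with the certainty-versus-longshot question: the arithmetic itself is trivial (toy numbers below), but whether raw expected value should settle the question is exactly what people disagree about.

```python
# Toy expected-value comparison -- numbers invented for illustration.
p_certain, lives_certain = 1.0, 1
p_longshot, lives_longshot = 1e-8, 1_000_000_000   # one-in-100-million odds

ev_certain = p_certain * lives_certain      # 1 expected life saved
ev_longshot = p_longshot * lives_longshot   # 10 expected lives saved

# A strict expected-value maximizer takes the longshot (10 > 1); anyone
# who discounts tiny probabilities can reasonably refuse. The arithmetic
# is easy -- the philosophy is the hard part.
print(ev_certain, ev_longshot)
```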

I think that if you randomly select ten people in the US--or even ten people in the same city or job or club--the odds of convincing all of them to give the same answers to all of the above questions are pretty much zero, even if you sit them down and have them all argue with you and each other for hours. Things would be so much simpler if we could just solve moral philosophy, but actually *doing* that is the tricky part.

Given that a lot of people have different views on the above questions, even within the subculture of EA, it makes sense to make clusters of recommendations based on which side of each of the above questions people fall on rather than declaring a One True Cause Area.

Expand full comment

Its "philosophy is hard" combined with extreme confidence in one particular take. People don't fume at philosophers who just think philosophy is hard (which is most of them).

Which probably comes under "status grabs".

Expand full comment

Correct, which is part of the reason a lot of people get angry with EAs. On one side, they argue that there's a *correct* way to do charity - it's their schtick, what they want to be known for. On the other side, they agree that philosophy is hard and that they can't answer the big questions, so they end up donating money to shrimp awareness just in case. That's... essentially the same problem that everyone else has.

Expand full comment

I don't think that's where the anger is coming from. I've seen a lot of criticism of the form "not enough systemic change" or "longtermism bad" or "SBF bad", but I haven't seen many people argue that EAs not being able to decide on a single cause is a problem.

I'd also argue that EA has shifted "the same problem that everyone else has" into something new. Most people don't really think much about the questions I mentioned above--population ethics, animal welfare, etc.--or about comparing charities in a systematic way and finding the best ones. If the end result is that they disagree on what's important, they disagree *substantially more* than EAs do because they've barely narrowed things down. See e.g. the plot at the end of section II. EAs aren't exactly marching in lockstep, sure, but IMO they've done quite a lot of work on finding good options for someone who accepts moral claim A vs B vs C, and they also spend more time thinking about those moral claims than any other movement I know about.

Expand full comment

Even normies who put no thought into these things care about effectiveness in charities. I'm old enough to remember when United Way went through a big scandal and lost support because they were identified as spending too much on overhead and executive salaries. That was long before EA, so EA could not have been what moved the world away from an ineffective charity.

I'm not saying EAs need to pick one thing or even narrow their approach. I think Scott's right that we can only put so much money at a time into each separate question.

My criticism is that their ideological underpinning for whom they give to and why is philosophically discordant. There isn't even a relatively short list of priorities that holds it all together in any meaningful way. AI, long-term risk, x-risk, animal welfare, pain reduction, human health, human longevity (and/or reducing early death) - all potentially good things depending on your approach, but they're not nearly in the same realm. It reads like a wishlist of various people's priorities. Again, that's fine, but it doesn't point to a "correct" way to do charity. It points to people trying to figure out what's important to *them* and acting on it. Same as everyone else. Inasmuch as EA priorities align with mine, I applaud them. When they spend money on things I find silly, I don't care if they succeed, or I even resent them taking money from better options - just like everyone else.

Expand full comment