629 Comments

Regarding terrorism not working, that can't really be true, since if the blowback effect is so large as to negate any benefit, then it's big enough to harness by engaging in terrorism in the name of a group it would be beneficial to create blowback against -- e.g., maybe join Greenpeace and try to get caught trying to blow up a factory making golden rice.

I think the more accurate response is that terrorism is extremely psychologically difficult to effectuate in a way that achieves your ends, because people usually need to be hyped up to get themselves to do it, and that process tends to link them to their true cause. Also, if you did have both the kind of epistemic care to ensure you were doing good (without asking for advice that would leave a trail) and the self-control to avoid leaving a trail, you may have skills that could be more effectively leveraged elsewhere.

EDIT: In case it's not clear, I'm not suggesting terrorism is a useful way to achieve your goals. Rather, I'm pedantically insisting that the right analysis is that -- while in theory, under the free-agent paradigm we use to understand these questions, you could harness that blowback -- in practice (thankfully!!) humans are subject to psychological constraints that make them unable to spend decades living as the enemy before sacrificing themselves in a horrific act apparently aimed at their friends, or even simply to pull the trigger on the kind of targets that might help rather than those they hate.


I think false flags would work better than regular terrorism, but that they're hard to pull off - Osama can't convincingly pretend to be an anti-Islam extremist. I also think most people smart enough to do them are smart enough not to become terrorists in the first place.


I agree with the first part of that response, but surely the second part is circular. Unless you presume ineffectiveness (or personal selfishness), you can't argue that smart people wouldn't do it. It may be true, but it's not an argument that should convince anyone.


After 9/11 I (young at the time) thought that if Al Qaeda were smart they should get some sympathetic person who's totally straitlaced and has no record to commit a terrorist act, because they would totally avoid suspicion. Since Al Qaeda is presumably smart enough to think of this but it hasn't happened in all this time, I conclude that it's nearly impossible to find someone who fits the bill and is willing to actually do it. Call it smarts or whatever you like.


Right, that's the part I said above about it being psychologically very difficult. People aren't robots, and the kind of stuff needed to work someone up to such an act requires being immersed in a community that supports said decision.

But that gets to an issue with how we construct our decision-making models and what we mean by "you should do X". Ultimately, you do whatever you do, but we idealize the situation by considering a model where you choose between various actions and pretend psychological constraints don't exist.

In that model, it is true that in many situations some kind of terrorism or violence will be an effective tactic.

But this ends up confusing people, with frequently bad results, because they don't appreciate just how binding those constraints really are -- though more often it's national intelligence agencies, not EAs, making the mistake.


I agree. I've had the similar thought that if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Koch brothers wanted the Republicans to win, what they should do is infiltrate the Libertarian/Green party, run as a third-party candidate, and, being a billionaire, use their influence to cause a proper spoiler effect against the party they don't like. You probably don't even need to be a billionaire; a more moderate millionaire could do it for smaller but still important elections at the state or municipal level. As far as I know, this never happens, despite being possible in every first-past-the-post election. Since it doesn't, it must actually be rather difficult, psychologically or otherwise, to infiltrate and become a major member of a movement you hate.


"if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Kock brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like"

The problem with that approach would seem to be "Yeah, but then you have to try and get Libertarians to all pull together." Seemingly there is/was a split within the Libertarian Party; one faction won and, after tireless scheming and hard work, got their guy into power, and then at the big Libertarian Party conference, he... couldn't keep it together long enough to abide by "Don't take candy from strangers":

https://sg.news.yahoo.com/libertarian-candidate-reveals-took-edible-161423546.html

"He went on to complain about “all the egregious offenses against our freedom” perpetrated by the Trump administration between 2017 and 2021, at which point a member of the audience shouted out: “How high are you?”

Dr Rectenwald grinned and called back: “Not high enough! Hey, I’m living liberty today!”

When he was later asked about that answer by The Washington Post journalist Meryl Kornfield, Dr Rectenwald confirmed that he wasn't joking: he had taken an edible before going on stage.

“This was not some sort of a major political scandal, okay. I wasn’t found in bed with Stormy Daniels. I’m at a Libertarian Party convention. Somebody offered me something,” he said."

'Somebody gave me something'? And if that was poisoned or laced with something dangerous? 🤦‍♀️ I have to say, Trump had a point when he fired back at them:

"The presumptive Republican presidential nominee had given a speech at the convention in Washington DC on Saturday evening and was booed and jeered by the audience when he urged them to vote for him.

“Maybe you don’t want to win,” the candidate hit back. “Only do that if you want to win. If you want to lose, don’t do that. Keep getting three per cent every four years.”


I think actually personally running as a spoiler candidate is inefficient and offers poor cover, but wealthy actors who want to influence elections funding spoiler candidates does appear to be a thing that happens.


Most people probably do not want to commit violent acts against innocent people. But attacking data centers isn't that. There were plenty of normal textile workers in England who attacked machinery that threatened their way of life; they weren't a bunch of extreme kooks, and so I expect we'll see the same here. And it did almost work for those textile folks; they won some sympathy from prominent people.

Ultimately there will need to be some threat of force to stop AI, and it may come to actually using it, and a lot of people will decide that rationally they are left with nothing but direct action. Gazans etc. can appeal to people and politics, but AI will take all those decisions out of human control and normal political processes will be useless, so there may not be an alternative strategy in the end.


I don't think there's any way scattered attacks on machines could have prevented the industrial revolution.


If there were only three plants in all the world that could machine pistons precisely enough to work in steam engines, it maybe could have. The key elements of the industrial revolution were a lot more decentralized than the key elements of any present AI revolution.


Another answer is that even if people agreed to do false-flag stuff, efficient uses of it would, by definition, not be known to you unless you were part of the conspiracy.


Also, I think that you kinda have to believe that certain acts like this would be super helpful if you take the Yudkowsky story about AGI risk seriously. Now plausibly humans can't figure out which ones they are, but if the AI can't exert control over society by engineering the right kind of events then it's much less dangerous.

So I think it's worth distinguishing the question of whether we can know that a certain instance of terrorism would have a certain outcome and whether an omniscient being would ever find a good way to put it to use. I think lots of the barriers here are epistemic in nature.


If Omega gives a friendly superintelligence the choice to commit various forms of terrorism, and no other options, yes the friendly superintelligence can probably use this capability to do some good.

But for a friendly ASI with limited resources in the real world, it would be surprising if terrorism was the most effective strategy.

An unfriendly ASI would have less reason to avoid terrorism, but still likely wouldn't do it.


I think it depends a lot on how you imagine the limitations working. For instance, how hard is it to trick people into engaging in attacks which are credited to groups you want to discredit?


Could a superintelligence find some plan, consisting solely of blowing stuff up, that was better than nothing? Sure (by choosing the specifics of where, when...). Could a superintelligence find some plan consisting solely of mailing fish to people that was better than doing nothing? Also yes.

If a superintelligence has a fixed budget (say in money, bandwidth, attention, whatever), is blowing stuff up likely to be the best use of that? Are bombs a better bargain than all the other things it could buy? I don't think so.


I think it depends on what your goals are. If you think the goal of Hamas is to help the Palestinian people, it has been remarkably anti-effective in the past year. But if you think the goal is to isolate and destroy the Israeli state, it has done a lot more toward that in the past year than anyone has accomplished in decades.


Well, the default scenario seems to be that Israel slowly strangles Palestine and Palestinians. If Hamas's actions cause Israel to collapse instead, after which Palestinians could flourish (as much as third-worlders can, anyway), then there's an argument for it being effective.


I don't think *instead* or *flourish* is obviously what they particularly care about. If they do, then it's less plausible that they've been effective, and if they don't, then it's more plausible.


Israel used to have multiple states warring against it at the same time (1948, 1967, 1973). Now it's just at war with Hamas, and further away from destruction than it was warring with actual states.


I was either not yet born or not yet paying attention during the period from 1973 to 2000. The main thing I have seen is that Israel is currently more isolated and unsupported than it has been at any other time since 2000 (though it's obviously not completely isolated and unsupported). In that sense, Oct. 7 may have been successful, compared to various other activities they and others tried over the past two decades.


Everything Hamas has tried has failed, so perhaps by that standard it doesn't seem that much worse. The US is still actually supporting Israel in its fight against Hamas, but it's not like Hamas was ever willing to do anything that could get the US to favor them instead.


I do not claim that Hamas has great odds at success. Luckily, their insane dream of driving the Jews back into the sea seems unlikely to come true.

But in mid-2023, the Israel-Palestine conflict had mostly disappeared from the news. Israel was normalizing its relationship with its Arab neighbors.

Today, the conflict is in the news all the time. Iran shot a few token rockets towards Israel. Several European countries have decided that they will recognize Palestine as a state. On US campuses, where the future elite of Israel's most important ally is raised, wokes are happily chanting Hamas slogans.

Hamas knows the Palestinians will never defeat Israel on their own. They do not have a military objective, they have a social objective. The product Hamas is producing is dead Palestinian kids, killed by the IDF.

From an instrumental rationality point of view, I can't really fault their strategy. Oct-7 was meant to goad the IDF into killing Gazans. If one's utility function only had a single term, which is "the destruction of Israel", this seems like a pretty optimal strategy, one which raises the odds of Israel being destroyed this century (AI aside) by a factor of perhaps 1.5, say from 2% to 3% or so.

Of course anyone with a less monstrous utility function -- one which contains terms for dead Israelis or dead Gazans, perhaps -- would conclude that Oct-7 was tremendously net negative, but there is no accounting for utility functions.


My view is that the governments of these neighboring Arab countries are just keeping their heads down, but haven't changed their minds at all about the desirability of siding with Israel against Iran. At the end of this current war, it will fade into the past just like everything else in the past of the Middle East that one might think would prevent such an alliance.


I think the presumption here was for EA goals. Trivially, some goals are achieved by terrorism -- for instance increasing the amount of terrorism.

Regarding Hamas, I don't think that's correct. Isolate, certainly, but states, especially religious ones, often pull together when they feel isolated. Indeed, I think they succeeded in heading off the possibility that Israel simply devolves into a multi-ethnic, multi-religion democratic country like the US -- maybe even one where Palestinians are the majority. It wasn't the most likely outcome by far, but it was much more possible before these events.

The reaction (and flagrant disregard of its own foundational document regarding jurisdiction) of the ICC and the ICJ has convinced Israel and many Jews in the diaspora that they can never trust the international system to give them a fair shake and that there is a truly pressing need to have a Jewish homeland.


How do you know he wasn't a closeted anti-Islam extremist all along?


It's hard to engage in terrorism in favor of big AI companies, though, because they're already approximately getting everything they want. The situation is inherently asymmetric. "Terrorism to create blowback" would have to look like "join OpenAI and advocate for really creepy project ideas." Stuff like the ScarJo voice drama.


Good thing I support them then.


I guess you could bomb Rationalist hubs and AI Risk orgs in the name of Effective Accelerationism. (Apologies to the e/acc people for even suggesting this, they're just the only ones to really put a name on it.)


It depends on what you're trying to accomplish. Did the US do itself serious damage by overreacting to 9/11? Maybe, but how much is hard to estimate, and my one sure prediction is that if the US goes down, people will be arguing about the causes.


Right, I think it's frequently very difficult to predict the effects of actions in the long term. And that's absolutely a practical concern here. But, if we are being consistent, we need to apply that to all such interventions and be equally worried about whether advocating for X will actually bring about X.

Indeed, I do think that epistemic limitations are a strong argument that political interventions tend to have relatively low expected benefit.


Scott speculates that 99% of terrorists get caught at the “your collaborator is an undercover fed” stage. If that’s accurate to within an order of magnitude, the headlines are much more likely to read “oil executive caught plotting to commit a terrorist attack and blame it on Greenpeace” than “Greenpeace commits terrorist attack.” So the blowback effect would likely help Greenpeace rather than hurt it.

The Reichstag fire worked well for Hitler because he was the head of government at that point and controlled the investigation into the fire.


Terrorism is also negative-sum. It's totally possible for both the terrorists' cause and the cause they're opposing to be worse off for it: e.g., maybe the backlash against Greenpeace damages the environmental movement, making climate change worse, while the damage to the golden rice factory also makes vitamin A deficiency in poor countries worse. Or consider how both Al Qaeda and the US would be better off (at least by many values; I'm not 100% sure what Al Qaeda's value function is here) if 9/11 hadn't happened.


The most successful terrorist attacks I can think of in history were ones by an extremist subgroup trying to provoke a war against the war-reluctant main group they were part of, such as Franz Ferdinand's assassination by the Black Hand or the series of acts by the Black Dragon Society and the other expansionist ultranationalists that pushed Japan into Asian expansion. All of these relied on harsh reactions from the targets as a source of additional motivation and a whip to goad their unwilling allies into the conflict.


To that I can add IRA-style terrorism as guerilla warfare, trying to make staying in your country so miserable the occupier leaves. That can work.


Causing war, chaos and mayhem seems to be something terrorism is good at, yes. Another example would be the assassination of Rabin, which likely did not help the Oslo accords.


The assassination of Rabin did help the Oslo Accords - it put in power Peres who was more left-wing than Rabin, and discredited the right wing as political murderers.


In environmentalism, eco-terrorists are stunningly ineffective. I work in fossil fuels. Someone gassed the office building, causing a building-wide evacuation and getting themselves arrested -- but the impact on the company's actions and bottom line was... negligible.

I mean, I got to enjoy a nice walk outside rather than slave away on my computer. But a day later I was back at it.

The fossil fuels problem is a demand-side problem, not a supply-side problem. In 2024, the evil fossil fuel companies aren't actually trying all that hard to sell oil, because there will always be buyers as long as people need oil. In fact, they're aggressively trying to sell hydrogen, one of the no-carbon things that I think is kind of dumb logistically and has zero customers.

If you wanted to move the dial, you'd protest importing cars that run on petrol, and counter-protest the other environmentalists stopping the construction of new solar farms. Terrorism aimed at the supply side does nothing.


Yes, as I said, it is very hard to implement effective terrorism, because the kinds of actions that would make for effective terrorism aren't the kinds of things that inspire people to terrorism. In practice, terrorism tends to attack symbolic targets and cause backlash, because people need to be emotionally hyped up to commit the act.

I'm just being pedantic that the correct account isn't that it's unworkable in theory, but that you can't really behave like the idealized actor who could live a full life pretending to be their enemy and then commit horrific acts in their name.

For instance, the ideal kind of terrorism to achieve environmental ends is probably working your way up the hierarchy in an oil company, making sure you have heavy financial interests in that company's stock, and then assassinating Greta Thunberg in a way that tries -- but deliberately fails -- to look like an accident.


How on earth would a failed assassination false flag thing help??? It wouldn't cause any sanctions on a company, just on that specific executive/board which you can just replace with new people entirely. Also, some Boeing whistleblowers have died under suspicious circumstances and guess what, we're still flying on planes, aren't we?

For this specific industry, the problem is that you can't try to boycott fossil fuels at the current state. It's like trying to boycott Amazon Web Services. Too much vital infrastructure relies on it right now.

Any attempts to "boycott" fossil fuels tend to be geopolitics, e.g. China refusing to buy Australian coal for a bit back in 2020-2021, and all that did was cause blackouts over there. The attempted sanctions on Russian oil and gas went poorly because the timeframes are way too short.

But I suppose the only way you could force massive investment into switching over is... if you're using approximately all of it in a war and you can't get any for geopolitical reasons? Which would be a pretty crappy outcome that guarantees civilian suffering and unnecessary cost during the transition (energy rationing generally does not have good outcomes).


"Works" in the sense of "changing people's attitudes towards animals" doesn't happen. In that sense "terrorism doesn't work" is correct.

"Works" in the sense of "makes people think the authorities are ineffective" can work, but mostly through random violence.

Which is why, say, Lenin succeeded while PETA hasn't.

If your goal is "burn it all down", then attempting to burn it all down does work towards your goal. It is much easier to push Humpty-Dumpty off a wall than to put him back together again afterwards.

note: I am not in favor of terrorism at all.


Like neonazis bombing the Oktoberfest, of all places?

https://en.wikipedia.org/wiki/Oktoberfest_bombing


Any conception of the Good which asks us to reach some difficult standard will be attacked by critics who find the standard hard to reach and respond out of jealousy or anger.

In my opinion, EA belongs in the pantheon of great faiths of the world, which find themselves continually subjected to scorn and derision by persons who claim to be motivated by truth but in reality are animated more by petty jealousy than by that magnanimous spirit which animates all men who aspire to greatness.

Welcome to the club.

Scott, your commitment to truth and generosity have been in part responsible for my own dedication to God, which I would see simply as a name for that which is ultimately true and real. Please see these people for what they are: animated by the spirit of Cain.


I like the phrase, "animated by the spirit of Cain," for the phenomenon.


"...and this is why we need to end Shrimp Suffering and stop Skynet!!!"

EA is a Religion -- you said it yourself. When people reject it, they typically don't reject the concept of "The Good" but *your conception* of "The Good". To claim that anyone who rejects it is "animated by the spirit of Cain and petty jealousy" doesn't help swing them to your side.


The statement was only intended for Scott. I think anyone who believes in the concept of The Good would be wiser to focus on their own capacity for foolishness rather than that of other people, which is both easy to do and useless, unless it's to help better understand the human capacity for self-deception.


I'm not going to say the argument -- that if EA were good, it would improve charitable giving in Boston and SF -- is the worst argument I've ever seen, because there's so much competition. It's up there, though.

Is there a term for straw-man argument fantasia? This isn't just perfectionism, it's a Gish gallop of invented impossible standards.


I actually think there’s something good about the argument. It’s probably looking at too short a timescale, but Effective Altruists really do hope that the movement will eventually have some effect on its goals above pre-existing trend, and noting that this hasn’t happened (either at the level of objective malaria case rates falling faster for more time, or at the level of more people giving more money) shows that the movement isn’t (yet) having the impact it wants.


But does his data even demonstrate it? Wouldn't a better measure be, like, donations to GiveWell or things like that?


Based on their own metrics, you wouldn't look at donations but at the effects of those donations. A billion dollars that doesn't change conditions in reality is no better than a million that makes no change.

It sounds like Stone's criticism might work either way - either there are no measurable downstream effects, in which case EAs are not very effective, or their numbers are too small to be effective, in which case they are not very effective.

In other words, at that point you've got a small number of people making an individually meaningful (maybe?) change that amounts to a rounding error on the problems they are trying to solve.


GiveWell, as I understand it, disburses money to multiple charities based on effectiveness principles, so it's kind of an EA clearinghouse. Is it true that there are no measurable downstream effects of the money they receive, which I believe has grown recently? In this post Scott suggests that they've saved thousands of lives: https://www.astralcodexten.com/p/in-continued-defense-of-effective


I'm aware of Scott's previous post, and don't have a particular argument against his numbers. On the other hand, 61 million people died last year. No number Scott shared comes close to a fraction of that.

Stone's argument that you can't see the effects of EA even looking at the two places where they are most influential says that EA is too small to be meaningful, even if everything worked as they hope. 200,000 people saved is nothing.

I'm not saying EAs do nothing useful, or that giving to charity is pointless. I'm saying that EAs do too little to justify their own propaganda and the way they speak about it, which seems to be more Stone's point than anything else.

I like that EA exists and want to see them succeed. I think they've done some positive good, and promoting GiveWell is great. But it's all small time stuff. They can't replace the world's charities, and my speculation is that if they tried they would have the same issues with overly large bureaucracies and disjointed priorities that all large organizations have. That's a big part of the criticism about shrimp charities and AI - even being a small and relatively nimble group, they are already veering into what most people would consider pet projects of those giving or working in the organizations, rather than objectively important life-saving approaches.


Well, I definitely agree that the Charity/Death matchup was another blowout last year. We'll get 'em this year!

More seriously, I'm inclined to reject in the strongest possible terms the idea that 200,000 people saved is nothing, and while I didn't read Stone's piece I have my doubts that he used that particular argument. But maybe I could be persuaded in that direction if I knew what alternative Stone was pointing to that's superior.


It feels like "Oh, they do mathy stuff? Well, I can do mathy stuff too, look!" which then fails because they don't have enough practice doing mathy stuff, or enough practice noticing the problems in their own arguments, so they end up making a very stupid argument.


I think Lyman is working on conflict theory and you're working on mistake theory. One simple question for Lyman: what would change his mind about EAs? I don't think anything would.


His social circle changing their minds about it. He wouldn't generate that answer himself, but would probably agree that it's true in principle, except impossible in reality.


Sounds about right


Where is the conflict theory in his argument?


Munecat (you know, that YouTube chick) has set her sights on EA. That can't be good for you, can it? Not the usual kind of opposition that you can afford to be excited to face. To what degree are you scared?


"Munecat (you know, that YouTube chick)"

I don't know. Who is this famous-in-her-own-backyard person?


I guess it doesn't really matter. She'll never be as famous or as EFFECTIVE (lol) as SBF (Sam Bankman-Fried)


>you know, that YouTube chick

I’m terminally online and I had no idea who this was until I looked her up.

From a quick glance, her channel seems targeted towards heavily left-leaning 20-somethings, the same demographic as Hasan's. I can tell you from experience that most of that demographic already hates EA because of its association with Bay Area techbros.

If I were Scott, I'd be about as worried about her coming out against EA as I would be if Andrew Tate did; the overlap between people who are interested in EA and people who watch either of them is ~0.


So you agree EA is a cult


No, niche internet microcelebrities are more cultlike than people who don't care about them.


You're famous only for SBF

Also, I see that you're a different person, but you just admitted that you don't care, although you claim to want to make the world better. That makes you either evil, or extremely ineffective


Who do you think you're talking to? I'm not famous at all. When did I claim anything about wanting to make the world better?


I meant the EA community when I said "you". Assumed you were part of it given the context of the thread


You attributed to Nalthis the belief that EA is a cult, based on a tendentious reading of his words. (If two groups are disjoint, that doesn't tell you *which* group is the cult, so it's not reasonable to claim his words entail a belief that EA is a cult. But I think you know that.)

Now TGGP just "admitted" EAs "don't care", according to you. He "admitted" that EAs don't care about, in his words, "niche internet microcelebrities", which is true of almost everyone, and is a totally different kind of "not caring" than the kind that would hinder your effectiveness at large-scale goals. I'm sure you know that too, but still you generalize from one to the other.

A bit less sophistry please.


Accuse me of trolling and I might admit it. I'm not even gonna look up what you're accusing me of

A LOT of people watch Andrew Tate. You can call his followers a lot of things, but the usual definition of "cult" does not apply to such a large group. Who endorses EA these days? Post-SBF, even Elon and his followers have moved on from you. So yeah, picking one group and calling it the "cult" is appropriate

Is it a totally different kind of "not caring"? I don't think so. In my opinion (you're welcome to try to disagree), the only kind of "large scale goal" that is *not* unambiguously bad is spreading a message or your own version of the "good news" or your own version of "the truth". I think any individual who has any other "large scale goals" is either delusional or just plain evil. Hitler had large scale goals, for example

So if you don't care about convincing munecat or her followers, you're evil in one of 2 ways: you either think she (and/or her welfare) is worth sacrificing, or you're trying to be "altruistic" toward her without her consent (like force-feeding her). So yeah, both your meanings of not caring are pretty much synonymous to me


Philosophy Tube seems to make the same genre of content as Munecat and already did a video on EA. PT is a larger channel, and nothing came of that.

Speaking of which, has anyone seen that video? Is it any good?


It is not especially good. I'm generally a pretty big fan of PhilosophyTube but felt like that video really missed the mark. PT often takes a sort of soft Marxist / anticapitalist conception of the good as a given, and instead of concluding that EA is people trying to do good by a different metric than hers, tries really hard to suggest that EA is more about status and prestige and signaling and personal power, and is intellectually dishonest because it isn't supporting the things she thinks it should.


Philosophy Tube is rather weak compared to munecat. But yeah, EA IS about status and prestige and signaling


For all of us ignorant of munecat, why is PT weak in comparison?


I don't think EA is about status, prestige, and signaling, but even if it is, I don't even care as long as they actually drive money to important charities, and actually help the world.


How naive! Let's list some people who have endeavored to help the world: Sam Bankman-Fried, Adolf Hitler, Joseph Stalin, Mao Zedong, every single leader of the Church of Scientology, Elon Musk, the list goes on... Now that I've said something that I believe, let me ask you something: list some charities that you think are important


This is why you don't just look at the expressed intentions, nor at the actual intentions, but at what they are doing and why.

If you think they are making important mistakes in their analysis of what to do to help, you will have to debate those; you can't just say "it is just about status and prestige" and expect that to be an important argument against it, for two reasons:

- Just stating stuff like that, without argument, will not convince anybody not already convinced.

- The reason something is done doesn't tell us whether it is a good thing to do or not (a lot of people honestly trying to help did a lot of wrong, and the opposite is also true).

It is quite easy to explain where each person on your list did/does make the world worse (even if some people are still delusional about Musk); just do the same thing with EA (but first inform yourself about what they are really doing -- I could be wrong, but I think it is possible you aren't really well-informed on it).


…what is the kind of opposition you *can* be excited to face?

Look, I’m glad that I’ve never had a YouTuber try to run me out of town. Or even go after my job, or an adjacent philosophical movement. But I don’t feel like “excited” or “scared” are really the right terms here.


> …what is the kind of opposition you *can* be excited to face?

Intellectually honest, creative, constructive, so on. Like, if you believe in debate / collaborative truthseeking / etc., opposition / disagreement is an engine to produce more knowledge.


I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kinds of norms about when not to share their moral theory that we've developed for religion, vegetarianism, etc.

I mean, yes, EA is full of a lot of people with half-assed philosophical views. Heck, I'm sure many would put me into that bucket, and my PhD involved a substantial philosophy component. But that's much more true of the donors to regular charities, especially religious ones. The number of people who actually know what's in their own religious texts is shockingly low.

But thanks to centuries of conflict we have strong norms about not sharing those views in certain ways.


I think this is right. Across many arguments on the topic, something I've seen many EA critics say is, "to YOU donating to the local art museum or your alma mater may be less 'effective' than donating bed nets, but that's just your judgment. There's no objectively true measure of effectiveness." To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds.

If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me but it seems like it's based on some pretty durable norms.

Also, this is another case where the most important part may be the part everyone agrees on but lots of people don't do, namely donate to charity at all. It's not fun to argue about whether one should donate if one can, since almost everyone agrees they should. It's more fun to argue about what donations are "effective" or whether that's even measurable.


"To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds."

Compare that with all the objections about "imposing your religion" when it comes to the public square and topics such as abortion. Yes, if I could convert everyone to accepting the Catholic theology around sex and reproduction, then we could all agree on the moral value of embryos as human persons. But that ain't gonna happen. Ditto with "if everyone just accepts the EA measure of effectiveness".


Well, if the standard is "everyone," I agree that it ain't gonna happen. But is that an objection to trying to convince people on the margin? Because that does sometimes work!


Go forth and save souls, how can I object to that?


> If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me

I feel that's perfectly consistent: in the former case you are essentially appealing, "hey, according to your moral norms (as far as you claim), you should donate", and then the person reflects on that and agrees (or disagrees); but in the latter case you'd be saying "according to your moral norms you think that you should donate to X, but according to my moral norms Y is better", which is... different. It is generally accepted to point out hypocrisy and align words with deeds, but it's generally not accepted to demand that someone change their reasonable-but-different moral priorities unless they're violating some taboos.


I think at this point it depends on how one does it. However, I don't think this necessarily entails pressuring someone to change their moral norms. I think there are very few people whose moral norms don't take saving lives as one of the highest causes one can contribute to. Suggesting that they can achieve that goal better is often taken as helpful rather than preachy; at any rate that's how I took it.


I think the issue is more that it's not really practical to do this well.

The problem is that we can either express approval or disapproval, and in an ideal situation we would approve of all charitable donations but just approve less of the less effective charity. Unfortunately, in practice, people don't really know how much you would have approved had they made the other donation, so often the only way to convey the message sounds like passive-aggressive criticism: "great, but you could have..."


The "ideology and movement" distinction and trying to be a big tent probably contributes to this issue IMO. EA has a distinct culture that is incredibly elitist and quite off-putting to "normies," but tries to maintain this whole thing about just meaning "doing good better".

So is EA simply "doing good better" by any means at all, or is it trying to claim that blowing hundreds of millions on criminal justice reform and X amount on shrimp suffering are among the most effective possible causes, and also maybe you should go vegan and donate a kidney? Scott showed the most self-awareness on this in his review of WWOTF (https://www.astralcodexten.com/p/book-review-what-we-owe-the-future), ctrl+f "seagulls", and has not returned to such clarity in any of his EA posts since. Clearly, EA isn't *just* an idea; there's a whole lot of cultural assumptions smuggled in.


It's not the elitism that bothers people. It's the lack of social skills that results in the impression of being looked down on.

People fucking love elites; big celebrities are always sitting on boards for charities or whatever, and people love it. But they are careful not to be seen as critical of people without that status.


I actually think the problem is not sufficiently distinguishing the movement from the multiple different related ideas in many of these later posts.

I agree the idea isn't merely "it's important to do good effectively"; I think that misses some key elements. I think the minimal EA thesis can be summarized as something like:

within the range of charitable interventions widely seen as good [1], the desirability of those interventions can be accurately compared by summing over the individual benefits, and when you actually do the math it often reveals huge benefits that would otherwise be overlooked.

That view is more controversial than one might think, but the real controversy comes from the other part of the standard EA view: the belief that we should therefore allocate social credit according to the efficacy of someone's charitable giving.

Unfortunately, EA types tend not to be great with social skills, so instead of actually conveying more approval for more effective giving, what they often manage to do is convey disapproval of ineffective giving, which upsets people. Not to mention many people just dislike the aesthetics of the movement, the same way many greens really dislike the aesthetics of nuclear (and vice versa), prior to any discussion of the policy.

Anyway, long story short, it's better to disentangle all these different aspects.

--

1: so consensual interventions which help currently living participants without direct/salient harm to anyone or other weird defeaters.


I do think those norms have developed because they are important for effectiveness. Vegetarians have learned that preachiness doesn’t actually work.


That's true, but at some point vegetarians do have to advocate for their ideas, and I'm sure they find (as I used to) that just that by itself can be perceived as preachy by people who don't want to be confronted with the ideas no matter how they're packaged. I think some of that is going on with EA too.


Sure, advocacy is admirable and useful. Vegans get in trouble when they badly misread the room and try a stunt like pledging never to sit at the same table as non-vegans. They tried something like that not long ago. It didn't play well, as anyone outside their orbit could have easily predicted. You have to meet people where they are, not where your circle of close friends happens to be.


That's certainly true, but let's be honest: most EAs are in it for the discussion and the feeling of being more consistent, not the altruism.

And it's wonderful we can convert the former into helping people, but I don't think it's possible for the social movement EA to ever act like vegetarians, because the motivation isn't deep emotional concern with suffering (for some it is, and the rest of us do care) but feeling good about ourselves for being consistent. And this does create extra irritation, because people feel they are being looked down on by individuals who are doing the same thing they are -- the few EA saints don't bother people.

Hence the need for a separate term like "efficient giving" or whatever to solicit donations from people who find the social movement unappealing.

--

And that's just the way of altruistic groups. I care about global warming, but I find the aesthetics of most environmental groups repellent. The best you can often do is create a range of aesthetics that work for the same objectives.


The quiet altruist is admirable in many ways, but there do need to be people who evangelize in some fashion. I don't have leadership skills and am prone to arrogance, so perhaps quiet altruism makes the most sense for me to aspire to. But there are people in EA with the right qualities for modest evangelizing.


Ohh absolutely, but most EAs don't have them, and lots of us are into EA because we like the aesthetic and debating this kind of stuff, and not everyone is going to realize when they need to turn that off.


True. I've been thinking about a related issue regarding comparative advantage. For example, going vegan probably isn't worth it for people with a high opportunity cost of time and energy, but may be for those with a low OC. But that sort of reasoning is conspicuously vulnerable to abuse (and resentment), because it's basically saying the powerful people get to do the fun stuff and the masses should eat bad-tasting food.


I think that's where it's important to separate the question of what is good to do from what is good to criticize/praise people for doing.

Also, the powerful people likely have more impact, even per dollar, so they likely have more obligations.


"Wokeness" / "social justice" gained a lot of ground through preachiness but also produced a backlash. I'd guess the milder forms of their preachiness were quite effective on net, but the extreme forms were very counterproductive.


Good point, though sometimes very mild preachiness/disapproval can be helpful ("ohh, you don't use cruelty-free eggs?"), but it's hard.

The bigger issue EA faces is that even when it's not trying to be preachy, it gets perceived as such. A vegetarian can just explain their POV when asked and won't be perceived as judgmental if they restrict themselves to "I" statements.

Now imagine the same convo with an EA. Something like the third question will be why they think their charities are more effective, and they have to give a statement that pretty explicitly compares what they do with what others are doing. Also, EA's internal deliberations get perceived as such.


I think this is definitely part of the puzzle. I think another part of the puzzle is that the EA culture is quite weird in a way that seems to drive a lot people to distraction. As Scott notes, EA is a piece of social technology among other things. It has an extremely distinct vibe. Some people are all-in on the vibe, some (like me) have what is mostly an affectionate tolerance for it, and some people seem to really, really loathe it.

Unfortunately, I think the notion that EA should avoid tainting itself by broadening its appeal is wrong on the merits, and that to be maximally effective EA absolutely should moderate and mainstream itself. The resistance to this idea feels mostly like a cope by people who couldn't mainstream themselves if they wanted to -- it's hard to choose not be weird if you are in fact weird. Every time I read the EA forums (which I basically stopped doing because they are exhausting), I find myself wondering if people are just using the phrase "epistemic status" at this point as a sort of normie-repellent.

If this sounds like an attack on EA, it's not meant to be. I find the vituperation in arguments like Stone's to be odd and unfortunate, but also worth understanding.


*Can* EA go mainstream without being philosophically and compositionally compromised? Organizations that alter their philosophy to appeal to more people tend to end up endorsing the same things as all other mainstream organizations. And gaining members faster than the culture can propagate is going to lead to problems of takeover.


I think so, absolutely, yes. Scott consistently presents a mainstreamed version of it here: donate 10% of your income and give to charities that have a proven impact on people's lives are both somewhat radical and also reasonably unweird concepts at the core of EA.

Note also that I don't think EA has to jettison all of the weird bits, such as the esoteric cause evaluation. I just think they need to be willing to tailor their message to the audience and -- this is probably the important bit -- tolerate the tailoring.


EA cannot go mainstream, but not for the reasons you listed.

It's already difficult to disburse the amount of money one Dustin Moskovitz has in ways that are consistent with how to analyze the evidence. I think, of the existing charities, there can probably be around 10x or at most 100x the current amount of donations before we are wildly out of distribution (and it's only that high because I'm treating GiveDirectly as a money hole).

It would certainly be nice if everyone donated to global poverty charities, but at that scale, the type of intervention you'd be thinking about has to start including things like "funding fundamental research" or "scaling up development economics as a field".

This is something I've been worried about for years, that there aren't big enough money holes for more charity!


Maybe this is where the watering down part of mainstreaming EA comes in. Sure, we might tap out early on Dustin Moskovitz-grade charities. But here are two somewhat different questions: would it be a net benefit to the world if annual charitable giving was 10% higher than it is today? And is current giving inefficient enough that we could raise the good done by at least 10% if resources were better allocated? I pulled the 10% figures out of nowhere, but the point is just that if you believe the world would be a better place with more and better charity, then that is an argument for mainstreaming EA. Diehard EAists might call this a very weak form of EA, and they'd be right. But scale matters.


I think at scale, considerations I'd consider facile now, like "is charity really the most efficient thing", would become real live players. For example, I'm not sure throwing 100x the amount of money into, say, SF's homelessness problem would help; it might literally be better to burn the money or spend it on video games! If you believe the thesis of Bryan Caplan's Myth of the Rational Voter -- that self-interested voting would be better than what we have now, because at least one person benefits for sure in that circumstance, as opposed to programs now that benefit no one -- you can imagine other similar signaling dynamics starting to dominate. Not even going to start on the sheer amount of adversarial selection that would set in, where honest charities would start losing out to outright fraudulent ones, and so on.

I don't think I know when this would start happening, but I'd lower-bound it at at least third-world NGO scale. I'd be surprised if the upper bound were above 10% of first-world disposable income.


Neither of these is radical! It's called a tithe! Giving 10% of your income to your Church (i.e., THE charity that had a proven impact on people's lives) has been the standard for ~1500 years!


I think they lose the vast majority of people at the "altruism" part (esp. with the way it's usually operationalized), and criticisms around "effectiveness" are post hoc.


If I understand what you mean, I think I partly agree. I think EA culture makes some assumptions about utilitarian consequentialism, atheism, animal qualia, and so forth, which aren't strictly necessary for the "effectiveness" side to be useful. If I agree with them about human suffering but not animal suffering, I can use their freely-provided resources to help effectively reduce human suffering, and not worry about the parts I don't agree with. And I think that's great.

There's a minor worry that by associating with kind and reasonable people who have a different set of values, I become more likely to adopt those values, but a) that's a risk I'm willing to take, and b) maybe they're right. (Or who knows, I might persuade some of them.)

I do think the "effectiveness" side actually rubs people the wrong way, though, and isn't simply a post hoc complaint. There's a sort of charitable-industrial complex (not really "industrial", but if I replace that word I don't think the reference gets through), which seems to derive, stereotypically, from upper-class society ladies. It's about being seen to "do good", and spending an appropriate amount of time and money on "doing good", and it's all interwoven with class status and signaling and fitting in, and criticism of someone else's choice of charity is a social attack straight out of "Mean Girls". ("She's clearly not one of us; choosing that area is so déclassé.") And there's a bit of practicality there, in that membership numbers are used as a marker of success. But there's also an incentive to contribute little bits to lots of groups, covering everyone's pet causes, to reassure your peers that they have sufficient status that you're willing to adopt their cause as one of yours. And I think this is an upper-class habit that's trickled down to all us temporarily embarrassed millionaires. And when someone criticizes a charity as being ineffective, it's very much taken as a status play. Who are these nerds, to tell me what's important! It's a power move, demanding attention and effort, every bit as much as calling someone "privileged".


>"If I agree with them about human suffering..."

This is the actual crux for me: I don't. There's a quote from Serenity that conveys the sentiment perfectly: "I look out for me and mine. That don't include you 'less I conjure it does."

To the extent that the society ladies et al. behave as you describe, I think they're working from a similar foundation. Being *seen* to be charitable is necessary to maintain their position and therefore lifestyle but that's all they actually care about, not the cause itself. Criticism of altruism per se would be a defection from the shared fiction and therefore punished socially, but effectiveness is a safe target.


I've got some sort of scaling function, but I don't think it reaches 0. Human suffering elsewhere is still meaningful to me.


This is where distinctions between the idea and the social movement are important.

Notice that GiveWell doesn't mention EA on its main page, and I think it would probably be good if we came up with a term like "efficient giving" to capture the narrow idea that we can and should sum over the benefits provided per dollar, splitting it away from the social context.


Your theory doesn't seem to explain the overwhelming cultural success of "social justice"/wokeness. Plenty of people hate it, sure, but nobody becomes big without attracting haters.


Why would it? My theory correctly predicts that lots of people will feel intense dislike for both EA and wokeness because they feel negatively judged. Obviously, things that upset many people can be popular as well -- and I'm not advancing any theory about which movements/views become popular.


I'm mostly objecting to the part about "we have strong norms about not sharing those views in certain ways". Clearly the norms aren't strong enough to prevent some movements with aggressive approaches from succeeding.


Ok, more accurately, what I should have said is that we have strong norms under which sharing those views in certain ways is seen as an attack on, or disrespect towards, those who don't hold them. I presumed that would be clear from context, but that's usually a bad thing to assume.


>I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kind of norms about when not to share their moral theory that we've developed for religion, vegetarianism etc...

Interesting! I'm outside the debate (not an altruist, not a utilitarian, not an egalitarian), but haven't negatively reacted to EA. I just view it as outside the areas I'm interested in. Perhaps this is because I've never bumped into EA "in the wild", so I've never been preached at nonconsensually by an EAer, only encountering EA when I choose to read about it (e.g. this post). So, basically I haven't run into the norm violations.

Expand full comment

Presumably the reasoning for the hypothetical anti-AI terrorism would be less that bombing a datacenter causes that much damage in itself than that causing a number of deaths would dissuade people from becoming AI researchers and so on, since it demonstrates there's a target on their back.

Of course there have been a large number of terrorist movements (all of them?) that have made the same calculation - "Our opponents are weaklings and wussies, we'll just kill a few of them and the rest scatter!" - and have turned out to be spectacularly wrong, but it does need to be noted that at least the associated rationalist movement tends to strongly signal that they really, really fear death more than the rest of the population and would go to great lengths to avoid it.

Expand full comment

The rationalist movement has established that they take existential risks seriously, however remote, and are willing to deal with them in a deliberate, rigorous, proactive manner. Someone who bombs datacenters, with the explicit subgoal of killing rationalist sympathizers, is effectively jumping up and down shouting "hey, look at me, I'm an existential threat to people and things you care about! What are you, chicken?" Historically speaking, that tends to harden the target group's resolve. https://acoup.blog/2022/10/21/collections-strategic-airpower-101/

Expand full comment

Yes, it would probably end up hardening it, I just mentioned one reason *why* our hypothetical terrorists might end up believing it works.

Expand full comment

Scott's quote of Stone includes the line "waiting until a time when workers have all gone" as part of his bomb plan, which seems to exclude your interpretation of his reasoning.

Expand full comment

What is it about EA that seems to attract scorn? People seem to generate spurious reasons to say it is somehow bad - not merely no more effective than general charitable giving, but actually bad - worse than not giving.

There appears to be some feature that makes it enemies in a way I don't get, even if I thought it was no better than giving to the college football team.

The nearest I can find is that people think it is smug and that it implies other altruism is not effective, but that would apply to any attempt to research what works best, in virtually any arena of activity.

Expand full comment

"3. ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!"

This is an example of the kind of thing that bothers people. If this isn't condescending, I don't know what is. I'll grant you, this isn't any more condescending than Protestant Sunday sermons or identity politics nagging. But there's a heavy backlash against those things, too.

Expand full comment

Does it? Bother people? What kind of people?

Imagine you have a friend who loves talking about exercise, reads all kinds of exercise advice, blogs about the benefits of exercise, but, you know, doesn’t. Exercise.

And you like keep gently asking, so what is like your favorite exercise? And the friend talks about his favorite exercise that he read about last month and it’s awesome.

But he hasn’t. Done it.

At some point… you may get exasperated and tell your friend that maybe:

“ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!"

Expand full comment

The day I see EA boycotting the Met Gala, which is the most stupendously conspicuous waste of time and effort disguised as 'charity', then I'll take them seriously as doing things, rather than "hey what about lawfare against chicken farming?"

Expand full comment

More than one thing can be true at the same time. Yes, the Met Gala is stupid; yes, actually donating to charity as opposed to only writing about donating to charity is good.

Expand full comment

Do you mean boycotting or protesting?

I don't think there're EA'ers at the Met Gala, so I think they (we?) are already boycotting.

Expand full comment

So… mission accomplished? EA is already about maximally uninvolved with the Met Gala, to the point where even the term “boycotting” fits oddly due to there not having been any involvement to begin with.

Expand full comment
May 30·edited May 30

That's what I mean; as I said, to me the Met Gala is ludicrous in its sheer wastefulness of money on gowns, 'themes' and publicity. If EA is going to lecture people about the best use of money, then getting out there in public and making statements and even showing up to protest would be *something*.

But no, "Jimmy-Bob and Lucy-Mae tithe to their local church, what maroons" is about the height of it with regard to ordinary people. There's no cost involved, of the type that "Uh, so maybe some of our deep pocket donors go to the Met Gala" would involve.

It's like Alexandria Ocasio-Cortez showing up with that "Tax The Rich" dress - pointless signalling where she is in no way, shape or form even the ghost of a threat to the status quo. I'd respect EA more if they worried less about consciousness in shrimp and more about "the humans in our town". Once they started hob-nobbing with the political establishment (both Republican and Democrat) then they became the equivalent of AOC and her dress:

https://www.latimes.com/entertainment-arts/story/2021-09-14/met-gala-2021-aoc-tax-the-rich-dress

But that's because I'm old-school Jim Larkin type labour representation, not modern DSA type labour representation where they have little magazines of peerless ideological purity 😁

https://en.wikipedia.org/wiki/James_Larkin#/media/File:James_Larkin_O'Connell_Street.jpg

"Today a statue of "Big Jim" stands on O'Connell Street in Dublin. Completed by Oisín Kelly, and unveiled in 1979, the inscription on the front of the monument is an extract in French, Irish and English from one of his famous speeches:

Les grands ne sont grands que parce que nous sommes à genoux: Levons-nous.

Ní uasal aon uasal ach sinne bheith íseal: Éirímis.

The great appear great because we are on our knees: Let us rise."

Expand full comment

"Go protest this particular ineffective charity!" sounds like the kind of thing you'd do if you cared more about signaling effectiveness than actually being effective. There are much better uses of my time.

Expand full comment

Someone needs to do a study on the effectiveness of boycotting the Met Gala.

Expand full comment
May 30·edited May 30

I've never been to the Gala and I've also never donated to the Met, so boycotting is 100% effective as far as I'm concerned.

Expand full comment

But how do we know that studying the effectiveness of the boycott will be... effective? I think we need a study!

Expand full comment

The normal kind. People generally vaguely agree that charity is good, but don't consider themselves obliged to systematically do it to a definite standard, so strident proselytizing in this direction (by anybody, but even more so by dubious Bay Area types) makes them uncomfortable, because they have no principled objection, and yet don't want to contemplate being morally delinquent.

Expand full comment

I think this "actually do them" sounds super-strident when taken out of context. Yes, if one just yells this at random people, it would be horrible. But here the context is that there's a system that actually cajoles people to actually donate, not just ruminate. As an explanation of what the system does, it's ok.

To be clear, I'm neither an EA nor particularly like the thing. But this particular thing is not why.

Expand full comment

Well, the very name "effective altruist" contains a not-particularly-veiled insult towards anybody who doesn't share its implied precepts. Which is a criticism that EA very much is on board with, but institutional inertia makes the name basically impossible to change, unfortunately for them.

Expand full comment

Yep, the name is... let's not go there :)

They should change it though. Of course the best time to change the name was, like, 10 years ago. But the second best time is now.

But - not my fight, I'm here for Scott's writing and exposure to interesting stuff.

Expand full comment

If I did slip up and say this, I'd hopefully apologize later for acting like a bad friend. It's not my role as a friend to bark orders about his personal choices. That's what girlfriends and mothers are for.

Expand full comment

The thing to understand with that is that it's a self-directed psychological technique among rationalists to browbeat themselves into taking rational steps. Thinking deliberately and carefully, as well as acting on it, are very challenging things, and so this sudden flurry of rhetorical punches is an intentional method to discipline their/our thought processes where casual or even sophisticated intentionality just won't break through our everyday selfish incentives.

Expand full comment

Not strictly a rationalist thing. Protestants and Islamists also flagellate themselves for failing to meet some absurdly high standard.

Expand full comment

Yeah. Scrupulosity is a religious concept; then some in the rationalsphere started talking as if they had invented the notion, or at least had just discovered this amazing new psychological term.

Some people have had these kind of ideas before, in the vast expanse of history!

Expand full comment

To me this makes them seem the opposite of condescending. People hate the smug intellectual elite who write essays from their ivory towers and won't get off their ass and actually help anyone. A movement that tells its members, "no, you put your money/effort where your mouth is and actually help people" feels a lot more down-to-Earth and humble.

Expand full comment

Because the people usually associated with it in public consciousness are massive weirdoes. SBF was a massive weirdo, in addition to being a criminal. Now it’s often mentioned in connection to the Collinses, who are massive weirdoes.

Expand full comment

EA as principle is a natural idea. EA as practice (finance/tech bros patting themselves on the back for putting 5% of their income into AI alignment research) is a different matter.

I think you're bang on w/r/t the smugness issue, but I think it's more than 'trying to do better', it's 'convinced they're doing better'. Why I sometimes roll my eyes: a) proponents' tendency to ignore/benefit from systemic issues ("take that Palantir job as long as you're donating to effective causes!"); b) the movement's tendency to get mired in lowest-common-denominator quantitative discussions (the bikeshed problem for STEM nerds)

Expand full comment

People --normies anyway-- don't like status grabs. Or things that look like status grabs.

Expand full comment

I think because (1) it is very tied in, in its roots and foundation, to a small sub-set of people: the Bay Area rationalists, Oxford philosophy dons and rich Silicon Valley liberals (2) when it was about things like bed nets, people could agree that it was indeed doing good, but then it moved on to (3) pet hobby horses, such as "do shrimp feel pain and thus should we ban fish farming" and AI, which until the big commercial corporations got their mitts on it, was seen as an SF notion alone.

Things like the Virginia election (meddling in politics and falling on their face massively, in a way which even idiots like me forecast would happen) and of course He Who Should Not Be Named Save By His Initials certainly didn't help the public perception. It was taken as smug condescension to say (or seem to say) "You, poor ignorant fool, may *think* you are doing good by giving to charity, but *we* are doing it better and indeed doing it right, in the only way that's right, and we don't give a damn about your piddly little 'let's feed the poor people in my town' concerns. Our morality is so unimpeachably superior that we are only concerned with strangers hundreds and thousands of miles away".

People may be willing to accept "okay I do it my way, you do it your way" but they don't like "you're dumb and even bigoted for doing it your way, our way is the only right way".

Expand full comment

These are all the same problems religion suffers from, but the odd ideas problem is even worse because there's no central authority to proclaim that shrimp welfare is out of bounds, please stop talking about it.

For what it's worth, I would take SBF over the Spanish Inquisition or religious wars. Obviously, these are different times, so it's not an apples to apples comparison.

But anyway, if you're against the Met Gala and for feeding poor people, you're already on board with trying to make sure your giving has a positive effect, which in my book is the main point.

Expand full comment
May 30·edited May 30

I personally think altruism, whether effective or not, is usually well intentioned. So I like EA. But I think EA attracts criticism more than other charitable causes because it appears to cast judgment on some other charitable giving as “ineffective.” When someone earns $40k a year and gives $40 to the local little league where their kid plays, they resent being told they made an efficiency error or they could’ve done better. Communities traditionally depend on this ineffective altruism and EA kind of indicates to typically blue-collar donors that what felt good and selfless was actually in some sense a dumb mistake. I’m sure EA doesn’t intend this.

Relatedly, I think names are important (many detractors of groups don’t know much about the group besides the name itself), and the name Effective Altruism unintentionally suggests that non-EA altruism is ineffective (dumb). If you belonged to a group who called themselves the Effective Christians, you’d probably receive pushback from the newly downgraded Ineffective Christians, regardless of your good intentions. A name such as GiveWell for the entire movement would’ve been a much more effective name choice (I think).

All that said, I think EA is a net positive and I appreciate what they’re trying to do and thank those who give such a large percentage of their income to others they think need the help.

Edit: removed BLM name comparison to avoid triggering people.

Expand full comment

I think a lot of it is a mix of do-gooder derogation and resentment from other parts of the nonprofit sector.

Expand full comment

It's uncomfortable to think that you could forgo some luxuries and thereby save lives and greatly improve the world. It's psychologically much easier to think that such sacrifices are fruitless and the people telling you to make them are sanctimonious fools.

Expand full comment

Yeah, this. And if you include in "luxury" also spending money on ideological or religious pet causes and status games disguised as charity, it becomes very obvious why some people cannot stand the idea of the kid screaming that the king is naked, or that at the very least his supposed clothes are not very effective.

Expand full comment

If the king is naked, clothe him. The problem is when the suggested solution is not "give clothes to the naked" but "donate to my foundation to publish a paper on setting up a committee to examine 'is clothing the naked the best use of our money?'"

Expand full comment

As this very post argues, people will not, ex nihilo, successfully come up with the best (according to their own values) places to donate their money, without _somebody_ doing the legwork of figuring out how effective various interventions are. I do wonder if you have some argument against this or are just taking random already-debunked potshots.

Expand full comment

Yes, I think this is one of the main causes; we have the same problem with veganism.

Another one is, I think, a question of aesthetics: they don't like the aesthetics of the EA community (they're weirdos or techbros or something), and so they don't like them and don't want people to think they are doing something good.

(There are also fair criticisms, but I think most of the strong dislike comes from these two things)

Expand full comment

I mean, the obvious answer is 'EA is lethally bad at anything resembling PR/Public Outreach, as is indicated by the very name they chose.' I'm sure they didn't think 'effective altruism, because we're better than all those other idiotic/ineffective altruists,' but the reason they didn't think 'shit, that's a name that's going to make us look like arrogant twits to the very community we want on side' is because...they're incredibly bad at anything resembling PR.

Expand full comment

The name is not terrible in itself, it's the attitudes that crystallised around it that make it the PR disaster. "We're gonna do Do-Gooding Better! Even more, we're gonna do it Right, unlike the other rubes who just throw money at wasteful campaigns in order to feel good about themselves!"

Again, not everybody had that attitude, but it was very easy for writing on the topics to slide into that territory, intentionally or not. "Why do people give to charity? Well, because they're dumb and run on feelings". Ouch.

Expand full comment

I'd say the name is indicative of the broader problem, which is that they're shit at PR and totally unwilling/unable to hire people who are good at PR. You also see this in stuff like 'let's buy a castle, here's the cost benefit analysis showing it's really a good idea,' when 30 seconds with a PR specialist would get you 'no, that's dumb, the cost to your reputation, which you aren't factoring in, will be really large.'

Especially if your PR model is intended to be 'we're ruthlessly focused on helping people, not like giving to the opera, or the Met, or any of the big events/institutions where you get a chunk of your money back in events/tickets/parties' - but they're really, really allergic to PR.

And in some ways, I find that charming and good. Like, GiveWell's 'our mistakes' page (https://www.givewell.org/about/our-mistakes) is the sort of thing I absolutely wish more organizations would do and is so anti-PR it almost becomes PR, if that makes any sense.

Expand full comment

Here's my pet theory.

I think it all started with SBF. Following that, the EA-is-a-doomsday-techbro-cult meme was born. Once the meme was born, a journalist wrote a vaguely sinister piece about EA to make people feel scared, enraged and taken advantage of. That gets clicks. Once this happened, EA became associated with fear, rage and lurking danger. Techbro cultists are taking over the White House RIGHT NOW! After this, anyone with a stake in media presence started capitalising on the meme, finding more and more things to be enraged about. And so the hatred cycle started.

It was so memetic because EA is the perfect media rage soup: techbro + rich + elite + entitled + cult!

EA is the easiest thing to straw-man as a cult. Anything with an organisation and philosophy can be branded a cult. The philosophy is especially easy to brand as a cult if it has something about making the world better and doomsdays. EA is a bingo!

It doesn't matter that most EAs are neither techbros, rich, elite nor entitled. They have enough tech people involved. They have rich supporters. They are mostly well educated, which is enough to straw-man them as elite. They are vocal about controversial opinions, which is enough to straw-man them as entitled.

The funny part is that in fact EAs are the opposite of entitled. God forbid people with normal incomes decide to help people with no income. Saving babies, but are they doing it sincerely? What a shameful display of privileged superiority! Meanwhile the people writing the hit articles are totally entitled, judging baby savers from their moral high ground, where they convene to decide what good deeds are good enough.

I also always adore the proud-Aristotle-reader argument: "These guys don't even know philosophy! How dare they think things!" It's truly enraging that someone would go around saving babies without a formal education in philosophy. Worse yet if this person has not saved any babies yet, but is daring to write essays on the internet.

We, as a community of people that only think the correct philosophical things, should stand united against this baby-saving and essay-writing menace.

Expand full comment

"I also always adore the proud-Aristotle-reader argument: "These guys don't even know philosophy! How dare they think things!"

This made me laugh, because in one of the book review post threads I was raging about Sam Harris, described (whether self-described or not, I do not know) as a "philosopher" amongst his other accolades, yet in an excerpt from a book of his, he either was ignorant of Aristotle's views on the Unmoved Mover, or he knew the argument but chose to ignore it in order to punch a tissue paper man of his own (the thing wasn't even strong enough to be a strawman).

So, yeah. Not EA in general, but if you're gonna philosophise, then have some basic knowledge.

Expand full comment

It's a fair comment. I just think it's a poor angle to attack baby savers on this basis.

I also think "EAs should read more basics" doesn't contradict "it's cool EAs are discussing weird things and try to produce original ideas", I can get behind both.

Expand full comment

"EA is a cult" was previously "rationalists are a cult," that's been around since Yud started the Sequences.

Expand full comment

Good point, it's a sequel!

Expand full comment

Wealthy nerds are a historically popular target.

Expand full comment

Once you're in the business of evaluating causes on an objective basis you're unavoidably in the business of telling people that you think *their* cause isn't very good, which feels like an attack to anyone invested in that cause. The "standard" approach to charity, or non-profit work, is to treat it more like a matter of taste. Some people like their local church, some people like art galleries, some people like soup kitchens, and so on, but within a very broad range none of these are "right" or "wrong" choices any more than there are "right" and "wrong" movies to like. Writing essays about how malaria nets are way better than any of that is just rude and disrupts the nice circle of affirmation they had going on. You can see some of that at the end of this post, where Stone admits their real grievance is that EA isn't being a good member of the charity community by affirming everyone else is also doing a good job. "Admit you’re not special and you’re muddling through like everybody else, and then we can be friends again."

Also a lot of this comes from leftists who are upset that we're not communists, or tech people who are upset that we want to restrict AI development. Probably at some point we'll get backlash from ranchers who are upset at meat replacement products, but they don't seem to have noticed EA yet.

Expand full comment

>Probably at some point we'll get backlash from ranchers who are upset at meat replacement products, but they don't seem to have noticed EA yet.

nit: I don't know at whom this ban is aimed (I doubt EA - maybe woke???) but Florida banned lab-grown meat on May 1, 2024: https://www.usatoday.com/story/money/food/2024/05/05/florida-lab-grown-meat-ban/73569976007/

Expand full comment

Yes, I was thinking of that, but it seems to be aimed at "Leftists" and/or "Woke" rather than "Effective Altruists"

Expand full comment
May 31·edited May 31

Many Thanks! Yes, EA seems an unlikely target, and leftists and/or woke do indeed seem more likely. Or a couple of legislators had an unfortunate interaction with a particularly forceful vegan...

Expand full comment

EA is something like ~40% vegan, EA events approach 100% vegan-catered, and EA almost certainly has fewer conservatives than Columbia University. It's not the same thing as woke, but the overlap is significant. And EA pushes for more lab-grown meat and pea-derived meat alternatives.

Also Florida has a huge cattle industry (for another odd element, a lot of the industry is Mormon owned); a lot of people only think of the beaches and theme parks. This ban is a pre-emptive strike in favor of a major lobbying industry.

Expand full comment
May 31·edited May 31

Many Thanks!

>This ban is a pre-emptive strike in favor of a major lobbying industry.

Plausible. So it could be more like whacking a competitor than anything else...

<evidence from fiction>

Quoth Corleone:

>It’s not personal. It’s just business.

</evidence from fiction>

Re the cattle industry - yes, that was a surprise to me.

>a lot of people only think of the beaches and theme parks.

Umm... Also NASA? ( Yeah, and retirement communities and citrus groves )

Expand full comment

Whoops, good points on NASA, retirement, and citrus.

My family is very into Florida beaches and theme parks, I'm the only space nerd of the bunch. Skewed perspective.

Expand full comment

Many Thanks!

Expand full comment

Years ago, there was an article about a couple that practiced "extreme altruism". It said they believed (I'm paraphrasing) that each person is equally valuable and they should not put themselves before others. They aimed to donate at least 50% of their income to effective charities. It didn't report them giving to any weird or controversial charities; it was something normal like Oxfam. It was a nice article that didn't criticize the couple for their altruism. It just told their story.

The amount of vitriol and disdain in the comment section, however, was enough to make you think these people must be the next Osama bin Laden. Whatever criticism there may be about extreme altruism, the couple did not deserve that pile on.

The article was written before the term "effective altruism" was widely known. There was no mention of Silicon Valley, or tech bros, or x-risk charities or anything like that. Just one couple who felt obligated to give half their income to charity.

Expand full comment

Interesting! I wonder why the vitriol. I'm not an altruist myself, but I'm not going to attack someone who is, as long as they don't attempt to force me into that role.

Expand full comment

My memory isn't perfect but a lot of the comments were saying stuff like they're crazy, they're narcissistic, they're moralizing, or they're mentally unhealthy. Some were trying to argue it's not moral to help people far away over your own friends and family. It wasn't the arguments alone that were bad. It was the tone. They weren't just debating points academically. They were mad.

Not all the comments were negative; there were supportive comments as well.

If I had to guess, I'd say some people react badly to the implications of the couple's philosophy. Even though they never asked anyone else to donate to charity, if they believe they have a moral obligation to give so much, it implies we all have a similar obligation. Imagine you and your spouse earn $100k income together and you have a lot of expenses and feel like you're struggling. You recently donated $500 to a cause you believe in and you're proud to have contributed. Then this other couple comes along and says they felt morally obligated to donate $50,000 per year of their $100k income. No matter how nice they are, it kind of puts your own efforts to shame.

Expand full comment

Yes, but people still write stories even though Shakespeare exists, and yet no one yells at someone who says they like Shakespeare and think he's great on the grounds that that's implicitly belittling your own ability to write. Obviously it'd be gauche to bring up Shakespeare in the place where someone is writing, but these people are seeing an article in a non-private space and responding.

What is going on in the heads of people who hate stories about altruistic virtue that isn't happening in the heads of people who read praise of other virtues?

Expand full comment
May 31·edited May 31

Let me just start by saying I think the people spewing vitriol at this couple in internet comments are shitty people. I think the average person in real life is better than that.

I think part of it is that this was just a normal middle-class American couple doing what they felt they had a moral obligation to do. If you read a story about the pope performing a miracle, well, no one expects you to be the pope. There's no sense that you should be doing miracles too. Shakespeare was a legendary writer; it's okay if you're not Shakespeare. But if this couple's reasoning is right (you shouldn't put yourself before others and all that), that same moral reasoning could apply to any of us.

Expand full comment

Hmmm... this is partially satisfactory but not fully. If someone normal becomes a math genius, like Terence Tao, it seems like we construct a new category of "genius not at all like me" and then treat every single exceptional event in that person's life as further evidence that they are unusual mutants and that this has no bearing on me. You can imagine this also happening for someone like Mother Teresa or other saints (Gandhi, the Buddha, Chiune Sugihara). Maybe the nature of the original article prevents this? But that's even more confusing re: EAs, where people have ready-made otherizing excuses, like "they're elites" or "they earn six figures, I don't". Maybe we only accept the existence of saints if they are sufficiently different, and it's easy to construct excuses for why they are special and you aren't.

Expand full comment

Many Thanks!

>Even though they never asked anyone else to donate to charity, if they believe they have a moral obligation to give so much, it implies we all have a similar obligation.

Ah! Personally, as a nonaltruist, I can just quietly ignore them. They do them, I do me, so I see no need for vitriol myself. Now, if they were doing something like what Peter Singer did, and asserting that ethics _demanded_ that everyone, myself included, donate massively, that would put them in the never-darken-my-doorstep-again-and-take-your-entire-field-of-ethics-with-you category. But it is the _demand_, not the _example_, that drives that classification. Since

>they [this couple] never asked anyone else to donate to charity

I view them as harmless.

Re the light this sheds on hostility to EA:

This suggests that e.g. the mathematical analysis part of EA is at least not a _necessary_ part of the trigger for the hostility. I wonder if people outside of EA who donate more than 10% of their income trigger analogous hostility, and, if so, what the threshold is and how sharp it is.

Re the arguments:

>Some were trying to argue it's not moral to help people far away over your own friends and family.

There is actually a point to this, though I would phrase it a bit differently. Forgetting morals and viewing this in terms of alliances, if two people have the same "budget" (broadly speaking, including time and effort) for offering help, and person A "spends" it on a tightly circumscribed set of friends/family/close allies, while person B "spends" it spread thinly across the globe, then person A is a more reliable ally to their friends/family/close allies than person B is. Given a choice of who to pick as an ally, it is rational to pick person A over person B.

Expand full comment

> Now, if they were doing something like what Peter Singer did, and asserting that ethics _demanded_ that everyone, myself included, donate massively

Just to clarify, they didn't say everyone must donate in this particular article from a decade ago, as far as I remember. For all I know they might believe everyone has that moral obligation.

> There is actually a point to this [...]

What you're saying is true, and moreover, many of the points the commenters were making were plausible, or valid under some moral framework. But looking at the comments section as a whole, it's weird and kind of upsetting that people would be so hostile to a couple of people helping people in poor countries.

If there were an article about someone donating money to a local sports league, people could make the same type of critiques. They could say the person donating is narcissistic, the money is better spent elsewhere, this sport is violent, it excludes so-and-so, etc. But they wouldn't. The comment section would typically have just a few positive comments. (Unless the donor is someone widely hated like Donald Trump or Jeff Bezos; they'll attract criticism regardless of the article's content.)

> This suggests that e.g. the mathematical analysis part of EA is at least not a _necessary_ part of the trigger for the hostility.

Yes, this. All the other reasons people complain about EA may be contributing to the hostility, but people are hostile even without those other factors. Though keep in mind this was an article about "extreme altruism", in which the couple sacrifices and donates much more than is typical for effective altruism.

Expand full comment

Many Thanks!

>Just to clarify, they didn't say everyone must donate in this particular article from a decade ago, as far as I remember. For all I know they might believe everyone has that moral obligation.

Yes, I draw a sharp distinction between people like the couple who are donating, but _not_ pressuring other people to do so and Peter Singer and people like him, who _do_ pressure other people. The former let me live my life in peace (even if they silently believed that everyone was obligated), the latter are _making demands_, and _not_ letting me live my life in peace.

>But looking at the comments section as a whole, it's weird and kind of upsetting that people would be so hostile to a couple of people helping people in poor countries.

The hostility seems unwarranted to me too. The tone in which such comments are made can vary a lot. I hope that e.g. my framing of the consequence of spreading a help "budget" thinly is not phrased in a hostile way. It is just a natural consequence of spreading a "budget" over a wide set of people - no judgement intended.

>All the other reasons people complain about EA may be contributing to the hostility, but people are hostile even without those other factors.

Yes, and that is valuable information! Thank you!

Expand full comment

Not sure I’d use Hamas as an example of terrorism not working. Hamas doesn’t care about Gaza having functional buildings. Hamas cares about destroying the state of Israel. If half of Gaza gets destroyed, but Israel’s reputation in the “international community” is meaningfully tarnished, then Hamas has “won” by the metrics of their own utility function. I don’t *think* the October 7 attacks and the resulting fallout have net tarnished Israel’s reputation, but it’s remarkably close. IMO Hamas would be winning the PR war if not for Jewish billionaires pulling the strings to influence public opinion (not saying this to be antisemitic. I think Israel is in the right and I have deep respect for Jews, but factually, this is what is going on.).

Expand full comment

I think the point is that terrorism against your enemies is generally bad for those who are ostensibly your friends.

Expand full comment

Perhaps Israel is net winning in the US, but internationally Hamas is probably winning the PR war.

1. The Abraham Accords could have developed further, to the point that eventually Saudi Arabia recognizes Israel too, but the Arab populace has seen so many images of Gaza by now that any Israel-Arab deal is untenable.

2. Due to a recent UN resolution (https://en.wikipedia.org/wiki/United_Nations_General_Assembly_Resolution_ES-10/23) Palestine now enjoys a greater degree of recognition by a wider group of countries than it ever has before.

Hamas is playing the long game: it is trying to maneuver Israel into the same diplomatic situation Apartheid South Africa found itself in. And the way to do it is basically entrapment of Israel.

This strategy mostly works due to contingencies of the Israel-Palestine Conflict. EA probably cannot replicate it.

Expand full comment

Before 10/7, the future looked predictable and not good for Palestinians. Now variance has been introduced, which creates the possibility of Hamas winning, and also the possibility of Palestinians/Israel winning. So Hamas could lose much harder, but "at least" they have a chance of winning so in their minds it probably was a worthwhile gamble.

Expand full comment

The status quo is total Israel domination. For Hamas, it's a total one-sided bet.

Expand full comment

Israeli domination, but Hamas survival and continuation. Now Hamas could well be eliminated.

Expand full comment

Hamas is an idea, which you can't really eliminate while Palestinians continue to hate Israel.

Expand full comment

It's not an idea at all, it's an organization. Palestinians can hate Israel without it existing, but they will probably hate Israel less once the weapons to Hamas are cut off and there is peace.

Expand full comment

Hamas is a branch of the Muslim Brotherhood. Syria had a branch of that which was destroyed by Assad Sr. Nothing impossible about it.

Expand full comment

My point is not that terrorism is some killer app that lets you solve every problem, but that there are in fact some circumstances where terrorism can be “effective”. So without a robust theory of why exactly terrorism is inconsistent with EA thought, ad hoc condemnations of terrorism via the “historically, terrorism tends to be ineffective” argument will be seen through by bright-eyed young EAs who have thought deeply about [scenario X], and come to the conclusion that [scenario X] is particularly conducive to terrorist intervention.

Expand full comment

Terrorism is particularly ineffective when associated with maximalist goals. "Remove the English ruling class from Ireland" can work because the IRA made it clear they'd be satisfied if said ruling class simply ran off to maintain a similar standard of living for themselves by oppressing a different ethnic group somewhere else.

"Kill all the techbros" is a problem for anybody who'd innately keep being a techbro regardless of where they lived or what project they were applying those tendencies to, and "End all suffering, everywhere" is an intolerable proposition to anyone whose preferred lifestyle fundamentally depends on the ongoing existence of suffering.

Expand full comment

But they aren't at all following the strategy of the ANC, which was explicitly fighting for equal rights within South Africa rather than the destruction of South Africa. Hamas' stance is that they regard any deaths coming their way as good because martyrdom leads to paradise. Israel can just keep obliging them on that.

Expand full comment

That's correct

Expand full comment

Israel isn't destroyed at all. Nor would it having a bad reputation do that. North Korea has a terrible reputation, but keeps on trucking, undestroyed.

Expand full comment

I also think Hamas doesn't care about Gaza, but they aren't really consequentialist; I don't think you can really explain what they do in terms of clear goals and an analysis of how to meet them.

I think it is mostly the leaders' struggles to stay in power, religious beliefs about what God wants, and hate.

I think the terrorism is mostly about hate, not some way to meet any particular political goal. They kill people because they hate them (or because they think they are expected to do something like that, and would be considered weak or something otherwise), and that's it - not some plan to hurt Israel's PR via how it will react.

I could be completely wrong, but I don't think they would end up with plans like that if they were really trying to meet a particular political goal, as opposed to mostly being insane.

Expand full comment

Yeah, OK, but I think you're missing the point by writing that it's OK for EAs to donate to a whole bunch of disparate causes because they're just less scattered than non-EAs. If they want to be -effective- altruists, a bunch of them ought to get together and decide, rationally or otherwise, which specific charities they want to support, and put their money there. It matters less which charities they are, as long as they're worthy ones among the set of worthy ones, than that the EAs make an -effective- contribution.

Expand full comment

The main problem is that philosophy is hard.

Like, most of the reasons to donate to one EA cause area versus another boil down to hard-to-resolve philosophical questions that don't really have an objectively correct answer. Is it better to save one person's life or to cure a hundred cases of blindness? How much is a chicken worth relative to a human? How important is bringing new people into existence? Is it better to save one life with certainty or save a billion lives with one-in-100-million odds? Is it better to focus on immediately solvable problems or on longer-term systemic change?

I think that if you randomly select ten people in the US--or even ten people in the same city or job or club--the odds of convincing all of them to give the same answers to all of the above five questions are pretty much zero, even if you sit them down and have them all argue with you and each other for hours. Things would be so much simpler if we could just solve moral philosophy, but actually *doing* that is the tricky part.

Given that a lot of people have different views on the above questions, even within the subculture of EA, it makes sense to make clusters of recommendations based on which side of each of the above questions people fall on rather than declaring a One True Cause Area.

Expand full comment

It's "philosophy is hard" combined with extreme confidence in one particular take. People don't fume at philosophers who just think philosophy is hard (which is most of them).

Which probably comes under "status grabs".

Expand full comment

Correct, which is part of the reason a lot of people get angry with EAs. On one side they argue that there's a *correct* way to do charity - it's their schtick, what they want to be known for. On the other side, they agree that philosophy is hard and that they can't answer the big questions so end up donating money to shrimp awareness just in case. That's...essentially the same problem that everyone else has.

Expand full comment

I don't think that's where the anger is coming from. I've seen a lot of criticism of the form "not enough systemic change" or "longtermism bad" or "SBF bad", but I haven't seen many people argue that EAs not being able to decide on a single cause is a problem.

I'd also argue that EA has shifted "the same problem that everyone else has" into something new. Most people don't really think much about the questions I mentioned above--population ethics, animal welfare, etc.--or about comparing charities in a systematic way and finding the best ones. If the end result is that they disagree on what's important, they disagree *substantially more* than EAs do because they've barely narrowed things down. See e.g. the plot at the end of section II. EAs aren't exactly marching in lockstep, sure, but IMO they've done quite a lot of work on finding good options for someone who accepts moral claim A vs B vs C, and they also spend more time thinking about those moral claims than any other movement I know about.

Expand full comment

Even normies who put no thought into these things care about effectiveness in charities. I'm old enough to remember when United Way went through a big thing and lost support because they were identified as spending too much on overhead and executive salaries. That was long before EA, so EA could not have been what influenced the world to move away from an ineffective charity.

I'm not saying EAs need to pick one thing or even narrow their approach. I think Scott's right that we can only put so much money at a time into each separate question.

My criticism is that their ideological underpinning for who they give to and why is philosophically discordant. There isn't even a relatively short list of priorities that ties their causes together in any meaningful way. AI, Long term risk, X-risk, Animal Welfare, Pain reduction, Human Health, Human Longevity (and/or reducing early death) - all potentially good things depending on your approach, but they're not nearly even in the same realm. It reads like a wishlist for various people's priorities. Again, that's fine, but that doesn't point to a "correct" way to do charity. It points to people trying to figure out what's important to *them* and acting on it. Same as everyone else. Inasmuch as EA priorities align with mine, I applaud them. When they spend money on things I find silly, I don't care if they succeed or even resent them taking money from better options - just like everyone else.

Expand full comment

A few points.

1. Say we can't universally resolve whether curing blindness in X people is better than saving Y cows. If a particular person wants to donate to animal welfare as opposed to curing blindness, they still have to decide between many charities. Having people analyze which charity is most effective in a certain area is a huge help already. We don't *need* a shortlist of biggest issues for a shortlist *per issue* to be helpful. That's true even if there were 1000 issues up for debate as being most effective.

2. Noticing famous charities being ineffective, or little-known charities/areas to be effective is valuable. Concrete example: I don't think donating against malaria in particular would have been on my mind had I not gotten it explicitly recommend as an effective donation target.

3. Quote: "AI, Long term risk, X-risk, Animal Welfare, Pain reduction, Human Health, Human Longevity (and/or reducing early death) - [...] they're not nearly in the same realm." Firstly, I disagree: pain reduction, as you put it, is very clearly a part of both animal welfare and human health issues. For EA, AI is in large part a worry in the X-risk sense. X-risk is a specific issue of long-term risk (where humanity has literally no future, as opposed to a problematic one). Lastly, human health and human longevity are clearly linked - increasing human longevity on the basis of health (vs e.g. driving safety) requires reducing and postponing deaths from disease. Again, a very clear link. Secondly, to the degree that you are right (e.g. X-risk vs animal welfare), that's kind of the point. If you had a shortlist of issues where all the issues were basically similar, then it should be possible to reduce it further.

Expand full comment

Even if someone thinks their preferred cause is fundamentally more important than, say, a particular disease which causes blindness, they presumably prefer people committed to working on that disease to do so efficiently, since if it's solved in a lasting way sooner, and at lower cost, that means more resources freed up for the main event.

Expand full comment
May 30·edited May 30

I mean, that's just the problem the Is/Ought / Empirical vs. Moral distinction has been causing for all of human history, right? *Is* this well-building charity better at saving lives than this malaria bednet charity? *Is* investing in NASA to find & deflect incoming asteroids better at preventing the apocalypse than investing in the vaccine development & supply chain to counter the next Black Death level pandemic? *Is* X better than Y at doing the same thing?

But there's no way to go from that to "Oughts". *Ought* you care about saving lives in Africa, if you simply don't? *Ought* you care about investments in preventing apocalypses that will probably happen after you die anyways, because they're so unlikely per year, so it's never actually going to affect you anyways? *Ought* you care about animal cruelty, rather than arguing that caring about animals is a bug in our evolutionary programming as Mr. Stone does? *Ought* you care about X & Y at all in the first place?

There's no way to go from "Is" to "Ought" (unless you believe that everyone secretly already has the same goals & values in the first place [e.g. world revolution, the return of Christ, America taking over the world in the name of Democracy], and just doesn't know it yet) -- that's just philosophy. You can't turn an "Is" into an "Ought". But you sure as hell can go from "no Is" to "Some 'Is'es": empirically, are we at greater risk of another Black Death, or another big asteroid? Empirically, do poor people in Africa need more wells, or more of something else? Empirically, where are the most animals suffering today, rather than just where they're the most visible & easy to see?

So it can simultaneously be true that, for any given charity, there's a "correct" way to do it, the kind of correct that EA is famous for & which infuriates its critics... and also that "philosophy is hard", there's no correct charity, just the correct charity for you. Which also infuriates a different section of critics, since the correct charities for EA are not the correct charities for them: global poverty instead of socialist revolution, preventing the apocalypse instead of hastening Jesus's return, veganism instead of red-blooded American barbecue, etc.

But I don't see any contradiction here. It's just that EA pisses off the people who share the same goals as it, for reasons everyone is now familiar with (implying that alternatives aren't effective because they don't even try to answer the "Is" questions), and also simultaneously pisses off the people who have different goals from it (implying that World Socialist Revolution is less important than alleviating poverty). It's a sign that -- regardless of what else you think about it -- EA really *is* doing something different from everyone else, if it can be this uniquely hated, across the aisle... and it's not just doing the same thing in response to the same problems as everyone else. (Even when funding shrimp awareness.)

It's a uniquely, ah shall we say, 'iconoclastic' approach, admittedly -- but one I don't see any contradictions inside. Some charitable organizations really are more effective than others. And simultaneously, no charitable cause really is better or more moral than any of the others. You just make a lot of enemies saying both at once.

Expand full comment
May 30·edited May 30

I think this is the crux of the matter. If you think shrimp awareness is a vital ethical problem, then you *should* be trying to find out what the most effective and impactful (ugh, I now feel soiled for using that word) charity or intervention is.

You can even go out and evangelise for shrimp awareness charities, that's also part of it.

What you don't get to do is tell people that them giving to local soup kitchens or churches or cuddly puppies and kittens homes is bad, whereas your concern about shrimp is objectively correct by the principles of the universe, and you can say this because you have a lovely mathematico-philosophical-utilitarian-statistical formula to work that out.

I'll take it from a religious body telling me that their One Holy Cause is the way, the truth and the life, even if I disagree with them on doctrine or indeed their faith in total, because I share a lot of the same underlying assumptions and I recognise where they are coming from, why, and what. I won't take it from a bunch of people all in agreement that religion is dumb and that the appearance of them operating just like a religion is not alone pure coincidence, it's not in fact happening at all, so don't believe your lying eyes.

EDIT: You are also entitled to be concerned about shrimp awareness *and* think that religion is dumb! But you don't get to say "Your religion is dumb but my religion is not a religion, even though it's got many of the hallmarks and practices of one, and it's true and right and just, so yah boo sucks to you".

Expand full comment

You sure are being aggressive Deiseach, to someone who is agreeing with you.

Expand full comment

Being concerned about shrimp awareness is indeed not a religion; it is just an ethical consideration.

It isn't any more a religion than being concerned about the well-being of your children or stuff like that.

But I suppose people have different understanding of what the word "religion" really means.

Expand full comment

I think consolidating efforts on one or a few specific charities to maximize impact isn't really feasible or desirable. One, as Scott mentioned, different charities/fields are able to absorb different amounts of funding, so eventually you will probably hit diminishing returns on a given field/charity, making another field/charity more cost-effective. Two, donors may not want to risk having zero impact, and so may want to diversify their charitable bets for that reason. Three, different people and organizations have different comparative advantages, so to the extent that you're putting in your own time and effort, you will have diversity for that reason. Four, it's just not feasible to get thousands of people to agree on a single or very few narrow top priorities; people will disagree about e.g. how to value animals versus humans, because they have different worldviews and fundamental values.

Expand full comment

I feel like Scott already addressed this argument in the post?

Like, read the paragraphs starting from:

> Technically, it’s only correct to focus on the single most important area if you have a small amount of resources relative to the total amount in the system (Open Phil has $10 billion). Otherwise, you should (for example) spend your first million funding all good shrimp welfare programs until the marginal unfunded shrimp welfare program is worse than the best vaccine program. Then you’ll fund the best vaccine program, and maybe they can absorb another $10 million until they become less valuable than the marginal kidney transplant or whatever.
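To make the quoted rule concrete, here is a minimal sketch of that greedy marginal allocation in Python. Everything in it is invented for illustration - the program names, the diminishing-returns curves, and the dollar figures are hypothetical, not real cost-effectiveness data.

# Greedy marginal allocation: each chunk of budget goes to whichever
# program currently offers the highest value for its *next* dollar.
def allocate(budget, marginal_value, step=1_000_000):
    spent = {name: 0 for name in marginal_value}
    while budget >= step:
        # Diminishing returns: a program's marginal value falls as its
        # funding-so-far (spent[p]) grows.
        best = max(spent, key=lambda p: marginal_value[p](spent[p]))
        spent[best] += step
        budget -= step
    return spent

# Hypothetical curves: value per dollar at a given level of committed funding.
curves = {
    "shrimp welfare": lambda s: 5.0 / (1 + s / 3e6),  # high peak, saturates quickly
    "vaccines":       lambda s: 3.0 / (1 + s / 1e8),  # lower peak, absorbs far more money
}

print(allocate(20_000_000, curves))
# With these made-up curves, shrimp welfare takes the first few million,
# then vaccines absorb the rest - mirroring the quoted logic.

The point of the sketch is just that "fund the single best thing" and "spread across causes" stop being in tension once returns diminish: the greedy rule produces a portfolio on its own.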

Expand full comment

Proportion of individual income donated to charity is not a good indicator of whether someone will benefit from joining EA or whether EA will benefit from them joining.

There are better traits to go for, and EA's leaders have known and followed this principle for like a decade when evaluating what kinds of people to let in.

Expand full comment

>what kinds of people to let in.

Is there a board that forbids people from joining? How does one get ejected from EA? Are you stripped of your cuddle puddle license?

Expand full comment

Yeah, there aren't really any admission criteria. Nobody's going to interview you if you want to take the GWWC pledge, or do a background check before you show up to an EA meetup.

Expand full comment

I think, due to some Unfortunate Events in the past, that there is some kind of community health watchdog set up to make sure the worst of the obvious grifters and exploiters don't take advantage of the nice people, but how much weight their judgements of "we probably shouldn't let Stevie hang around the groups anymore" carry, I have no idea.

Expand full comment

They finally got around to "don't let in actual psychos" and "let's shuffle out likely rapists," but I'm skeptical that any charity is *that* good at avoiding obvious grifters and exploiters.

Expand full comment

"Effective altruism does try to get beyond “I want to donate to my local college’s sports team”. I think this is because that’s an easy question. Usually if somebody says they want to donate there, you can ask “do you really think your local college’s sports team is more important than people starving to death in Sudan?” and they’ll think for a second and say “I guess not”."

After they say "I guess not" they will continue to donate to the sports team. The sports team is not a charity, it is a form of personal consumption. I support it because it gives me pleasure. I will concede that occasionally going to expensive restaurants is less important than helping starving people in Sudan, but that doesn't cause me to redirect my restaurant expenditures to those starving people.

My main problem with EA is its lack of realism about human beings and their motivations.

Expand full comment

"they will continue to donate to the sports team"

Some will, and some won't. Some will continue giving to the sports team, but take it out of the share of expenditures they use on personal expenditures like expensive restaurants, rather than the share they use on charity (Yudkowsky addressed this point: https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately). I myself give more to charity than I used to (and not just because I'm making more money) and rely on GiveWell instead of CharityNavigator, because I've been persuaded by EA arguments. EA won't change human motivations, but it can persuade many of us act on our motivations differently, which in aggregate can do a lot of good.

Expand full comment

If you think of donating to a sports team as an act of personal consumption, like going to a restaurant, nobody is really suggesting you should replace that with effective charities. If the standard is to donate 10% of your income to effective charities, then that means 90% can go to things like restaurants and sports team donations. The criticism of donating to sports teams is for those who treat that as part of their charitable giving.

And doesn't the very existence of the movement as it is demonstrate that the criticism can be effective at changing people's actions?

If your problem is that the "unrealistic" approach is simply causing the movement to grow slowly and stay smaller than it could be, I would ask, what's the advantage of growing faster or being bigger if the standards are lower? You can't accomplish a goal better by abandoning it; that would completely defeat the purpose. Or, if you agree it would be good for more people to donate to better causes, what's your alternative solution for how to make that happen?

Expand full comment
May 30·edited May 30

"The criticism of donating to sports teams is for those who treat that as part of their charitable giving."

What if the local sports team I am donating to is set up and run by a youth organisation working for social inclusion of marginalised youth, including those of different racial, ethnic, intellectual and physical ability, and sexual orientations?

Helping buy equipment and pay for premises for something which steers troubled youth off the path to falling into criminal activity may be a charitable endeavour every bit as much as "I worry about fish as persons, that's why I wear this fork bracelet*".

*https://paxfauna.org/vegan-tables-a-letter-to-the-people-i-love/

"Piled on top of that is the guilt I feel every time I don’t speak up. I see people fishing in the park, and I wonder if there’s something I could say to prevent someone from suffocating to death in the next few minutes. I wonder if someone will die this way because of my cowardice, my unwillingness to intervene."

Expand full comment

>Piled on top of that is the guilt I feel every time I don’t speak up. I see people fishing in the park, and I wonder if there’s something I could say to prevent someone from suffocating to death in the next few minutes. I wonder if someone will die this way because of my cowardice, my unwillingness to intervene.

If anyone takes that seriously, donating only 10% of income is unsupportable.

Expand full comment

I think she's a perfectly nice, perfectly sincere woman who has no protection on her nerve endings and is excessively scrupulous. I certainly have no intent to mock her, but I also think this is not normal and is part of what turns ordinary people off when they bang up against "so what do you guys do? okay, you worry about fish?"

Expand full comment

"What if I add a whole bunch of stipulations after-the-fact which weren't part of the original scenario?"

Also, just changing the subject. First we're arguing whether it's worthwhile to try to get people to donate to effective charities at all; now we're arguing the merits of the charities?

On top of that let's throw in some highly emotionally charged rhetoric, to defend your own cause and mock one you don't like.

Overall I find your comment to be in extremely bad faith.

Expand full comment

If you don't want to see those things, then I advise not reading Deiseach comments. They have been around for years, haven't changed and bans don't stick because people apparently enjoy seeing this type of content. You are likely not going to change her mind, and you will likely be mocked or have your point ignored.

Expand full comment
May 30·edited May 30

Is it worthwhile to get people to donate to effective charities? Yes.

What is an effective charity? There we start arguing.

(a) Malaria bed nets save lives, donating to your local sports team doesn't, ergo malaria net charities are effective

Great, fine, we agree

(b) I think shrimp have sentient experience, so we should work on alleviating shrimp suffering

Uh, well, you do your thing

(c) I think we should work on alleviating shrimp suffering in preference to donating to the local sports team as that is more moral and ethical

Perhaps, but now we're getting into the area of fundamental principles. Maybe in this particular case, donating to the local sports team *does* save lives (see the disadvantaged youth). Maybe I don't care about shrimp and if you give me a choice between shrimp and sports, I'll go for the sports on the grounds that it at least helps humans, if I don't decide I have no obligation to donate to either charity at all.

Yes, we are arguing the merits of the charities, because that's what happens when you brand yourself as EFFECTIVE. And I'm not the person using emotive language about fish being people; that's a lady posting on EA forums in all sincerity. Take it up with her if you don't like how it makes you look.

Expand full comment

> Is it worthwhile to get people to donate to effective charities? Yes.

> Yes, we are arguing the merits of the charities

Not in this thread; if you look at the OP we were discussing the worthiness/effectiveness of trying to convince people to switch their donations to effective charities. If you want to discuss something else you could start your own thread instead of starting an unrelated tangent in this one.

Expand full comment

If you want people to switch to *effective* charities, then how else but by arguing the merits of the charities do you decide which are effective?

"We donated six billion to Clean Up The Beach last year". "Okay, and how clean is the beach?" "Not very?" Then the charity is not effective and you need to give the money elsewhere.

"Okay let's switch that six billion to Buy Top Hats For Snakes". Is that an effective charity, even if every snake is indeed fitted out with a top hat?

Expand full comment

>"Piled on top of that is the guilt I feel every time I don’t speak up. I see people fishing in the park, and I wonder if there’s something I could say to prevent someone from suffocating to death in the next few minutes. I wonder if someone will die this way because of my cowardice, my unwillingness to intervene."

I realize that this is not what she intended, but, if someone wanted to parody and ridicule Peter Singer's "drowning child" scenario, this would be a pretty good text.

Expand full comment

Some charity is better than none.

https://xkcd.com/871/

Expand full comment
founding

A lot of them will continue to donate to the sports team. Very few of those will defend this as a more benevolent and altruistic choice than donating to feed starving Africans or whatever. Those are generally seen as two different classes of expenditure, even if both of them include the word "donate".

Expand full comment

When it comes to Effective Altruism, the reason I haven't signed up - apart from being broke and spending all I make on trying to keep my family sorta-okay (don't ask) - is the "altruist" part.

I really like EA as presented by Scott Alexander here; I think his case for it in "Nobody is Perfect, Everything is Commensurable" should be read by everyone. But the "altruist" part of this sticks in my craw. If it were called "Effective Benevolence", I would be all over it.

I know EAs are sick of hearing about SBF, but his actions are completely, totally justified by the morality of altruism. If you rob one person to feed ten, that's absolutely allowed. It's the myth of Robin Hood, after all. And SBF is a piker compared with the real monsters down that path.

Okay, I will close that particular note here. Maybe someday I'll start a spin-off with the same ideals and a different name :)

Expand full comment

Why is the name so important? You could take the Giving What We Can pledge without having to say "Effective Altruism" at all :)

Expand full comment

Prussian is a Randian.

Expand full comment

Objectivist. Use the correct words.

Expand full comment

Names have meaning! Do you suppose anyone would want a part of the organization Devoted Effective Valuation Improving Lives?

Expand full comment

Agreed, names have meaning! You said above "If it were called 'Effective Benevolence', I would be all over it". If you agree with the principles and findings of the EA community - e.g. that donating to highly impactful charities will 10-100x the lives saved by your donation vs a typical charity, that donating >=10% of your income is a good idea for many people, and that therefore you could save a number of lives via donations - but the reason you don't do so is just the name of the philosophical movement that led to these ideas, that seems very costly (maybe counterfactually reducing the number of lives you save!).

But I think I probably am misinterpreting your original comment, given that this sounds like an unlikely view to hold! So I'm intrigued to hear what you mean.

Expand full comment

The fact that, as Scott says in the post, SBF’s actions drove people away from EA shows that, even from a strict consequentialist standpoint, they were almost certainly negative.

Expand full comment

> I know EAs are sick of hearing about SBF, but his actions are completely, totally justified by the morality of altruism. If you rob one person to feed ten, that's absolutely allowed

This is completely incorrect. There is nothing about the concept of altruism that inherently implies the necessity of stealing to carry it out.

I would also point out that the entire line of reasoning doesn’t make sense. If you believe that stealing to enable altruism is morally necessary, it is true whatever you call your movement. It doesn’t make sense to say that people who use the word altruist have a different set of moral rules they have to follow than people who don’t.

Expand full comment
May 30·edited May 30

>There is nothing about the concept of altruism that inherently implies the necessity of stealing to carry it out.

There's nothing that, *by itself*, necessitates stealing. But there's something which eliminates the fence that keeps most people from stealing, and once you've eliminated it, other reasons may necessitate stealing.

Typical normies have deontological rules that say not to steal even if you can increase utility. EA gets rid of such rules. Getting rid of them doesn't *necessitate* stealing, but it makes stealing easier to justify.

Expand full comment

You seem to be confusing EA with consequentialism. They are related concepts, but not the same. Not all EAs are consequentialist and not all consequentialists are EA. For example, I am an EA who is a deontologist. If you have an argument about consequentialism, you would probably do better to leave EA out of it.

Expand full comment

Scott, and I would venture 90% of EAs, are consequentialists.

Actually decided to check the old survey: https://rethinkpriorities.org/publications/eas2019-community-demographics-characteristics

A whopping 3% of EAs identify as deontologists; 80% are consequentialist. I deem it entirely fair to assume based on the survey results and all major EA figures that the ideology is *basically* consequentialist.

Expand full comment

Most EAs are consequentialist, but that doesn’t make consequentialism and EA the same thing. They are distinct concepts even though there is some overlap in who holds both views. Even if 100% of EAs were consequentialists and 100% of consequentialists were EAs, they still would be distinct concepts.

The criticism being discussed here is applicable to consequentialism, not EA. I'm struggling to understand why people are grasping at straws to conflate the two.

Expand full comment

Why is stealing wrong by the creed of altruism? Isn't it moral to even degrade yourself, sacrifice your own moral standing for some higher calling?

Expand full comment

I feel that Scott affirms consequentialism but is really a closeted virtue ethicist (it's okay; it gets better)

Expand full comment

> I know EAs are sick of hearing about SBF, but his actions are completely, totally justified by the morality of altruism. If you rob one person to feed ten, that's absolutely allowed. It's the myth of Robin Hood, after all. And SBF is a piker compared with the real monsters down that path.

No, they aren't! SBF didn't take all his clients' money and give it to charity. He didn't run a Ponzi scheme where the "profits" were partly put towards charity. He had a completely sustainable, functional business model (make money on crypto transaction fees and arbitrage and similar), then took half of his clients' money and punted it on stupid bets! FTX went bankrupt because it took its money and lent MORE THAN HALF OF IT to Alameda Research, letting the trading firm dig itself into a truly gargantuan hole from which it... actually wound up making the money back somehow: https://www.ft.com/content/c564c74f-de33-450e-bbc4-250849c980ab [e: Kinda-sorta, not really, see reply. It was made back to the value at time of bankruptcy, not to the value the assets would have had if there had been no bankruptcy.]

But SBF did not "do a good EA." He was not motivated by altruism in these decisions. He punted a bunch of money on stupid bets, and happened to also be an EA.

Expand full comment

They did not make the money back! What happened is that by entering bankruptcy they were able to lock in people's claims at their cash value on the date of bankruptcy (at the crypto lows), and are now able to pay back that amount because crypto prices have returned to highs. If you had 1 bitcoin on FTX, you are getting ~$20K back, not ~$70K. (This is reasonable from the point of view of bankruptcy law, but it means that SBF committed a real harm; also, as the judge noted, if you steal money, go to Vegas, and win, it doesn't mean you didn't commit a crime.)

Expand full comment

It's not actually clear that FTX was a sustainable business without the fraud. My understanding is that part of why it succeeded as an exchange was Alameda systematically losing money to FTX customers, and they only had so much money to lose in the first place because it was stolen from those customers to begin with. There are shades here of people investing with Bernie Madoff because they assumed he was getting his returns through insider trading, only to discover the returns were fake the whole time.

Expand full comment

So if he had robbed people, plain and simple, and given that to charity, that would be okay?

Expand full comment

You could at least argue it was in some way an EA-related act if he had done that. When a philanthropist runs a gigantic Ponzi scheme which reroutes all the money to the AMF or something, I'll have to consider whether it reflects poorly on EA.

Expand full comment

And there we go. By the lights of altruism, you don't have any criticism of fraud and robbery, only that he might have benefited from them.

Expand full comment

Please point to where in my post I said that.

Expand full comment

Your first sentence

Expand full comment

SBF took an insane approach to risk in part because of his normative beliefs. Most people with EA beliefs aren't that crazy, but SBF without those normative beliefs probably would have taken smaller risks.

Expand full comment

Who are better known for an insane approach to risk: crypto bros or EAs?

Expand full comment

Let's return to that gift that keeps on giving, the Sequoia puff piece about SBF. It leans very heavily on the influence of Singer, MacAskill, and EA/Rationalism/Utilitarianism as the inspirations and guiding philosophy of SBF.

Now, without EA and the community emphasis on Utilitarianism, would he have done the same dumb stuff? Very probably. But EA, and more importantly Utilitarianism, as a philosophy gave him a rationale for why doing the dumb stuff was not dumb but 4-D chess too advanced for normies:

https://web.archive.org/web/20221027180943/https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/

"SBF’s mind had been trained almost from birth to calculate. As a schoolboy the hedonic calculous of utilitarianism had him trying to maximize the utility function (measured in “utils,” of course) for abortion. ... All of it boiled down to expected value. The formula is fairly simple. If the amount won multiplied by the probability of winning a bet is greater than the amount lost multiplied by the probability of losing a bet, then you go for it—irrespective of units. Utils, euros, dollars were all subject to the same reckoning.

...Those who prefer the sure win are “risk-averse,” and those who would rather gamble are “risk-lovers.” But both risk-lovers and the risk-averse are suckers, equally. Because, over the long run, they lose out to the risk-neutral, who take both deals without prejudice.

...To be fully rational about maximizing his income on behalf of the poor, he should apply his trading principles across the board. He had to find a risk-neutral career path—which, if we strip away the trader-jargon, actually means he felt he needed to take on a lot more risk in the hopes of becoming part of the global elite. The math couldn’t be clearer. Very high risk multiplied by dynastic wealth trumps low risk multiplied by mere rich-guy wealth. To do the most good for the world, SBF needed to find a path on which he’d be a coin toss away from going totally bust.

Pressed on how he justified the decision to gamble with his future, SBF turns the question around, imagining a world where operating this way is commonplace. In that world, “There’s a thousand of you. Some of you are going to do good, but the starving child doesn’t give a shit about which person it is who does that good. So why are you concerned about this little term in the equation?” To be clear: that “little term” in SBF’s equation is “you”—in other words, the specific person who saves the starving child. SBF is making a distinction between the selfish desire to be a savior and the selfless desire that children be saved—and choosing selflessness. “Obviously, I care about impact,” he says, and by “impact” he means “maximizing the overall chance that someone saves the child.” Thus, logically, it follows that “I am risk-neutral on overall impact.”

[When he first got into crypto trading with The One Weird Trick]:

Fortunately, SBF had a secret weapon: the EA community. There’s a loose worldwide network of like-minded people who do each other favors and sleep on each other’s couches simply because they all belong to the same tribe. Perhaps the most important of them was a Japanese grad student, who volunteered to do the legwork in Japan. As a Japanese citizen, he was able to open an account with the one (obscure, rural) Japanese bank that was willing, for a fee, to process the transactions that SBF—newly incorporated as Alameda Research—wanted to make. The spread between Bitcoin in Japan and Bitcoin in the U.S. was “only” 10 percent—but it was a trade Alameda found it could make every day. With SBF’s initial $50,000 compounding at 10 percent each day, the next step was to increase the amount of capital. At the time, the total daily volume of crypto trading was on the order of a billion dollars. Figuring he wanted to capture 5 percent of that, SBF went looking for a $50 million loan. Again, he reached out to the EA community. Jaan Tallinn, the cofounder of Skype, put up a good chunk of that initial $50 million.

...The first 15 people SBF hired, all from the EA pool, were packed together in a shabby, 600-square-foot walk-up, working around the clock. The kitchen was given over to stand-up desks, the closet was reserved for sleeping, and the entire space overrun with half-eaten take-out containers. It was a royal mess. But it was also the good old days, when Alameda was just kids on a high-stakes, big-money, earn-to-give commando operation. Fifty percent of Alameda’s profits were going to EA-approved charities.

“This thing couldn’t have taken off without EA,” reminisces Singh, running his hand through a shock of thick black hair. He removes his glasses to think. They’re broken: A chopstick has been Scotch taped to one of the frame’s sides, serving as a makeshift temple. “All the employees, all the funding—everything was EA to start with.”

...The point was this: When SBF multiplied out the billions of dollars a year a successful crypto-trading exchange could throw off by his self-assessed 20 percent chance of successfully building one, the number was still huge. That’s the expected value. And if you live your life according to the same principles by which you’d trade an asset, there’s only one way forward: You calculate the expected values, then aim for the largest one—because, in one (but just one) alternate future universe, everything works out fabulously. To maximize your expected value, you must aim for it and then march blindly forth, acting as if the fabulously lucky SBF of the future can reach into the other, parallel, universes and compensate the failson SBFs for their losses. It sounds crazy, or perhaps even selfish—but it’s not. It’s math. It follows from the principle of risk-neutrality.

“I think it’s hard to justify being risk-averse on your own personal impact,” SBF told me when I quizzed him about it—“unless you’re doing it for personal reasons.” In other words, it’s selfish not to go for broke—if you’re planning on giving it all away in the end anyway.

...“Maybe let’s take a step back,” he says, only to launch into an explanation of his own, personal utility curve: “Which is to say, if you plot dollars-donated on the X axis, and Y is how-much-good-I-do-in-the-world, then what does that curve look like? It’s definitely not linear—it does tail off, but I think it tails off pretty slowly.”

His point seems to be that there is, out there somewhere, a diminishing return to charity. There’s a place where even effective altruism ceases to be effective. “But I think that, even at a trillion, there’s still really significant marginal utility to dollars donated.”

...The scale of his giving, even now, before he has really started to divest, is massive. Alameda Research, the company that generated the FTX grubstake, still exists, and its purpose seems to be to generate profits—on the order of $100 million a year today, but potentially up to a billion—that can be stuffed into the brand-new FTX Foundation. Similarly, even now, 1 percent of net FTX fees are donated to that same foundation, and FTX handles nearly $5 billion dollars’ worth of trades per day. The foundation, in turn, gives to a diversified group of EA-approved charities.

...And, indeed, SBF puts his money where his mouth is. SBF is personally backing a slew of so-called AI alignment nonprofits and public-benefit corporations including Anthropic and Conjecture. He’s also the big money behind a new nonprofit called Guarding Against Pandemics, which, not coincidentally, is run by his brother Gabe. And SBF was the second-largest donor—behind only Mike Bloomberg—for Biden’s successful attempt to dethrone Trump.

[And the killer conclusion to the entire piece, where hindsight really makes you want to bang your head against the wall for how wrong the writer got it]

...Crypto is money that can audit itself, no accountant or bookkeeper needed, and thus a financial system with the blockchain built in can, in theory, cut out most of the financial middlemen, to the advantage of all. Of course, that’s the pitch of every crypto company out there. The FTX competitive advantage? Ethical behavior. SBF is a Peter Singer–inspired utilitarian in a sea of Robert Nozick–inspired libertarians. He’s an ethical maximalist in an industry that’s overwhelmingly populated with ethical minimalists. I’m a Nozick man myself, but I know who I’d rather trust my money with: SBF, hands-down. And if he does end up saving the world as a side effect of being my banker, all the better."
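(An aside on the "risk-neutral" math being celebrated there. Here is a toy simulation; the bet structure and numbers are invented for illustration, not taken from the article. Each round you stake everything on a coin flip that triples your money or zeroes it - a positive-expected-value bet that a pure EV-maximizer takes every single time.)

    import random

    def run(rounds=10, trials=100_000, start=1.0):
        total, busts = 0.0, 0
        for _ in range(trials):
            wealth = start
            for _ in range(rounds):
                if random.random() < 0.5:
                    wealth *= 3          # win: triple the stake
                else:
                    wealth = 0.0         # lose: total bust
                    busts += 1
                    break
            total += wealth
        print(f"mean final wealth: {total / trials:.1f} "
              f"(theory: {start * 1.5 ** rounds:.1f})")
        print(f"fraction bust: {busts / trials:.4f} "
              f"(theory: {1 - 0.5 ** rounds:.4f})")

    random.seed(0)
    run()

The mean final wealth does come out near the theoretical 1.5^10 ≈ 57.7, exactly as the expected-value formula promises - but about 99.9% of the simulated bettors end at zero, with the entire mean carried by the roughly one-in-a-thousand who never lose. That is the "march blindly forth" strategy in miniature.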

Expand full comment

I think no ethical framework can save you from false beliefs, unknown unknowns, or bad luck.

And I think the main antidote against false beliefs is rationality, and the main antidote against unknown unknowns is wisdom.

If SBF was a well-meaning utilitarian (and I can believe he was), I think he was still wrong in his analysis - he wasn't very irrational, but he was lacking some wisdom.

Expand full comment

Wasn't he? How would you know? He certainly gave a lot away.

And, again, I want to stress: I said he was moved by the ethic of Altruism and practiced it in the only way it can be practiced.

Expand full comment

> Wasn't he? How would you know? He certainly gave a lot away.

Perhaps his desire to give things away was motivated by genuine altruism, or perhaps merely because he wanted to look good, but this was unrelated to punting his clients' money on stupid bets. I know this, because punting his clients' money on stupid bets does not have anything to do with altruism.

> I said he was moved by the ethic of Altruism and practiced it in the only way it can be practiced.

Where do altruists say you should punt your clients' money on stupid bets? I must have missed that bit.

Expand full comment
Jun 1·edited Jun 1

Yes, I am still unsure, but I believe he was probably well-intentioned.

But no good intentions, nor any kind of ethical framework, can really save you from being wrong on facts or on reasoning.

The exact way it fails will vary from one ethical framework to another, but it will be bad.

And what he did was very different from what typical EAs do: they are more consequentialist than average, but not hardcore like that, and even most hardcore utilitarians don't agree we should be risk-loving like he was.

Also crypto is currently a giant useless Ponzi scheme, wasting a lot of important resources and helping criminals.

I think what the rationalist community should learn from this is to be more skeptical of supposed "geniuses", of how rational the market really is, and of how justified it is to have billionaires (my belief is that it isn't, if that was unclear).

Expand full comment

You seem to be assuming that clients have a right to their property and so on. But Altruism flatly denies that. Who are the clients to put their selfish desire for profit over SBF's charitable work? If a few or many go bust - so what? That's sacrifice. Right?

Expand full comment

> But Altruism flatly denies that.

[citation needed]

Expand full comment

...so? He'd argue - does argue - that he was taking the money in this scheme in order to donate it.

But more: let's say he lived in a rented room with charity shop clothes while doing this, giving all else to charity. Wouldn't he then be a saint of altruism?

Expand full comment

I completely disagree that SBF's actions were "totally justified by the morality of altruism". I think his actions had an extremely negative impact from an altruistic point of view, and that this was foreseeable. For example, I expect that if you would have polled all EAs in summer 2022 to ask whether SBF should use customers' funds for Alameda (i.e., commit fraud), 99% would have said "please don't", because it was obviously a bad idea, or its badness was obvious from the outside, anyway.

See also https://rychappell.substack.com/p/naive-vs-prudent-utilitarianism

Expand full comment

How is it not altruist to rob one to feed ten?

Expand full comment

It may or may not be altruistic to rob one to feed ten; I'm saying that robbing one to feed ten is not a good representation of what SBF did.

Expand full comment

It's what he claimed to do and aspired to do. Again, this is the only way the creed can be practiced.

Expand full comment

I think we are talking past each other? You now seem to be focused on SBF's intentions. But I objected to you saying his actions were "totally justified by the morality of altruism". I take that to mean: "altruism provided a good reason for SBF to do what he did".

I'm saying that that's wrong -- SBF's actions (specifically, his decision to use FTX customer funds to make investments with Alameda) were not justified by altruism. If altruism is "selfless concern for the well-being of others", his actions clearly ended up -- and predictably so -- harming the well-being of others, e.g., by defrauding thousands of investors of their money, by destroying trust and causing reputational harm to EA efforts to help others, and so on. Hence, altruism did not provide a good reason for SBF to do what he did, and "robbing one to feed ten" is not a good representation of his actions.

Expand full comment

There's an old 80K podcast interview where Rob Wiblin says something to the effect of "I wonder if it would be better to rebrand to Effective Helping".

Expand full comment

EH.

Expand full comment

Benevolence is the word I would use

Expand full comment

In an earlier thread, people were talking about the possibility of EA moderating their arguments in order to go mainstream. I was pessimistic because I felt that it would rapidly denature into something almost indistinguishable from traditional charities. (Call it "Altruism+"?) Perhaps an EB-like spinoff would be a good idea, especially if it's significantly more or less likely to compromise on its principles. I think having a canary would be more useful than a bulwark, at the moment, but I doubt that's the direction that you want to go in. ;-)

Expand full comment

SBF had an insane approach to risk. It's not justified to take such an approach.

Expand full comment

When I read the first couple of sentences and saw Stone making a point of EAs being concentrated in Boston and the Bay Area, knowing what I do about Lyman, I thought he was going to argue that those places have low birthrates and that therefore EA is bad for population growth. That wouldn't be much different from his arguments that population density mechanistically causes low birthrates, or from his actual argument here about the correlation between EA presence and the rate of charitable giving.

Expand full comment

Not knowing who Lyman is, I thought it was going to be a right-wing attack.

Expand full comment

His main thing is obsessing over population growth. He does a lot of really broad statistical correlations and says "well this option is the one that increases birthrates so that's what we should do."

Expand full comment

> Eliezer Yudkowsky has sometimes been accused of wanting to bomb data centers

He has also advocated for the nuclear annihilation of 99%+ of humans as a means of forestalling AI risk. Can we please stop paying attention to what he thinks? He is insane.

Expand full comment

"Advocated" is a strong word for "preferred over the literal extinction of all life on Earth"

I might prefer to die of cancer over drowning to death but that doesn't mean I'm gonna be snorting asbestos to avoid it.

Expand full comment
May 30·edited May 30

> He has also advocated for the nuclear annihilation of 99%+ of humans as a means of forestalling AI risk.

link?

Expand full comment

If the choice comes down to 99% or 100%, which would you pick? What if you were only 99% confident about the 100% annihilation?

Expand full comment

I'm confident he has done no such thing, and distinctly recall him saying that a nuclear war would do nothing to prevent the issue from re-occurring in a few years or decades, after civilization had rebuilt enough to do AI research.

Expand full comment
founding

"99+% of humans" is your own strawman, not anything EY has said and not anything that most people here would believe. No plausible "nuclear annihilation" event is going to kill 99+% of humans, or even 90% of humans,

The number of dead humans in that hypothetical may be too high for your taste, and you wouldn't be remotely alone in that. But grossly exaggerating the number makes people take you less seriously, and attributing the grossly exaggerated version to someone a lot of people here respect is going to make them take you a *lot* less seriously.

Expand full comment

I think there are two prongs to EA as a concept: source and destination - the source is the general policy that it's totally fine to maximize your income in a morally neutral/negative job if it pays you a lot and you donate a lot, and the destination prong is the search for the most effective recipient of those donations.

I have a lot of respect for the latter idea but find that the former prong tends to fall flat because, when push comes to shove, a little feels like 'good enough' (especially when laundered through the effectiveness calculus of the destination prong). Saying you're selling out but committing to EA is like telling people you plan to run a marathon or start a diet - it provides the moral satisfaction you seek without any commitment to follow through (and most people, being weak and selfish, fail to follow through). EA should be more than a means of assuaging the guilt of the rich.

Expand full comment

> the source is the general policy that it's totally fine to maximize your income in a morally neutral/negative job if it pays you a lot and you donate a lot

This is a pretty blatant strawman. It isn’t part of EA at all.

Expand full comment
May 30·edited May 30

This used to be a big part of EA (e.g. that it was better to work for a finance firm and donate the extra money than to take a less well paying job). This has become less prominent over time and I know 80,000 hours has distanced themselves from it.

Edit: per the link from Simultan, I would say this continues to be a significant part of EA

"These days when we think of harmful careers, finance is often the first that comes to mind. But the financial sector employs almost 4% of the workforce, or 6 million people in the US.1 These people do a huge variety of tasks, from legal compliance, to IT, to trading, to banking.

First, it’s highly unlikely that all of these positions are net negative. Rather, some do harm, others do good, and many are roughly neutral.

Second, it matters exactly how the negative consequences come about. If a banker is overpaid, that may be socially wasteful and so in effect have a negative impact on others, but ethically it’s very different from a banker who commits fraud."

see also https://80000hours.org/articles/earning-to-give/ which is updated post SBF.

Expand full comment
May 30·edited May 30

I guess I haven't kept up, but it's good to know the parts of the movement I objected to are at least being pushed back on/reassessed (by the major figures if not the acolytes themselves).

Expand full comment
May 30·edited May 30

I now agree more with your statements, and consider the previous comments I made to have been based on fig leaves presented by an organization that has not actually reflected on its positions.

Expand full comment

I don't see how yxwvut is vindicated? They wrote "the source is the general policy that it's totally fine to maximize your income in a morally neutral/negative job if it pays you a lot and you donate a lot", which seems pretty clearly untrue (the "totally fine to maximize your income in a morally negative job if it pays you a lot and you donate a lot" bit).

As best as I can tell:

- EAs/80K have always advocated getting neutral or positive, high-earning jobs and donating the money

- EAs/80K have always discouraged getting harmful jobs and donating the money

Is the issue that you don't consider any jobs in finance to be neutral-to-positive, whereas 80K does?

Expand full comment

I think encouraging talented individuals to get jobs in finance is a net negative relative to direct action. To misquote something I heard on a podcast this week, the EA/80K stance comes off more like "Texas telling people not to drink while boating" than like an actual strongly held position against ethically dubious work. (I do think EAs don't want people to do very negative things a la SBF, but they were quite happy taking his money when he was only thought to be facilitating Ponzi schemes.)

Expand full comment

Yep, see for example https://80000hours.org/articles/harmful-career/ (2017): "We believe that in the vast majority of cases, it’s a mistake to pursue a career in which the direct effects of the work are seriously harmful, even if the overall benefits of that work seem greater than the harms. And within a job, we think you should avoid actions that seem very wrong from a commonsense perspective, even if you think it’ll do more good overall."

Expand full comment

(That's for harmful/negative jobs. I think EA has thought, and still does think, that it's great to work a neutral job, earn a lot of money that way, and donate that money. IMO that's a good thing to advocate.)

Expand full comment
May 30·edited May 30

The demographics of the EA movement would tend to disagree with you there. Sure, the movement's figureheads may no longer directly support this idea, but the membership base of EA adherents has its own interpretation. If I criticize American Christianity, pointing to the Bible's message to care for the poor and downtrodden is no great rejoinder.

Expand full comment

What demographics are you referring to? I suspect that you are reasoning based on stereotypes rather than fact.

Expand full comment

What kind of nonsense is this "EA hasn't grown that much, therefore your argument is moot" point from Lyman Stone? Effective altruism is younger than Zoomers.

This is akin to a first-century Roman saying that those cultists who follow the teachings of that one executed rebel who nattered on about "forgiving your enemies" and "giving to the poor" are going nowhere, because their rate of converting new believers hardly keeps up with the Roman rate of executing believers. And yet, a couple of thousand years later we're all in agreement that giving to the poor is a good thing, to the extent that Stone can complain about whether EA is doing it right.

Expand full comment

As best I can tell from what's quoted, Stone doesn't live up to the self-imposed charge of arguing that EA is bad. Most of his arguments amount to "EA is not necessarily better than Not-EA", which is rather weak. About the only things that make EA actively bad in his arguments are caring about animals and not supporting terrorism, which are attitudes shared by ~95% of the general population.

Expand full comment

People care about dogs, cats, dolphins and a handful of other favored species. I doubt a majority of people care about factory farm animals.

Expand full comment

Probably not, although when animal welfare bills focused on factory farms have been put to a vote in California, they've passed.

Expand full comment

I think people do care about factory farm animals, though it's true they often don't care enough to make up a strong political faction, or to stop eating meat.

For example, a 2018 survey (https://faunalytics.org/attitudes-towards-farmed-animals-bric-countries) of people in the US, Brazil, Russia, India, and China found that 71% of people believed "it is important for farmed animals to be well cared for", and 49% or 89% (depending on how the question was phrased) said animal welfare is important to them. A 2022 study (https://www.frontiersin.org/articles/10.3389/fanim.2022.960379/full) found that "most participants across all countries agreed that the welfare of both farmed animals and companion animals was important to them, and that laws that protect that welfare were also important". A 2023 survey (https://malta.representation.ec.europa.eu/news/eurobarometer-shows-how-important-animal-welfare-europeans-2023-10-19_en) found that a "large majority of Europeans (84%) believe that the welfare of farmed animals should be better protected in their country than it is now".

Expand full comment

The difference between AI risk and coal plants, Trump, and abortion clinics is that AI risk - as portrayed by its proponents - is paperclips and grey goo masterminded by HAL 9000. It doesn't advance human interests. That's not something reasonable adult humans can hold opposing views on. The data centers, in this situation, are Bin Laden's bunker, not the Alfred P. Murrah federal building.

Basically, if it's paperclips and grey goo, then it's a man vs machine situation where evading automated AI defenses to blow up the data centers in a "terrorist strike" (I think "commando style raid" is a much better way of phrasing this) is a happy ending, cue end credits. If it's not paperclips and grey goo, it's not "AI risk", but just the same old "human beings in charge using tool in misguided fashion".

Expand full comment

I personally don't think Trump advances human interests, at least not beyond the danger he poses. And his danger is located in a very specific place, wherever he happens to be at a given moment (unlike AI, which is located not only across those tens of thousands of datacenters but also in all of the people who have figured out how to make AI and all of the information they've written down, so blowing up one data center would not cause the end credits to roll). Yet I don't (repeat, DON'T) believe anyone should go up to him and kill him to end the threat he poses. Does that mean I don't really believe he's a threat? Or does it mean I believe there are all kinds of good reasons not to do it, including morality and the risk of backlash?

Expand full comment

AI is something of a special case, again. When you say something is an *existential risk*, as in "destroys the entire known universe," then there are very few good reasons not to do extreme things to stop it. If one believes AI has any appreciable degree of existential risk (>0.001% in the next 100 years, say), they should consider Sam Altman roughly 1,000,000% more terrifying than Trump. But one can (and indeed, researchers often do, as I just did) pull those numbers out of the air to justify whatever they want anyways.

Three thoughts on EAs re: AI: A) they don't believe the risk is truly existential and should change rhetoric accordingly, B) they would rather let the universe be destroyed than do anything beyond writing a harsh essay (because most modern Westerners who enter EA are incredibly domesticated, cooked valley people to the core), and C) complicated mathematical considerations that are essentially bullshit to justify doing whatever you want anyways.

Expand full comment

Suppose you thought that AI has a 50% of driving humanity extinct in the next century. What illegal things do you think would actually improve those odds, as opposed to backfiring and making them worse?

If someone wants to become a terrorist, step 0 is noticing that historically, most terrorists ended up not actually accomplishing their goals (or shooting themselves in the foot; see e.g. Scott's example of 9/11) and killing a lot of innocent people along the way. This holds true even though probably the vast majority of terrorists were certain that *their* particular flavor of terrorism was going to succeed. If you're a terrorist who's confident about your chances, you'd better take a very careful look at your reference class before doing anything, because from the outside view it doesn't look great!

I don't think this is a "complicated mathematical consideration". It's just Chesterton's fence--the observation that breaking well-established moral rules for the sake of some greater good very rarely works in real life, and that this holds regardless of how sure you are that you're an exception. SBF rediscovered this the hard way, and I expect that anyone who decides to become a terrorist/murderer/criminal mastermind/etc over AI risk or animal welfare or mosquito nets or whatever will too.

Expand full comment

Hmm... taking the question somewhat seriously, it's not just the illegal things you think will work, it's the spin you can put on them. If you can make yourself out to be the anti-AI equivalent of Greenpeace or the Weathermen, your odds of backlash are much lower.

That consideration aside, I agree, I don't think there are many illegal acts that would make a difference (short of melting the entire Bay Area using massive space lasers (https://www.amazon.com/Nano-John-Robert-Marlow/dp/0765301296)).

With the complicated mathematical consideration, I was more critiquing the general utilitarian impulse to use numbers to justify what you'd want to do anyways. Like Scott's kidney donation - I think it had very little to do with EA-the-philosophy/ideology, and more that the kind of person attracted to EA is also the kind of person to donate a kidney.

Expand full comment

I think that's fair, although FWIW, I think Scott was pretty upfront about his kidney donation not being motivated by hard utilitarian calculus. I'd be happy to see more of this sort of reasoning.

Expand full comment

> the anti-AI equivalent of Greenpeace

This is mostly a joke, but... How close are the probabilities of a) AI causing the extinction of humans, and b) AI causing the extinction of a cute and fuzzy species, such as cats? My impression is that there's quite a lot of overlap in the scenarios envisioned. "Save the kittens" might have better optics.

Expand full comment

I would not be the least bit surprised if focusing on the extinction of cats and koalas (or even that weird fish that's only found in one puddle in the desert) had much more impact on popularity.

Also “notkilleveryoneism” is a silly phrase.

Expand full comment

Getting pretty far off the point here, but...

Are you sure terrorists don't already do that analysis, and come back with a different answer than you? Would the terrorists who orchestrated 9/11 really think the aftermath was net negative?

I mean, just speaking trivially, some of them believe if you die in the service of your cause you get lots of really great things in the afterlife.

Keep in mind, people who are attracted to terrorism don't generally share a lot of the same value system as those who are not. Their risk/reward analysis is probably pretty alien.

Expand full comment

Are we considering the right set of groups here? American revolutionaries were once terrorists, and they won completely. Ditto every successful revolution in history. Socialists in the Gilded Age in the US could be said to have successfully used terrorism to get a 40-hour workweek and Social Security, among other things.

On balance I think most terrorist groups do fail, but we can't exclude the winners and then conclude that all of them are losers.

Expand full comment

The only terrorism committed by the revolutionaries I can think of is the tarring and feathering done by some lynch mobs, which I think probably had a negligible effect on winning the war. (The Boston Tea Party wasn't terrorism, for instance.) The 40-hour workweek was won by labor strikes, to my understanding, not terrorism (though some union intimidation of nonunion workers was involved, sure), while Social Security was a government program, part of the New Deal.

Expand full comment

Obviously it depends on what is counted as terrorism, but certainly actions taken in support of labor in the 19th century were substantially more violent than anything related to AI has been.

https://en.wikipedia.org/wiki/Battle_of_Blair_Mountain

https://en.wikipedia.org/wiki/Colorado_Labor_Wars

https://en.wikipedia.org/wiki/Molly_Maguires

Expand full comment

As Josh says, Union activity before the NLRA was easily identified as terrorism, and fits most definitions.

Why wouldn't the Boston Tea Party be terrorism? That seems like quibbling over definitions. We could argue about what is or isn't terrorism, but I don't think that would ultimately be fruitful. To me, a group of people who were not a country acted illegally to use violence to get their preferred political ends. That's not the definition of terrorism, but one man's terrorist is another man's freedom fighter. For instance, when we (the US) supported the Afghans against the Soviets in the 1980s, we didn't call them terrorists.

Expand full comment

It seems like you’re saying that once you calculate a risk is existential, you’re only being consistent if you take extreme actions, regardless of their efficacy. I think the opposite is true: if the threat is existential, efficacy is of the highest importance. A lot of people think climate change is an existential threat; some do take extreme actions, but others do politics to try to improve the situation, and that’s not inconsistent with thinking it’s an existential threat. I think you missed D): they don’t believe extreme action will address the problem so they prefer to do something they believe might.

Expand full comment

Not just existential, but how likely it is. Aella had a tweet not long ago along the lines of making lifestyle changes (wilder activity, not saving much money, etc.) due to a 70% belief that the world would not survive the next 50 years. A reply that stuck out to me was, to paraphrase, "so that's a 30% chance of ending up retired and penniless? For me that would result in no behavior change at all."

But yeah, my original comment was missing that option D.

Expand full comment

My take on the suggestion of a drone strike to prevent AI-driven extermination was that he was obliquely suggesting that people who say today's children are at risk are not being sincere.

Expand full comment

I agree he was; what I'm saying is that the supposed evidence doesn't support the claim at all.

Expand full comment

I understand.

The problem I see is that it seems silly - to me - to counter, as Scott did in his essay, that terrorism is ineffective, when that isn't really what is at issue.

Again, in my opinion only, if you believe AI is an imminent X-risk to humanity, develop that argument rather than respond to what was essentially a red herring.

As a bit of an aside, I'm largely sympathetic to EA goals. I, like most people, don't like animal cruelty either. I'd like to see genetically modified broccoli that was as delicious as deep-fried chicken and satisfied the same cravings. Hell, I wish there were a way to test antidepressants that didn't involve the forced swim test too, but I know that in the end you have to play the cards you are dealt and work with the reality in front of you.

In the case of concern for animal qualia you are bucking a few millennia of socially acceptable behavior that is supported by no less an authority than the Old Testament.

You aren’t going to convince anyone, at least not a ‘normie’ - I really hate that word - using an argument that counts and sums neurons.

Expand full comment

> I personally

That's basically it right there. And I suspect even you would agree that Trump does advance the (short-term) interest of (some) humans.

Nobody could, in good faith, make the same argument about an AI grinding up babies to make paperclips, or replicating into grey goo. These serve AN interest, but not the interest of a human being.

Expand full comment

AI also serves some human interests, but I don't actually see what the significance of this point is. Let's say I agree that AI has no benefits and believe it has a high chance of ending humanity. That could make terrorism against it morally justified, but it still wouldn't make any sense to do it unless the terrorism was likely to actually reduce the risk. Indeed, some would argue that the terrorism *isn't* morally justified unless it's likely to address the problem. I don't have any reason to think that bombing a data center or whatever would make AI less of a threat, and I have several good reasons to think it wouldn't, which Scott laid out in the post.

Expand full comment

You're talking theory. I'm saying that in the event that today, May 30th, 2024, an AI is currently masterminding an operation that is hell-bent on turning the world into paperclips, nobody in good faith could or would argue that stopping the AI is a bad thing, or that the AI's goals are noble. In that situation, since the AI can be trivially stopped by blowing up the data center housing the glorified amalgamation of spreadsheet and JavaScript that constitutes the AI, a commando raid would be uncontroversial, and not at all analogous to "I think AI might end up some day deciding to maximize paperclips" or "I think Trump / Biden is bad and destroying humanity".

Expand full comment

Oh, well, if you're positing a hypothetical situation like in Terminator 2, where an AI located in one data center is strongly believed to be THE paperclip AI, then of course blowing it up is justified. But Stone is saying that if people think AI is a long-term existential threat, they should be blowing up data centers now, even though there are thousands of them and we have no idea which one (if any) is the best target, and that's what I took Scott to be disagreeing with.

Expand full comment

"I have recently found myself enjoying a young man online walking backwards into Calvinist Universalism by gradually conceiving of God as an ultimately simple utility monster."

Well, that made me blink! "Calvinist Universalism" could be very dystopian, depending on which side of Double Predestination you come down on - we're *all* damned?

I think a pie chart of where exactly all the various strands of EA giving are directed would be a very useful thing, and if someone hasn't done it yet, they really should do so. Finally we'd have some means of comparison to "don't donate to your local sports club/church" as to where the money is going. Manor houses in Oxford, anyone?

Yes, you can say that's an easy strawman. But speaking as a Catholic, we've had years of people posting as gotchas that "if the Catholic Church is serious about doing good for the poor, why don't they just sell all the Vatican treasures and give away the money?" If we have to explain why no, we can't just offload the Sistine Chapel, then the EA movement if it wants to be a serious player has to get its big boy or big girl boots on and do the same kind of work to respond to critics. Even the ignorant ones.

Expand full comment

Don't forget the criminal justice reform fiasco (https://nunosempere.com/blog/2022/06/16/criminal-justice/) but the usual dodge is that OpenPhil isn't *technically* EA.

Expand full comment

The EAs did sell their castle/abbey/manor thingy.

Expand full comment

Ah, really? Did it not turn out to be the benefit they expected? Too expensive to run, not enough events to make it worthwhile, turns out you *can't* just get the Prime Minister of Belgium to drop in whenever, all the rest of the expectations I had around it?

Expand full comment

>I think a pie chart of where exactly all the various strands of EA giving are directed would be a very useful thing, and if someone hasn't done it yet, they really should do so. Finally we'd have some means of comparison to "don't donate to your local sports club/church" as to where the money is going. Manor houses in Oxford, anyone?

EA is pretty transparent about this, actually. For instance:

- https://www.givewell.org/top-charities-fund

- https://www.openphilanthropy.org/grants/

YMMV on Open Philanthropy's AI risk and animal welfare grants, of course, but they're not lacking in transparency, and I think it's quite hard to argue that GiveWell's Top Charities Fund isn't spending their money well.

Expand full comment

>we're *all* damned

Is that worse than just ceasing to exist after death? If not, then doesn't seem too dystopian.

Expand full comment

Definitely worse, which is why Annihilationists picked that doctrine. Hell is not a fun place and being eternally damned is a horrific experience. Since they don't want to think of even the wicked suffering such a fate, then it is more acceptable in their theology that the damned simply cease to exist after death and judgement.

https://en.wikipedia.org/wiki/Annihilationism

Expand full comment

My main problem with some religions is the belief in a perfect God and hell together.

So I am happy to see some abandon the excuses on this and their belief in hell.

Expand full comment

> "if the Catholic Church is serious about doing good for the poor, why don't they just sell all the Vatican treasures and give away the money?"

Have you ever seen the film The Shoes of the Fisherman? Anthony Quinn, as the new pope and a recently released prisoner of a Soviet gulag, does just that, selling Vatican art treasures to feed a starving communist China and avert WWIII. Yeah, not an entirely plausible premise. The movie available to stream has been edited to remove a major subplot, and it honestly kind of sucks. It was a commercial and critical failure in its original release too.

I saw it a couple times in a theater when it came out because a pal was an usher at the sole movie house in town and he didn’t require a purchased ticket from his friends.

The novel it was based on was changed in significant ways.

Expand full comment

Oh, I saw it years ago. And read the book it was based on. The Jesuit befriended by Quinn's pope was clearly meant to be Teilhard de Chardin, and when John Paul II was elected, a lot of media punditry immediately went back to The Shoes Of The Fisherman for an example of life imitating art about electing a non-Italian pope 😁

Expand full comment

My understanding is that "universalism" in the Christian context means specifically "everyone goes to Heaven." So "Calvinist universalism" does not include a dystopian possibility (unless you think it's dystopian that Hitler will go to heaven or something)

Expand full comment

No, the reason I worry about the dystopia is the "Calvinism". Strict Calvinism is not Universalist, and it does have clear categories of the elect and the reprobate. Due to the doctrine of Double Predestination, not only are some chosen to be saved, some are chosen to be damned. If you're one of the damned, there is nothing you can do about it. You may think you have faith, but it's a false faith, not saving faith. You are deceived and self-deceived and your ultimate destination is Hell.

So a Calvinist Universalist might agree that the ultimate fate of all souls is the same after death, but for a strict Calvinist it's much easier for that ultimate fate to be *all* shall be damned rather than *all* shall be saved.

Hitler is going to Hell either way in that schema. Now, as a Catholic, we cannot say with 100% certainty that even Hitler is damned eternally. There is the possibility of last-minute repentance (a very faint possibility and I wouldn't bet the house on it) but nevertheless only God, and God alone, knows the ultimate fate of Hitler.

Expand full comment

I think the "Calvinist" part here really just means there is no free will, and that everyone is predestined to go to heaven. The "Universalist" part is what changes it from normal Calvinism, which has the elect, reprobate, double predestination, etc. No matter how hard you try, you're going to Heaven.

Expand full comment

"why don't they just sell all the Vatican treasures and give away the money?"

I think things like this come from a lack of understanding how the world works, and envy/jealousy. Something like it is even in the New Testament, Mark 14:5: "For it might have been sold for more than three hundred pence, and have been given to the poor. And they murmured against her."

What happens to the treasures when you sell them? They don't disappear into the ether. Now someone else has them. Sure, they might sell the treasures, too, and give the money to the poor. It's the economy, stupid. The treasures would become worth less and less (in monetary value) and food would get more and more expensive. The same might be said for the herb worth 300 pence.

The trouble is that we only have so much food and ability to make more. Would anyone say we shouldn't bother creating art, when we could devote the time, energy, and materials to making more food?

Expand full comment

The argument that Stone presents "proving" that EA does not increase charitable giving is very bad, and it makes me seriously doubt his credibility. Even the way he summarizes his first point:

"That should all make us think that the rise of “effective altruism” as a social movement has had little or no effect on overall charitableness. This is my first major critique.

Effective altruists propagandize like they are uber-givers, but there isn’t actually evidence that these claims are true. Performative rhetoric seems like a likely explanation."

I'm with him that EA hasn't increased society-wide charitableness, but it's a very big jump from there to saying that individual effective altruists are just being performative.

It was a good idea to rebut the related argument that EA should focus on spreading more and gaining more members. I think there is also a related argument that there is something wrong or off-putting about the culture of EA (or maybe its portrayal in the media?). I volunteer a lot for a global health organization's chapter on a college campus, and opinions on the EA movement among my fellow volunteers range from negative to neutral/don't know. I think in the long term EA will lose out on potential good members by having a bad reputation.

Expand full comment

Lyman is clearly a really intelligent and interesting guy, but every time I read him I walk away thinking he's also got some of the strongest blinders/biases towards ideas he doesn't like that I've ever seen.

Expand full comment

It's funny that Stone has to grasp at these straws when I think there is a much better and more obvious critique of EA. They confuse the map for the territory. EAs want to reduce human suffering but they forget the human experience.

I'm not explicitly talking about SBF here; I think he was unable to empathize with other people and was a degenerate gambler. If he hadn't hit on EA and maximizing money, he probably would have died playing Russian roulette in a seedy back room or something. I do think the other EAs, who were specifically recruited because of their belief, focused entirely too much on the money-making potential of FTX. All of these rationalists got swept away by the idea of making hundreds of millions or billions of dollars and all the good they could do with it. And thus they were willfully blinded to all of the fraud going on and the damage they caused in the end.

Another example is the attitude towards having kids. A child is a big drain on resources like time and money, resources that could be spent on altruism. Think of all the effort over 18 years that goes into raising a kid, and how many other people could be converted to EA in that time. Why make one future EA when the returns on converting other people are so much higher? As if the value of your life or the impact of having children is measured in a spreadsheet.

I want to be very clear that this is a warning, not me dumping on EAs. A lot of the early EAs hired by SBF bailed when they realized how crooked he was. There are EAs like Scott who are going to raise and love their children. I give the EA movement a lot of credit for trying to make the world a better place, even if they don't always succeed. Just don't forget that you want the world to be better for the actual people living in it.

Expand full comment

Also, I just realized the image thumbnail is a rock and Scott is rebutting Stone. Deep stuff.

Expand full comment

Paper beats rock, so codex beats stone?

Expand full comment

Well, the EA movement is very deliberately NOT against its members having kids. Which, in my own opinion, takes a lot of force out of Scott's argument that it's "a social technology for getting you to do the thing that everyone says they want to do in principle". Because, as you said, kids are expensive in terms of effort and money.

Imagine if the EA movement were this careful about expensive gold jewellery. Like, people writing whole essays about how Actually, It's Okay To Buy Jewellery As An EA. That sounds absurd, right? I mean, if you're a GWWC-type effective altruist, you're free to do whatever you want with the remaining 90%. But, like, there's an implicit understanding, what Scott calls "social technology", that jewellery is obviously a frivolous luxury with no real utility behind it, and you should really stop.

Not so with kids, apparently. The risk of being perceived as a cult is greater than the benefit of acknowledging what the movement's principles imply. I read a Brian Tomasik essay about this; it was pretty hilarious. He's a guy who's often happy to say some very outlandish things about ethics, yet when it came to the matter of EAs having kids, he was oh-so-careful to bury his every statement against this under a bunch of qualifiers.

(Obviously I'm a little facetious here - kids, unlike jewellery, have some inherent value to them, not all EAs are utilitarians, etc., etc... See, I can do it too!

...Still. How'd the saying go? Shut up and multiply? Yeah, that.)

Expand full comment

If they are having kids, then they are indeed shutting up and multiplying 😁 Indeed, they are following the oldest command of them all:

"28 And God blessed them. And God said to them, “Be fruitful and multiply and fill the earth and subdue it, and have dominion over the fish of the sea and over the birds of the heavens and over every living thing that moves on the earth.”

Expand full comment

Well, people who want kids care a lot about having them! If you're planning to have 10 kids, then maybe there's an argument for only having 9. But if you want 2 kids, you can hardly have 1.8 — you have to make your sacrifice in units of whole kids.

Asking someone who yearns to be a parent to forgo it entirely is way too unreasonable a demand. Demanding that people barely keep their family fed and warm, and donate all the rest, would be way more sacrifice than I want a movement to promote. Making the family smaller is even more demanding.

I do think having kids is pretty fucked up, because having absolute power over a particularly vulnerable and needy human being for a decade or two is a recipe for abuse. But I'm not going to try to talk people out of it, given how badly they want kids.

Expand full comment

"I do think the other EAs, which were specifically recruited because of their belief, focused entirely too much on the money making potential of FTX. All of these rationalists got swept away by the idea of making hundreds of millions or billions of dollars and all the good they could do with it. And thus they were willfully blinded to all of the fraud going on and the damage they caused in the end."

Yeah. Michael Lewis, who is very sympathetic to SBF and doesn't think he was an intentional fraudster, hammers on this in his book - that without the support and contacts in the EA movement, FTX would never have gotten off the ground. And he hammers on the mathematical calculation of risk versus reward that they were all operating off. If I hadn't already been somewhat familiar with EA from Scott's blog, I would have been convinced they were this weird cult.

Expand full comment
May 30·edited May 30

>Will MacAskill’s What We Owe The Future sold only 100,000 copies in the whole world.

How many were bought by EA to hand out?

>why EA needs to be a coherent philosophy

Does it have one of those? Isn't that part of your whole motte-and-bailey about "the movement versus the ideology," that it specifically doesn't have a coherent philosophy beyond a few rules like "bed nets are better than art"?

>It’s actually very easy to define effective altruism in a way that separates it from universally-held beliefs.

Disagreed (I'm not gattsuru but he says it better than I have): https://www.tumblr.com/gattsuru/735549623751589888/id-caution-that-this-sort-of-public-thinker

Edit: Also these response posts always come across a bit... sad. I get that EA is your religious analogue; it's very important to you emotionally. You want to defend it, that makes sense!

Maybe, though, making arguments that actually convince people who aren't already entirely predisposed to liking it (as Stone puts it and I've said before, EA doesn't make people nice; it attracts and uses people that are already a very particular kind of nice) would ultimately be a better defense?

I say that as someone who likes EA-the-idea and dislikes much of EA-the-culture, of course, so I *would* say that. But I can't imagine fence-sitters and potential-interested people reading this, or any of your defense posts, and thinking "yeah, I really want to be part of that."

Expand full comment
May 30·edited May 30

> Also these response posts always come across a bit... sad. I get that EA is your religious analogue; it's very important to you emotionally. You want to defend it, that makes sense!

I think these defense posts are important on entirely impersonal consequentialist grounds. I mean, I would assume some people reading this are neutral on EA, and may find Stone's and/or Scott's arguments persuasive. If you believe -- as Scott does -- that EA is a great force for good in the world, discouraging others from considering EA ideas and taking EAish actions is actively harmful, and so responding to attempts at discouragement is prosocial and good for society (not, or not only, important to oneself emotionally). (Of course it's important to figure out what is true and right. I'm not suggesting Scott is or should be deceptive in his advocacy for EA.)

Expand full comment
May 30·edited May 30

I would guess that Scott's audience is at least an order of magnitude larger than Stone's, possibly two (just guessing from the info Medium gives versus Substack, but Stone is also active on twitter so who knows). If that's true, then Scott is also exposing vastly more people to arguments against EA! He's doing so in the process of attempting to refute those arguments, but there is still the potential on the margin that he's giving people arguments against EA that they would otherwise not see.

By my estimation, if the goal is to promote EA, the time would be better spent cutting out the middleman and providing more positive arguments on their own rather than refuting bad arguments. Then again, it's also likely that much of Scott's audience is already settled for or against, and if he gave just positive arguments they wouldn't land effectively.

Edit: Indeed, responding to your edit. I don't think I know of a place where Scott has been deceptive (except in the Kolmogorov sense); to my awareness he hasn't had any situation like the controversy around EA vegan advocacy (https://acesounderglass.com/2023/09/28/ea-vegan-advocacy-is-not-truthseeking-and-its-everyones-problem/).

Expand full comment

Let me ask, do you think *you* are changing many minds with lines like "bed nets are better than art" as a summary of EA ideas?

Expand full comment

Absolutely not, but changing minds isn't my point there, nor have I made writing essays to do so a significant portion of my income.

It was a cheap shot, I'll certainly admit. Scott is usually the first to say EA isn't really a coherent philosophy, and I let my snark get the better of me. If I ever make the time to actually critique EA with the intent of changing minds and earn a Scott response essay, I'll keep the snark to a minimum.

Expand full comment

Sure, nothing wrong with snark if it's what you're aiming for, though obviously if I don't agree with the premise I'm not going to think it lands. I just think you should consider that what you find sad or unconvincing is not a good predictor of what others will find so, given your strong feelings on the matter. Same is true of me, of course!

Expand full comment

A suggestion for question 1,537 of the increasingly long ACX annual survey: has anything written about EA on ACX improved your opinion of EA?

Yeah, I don’t think I’m a good predictor, and this is just my opinion (but one I’ve heard from a few others): Scott’s tone and attitude in approaching this have shifted since the SSC/ACX change.

Expand full comment

Oh, well, I think his tone has changed in a few ways since then, and not always for the better. So I'm definitely willing to buy that.

Expand full comment

"A suggestion for question 1,537 of the increasingly long ACX annual survey: has anything written about EA on ACX improved your opinion of EA?"

Yeah, I think so. I read about things like the Carrick Flynn débâcle and am torn between laughing at them for "what did they think would happen when you try parachuting in an outsider?" and scorn about "so these are the guys trying to save the world via superior smarts?"

Then I read Scott's take on it all, and am reminded that no, these are not actually crazy people, they just have their own little bubble and the values thereof.

Expand full comment

Scott's tone is generally very charitable, even to groups and people who he strongly disagrees with. Sometimes that's been hard for him, like with the "you're still crying wolf" post, but he did it anyway and I applauded his consistency.

These defensive posts do not have the same tone and (slightly) reduce my opinion of Scott and his writing. Or more accurately, significantly reduce my opinion of his responses on certain subjects. I think he has a blind spot on these topics, and struggles to maintain detached rationality or open charity.

That's not much of a criticism, considering he's still in the top 0.0001% of people I've heard from in terms of being charitable with their enemies - people are extremely bad at this. I get that he hears the same arguments about EA, AI, Psychiatry, etc. and is tired of responding to them. I also get that he feels passionately about these topics and feels the need to respond. I hope he realizes that he may hurt his own brand doing this, and doesn't put too much of his built-up goodwill into it, but I wouldn't fault him if he went all in 100% EA and burned all of his credibility towards achieving something important to him. What's he building credibility for, if not to achieve ends that are important to him?

Expand full comment

I think you personally have been very good about supporting your claims, so I'm putting this gripe here precisely because you are without sin and it's not directed at you.

But man, I get annoyed when the words "blind spot" or "bias" pop up in a critique with zero justification in the post or otherwise. Aaaah, it's like a more sophisticated way of saying the writing is bad because I don't like it, except couched in the language of objectivity and mind science.

Don't feel any obligation to respond to my insane ramblings. Just aaahh!!!

Expand full comment

> I say that as someone who likes EA-the-idea and dislikes much of EA-the-culture, of course, so I would say that. But I can't imagine fence-sitters and potential-interested people reading this, or any of your defense posts, and thinking "yeah, I really want to be part of that."

I'm in roughly the same place you are, but I think it's working on me, slowly and ethical-system-agnosticly. I appreciate the calm responses in the vein of "no, we're not actually the crazy thing you think, we are capable of modeling the world and doing math, and look, here's where your model and math are failing".

Expand full comment

Phrasing it as “we’re not the weird thing you think” reminds me of Old Scott’s Fear and Loathing at EA Global (https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/) (wait why did the hyperlink load like that? do substack comments work differently from the activity page than from the main thread?), where multiple paragraphs- and apparently Will MacAskill’s keynote speech- are spent on we are that weird thing and that’s good. Times change.

There’s a tension for the community, and I get that- it’s difficult to be weird yet taken seriously. Back then Scott thought the insect thing was good; now he’s tired of it being a bludgeon.

Expand full comment

(I get the same result from clicking your link from the main page and from my activity page and from my email notification. But I've had problems with ampersands in links.)

Expand full comment

To me, there's a difference between "weird" and "crazy"? Christianity was weird for a while, and then it went mainstream in European and European-derived society, but over time there were more and more sub-societies where it became weird again.

I was talking more about the "crazy" side of things, like terrorism or extreme utilitarian straw-men. The 10% thing is a nice example of the opposite of "crazy" - it's a self-imposed limit to allow ordinary people to have ordinary lives.

Mostly I tend to class EA as being a "progressive" movement, coming up with ideas that they think will improve the world. Some ideas may prosper, others may wither, but I'm glad that someone out there is paying attention to them, even if I no longer feel any ethical imperative to join in their reindeer games.

Expand full comment
founding

Another objection to the "you losers haven't become a mass movement" criticism is that EA, unlike most things which get described as movements, is not mainly trying to achieve its goals by influencing democratic government. This gives it a downward scalability which the more usual sort of political movement lacks: with the latter, there's some threshold of support you have to reach to accomplish anything at all. You don't necessarily have to become the majority, but you have to get big enough so that anyone who's trying to assemble a majority needs to bargain with you. On the other hand, a relative handful of EAs can buy a lot of mosquito nets even if they never poll well enough to beat out the lizardmen. In fact, the logic of diminishing marginal returns suggests that you'll get the most bang-for-buck out of that first handful of supporters.
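
(A toy version of that last point - my own illustrative numbers and notation, nothing from the post itself: suppose total impact from n supporters is a concave function, say

I(n) = a * log(1 + n),

so the marginal impact of the next supporter is I'(n) = a / (1 + n), which is largest when n is small. The first handful of donors is buying the very cheapest, highest-impact nets; the millionth mostly isn't.)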

Expand full comment

> The third is that - did I say terrorism didn’t work? I mean it massively massively backfires. ... Osama bin Laden tried terrorism, also did an impressive job, and the US took over the whole country that had supported him, then took over an unrelated country that seemed like the kinds of guys who might support him, then spent ten years hunting him down and killing him and everyone he had ever associated with.

This is true, but it's also entirely orthogonal to the question of "did it work?" As anyone old enough to be alive and aware of the world around them pre-9/11 can tell you, it *did* work. The goal of this act of terror was to instill terror, and he succeeded probably beyond his wildest expectations. So many pieces of modern-day American society would be unrecognizable to a time traveler visiting from August 2001, and not simply because of technological changes. Who can deny, with a straight face, that he set out to destroy America and to a large degree succeeded?

Expand full comment

Agreed. And to add a little more flavour:

The risk/reward analysis of a terrorist is likely to be quite alien to someone who is not a terrorist.

I'm pretty sure, as you say, bin Laden and everyone in his orbit thought the aftermath and results were quite pleasing, and anyone else joining their cause would replicate the act and the consequences a hundred times again if they could.

All that said, I don't think we're gonna find any EAs flying airplanes into datacentres or melting down a million GPUs anytime in the near future. I'm pretty sure the EA mindset and the terrorist mindset aren't compatible.

Expand full comment

Just because they had a major effect doesn't mean that it worked. Otherwise it's obvious why you think they worked: you saw that there were a lot of changes, declared those changes to be compatible with Al Qaeda's goals, then wondered why people disagreed. This is a fact about your definition of goals, and not a fact about Al Qaeda's leadership's thinking and influence in the world.

Or are you saying something like "make Americans remove their shoes at airports" was a line item on their list of goals?

See also: https://gwern.net/terrorism-is-not-about-terror

Expand full comment

Are you really going to reduce the massive impacts that 9/11 had on our society, on our laws and on our politics, to "make Americans remove their shoes at airports"?

Expand full comment

The argument doesn't fail even if extended to the entire range of things Americans have changed. Unless you think that Al Qaeda's leaders wanted to be hunted down, wanted the war in Iraq, wanted higher costs for flying, and wanted the TSA to be a pain - and would have been satisfied with all that. Which specific things do you think Al Qaeda had as goals, before 9/11, that they then accomplished? (Beyond, obviously, killing a lot of people by crashing a plane into them)

Expand full comment

Which specific things do you have in mind? 9/11 had the effect of creating the TSA and Homeland Security, and I'm not happy about those.

On the other hand, I suspect that the _general_ trend towards more surveillance (which I'm also not happy about) would have happened anyway. I don't think technological determinism is fully correct, but digital cameras have gotten vastly cheaper since 2001, as has computing in general (and therefore keyword monitoring and transaction monitoring).

Both the left and the right have justifications for surveillance, as does the NSA and various law enforcement agencies. My guess is that surveillance would have had the same trend if bin Laden had tripped in front of a truck in 2000, simply driven by the drop in costs.

Expand full comment

I think lots of the details about increased surveillance were pretty contingent on things like the Patriot Act - what type of thing are you thinking about? Stuff like NSA snooping being mostly dependent on their technical competence and not really on what politicians are immediately worried about? Or is it something more like "oh yeah it was the USA that got hit by 9/11, but looking at first world countries - the U.K., France, etc. - we see that things have gotten a lot more invasive for x y z reasons independent of the US"?

And as an objection, I agree that history can be "inevitable" in some sense, but details still matter. Remote work was likely possible years ago, but it still took until Covid before it was adopted! I think you can easily get lags or leads of up to a decade for inventions or institutions (iPhones, for example, probably happened as early as they could have, since they were still missing lots of features on launch). Ten years doesn't mean that much in the pages of a textbook, but we have to live through them!

Expand full comment

Many Thanks!

>I think you can easily get lags or leads of up to a decade

Yes, I think that that is plausible. But look at e.g. the proliferation of CCTV cameras. A lot of that isn't even government - many of these systems are just private security hardware. But it adds up to people being recorded multiple times a day in urban areas.

>Stuff like NSA snooping being mostly dependent on their technical competence and not really about what politicians immediately are worried about?

Yes, that is a large chunk of it. An initial mandate plus mission creep plus storage and compute and sensing all getting cheaper. Examples of drivers independent of 9/11: On the left "misinformation" (some of which really is misinformation, some - not so much). On the right - trying to track every pregnancy in red states.

Expand full comment
May 30·edited May 30

I believe that bin Laden was attempting to transform the Middle East, and attacked the United States because he thought that the United States was propping up the status quo in the Middle East. I don’t think that Al Qaeda in Iraq or its successor, ISIS, would have come into existence without the 9/11 attacks. On the other hand, the United States eventually destroyed ISIS. So it seems like the 9/11 attacks brought Al Qaeda closer to achieving its goals, but were ultimately a failure. The problem with using this as evidence that terrorism doesn’t work is that, at least in my view, Al Qaeda’s goals were impossible to achieve, so “getting closer to achieving its goals” was the best that *any* tactic employed by Al Qaeda could achieve.

Scott also discusses the recent attack by Hamas. Hamas appears to have significantly damaged Israel’s reputation internationally, and I’m not willing to pass judgement on the long-term effectiveness of Hamas’s attacks until we see how that plays out. As with Al Qaeda, it may be that terrorism won’t work for Hamas not because terrorism is a bad strategy, but because Hamas is in a no-win situation where no strategy will succeed. The PLO, which has basically abandoned terrorism, has not achieved obviously better results than Hamas.

Scott has chosen two examples of terrorists who did not achieve their goals. Two examples of terrorists who did achieve their goals are the Irish Republican Army and the Zionists who founded Israel. I’m not sure about the cause and effect here--perhaps these groups had successes in spite of, rather than because of, their use of terrorist tactics--but only looking at unsuccessful terrorists means drawing conclusions from a biased sample.

I should say explicitly that I think terrorism is wrong regardless of whether it is effective, lest any of the above be interpreted as implying otherwise.

Expand full comment

I'm not sure this is a very realistic view of what happened in the middle east in the recent past. You have to attribute 4D-chess levels of prescience to Al Qaeda. Yes, they saw the US as supporting corrupt regimes in the middle east and wanted to stop that. But it's quite the leap to connect that to the US then invading and destabilizing multiple Arab states, to the extent that Islamic radicals can replace them with their proto-caliphate. Really it's the exact opposite of Al Qaeda philosophy, because if the US loved these apostate dictatorships so much it would never betray them.

Expand full comment

I deny it with a straight face. Hassles at the airport are certainly a negative, but his goal wasn't to annoy. He had actual political objectives (the overthrow of most Arab governments, kicking out the Israelis) and he thought he needed to defeat the US to do that. Arabs are still ruled by regimes he would regard as unacceptable, with the possible exception of Gaza - only it's not exactly ruled by anyone right now, being a warzone.

Expand full comment

Did you select for location when you used old SSC survey data to see if EAs give more than non-EAs? If certain states are higher or lower than others, or there is a difference between rural and city, etc., wouldn't that mess up the data?

Expand full comment

What's the earliest age at which children can be given a culturally independent IQ test to try to find future von Neumanns? I don't know if this is being done, but to my understanding, instead of blindly saving millions from malaria or starvation just for them to breed more until malaria or starvation becomes a problem again, getting a few hundred probable geniuses a solid education might yield better results for the entirety of humankind. Is there such an organisation?

Expand full comment

I'm just here for the EA orgies. If I sign the 10% pledge will I get an invite, preferably hand delivered by a mysterious woman in a carnival mask?

Expand full comment

I think for that to happen, you need to step up to the next level. Adopt the philosophy, argue for it in the right places, make a name for yourself in EA discussion forums, and come to the notice of Important People.

Expand full comment
May 30·edited May 31

Stone previously attempted to dox a significant anonymous figure and then gave contrived reasons for why that weighty act was minor and somehow necessary to some point he was making.

I think this should significantly lower our evaluation of Stone's honesty, integrity, and worth.

Expand full comment

What significant anonymous figure?

Expand full comment

I really like this exchange. Lyman Stone is someone that I very often disagree with, but usually find quite worthwhile to read, and it’s always useful to hear what his disagreements are. I think he’s got something important when he’s pointing out that the most valuable things EA does are things that to some extent predate the movement (like IPA and the whole Gates Foundation set of research). But I consider those efforts to be part of the same broader movement, and think that EA grew out of those rather than in competition with them.

I’m a little disappointed in him not realizing that criticizing charities on the basis of overhead is a non-EA criticism, and in being so willing to deny objective morality that he thinks that caring about animal suffering is a sign of going wrong.

But I still found his set of critiques more useful than the majority that come out.

Expand full comment

>in being so willing to deny objective morality that he thinks that caring about animal suffering is a sign of going wrong.

How do you define objective morality, then? Also it's not "caring about animal suffering is a sign of going wrong," it's "there exists a number of shrimp more important than a human life" that is going wrong.

Stone doesn't deny objective morality AFAIK, but he's a Christian (conservative Lutheran), and puts infinitely greater weight on human life than on shrimp. Which is a particularly easy punching bag of EA and a bit of a petty complaint, but that's not too surprising a result of the communication dynamics (novel ideas get more attention, inside and outside of EA, and this particular idea is so blindingly ridiculous to most people as to damage the standing of anyone associated with it). Shrimp is the new poly for outsiders to mock EA.

Expand full comment

Unfortunately, it actually is "caring about animal suffering is a sign of going wrong." I've excerpted the relevant sections below.

"Most of us humans regard animal suffering, especially of vertebrate mammals, with sympathy. Psychologically this isn’t a mystery: vertebrate mammals (and many other creatures but especially vertebrate mammals) have a lot of shared-in-common visible responses to pain and discomfort and so we recognize the pain of other creatures, which hacks our empathy mechanisms which are adapted for intragroup bonding among humans, and causes us to feel for the animal a similar, if perhaps not always as acute, feeling as we would for a suffering human.

*It’s important to grasp that this behavior is, in evolutionary terms, an *error* in our programming.* [emphasis mine] The mechanisms involved are entirely about intra-human dynamics (or, some argue, may also be about recognizing the signs of vulnerable prey animals or enabling better hunting)."

"The EAist faces the trolley problem every morning, and tries to weigh the costs and benefits of the two sides. Whereas, the rest of us simply do not accept that there is actually a trolley problem: the moral weight of animal suffering is not in the animal but in the person and in particular what their behaviors tell us about how they will treat other people, and most particularly how they will treat us and others like us. "

Expand full comment

Fair enough, thanks for the correction

Expand full comment

What did you find useful in Stone's critique?

Expand full comment

I think his discussion of the overall trajectory of charitable donation, and the overall trajectory of malaria deaths, during the period of major EA action, is a helpful reminder of the overall magnitude of EA effectiveness relative to the goals that the movement has.

His discussion of what it would look like to apply EA principles to the saving of souls is informative as well - this is one of many cases where he's been useful at showing me how someone with a very different set of social and religious views would draw different conclusions from premises I agree with. (More often I see it in his discussion of the implications of demographic changes.)

Expand full comment

Thanks for expanding!

Expand full comment

Re: saving of souls. Nearing death, John von Neumann converted to Roman Catholicism.

On his deathbed:

Father Strittmatter recalled that even after receiving the last rites, von Neumann did not receive much peace or comfort from it, as he remained terrified of death and unable to accept his circumstances.

Of his religious views, von Neumann reportedly said, "So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end," referring to Pascal's wager. He confided to his mother, "There probably has to be a God. Many things are easier to explain if there is than if there isn't."

Expand full comment

Note that malaria fights back: mosquitoes develop resistance to insecticides, plasmodia to anti-malarials. And of course, as population grows, the cost of anti-malaria efforts grows with it. Plus there's probably some improvement in reporting of malaria deaths. So the plateau isn't as discouraging as it looks. But yes, we haven't won against malaria, and when we do, the EA movement won't get to hog all the credit.

Expand full comment

I have the strong impression reading Stone's piece that much of his _real_ problem with EA is that it's not religious and (1) he doesn't like things that aren't religious and (2) if you take EA seriously it may make you think worse of religious charities and he doesn't want people to think worse of religious charities.

Expand full comment

I read the critique more that it _IS_ religious, but with different (and conflicting!) beliefs.

Expand full comment

I think you're mistaking a bad-faith rhetorical move for a serious argument. That is: no, obviously EA is not in fact a religion, but a tactic religious people _love_ to use when arguing with non-religious people or groups is to say "well, of course X is also a religion, it just doesn't admit it" where X is atheism, or evolution, or science, or effective altruism, or socialism, or "social justice", or pretty much anything else. But EA is obviously not a religion in any useful sense -- "religion" is a vague fuzzy term but there are lots of characteristics that religions tend to have and things other than religions don't, and EA has approximately none of them.

In any case, while Stone's more recent doubling-down does explicitly take the "EA is a religion too, so there!" route, his original piece doesn't particularly. It says things like "EAists don’t give to missionaries — at all, AFAIK. Religious charity isn’t their jam." and "one reason many of us are skeptical of EAism is because baked into its claims is a denial of the spirit" and it contrasts EAs' alleged unwillingness to be explicit about what they care about with "e.g. religious charities which state theirs [sc. their 'welfare function' -- gjm] very clearly".

So, actually, I slightly take back what I said earlier. I don't think you're mistaking a bad-faith rhetorical move for a serious argument, because that bad-faith rhetorical move isn't really in Stone's article that we're discussing right now, though later on he embraced it gleefully. You're _making_ a bad-faith rhetorical move, and I think it would be better if you didn't.

Expand full comment

"no, obviously EA is not in fact a religion"

I'm NOT religious, and this is not obvious to me at all. A big issue I have with EA is that it really feels like basically a religion.

I see what you mean by "ok well what is a religion, some people say socialism or evolution are religions" and fair. But unless you define religion as requiring "supernatural theology", there will be grey areas as to what's considered religion or not. Given that "religion" has certain connotations in the modern era, I can see why groups would want to distance themselves from the label.

To me, YOU seem to be operating in bad faith, smearing both ME and Lyman as making "bad faith rhetorical moves". Your entire first comment was a bad faith attack on Lyman! (He's ACTUALLY just religious!)

Expand full comment

I genuinely don't see how EA is "basically a religion" in any useful sense.

Here are some features that things everyone calls religions commonly have and things no one calls religions commonly don't. Belief in gods, spirits, and the like. Belief in a post-mortem judgement and afterlife somehow contingent on it. An elaborate system of doctrines to be believed. An elaborate system of ethical principles to be followed. An elaborate system of actions to be performed at particular times, on particular occasions, etc. Somewhat-obligatory regular gatherings of adherents to perform those actions. A hierarchy of authority figures. The idea that people who are part of the same religion are something-equivalent-to-family. Sacred writings that one is not supposed to doubt. Sacred spaces in which one is supposed to behave in special ways. Activities intended to induce altered states of consciousness. Traditional ideas venerated because of their (actual or alleged) origination from revered figures in the past. The idea that the religion is supposed to be the most important thing in its adherents' lives.

(Of course I am not claiming that any of these is a _necessary_ or a _sufficient_ condition for something to be a religion. Just as, to take a famous example, a "game" doesn't have to be done for recreation, or be competitive, or have precisely-stated rules, or etc. etc.)

You could kinda claim that EA has an elaborate system of ethical principles to be followed, though that seems to me like a bit of a stretch. You could kinda claim that EA is supposed to be the most important thing in its adherents' lives, though that too seems to me like a stretch. I don't think it has _any_ of those other features. (Next-most plausible is the one about actions adherents are supposed to perform, but "giving some money and/or time to allegedly effective charities" doesn't have anything like the _specificity_ of e.g. "facing Mecca and praying five times daily" or "going to church every Sunday". Way too many other things are as religion-like as EA in this respect. Having a job, living somewhere where there are regular garbage collections, having a chronic medical condition that needs medication or exercises, having a partner, having a baby, playing in an orchestra, etc., etc., etc.)

If you tell me that you really, truly think EA seems more like-a-religion than not-like-a-religion to you, then fair enough, and my apologies for the accusation of bad faith. But in that case I just don't understand at all why it seems so. What about it makes it religion-like?

(I agree that groups may want to distance themselves from the label. But if the implication is "... and not liking the term's implications is the only reason why people would say that EA isn't a religion" or something, I don't agree at all and don't see any reason why I should.)

I don't _think_ my original comment was a bad-faith attack on Lyman, nor was its point to say "he's actually just religious". To me "bad faith" implies saying things one doesn't sincerely believe, or that one hasn't troubled to think about the truth of because saying them is too convenient[1]; I really do think that a substantial part of Stone's dislike of EA is that it's in competition with religious charities and tends to be practised by people who aren't religious. That may be _uncharitable_; it may in fact be _wrong_ (though I really don't think it is); it may be _rude_; but I do actually seriously believe it and I'm not arguing in bad faith.

It's possible that I'm too ready to decide that someone making "X is really a religion" arguments is arguing in bad faith. I can think of several ways that could be. (Maybe the way I understand the word "religion" is highly atypical and the people making those arguments genuinely have an entirely different notion of what makes something a religion, according to which EA is in fact very religion-like. Maybe I'm missing lots of ways in which EA is religion-like or imagining some of the ways in which it seems very not-religion-like to me. Maybe I'm right about those things but underestimating how easily a reasonable person can be wrong about them.) In that case, who knows?, maybe I'm all wrong about Stone too.

Perhaps you'll explain why you think EA feels like "basically a religion" and I'll be enlightened. (No promises, though.)

[1] cf. the Frankfurtian notion of "bullshit".

Expand full comment

Religions are hugely complex, multifaceted entities. The attributes that you associate with religions are pretty important to many, if not most of them! But religions are not just what they say they are, nor are they just the institutions or organizations that claim to represent them; they're also the objective effects of religions on the people who associate with or follow them. Think of a group of friends who know each other through church and hang out afterwards - you can't disassociate that group's get-together from the religion itself, or even from the group itself. After all, they wouldn't be friends without it.

That's my definition - it may be overly expansive, but that's fine - I'm just looking at it from a broader teleological or consequentialist perspective.

Under this notion, were I an anthropologist, I would consider Effective Altruism to be the largest and most influential denomination of the broader New Religious Movement of "Rationalism", with its primary nodes being the Bay Area, the Boston Area, and certain parts of the internet. "Rationalism" is a utilitarian religion that's generally atheistic (but not incompatible with theism), though somewhat interested in the mystical (especially in a Buddhist or New Age sense). It's the most recent millenarian cult out of San Francisco, but certainly not the first, and probably not the last.

Does "Effective Altruism" fit the definition you laid out? Let's see (some of these answers may be a tad tongue-in-cheek - bear with me)

> Belief in gods, spirits, and the like.

No, but generally a belief in "The Singularity" or the potential for such (AI Risk - that AI *could* be risky in such a way). Many believe in Jhanas or similar Buddhist woo

> Belief in a post-mortem judgement and afterlife somehow contingent on it.

Rare. Generally confined to die-hard AI risk folks who believe in ideas like Roko's Basilisk (NOT a niche idea btw - see Musk and Grimes)

> An elaborate system of doctrines to be believed.

Obviously - including weird things like shrimp suffering, AI risk, polyamory.

> An elaborate system of ethical principles to be followed.

Obviously - see the explanations why you need to care about shrimp suffering, or wild fish suffering.

> An elaborate system of actions to be performed at particular times, on particular occasions, etc.

Absent from EA and similar movements/groups, far as I know

> Somewhat-obligatory regular gatherings of adherents to perform those actions.

Mostly absent - though we're starting to get this (see things like ACX meetups, vibecamp, or similar events)

> A hierarchy of authority figures.

We're on the blog of one of them. But seriously, William MacAskill is the de facto leader, and Peter Singer and Bostrom are the main ideological fathers. Hassenfeld and Karnofsky are important cardinals.

> The idea that people who are part of the same religion are something-equivalent-to-family.

Give it 100 years

>Sacred writings that one is not supposed to doubt.

"Sir, have you read The Sequences?"

> Sacred spaces in which one is supposed to behave in special ways.

LW & ACX comment section

> Activities intended to induce altered states of consciousness.

LOL - as if psychedelic use isn't one of the biggest stereotypes of EA people

> Traditional ideas venerated because of their (actual or alleged) origination from revered figures in the past.

It's only been around a few decades! Give it time!

> The idea that the religion is supposed to be the most important thing in its adherents' lives.

If anything, the biggest issue with EA is it doesn't lean into this *enough*

Expand full comment

You said some things about religions (with which I agree) and then said "That's my definition", but I'm afraid I don't see anything at all definition-like there; I am left without any idea what "your definition" is.

I agree that internet-rationalism has some religion-like features, though not enough that I would be inclined to call it a religion (or basically a religion, or not much different from a religion, or whatever other similar hedged versions might be on offer).

Tongue in cheek is fine, but it seems to me that _all_ your comments about the typical-features-of-religions I mentioned are tongue in cheek. Perhaps I should take that as conceding that in fact EA is not at all religion-like after all? :-). Anyway, treating them as serious in order to respond to each since I don't know _how_ tongue-in-cheek you intend them to be:

Some sorts of singularitarianism are indeed somewhat theism-like, but that sort of singularitarianism isn't a part of EA. Belief in "Buddhist woo" even more obviously so. ("Being English is a religion! Look, lots of English people are members of the Church of England!")

Roko's basilisk is absolutely a niche idea, and the fact that Elon Musk and Grimes have heard of it doesn't make it any less so. (What's relevant isn't who's _heard_ of it, it's who _takes it seriously_.)

You don't need to care about the suffering of shrimp, or believe that AI risk is a big deal, or approve of polyamory, in order to be an EA. (Nor in order to be an EA "in good standing", so to speak.)

Things like ACX meetups are not (for the moment -- I guess all sorts of future developments are imaginable) remotely like (say) church services. No one thinks they're in any way anything like obligatory, they're things people do for fun.

Scott is not an authority figure, however much people like to joke about him being one. Nor Singer, Bostrom, Hassenfeld, or Karnofsky. Being an authority figure doesn't just mean that a bunch of people agree with you and think you have good ideas, it means that ideas are supposed to be believed, or instructions followed, _just because they come from you_. (In terms of actual religions, consider the difference between a _pope_ and an _eminent theologian_. The former is an authority figure, the latter is merely an expert.)

If you want to say "in 100 years, EA might have developed into a religion" then sure, I agree, it might. I thought the question was whether it's (something very like) a religion _now_.

The "Sequences" aren't an EA thing, they're an internet-rationalist thing. And I don't know about you, but I only rather rarely hear people saying "read the Sequences" these days, especially not in a way that suggests they think everything in them should be assumed to be right. But, again, I agree that internet-rationalism is more religion-like than EA.

If LW and ACX comment sections are "sacred spaces" because they have rules, then _everywhere_ is a sacred space.

Beer-drinking is a common stereotype of Germans, but that doesn't mean that being German is religion-like. I'm sure lots of EAs do drugs, but so far as I know they don't do drugs _as EAs_, if you see what I mean.

Again, in a century's time the successors of today's EA may be looking back at Singer and the like as holy figures whose ideas are to be treated with reverence. If so, then EA-in-100-years will be that much more religion-like, but once again the original claim was that EA-now is just like a religion.

I'm afraid that at the end of this I'm still completely confused as to how EA is supposed to be just like a religion. The first bit of your comment says "That's my definition" without offering anything that looks to me at all like a definition. The rest of it jokily claims that EA has all the characteristics I said religions often have, but I can't tell whether you _actually believe_ any of those claims and none of them looks credible to me.

Expand full comment

I think your rebuttal of Lyman Stone is cogent and effective, consistent with most of your postings. I'm sympathetic to the EA movement (enough that I donate to EA/EA-adjacent charities and other charities that reflect my interpretation of the underlying philosophy). However, I find myself in disagreement with some of its tenets, at least as expressed by Peter Singer. I disagree that all human lives should be treated as equally valuable, and that animal lives (or the lives of some animals) should be considered on a par with human life. The question is partly philosophical (no doubt Singer and many others have thought far more deeply than I), but mainly practical. Valuing all as equal leads to obvious conflicts between reason and emotion, and also raises certain kinds of instabilities (e.g., how should I value the hopefully 100 billion human lives of the next 2 centuries against the 8 billion around now?). I suggest an inverse-square principle, where charitable donations should align at least somewhat with physical or emotional distance.
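
(To make that concrete - a toy formula of my own, not something Singer or EA endorses: let d_i be my physical or emotional distance to cause i, and give cause i the share

w_i = (1/d_i^2) / Σ_j (1/d_j^2)

of my donations. Nearby causes dominate, but distant ones still get a nonzero share.)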

Expand full comment

> I disagree that all human lives should be treated as equally valuable, and that animal lives (or the lives of some animals) should be considered on a par with human life.

Hmm, I don't think Singer argues that human lives should be treated as equally valuable as some other animal lives. He argues that we should consider their interests equally, but also that human lives are more valuable than other animal lives (e.g., due to greater capacity for suffering/pleasure, longer life spans, and that sort of thing). And he argues against the view that humans and animals are unequal just because we are humans and they animals (i.e., speciesism), but as mentioned he thinks humans and animals can have different moral worth for other reasons.

Expand full comment

I expect you're correct, partly because of your better understanding of Singer and partly because of my imprecision of wording. I concur that all humans merit consideration. I'm not sure what "considering their interests equally" actually means, but my core problem remains: there is a conflict between a person's rationally-satisfying philosophical tenet and their viscerally-defined emotional inclination. One can argue that reason should be considered more seriously than emotion, but it seems unwise (and futile) to entirely ignore emotional guidance.

Expand full comment

> I suggest an inverse-square principle, where charitable donations should align at least somewhat with physical or emotional distance.

I think it's kind of insane to see a flaw in human cognition and a moral principle, and then demand the principle bend to the flaw.

I think there's lots of historical precedent to say that this is a bad idea! Do you think someone uncomfortable at the suffering of slaves should have modified their moral principles to define skin color as a relevant moral dimension? That someone abusing a family member should add a moral term for "it's okay if a father does it but not anyone else", exempting them from normal moral intuition? I don't think you would! And I don't think adding inverse-square terms into those considerations would justify those mental actions either.

Now, maybe you're saying that the way you consider those issues would be hampered too much if you still tried to consider them, and I think it's fine to use this as a coping strategy, but it needs to be explicitly acknowledged as a coping strategy and not part of the moral calculus!

Expand full comment

Arguably I should stay out of topics like this where my knowledge is so limited, but I'm not quite ready to give up. "Moral calculus" doubtless has a clear meaning within a community, but to me it (along with the general EA view) carries overtones of computability/precision that may be unrealistic. I agree that some misbehaviors are or should be universally sanctioned, but indeed I would put up (for example) with certain sorts of misbehavior from my kid or my cat that I would strongly object to in strangers. In fact, I might find such misbehavior amusing when emanating from a few, but seriously annoying from others.

Expand full comment

No, it's good to ask when you don't know!

> "Moral calculus" doubtless has a clear meaning within a community, but to me it (along with the general EA view) carries overtones of computability/precision that may be unrealistic.

I don't know how else to say something like "this feels worse to do than that for various morality related reasons that I do not want to go into right now" that also preserves the flow of the sentence. Suggestions welcome.

> In fact, I might find such misbehavior amusing when emanating from a few, but seriously annoying from others.

Yes, and that's fine. But I consider this fine because both of us know and understand that was a statement about your feelings and not about morals. I don't want to see blood and gore on a daily basis, because my temperament isn't suited for it, and also someone bleeding is usually a bad thing. But I don't also go on to argue that trauma medics are immoral because they can handle blood and gore, but I can't.

To a large extent, this type of confusion happens because we derive moral judgment from moral emotion (you can't get to "killing is bad" without having some sort of feeling about the inherent value of people), but there are obvious confounders to this. The biggest reason I am suspicious of treating moral feelings as moral fact is that emotions are primarily an artifact of evolutionary pressures, and *evolution is blind and dumb*. Blind, because it doesn't understand that the-set-of-emotions-encouraging-cooperation can serve as a foundation for anything. Dumb, because if it has the chance to co-opt those emotions for personal genetic gain, it'll do it (see: the repeated pattern throughout history of young ideologues claiming to believe in goals which imply an ascetic life, but oh wait, now that they are in power they sure look like they have a lot of kids).

I think acknowledging that the core of our morality is also much less sophisticated than we would like is an act of bravery, and not intelligence. So the sin of pride for EA isn't that they think they're smarter than everyone else, but that they are the only suffering souls who can meet this contradiction and survive intact.

Expand full comment

"I suggest an inverse-square principle, where charitable donations should align at least somewhat with physical or emotional distance."

Funnily enough, Scott *has* talked about exactly this before: https://slatestarcodex.com/2013/05/17/newtonian-ethics/ (Newtonian Ethics):

"We often refer to morality as being a force; for example, some charity is “a force for good” or some argument “has great moral force”. But which force is it?

Consider the possibility that it is gravity. In statements like “Sentencing guidelines should take into account the gravity of the offense”, the words “gravity” and “immorality” are used interchangeably. Gravitational language informs our moral discourse in other ways too: immoral people are described as “fallen”, sin is a “weight” upon the soul, and we worry about society undergoing moral “collapse”. So the argument from common usage (is best argument! is never wrong!) makes a strong case for an unexpected identity between morality and gravity similar to that between (for example) electricity and magnetism.

We can confirm this to be the case by investigating inverse square laws. If morality is indeed an unusual form of gravitation, it will vary with the square of the distance between two objects..."

Expand full comment

Your "differentiation" doesn't differentiate EA-ers from anyone else that donates to charity. Every person who has thought about and done charitable giving has the same points of 1, 2, 3 and whatever cause they are involved with is a social technology to keep them giving. Is every intelligent person who has thought about their charitable cause, actually an EA? No, this is a motte and bailey game you are playing, claim EA is this broad thing that everyone agrees with (think about your giving and give! feminism is just equality!) but now we will retreat ...

Following your definition it seems like the core tenet of EA is that you have to think you are better at charity than others. It really doesn't matter what you do exactly (there is no general agreement on EA values or philosophy) as long as you have thought about it enough to personally feel that your cause is righteous and you are a charitable good person, for example because your cause is mathematically superior using some numbers that you invented, or because you are Actually Doing It Unlike Them. You can see how this is obnoxious.

Expand full comment

Sorry, not just that your giving is positive and you're doing good, but that you are maximally good, or at the very least that your charity is superior to what other, less effective charitable givers are doing. To be an EA you must have this smug, holier-than-thou outlook, and in fact you don't need anything else. Just argue that you are better than others; that's actually the whole philosophy.

Expand full comment

If I try to persuade you to do anything, I have to think that it would be better for you to do it than not do it. That isn't bound up in some sense of my being superior to you, but based on reasons I've considered and found convincing, so I lay them out to you in the hopes that you will find them convincing too. I think it's true that many people will say "you're suggesting I do what you're doing, therefore you think what you're doing is better than what I'm doing, therefore you think you're better than me, therefore your whole philosophy is based on expressing your superiority to me." But I don't think that fallacious chain of reasoning is a good reason not to try to find people who will take me in the spirit that I intend and possibly find my reasons convincing.

Expand full comment

Suggesting people do the morally superior things you do is called being holier-than-thou, and indeed people find it offensive.

My point with effectiveness is that the key differentiating element of EA vs. normal charity is that an EA has to make some sort of justification of why they are effective. Consciously knowing that you are doing the Correct Thing, something that is Better Than Another Thing at the very least, is extremely important. An EA will always have moral advice on hand that they can spout, always able to tell someone what they think is the proper thing to do morally speaking. Not so with other charitable giving that doesn't claim to be better than anything else.

Expand full comment

You added the word "morally." I think that two people who devote the same amount of time and money to a charity are equally moral, all else being equal, regardless of the charity's effectiveness. And of course there's the principle of the widow's mite—the idea that the person who gives a large portion of their meager earnings is more praiseworthy than the person who gives a tiny portion of their large earnings, even if the rich person's donation is larger in absolute terms. When I say one charity may be better than another, I mean it may do more good. That's not a function of the relative morality of the giver. Indeed, someone might hear my arguments for giving to Charity B instead of Charity A, disagree with them, and keep giving to Charity A. That person wouldn't be any less moral in my eyes, even though I might sincerely believe that the same money given to Charity B would have saved more lives or otherwise had more benefit.

But I also don't think I buy the idea that recommending an action as moral makes one holier-than-thou. You said that charitable giving doesn't claim to be better than anything else, but it does: it claims to be better than not giving. Based on your framing, it seems like one couldn't make a fundraising appeal on moral grounds without being considered holier than thou. Is that what you mean to say?

Expand full comment

Almost nobody who donates to charity has seriously considered effectiveness (Scott's tenet #2). Donations to (e.g.) museums do not pass even a basic inspection if you're thinking about effectiveness.

Expand full comment

Probably people who donate to museums are not trying to maximize the number of QALYs added per charitable dollar. They have a different value system but that doesn't mean they haven't considered alternatives. Isn't ranking the most effective museums to donate to basically EA, just like Scott said choosing an AI safety institute is?

I thought this conception of EA was value-agnostic, just making sure that if people have some value set, they are indeed doing the actions that optimize that value set. Common sense! But I guess we both know that actually EA has some values, and the version Scott presented is the motte we're running to.

Expand full comment

If you take a museum donor and ask them which is more important:

1. a museum buying (say) 1/1000 of another piece of art

2. saving someone's life who would have died of malaria

they will agree that it's #2, but they don't donate accordingly. Your argument would work if they believed museum art was worth more than African lives (at the relevant dollar amounts), but they don't actually believe that.

Expand full comment

First, I am not claiming that every charitable donor has thought about their donation critically, though I would say most probably have some reason to think they are addressing something important, contrary to your characterization of the vast majority as ignorant.

For the ignorant donors, their donation to the museum was not intended to optimally address the top problems in the world. Probably their day-to-day job is not optimal either, and they don't really think about how many extra minutes they could free up if they swapped to drinking Soylent, which provides excellent nutritional value with minimal prep time and a handy form factor. They are simply trying to do something positive that works for them and their interests, not necessarily the optimum.

Equally likely, they say they care about malaria deaths because of social pressure but actually don't; that would be the revealed-preferences take. Another argument would be that they feel responsible for the museum and its survival, and that they can be more impactful there than in areas that are not their wheelhouse. Or they feel that art and history writ large are more impactful than malaria prevention; yes, they are a smaller part of that, but they feel that contributing to that end is more valuable overall. There are many accounts on which museums are the effective choice.

Expand full comment

I think this kind of thing is also important, though:

https://www.youtube.com/watch?v=tiL3z0a9-eo&list=PLvb2y26xK6Y4i1rQVRppfR3mBHcwybGA0

I have no problem with both "donate to local museum" *and* "donate to life-saving charity". My beef is with big splashy stupid events like the Met Gala, which waste more money than is raised (granted, citation needed, I have no idea of the figures spent versus raised).

Expand full comment

We can resort to the revealed preferences explanation, though it's usually considered uncharitable. They do believe the art is worth more; they are also aware of just how socially radioactive it is to say that out loud.

People are well-aware that they're *supposed* to care more about some poor rando 10K miles away, and so that's what they say is more important. In reality, in the heart of their subconscious, visiting a museum to see a piece of art they played some (small) role in purchasing is of vastly more value than a person who will live and die without ever knowing or caring they exist.

EA attempts to address this conclusion with various hacks, as they are distinctly aware that a maximizing ideology is a dangerous thing (https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous), but for many people that falls flat and feels unsatisfying.

Expand full comment

>In reality, in the heart of their subconscious, visiting a museum to see a piece of art they played some (small) role in purchasing is of vastly more value than a person who will live and die without ever knowing or caring they exist.

Yup! I think of ideologies that give high priority to geographically and socially distant people as profoundly mismatched to what most people _actually_ care about (although there does seem to be a small niche of people who genuinely care about distant people).

Expand full comment

The reason that "bed nets" are the single object most associated with EA ("that castle" is #2) is that EA people have always maintained that those nets help save a huge number of lives at trivial cost, and that this is a very good metric for charitable effectiveness. There's no motte and bailey there, at least—they've never hidden that. It's a tremendous part of what Effective Altruism is about, the idea that there are logical arguments to make for some causes relative to others.

Expand full comment

If they weren't trying to make themselves sound better than anyone else, they'd just call it "Altruism"

Of course the Rationalists (the philosophical predecessor to EA), who see themselves as more rational than everyone else don't call themselves the "Actual Rationalists", so maybe that name would stick too

Expand full comment

I don't think EA critics actually care about the specifics of EA at all, and so arguing about them won't help.

I think EAs scare people because, from the perspective of an outsider, EAs are unaligned agents trying to amass as much power as they can. This isn't any different from how EAs react to the prospect of AI agents. No matter how "good", in some abstract sense, your beliefs are, they're always at least partially wrong from the perspective of everyone else. There's no such thing as perfect agreement. Therefore, people don't like groups that try to amass power, because that always means that there will be increased power behind beliefs that they consider "wrong". People are fine with specific EA-supported charities, because those specific charities aren't trying to rally power behind a particular worldview.

Importantly, this has nothing to do with the specifics of EA whatsoever. People are reacting to EAs the same way they would react to Protestant groups trying to amass power, or Catholic groups trying to amass power, or vegan groups trying to amass power, or the Russian government trying to amass power, etc.

Expand full comment

Good way of putting the problem into EA-adjacent terms

Expand full comment
May 30·edited May 30

> without checking the long literature of people discussing it and coming up with responses to it

There is a long literature of people responding to the criticism without understanding it, and it's one of Scott's biggest blind spots too. The criticism of EA is that "it's not much new, cool that you are getting people to think about being more effective, weird that it leads some people to egomaniacal and somewhat evil places like the excesses of 'longtermism.'"

Scott's response to this was to try and pretend that "give 10 percent to charity" is an innovation and not literally a tenet of almost every Protestant church in the country. Or that "think about where you give your money" and "actually do the thing you say you are doing" is new or innovative.

It's not new or innovative. It's good, usually, except when people lose touch with reality to the point that they fall under the spell of a grifter like SBF or a ghoul like [name omitted, there are plenty in the movement.]

The thing about the critique that EA is not much new, yet is treated by its adherents as something totally new, is that it's not a statement that the movement is bad. It's a social critique. It's a "y'all are acting like you're smarter than everyone else and you're obviously not. Sit down."

Scott misunderstands the critique and jumps into defending EA on the merits. It's not a merits critique. It's a perfectly reasonable basis for people who have been doing EA (but not calling it that) for years to think that y'all are being arrogant assholes. Rather than trying to explain to everyone why you're not an asshole for the way you're explaining things to people...just take the note. It's a good critique. EA is a nice way to get people to give more and think about where they give the money. It's not nearly as clever as you think it is, though.

...And it's not going to fix the major problems of the world, because the problems in the world are caused by systems of entrenched power that exist only by virtue of their exploitation of the weak. "EA distracts people from actually effective social action" is a valid critique as well. And it's one you can disagree with or agree with...it's unfalsifiable. That doesn't mean it's wrong though. Accept the fact that a lot of people aren't going to like EA, and if you like it, just keep doing it. If it's actually effective and not $15 Million Cambridge estates, people will get on board over time.

Expand full comment
May 30·edited May 30

I sort of get that, but if you’ve been doing it for years because it’s good, why aren’t you happy that someone else arrived at this same valid conclusion (whether independently or not) and is eager to help out? Is it just a matter of credit?

If the last religion that condoned holy wars finally came to the conclusion that holy wars were immoral and credited that idea to a reinterpretation of their own scripture, should all the religions that already reject holy wars embrace their new allies, or reject their new allies for lack of citation?

Expand full comment
May 30·edited May 30

Some people are happy that they figured out what others had figured out before. Others believe that EAs only think they figured it out but actually didn't, because (insert misstep/critique here), and that therefore it's bad. Either way, that doesn't invalidate the critique.

And if it's just a matter of credit, the issue is that EA people are taking credit for ideas that have been invented many thousands of times, but get butthurt when people are like "yeah, you didn't invent that." You'd think EA people, in accordance with their principles, would be more than happy to charitably acknowledge the unoriginality of their ideas and bury the hatchet, but they aren't.

Why not? My theory is that it's because EA people are pretty much like non-EA people: petty. And very prone to disguising self-interest in a cloak of self-sacrifice. Intentionally or unintentionally.

Expand full comment

> And if it's just a matter of credit, the issue is that EA people are taking credit for ideas that have been invented many thousands of times, but get butthurt when people are like "yeah, you didn't invent that." You'd think EA people, in accordance with their principles, would be more than happy to charitably acknowledge the unoriginality of their ideas and bury the hatchet, but they aren't.

https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/

Relevant quote

> The most important thing is having a Schelling point, and ten percent is nice, round, divinely ordained, and – crucially – the Schelling point upon which we have already settled.

With "divinely ordained" linking to tithing.

https://www.givingwhatwecan.org/pledge#faqwhy-is-the-pledge-10

Relevant quote:

> Ten percent also carries with it a strong historical connection to the idea of tithing, a tradition in Judaism and Christianity of giving 10% of your income to charity or the church. Islam has a similar practice, Zakat, in which those who are able give between 2.5% and 20% of their wealth to those who are less well-off.

https://80000hours.org/articles/earning-to-give/

> These ideas aren’t wholly original to us or the effective altruism community. John Wesley, the founder of Methodism, preached about a version of earning to give in the 18th century. He advocated the principles: “Gain all you can, save all you can, give all you can.” And he even caveated this advice with sensible constraints, saying in a sermon that we shouldn’t work so hard that we hurt ourselves and that we should “gain all we can without hurting our neighbour.”3

And famously, Friedrich Engels worked in textile manufacturing management in order to have enough money to support Karl Marx as he was writing Das Kapital. Since Engels viewed Marx’s writing as particularly important and potentially highly impactful, this decision is closely aligned with the ideas behind earning to give.

That's three separate sources where EA does not claim to have invented this thing. Where are you seeing these petty EAs taking credit?

Expand full comment

...I was responding to a reply that was suggesting this whole flap was about credit. The 10 percent to charity thing was listed by Scott in this very post as one of three things that "define[s] effective altruism in a way that separates it from universally-held beliefs."

This is wrong, though, and obviously so. There is no such thing as "universally-held beliefs," so I'm going to steelman this and say he meant "widely-held." And none of the three things he cited separate EA from widely-held beliefs.

Expand full comment

Your original post also says that EA is taking credit. Do you disavow that? I don't see anything from you that could be taken as a walk-back of that strong statement. You also talk about adherents treating it as something new, which, last I checked, is a distinct concept from something merely different.

Expand full comment
May 30·edited May 30

EA is not a "thing," capable of taking credit, so no. I have heard EA supporters talk about its tenants as if they are new and innovative. It is not. No part of it is. I stand by those statements (the main points), and am not interested in getting into pedantry otherwise.

Expand full comment

It's a motte and bailey. Just like "hey, do you think men and women should be treated equally? congrats, you're a feminist!" EAers are not everyone who has a reason for their charitable giving, even if the broadest framing paints it that way. They actually are a more distinct set of people who disproportionately give to certain causes, and who have an identifiable culture. When challenged on its actual nature, EA gets recast as something broad, well-accepted, and positive (actually doing charity, thinking about it). Obviously we agree that giving is good ... but is that really all EA is?

What we actually have is a group of people who want to Do Good, and in order to determine what is Good, they insist on performing some sort of pseudo-scientific quantification and rational inquiry. There is no agreement on the value of certain outcomes, or what data should be fed in, but there is an agreement that the method to determine Goodness should be via Science and Quantitative Reasoning, and that's what they will do even if it makes no sense. Obviously what you'd expect is a group of euphoric /r/atheism fedora wearers enlightened not because of some phony god's blessing but by their own intelligence, and indeed that is the case in reality. The issues they find pressing and how exactly the movement operates is more explained by this than any sort of philosophy.

Expand full comment

This is a fantastic and pithy explanation. If you started with a pool of cult members, EA would look like building monuments to their God so he would spare them. If you started with a pool of Republicans, EA would be cutting the social welfare state and redirecting the money to national defense and religious institutions so American values can prevail "in the long term." If it was hippies it might be healing crystals.

Instead it was a bunch of nerds on the spectrum, so you get mosquito nets and "AI risk" and buying fancy Cambridge estates so people have a space to "think." It's fine. I like nice opera houses too. You know, the robber barons believed that by providing cultural institutions to cities, they were acting in the long-term interest of society. **None of this is new.**

In the end, it's still just philanthropy, and most of it will be wasted on symptoms because you can't fix social problems that are caused by systematic social forces through philanthropy.

Expand full comment

Wytham Abbey was near Oxford, not Cambridge (England) if that's what you're alluding to...

Though I wouldn't be entirely surprised if they acquired something fancy in Cambridge (MA) or Cambridge (England).

Expand full comment

"Obviously what you'd expect is a group of euphoric /r/atheism fedora wearers enlightened not because of some phony god's blessing but by their own intelligence, and indeed that is the case in reality."

Ah now, that's too harsh and unfair. They're mainly nice middle to upper-middle class college students who are slightly weird and WEIRD, and they got converted by gurus like MacAskill and Singer into doing things like this. They're from secular backgrounds, so any charitable aspirations were not directed in those channels; traditional social organisations like Boy Scouts (or whatever name they're going by these days) were too normie, so their innate impulses about wanting to help were ready to be steered by the first convincing person who came along and flattered their intellect as to "we can do this really well and even better than the normies because we're going to use our brains and all hail BAYES".

Expand full comment

Not all of /r/atheism was truly euphoric, they were mostly nice awkward weird middle to upper middle class tech nerds. Just the public face of the movement was fedora adorned, and so it is with EA, where I'm sure not everyone is in a polycule even though it is provably more effective and altruistic.

Expand full comment

Public perception is all, unfortunately, and if the public is hearing about EA due to scandals and court cases, well....

Expand full comment

I am happy that other people decided, on a secular basis, "hey, doing good is a good thing".

I am not happy that they like to pat themselves on the back about being the first people ever to realise this, and that all the other organisations and units who thought doing good was a good thing were wrong because they were doing it for the wrong reasons (religion, ugh) or the wrong way or don't recognise their hidden biases ("you see, you only want to donate to the soup kitchen down the street because of Dunbar's Number as it is within your circle of immediate concern, but *we* donate to non-human animal activism and AI long-termism because we're *smart*").

https://www.eaeindhoven.nl/what-is-effective-altruism

"Ending factory farming

Why this issue?

People in effective altruism try to extend their circle of concern – not only to those living in distant countries or future generations, but also to non-human animals."

I'm extremely glad they're donating to providing medical supplies in poor countries, by the way. Keep it up!

But stop trying to turn yourselves into yet another lobbying organisation; someone needs to do the hands-on stuff of cleaning up the shit and vomit, and a lot of people already prefer schmoozing in Washington:

"Improving decision-making

Why this issue?

People who want to do good often prefer to directly tackle problems, since it’s more motivating to see the tangible effects of their actions. But what matters is that the world gets better, not that you do it with your own two hands. So people applying effective altruism often try to help indirectly, by empowering others.

One example of this is by improving decision-making. Namely: if key actors — such as politicians, private and third sector leaders, or grantmakers at funding bodies — were generally better at making decisions, society would be in a better position to deal with a whole range of future global problems, whatever they turn out to be.

So, if we can find new, neglected ways to improve the decision-making of important actors, that could be a route to having a big impact. And it seems like there are some promising solutions that could achieve this."

This is what I mean by protesting the Met Gala, by the way; start afflicting the comfortable, not confirming them in how important and marvellous they are, now please look at my proposal.

Expand full comment

I get all that. I guess I’m just pro-charity. I’m not an EA, nor anti-EA. I like positive practices that can be widely adopted of which charity, broadly speaking, appears to be one. I like that religions do it. I like that billionaires do it. I don’t like to discourage charity. It feels weird. I think that’s because there’s no shortage of people who need it. The problem definitely doesn’t appear to be rampant charity and I think both EA folks and anti-EA folks can agree on that, right?

Expand full comment

I think EA as a movement is too broad as it stands, and needs to start paring down and concentrating on "so what do we feel are our really vital core areas?"

That may mean getting slick and concentrating on PR, and quietly disassociating themselves from things like shrimp welfare and their shared roots with Rationalism and the Less Wrong set and so forth. Go all-in on the political lobbying stuff, complete the transformation like a butterfly coming out of a chrysalis to "we are concerned about long-term existential risks", and work on that. The malaria nets low-hanging fruit has been plucked by now.

Expand full comment

"The malaria nets low-hanging fruit has been plucked by now."

How so?

Expand full comment

Ah, I thought I read someplace that the evaluation of charities had moved this cause down as being nearly fully funded, so less in need of donations.

This is the kind of good, routine, work that I certainly don't object to, and if this was what EA was all about I think there would be less negative perception. It's when they move into other areas (like AI, like political lobbying, and yes, unhappily, like being snagged up in the SBF scandal) that the trouble begins.

Religious charities, such as World Vision, have been doing things like bed nets and health interventions for decades already:

https://donate.worldvision.org/give/bed-nets

But of course this isn't fancy enough to have Oxford professors getting involved 😀

Expand full comment

"And it's not going to fix the major problems of the world, because the problems in the world are caused by systems of entrenched power that exist only by virtue of their exploitation of the weak."

Yeah, that's part of my discomfort with the "earning to give" thing, where the maximum benefit is getting the best-paying job you can so you can donate the most money. In theory, that sounds really great. In practice? "You could become a doctor, but don't do that! Doctors don't really make that much money, and if you're smart enough to become a doctor, then you're smart enough to get a job pulling down the really big bucks. Then you can donate money towards getting some other schmuck who's too dumb to get a good job trained up as a doctor and they can be turned loose on patients instead".

If everyone does that, who is going to be the doctors and fill the other lower-paying positions that society needs?

What if I think that capitalism is part of the problem? What if I think that getting a job in Jane Street is *not* a good thing, and what is the value of Jane Street anyway? I'm not begrudging anyone the right to make money out of their labour and smarts, but see OpenAI and Sam Altman - the second the prospect of the really big bucks entered the room, principles (and tedious idealist board members) were shoved out the window.

Someone mentioned the work on the criminal justice overhaul in another comment, and yeah, that is part of the entire ball of wax: there are huge entrenched systems in place, they will take a lot of time to dislodge and replace, you cannot come in with a ton of money and a quick fix.

It's great that you've got EA billionaires funding things. But maybe I think billionaires are part of the system that needs to be overhauled, too.

Expand full comment

Stone claimed that "preferences are often idiosyncratic and non-transitive," which is hilariously wrong.

We need a word for the phenomenon of "normie nationalism" where people instinctively run to the defense of and rationalize the behavior of normies, imagining themselves to be sticking up for the little guy. It's the result of a culture where the worst thing one can do is to imagine oneself to be part of an intellectual elite that is superior to the masses.

Expand full comment

"It's the result of a culture where the worst thing one can do is to imagine oneself to be part of an intellectual elite that is superior to the masses."

Well, as one of the dumb masses, prove to me that you are a superior intellectual elite, man.

Expand full comment

I can find Afghanistan on a map, unlike 83% of young people*

https://www.nationalgeographic.com/science/article/geography-survey-illiteracy#:~:text=In%20a%20nation%20called%20the,new%20worldwide%20survey%20released%20today.

It's one thing when normies just go about their lives, working their jobs and contributing to society. But many of them insist they are entitled to an opinion on every subject despite their great ignorance. Then smart people stick up for them out of a misguided sense of humility and egalitarianism.

*Not to pick on young people, I'd bet the olds are just as ignorant.

Expand full comment

> It's one thing when normies just go about their lives, working their jobs and contributing to society. But many of them insist they are entitled to an opinion on every subject despite their great ignorance. Then smart people stick up for them out of a misguided sense of humility and egalitarianism

Oh Jesus, and you wonder why people don’t like you? Are you completely oblivious to how awful this sounds? The truly smart people I’ve met all know better than to speak so smugly. I can only assume when you say ‘normie’ you mean people ‘not on the spectrum.’

Expand full comment

What someone sounds like in an online comment section isn't necessarily what they sound like IRL.

Expand full comment

A low bar, but I'll grant it to you.

Can you find Tourneendohenybeg on a map, though?

https://www.independent.ie/news/the-minister-for-absurdity-returns/25882654.html

Speaking of which, both the local council and the European Parliament elections are on the 7th June this year:

https://www.youtube.com/watch?v=2gvSz3g3N3E

Expand full comment

(I mean, it's easy to see Afghanistan from *space.* It's that bunch of mountains that stick out west from the Himalayas, like a mind flayer trying to eat Persia.)

Expand full comment

I will never cease to be amazed by your ability to respond to blatant low-effort ragebait attacks in the most constructive way possible.

Expand full comment

Good piece! Stone's thing was ostensibly a reply to me, so I'm glad someone wrote a reply (I was thinking of doing it, but replying to criticisms of effective altruism gets a bit tedious as the criticisms are about as bad as arguments for things get).

Expand full comment

What do you think of the characterisation of you as "walking backwards into Calvinist Universalism"? I'm not sure if I'm more concerned about the Calvinism or the Universalism 😀

Expand full comment

False about the Calvinism, true about the universalism.

Expand full comment
May 30·edited May 30

I am relieved! I couldn't see where the Universalism and Calvinism came in; reconciling universal salvation with Calvinism seemed difficult if not impossible, but reconciling universal *damnation* with it seemed much more feasible (e.g. "all our righteousness is as filthy rags", "even the heavens are not pure in His sight", "all we have erred and gone astray, and there is no health in us").

I don't know where he gets the Calvinism, though, unless it's the "working it all out according to strict logical principles"!

Expand full comment

The best way to donate in a non-effective-altruist way would be to donate in a very ineffective manner: for instance, by giving me the money to use (I already have plenty). If anyone is interested in this, please DM me. 100 cents of every dollar you donate will go toward me.

Expand full comment

>Technically, it’s only correct to focus on the single most important area if you have a small amount of resources relative to the total amount in the system (Open Phil has $10 billion). Otherwise, you should (for example) spend your first million funding all good shrimp welfare programs until the marginal unfunded shrimp welfare program is worse than the best vaccine program. Then you’ll fund the best vaccine program, and maybe they can absorb another $10 million until they become less valuable than the marginal kidney transplant or whatever. This sounds theoretical when I put it this way, but if you work in charity, it can quickly become your whole life.

Even from a theoretical perspective, this is wrong - you should put money into shrimp until the marginal unfunded shrimp welfare program is /equal to/ the best vaccine program, and then split your money (not necessarily 50/50) between shrimp and vaccines until a third cause becomes equally good, and so on.
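
For what it's worth, here's a minimal sketch of that rule in Python, assuming smooth diminishing returns; the causes, utility curves, and dollar figures are invented for illustration, not anyone's real cost-effectiveness numbers. Greedily funding whichever cause has the best *next* dollar reproduces exactly the behavior described: all-in on one cause at first, then an automatic (and generally not 50/50) split once the margins meet.

```python
def allocate(budget, marginal_utility, causes, step=1_000):
    """Greedily give each `step`-sized chunk of `budget` to the cause
    whose next dollar currently does the most good. With diminishing
    returns this equalizes marginal utilities across funded causes."""
    funded = {c: 0 for c in causes}
    for _ in range(int(budget // step)):
        best = max(causes, key=lambda c: marginal_utility[c](funded[c]))
        funded[best] += step
    return funded

# Hypothetical diminishing-returns curves: utility per marginal dollar,
# as a function of dollars already allocated to that cause.
marginal_utility = {
    "shrimp":   lambda x: 5.0 / (1 + x / 50_000),
    "vaccines": lambda x: 3.0 / (1 + x / 200_000),
}

print(allocate(1_000_000, marginal_utility, ["shrimp", "vaccines"]))
# Early chunks all go to shrimp; once its margin falls to the vaccine
# margin (roughly $33k in, with these made-up curves), later chunks are
# split between the two causes so the margins stay equal.
```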

Expand full comment
May 30·edited May 30

See, this is where the EA calculation of value loses me (and, I would venture to say, a lot of ordinary people). If it's "do I donate to vaccines or shrimp welfare?", then the shrimp can look out for themselves; *of course* you donate to vaccines. You don't say "but tons of people donate to vaccines, nobody cares about the poor little shrimp".

Yeah, no shit we don't care about the poor little shrimp. There's lots of human suffering and need that can be tackled before we get into worrying about shrimp experience. If you want to donate to shrimp welfare, you do you, but do *not* pretend that "my calculations show that..." is anything more than a rationalisation for doing what you want, just like Ben wants to donate to his local sports team.

You can genuinely hold and believe that shrimp are sentient! That they are people, too! That they are suffering! And you have every right to agitate for their rights! But you do not get to say "this is really good, science-based, mathematically proven, more rational than giving money to the golf club" charity. You care about shrimp, you don't care about golf. This has nothing to do with ethics or morality or more effective giving.

It's like the vegan lady and fish. If you hate shrimp, if your notion of "the best life for shrimp" is to end up in a curry, and after all *that* you decide that it is, in fact, an ethical question of suffering which should be tackled, then I respect that. If, on the other hand, you think fish are people and you get anxiety attacks over seeing people fishing, then working for shrimp rights is just alleviating your own personal discomfort, and is no more (or less) admirable than Ben and Tom and Phil doing fundraising to provide new goalposts for the local U-16 junior B club.

Expand full comment

“If… you think fish are people and you get anxiety attacks over seeing people fishing, then working for shrimp rights is just alleviating your own personal discomfort”

Forgive me, but couldn’t you say the same about lots of other moral/ethical issues?

“If you believe that every embryo and fetus is a precious unborn baybee in the eyes of the Lord, and get sad feels thinking about abortion, then campaigning to ban abortion is just alleviating your personal discomfort.”

Expand full comment
May 30·edited May 30

Society has happily categorised pro-life arguments as personal quirks, fancies, and indeed attempts to punish women, so I think turnabout is fair play for the shrimp fanciers.

See your spelling of "baybee"; at least I retained enough basic courtesy not to go "shreeempies lil' shreeempies" about those who have such beliefs.

Expand full comment

You're right, I was snarky with my spelling. Apologies.

Here's my serious, snark-free point: if we collapse all moral questions (abortion, veganism, shrimp rights, everything) to personal feelings aka "you're only doing this to avoid discomfort," that doesn't lead anywhere good. There is room for good-faith disagreement and "live and let live" in a pluralistic society, but presumably you wouldn't say "you only oppose rape and murder because it causes you personal discomfort" (and I wouldn't either).

My point is, "it's all about personal discomfort" is a conversation-stopper. I believe it's important to try to appeal to universal moral principles that we can all agree on, regardless of, say, our religious beliefs.

And, ok, I chuckled at "shreempies lil' shreempies."

Expand full comment

You have to make allowance for my very strong anti-shrimp, prawn, crab, crayfish and shellfish in general dislike. It's a prejudice, I have to admit. So if anyone could successfully convince me to be pro-shrimp rights, that would be a triumph of disinterested conviction over base impulses! 🦞🦀🦐

Can't invoke Jesus in this context, so how about "Darwin loves the lil' shreeempies" instead?

https://www.youtube.com/watch?v=GP8_c0UxmdQ

Expand full comment

"Save the whales, kill the krill"?

Expand full comment

Hence "even from a theoretical perspective" - I don't care what the specific example of charities are, the point I was trying to make is that when you reach a balance point you start splitting your money, not putting it exclusively into the new cause, and I just stuck with Scott's examples rather that coming up with new ones myself.

I strongly suspect that shrimp suffering is a dumb thing to care about, and that reducing it is not an effective form of altruism (although I confess that I care about it sufficiently little that I haven't actually thought about that beyond the past 10 seconds, and don't intend to in future).

Expand full comment

I get your point about not dropping one particular charity because you think now it's sufficiently funded that you don't need to keep on supporting it; after all, if everyone does that, then it will soon stop being sufficiently funded, or funded at all.

We're getting a little stuck on the shrimp thing, but it's the most obvious example of "uh-huh, so you lot are all lecturing the rest of us about not donating to our pet projects and invoking Singer like he wrote the Gospels with the damn Drowning Child, but when it comes to your pet projects oh no that's different?"

Expand full comment

<mild snark>

>Even from a theoretical perspective, this is wrong - you should put money into shrimp until the marginal unfunded shrimp welfare program is /equal to/ the best vaccine program, and then split your money (not necessarily 50/50) between shrimp and vaccines until a third cause becomes equally good, and so on.

So, when the marginal utility of a set of programs becomes equal, does one donate in inverse proportion to the _second_ derivative of utility with respect to funding (under the assumption of negative second derivative - diminishing returns), thus keeping the _first_ derivatives equal? :-)

</mild snark>
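
For what it's worth, the snark checks out. A quick sketch, assuming smooth, strictly concave utilities \(U_i\) with funding levels \(f_i\) (notation invented here for illustration, not from the post):

```latex
% Optimal allocation keeps marginal utilities equal across funded causes:
%   U_i'(f_i) = \lambda  for every funded cause i.
% Differentiating with respect to the total budget B (with \sum_i f_i = B):
\[
  U_i''(f_i)\,\frac{df_i}{dB} \;=\; \frac{d\lambda}{dB}
  \quad\Longrightarrow\quad
  \frac{df_i}{dB} \;\propto\; \frac{1}{-\,U_i''(f_i)},
\]
% i.e. each marginal dollar is split in inverse proportion to the
% magnitude of the second derivative, which is exactly what keeps the
% first derivatives marching down in lockstep.
```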

(Actually, I'm not an altruist, a utilitarian, or an egalitarian, so I'm really outside all this... :-) )

Expand full comment

Effective Altruism is patently not bad, but it is flawed. EA collapses all human good into one dimension: alleviating or mitigating suffering. While this, on its face, sounds like a fair goal, it inspires an inchoate aversion among people who see the value in other goods but can't articulate compelling reasons why (or, at least, reasons that seem compelling in the face of SUFFERING). This aversion enables people to neglect the central insight of EA, which is that there are probably more effective and efficient ways to marshal our resources toward the advancement of any good - we say we care about making our altruism effective, but in practice we rarely do the work of ensuring we are actually doing so.

If EA actually cared about "doing the most good," they would take more time to find ways to anchor their messaging in non-utilitarian ethics, like various religious moralities. There is probably a compelling (e.g., Muslim) argument to be made in favor of donating to mitigate suffering in the most effective way possible, but the EA community has not bothered to try to convince people outside of their comfortable, "rationalist" base. However, doing good effectively is, well, good. We should think more about the effectiveness and efficiency of our philanthropy than we do. We can make the world a better place, along every dimension, by learning this lesson from the EA movement.

More here: https://open.substack.com/pub/commenter42/p/why-effective-altruism-is-half-good?r=75h5x&utm_campaign=post&utm_medium=web

Expand full comment

EAs aren't generally strict negative utilitarians, which is why, for example, many of them support malaria nets even though those increase the number of life-years people can experience suffering in, and the AI people will sometimes talk about the positive good of a future filled with human life, etc.

The lack of religiously motivated arguments probably comes from a desire to only say things they actually believe, and it's pretty hard to argue in good faith from Islamic or Christian principles if you think Islam and Christianity are just false.

Expand full comment

Your first section seems to be entirely missing Stone's point? He says "the rise of ‘effective altruism’ as a social movement has had little or no effect on overall charitableness", then provides data that confirms it has had little to no effect on overall charitableness. Your response is to point out that his methodology would have been underpowered to pick up increased charitable giving *by EAs specifically*, which is surely true, but it's not what he was doing. His methodology is perfectly sound to investigate the thing that he explicitly said he was trying to investigate.

Expand full comment

Maybe I'm missing something but that seems an extremely weak critique of a movement. "Ok sure, the adherents of your ideology DO give more to charity, but does your ideology make non-adherents give more to charity? Gotcha!"

Expand full comment

That would be a fatuous argument against a movement that took over the entire population, leaving no non-adherents to measure.

Expand full comment
May 30·edited May 30

I think these arguments pretty demonstrably show that Stone is wrong, but I think more attention should be paid to Stone's point about money flowing to people in the EA sphere. Researchers and non-profits are dependent on scarce funding and talent, and EA organizations and considerations are playing an increasing role in their allocation. In addition, the philosophical goal of EA — identifying the highest-opportunity (easily misread as most important) charitable interventions — sits uneasily with everyone wanting to believe their own projects are significant.

My guess is that people disagree with EA for two reasons: 1) it is not bringing in enough new resources to avoid disrupting the status quo of funding and talent allocation, and 2) they disagree with EA's stated priorities. While this competition obviously existed implicitly among many non-profit organizations and philanthropic goals before, EA makes it explicit and claims that comparisons can reasonably be made.

I like EA and want it to grow, but I disagree with it, largely because it seems to me to imply that if you aren't working on, say, AI, biosecurity risk, animal ethics, or improving the lot of the world's poorest, you are not fulfilling your potential. Maybe this just reflects my own insecurity, but I do think EA competes both for real resources and for status resources.

Expand full comment
May 30·edited May 30

Responding to your last paragraph ("it seems to me to imply that if you aren't working on say AI [etc. ...] you are not fulfilling your potential"), I get that it can feel that way. I think any movement that says "here are some really great things you could do to make an impact" can create pressure and lofty ideals. But also, I do think there's a norm in EA that everyone is free to decide how much to sacrifice, and I know there are a bunch of people who are content to donate their 10% or whatever and not sacrifice more than that, and I don't think they're ostracized or anything for it.

Btw, for anyone who's interested, here are some related posts on the EA Forum:

- Can my self-worth compare to my instrumental value? https://forum.effectivealtruism.org/posts/yYiLv7rCHMNP98dZ5/can-my-self-worth-compare-to-my-instrumental-value

- Impact obsession: Feeling like you never do enough good https://forum.effectivealtruism.org/posts/sBJLPeYdybSCiGpGh/impact-obsession-feeling-like-you-never-do-enough-good

- Increasing Demandingness in EA https://forum.effectivealtruism.org/posts/zDgj5Mew7cRhW3BNs/increasing-demandingness-in-ea

Expand full comment

I appreciate this, and I have to say I have never talked to a person involved with EA and felt they were looking down on me. I think that EA as a movement is constantly wrestling with this and aware of that, and I think that is good. I do not think the EA community looks down on people who fail to fulfill their objectives. I think all of these things are good.

I don't think you've fully addressed my concern, but simultaneously, I cannot rationally address the point you are making. For now I will simply make an empirical prediction: I think that EAs would say they feel comparatively more regret than non-EAs, after adjusting for relevant factors that would cut both ways.

I realized I could look at something like this mid-comment, so I did, using the 2024 ACX survey data. (You can read my dumb methodology below if you like.) I found that EA membership was associated with a greater increase in anxiety between ten years ago and the survey, but also with greater job satisfaction, life satisfaction, and social satisfaction (although there is no way to measure the change in these values from ten years ago).

Obviously, this analysis is bunk, and no one should trust it. I'm mostly posting it to show my hypothesis is ex ante plausible, and hopefully it will motivate someone to look into this or get Scott to ask more questions on the next ACX survey. However, I think there is reason to believe the satisfaction scores are biased upward. For one thing, EAs are less likely to be anxious at the time of the survey and ten years prior. As anxiety is negatively correlated with life satisfaction, this suggests it is plausible that EAs had even higher life satisfaction ten years ago. So in short, I'm not sure. I think I still may be right that EA might make people less happy and more regretful.

Dumb Methodology

I looked at four outcomes—the difference in anxiety level between the present and 10 years ago, life satisfaction, job satisfaction, and social satisfaction—and considered the impact of whether the person identified as an EA and whether Scott's blog prompted them to behave more like an EA (if they joined LW or EA, ate less meat, or donated more). I included basic dummy covariates for age, sex, gender, and sexual orientation, plus income (yeah, I know this is a bad control; I find no evidence of a significant relationship either way) as a continuous covariate. I used robust standard errors. Obviously this is incredibly stupid and not causal. I found that identification with EA (sorta or above) was associated with a significant increase in anxiety over the past ten years (equivalent to about 1/10th of a SD of anxiety difference and about 1/3rd of the mean). I also find that identification with EA is associated with greater current life satisfaction, job satisfaction, and social satisfaction. These results are robust to including a number of additional dummy variables (like religious background, job status, and education status) as well as to excluding all covariates. Including a covariate for concern about AI X-risk does not change my results. To bolster my results, I perform an analysis using coarsened exact matching on the baseline demographic variables. I find that the results have a similar sign but a smaller magnitude for all outcomes. As you increase the number of strata (by including more baseline controls), the results for satisfaction get smaller and eventually vanish, while the anxiety differences get larger. This doesn't say much, because the strata absorb a significant amount of the variation. Obviously, this whole exercise is kind of pointless, as there is no real way to back out causality and the results are mixed.
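
For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of regression described above, using Python and statsmodels. This is not the commenter's actual code: the file name and column names are hypothetical placeholders for whatever the cleaned 2024 ACX survey export actually uses, and the coarsened-exact-matching step is omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cleaned export of the 2024 ACX survey (placeholder name).
df = pd.read_csv("acx_2024_cleaned.csv")

# The four outcomes described above; column names invented for illustration.
outcomes = ["anxiety_change_10y", "life_satisfaction",
            "job_satisfaction", "social_satisfaction"]

for outcome in outcomes:
    # EA identification ("sorta" or above) as a 0/1 exposure, with basic
    # demographic dummies and income as a continuous covariate.
    model = smf.ols(
        f"{outcome} ~ ea_identifies + C(age_bracket) + C(sex)"
        " + C(sexual_orientation) + income",
        data=df,
    )
    # cov_type="HC1" requests heteroskedasticity-robust standard errors.
    result = model.fit(cov_type="HC1")
    print(outcome,
          round(result.params["ea_identifies"], 3),
          round(result.pvalues["ea_identifies"], 3))
```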

Expand full comment

One thing people have a very hard time reckoning with is how to think about complex topics when power law distributions are present. In this article, and in the comments, there are so many cases where people are explicitly or implicitly making an argument whose relevance and importance rests on a "number of people," but what actually causes impact is first number of dollars, with number of people second. If there are 10,000 people who are EAs, that doesn't matter, maybe they can donate $1,000,000 per year. If there are 10,001 people, and one of them is Dustin Moskovitz, that's a big deal. If EA could convince Bernard Arnault, that would be the equivalent of convincing 50,000 Bay Area software engineers.

Of course, this framing depends on an EA view of the world, because it requires an explicit thinking of outcomes vs. our intuitive judgment based on social instincts.

Who cares about the movement qua movement? Is Scientology winning as a movement? But they have Tom Cruise giving them enough in donations to pay for millions of bed nets or tens of thousands of lives.

Expand full comment
May 30·edited May 30

There's a huge failure mode when it comes to reasoning from evolutionary precursors that goes something like this:

1. Observe something that doesn't have an obvious purpose (compassion for animals).

2. Assume it's an evolutionary error or relic.

3. Declare modern society and evolution to be at odds with one another.

4. Make counterintuitive pronouncements.

The trick here is that while you're saying something that makes no sense, it's okay, because Science has been invoked and Science sometimes gives counterintuitive results. What this really amounts to is non-scientific musing about what the author assumes is the explanation for why a behavior was pro-adaptively selected for in the first place.

Most of the time evolutionary explanations are heavily inferred, not directly observed. In other words, most of this crap is guesswork and should not receive the legitimacy it gets by attaching the word 'evolution' to it. The theory of evolution is based solidly in science. Theories about how evolution might have led to behaviors/traits almost never have a scientific basis - especially when parroted through popular media and not in scientific journals.

Expand full comment

Not related to your argument but what are the evolutionary arguments that you would respect? Would something from The Selfish Gene count? Or would it not because it's still guesswork?

Expand full comment

I'll be honest: a lot of the evolutionary arguments I've seen - including from actual evolutionary biologists speaking outside of published research - go over the line into pseudoscience. I accept evolutionary arguments that are based in evidence. For example, a cladogram crafted from changes in a slow-clock gene common among the various species/traits would be solid evidence. Morphological changes are also acceptable. For example, a few years back I saw a paper where the authors discovered a coral in a niche that had 4-sided symmetry. Their fossil evidence suggested that coral went extinct, but then the niche was filled by a species with 6-sided symmetry. There wasn't enough evolutionary time for significant body-plan changes, suggesting that convergent evolution filled the niche via a species unrelated to the 4-sided precursor.

Compare to discussions about how some trait is assumed to have been pro-adaptive because of assumed social dynamics in a prehistoric human population. This is two steps removed from evidence. That doesn't mean it's wrong, just that it's spurious philosophizing that's not rooted in the scientific method. Why care about the difference? Because one can be tested, checked, and falsified, while the other is beyond all those things.

Yet most invocations of evolution I popularly hear are of the second type, and not the first. It's frustrating, because it makes it more difficult for me to help skeptics of evolution (e.g. some family in KY) see the benefits of evolutionary theory and be able to take it seriously. To them it looks like a cheat code you can trot out and unseriously 'win' an argument without having to produce any evidence. And yeah, as far as most invocations of evolution I see are concerned, they're not far off.

Expand full comment

Just-so stories, I’ve heard them called.

Expand full comment

Exactly! But with the imprimatur of "science".

Expand full comment

I look forward to reflecting more on these points during EA Global in London this weekend - although curiously I seem to have missed out on the invitation for the effective terrorism teatime meeting...

Expand full comment

Well, there you are then, that explains how you missed out; a really effective terrorist would know you have such meetings at breakfast, not teatime!

“Of course, the Secretary of the branch told you everything that can be told. But the one thing that can never be told is the last notion of the President, for his notions grow like a tropical forest. So in case you don’t know, I’d better tell you that he is carrying out his notion of concealing ourselves by not concealing ourselves to the most extraordinary lengths just now. Originally, of course, we met in a cell underground, just as your branch does. Then Sunday made us take a private room at an ordinary restaurant. He said that if you didn’t seem to be hiding nobody hunted you out. Well, he is the only man on earth, I know; but sometimes I really think that his huge brain is going a little mad in its old age. For now we flaunt ourselves before the public. We have our breakfast on a balcony—on a balcony, if you please—overlooking Leicester Square.”

“And what do the people say?” asked Syme.

“It’s quite simple what they say,” answered his guide. “They say we are a lot of jolly gentlemen who pretend they are anarchists.”

...At one corner of the square there projected a kind of angle of a prosperous but quiet hotel, the bulk of which belonged to a street behind. In the wall there was one large French window, probably the window of a large coffee-room; and outside this window, almost literally overhanging the square, was a formidably buttressed balcony, big enough to contain a dining-table. In fact, it did contain a dining-table, or more strictly a breakfast-table; and round the breakfast-table, glowing in the sunlight and evident to the street, were a group of noisy and talkative men, all dressed in the insolence of fashion, with white waistcoats and expensive button-holes. Some of their jokes could almost be heard across the square. Then the grave Secretary gave his unnatural smile, and Syme knew that this boisterous breakfast party was the secret conclave of the European Dynamiters.

...As Syme and the guide approached the side door of the hotel, a waiter came out smiling with every tooth in his head.

“The gentlemen are up there, sare,” he said. “They do talk and they do laugh at what they talk. They do say they will throw bombs at ze king.”

And the waiter hurried away with a napkin over his arm, much pleased with the singular frivolity of the gentlemen upstairs.

Expand full comment

Thanks for the insider info, I'll look out for those talkative men with white waistcoats and button-hole them later... (or earlier)

Expand full comment

Any gatherings for meals that have you politely turned away at the door are the effective terrorist high council insider meetings. If you're not already in, then you'll never get in!

Expand full comment

I'm a bit disappointed to see this from Stone because he is generally a much better writer than this, but I do want to push back on one of the defenses of EA you mention, which is that only people who actually donate the 10% can be considered part of EA. Stone's choice of definition, "Effective altruism is simply whatever is done, stated, and believed by people who call themselves and are called by others “effective altruists”", is a fairer metric, because the movement writ large, as it is commonly considered, includes a lot of people who don't actually take step 3. We would not take this argument seriously:

"Christianity is a great religion because Christians do not murder, lie, rape, or steal. It's expressly forbidden. A zero percent incidence of bad things is unmatched by any other society or religion, ever. What? You say that Christians do, actually, do these things? Those people aren't real Christians, because they aren't following the Ten Commandments."

If EA indeed had the kind of party-style discipline that excommunicated anyone who claimed to be an EA but didn't donate the 10%, then the fact that all EAs donate 10% would in fact be something that could be claimed as a credit to the cause. As is, it's an aspiration, and saying that all EAs enjoy moral superiority because they are the ones who did in fact donate the 10% is dishonest accounting.

Expand full comment

"People having kids of their own instead of donating to sperm banks is in some sense an “error” in our evolutionary program."

No, donating to a sperm bank does not actually make it likely that your donation will be used. There are some people, analogous to the super-sires of domesticated animals, who would have much higher fitness by donating. But most people would not.

"Anyone who cares about a future they will never experience, or about people on far off continents who they’ll never meet, is in some sense succumbing to “errors” in their evolutionary programming."

No, trying to ensure the existence & survival of descendants you will never meet is not in any sense a Darwinian error.

Expand full comment

Regarding section one, I don't think that EA claims to be exceptionally good at extracting donations from its members. I am sure that any cult optimized for donations would perform better than EA if the metric were the fraction of income donated by adherents. (Of course, EA is kind of good at attracting ultrarich donors, which a cult whose metaphysics were entirely shaped by optimizing for donations might not be.)

Regarding section six, I think it is kind of brave of Stone to come out as an Azathoth worshipper [1] here. Implying "the thing evolution is optimizing for is what is good" seems kind of controversial. If he were serious about following the intent of evolution, he would not be writing about EA, because this is hardly the best way to increase the relative frequency of his genes in the human population, which would be his goal. Instead, he would be running a fertility clinic and swapping out the sperm for his own.

Luckily, most people who bring up evolution are not serious about it. They do not consider a man who kills a genetically unrelated man along with that man's sons and young daughters and then impregnates the remaining family members to be doing god's work, even though this man would be better at following the commandments of evolution than almost anyone.

Evolution was the first creator of intelligence, and the intelligence it created was substantially misaligned, because the drives it gave us were only aligned with genetic success in the ancestral environment. We are still shaped by these drives: we watch porn (and have sex with birth control) and eat fatty food and get really enraged about politics, all of which is roughly orthogonal to the direction of evolutionary pressure.

Also, I find that claim ("you only care about animals due to an evolutionary error") to be somewhat condescending. Peter Singer is not a five-year-old who does not want to eat a rabbit because it is cute and cuddly and has big round eyes.

[1] https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god

Expand full comment

>Regarding section six, I think it is kind of brave of Stone to come out as an Azathoth worshipper [1] here. Implying "the thing evolution is optimizing for is what is good" seems kind of controversial.

Yup! Personally, I'm childfree, so I am aware of evolution, but what _it_ maximizes is not what _I_ optimize for. In this case: Yay misalignment!

Expand full comment

Great article, well written, often reads poetically. I think you still slightly miss the point on the last bit, as it relates to smartest-kid-in-the-class syndrome. I feel it's really easy to end up in unwinnable arguments with EA-adjacent people that seem more pedantic than scientific (which remains my main criticism).

Expand full comment

As an aside, someone is taking the name of the Rightful Caliph in vain. I got a couple of these in my email:

Scott Alexander replied to your comment on Contra Stone On EA.

Thank you for your feedback,for more information on how to be successful ,WH'AT'S'AP'P ME 十 +•1•504=3•2=9•=4=••5=••7=••4=•√•✔️✔️✔️✔***

Clicking on the name link leads to this:

https://substack.com/profile/240658042-scott-alexander?utm_campaign=comment&utm_medium=email&utm_source=substack&utm_content=profile

Expand full comment

I reported them all as spam, not sure how useful that is.

Expand full comment
May 31·edited May 31

Hm. I got a few "Scott Alexander liked your comment" notifications in my email, which is something I've never experienced before. But they don't show up in my Substack activity page, where other "likes" show up. Ah, well. :-)

Expand full comment

I got the same before being hit with the "whatsapp me". So they were trying to soften me up by flattery, I suppose? Joke's on them; I don't have, nor have I ever used, WhatsApp!

Expand full comment

This happened at least once before with what was supposed to be Scott’s phone number. Pranksters R Them having a bit of fun is the most charitable interpretation.

Expand full comment

What is there to learn from how off-putting EA is for so many? Sharpening the counterarguments to defend EA does not seem to add much value to the discourse. Being armed to the teeth with counterarguments is a good way to lose all the feedback you are getting from the broader community. Are there action items that would help make EA less abrasive?

I wonder if people cling to the abrasive/different elements of EA so that it can be its own movement. Having EA and what it espouses be so common-sense that it is not necessary to self-declare as an EA should be the end goal. So instead of saying "I am an EA," one would just say "I really care about charitable donation."

Scott seems to vaguely suggest a unified philosophy of EA. Do people think that this would really change the public's perspective on EA?

Expand full comment

I think the big lesson is: don't let good ideas turn into movements.

If you have a good idea, just publish it and disappear. If your idea is genuinely good then people will read it and accept it.

But once an idea becomes a movement, that's a problem. Now for someone to accept your idea is no longer an intellectual judgment, it's a matter of social reality. Your movement has its own leaders and meetings and rituals which may alienate people. Accepting your idea means joining a whole new social hierarchy in which all the top spots have already been taken, possibly by people you'd ordinarily consider your inferiors.

And if you've started a movement which claims to be the "One True Way To Do X" then everybody outside your movement who does X is now obliged to dislike and dismiss you. If I'm involved in charity then my only options are either to join the EA movement and be socially subordinate to a bunch of fedora-loving polycule nodes, or find something to nitpick about the idea so I can snarkily dismiss it.

Expand full comment

But someone will inevitably turn it into a movement. When hasn't this happened?

Expand full comment

"It’s important to grasp that [caring about animals] is, in evolutionary terms, an error in our programming. ..."

Criticizing a charity for not being maximally selfish is a new one to me.

Expand full comment

Reading through Scott's post here, and the comment thread, I definitely get a strong sense that EA is at a crossroads right now.

Any new organization or community of people that has anything to say about politics or philosophy or religion is eventually going to reach a point where it will probably have to choose between remaining ideologically pure but niche, or growing by compromising with the mainstream (the exceptions I can think of require a certain authoritarian streak that EAists don't have). At minimum, EAists do have much to say about altruism and philosophy in general.

I think Stone is overly critical of EA, but the data he pointed to does seem to suggest that EA remains rather niche.

And... I'm just going to cut to the chase. Most of what EAists promote is palatable or even admirable to the majority of people. But the shrimp thing? Yeah, you're probably losing a lot of people there. I know it's not really fair, but in our clickbait sensationalistic times, a movement will often get defined by its most unusual ideas or its biggest controversies. This is especially true of new/niche movements that haven't been normalized by long histories.

And even if you feel that you're winning the argument on the value of protecting shrimp life, that doesn't really matter. When it comes to altruism, to charity, reaching people emotionally is much more important than reaching them intellectually. You're just not going to get most people to truly care about the well-being of shrimp, at least not anytime soon. And as long as you actually defend making charitable donations for the sake of shrimp over giving charitable donations that could help even one human being... yeah, you're going to lose people. People that might have otherwise been convinced to become EAists.

So I think EAists probably should ask themselves what's more important to them - growing the EA movement which in turn might cause more altruism in general as well as more altruism of the type EAists like best? Or standing up for shrimp.

If it's growing the EA movement? Then maybe you should just stop talking about the shrimp thing, the same way a political movement might drop their most controversial policy idea in order to appeal more to the mainstream.

If it's standing up for shrimp? Good luck.

Expand full comment
May 31·edited May 31

Vegans already exist and it's probably not too difficult to spin the shrimp as vegan-adjacent. For me, the truly difficult sell is probably AI alignment / superintelligence / most kinds of X-risk work, not least because most of it goes far beyond what most people consider credible "world-ending threats" into sci-fi territory.

Expand full comment

"Vegans already exist and it's probably not too difficult to spin the shrimp as vegan-adjacent."

But that's ignoring the emotional blackmail disguised as rational consideration. The Drowning Child, after all! Why don't the vegans choose the welfare of human children over that of shrimps and fish and cows and chickens?

Now if it just so happens that EA draws from a community where there are a lot of vegans, then this explains the emphasis there - and how EA got off the ground does seem, going back in time, to have been more a gathering of people with different ideas about what was the most important thing to work towards, but otherwise with similar backgrounds and value systems. They didn't object to vegans alongside rationalists alongside mosquito nets alongside whatever else, because this wasn't a finished product as yet; it was just a case of "a bunch of us who all live in this general area and interact think this is a good idea to do things like this in a better way".

EA, however, has gotten bigger and more organised as time went on, and they can't keep relying on the stick of "hey normie, stop giving money to things that make you feel all warm and fuzzy, use better considerations as to where your money will do the most good" while also pandering to the subset who fought over vegan catering at the conference to the point of threatening schism and walk-outs.

Expand full comment

One answer is that reducing the amount of factory farming is good for humans too. Factory farming contributes to climate change. And, in the case of, as you put it, “shreeempies lil shreeeempiies,” shrimp farms destroy coastal mangrove forests, which provide natural protection against floods. So, more shrimp farming = fewer mangroves = more floods = more human suffering.

Expand full comment

If you look at polling, it does not seem to go beyond what most people find credible at all.

Expand full comment
May 31·edited May 31

"And as long as you actually defend making charitable donations for the sake of shrimp over giving charitable donations that could help even one human being... yeah, you're going to lose people."

Yes, that's where the accusation of inconsistency hits. If I am supposed to evaluate "which is more important - donating to the local sports team/museum/library, or saving lives of people overseas?" and gently urged to pick "overseas", then I can turn right around and go "okay then, which is more important - making shrimp feel better, or saving lives overseas?"

If I should give up my preferences around people here that I know in my community in favour of other people far away, there's a heck of a lot more reason to give up preferences around non-human animals in favour of actual humans.

If all the low-hanging fruit in effective charity has been picked, so that you can now work on shrimp welfare with no qualms, then I can donate to my local community the same way.

Expand full comment

"If all the low-hanging fruit in effective charity has been picked, so that you can now work on shrimp welfare with no qualms, then I can donate to my local community the same way."

Yes, I think that's a very fair point. The more EA advocates for causes that only tiny minorities of people care about even conceptually, the less appealing it's going to be to the mainstream and the more backlash it will receive.

Expand full comment

To be honest, I would rather EA continue to champion weird niche things than sweep the weird things under the rug for mass appeal, because it would be bad if it just turned into another generic "do charity" call - that's neither original nor particularly impactful. It's basically, as the detractors say, the status quo.

IMO it's more valuable if it remains a weird nerd thing hewing closely to weird nerd philosophy, because the fundamental thinking of $/life saved is a valuable addition to the world.

People who are donating to their local community are doing that either way, but this way, someone cares about shrimp, or far away people, and I believe the marginal difference is valuable.

(Also, one of the better shrimp-saving ideas I've seen was to develop a synthetic prawn paste - something that does not seem particularly out of reach and seems like a good idea, actually. Even if you don't care about the plight of shrimp, I like the idea of my native Southeast Asian cuisine, which uses a lot of prawn paste, being kind of future-proofed against any prawn supply problems, and it might help the product become more accessible to e.g. the diaspora.)

Expand full comment
Jun 4·edited Jun 4

It does seem like we're all piling on EA, and that is unfair. Its basic principle - do good, and find the most effective way to do that - is thoroughly unobjectionable. Even "other methods of evaluating charities only focussed on how much donations went to overheads versus the charitable aims, we're doing a better measurement" isn't a bad thing, either; sometimes you *do* have to spend a lot on overheads to get the best results.

Where it started raising hackles was (1) the religious insistence on tithing - let's face it, that's what it comes across as - accompanied by "we're *not* a religion!!!!" (2) the harnessing of scrupulosity towards its aims, which - given that the rationalist and adjacent community seems to have more than its fair share of the nervous, anxious, and vulnerable to being sniped - could be harmful to some (3) the unfortunately snooty-appearing appeals about doing charity right *for the first time* (4) Singer and the Drowning Child, see appeals to scrupulosity plus, if one does not particularly like Peter Singer, this is an immediate "he can go take a long walk off a short pier before I give a cent to this operation" stimulus (5) the drift towards weirder, niche interests.

Shrimp welfare isn't the worst on that last front; again, there are plenty of vegans involved in EA and Rationalism, and if they want to work for their pet cause, why not? But it's things like AI, where it *seems* (and there's a whole problem of *perception* versus *reality* going on here that I don't think the EA movement or community or what you will has addressed, or maybe even realises exists) that the kind of argument we ordinary people can recognise - help people on the ground overseas with material, practical solutions like bed nets - became too boring and quotidian, and now the shiny exciting new science-fiction-made-real cause was here, and everything was going towards that.

Poverty is boring and dull and never-ending, and you never seem to make a dent in it in any long-term, meaningful way. Congratulations: now you know what the older charities are dealing with, with regard to the long-term slog of "every time it seems like we get the upper hand on malaria, a new twist to reduce the effectiveness of our approach occurs". It's something that seems like it will never be solved, whereas by contrast, get AI alignment right, and the cornucopia will open and riches for all, unending, will fall out. All our ills will be solved, with super-intelligent Fairy Godmother AI!

(I don't believe that last, but whatever).

Now, I'm not saying this *is* so, but it's very easy, looking in from outside, to think that this is how it *looks* so this is how it *is*.

That's the problem that has to get addressed: perception by the wider public. And sure, keep the weird niche interests! But then don't say that you're better about that, because you have a formula to demonstrate that your caring about shrimp is more ethical and more in tune with the laws of the universe than Jimmy or Lucie giving to their local church mission. That, I think, is the crux of the perception problem: "so you're saying we're dumb, driven by emotion, and too religious to boot, thus you are superior to us about charitable giving?"

EDIT: Even the shrimp thing could be adjusted to make sense to ordinary people! Do away with cruelty in farming, help reduce the impact on the environment, and so forth - ordinary people can grasp that. "I worry that shrimp may suffer because they have minds" is not, because nobody in the general public is going to think a shrimp can think or feel or remember or be anything other than the most basic responses to stimuli of a living organism. Then when you start in on the dust speck aspect ('if we sum up all the suffering of all the zillions of shrimp, it is greater than all the humans who died in wars ten times over'), that's when it sounds crazy.

Expand full comment
Jun 5·edited Jun 5

Fully agree - if EA were trying to recruit Jim Bob and Lucy. Which I argue it shouldn't be.

The little amount of $ that Jim Bob and Lucy can bring will be peanuts compared to their effect on the internal norms and discourse. I don't think EA should care what the local church tithers think, as long as they don't hate EA so much that they try to get it shut down (which seems unlikely). SBF is a big fail there, but breast cancer research is not any less worthy because Susan G. Komen is kind of scummy.

I do agree that EA should be less hostile and condescending to the traditional non-profit, charity set and make more friends there. But this is fundamentally a different set to "the general public". The general public is the same group of people that can always rationalise not helping the homeless guy right in front of them. Don't worry about impressing them and keep worrying about attracting the specific people who care about AIs and shrimp.

(Also, to be honest, I think EA deals with the scrupulous better than most charity movements. 10% of your income if it's not gonna cause problems, no more than that. The scrupulous are a vulnerable population, and I feel like that's acknowledged a little bit more in EA compared to traditional non-profit circles, where burnout is quite common!)

Expand full comment

A comment on art and evolution: I do not think art production is any more of an error of evolution than eating is. In other words, not an error at all. I suspect aesthetic appreciation is the general pleasure associated with successful predictive processing - with successful decoding/prediction of novel and sufficiently challenging sensory input. This reinforces engagement with sensory input that is productively challenging - i.e. which can be learned from. I detailed this thesis on my Substack recently. Beautiful objects have fine-tuned entropy.

Expand full comment

> The other half of the answers have to come from intuition, common sense, and moral conservatism. This isn’t embarrassing.

It is if you're trying to replace those, though. If you take this line then why shouldn't I treat rationalism writ large as an Isolated Demand for Rigor? Don't ask me to shut up and multiply QALYs if you can't tell me how to avoid the Repugnant Conclusion!

Likewise: in the Sequences Yudkowsky tries so hard to vivisect reality with Occam's Razor, but then when asked about how to think about the _purpose_ of any of this he suddenly loses his nerve and backpedals. "The Gift We Give to Tomorrow" is so different in tone and style from the rest of his work that it's hard to see it as an answer rather than an evasion-- the elaborate trench built around the thermal exhaust port in his worldview.

Expand full comment

I reread the post and don't see the through line from it to your conclusion about Occam's Razor. What am I missing?

Expand full comment

I've been reading the sequences since before Scott Alexander acquired his current pseudonym. So this is not convincing at all.

Expand full comment

If the disagreement is just that we have different overall impressions of the sequences then there's probably not much more I can usefully say here.

Expand full comment

Maybe we do, but I have no clue what it is about "The Gift We Give to Tomorrow" that has anything to do with Occam's Razor. "The world is complicated, and has complicated explanations" doesn't seem at all in tension with Occam's Razor.

I separately have no clue what you mean by "Rationalism is an Isolated Demand for Rigor" (what is it not demanding rigor from? What posts made you think this? And so on), but that's fine if you don't want to talk about that. The other question is much more concrete and likely to lead to new information.

Expand full comment

One of your arguments in the above analysis strikes me as a dangerous one to propose:

Effective altruism does try to get beyond “I want to donate to my local college’s sports team”. I think this is because that’s an easy question. Usually if somebody says they want to donate there, you can ask “do you really think your local college’s sports team is more important than people starving to death in Sudan?” and they’ll think for a second and say “I guess not”.

Item: Earlier this century, I gave my wife an unusually expensive birthday gift, one of the early iPods. It was a tremendously successful gift: She loves music, and this enabled her to spend much of her time immersed in it, and did a great deal to make her happy. But it can certainly be asked if her having music to listen to is more important than, oh, let's say, mosquito nets for children in tropical countries.

Item: I just donated to a GoFundMe for a person I know only online, who is faced with expensive repairs on a van she uses to travel to science fiction events and sell books. It can certainly be asked if her small business is more important than the starving children who could be saved for what those repairs will cost.

Item: This year I expect to spend a few hundred dollars on books, most of which I will read purely for my own pleasure. Once again, I can easily ask a similar question.

Your question may be rhetorically effective for a lot of people. But if it's meant to embody an actually valid mode of analysis, based on a consistently applied moral principle—then it seems that that principle can be applied so broadly as to invalidate ANY expenditure of money (and perhaps of time or effort) that any of us might want to make, in the face of the extent of human suffering and privation worldwide.

This is exactly the kind of thinking that Ayn Rand pointed to as the basis for condemning altruism. And it's easy to say that no, that's an exaggeration, altruism doesn't really mean that, Rand is attacking a straw man. But here I find you offering, as legitimate, an argument that appeals to exactly the Comtean principle that Rand is attacking: one that invalidates the idea of "the pursuit of happiness," and beyond that any sort of eudaemonistic ethics. And if that is the argument, then I think Rand was 100% right to condemn altruism as an ethical principle.

Or perhaps that isn't what this means; perhaps it isn't actually a principle intended as a basis for all ethical decisions, but simply a convenient bit of rhetoric, to be used when it's handy to win an argument and then set aside in other cases. In that case I find myself recalling Ludwig Wittgenstein's rebuke to Norman Malcolm's comment about "the British national character." And if the rhetorical question is, for example, "Do you really think your wife's happiness at getting to listen to music more easily is more important than the survival of children in Africa," then I can only say that it certainly is more important TO ME and to my own happiness; and that my own happiness is the proper purpose of ethics FOR ME (and yours ought to be for you).

Expand full comment

The obvious counterpoint is that the objection is purely local to the question: "Given that your donation is intended to do good, what is the most good it can do?" Which strikes me as normal and commonsensical. Would you think it a good idea not to give your wife an iPod, if you knew that's what she'd be happiest receiving?

In the case of the local sports team, you are correct that it has the character of personal consumption, in the same way that an iPod has, and that part of the appeal of consumption is the warm fuzzies that gives you.

Some people believe that fuzzies should track things in the world, rather than proxies for the thing in the world. Should those people be denied the ability to have warm feelings?

Expand full comment

That seems to me to be a mis-posed objection, in that it supposes that "warm fuzzies" are a criterion of good for me. It seems to me that the pursuit of happiness, if it's to be effective, calls for setting aside any such appeals. Or as Rand puts it, "Happiness is the purpose of ethics, but not the standard."

Expand full comment

They aren't a criterion of good for you so much as a criterion of good for people who donate to local youth baseball teams. Sorry, I silently switched from second person you to the generic you in that sentence.

Expand full comment

Well, in the first place, my argument is that the same argument that applies to the local youth baseball team applies to any of the other cases. Once your criterion is "how important is this [for humanity generally, or for sentient beings as such, or any other large impersonal category]?" essentially anything that an individual human being cares about can be set aside as unimportant.

In the second place, I think that the assumption that thinking about the good of humanity generally is rational, but doing things for, say, the local sports team is sentimental, is not justified. It ought to be possible to judge the good of a sports team rationally; at least, I think it's possible to think rationally about one's own good, or that of a spouse, or a pet, or a country. And on the other hand, surely thinking about the good of humanity as such, or sentient beings as such, is often sentimental; indeed, matters so abstract and so remote from actual experience seem positively to invite vague generalizations onto which we can project anything we like.

How are we defining "altruism"? Is it any action that enhances the welfare of someone other than oneself, or that intends to do so? In that case, I intended to enhance my wife's welfare by giving her that iPod, and I was successful, and that action was altruistic. The person who donates to the sports team is surely aiming to enhance the team's welfare, and thus their action is altruistic. And that's true even if I, or they, are indifferent to the welfare of, say, humanity as a whole. On the other hand, if it only includes actions directed to the welfare of others generally, then talking about people who donate to some specific person or group as being "ineffectively altruistic" is specious: Most of the time they are not even trying to serve the welfare of others generally. You might as well say that an incandescent lamp is an ineffective radiant heating system. And if you take service to others generally as your standard, and judge any activity or goal that doesn't do this as "not important," then you have a criterion by which any individual person's happiness can be judged as "not important," and their pursuit of it can be dismissed as "ineffective"—ineffective in serving the welfare of humanity as a whole, or sentient beings everywhere, no matter how effective it is in letting them lead a happy, rewarding life.

Expand full comment

> Once your criterion is "how important is this [for humanity generally, or for sentient beings as such, or any other large impersonal category]?" essentially anything that an individual human being cares about can be set aside as unimportant.

And once your criterion is "get a friend to go to a convention," posting in blog comment sections or buying iPods also doesn't maximize that value. It sounds like any values you have would be incoherent, since every action would fail to maximize at least one of those values.

So why do your personal values get a pass when altruistic values don't?

> In the second place, I think that the assumption that thinking about...

I'm not saying "one is sentimental and one is not". I'm saying that, given that you treat acts of altruism as acts of personal consumption, and that acts of personal consumption are matters of taste, by what logic can you *further* condemn an altruistic act that isn't youth sports teams? It seems to me that if you say altruism is a sin, it'd be an equal sin in both cases: the youth team for you and the malaria nets for an EA. If an exception were to be made, I'd venture that it'd happen more for the malaria nets, because they are more likely to come from reflective equilibrium than the youth team, insofar as we think personal satisfaction is meant to track the fulfillment of virtues.

> On the other hand, if it only includes actions directed to the welfare of others generally, then talking about people who donate to some specific person or group as being "ineffectively altruistic" is specious: Most of the time they are not even trying to serve the welfare of others generally.

The only people I see talking about "ineffective altruism" are people who take issue with effective altruism. I think it's pretty ironic if you believe in "to each their own" and then self-harm by imagining aspersions from people who are not at all thinking about you!

> And if you take service to others generally as your standard, and judge any activity or goal that doesn't do this as "not important," then you have a criterion by which any individual person's happiness can be judged as "not important," and their pursuit of it can be dismissed as "ineffective"—ineffective in serving the welfare of humanity as a whole, or sentient beings everywhere, no matter how effective it is in letting them lead a happy, rewarding life.

If it is true that these people are so unvirtuous because they lead you to think that you should not be leading a happy virtuous life, does this then justify you in making those people think they aren't leading happy virtuous lives, because they can hypothetically say that you are living an ineffective life? I don't see any frame in which you can apply this principle evenhandedly without implicitly having self-serving carveouts. In which case, why should your self-serving carveout be considered superior to mine?

I'm saying right out: let's say we concede that effective altruists are doing effective altruism for maximally self-serving reasons (they are sentimental about shrimp, they feel way happier about malaria nets than youth teams). Does it not then mean that none of your arguments about calling others ineffective apply? And if they do apply, because being called ineffective makes you feel bad, why doesn't your counterargument, which makes an EA feel bad, fall under the same force of "don't make other people feel bad and ineffective"?

Expand full comment

I just attempted to reply to this, and got up to point 9 (nearly the end), and then my browser seems to have irretrievably lost the whole argument. Rather than spend the time for another try, let me simply say that I don't believe you are addressing the point I was actually attempting to make.

Expand full comment

This is exactly why EAs say to give 10% of your money to charity. If you do that, you’ll be way ahead of most Americans in your charitable giving, and then you can go ahead and spend the other 90% on whatever pleases you. They’re not saying, “you must donate every dollar above a bare subsistence level! No iPod for your wife! No books for you!” That would be a recipe for misery, and nobody would want to be an EA then.

Expand full comment

On one hand, I certainly see that that version lessens the damage caused by altruism. On the other hand, it strikes me as irrelevant to the question of principle that's involved. Let me quote, again, the lines I was commenting on:

Effective altruism does try to get beyond “I want to donate to my local college’s sports team”. I think this is because that’s an easy question. Usually if somebody says they want to donate there, you can ask “do you really think your local college’s sports team is more important than people starving to death in Sudan?” and they’ll think for a second and say “I guess not”.

It seems as if you want to say, on one hand, that feeding people starving to death in the Sudan is more important than buying books I want to read; but on the other hand, that if I have already donated 10% of my income to altruistic causes, feeding people starving to death in the Sudan becomes less important. And I can't see that as making any sense as a statement of an ethical principle.

If the ethical principle is that the needs of human beings everywhere, or of sentient beings generally, are more important than my own personal happiness, then I don't see why that argument should apply only up to a 10% "tax rate." It seems that it should apply to everything that is not a necessary expense of keeping me alive and able to work productively. And perhaps to more than that: If it costs too much to keep me alive (say, for example, that I need dialysis), could a case be made that my survival costs too much in terms of the lives of people who have less costly needs such as food and mosquito netting, and that I ought to be left to die? Or if the principle is that my own happiness is what's important, then I can certainly choose to help other people as part of my pursuit of happiness, but there need not be any set extent to which I do so, nor do I need to prioritize helping every other person equally.

I think that it's important to adopt the right principle in the first place, rather than to compromise with adopting the wrong principle and then trying to limit its extent. Otherwise we're faced with the old punchline: "We've settled what you are; I just want to haggle over the price."

Expand full comment

>Rand is attacking a straw man.

Hmm... I'm not sure when Rand wrote this. Peter Singer's "Famine, Affluence, and Morality", written in 1971 and published in 1972, could be viewed as the straw man coming to hideous life...

Expand full comment

In her collection of excerpts from her novels titled For the New Intellectual, published 1961, Rand included an excerpt from The Fountainhead under the title "The Soul of an Altruist," delivered in that novel by its central villain, Ellsworth Toohey; the word "altruism" appears in that speech, which first saw print in 1943. Rand also discusses altruism in "The Objectivist Ethics," presented at a symposium on "Ethics in Our Time" in 1959. There are at least mentions of altruism in Atlas Shrugged, published 1957; I can't take the time right now to search through its thousand-odd pages.

Expand full comment

Many Thanks! I'm not trying to create work. You've already established definitively that Rand wrote before Singer on this.

Expand full comment

The assassination of Shinzo Abe is an instance where terrorism worked:

"[The assassin] told investigators that he had shot Abe in relation to a grudge he held against the Unification Church (UC), a new religious movement to which Abe and his family had political ties, over his mother's bankruptcy in 2002.

The assassination brought scrutiny from Japanese society and media against the UC's alleged practice of pressuring believers into making exorbitant donations. Japanese dignitaries and legislators were forced to disclose their relationship with the UC, and [Prime Minister] Kishida was forced to reshuffle his cabinet amid plummeting public approval. On 31 August the LDP announced that it would no longer have any relationship with the UC and its associated organisations, and would expel members who did not break ties with the group. On 10 December, the House of Representatives and the House of Councillors passed two bills to restrict the activities of religious organisations such as the UC and provide relief to victims.

Abe's killing has been described as one of the most effective and successful political assassinations in recent history due to the backlash against the Unification Church that it provoked. The Economist remarked that "Mr Yamagami’s political violence has proved stunningly effective...Political violence seldom fulfils so many of its perpetrator’s aims." Writing for The Atlantic, Robert F. Worth agreed, describing Yamagami as "among the most successful assassins in history.""

https://en.wikipedia.org/wiki/Assassination_of_Shinzo_Abe

Expand full comment

Not all assassinations are examples of terrorism. This guy isn't part of a larger group who will be able to maintain a campaign of terrorism.

Expand full comment

>Not all assassinations are examples of terrorism.

I'm confused. Wikipedia's definition of terrorism ( https://en.wikipedia.org/wiki/Terrorism ) is:

>Terrorism, in its broadest sense, is the use of violence against non-combatants to achieve political or ideological aims.

I could see the assassination of a general as not meeting this definition, since a general is a combatant. Other than assassinations of combatants, how can assassinations fail to meet the definition of terrorism? They are certainly violence. My guess is that they almost always attempt to achieve a political or ideological aim (ok, if someone assassinates a politician because the politician is having an affair with the assassin's spouse, I'll except that as well).

I don't mean this to be a criticism of assassinations. If there is going to be politically motivated violence, I'd rather it happen by the handfuls than by the hundreds of thousands (albeit sometimes one assassination triggers a WWI...). In and of itself, assassinations look to me like very small scale terrorism.

Expand full comment

Wikipedia's definition of terrorism is wrong (a correct bit comes later: "There are various different definitions of terrorism"). The government arresting people who block traffic in front of polling places uses violence to achieve a political end, but it's not terrorism because terror is not its aim.

Expand full comment

Many Thanks!

>The government arresting people who block traffic in front of polling places uses violence to achieve a political end, but it's not terrorism

Fair, though I'd say that, depending on how roughly the arrests take place, it could be intermediate between using violence and threatening violence. I do agree that it is either not terrorism or very far from a central example.

>because terror is not its aim

Not sure to what extent I agree or disagree here.

Yet another quote from the Wikipedia page:

>In 2006, it was estimated that there were over 109 different definitions of terrorism.

Yetch.

Expand full comment
Jun 1·edited Jun 1

I agree with your first sentence but not the second. If Abe had been killed in a yakuza hit, I think that wouldn't have been terrorism, even though it would have been carried out by a larger group. The definition of terrorism is obviously very loaded, but my first-approximation definition would be "murder as protest". Your point seems to be that it's not terrorism unless it's actually inciting terror, and a one-off attack doesn't incite terror. That's a fair criticism in general, but in this context it's irrelevant, since it doesn't matter whether anti-AI violence is technically terrorism, and all of the points made in the post apply equally to non-terrorist violence.

Expand full comment

I do think a one-off attack (say, by someone whose mom was run over by a self-driving car) wouldn't have much effect on AI. You would need a larger campaign to be effective.

Expand full comment

My beef with EA is this: I am confident that altruism, and the desire to get value for [precursors of] money, pre-date the invention of the wheel by hundreds of millennia, and are still universal. I selectively and intelligently donate to charity. I am not an Effective Altruist. It's as if a bunch of Bay Area techbros declared themselves Axiliar Rotarians and tried to prove that more and better use of wheeled vehicles occurred in San Francisco than in the rest of the world. If the only basis of the claim is the self-labelling, it doesn't seem worth even investigating.

Expand full comment

> his model is things like Israel bombing Iraq’s nuclear program in the context of global norms limiting nuclear proliferation

Note that according to Wikipedia, the UN Security Council issued a resolution "strongly condemning" this bombing, and Israel's closest ally (the US) voted for that condemnation and temporarily suspended arms deliveries to Israel.

So even though there is theoretically a norm against proliferation, in practice the norm is the opposite, to condemn attempts to stop proliferation by force. (At least, if Israel is the one using the force)

Expand full comment
May 31·edited May 31

"Yudkowsky supports international regulations backed by military force - his model is things like Israel bombing Iraq’s nuclear program in the context of global norms limiting nuclear proliferation" I hope Yudkowsky model is a bit more sophisticated than this. Global norms limiting nuclear proliferation exist and they are codified by a treaty https://en.wikipedia.org/wiki/Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons . Israel is one out of 4 UN members who are NOT parties to this treaty, so whatever we think of Iran and its violations of this treaty, Israel has a pretty flimsy right to enforce the treaty. Israel itself has built its nuclear program with the assistance if several states which were part of the treaty and violated the treaty by providing this assistance. So I do not think that this is a very good example of rules based global norms enforcement.

Expand full comment

>Israel is one out of 4 UN members who are parties to this treaty

Typo? Missing "not"?

>Four UN member states have never accepted the NPT, three of which possess or are thought to possess nuclear weapons: India, Israel, and Pakistan. In addition, South Sudan, founded in 2011, has not joined.

Expand full comment

Yes, thank you

Expand full comment

Many Thanks!

Expand full comment

I'm disappointed in this post. You addressed all of Stone's weakest arguments (and I agree they were bad and you answered them sufficiently). But you ignore his two strongest and most interesting points: 1. EA completely ignores the possibility of the soul/afterlife/God, which might radically alter the value calculus, and 2. EA necessitates choosing a very specific value function, while ignoring the implications of the value function that it *does* choose.

Expand full comment

It appears that the premise that morality should be derived from evolution (see section VI in Scott's post) is not Stone’s position. Stone’s position is that animal suffering is not a moral wrong. On the other hand, if someone takes pleasure in animal suffering, that individual is likely a bad person, presumably because a person who takes pleasure in animal suffering is likely to also take pleasure in human suffering. Depending on what implications you draw from these propositions, there may be a role for laws against cruelty to animals, restrictions on factory farming, and the like.

Essentially, Stone’s objection to effective altruists trying to find the most effective way to reduce animal suffering is that Stone doesn’t think there is a moral imperative to reduce animal suffering in the first place.

Expand full comment

I thought Scott just happens to dislike animal suffering and wants to be a person who dislikes it. I don't think rules against animal cruelty are derived from evolution; parasitic wasps certainly don't care about any suffering they inflict on their hosts.

Expand full comment

I’m new to this Substack and this EA idea. My initial take on EA is that if somebody is giving away 10% of their income and is directing that money towards the most effective charities then it’s hard to see the problem, and these people are better than most, including me.

Expand full comment
founding

While I also am skeptical of the "blowing up data centers" strategy, I think it is important to note that "terrorism" is the wrong framing here. Stone erred in using that term, and Scott should probably not have followed suit.

Blowing up data centers because you want to destroy the data in them is not terrorism. It is *sabotage*. And while sabotage has few outright victories to claim in its own right, it has been a useful adjunct in the pursuit of victory. See e.g. https://en.wikipedia.org/wiki/Norwegian_heavy_water_sabotage

And for that matter, killing people making essential contributions to an adversary's work, because the loss of those people will materially impair the enemy's plans, is also not terrorism. It is assassination, which again claims few outright victories but has had a substantial impact on the world.

These distinctions are important, because the reasons terrorism almost always fails are not generally applicable to sabotage and assassination. Those have to be assessed on their own terms. Hamas on 10/7 killed about a thousand innocent people, including three hundred participants in a music festival, and kidnapped a couple hundred more. None of these people were targeted for their essential role in the survival of Israel, and so that was simply terrorism. It failed for the reason terrorism usually fails: a thousand or so deaths almost never actually terrifies a nation of millions to the extent of making the sort of concessions terrorists usually demand.

If someone shoots up the right party in San Francisco, then spreads out to kill or kidnap almost a thousand other select individuals around the Bay Area, that might materially deprive AI research programs of the specialized talent and capital they need to make rapid progress on AI, and so buy another decade to find a more enduring solution. Or it may not, and as I said I'm skeptical of the strategy. But that's the metric on which it would have to be evaluated, not "terrorism, so fail".

Same goes for material sabotage. AI development is critically dependent on exactly three chip fabs in all the world, any of which would be very tedious and expensive to rebuild. No terrorist has ever had such a vulnerability to target in pursuit of their goals. Maybe blowing up the high-end chip fabs wouldn't slow things down enough to matter, but you'd have to actually do the math on that rather than default to "terrorism, so fail".

And as for blowback, that's going to manifest differently. First, because e.g. blowing up chip fabs might be accomplished with few or no casualties - plant the bombs, then call in a warning. But more than that, blowback mostly hurts people who are pursuing positive goals that require them to come out into the open to enjoy them. Hamas wants to create Greater Palestine, and while the Hamas operatives who go around killing Jews might be able to slip away undetected, anyone who tries to live in Greater Palestine becomes an obvious target for retaliation. If someone's goal is entirely negative, e.g. "No AI", then an attack which destroys the critical resources necessary to develop AI is a success even if the perpetrators are quickly hunted down and/or have to spend the rest of their lives in hiding.

The future existence of GPT-7 rests on a much more precarious foundation than the continued existence of Israel, so strategies of violence which could not hope to defeat Israel might still be effective at preemptively defeating GPT-7, at least for a time. I don't think that's the case, but it's not *obviously* wrong, so it seems like something EA needs to do the math on if it's taking AI risk seriously.

Expand full comment

>Same goes for material sabotage. AI development is critically dependent on exactly three chip fabs in all the world, any of which would be very tedious and expensive to rebuild. No terrorist has ever had such a vulnerability to target in pursuit of their goals.

Very true! As someone who wants to _see_ AGI, and have a nice quiet chat with a 21st-century HAL 9000, I rather hope that the vulnerability isn't exploited.

I wouldn't even phrase it as "blowing up" chip fabs. A handful of dust in the wrong place, a couple of holes in some HEPA filters, a bypass to the surge protectors on the power lines, any of a hundred vulnerabilities ... those fabs are _delicate_.

Expand full comment

So to summarize the negative waves regarding EA here: maybe EAers don’t have the One Weird Trick that is the all-time best in the eyes of everyone. A little humility wouldn’t hurt. But when the dust settles, Scott is still a thoroughly decent and lovable guy. He really doesn’t need to rise up to defend against every criticism that inevitably falls out of the SBF meltdown. All he needs to do is go on doing what he believes is best, joyfully, and enjoy the trials and tribulations of bringing up his children.

We’re all so very different, after all.

Expand full comment

I found the EA positive safety action group - really nice guys (natch) - and they have a well-thought-out plan to save the world through targeted cost-effective terrorism. Sorry, I promised I wouldn’t tell... what happens at EA Global, etc.

Expand full comment

I’d rather see Scott put the effort into responding to good faith critiques, but it’s his Stack and maybe this got under his skin.

Expand full comment

Cool, you got me. I freely admit that in fully altruist nations like, e.g., the USSR or Imperial Japan, the leaders are better off.

Expand full comment