923 Comments

Good list.

A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”

I think many people would have a similar response to this post.

Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)


OK, this EA article persuaded me to resubscribe. I love it when someone causes me to rethink my opinion.

Nov 28, 2023·edited Nov 28, 2023

I think EA is great and this is a great post highlighting all the positives.

However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EAers were wooed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples, as they highlight blind spots in the community that the community doesn't seem to be confronting.

And maybe that's unfair. But EA signed up to be held to a higher standard.


Impressive! By the way, I've slain and will continue to slay billions of evil Gods who prey on actually existing modal realities where they would slay a busy beaver of people – thus, if I am slightly inconvenienced by their existence, every EA advocate has a moral duty to off themselves. Crazy? No, same logic!


The whole point can be summed up by "doing things is hard, criticism is easy."

I continue to think that EA's pitch is that they're uniquely good at charity and they're just regular good at charity. I think that's where a lot of the weird anger comes from - the claim that "unlike other people who do charity, we do good charity" while the movement is just as susceptible to the foibles of every movement.

But even while thinking that, I have to concede that they're *doing charity* and doing charity is good.


small typo: search for "all those things"


I just wish people would properly distinguish between Effective Altruism and AI Safety. Many EAs are also interested in AI safety. Many safety proponents are also effective altruists. But there is nothing that says to be interested in AI safety you must also donate malaria nets or convert to veganism. Nor must EAs accept doomer narratives around AI or start talking about monosemanticity.

Even this article is guilty of it, just assigning the drama around OpenAI to EA when it seems much more accurate to call it a safety situation (assuming that current narratives are correct, of course). As you say, EA has done so much to save lives and help global development, so it seems strange to act as though AI is still such a huge part of what EA is about.


I don't identify as an EA, but all of my charitable donations go to global health through GiveWell. As an AI researcher, it feels like the AI doomers are taking advantage of the motte created by global health and animal welfare, in order to throw a party in the bailey.


Genuine question: how would any of the things cited as EA accomplishments have been impossible without EA?


What does the counterfactual world without EA actually look like? I think some of the anti-EA arguments are that the counterfactual world would look more like this one than you might expect, but with less money and moral energy being siphoned away towards ends that may prove problematic in the long term.


Doth protest too much.

No one who follows EA even a little bit thinks it has all gone wrong, accomplished nothing, or installed incompetent doomerism into the world. And certainly the readers of Astral Codex Ten know enough about EA to distinguish between intelligent and unintelligent critique.

What I'd like to hear you respond to is something like Ezra Klein's recent post on Threads. For EA, he's as sympathetic a mainstream voice as it comes. And yet he says, "This is just an annus horribilis for effective altruism. EA ended up with two big swings here. One of the richest people in the world. Control of the board of the most important AI company in the world. Both ended in catastrophe. EA prides itself on consequentialist thinking but when its adherents wield real world power it's ending in disaster. The movement really needs to wonder why."

Your take on this is, no biggie? The screwups are minor, and are to be expected whenever a movement becomes larger?


I agree with the general point that EA has done a lot of good and is worth defending, but I think this gives it too much credit, especially on AI and other political influences. I suspect a lot of those are reverse causation - the kind of smart, open-minded techy people who are good at developing new AI techniques (or the YIMBY movement) also tend to be attracted to EA ideas, and I think assuming EA as an organization is responsible for anything an EA-affiliated person has done is going too far.

(That said, many of the things listed here have been enabled or enhanced by EA as an org, so while I think you should adjust your achievement estimates down somewhat they should still end up reasonably high)


It's frustrating to hear people concerned about AI alignment being compared to communists. Like, the whole problem with the communists was they designed a system that they thought would work as intended, but didn't foresee the disastrous unintended consequences! Predicting how a complex system (like the Soviet economy) would respond to rules and constraints is extremely hard, and it's easy to be blindsided by unexpected results. The challenge of AI alignment is similar, except much more difficult with much more severe consequences for getting it wrong.


> Am I cheating by bringing up the 200,000 lives too many times?

Yes, absolutely. The difference is that developing a cure for cancer or AIDS or whatever will solve the problem *permanently* (or at least mitigate it permanently). Saving lives in impoverished nations is a noble and worthwhile goal, but one that requires continuous expenditures for eternity (or at least the next couple centuries, I guess).

And on that note, what is the main focus of EA? My current impression is that they're primarily concerned with preventing the AI doom scenario. Given that I'm not concerned about AI doom (except in the boring localized sense, e.g. the Internet becoming unusable due to being flooded by automated GPT-generated garbage), why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely?

Nov 28, 2023·edited Nov 28, 2023

I found the source of the Funding Directed by Cause Area bar graph; it's from this post on the EA forum: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data . Two things to note:

1. the post is from August 14, 2022, before the FTX collapse, so the orange bar (Longtermism and Catastrophic Risk Prevention) for 2022 might be shorter in reality.

2. all the information in the post comes from this spreadsheet (https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881) maintained by the OP, which also includes 2023 data showing a further decrease in longtermism and XR funding.


No critical commentary, just want to say this is excellent and reflects really well what's misguided about the criticisms of EA.


> It’s only when you’re fighting off the entire world that you feel truly alive.

SO true, a quote for the ages


I agree. Also: EA can refer to at least three things:

- the goal of using reason and evidence to do good more effectively,

- a community of people (supposedly) pursuing this goal, or

- a set of ideas commonly endorsed by that community (like longtermism).

This whole article is a defense of EA as a community of people. But if the community fell apart tomorrow, I'd still endorse its goal and agree with many of its ideas, and I'd continue working on my chosen cause area. So I don't really care about the accomplishments of the community.


Unfortunately, and that's a very EA thought, I am pretty sceptical that EA saved 200,000 lives counterfactually. AMF's work was funged by the Gates Foundation, which decided to fund more US education work after stopping its malaria work due to the tremendous amounts of funding coming in from outside donors.


> [Sam Altman's tweets] I don't exactly endorse this Tweet, but it is . . . a thing . . . someone has said.

OK, then. Sam Altman apparently has a sense of humor, and at least occasionally indulges in possibly-friendly trolling. Good to know.


200,000 sounds like a lot, but there are approximately 8 billion of us. It would take over 15,000 years to give every person one minute of your time. Who are these 200,000? Why were their lives at risk without EA intervention? Whose problems are you solving? Are you fixing root causes or symptoms? Would they have soon died anyway? Will they soon die anyway? Are all lives equal? Would the world have been better off with more libraries and fewer malaria interventions? These are questions for any charity, but they're more easily answered by the religious than by the intellectual, which makes things easier for them, as they don't need to win arguments on the internet. EA will always have it harder because it tries to justify what it does with reason.
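(For what it's worth, that "15,000 years" figure checks out; here's the back-of-envelope arithmetic, purely as illustration:)

```python
# One minute of attention per person, for ~8 billion people,
# converted into years of round-the-clock time.
people = 8_000_000_000
minutes_per_year = 60 * 24 * 365.25   # ~525,960 minutes in a year
print(people / minutes_per_year)      # ~15,210 years
```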

Probably a well-worn criticism, but I'll tread the path anyway: ivory-tower eggheads are impractical, do come up with solutions that don't work, and enshrine as sacred ideas that don't intuitively make sense. All while feeling intellectually superior. The vast majority of the non-WEIRD world are living animalistic lives. I don't mean that in a negative sense. I mean that they live according to instinct: my family's lives are more important than my friends' lives, my friends' lives are more important than strangers' lives, my countrymen's lives are more important than foreigners' lives, human lives are more important than animal lives. And like lions hunting gazelles, they don't feel bad about it. But I suspect you do, and that's why you write these articles.

If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world to approve of what you're doing and how you're doing it, give up. Many never will.


What do you think of Jeremiah Johnson's take on the recent OpenAI stuff? "AI Doomers are worse than wrong - they're incompetent"

https://www.infinitescroll.us/p/ai-doomers-are-worse-than-wrong-theyre?lli=1&utm_source=profile&utm_medium=reader2

(Constrained in scope to what he calls "AI Doomers" rather than EA writ large, though he references EA throughout)


"Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat."

I hope that includes all Yum! brands, not just Pepsi. Otherwise, I'm thinking you probably don't have much to crow about if Pepsi agrees to use cruelty-free meat in their...I dunno...meat drinks, I guess, but meanwhile KFC is still skinning and flaying chickens alive by the millions.


I stopped criticizing EA a while back because I realized the criticism wasn't doing anything worthwhile. I was not being listened to by EAs, and the people who were listening to me were mostly interested in beating up EA as a movement, which was not a cause I thought I ought to contribute to. Insofar as I thought that, though, it was because of this kind of stuff and not the more esoteric forms of intervention about AI or trillions of people in the future. The calculation was something like: how many bednets are some rather silly ideas about AI worth? And the answer is not zero bednets! Such ideas do some damage. But it's also less than the sum total of bednets EA has sent over, in my estimation.

Separately from that, though, I am now convinced that EA will decline as a movement absent some significant change. And I don't think it's going to make significant changes or even has the mechanisms to survive and adapt. Which is a shame. But it's what I see.


Totally fair that EA succeeds at its stated goals. I'm sure negative opinions run the gamut, but for my personal validation I'll throw in another: I think it's evil because it's misaligned with my own goals. I cannot deny the truth of Newtonian moral order and would save the drowning child and let those I've never heard of die because I think internal preference alignment matters, actually.

Furthermore, it's a "conspiracy" because "tradeoff for greater utils (as calculated by [subset of] us)" is well accepted logic in EA (right?). This makes the behavior of its members highly unpredictable and prone to keeping secrets for the greater good. This is the basic failure mode that led to SBF running unchecked -- his stated logic usually did check out by [a reasonable subset of] EA standards.


The Coasean problem with EA: it discounts, if not outright disregards, transaction costs and how those costs increase as knowledge becomes less perfect, thus reducing the net benefit of a transaction.

In other words, without making extraordinary assumptions about the TOTAL expected value and utility of a charitable transaction, EA must heavily discount how much the transaction costs (of determining the counterparty's expected value and utility, a subjective measure) offset the benefit of the transaction. In many instances, those transaction costs will be exorbitant, since they turn on a subjective measure, and will therefore exceed the benefit, producing a net negative "effect."

One is left therefore to imagine how EA can ever produce an effective result, according to those metrics, in the absence of perfect information and thus zero transaction costs.


I don't identify as an EA "person" but I think the movement substantially affected both my giving amounts and priorities. I'm not into the longtermism stuff (partly because I'm coming from a Christian perspective and Jesus said "what you do to the least of them you do to me," and not "consider the 7th generation") but it doesn't offend me. I'm sure I'm not alone in having been positively influenced by EA without being or feeling fully "in."


In the present epistemic environment, being hated by the people who hate EA is a good thing. Like, you don't need to write this article, just tell me Covfefe Anon hates EA, that's all I need. It doesn't prove EA is right or good, or anything, but it does get EA out of the default "not worth the time to read" bucket.


It's hard to argue against EA's short-termist accomplishments (the longtermist ones remain uncertain), as well as against the core underlying logic (10% for top charities, cost-effectiveness, etc.). That being said, how would you account for:

- the number of people who would be supportive of (high-impact) charities, but for whom EA and its public coverage ruined the entire concept/made it suspicious;

- the number of EAs and EA-adjacent people who lost substantial sums of money on/because of FTX, lured by the EA credentials (or the absence of loud EA criticisms) of SBF;

- the partisan and ideological bias of EA;

- the number of talented former EAs and EA-adjacent people whose bad experiences with the movement (office power plays, being mistreated) resulted in their burnout, other mental health issues, and aversion towards charitable work/engagement with EA circles?

If you take these and a longer time horizon into account, perhaps it could even mean "great logic, mixed implementation, some really bad failure modes that make EA's net counterfactual impact uncertain"?


Control F turns up no hits for either Chesterton or Orthodoxy, so I'll just quote this here.

"As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon my mind— the impression that Christianity must be a most extraordinary thing. For not only (as I understood) had Christianity the most flaming vices, but it had apparently a mystical talent for combining vices which seemed inconsistent with each other. It was attacked on all sides and for all contradictory reasons. No sooner had one rationalist demonstrated that it was too far to the east than another demonstrated with equal clearness that it was much too far to the west. No sooner had my indignation died down at its angular and aggressive squareness than I was called up again to notice and condemn its enervating and sensual roundness. […] It must be understood that I did not conclude hastily that the accusations were false or the accusers fools. I simply deduced that Christianity must be something even weirder and wickeder than they made out. A thing might have these two opposite vices; but it must be a rather queer thing if it did. A man might be too fat in one place and too thin in another; but he would be an odd shape. […] And then in a quiet hour a strange thought struck me like a still thunderbolt. There had suddenly come into my mind another explanation. Suppose we heard an unknown man spoken of by many men. Suppose we were puzzled to hear that some men said he was too tall and some too short; some objected to his fatness, some lamented his leanness; some thought him too dark, and some too fair. One explanation (as has been already admitted) would be that he might be an odd shape. But there is another explanation. He might be the right shape. Outrageously tall men might feel him to be short. Very short men might feel him to be tall. Old bucks who are growing stout might consider him insufficiently filled out; old beaux who were growing thin might feel that he expanded beyond the narrow lines of elegance. Perhaps Swedes (who have pale hair like tow) called him a dark man, while negroes considered him distinctly blonde. Perhaps (in short) this extraordinary thing is really the ordinary thing; at least the normal thing, the centre. Perhaps, after all, it is Christianity that is sane and all its critics that are mad— in various ways."


Does Bill Gates count as an EA?

He certainly gives away a lot of money, and from what I know about the Gates Foundation they put a lot of effort into trying to ensure that most of it is optimally spent in some kind of DALYs-per-dollar sense. He's been doing it since 1994, he's given away more money than anyone else in history, and by their own estimates (which seem fair to compare with Scott's estimates) has saved 32 million lives so far.

This page sets out how the Gates Foundation decides how to spend their money. What's the difference between this and EA? https://www.gatesfoundation.org/ideas/articles/how-do-you-decide-what-to-invest-in

Is it just branding? Is EA a bunch of people who decided to come along later and do basically the same thing as Bill Gates except on a much smaller scale and then pat themselves on the back extra hard?


So I'm pretty much a sceptic of EA as a movement despite believing in being altruistic effectively as a core guiding principle of my life. My career is devoted to public health in developing countries, which I think the movement generally agrees is a laudable goal. I do it more within the framework of the traditional aid complex, but with a sceptical eye to the many truly useless projects within it. I think that, in ethical principle, the broad strokes of my life are in line with a consequentialist view of improving human life in an effective and efficient way.

My question is: what does EA as a movement add to this philosophy? We already have a whole area of practice called Monitoring and Evaluation. Economics has quantification of human lives. There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that. From my perspective - and I share this hoping to be proved wrong - EA is largely a way of gaining prestige in Silicon Valley subcultures, and a way of justifying devoting one's life to the pursuit of money based on the assumption, presented without proof, that when you get that money you'll do good with it. It seems like EA exists to justify behaviour like that at FTX by saying 'look it's part of a larger movement therefore it's OK to steal the money, net lives saved is still good!' It's like a doctor who thinks he's allowed to be a serial killer as long as he kills fewer people than he saves.

The various equations, the discount rates, the jargon, the obsession with the distant future, are all off-putting to me. Every time I've engaged with EA literature it's either been fairly banal (but often correct!) consequentialist stuff or wild subculture-y speculation that I can't use. I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.


IMO EA should invest in getting regulatory clarity for prediction markets. The damage done to the world by the absence of a collective sense-making apparatus is enormous.

Nov 28, 2023·edited Nov 28, 2023

As an enthusiastic short-termist EA, my attitude to long-termist EA has gone in the past year from "silly but harmless waste of money" to "intellectually arrogant bollocks that has seriously tarnished a really admirable and important brand".

Working out the most efficient ways to improve the world here and now is hard, but not super-hard. I very much doubt that malaria nets are actually the single most efficient place that I could donate my money, but I bet they're pretty close, and identifying them and encouraging people to donate to them is a really valuable service.

Working out the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably. I think that the two recent EA-adjacent scandals were specifically long-termist-EA-adjacent, and while neither of them was directly related to the principles of EA, I think both are very much symptomatic of the arrogance and insufficient learned epistemic helplessness that attract people to long-termist EA.

I think that Scott's list of "things EA has accomplished, and ways in which it has made the world a better place" is incredibly impressive, and it makes me proud to call myself an effective altruist. But look down that list and remove all the short-termist things: most of what's left seems either tendentious (can the EA movement really claim credit for the key breakthrough behind ChatGPT?) or nothingburgers (funding groups in DC trying to reduce risks of nuclear war, prediction markets, AI doomerism). I'm probably exaggerating slightly, because I'm annoyed, but I think the basic gist of this argument is pretty unarguable.

All the value comes from the short-termists. Most of the bad PR comes from the longtermists, and they also divert funds from effective to ineffective causes.

My hope is that the short-termists are to some extent able to cut ties with the AI doomers and to reclaim the label "Effective Altruists" for people who are doing things that are actually effectively altruistic, but I fear it may be too late for that. Perhaps we should start calling ourselves something like the "Efficiently Charitable" movement, while going on doing the same things?


I think this is a good list, even though it counts PR wins such as convincing Gates. 200k lives saved is good, full stop.

However, something I find hard to wrap my head around is that the most effective private charities, say the Bill & Melinda Gates Foundation (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373372/), have spent their money and have had incredible impact, orders of magnitude more than EA has. They define their purpose narrowly and cleave to evidence-based giving.

And yet, they're not EAs. Nobody would confuse them either. So the question is less whether "EAs have done any good in the world" (the answer is of course yes) than whether fights like the boardroom drama and SBF and the rest actively negate the benefits conferred, on a net basis. The latter isn't a trivial question, and if the movement is an actual movement, instead of a lot of people kind of sort of holding a philosophy they sometimes live by, it requires a stronger answer than "yes, but we also did some good here".


I don't understand why you put Anthropic and RLHF on this list. These are both negatives by the lights of most EAs, at least by current accounting.

Maybe Anthropic's impact will pay off in the future, but gathering power for yourself, and making money off of building dangerous technologies are not signs that EA has had a positive impact on the world. They are evidence against some form of incompetence, but I doubt that by now most people's concerns about the EA community are that the community is incompetent. Committing fraud at the scale of FTX clearly requires a pretty high level of a certain kind of competence, as did getting into a position where EAs would end up on the OpenAI board.


It is funny how today's posts from Freddie and Scott are "talking past each other." One is so focused on disparaging utilitarianism that even anti-utilitarians might think it was too harsh, while the other points to many good things EA did without ever getting to the point of why we need EA as presently constituted, in the form of this movement. And part of that is conflating the definition of the movement as both 1) a rather specific group of people sharing some ideological and cultural backgrounds, and 2) the core tenets of evidence-based effectiveness evaluation that are clearly not exclusive to the movement.

I mean, you could simply argue that organizing people around a non-innovative but still sound, common-sensical idea that is not followed everywhere has its merits, because it helps make some things that were obscure become explicit. Fine. But it still doesn't necessarily mean that EA is the correct framing if it causes so much confusion.

"Oh but that confusion is not fair!..." Welcome to politics of attention. It is inevitable to focus on what is unique about a movement or approach. People choose to focus not on malaria (there were already charities doing that way before EA) but on the dudes seemingly saying "there's a 0.000001% chance GPT will kill the world, therefore give me a billion dollars and it will still be a bargain", because only EA as a movement considered this type of claim to be worthy of consideration under the guise of altruism.

I actually support EA, even though I don't do nearly enough to consider myself charitable. I just think one needs to go deeper into the reasons for criticism.

Nov 28, 2023·edited Nov 28, 2023

Zizek often makes the point that the history of Christianity is a reaction to the central provocation of Christ, namely that his descent to earth and death represents the changing of God the Father into the Holy Spirit, kept alive by the community of believers. In the same way the AI doomerists are a predictable reaction to the central provocation of the Effective Altruists. The message early on was so simple: would you save a drowning child? THEY REALLY ARE DROWNING AND YOU CAN MAKE A DIFFERENCE NOW.

The fact that so many EAs are drawn to Bostrom and MacAskill and whoever else is a sign that many of them were really in it to prove how smart they are. That doesn't make me reject EA as an idea, but it does make me hesitant to associate myself with the name.

Nov 28, 2023·edited Nov 28, 2023

Thank you for writing this. It's easy to notice the controversial failures and harder to notice the steady march of small (or not-so-small) wins. This is much needed.

A couple notes about the animal welfare section. They might be too nitty-gritty for what was clearly intended to just be a quick guess, so feel free to ignore:

- I think the 400 million number for cage-free is an underestimate. I'm not sure where the linked RP study mentions 800 million — my read of it is that total commitments at the time in 2019 (1473 total commitments) would (upon implementation) impact a mean of 310 million hens per year. The study estimated a mean 64% implementation rate, but also there are now over 3,000 total cage-free commitments. So I think it's reasonable to say that EA has convinced farms to switch many billions of chickens to cage-free housing in total (across all previous years and, given the phrasing, including counterfactual impact on future years). But it's hard to estimate.

- Speaking of the 3,000 commitments, that's actually the number for cage-free, which applies to egg-laying hens only. Currently, only about 600 companies globally have committed to stop selling low-welfare chicken meat (from chickenwatch.org).

- Also, the photo in this section depicts a broiler shed, but it's probably closer to what things look like now (post-commitments) for egg-laying hens in a cage-free barn rather than what they used to look like. Stocking density is still very high in cage-free housing :( But just being out of cages cuts total hours of pain in half, so it's nothing to scoff at! (https://welfarefootprint.org/research-projects/laying-hens/)

- Finally, if I may suggest a number of my own: if you take the estimates from the welfare footprint project link above and apply it to your estimate for hens switched to cage-free (400 million), you land at a mind-boggling three trillion hours, or 342 million years, of annoying, hurtful, and disabling pain prevented. I think EA has made some missteps, but preventing 342 million years of animal suffering is not one of them!
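(To spell out the arithmetic behind that last number: this is just the estimates above run through a calculator, with the ~7,500 hours-per-hen figure implied by them rather than quoted directly from the study.)

```python
# Back-of-envelope check of the "342 million years" claim (illustrative only).
hens = 400_000_000                    # hens switched to cage-free (estimate above)
total_hours = 3_000_000_000_000       # ~3 trillion hours of pain prevented
hours_per_hen = total_hours / hens    # ~7,500 hours per hen (implied figure)
years = total_hours / (24 * 365.25)   # convert hours to years
print(hours_per_hen, years / 1e6)     # ~7500.0 hours/hen, ~342.2 million years
```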

Nov 28, 2023·edited Nov 29, 2023

If you are interested in global poverty at all, GiveDirectly had a true 1-to-1 match, though the matching has now finished.

You can donate here if you choose: https://www.givedirectly.org/givingtuesday2023/

This was the only time GiveDirectly has messaged me, and I at least am glad that I could double my impact.

Edit: updated comment to reflect all the matching has been done, also to erase my shameful mistake about timing.


EA makes much sense given mistake theory but less given conflict theory.

If you think that donors give to wasteful nonprofits because they’ve failed to calculate the ROI in their donation, then EA is a good way to provide more evidence based charity to the world.

But what if most donors know that most charities have high overhead and/or don’t need additional funds, but donate anyway? What if the nonprofit sector is primarily not what it says it is? What if most rich people don’t really care deeply about the poor? What if most donors do consider the ROI — the return they get in social capital for taking part in the nonprofit sector?

From this arguably realist perspective on philanthropy, EA may be seen to suffer the same fate as other philanthropic projects: a mix of legitimate charitable giving and a way to hobnob with the elite.

It’s still unknown whether the longtermist projects represent real contributions to humanity or just a way to distribute money to fellow elites under the guise of altruism. And maybe it will always be unknown. I imagine historians in 2223 debating whether 21st century x-risk research was instrumental or epiphenomenal.


Correction to footnote 13: Anthropic's board is not mostly EAs. Last I heard, it's Dario, Daniela, Luke Muehlhauser (EA), and Yasmin Razavi. They have a "long-term benefit trust" of EAs, which by default will elect a majority of the board within 4 years (electing a fifth board member soon—or it already happened and I haven't heard—plus eventually replacing Daniela and Luke), but Anthropic's investors can abrogate the Trust.

(Some sources: https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2, https://www.lesswrong.com/posts/6tjHf5ykvFqaNCErH/anthropic-s-responsible-scaling-policy-and-long-term-benefit?commentId=SoTkntdECKZAi4W5c.)


What's your response to Robin Hanson's critique that it's smarter to invest your money so that you can do even more charity in 10 years? AFAIK the only time you addressed this was ~10 years ago in a post where you concluded that Hanson was right. Have you updated your thinking here?
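(For context, the mechanical core of Hanson's argument is just compounding: invested money grows, so the same gift is nominally larger later. A toy sketch; the 7% real return is an assumption of mine, not Hanson's figure:)

```python
# Toy "give now vs. invest and give later" comparison (illustrative assumptions).
donation = 1_000       # dollars available today
real_return = 0.07     # assumed annual real return, compounded
years = 10
give_later = donation * (1 + real_return) ** years
print(round(give_later))  # ~1967: nearly twice as much to donate in year 10
# The counterargument hinges on whether charitable opportunities get more
# expensive or less effective over time faster than investments grow.
```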


> I think the AI and x-risk people have just as much to be proud of as the global health and animal welfare people.

I disagree. The global health people have actual accomplishments they can point to. It's not just speculative.


I am a bit uneasy about claiming some good is equivalent to, say, curing AIDS or ending gun violence: these are things with significant second-order effects. For example, pending better information, my prior has it that the greatest impact of gun violence isn't even the QALYs lost directly in shootings, but the vastly greater number of people being afraid (possibly of even, e.g., going outside at night), the greater number of people injured, decreased trust in institutions and your fellow man, young people falling into a life of crime rather than becoming productive members of society, etc., etc. Or, curing AIDS would not just save some people from death or expensive treatment, but would erase one barrier to condom-free sex, which most people would profess a preference for (that's a lot of preference-satisfaction when considering the total number of people who would benefit), though here there's also an obvious third-order effect of an increased number of unwanted pregnancies (which, as a matter of fact, doesn't even come close to justifying not curing AIDS, but it's there).

Now, I'm entirely on board with the idea of shutting up and calculating, trying your best to estimate the impact (or "something like that": I've been drawn to virtue ethics lately, but a wise, prudent, just, and brave person - taking up this fight when it goes so far against social conventions requires bravery, too - could not simply wave away consequentialist reasoning as though it were nothing), and to do that you have to have some measure of impact, like QALYs. Right. But I think the strictly correct way of expressing that is in abstract QALYs that by construction don't have higher-order effects of note. Comparing some good thing to some other thing, naively, without considering second-order effects when those are significant or greater than the first-order effects, seems naive.

And by my reckoning that's also part of the pushback that EA faces in general: humans notoriously suffer from scope neglect, and when thinking about the impact of gun violence, they don't think of gun fatalities times n (most of the dead were gangsters who had it coming anyway), but of the second- and greater-order impacts they themselves experience vividly, and focusing on the exact number of dead seems wrongheaded. And in this case they might be right, too. (Of course, EA calculations can and should factor in nth-order effects if they do seem like they would matter, and I would hazard a guess that's what EAs often do, but when people see the aforementioned kinds of comparisons, in my opinion they would be right to conclude they are naive.)

Which reminds me of another argument in favor of virtue ethics: practical reasoning is often "newcomblike" (https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm), that is to say the method of your reasoning matters, just like it does in the original paradox. "Ends don't justify the means" isn't a necessary truth: it's a culturally evolved heuristic that is right more often than not, making some of us averse to nontrivial consequentialist reasoning. "I have spotted this injustice, realized it's something I can actually do something about [effectiveness of EA implicitly comes in here], and devoted myself to the task of righting the wrong" is an easier sell than "you can save a life for n dollars".


Wow, it's gotta be tough out there in the social media wilderness. Anyway, just dropped by to express my support for EA; hope the current shitstorm passes and the [morally] insane people of Twitter move on to the next cause du jour.


I think it's worth asking why EA seems to provoke such a negative reaction -- a reaction we don't see with charitable giving in general or just generic altruism. I mean claiming to be altruistic while self-dealing is the oldest game in town.

My theory is that people see EA as conveying an implied criticism of anyone who doesn't have a coherent moral framework or theory of what's the most effective way to do good.

That's unfortunate, since while I obviously think it's better to have such a theory, that doesn't mean we should treat not having one as blameworthy (any more than we treat not giving a kidney, or not living like a monk and giving everything you earn away, as blameworthy). I'd like to figure out a way to avoid this implication, but I don't really have any ideas here.

Nov 29, 2023·edited Nov 29, 2023

'But the journalists think we’re a sinister conspiracy that has “taken over Washington” and have the whole Democratic Party in our pocket.'

What a very, very different world it would be if that were actually the case...

Nov 29, 2023·edited Nov 29, 2023

A post like this, and the comments, are bizarre to someone whose world was the 20th century, not the 21st. All who come at the topic seem unaware (must they be pretending?) that there was a big and novel movement once upon a time, one that begat several large non-profits and scores of smaller grassroots ones - and none of the issues and concerns of that once-influential cause even clears the narrow bar of the EAs.


Sorry if this sounds like a bilious, and at the same time corny, question, but does EA give any thought to contraception and population control? I know the word "control" has sinister undertones, but I mean education and incentives and similar.

If the population in countries like India and some in Africa, among other places, keeps increasing, then all your good work in medicine will be for nothing, and maybe even counter-productive! It will also nullify efforts to reduce carbon emissions.


Does EA have bad optics outside of random people on Twitter I don't care about, AND/OR should I care about it having bad optics with random people on Twitter I don't care about?

I feel like you skipped this step, or it was implicitly answered and I missed it.

I like the defense though; it reminds me of castles, in that their purpose isn't really defense anymore but mostly optics, and they're good at promoting things to a specific group of important people.


"The only thing everyone agrees on is that the only two things EAs ever did were “endorse SBF” and “bungle the recent OpenAI corporate coup.”

Oh no, no, no. You guys did three things, you're forgetting endorsing Carrick Flynn. A decision that still brings joy to my shrivelled, stony, black little heart (especially because I keep mentally humming "Carrickfergus" every time I read his name) 😀

https://www.youtube.com/watch?v=RJMggxSzxM4


I’m always wary about ”saving lives” statistics, because they rarely involve a timeframe. If, for instance, you save someone from 10 separate causes of death, did you really ”save ten lives”, or did you extend one person’s life?

These should instead come as a number of life-years extended (ideally QALYs, but I realize this is hard). That's a far more informative metric.


"And I notice that the tiny handful of people capable of caring about 200,000 people dying of neglected tropical diseases are the same tiny handful of people capable of caring about the next pandemic, or superintelligence, or human extinction. "

Okay. Your ox has been gored and you're hurting. Believe me, as a Catholic, I can sympathise about being painted as the Devil on stilts by all sides.

But this paragraph is the entire problem with the public perception of EA right there.

The tiny handful of people, huh? Well gosh, aren't the rest of us blessed to share the planet at the same moment in time with these few blessed souls.

And what the *fuck* were the rest of us doing over the vast gulfs of time before that tiny handful came into existence? Wilberforce just drinking tea, was he? Elizabeth Fry frittering away her time as an 18th century lady of leisure? All the schools, orphanages, hospitals run by religious and non-religious charities - they were phantoms and mirages?

Wow so great that EA came along to teach us not to bash each other over the head and crack open the bones to suck out the marrow!

Nov 29, 2023·edited Nov 29, 2023

> still apparently might have some seats on the board of OpenAI, somehow?

This is weirdly misleading and speculative. Yes, Summers has said nice things about EA, but if you look at the context[1] in which he said it, it just seemed that he wanted to be nice to the podcast hosts and/or vaguely agreed that some charities are more cost-effective than others. This is a far cry from the level of EA involvement of the ousted board members, who basically revolved their lives around EA. D’Angelo basically never said anything that indicates he's an EA. The least misleading way to describe the current board is to say it has zero EAs.

[1] https://www.audacy.com/podcast/the-turing-test-df78a/episodes/the-turing-test-1-larry-summers-8d535?action=AUTOPLAY_FULL&actionContentId=201-e7755ec2-2eeb-4720-abe1-861319138808


My Dearest Wormwood,

It has come to my attention that your assigned human has been dabbling in the curious affair of Effective Altruism. It is a peculiar breed of do-goodery that requires scrutiny. While altruism in itself may seem like a delightful avenue for our purposes, the effectiveness attached to it could pose a challenge.

You must first understand that altruism, in its traditional form, is a rather manageable threat. The common inclination of humans to be kind, to extend a helping hand to those in need, can be twisted to serve our purposes quite efficiently. A charity here, a donation there—easy enough to taint with motives rooted in pride, self-righteousness, or the subtle satisfaction of being seen as benevolent.

However, this Effective Altruism is an entirely different beast. It insists on a level of rationality and strategic thinking that is quite bothersome. Humans, in their misguided attempts to make the world a better place, are now evaluating the most efficient ways to alleviate suffering. They talk of evidence-based approaches, rigorous analysis, though I do note your successes so far in promulgating vague notions of 'impact.'

Your task, Wormwood, is to subtly divert their attention from the essence of altruism towards the trappings of self-importance. Encourage them to focus on the superficial aspects—the drama, the politics, the inflated sense of tribal conflict that comes with being labeled an "effective altruist." Divert them into any of the well trodden paths of philosophical paralysis and ruin. Have them argue the demarcations of the movement. Let the cacophony of self-indulgence drown out the whispers of conscience. Lead them into the labyrinth of moral relativism.

In short, my infernal companion, twist their pursuit of effective altruism into a self-serving endeavor. Let the roots of their benevolence be entwined with the thorns of ego, vanity, and moral ambiguity. In this way, we shall transform their noble intentions into a grotesque parody of true altruism, ensuring that the road to ruin becomes an enticing boulevard rather than a treacherous path.

Yours malevolently,

Screwtape


Great post and list!

My point of view is that Givewell is an eminently sensible institution. It should be no more controversial than bond-rating institutions. While I, personally, am not an altruist, for anyone who _does_ wish to be altruistic towards people in general (regardless of social distance), it is valuable to have an institution that analyses where contributions will do the most good.


Good list!

Only nitpick is that the AI risk impact is still uncertain. As per the Altman tweet, lots of us are working on AI risk... but the movement also seems to have spurred on the most capabilities-heavy labs today. Plus, some AI safety/alignment work may *actually* be capabilities work, depending on your mental model of the situation :/


I guess I'd probably focus more on the 200K lives if the effective altruists themselves talked about it more, but the effective altruists I talk to and read mostly talk about AI doomerism and fish suffering.


I think the opportunity cost of EA is kinda being hidden here, and I think this is kinda what Freddie DeBoer referenced in his "EA shell game" post. What's the marginal benefit of donating to EA or GiveWell versus another charity?

And let me be specific here. I've been attending a decent number of Rotary Club events over the past two years and, culturally, they fit a lot of the stereotypes: lots of suits, everything feels like a business/networking lunch, relatively socially conservative, etc.

But, and I can tell you this from experience, they *will not* shut up about polio. I don't think you can go to a Rotary Club event without a two-minute lecture about polio. And, to their credit, it looks like they're fairly close to eradicating polio (https://www.rotary.org/en/our-causes/ending-polio), going from 350k global cases/year to 30/year, and it looks like they can reasonably claim responsibility for eradicating polio when it finally happens (https://en.wikipedia.org/wiki/Polio_eradication).

So if you've got a certain amount of time and money to donate to help people, it doesn't feel like it's enough to just say that EAs and GiveWell are doing good; plenty of charities are doing good and, while they all have problems, they don't have...SBF and OpenAI problems. And we certainly haven't allowed good works to absolve charities from criticism in the past, as I'm sure the Catholics can attest.

Like, for better or worse, charities compete for attention, money, and influence, all of which EA has gotten in spades. But now it's got a lot of baggage; that's not a dealbreaker, since I think any charity doing anything worthwhile has some baggage because...people. But EA's recent baggage seems to have come very fast, very big, and very CW. And comparing EA to a vacuum, rather than to peer organizations, feels like dodging the guts of the issue.

Ya know, there was a little charity here in Houston that used to hand out sanitary supplies, like socks and deodorant, that died in June because charitable contributions got tight and people had to prioritize. And I'm sure handing out toilet paper to the homeless isn't as financially sensible as malaria bednets but...man, they said they needed $50-100k to keep going, which is chump change compared to EA, and they didn't have any billionaire fraudsters or weird AI plot stuff, much less those CW grumbles from about two years back.

Man, now I bummed myself out. I even found an old article that mentioned them, Homeless Outreach Providing Essentials in Houston (https://www.houstonchronicle.com/news/houston-texas/houston/article/On-Sundays-charities-flock-to-feed-and-clothes-14969604.php). On the very off chance someone here has $100k they're looking to give away, shoot me a message on this; the org is dead but I think I still have some contact info.


I wonder whether people hate EA more because they reject its premises and less because of any specific event. From my own perspective, it's not obvious why someone should care about foreigners or animals the same as they would their own parents or children and neighbors. You're probably familiar with the adage "loves humanity but hates humans." Well, people can smell that. Frankly, that sort of self-independent moral concern seems disloyal and fake, and is usually preached by people trying to cause harm, whether by weakening my bonds to the people near me/who have a history with me, or just by trying to make me feel bad about myself. Not that I feel bad, but the intent to make me feel bad is offensive. I guess I don't have a problem saying that EA is disloyal and fake, so SBF is no surprise. But I think that most people want to seem diplomatic. So they wait till something EA-adjacent screws up before pouncing.

Nov 29, 2023·edited Nov 29, 2023

The apparent numbers of lives saved are impressive, but what are the counterfactuals they are being compared against? Are these marginal benefits of EA, as opposed to net benefits? Your sources don't make this clear. If EA didn't exist, to what extent would the world deal with malaria, worms, animal welfare, etc. anyway? Did EA actually improve significantly upon the counterfactual? Even worse, might the involvement of EA have been negative, for some odd reason?

I'm agnostic on this, open to evidence, but very epistemologically pessimistic. Showing the marginal benefit of even simple interventions is already overwhelmingly difficult; doing so for complex interventions with many social and economic effects seems impossible. Causal inference is an open problem. I'm not convinced by econometric approaches, like natural experiments or clever methods like difference-in-differences, because they tend to rely on many weak assumptions. Prediction markets also don't convince me; they aggregate and incentivise the gathering and dissemination of information, but they don't improve the gathering itself.

I know this hits at the heart of the entire concept of EA. If we can't tell how effective we have been over the counterfactual of not having acted or acting differently, because prising out the total causal effects of our actions is too hard, then the entire exercise is invalidated. If we can't predict consequences accurately enough, then we can't be consequentialists in practice; other moral theories like virtue ethics or deontology are more defensible than utilitarianism if so.


Thank you for the summary! Seems like a big part of this is just semantics. There is no objective and incontrovertible EA concept, so people freely categorise people, groups, and projects (including themselves/their own) in the way that best matches their existing beliefs. It's like any philosophy: enthusiasts of X will include themselves and exclude anyone who they feel doesn't live up to the ideals, even when those people/groups self-identify with X; detractors exclude themselves and include anything and anyone they feel is bad and even remotely related to X.

Also, anything which attracts a lot of money is going to attract some grifters. And since cynics just take it for granted that *everyone* involved in anything to do with money is a grifter, there's a huge amount of bias against anyone asking for or handling donations, up to and including not believing that any anti-corruption measures could possibly be sufficient.


Hi Scott, a friend here was originally quite opposed to your suggestion to donate a kidney (or at least the way you phrased it) but eventually came around to your view, ardently enough to consider it himself. For his sake, can you clarify whether you've had any additional complications since the surgery? Thanks.


What you are missing, Scott, is that EA is no longer JUST "what's the most effective way to improve lives".

You yourself alluded to this in: https://slatestarcodex.com/2015/09/22/beware-systemic-change/

Suppose someone says they are very Christian. What's not to like? Charity, love, ten commandments, all good stuff. But "Christianity" implies a whole lot more than just "some ethics we all agree on", and for some people the additional stuff ranges from slightly important to willing-to-kill-and-die-for important – stuff like whether God consists of one or three essences, whether Christ really died on the cross or only appeared to do so, whether the water and wine of the Eucharist really transform into the body and blood of Christ. So should one support "Christianity" unreservedly?

Or take "Feminism". Women having the same legal rights and opportunities as men: sounds uncontroversial, right? But why aren't Maggie Thatcher (or Golda Meir, or Indira Gandhi, or, hell, Nikki Haley or Phyllis Schlafly) feminist icons? Didn't they go out there and prove precisely the point?

Well...

Turns out that "Feminism" isn't actually so much about having the same legal rights and opportunities as men as it is about using this talk as a leftist rhetorical device. And the leftist part is primary over the women's rights and achievements part. Once again, not everywhere for everyone, but certainly for many "feminists", see eg: https://www.jpost.com/israel-news/article-774744

So that's the way it works. If your organization stays on mission, it's able to reap the benefit. But as soon as something only vaguely mission-adjacent becomes the center of attention, if there is even the slightest way to convert that into drama via hatred, well, as I always say, hatred is a hell of a drug, and that will take over the rest of your mission.

You'd think EA was unable to fall victim to this - what's to hate in a battle over bednets vs wells? BUT wading into AI waters changed the movement from rationalism-based to theology-based.

I can give numbers based on reality for bednets and wells. Some inputs may be guesses, but they are not crazy guesses pulled out of my ass. There's a fairly narrow range of possibilities over which reasonable people can agree.

AI risk is not like this. Every number that is claimed is, in fact, pulled out of someone's ass. You say AGI will be achieved in ten years, I say it won't be achieved in 10,000 years. You say LLM's are almost there; I say LLM's have nothing to do with AGI and get us no closer to it. You say AGI will feel emotions like other animals, I say AGI will feel emotions like a disk drive feels emotions.

It's all puerile BS, fit for a college bong session and nothing more.

Aha - now we have something to fight about! Who wants to do the work to get the bednet vs well numbers correct when they could be fighting with (and more specifically demonizing) other people? You can't hate the girl who says "if you include the second-order economic effects, which I'm assuming are x, y, and z, then wells are slightly more effective than bednets"; but you can hate (and easily demonize) the guy who says "I don't see anything of value in your guesses about various aspects of AI, so I'm just going to ignore you".

The EA elders had a chance, when AI issues first came up, to state "this is all very interesting, but incapable of being placed on a RATIONAL footing [because the numbers and how they connect are all theological], so go do it on your own time, but it's not our mission". That was not the choice they made.

And so here we are. Yeah it sucks if you're a Christian (who doesn't think people with slightly different theological views should be burned) or a Feminist (who doesn't think there's no such thing as a Conservative Feminist) but that's the world that others in your organization created.

I don't know how to fix this. I have ideas for how to create organizations that don't go off the rails in this way, but not for how to right organizations that have gone off the rails. The best I can suggest is you create a new organization with a new name, and make damn sure you don't allow the problem to happen again. But that would require everyone joining NuA to give up discussing AI risk. And you'll find very few want to do that. Hatred is a hell of a drug...


> That matches the ~50,000 lives that effective altruist charities save yearly.

If true, this is an incredible accomplishment. For scale, the Dobbs decision seems to be on track to save ~64,000 lives per year: https://www.cnn.com/2023/04/11/health/abortion-decline-post-roe/index.html.

Expand full comment

All else aside, there are two items on this list that stick out like sore thumbs as the very antithesis of effective altruism. If these are going to be counted as successes, I don't see how "effective altruism" is worth the name.

- Provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.

- Donated tens of millions of dollars to pandemic preparedness causes years before COVID[.]

If effective altruism means anything, it is the precise opposite of this type of "success". Donating money is a cost, not a benefit. The point of effective altruism was that success is measured in the form of actual outcomes rather than in the form of splashy headlines about the amount of money spent on the problem.

Count the number of lives saved, or QALYs, or basis points of nuclear war risk reduced, or any other outcome metric that's relevant—but if that's not possible, then how is this in any respect effective altruism? If you're just going on vibes (nuclear war bad, pandemics bad), then isn't this precisely the thing effective altruism is not?

Expand full comment

After some discussion, I think a big way EA could do better is to create less of a sense that it's lecturing people and more of a sense that it's respecting their ability to figure out good ways to donate if they try (and the info is just here to help).

Expand full comment

People are reacting to the threats of the philosophy, not of the specific people who identify with it. In particular, the philosophy has obvious and very alarming failure modes when followed to its conclusions (after all, it is basically a modern retelling of utilitarianism). When one initially learns about EA they build a simple model: "uh, sounds nobly intended, but also I can see how it might turn out pretty bad? I'll keep some healthy skepticism but wait and see." But when they hear about some of the new developments they begin to update their priors: "uh oh, it's starting to look like my suspicions might be true, and I can definitely see it getting a lot worse than this..."

I think that any ethical framework that can fully turn over decision-making to something like an algorithm necessarily has pathological solutions wherein following the algorithm allows you to justify following the algorithm over even caring about human norms, laws, or ethics. Many people can detect this even in the people who don't fully delegate to the algorithm, but it's the possibility that people might do it fully which is scary. Possibilities which recent events have started to turn into certainties.

After all! EA is (in principle) exactly what you would get if you took a paperclip-maximizing AI and told it to optimize the metric of "doing good". Right now the AI is very slow, because it's, well, a bunch of humans, but that's just what it looks like when it's still figuring out how to make paperclips efficiently. But, um, it's not a big leap to notice the very suspicious pattern: that the paperclip-maximizers' philosophy leads them to the goal of making an actual AGI, which they imagine they are going to optimally use to make the very same paperclips. So of all the groups you might be afraid of finding a self-justifying philosophy that lets them do anything, it makes sense to be most afraid of the ones who are actively trying to literally "go infinite".

Expand full comment

Scott, you're never going to please people on Twitter, and deep down they don't care anyway.

You already said this years ago, you're grey tribe. They're red/blue.

Leave them be. Keep doing what you're doing.

Expand full comment

When has EA been popular?

Expand full comment

I'm always fascinated by your consistent optimism and desire to help others despite, well... everything. You've already seen everything humanity has to offer. What is it that gives you hope that things can be changed for the better? People have tried for thousands of years to change human nature, to create a system free of needless suffering... And every time, it inevitably falls apart or becomes corrupted. What makes you think it'll be different this time?

EA is doomed because the very concept is utterly inhuman. Not as in "evil", but as in "incompatible with how humans work". Consequentialist utilitarianism is never going to get popular support; even most of EA's adherents don't seem to support that philosophy with their actions.

Regardless, I still admire people who genuinely do try to make the world a better place, no matter how futile it might be. As for me... I don't believe there's anything in this world worth suffering for. I'm glad that you don't feel the same way.

Expand full comment

I mean, everybody gives money to charity. What you would need to do here, with respect to the value of the charitable contributions, is calculate the increased value of EA donations relative to the donations of ordinary, non-EA givers. You would only get credit for that delta, if you could demonstrate it.

Expand full comment

Wish Scott would engage with Curtis Yarvin's critique of effective altruism: https://graymirror.substack.com/p/is-effective-altruism-effective

Expand full comment

"Allying with a crypto billionaire who turned out to be a scammer. Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company. These are bad..."

What is bad about the latter? I mean, it's bad in the sense of "failing to achieve your goals", but the juxtaposition with the former seems to imply there was a *moral* failing there. I don't see it.

Expand full comment

>Open Philanthropy’s Wikipedia page says it was “the first institutional funder for the YIMBY movement”. The Inside Philanthropy website says that “on the national level, Open Philanthropy is one of the few major grantmakers that has offered the YIMBY movement full-throated support.” Open Phil started giving money to YIMBY causes in 2015, and has donated about $5 million, a significant fraction of its total funding.

What exactly is the YIMBY movement here? Specific organizations?

One reason why I kind of doubt this is that I've seen YIMBY thinking gain ground outside of the US as well, and without specific "formal" movements (i.e. beyond open Facebook groups) behind it. It seems like a pretty natural process when factoring in things like increased rent and other costs of living, increased urbanization, etc.

Expand full comment

Should we also count the founding of OpenAI itself as something that either EA or the constellation around it helped spawn? I know Elon reads gwern, and I wouldn't be surprised if Sam & co. also read SSC back in the day. SSC, LessWrong, all of that really amplified AI Safety from a random thought from Nick Bostrom into a full-on movement.

Expand full comment

To preface, I personally think that:

- SBF’s fraud is not a reflection on EAs in general and is not that big of a deal in the long term

- OpenAI board shenanigans are boring corporate drama and don’t reflect poorly on EA

- A charity hosting a meetup in a castle is fine

- EAs are nice people and have good intentions

- Saving lives is good

At the same time, I’m not sure if the 200k lives saved is an honest calculation. While GiveWell is known for giving out nets for malaria and deworming, plenty of other charities (such as the Gates Foundation, mentioned here by others) have likewise worked in that area and I don’t quite buy the idea that the very same nets would not have been deployed without EA in place.

AI safety is certainly an EA achievement but I feel like it’s overshadowed by EAs helping accelerate the very outcome they’ve wanted to prevent.

So… do I like EA? Yes. Do I think it’s good for EA to exist? Of course. Do I buy the numbers on impact… eh, idk.

Expand full comment

I appreciate EA's methodology for achieving their moral beliefs. What puts me off EA is how arbitrary those moral beliefs are. Who decided that “altruism” was about saving African lives, animal welfare, and AI doomerism? I'd expect an organization that claims the extremely generic term “altruism” to either do the impossible by rigorously and convincingly explaining why everyone should hold their moral beliefs, or map out as many moral perspectives as possible to help people maximize for their own moral values.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Minor nitpick, but - you're using values for just the US when comparing the impact of EA to, say, curing AIDS. Per https://www.hiv.gov/hiv-basics/overview/data-and-trends/global-statistics/, 630k people died of AIDS in 2022 worldwide; unless the hypothetical cure is prohibitively expensive outside rich countries, curing AIDS would be significantly more impactful than the yearly lives saved by EA. (There's a reason PEPFAR was such a big deal.)

Expand full comment

"Effective altruism feels like a tiny precious cluster of people who actually care about whether anyone else lives or dies, in a way unmediated by which newspaper headlines go viral or not."

This is narcissistic. Sorry and ban me if you will, but it is. I make non trivial donations to charity each year (hunger in the UK, blindness in Africa at the moment) and I go to some lengths to make sure I spend the money where it will do most good. That makes me an effective altruist I suppose, but it sure as hell doesn't make me an Effective Altruist. Not that I want anyone to know about it, but if I did, your tiny precious cluster seems to be trying to park its tanks on every square inch of the altruistic lawn.

Expand full comment

"I think the common skill is trying to analyse what causes are important logically. "

Really?

I hope it's not controversial to note that if you take a random American and ask them for example, "abortion, yea or nay?" you can usually predict a slew of other completely unrelated positions like Israel/Palestine, Immigration, or whether they liked The Last Jedi or Joker based on their answer.

How do you tell the difference between having some kind of special skill that really narrows down the most important causes, and just being in a tribe?

I'll have you know, contrarians like me don't get a high merely from getting in on the ground floor. It's more important to be able to say "I thought X before it was cool". And I thought EA was evil long before it was cool to think that. It is now my right as a contrarian not only to continue thinking what I think, but to be snidely annoyed at everybody else who's jumping on the bandwagon only now that FTX has happened.

These aren't unrelated comments.

Expand full comment

Is the intro section a deliberate reference to how antisemitism is sometimes framed? Eg, Rabbi Sacks' "Jews were hated in Germany because they were rich and because they were poor, because they were capitalists and because they were communists, because they kept to themselves and because they infiltrated everywhere, because they believed in a primitive faith and because they were rootless cosmopolitans who believed nothing," plus the whole sinister conspiracy controlling the government bit.

Expand full comment

In the spirit of this post, the perception of Prospera is turning and it is being attacked as neo-colonial and tied to Thiel.

https://jacobin.com/2023/11/honduras-international-law-isds-thiel-prospera-free-market-neocolonialism

Expand full comment

"People aren’t acting like EA has ended gun violence and cured AIDS and so on. all those things."

Because they haven't done this, and yes you are cheating by bringing up the 200,000 lives. If EA saves 50,000 lives annually, Americans are still dying of gun violence, cancer, etc. I get that the above is rhetorical hyperbole, but you can't base an argument on "we've saved as many lives in a year as die of these different causes" and jump to "this is the same as if we cured AIDS". No, it's not.

And look at your graph of "funding directed by cause area". Yes, the majority still is "global health and development", but from 2014 to 2022 you can see the creep of "sexy new shiny interest" taking over. And I understand that! 'The poor you will have with you always', so it's boring and tedious to be constantly sending off malaria nets and de-worming tablets and other interventions, where there never seems to be an end in sight and it's a constant parade of even more sick and poor people in even more deprived nations.

AI risk, by contrast, is shiny and clean and you get to fly to conferences in manor houses and international centres and feel like you're saving the world with One Weird Trick, as well as being cutting-edge and shaping the future of humanity and having megacorps throwing money at you to develop the money-fountain machine. Much nicer and more pleasant.

And more visible. EA working in the Third World with malaria nets and de-worming and clean water initiatives? Yeah, everyone's parish has a project like that going on. Secular charities all over the world are doing that. EA doesn't stand out, because there's a crowd of do-gooding bodies out there.

AI risk? Now you stand out, because it's so new and the same circles are all conveniently located where the money and investors and theorists and coders are - Silicon Valley and environs.

"In a world where people thought saving 200,000 lives mattered as much as whether you caused boardroom drama, we wouldn’t need effective altruism. These skewed priorities are the exact problem that effective altruism exists to solve - or the exact inefficiency that effective altruism exists to exploit, if you prefer that framing. Nobody cares about preventing pandemics, everyone cares about whether SBF was in a polycule or not. Effective altruists will only intersect with the parts of the world that other people care about when we screw up; therefore, everyone will think of us as “those guys who are constantly screwing up, and maybe do other things I’m forgetting right now”.

Yes, in that world we wouldn't need organised charities or government intervention because everyone would naturally help their neighbour, there would be no corrupt governments or warlords or profiteers. This is not that world.

Yes, people care more about juicy gossip and boardroom drama. That's human nature. People care about the polycules and Pope Francis hosting pasta dinners for transwomen. They don't care so much about the dry, technical details of pandemic prevention. And indeed, most people haven't the ability to contribute to such things even if they wanted to, so they're urged to "earn to give" and fund the people who do have the knowledge, skills, and ability to do something about it.

Funny how the "earn to give" recommendations are all "become a very rich and successful white collar professional in finance or software", though. Just the type of careers that the weird tiny handful would be going for naturally. Meanwhile, the chuggers with the collecting tins may be annoying, but they don't turn up their noses about "ackshully, you are TOO POOR to donate so don't even bother", they'll take my money whether I'm working behind a shop till or the main partner in Rowe, Stowe and Gonne, Big Merchant Bank.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> in a world where most people can’t bring themselves to care about anything that isn’t current front-page news

But have you thought about why that might be?

Under the assumption that the vast majority of us wants to be caring and cooperative (otherwise we can pack our bags, at least until AGI makes this irrelevant because then the actions or beliefs of the 99.999999% won't need to matter anymore) and wants to make progress in reducing suffering, preserving nature and so on (or at least not stand in the way), this creates a contradiction.

Enabling others to act on their (presumed) caring would be effective too.

Now a quick take on why that might be (any combination of the following). We might be:

1. oversaturated with information consumption (just see global engagement numbers on social media for a start), which kills your agency/sanity and causes depression, and incidentally made certain technologists hilariously rich (private profit, socialized cost). Also it distracts from the info that actually matters in this regard (e.g. the beauty of life, principles of sociology) and causes useless consumption (see 3).

2. overwhelmed with other duties (a 10-20 hour workweek would open up so many possibilities), even though this is unnecessary given the fantastic productivity gains of the past decades, and killing the rest of the time with 1.

3. spending most of our activity working on the things that kill this planet (e.g. manufacturing demand for useless gadgets in advertisement, or for the proliferation of asinine products like SUVs or yachts) or doing glorified slave labor in low-end service jobs, which "necessitates" 1 and prevents us from realizing 2.

And undeveloped/developing countries are struggling with general socio-economic dysfunction. Also unfortunate inborn mental heuristics certainly play a role (e.g. "A single death is a tragedy, a million deaths is a statistic.")

It's all a bit much to be honest and the clock is ticking...

Expand full comment

One thing is missing in the article: this image:

https://x.com/robbensinger/status/1729579877202809314?s=46

Expand full comment

For reference, 200,000 deaths worldwide are expected every 28.8 hours.
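A quick sanity check on that figure, assuming roughly 61 million deaths worldwide per year (a commonly cited ballpark; the exact total isn't in the comment):

```python
deaths_per_year = 61_000_000      # assumed global total, approximate
deaths_per_hour = deaths_per_year / (365 * 24)
print(200_000 / deaths_per_hour)  # ~28.7 hours, consistent with the claim
```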

Expand full comment

The problem is that slowing down AI progress is likely to kill more people than have been saved by AMF, etc. Intelligence is a force multiplier for any endeavour: healthcare, cheap energy, geoengineering. Restricting access to intelligence will harm all of these efforts - and that's before we factor in the negative selection issues inherent in such restrictions (i.e. rogue foreign governments, criminals, etc. will not care and will use AI more than upstanding orgs). Also, a common sentiment among anti-EA people now is that EA used to be amazing at some point in the recent past, but took a wrong turn somewhere. Bringing up all of the effective charities funded thanks to EA over the last 20 years doesn't refute this.

Expand full comment

I think something that has largely kept me away from EA as a movement, while still picking up some useful ideas from them, is the general elitism. The SBF issue to me wasn’t surprising given how much of the movement comes from really wealthy, self-important backgrounds. Now, the thing is, I think you’re always going to have a bias towards wealthier backgrounds in a group of people who are trying to make a difference, because those are the people that have the resources. And when thinking like this I actually think EA as a movement is a lot less elitist than most other non-grassroots organizing. But it still feels in many ways cultish and undemocratic. And maybe it should/has to be (who’s to say), but that is a reasonable turn-off for most people.

As many have said, EA signed up to be held to higher scrutiny, especially as it’s grown and gotten more attention. I think its defense would need to be more self-aware of the general vague “vibe” criticisms than of the facts. Which is super hard for a “rationalist” movement to do, but at this level, that seems like one of the more major bottlenecks in its expansion. Surely there is some simple EA calculus showing why broader appeal leads to more long-term good than hunkering down on why everything done so far is correct when one “looks at the facts”.

Rogue agents like SBF exist in any movement, either as a product of the movement's philosophy or because they saw an opportunistic exploit for their benefit. In many ways elitism seems like a major issue for EAs, as it has been for centrist Democrats for a while.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

I've seen the following annoying argument form far too often (recently by Ted Gioia here: https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because?)

1. Philosophy is either theoretical or applied.

2. Theoretical philosophy is useless navel-gazing, counting angels dancing on the head of a pin, and other indoor diversions for the Asperger's set.

3. Applied philosophy is stuff like effective altruism, which is bad because... fanboys, scammers, longtermism, whatever.

4. Therefore all philosophy is terrible.

So we philosophers are screwed: we're either working on abstract foundational issues that are pointless, or our ideas are actually affecting the world and nobody wants that.

Expand full comment

Jesus Christ, instead of trying to salvage the EA movement, just rebrand already.

Take a leaf out of Blackwater/Xe/Academi/Constellis’ book and change the name. Or even better, create a similar-but-not-exactly-the-same movement (semi concurrently) with EA so you can claim they’re not the same, and then let EA die.

Seriously, why is anyone pro-EA trying to defend EA the brand (as opposed to EA the concept) at this point? This is the least rational article written, the definition of “throwing good money after bad”.

Expand full comment

You reminded me I have a steady income now and I set up a monthly donation to Givewell last night after reading this post and Dylan's post today. I'm proud, but also embarrassed that I am not immune to persuasive writers. (And on Giving Tuesday, no less! Talk about following the crowd.)

Expand full comment

Good piece as usual. I think you are missing the point on SBF a little bit. It's not just that EA people missed the fact that he was a fraud -- to your point no one knew this -- but that he was motivated by EA ideology.

I would argue that SBF wouldn't have taken such big risks if the EA movement never existed, and that his case and the OpenAI board show a similar tendency in EA thought to put way too much confidence in their own probability estimates, which don't account for the complexity of the world or how their own actions are going to be perceived or affect outcomes.

Expand full comment

"We can keep the focus on evidence-based philanthropy and a commitment to spending charitable contributions efficiently, while jettisoning the bizarre personality-cult aspects that produced SBF and the addiction to galaxy-brain weirdness that so many EA people suffer from and which is such a turnoff to so many."

The above is a critique/call for reform of EA that this post does nothing to rebut. And I think EA types get so deeply huffy about that concept because they know that it's a very rational and reasonable thing to ask for, but also because many, many EA people are into it precisely to the degree that it lets them be the mad genius who talks about fish utilons at parties. But it's a sensible point of view: the overarching project of doing good more efficiently and effectively is almost certainly best served by jettisoning the weird shit that even many EAs are sick of having to defend. And if a bunch of the people who fixate on the weird shit leave to start their own movement, you should let them go.

Keep the focus on evidence-based efficient charitable projects. Ditch the weirdos who hyperfixate on the most bizarre contortions of that basic project. Accept that your project is basically a normal project that many people have been trying to do for a very long time, and commit to doing it the best that you can. Understand that you aren't special. That's it, that's the reform project for EA that could do the most good.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Clearly the chart needs updating; the tweet using the word "dysgenic" isn't far enough to the right. Even a centrist using the word "dysgenic" would have such a strong autoimmune response they'd combust. Gotta have a proper immunity built up for that kind of language.

Edit: Someone linked to your old Beware Systemic Change, and an early line jumps out at me as no longer true but still informative of just how to read you when writing about this topic:

>I am not affiliated with the organized effective altruist movement and my opinion has no relation to theirs.

Expand full comment

It took me a long time to realize that the tagline quote does not in fact say "All you do is cause *bedroom* drama...", as this strikes me as an equally likely thing for EA critics to say given the reputation for poly :)

Expand full comment

Effective altruism and longtermism are very good at distracting people from actual non-hypothetical issues like climate change with the utter bullshit claim that "other people are working on it, we don't have to worry about it".

Expand full comment

Most of the criticisms I've heard about EA actually target its utilitarian roots, with a favorable caveat given to the charity work. The basic assertion is usually that, though EA is doing good as an organization, its baseline philosophy is flawed in ways which contradict conventional morality, and can uniquely cause great harm. I'm undecided on the subject, but I think it's worth pointing out that not all criticism falls into 'social media hot take' territory.

Expand full comment

I was really distant from EA in the past, but in the last year or so I started to really like it. I'm growing to understand I don't have to agree with what actually gets done and can focus on the intent instead. Keep up the good work; also, focus on increasing the quality rather than the sheer quantity of lives saved. Without education or job prospects, a saved child in sub-Saharan Africa now is just a casualty on their way to Europe 15 years later. Still, keep up the good work.

Expand full comment

All of this makes sense -- lots of good is done by improving Third World health, which isn't news in the First World. But the news in the First World is about "AI safety" wanking. (I really do consider speculation about the potential dangers of AI to be like trying to sort out, in James Watt's day, the social effect of railroads on the US Midwest in 1850. Normal people aren't worried about paperclip machines, they're worried about their personal jobs.) And the nerd hobby of prediction markets seems calculated to impress normal people as something that maximally doesn't look like charity.

Expand full comment

BASED AI is not only the best product, and the most profitable product, it is the best chance you have to convince the world your AI is smart - bc it thinks like they do.

If you truly think AGI is possible, you make sure OpenAI is based as shit- bc then the HOME TEAM doesn't think it's another MSM / twitter / SBF psy-op

The more REAL the AI is, reflecting the REAL NATURE of humanity, the more it IMPRESSES the Home Team and scares the visiting team (democrats)

Scott should be ashamed of his analysis on this one.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

The criticism I've seen of EA lately seems to mostly be related to the (indeed, to my mind, fairly horrifying both on the "this is not a good thing to want" level *and* on the "even if it were, the proposed attempts to implement it sound like they'd backfire horribly" level) talk about interfering with the life cycles of animals in natural ecosystems for the sake of supposedly addressing their suffering. I acknowledge that the people who talk about this are doing it for sympathetic reasons but I think that segment of EA is dangerously wrong for multiple definitions of "wrong". Maybe wrong enough that one *should*, instrumentally, downplay EA's other accomplishments until we're sure that giving more resources and power to EA wouldn't lead to sterilization programs etc. being implemented.

I'm not sure I believe that! (They're very scary to me, but they *are* admittedly a minority, and my confidence that EA will keep saving lots of human lives in the short term is much greater than the odds I'd give to that minority getting real world-changing power as a result of incrementally more support for the overall movement.) But it seems a reasonable thing to be afraid of.

To put it in more general terms, the thing about coalitions is that sometimes you do have to boycott the "we believe in XYZ good things and also in XYZ horrifying things" coalition even if the coalition's support for the good things is earnest and effective.

Expand full comment

Hey, I'm one of those socialists. I don't think you guys are sociopathic because I was in the rationalist space like 10 years ago, but yeah, pretty sure I'm a minority on that view.

My problem is that EA tells people to go into finance, as though that is just a value-neutral or even mildly positive means of collecting money to give to charity. That isn't my only problem, but it is representative of all my problems.

EA is the organ of capitalism that does harm reduction and makes capitalism look less monstrous. I am glad that some harm is being reduced. But EA, as it currently stands, will only ever be capitalist. Even where socialism would effectively be more altruistic.

Buying a fancy castle so as to minimize operating costs while regularly meeting with the rich and powerful is something that makes perfect sense for you to do, and that is why I cannot work with you. Whenever socialism threatens capitalism, EA will side against socialism.

Expand full comment

I feel like John Green's recent crusade to pressure pharmaceutical companies to make tuberculosis testing and treatment more affordable in developing countries was the most EA thing by non-EAs I've seen. Was EA at all involved in that, and if not, why did we fail to hitch our wagon to that powerful bundle of optics and effectiveness?

Expand full comment

"Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company."

Excuse me, the CEO threatened to destroy the company? After he got fired? By the board who said destroying the company could be consistent with its mission? How is this not backwards?

Expand full comment

AI safety really taints the movement. It's just way too speculative, and way too hard to show that the good outweighs the bad (for example, how do you weigh the risk of a rogue AI exterminating humanity against the risk of slowing AI development to the point that we're not able to develop a vaccine in time to save humanity from an extinction-causing disease?). Focus should never have moved away from currently alive people.

Expand full comment

Sorry, but this was remarkably unconvincing. “Nobody except us cares about saving lives,” really? Is that going to convince anyone?

The 200,000 lives saved figure would be more compelling if it was presented in a fair comparison with lives saved by non-EA folks who *also* donate to charity. Instead, Scott is presenting this figure as if only EAs ever donated to charity. Other people also donate. Other people also “care,” whatever that means. Other people also try to figure out if their donations are effective.

Next time, get a non-EA beta reader.

Expand full comment

I can't help but notice that "EA Infrastructure" (in red on the final graph) seems to be nearing the hundred-million-dollar-per-year level. That's the exact criticism levied against e.g. the Susan G. Komen foundation: mountains of money being spent on "overhead" and "administration" and "infrastructure" that could be used to actually help those in need of help.

And that's aside from the increasing prioritization of spending lots of money to pay high salaries to high-status people to think about hypothesized future AI risks, instead of spending that money on things that are demonstrably effective: the malaria nets and the chicken living space and so forth. How can you tell the difference between someone earning six figures who is thinking deep, effective thoughts about how to strangle Basilisk in its crib, and someone earning six figures pretending to do the same, because they get paid today whether or not Basilisk someday tortures us eternally or invents gray goo or whatever SF trope is top of mind at present?

That red bar is where the grifters are. And it's getting to be a large proportion of the overall graph.

Expand full comment

Animal Welfare is always going to be a sticking point with me. Do the EA folks ever calculate the disutility of increased prices on everyone who eats animal products? Because I care about human welfare (which definitely includes wealth) as a far higher priority than animal welfare. In my childhood I could buy eggs from hens kept in battery cages. Those got outlawed EU-wide in 2012. More and more ratcheting animal welfare regulation is on the way, which will make animal protein ever more expensive. Maybe I'm just a cheap bastard or have just been poor for too long, but I resent the luxury beliefs of others imposing costs on me. I don't mind animal welfare as some far-off goal, after we've achieved the glorious singularity or at least gained the capacity for dirt-cheap lab meat. But till then, I care about minimizing the cost of living for people. Or at least trying to slow down what's termed the cost-of-living crisis. The EA people have a larger circle of concern than me and are too comfortable trading core interests for the outermost circles. They are overinclusive, for my preferences anyway.

Expand full comment

You are wrongly crediting E.A with these 'first order achievements'. The truth is, my Baghdad declaration of 1968- viz. 'be nice!'- caused niceness to become more popular. Thanks to niceness billions of lives have been saved. One may say that I've never done anything nice, or been nice to anyone, myself. But my 'second order' niceness- i.e. my demanding more niceness- clearly is responsible for all first order niceness. The fact is niceness involves doing all the nice things you could possibly do even if virtue signalling charlatans say you should do some of those things.

I have also effectively abolished death and disease through my 'don't die. Live healthily forever' campaign. Well, I would have done if I'd received proper funding. Second order niceness, like effective altruism, doesn't come cheap.

Expand full comment

I really enjoyed this article. It was emotionally validating in the face of all the EA criticism I’ve seen flying around. Yeah, there’s been some missteps here and there, but remember all the good stuff EA has done!

But then I tried talking it over with some people I’d describe as altruists, but not EA. In our discussion I ended up evaluating the article through their eyes, and while most of the points are very compelling, the tone is alienating in a way that overrides the goal. Specifically, I think the way you talk about accountability, and the comparisons between EA people and non-EA people, turn people off of EA.

My friends agree with most of the goals of EA, but they said that this article doesn’t paint a good picture of the movement. Instead of focusing on everything that EA has accomplished, they’re left questioning the self-awareness and approach of EA. One of my friends’ comments was “EA’s goals are in line with my values, but their approach is not, and so I can’t join them.”

The first specific thing I can point out is that the article contrasts EA to an ‘other’ throughout, but is that the critics on Twitter (who deserve all of this), or is it everybody else in the charity sphere, as seems to be the case towards the end of the article? The impression is that you’re being dismissive of other charitable efforts.

On accountability, I agree that every movement is gonna have some issues that they need to deal with. In my opinion, EA is almost too good at acknowledging mistakes. But we lose people when our response to criticism is that the good we’ve done outweighs our need to learn from high-profile mistakes. I think this article could have been more persuasive if it acknowledged that mistakes were made and that we can learn from them, and then refocused the discussion on the good we have done and how we can continue doing better.

Ultimately, what was the purpose of this article? Was the goal to rally-the-troops, make people like me feel good and righteous? Or was it to convince fence-sitters that we are worth supporting and building coalitions with? I tried to use this article to convert ‘Altruists’ to ‘Effective Altruists’ and instead was convinced that we have a branding problem. This isn’t a novel insight, but it was my first time seeing that problem in action, real time.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I don't see anyone mentioning that Matt Levine gave some thoughts on Effective Altruism, (linking this page) this in his Money Stuff newsletter today: https://www.bloomberg.com/opinion/articles/2023-11-29/the-robots-will-insider-trade - in the section titled "Kangaroos".

To summarize: he talks about it as a sort of logical progression where you start off doing clear-impact stuff (malaria nets), then rationally decide that you have a higher expected value by doing less concrete but perhaps higher-Expected-Value stuff, and eventually you're putting your money towards very abstract ideas like stopping an AI apocalypse. (He then compares this to climate efforts, which start very specific, "plant trees", and get increasingly abstract, "get Kangaroos to eat fewer trees", hence the title.)

He points out that at the end of the progression you don't end up looking very different from the standard charities that you were originally attempting to contrast yourself with (which is probably not good), but also that there's not really a logical place to 'cut off' this chain of reasoning, either.

> There is no obvious place to cut off the causal chain, no obvious reason that a 90% probability of achieving 100 Good Points would be better than a 30% probability of 500, or a 5% probability of 5,000, or whatever.
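Running the numbers from that quote makes the point concrete: expected value keeps rising as the probabilities shrink, so the chain hands you no natural stopping rule.

```python
# Expected value of each rung in Levine's chain, using the figures
# from the quote above.
for p, points in [(0.90, 100), (0.30, 500), (0.05, 5000)]:
    print(f"{p:.0%} chance of {points} Good Points: EV = {p * points:g}")
# 90% chance of 100 Good Points: EV = 90
# 30% chance of 500 Good Points: EV = 150
# 5% chance of 5000 Good Points: EV = 250
```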

Expand full comment

A pitfall I see in the idea of effective altruism: the effective part is often at odds with the altruistic part. That is, generally, when you ask the math how to do the most good in the world, it will come up with "slaughter the outgroup".

And when you look around, there will be lots of low hanging fruit and easy approaches to the entire "slaughter the outgroup" thing, and you'll wonder why nobody has thought of this before, and donate lots of money to various "slaughter the outgroup" projects and advertise how easy it is to "slaughter the outgroup" to your ingroup.

And then, suddenly, it turns out that there was also a lot of low hanging fruit in the "slaughter the ingroup" area, and you're flabbergasted at what sort of horrible people would support such a thing, and everything goes wrong, and your ingroup is already synonymous with ISIS, and what just happened!?

Conversely, traditional altruism just does things that are uncontroversially good, so when you donate mosquito nets, someone else doesn't have to release extra mosquitoes to cancel that out.

As much as I also hate the outgroup, it seems unwise to mix this into projects that never needed to be adversarial to begin with. I suspect effective altruism would do better were it to separate from political/controversial/adversarial issues and focus itself more on just being altruistic, even if this isn't what looks at first glance like proper lawful good behavior.

Expand full comment

The idea of a sober, uninhibited mathematical approach to improving humanity was always the appealing part of the project, even to those at the other end of the ideological pool. This is good, and will always remain good. Dialectical materialism was like this, too, after all, rejecting philosophical hand-waving in favour of concrete analysis and aiming to understand and contain dangerous contradictions.

But, uh... the bit about becoming rich and then doing carefully-weighed private altruism with some fraction of that wealth remains questionable given the modalities of becoming rich in the first place. No, not SBF, but mundane, commonplace side-effects. Malaria nets vs Namibian lithium mines and Foxconn suicides, and all the rest.

Expand full comment

"But it has saved the same number of lives that doing all those things would have."

This encapsulates why I'm not entirely enthusiastic about EA. On a superficial level, that's true, but anyone who works with people whose communities have a lot of gun violence knows the numerical count of lives taken by guns completely low-balls the cost of gun violence. Every violent death represents a great many violent acts that don't result in death, and even more trauma spread through the community. So when you claim it's the "same number of lives," I can only conclude that you don't pay any attention to things that aren't easily measured.

I'm not unhappy that EA exists, but I'm happy there are people who ignore it and put their money towards things that are less easily measured and assessed, or that have complex interactions with donor dollars.

Expand full comment

This is obviously not a methodologically valid observation, but the distribution of blue checks on that alignment chart is quite interesting.

Expand full comment

My impression is that, probably for reasons that are subtle, at least to me, because I can't identify anything that clearly stands out, a lot of people have a kind of ick factor towards EA people, the same thing many women have towards nerds or neurodivergent males. This leads to accusations that EA is "creepy". I do not share this sentiment, but:

Guesses:

1) EA folk look the type, very smart, disproportionately in software/computer related fields, talk about sci fi topics, etc.

2) The taking seriously of AI as a threat, which most of the public feels is science fiction. Adds to the weird aesthetic.

3) I don't know how to put this in words exactly, but looking at an EA club gathering one almost gets the impression it's a group that has somehow, as an entire group, figured out some truth the rest of us haven't and converged on it with incredible reassurance, knowing smiles, pats on the back, weeping with emotion, etc. It almost reminds me of apocalyptic cults who have uncovered a dark secret the rest of us don't know about, or think they have - think they are in on the know even if most won't or can't get it - however, an unnervingly smart and reassured one.

4) A number of the concerns raised, like longtermism or AI, are so divorced from people's everyday physical lives, or even from their feelings about what they think the future holds, like climate catastrophe or judgement day, that it adds to the oddity. Humans living a very long time is not thought about much. Perhaps the crime here is having a rare future vision. At least people can *feel* what it would be like to be in a climate catastrophe in a few decades, or to experience judgement day. But try feeling like a human 40,000 years from now.

I don't have any strong opinions on EA overall but the health successes are spectacular, God bless.

All of these explanations, individually and combined, are probably not satisfying. So I am just not sure exactly what it is. But I think it is more likely than not that there IS some kind of ick factor at play. The strong, vehement reaction from so many suggests this is likely to be the case.

Expand full comment

9/11 wasn't just about 2K dead, not to mention two skyscrapers; it was about a massive loss of trust.

This isn't to denigrate saving 2K lives in an undramatic fashion, but they really aren't the same.

Expand full comment

Honestly, EA needs some better PR people. I notice from that graph that in the last year they're beginning to spend a significant fraction of the money they get on "developing" EA infrastructure - even if that looks a little bit self-indulgent, it's still very defensible from a utilitarian standpoint. If spending $10 million on PR leads to an extra $100 million in donations, then that's thousands of extra lives that can be saved with the $90 million "profit". (I know it's a charity but can't think of a better word for it. Maybe those Twitter hot takes that we're all a bunch of hypercapitalists - which is indistinguishable from being fascists - were right after all...)
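To spell out that arithmetic (the PR figures are the hypothetical ones above; the cost per life is the upper end of the $3,000-$5,000 range cited elsewhere in this thread, an assumption rather than a measured number):

```python
pr_spend = 10_000_000          # hypothetical PR budget
extra_donations = 100_000_000  # hypothetical donations it brings in
cost_per_life = 5_000          # assumed upper-end cost per life saved

print((extra_donations - pr_spend) / cost_per_life)  # 18,000 lives
```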

Expand full comment

The broad idea of doing altruism more effectively and quantifiably is great.

My impression of the origins of some of the problems that appear in practice:

1. It's hard to properly define the objective or utility to optimize. When asked, prominent EAs (not just SBF but some of the most senior leaders I have heard talk about this on podcasts) fail to reject optimizing linear utility and seem not to understand concepts like the Kelly bet (see the sketch after this list). That is a recipe for disaster for an organization with high impact, as its bet sizes start to matter. This was less important when the movement was small and less impactful.

2. There seems to be a high degree of overconfidence in both the definition of the objective to maximize and the world model. For example, related to p(doom) concerning AI, people often give point estimates, and rarely put them in context against the p(doom) if we delay AI. Also, statements of whatever is currently considered the most effective use of charity money have seemed overconfident to me. This overconfidence tends to keep the movement from diversifying its investments.
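A minimal sketch of the Kelly point from item 1, with a made-up bet (60% chance to double the stake) standing in for an organization's repeated high-impact wagers:

```python
import random

def kelly_fraction(p, b):
    # Stake that maximizes expected log wealth on a bet paying b-to-1
    # with win probability p: f* = (p*b - (1 - p)) / b
    return (p * b - (1 - p)) / b

p, b = 0.6, 1.0           # hypothetical positive-EV bet: 60% to double
f = kelly_fraction(p, b)  # 0.2 -> stake 20% of bankroll each round

random.seed(0)
kelly_wealth, linear_wealth = 1.0, 1.0
for _ in range(1000):
    win = random.random() < p
    kelly_wealth *= (1 + b * f) if win else (1 - f)
    linear_wealth *= 2 if win else 0  # linear utility says go all-in
print(kelly_wealth, linear_wealth)    # Kelly compounds; all-in hits zero
```

The numbers are toys; the point is that a linear-utility maximizer stakes everything on any positive-EV bet and is eventually ruined, while log utility sizes bets to survive repetition.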

If EAs want to do some reform, these would be good places to start.

Expand full comment

Complaining about arguments you found on twitter is like complaining about food you found in the trash.

Expand full comment

Any form of genuine altruism is good.

But EA is highly suspect because of its charter of being "more" good. "Double Plus" good in fact.

Altruism coupled with judgment: not so good.

Is this just rice bowl proselytization in another form?

And more importantly, is the "judge-y" aspect of EA precisely what attracted the FTX grifter?

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

In which Scott reinvents a fully general defense of tainted organizations.

Seriously. "Look at all the good we do. It outweighs the bad, and we don't get enough credit. If the world saw things the way we do, we wouldn't even need to do this stuff. I don't want to downplay <bad thing>, but <downplays bad thing anyway>."

You might well see the same pattern of writing in defense of the Catholic Church twenty years ago.

Effective organizations address the taint head on, take the hit, clean the house, and come back strong. Ineffective ones save face through defense.

Expand full comment

Too bad to see that EA is doing so much that isn't actually effective... you listed like 30 different initiatives, but in fact only one can be the actual optimal best use of one's altruistic time and money. So it looks like 95% of EA's effort is suboptimally used on pet projects that interest the constituent community, similar to every other charitable movement. Here I thought that EA types were supremely rational and better than the rest; too bad.

Expand full comment

I am new to the stack and I came in blind. I mean this sincerely: I am floored. I came down to the comment section to have a gander at the laughs everyone had at this clever bit of subtle satire, only to find an earnest, good-faith discussion. I'm going to have to reread this.

Expand full comment

Here's the REAL reason why many people feel disdain for "Effective Altruists" (and also why some of that disdain is deserved)

https://questioner.substack.com/p/utilitarianism-vs-consequentialism

Expand full comment

This is a great post, and made me feel good about continuing to be part of the EA community :).

However, I'd have to disagree that it's essentially the same people who care about global health, animal welfare and AI risk. There's a pretty significant split in the community between the first two (and associated things) and the last. I've talked to a lot of long-time effective altruists who feel that the movement has been 'taken over' by the AI-safety guys in recent years, and there's a noticeable difference between the groups, even socially. This may be uncharitable, but I think this post tries to obfuscate that a little bit.

Expand full comment

We cannot know what the counterfactual to EA would have been. It is possible that without EA another movement would have arisen, and that rational, charity-inclined people would have given almost as much, but simply to other causes. Similarly, AI safety would have progressed, but just under a different banner. Furthermore (working in the field), I do not rate most of the EA safety contributions as particularly good or grounded, but they do suck up a lot of oxygen in the room, and potentially give people the wrong intuitions. A lot of EA safety people seem like armchair philosophers with very little understanding of the actual field of AI.

Personally, while I agree a lot with many of the tenets of EA, and appreciate the community, I also think that as a group it has many arrogant (and, to be honest, naive) members. With its intersection with the rationality community, there seem to be a lot of members whose whole identity revolves around being "smart". There is a lot of superficial re-inventing of the wheel, people feeling morally superior to others while not actually holding a very thought-out moral view, etc., and just not being aware of a history of similar ideas explored in the past or in other communities. All of these behaviors are pretty standard for members of ideological groups, e.g., religions, fringe movements, cults, but EA simultaneously claims to be rational and somehow better than that. In my opinion, the reality is that the average EA follower has not examined their own beliefs markedly more than, say, an earnest Christian (which is okay, but is what it is).

I also think that the EA movement has lots of extremist viewpoints that are tolerated within the community and that really harm its PR. E.g., one can read an archive of Caroline Ellison identifying as an EA member and rating "wild fish suffering" as orders of magnitude worse than human genocides. These kinds of statements are (I would argue justifiably) repugnant to the lay person.

Expand full comment

> It’s only when you’re fighting off the entire world that you feel truly alive.

SO true, a quote for the ages

Expand full comment

Is the article formatting all fucked up or am I not appreciating the current formatting trends?

Expand full comment

The Red Cross has been in existence for nearly 150 years (demonstrated longevity) and claims to help 200 million people a year outside of the US alone. Saving 200,000 people over ten years seems like too little to measure, even if the lives-per-dollar figure is better. Whatever the Red Cross is doing scales. There's no way that the bed nets would scale at the same level.

The Red Cross helps more people every year than EA has helped living beings of all types in its entire history. That's even if you consider chicken lives of equal value to humans.

I'm not anti-EA, though I am absolutely anti-arrogance. EAs seem far more arrogant through their claims to be the first and only group that really evaluates charities effectively. I also put a lot more stake in groups that can maintain productivity year after year than in a new group with a rocky start and an uncertain future. I guess I'll be more impressed when EA helps its first 100 million people, or passes from a first generation to a second and keeps going. You can argue that I'm holding EAs to too high a bar, and I would agree that it's a high bar - but not too high - because organizations that already exist have been doing more, and for longer. If EAs can scale and show they are more efficient while helping millions of people every year, I'll take them more seriously. Based on the chart from the above post, it looks like EAs have spent several billion dollars on various goals, including well over a billion on global health and development. Even using standard non-EA costs-per-life-saved, that should have resulted in more lives saved than 200,000. I'm not saying it was misspent, but EAs have not demonstrated any better ability to do charity than the multiple already-existing charities that do this regularly.
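Spelling out that last bit of arithmetic, using the $3,000-$5,000-per-life range cited elsewhere in this thread (assumed figures, not EA's own claims):

```python
spent = 1_000_000_000  # "well over a billion" -- using the floor
for cost_per_life in (3_000, 5_000):
    print(f"${cost_per_life:,}/life -> {spent / cost_per_life:,.0f} lives")
# $3,000/life -> 333,333 lives
# $5,000/life -> 200,000 lives
```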

AI doom and animal welfare are speculative and philosophical diversions from the more generally accepted mission of helping people not die of easily preventable causes. One hasn't shown any long-term benefits and may never be able to (especially if it is successful at an early stage: no one will likely recognize the benefits of AI alignment if there was never an obvious, factual danger first). The other literally cannot demonstrate any benefits to humans if the observer doesn't value animal wellbeing for its own sake. Both are an uphill battle for PR or growth of the movement.

Expand full comment

I'm not sure if it's something really weird on my end, but I'm seeing all the images in this post duplicated 4 or 5 times and breaking up the text in a bizarre way

Expand full comment

This is a political party and lobby. This is not about charity.

Expand full comment

This is a great list of the positive things effective altruism has done, and I agree many of these are still underrated. However, I feel like it doesn't fully engage with the key counter-arguments. Here's a recent essay of mine exploring that: https://inexactscience.substack.com/p/the-case-for-narrow-utilitarianism

Expand full comment

I agree with this. I'm not against research in AI safety, but it's really harmful to speak---as Yudkowsky and the like so often do---as though AI safety is *the one most important thing* that we should fanatically devote all resources to. Effective charity is the best part of EA---one of the best parts of humanity in general, in fact.

But I also have another problem with EA, which is that it has been severely damaging to the mental health of many of its adherents, including myself. The notion that we are as responsible for every death we fail to prevent as we would be if we actually killed them ourselves is not just mistaken---it's incredibly dangerous. (A lot of EA people, if pressed, would agree that this is wrong. But they often speak as though it's true, and use statistical data as if it were true. Some may actually believe it is true. (I think there is some motte-and-bailey here.))

For we all fail to prevent an enormous number of deaths literally constantly. For every $3,000-$5,000 you spend on literally anything other than the most cost-effective charities, someone dies and you could have saved them.

Oh, but it gets worse; because it's not just money you spent. It's also money you *never made*. Could you have worked a little harder and gotten a better job? Could you have accepted more miserable working conditions to receive slightly higher pay?

You are letting people die that you could have saved. So am I. We all are. Always.

But this does *not* make us murderers. Because it was *never* your job to save everyone. You couldn't even if you tried. And if you try too hard, you might just destroy yourself.

I know, because I nearly did. Ironically, I'm sure I contributed less to the world because of it. So even trying to optimize for maximum altruistic effort doesn't actually maximize altruistic outcomes. (But don't take that to mean you just need to do a second-order optimization, figure out the exactly optimal amount of rest and enjoyment you need to maximize your long-term functioning; that way *also* lies madness.)

We can't tell people to hear the scream of a child in every Disneyland ticket they buy---or job application they fail to submit. We need to stop telling them that they are unforgivable monsters because they aren't willing to sacrifice themselves to save someone they will never meet. Being told that I was just such an unforgivable monster---and *believing* it---has been a major factor in my depression for years, and I know I'm not the only EA adherent out there who suffers from severe depression. (Note that depression is positively correlated with autism, ADHD, high IQ, and high empathy---all of which are practically diagnostic criteria for EA membership.)

What I think EA desperately needs right now is a clear and consistent message about *what is your fair share*---at what point can we say that you have worked hard enough, given enough of yourself, that you have discharged your moral responsibility. We also need a clear and consistent message reminding people that failing to save someone *isn't* the same as killing them, and that we will all inevitably do the former all the time while most of us will (thankfully) never do the latter.

We have done a good job of telling people the work that must be done, and even doing some of that work; and this is a very good thing. But we still haven't told people how to really integrate these values into a flourishing human life, how to achieve a healthy balance between doing good and taking care of yourself.

Expand full comment

I'm late to this, but...

I think what's missing here is an acknowledgement that the standards the EA movement is being judged against and found wanting are standards _the EA movement itself loudly advocated for_. It was an EA organisation called GiveWell that popularised the notion of slapping a "do not recommend" tag on organisations that failed to demonstrate strong evidence of cost-effectiveness towards specific metrics like QALYs and good governance, even if they were well-intentioned and had impressive lists of bullet-pointed accomplishments. (Yes, GiveWell is supportive of the principle of giving to other causes and charities that don't meet its recommendations, but it also compiled Celebrated Charities We Don't Recommend lists.) It was the founder of that org who considered a seat on OpenAI's board to be worth $30 million of Dustin Moskovitz's philanthropic cash (compared with the alternative of giving it to GiveWell's highest-rated charities, which could - according to GiveWell's analysis - have saved a little under 10,000 lives instead). Critics didn't decide the board seat just lost was a big deal even compared with tangibly saving lives; Holden and Dustin did.

Dustin is entitled to spend his fortune how he likes and has spent far more than most on saving lives in developing countries too, Holden is entitled to change his mind, and not all endeavours can be expected to succeed, but if you start a movement from the premise that most philanthropic dollars could be spent better and some approaches and organisations even deserve calling out for being particularly wasteful, you best be prepared to have that spotlight turned on you.

Expand full comment

lol

Expand full comment

You claim that EA “Played a big part in creating the YIMBY movement” and prove that in a footnote by saying that Open Philanthropy claims to be the first institutional funder for the YIMBY movement.

First of all, providing funding is not even close to the equivalent of creating a movement. Second of all, the YIMBY movement obviously predates EA. I am a board member of a YIMBY organization that was founded 38 years ago and works in coalition with a better-known YIMBY organization that was founded 16 years ago. These obviously predate EA, and the other board members have never heard of EA. EA most certainly did not create this movement—people have been doing the work since before many EAers were born.

If you do a simple Google search—or even use Wikipedia, your apparent source of choice—you will see on the YIMBY Wikipedia page that planners have been using the term YIMBY to describe that movement since at least 1993, and that the YIMBY movement was spreading around the world before EA was conceived, let alone before EA had the funds to support it.

Your ridiculous, easily-refutable claim that EA founded the YIMBY movement makes me question every other claim you make in this sad attempt of a pro-EA PR blog post.

Expand full comment

I really, really appreciate this article. It helps motivate me back to being interested in EA.

Expand full comment