922 Comments
Comment deleted

Yeah, this is where I end up on it as well. To the extent that it helps people give more effectively, it's been a great thing.

It does go a bit beyond merely annoying, though. I think something Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much as roles as priests etc. have done in the past. The average person should be wary of people smarter than them telling them what to do with their money.


The only durable protection from scammers is a measurable outcome. That's part of why I think EA is only effective when it focuses on things that can be measured. The meat of the improvement in EA is moving money from frivolous luxury to measurable charity, not moving measurable charity to low probability moonshots.


Good list.

A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”

I think many people would have a similar response to this post.

Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)


I'm probably below the average intelligence of people who read Scott, but that's essentially my position. AI doomerism is kinda cringe and I don't see evidence of anything even starting to be like their predictions. EA is cool because instead of donating to some charity that spends most of their money on fundraising or whatever, we can directly save/improve lives.


Which "anything even starting to be like their predictions" are you talking about?

-Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)

-AI companies obviously have a very hard time controlling their AIs; it usually takes weeks/months after release before they stop saying things that embarrass the companies, despite the companies clearly not wanting this

If you won't consider things to be "like their predictions" until we get a live example of a rogue AI, that's choosing to not prevent the first few rogue AIs (it will take some time to notice the first rogue AI and react, during which time more may be made). In turn, that's some chance of human extinction, because it is not obvious that those first few won't be able to kill us all.

It is notably easier to kill all humans (as a rogue AI would probably want) than it is to kill most humans but spare some (as genocidal humans generally want); the classic example is putting together a synthetic alga that isn't digestible, doesn't need phosphate and has a more-efficient carbon-fixing enzyme than RuBisCO, which would promptly bloom over all the oceans, pull down all the world's CO2 into useless goo on the seafloor, and cause total crop failure alongside a cold snap - and which takes all of one laboratory and some computation to enact.

I don't think extinction is guaranteed in that scenario, but it's a large risk and I'd rather not take it.


> Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)

I concur on beating Go, but CAPTCHAs were never thought to be unbeatable by AI - it's more that they make robo-filling forms rather expensive. Writing text also never seemed that doubtful, and driving cars, at least to the extent they can at the moment, never seemed unlikely.


This would have been very convincing if people like Patrick had given timelines for the earliest point at which they expected each advance to happen, so we could check whether their intuitions here are calibrated. Because the fact is, if you asked most people, they definitely would not have expected art or writing to fall before programming. Basically only gwern is sinless.


On the other hand, EY has consistently refused to make measurable predictions about anything, so he can't claim credit in that respect either. To the extent you can infer his expectations from earlier writing, he seems to have been just as surprised as anyone, despite notionally being an expert on AI.


1. No one mentioned Eliezer. If Eliezer is wrong about timelines, that doesn't mean we suddenly exist in a slow-takeoff world. And it's basically a bad-faith argument to imply that Eliezer getting surprised *in the direction of capabilities getting better than expected* is evidence of non-doom.

2. Patrick is explicitly saying that he sees no evidence. Insofar as we can use Patrick's incredulity as evidence, it would be worth far more if it were calibrated and informed rather than uncalibrated. AI-risk arguments depend on more things than just incredulity, so the """lack of predictions""" matters relatively less. My experience has been that people who use their incredulity in this manner in fact do worse at predicting capabilities, which is why getting disproven would be encouraging.

3. I personally think that by default we cannot predict what the rate of change is, but I can lie lazily on my hammock and predict "there will be increases in capability barring extreme calamity" and essentially get completely free prediction points. If you do believe that we're close to a slowdown, or we're past the inflection point of a sigmoid and that my priors about progress are wrong, you can feel free to bet against my entirely ignorant opinion. I offer up to 100 dollars at ratios you feel are representative of slowdown, conditions and operationalizations tbd.

4. If you cared about predictive accuracy, gwern did the best and he definitely believes in AI risk.


"write text that can't be easily distinguished from human"? Really?

*None* of the examples I've seen measure up to this, unless you're comparing it to a young human that doesn't know the topic but has some measure of b*sh*tting capability - or rather, thinks he does.

Maybe I need to see more examples.


Yeah, there are a bunch of studies now where they give people AI text and human text and ask them to rate each in various ways and to say whether they think it was written by a human or an AI, and generally people rate the AI text as more human.


The examples I've seen are pretty obviously talking around the subject, when they don't devolve into nonsense. They do not show knowledge of the subject matter.

Perhaps that's seen as more "human".

I think that if they are able to mask as human, this is still useful, but not for the ways that EA (mostly) seems to think are dangerous. We won't get advances in science, or better technology. We might get more people falling for scammers - although that depends on the aim of the scammer.

Scammers that are looking for money don't want to be too convincing, because they are filtering for gullibility. Scammers that are looking for access, on the other hand, do often have to be convincing in impersonating someone who should have the ability to get them to do something.


But Moore's law is dead. We're reaching physical limits, and at these limits it already costs millions to train and execute a model that, while impressive, is still multiple orders of magnitude away from genuinely dangerous superintelligence. Any further progress will require infeasible amounts of resources.


Moore's Law is only dead by *some* measures, as has been true for 15-20 years. The limiting factors for big ML are mostly inter-chip communications, and those are still growing aggressively.


Also, algorithms are getting more efficient.


This is one of the reasons I'm not a doomer, which is that most doomers' mechanism of action for human extinction is biological in nature, and most doomers are biologically illiterate.


RuBisCO is known to be pretty awful as carboxylases go. PNA plus protein-based ribosomes avoid the phosphate problem.

I'm not saying it's easy to design Life 2.0; it's not. I'm saying that with enough computational power it's possible; there clearly are inefficiencies in the way natural life does things because evolution likes local maxima.


You're correct on the theory; my point was that some people assume computation is the bottleneck, rather than actually getting things to work in a lab within a reasonable timeframe. Not only is wet-lab work challenging, I also have doubts as to whether biological systems are computable at all.


I think the reason that some people (e.g. me) assume that computation* is the bottleneck is that IIRC someone actually did assemble a bacterium (of a naturally-existing species) from artificially-synthesised biomolecules in a lab. The only missing component to assemble Life 2.0 would then seem to be the blueprint.

If I'm wrong about that experiment having been done, please tell me, because yeah, that's a load-bearing datum.

*Not necessarily meaning "raw flops", here, but rather problem-solving ability


Much like I hope for more people to donate to charity based on the good it does rather than based on the publicity it generates, I hope (but do not expect) that people decide to judge existential risks based on how serious they are rather than based on how cringe they are.


Yeah this is where I am. A large part of it for me is that after AI got cool, AI doomerism started attracting lots of naked status seekers and I can't stand a lot of it. When it was Gwern posting about slowing down Moore's law, I was interested, but now it's all about getting a sweet fellowship.


Is your issue with the various alignment programs people keep coming up with? Beyond that, it seems like the main hope is still to slow down Moore's law.


My issue is that the movement is filled with naked status seekers.

FWIW, I never agreed with the AI doomers, but at least older EAs like Gwern I believe to be arguing in good faith.


Interesting, I did not get this impression, but I also worry about AI risk - maybe that causes me to focus on the reasonable voices and filter out the nonsense. I'd be genuinely curious for an example of what you mean, though I understand if you wouldn't want to single out anyone in particular.


I don’t mind naked status seeking as long as people do it by a means that is effective at achieving good ends for the world. One can debate whether AI safety is actually effective, but if it is, EAs should probably be fine with it (just like the naked cash seekers who are earning to give).


I agree. But there seem to be a lot of people in EA with some serious scrupulosity going on. Like that person who said they would like to donate a kidney, but could not bear the idea that it might go to a meat-eater, and so the donor would be responsible for all the animal suffering caused by the recipient. It's as though EA is, for some people, a refuge from ever feeling they've done wrong -- as though that's possible!


What’s wrong with naked status seekers (besides their tendency to sometimes be counterproductive if advancing the cause works against their personal interests)?


It's bad when the status seeking becomes more important than the larger purpose. And at the point when it gets called "naked status seeking", it's already over that line.


They will only do something correct if it advances their status and/or cash? To the point of not researching or approving research into something if it looks like it won't advance them?

They have to be bribed to do the right thing?


How do you identify naked status seekers?


Hey now I am usually clothed when I seek status


It usually works better, but I guess that depends on how much status-seeking is done at these EA sex parties I keep hearing about...


Sounds like an isolated demand for rigor


Definitely degree of confidence plays into it a lot. Speculative claims where it's unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely different ball game from "I notice that we claim to care about saving lives, and there's a proverbial $20 on the ground if we make our giving more efficient."


I think it also helps that those shorter-term impacts can be more visible. A malaria net is a physical thing that has a clear impact. There's a degree of intuitiveness there that people can really value


Most AI-risk–focused EAs think the likelihood of the bad outcome is greater than 10%, not less than 1%, fwiw.


And that's the reason many outsiders think they lack good judgment.


And yet, what exactly is the argument that the risk is actually low?

I understand and appreciate the stance that the doomers are the ones making the extraordinary claim, at least based on the entirety of human history to date. But when I hear people pooh-poohing the existential risk of AI, they are almost always pointing to what they see as flaws in some doomer's argument - and missing the point that the narrative they are criticizing is usually just a plausible example of how it might go wrong, intended to clarify and support the actual argument, rather than being the entire argument.

Suppose, for the sake of argument, that we switch it around and say that the null hypothesis is that AI *does* pose an existential risk. What is the argument that it does not? Such an argument, if sound, would be a good start toward an alignment strategy; contrariwise, if no such argument can be made, does it not suggest that at least the *risk* is nonzero?


I find Robin Hanson's arguments here very compelling: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to


It's weird that you bring up Robin Hanson, considering that he expects humanity to be eventually destroyed and replaced with something else, and sees that as a good thing. I personally wouldn't use that as an argument against AI doomerism, since people generally don't want humanity to go extinct.


What specific part of Robin Hanson's argument - that growth curves are a known thing - do you find convincing?

That's the central intuition underpinning his anti-foom worldview, and I just don't understand how someone can generalize from that to something which doesn't automatically have all the foibles of humans. Do you think that a population of people who have to sleep, eat and play would be fundamentally identical to an intelligence that is differently constrained?


I'm not seeing any strong arguments there, in that he's not making arguments of the form "here is why that can't happen", but instead arguments of the form "if AI is like <some class of thing that's been around a while>, then we shouldn't expect it to rapidly self-improve/kill everything, because that other thing didn't".

E.g. if superintelligence is like a corporation, it won't rapidly self-improve.

Okay, sure, but there are all sorts of reasons to worry superintelligent AGI won't be like corporations. And this argument technique can work against any not-fully-understood future existential threat. Super-virus, climate change, whatever. By the anthropic principle, if we're around to argue about this stuff, then nothing in our history has wiped us out. If we compare a new threat to threats we've encountered before and argue that based on history, the new threat probably isn't more dangerous than the past ones, then 1) you'll probably be right *most* of the time and 2) you'll dismiss the threat that finally gets you.


I’ve been a big fan of Robin Hanson since there was a Web; like Hanania, I have a strong prior to Trust Robin Hanson. And I don’t have any real argument with anything he says there. I just don’t find it reassuring. My gut feeling is that in the long run it will end very very badly for us to share the world with a race that is even ten times smarter than us, which is why I posed the question as “suppose the null hypothesis is that this will happen unless we figure out how to avoid it”.

Hanson does not do that, as far as I can tell. He quite reasonably looks at the sum of human history and finds that he is just not convinced by doomers’ arguments, and all his analysis concerns strategies and tradeoffs in the space that remains. If I accept the postulate that this doom can’t happen, that recursive intelligence amplification is really as nonlumpy as Hanson suspects, then I have no argument with what he says.

But he has not convinced me that what we are discussing is just one more incremental improvement in productivity, rather than an unprecedented change in humans’ place in the world.

I admit that I don’t have any clear idea whether that change is imminent or not. I don’t really find plausible the various claims I have read that we’re talking about five or ten years. And I don’t want to stop AI work: I suspect AGI is a prerequisite for my revival from cryosuspension. But that just makes it all the more pressing to me that it be done right.


Ignoring the substance of the argument, I find its form to be something like a Pascal's wager bait-and-switch: if there is even a small chance you will burn in hell for eternity, why wouldn't you become Catholic? Such an argument fails for a variety of reasons, one being that it doesn't account for alternative religions, their probabilities, and their alternative outcomes.
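
A toy expected-value table makes that objection concrete. The probabilities and payoffs below are arbitrary assumptions, purely to illustrate the structure of the rebuttal:

```python
# Toy numbers only: nothing here is an estimate, just an illustration of
# how adding alternative hypotheses changes a Pascal's-wager-style sum.
outcomes = {
    # hypothesis: (probability, payoff if you convert)
    "Catholicism true":    (0.01, +1e9),
    "rival religion true": (0.01, -1e9),  # converting damns you instead
    "no afterlife":        (0.98, 0.0),
}

expected_value = sum(p * payoff for p, payoff in outcomes.values())
print(expected_value)  # 0.0 -- the huge payoff no longer dominates
```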

So I find I should probably update my reasoning toward there being some probability of x-risk here, but the probability space is pretty large.

One of the good arguments for doomerism is that the intelligences will be in some real sense alien: there is a wider distribution of possible ways to think than human intelligence, including in how we consider motivation, and this could lead to paperclip maximizers or similar AI-Cthulhus of unrecognizable intellect. I fully agree that these might very well be able to easily wipe us out. But there are many degrees of capability and motivation, and I don't see the reason to assume that either a side-effect of ulterior motivation or direct malice leads to the certainty of extinction expressed by someone like Eliezer. There are many possibilities, and many are fraught. We should invest in safety and alignment. But that doesn't mean we should consider x-risk a certainty, and certainly not at double-digit likelihoods within short timeframes.


Comparative advantage and gains from trade say that the more different from us they are, the more potential profit they'll see in keeping us around.


Yes, the space of possibilities (I think you meant this?) is pretty large. But x-risk is most of it. Most possible outcomes of optimisation processes over Earth and the Solar System have no flourishing humanity in them.


It is perhaps a lot like other forms of investment. You can't just ask "What's the optimal way to invest money to make more money?" because it depends on your risk tolerance. A savings account will give you 5%. Investing in a random seed-stage startup might make you super-rich but usually leaves you with nothing. If you invest in doing good then you need to similarly figure out your risk profile.

The good thing about high-risk financial investments is they give you a lot of satisfaction of sitting around dreaming about how you're going to be rich. But eventually that ends when the startup goes broke and you lose your money.

But with high-risk long-term altruism, the satisfaction never has to end! You can spend the rest of your life dreaming about how your donations are actually going to save the world and you'll never be proven wrong. This might, perhaps, cause a bias towards glamourous high-risk long-term projects at the expense of dull low-risk short-term projects.


Much like other forms of investment, if someone shows up and tells you they have a magic box that gives you 5% a month, you should be highly skeptical. Except replace %/month with QALYs/$.
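
To spell out the arithmetic behind that skepticism (a minimal sketch; the only input is the advertised 5% per month):

```python
# 5% per month compounds to roughly 80% per year, a return that no
# legitimate investment sustains.
monthly_return = 0.05
annual_return = (1 + monthly_return) ** 12 - 1
print(f"{annual_return:.1%}")  # ~79.6%
```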


I see your point, but simple self-interest is sufficient to pick up the proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may have a little bit of an analogous filter, but I doubt that it is remotely as strong.


The advantage of making these types of predictions is that even if someone says that the unflattering thing is not even close to what drives them, you can go on thinking "they're just saying that because my complete and perfect fantasy makes them jealous of my immaculate good looks".


Yeah I kinda get off the train at the longtermism / existential risk part of EA. I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?

I like the malaria bed nets stuff because it's easy to confirm that my money is being spent doing good. That's almost exactly the opposite when it comes to AI risk. For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer - is that supposed to be a good thing? Has discovering RLHF, which in turn powered ChatGPT and launched the AI revolution, made AI risk more or less likely? It almost feels like one of those Greek tragedies where the hero struggles so hard to escape their fate that they end up fulfilling the prophecy.


I think he was pointing out that EAs have been a big part of the current AI wave. So whether you are a doomer or an accelerationist, you should agree that EA's impact has been large, even if you disagree about its sign.


Problem is, the OpenAI scuffle shows that right now, as AI is here or nearly here, the ones making the decisions are the ones holding the purse strings, not the ones with the beautiful theories. Money trumps principle, and we just saw that blow up in real time in glorious Technicolor and Surround-sound.

So whether you're a doomer or an accelerationist, the EAs' impact is "yeah, you can re-arrange the deckchairs, we're the ones running the engine room" as things go ahead *now*.


Not that I have anything against EAs, but, as someone who wants to _see_ AGI, who doesn't want to see the field stopped in its tracks by impossible regulations, as happened to civilian nuclear power in the USA, I hope that you are right!


I mean, if I really believed we'd get conscious, agentic AI that could have its own goals and be deceitful to humans and plot deep-laid plans to take over and wipe out humanity, sure I'd be very, very concerned and unhappy about this result.

I don't believe that, nor that we'll have Fairy Godmother AI. I do believe we'll have AI, an increasing adoption of it in everyday life, and it'll be one more hurdle to deal with. Effects on employment and jobs may be catastrophic (or not). Sure, the buggy whip manufacturers could shift to making wing mirrors for the horseless carriages when that new tech happened, but what do you switch to when the new tech can do anything you can do, and better?

I think the rich will get richer, as per usual, out of AI - that's why Microsoft etc. are so eager to pave the way for the likes of Sam Altman to be in charge of such 'safety alignment' because he won't get in the way of turning on the money-fountain with foolish concerns about going slow or moratoria.

AGI may be coming, but it's not going to be as bad or as wonderful as everyone dreads/hopes.


That's mostly my take too. But to be fair to the doomer crowd, even if we don't buy the discourse on existential risks, what this concern is prompting them to do is lots of research on AI alignment, which in practice means trying to figure out how AI works inside and how it can be controlled and made fit for human purposes. Which sounds rather useful even if AI ends up being on the boring side.


> but what do you switch to when the new tech can do anything you can do, and better?

Nothing -- you retire to your robot ranch and get anything you want for free. Sadly, I think the post-scarcity AGI future is still very far off (as in, astronomically so), and likely impossible...


I think that the impact of AGI is going to be large (even if superintelligence either never happens or the effect of additional smarts just saturates, diminishing returns and all that), provided that it can _really_ do what a median person can do. I just want to have a nice quiet chat with the 21st century version of a HAL-9000 while I still can.


> if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?

Surely these are different skills? Someone who could predict and warn against the dangers of nuclear weapon proliferation and the balance of terror, might still have been blindsided by their spouse cheating on them.

author

Suppose Trump gets elected next year. Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"

Also, nobody else seems to have noticed that Adam D'Angelo is still on the board of OpenAI, but Sam Altman and Greg Brockman aren't.


I hardly think that's a fair comparison. Climatologists are not in a position to control the outcome of a presidential election, but effective altruists controlled 4 out of 6 seats on the board of the company.

Of course, if you think that they played their cards well (given that D'Angelo is still on the board) then I guess there's nothing to argue about. I—and I think most other people—believe they performed exceptionally poorly.


The people in the driver's seat of global-warming activism are more often than not fascist psychopaths like Greta Thunberg, who actively fight against the very things that would best fight global warming, like nuclear energy and natural gas pipelines, so they can instead promote things that would make it worse, like socialism and degrowth.

We will never be able to rely on these people to do anything but cause problems. They should be shunned like lepers.


I think that if leaders are elected that oppose climate mitigation, that is indeed a knock on the climate-action political movement. They have clearly failed in their goals.

Allowing climate change to become a partisan issue was a disaster for the climate movement.

author

I think it's a (slight) update against the competence of the political operatives, but not against the claim that global warming exists.


I agree completely. Nonetheless, the claim that spending money on AI safety is a good investment rests on two premises: That AI risk is real, and that EA can effectively mitigate that risk.

If I were pouring money into activists groups advocating for climate action, it would be cold comfort to me that climate change is real when they failed.


The EA movement is like the Sunrise Movement/Climate Left. You can have good motivations and the correct ambitions, but if you have incompetent leadership your organization can be a net negative for your cause.


> Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election

It is a fair criticism of those who believe in the x-risk, or at least extreme downsides, of climate change that they have not figured out ways to better accomplish their goals than political agitation: building coalitions with potentially non-progressive causes, being more accepting of partial, incremental solutions, playing "normie" politics along the lines of Matt Yglesias, and maybe holding your nose through some negotiated deals where the right gets its way. That would probably mitigate and prevent situations where the climate people don't even have a seat at the table. For example, is making more progress on preventing climate extinction worth stalling out another decade on trans rights? I don't think that is exactly the tradeoff on the table, but there is a stark unwillingness to confront such things by a lot of people who publicly push for climate-maximalism.


"Playing normie politics" IS what you do when you believe something is an existential risk.

IMHO the test, if you seriously believe all these claims of existential threat, is your willingness to work with your ideological enemies. A real existential threat was, eg, Nazi Germany, and both the West and USSR were willing to work together on that.

When the only move you're willing to make regarding climate is to offer a "Green New Deal" it's clear you are deeply unserious, regardless of how often you say "existential". I don't recall the part of WW2 where FDR refused to send Russia equipment until they held democratic elections...

If you're not willing to compromise on some other issue then, BY FSCKING DEFINITION, you don't really believe your supposed pet cause is existential! You're just playing signaling games (and playing them badly, believe me, no-one is fooled). cf Greta Thunberg suddenly becoming an expert on Palestine:

https://www.spiegel.de/international/world/a-potential-rift-in-the-climate-movement-what-s-next-for-greta-thunberg-a-2491673f-2d42-4e2c-bbd7-bab53432b687


FDR giving the USSR essentially unlimited resources for their war machine was a geostrategic disaster that led directly to the murder and enslavement of hundreds of millions under tyrannies every bit as gruesome as Hitler's. Including the PRC, which menaces the world to this day.

The issue isn't that compromise on existential threats is inherently bad. The issue is that, many times, compromises either make things worse than they would've been otherwise, or create new problems as bad as or worse than those they subsumed.


I can think of a few groups, for example world Jewry, that might disagree with this characterization...

We have no idea how things might have played out.

I can tell you that the Hard Left, in the US, has an unbroken record of snatching defeat from the jaws of victory, largely because of their unwillingness to compromise, and I fully expect this trend to continue unabated.

Effect on climate? I expect we will muddle through, but in a way that draws almost nothing of value from the Hard Left.


The reason we gave the USSR unlimited resources was that they were directly absorbing something like 2/3 of the Nazis' bandwidth and military power in a terribly colossal, years-long meatgrinder that killed something like 13% of the entire USSR population.

Both the UK and USA are extremely blessed that the USSR was willing to send wave after wave of literally tens of millions of their own people into fighting the Nazis and absorbing so much of their might, and it was arguably the deal of the century to trade mere manufactured objects for the breathing room and dissipation of Nazi might that this represented.

The alternative would have been NOT giving the USSR unlimited resources: the Nazis quickly steamroll the USSR and then turn 100% of their attention and military might towards the UK, a fight they would almost certainly win. Or even better: not getting enough materiel to conduct a war and realizing he would lose, Stalin makes a deal with Germany and they BOTH focus on fighting the UK and USA - how long do you think the UK would have survived that?

Would the USA have been able to successfully fight a dual-front war with basically all of Europe aligned under Nazi power PLUS Japan with China's resources? We don't know, but it's probably a good thing in terms of overall deaths and destruction on all sides that we didn't need to find out.

Sure, communism sucked for lots of people. But a Nazi-dominated Europe / world would probably have sucked more.


Ah come on, Scott: the board got the boot and was revamped to the better liking of Sam, who was brought back in a Caesarian triumph. That isn't very convincing as "this guy is still on the board, which totes means the good guys are in control and keeping a cautious hand on the tiller against rushing out unsafe AI".

https://www.reuters.com/technology/openais-new-look-board-altman-returns-2023-11-22/

Convince me that a former Treasury Secretary is on the ball about the latest theoretical results in AI, go ahead. Maybe you can send him the post about AI Monosemanticity, which I genuinely think would be the most helpful thing to do? At least then he'd have an idea about "so what are the eggheads up to, huh?"


While I agree with the general thrust, I think the short-term vs. long-term tension is neglected. For instance, you yourself recommended switching from chicken to beef to help animals, but this neglects the fact that, over time, beef is less healthy than chicken, thus harming humans in a not-quickly-visible way. I hope this wasn't explicitly included and allowed in your computation (you did the switch yourself, according to your post), but this just illuminates the problem: EAs want clear beneficiaries, but "clear" often means "short-term" (for people who think AI doomerism is an exception, remember that for historical reasons, people in EA have, on median, timelines that are extremely short compared to most people's).


Damn, was supposed to be top-level. Not reposting.


> I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?

They got outplayed by Sam Altman, the consummate Silicon Valley insider. According to that anonymous rumour-collecting site, they're hardly the only ones, though it suggests they wouldn't have had much luck defending us against an actual superintelligence.

> For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing?

No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek tragedy was entirely deliberate. But as Scott said, it is a thing that someone has said.


I actually pretty completely endorse the longtermism and existential risk stuff - but disagree about the claims about the best ways to achieve them.

Ordinary global health and poverty initiatives seem to me to be much more hugely influential in the long term than the short term, thanks to the magic of exponential growth. An asteroid or gamma-ray or whatever program that has a .01% chance of saving 10^15 lives a thousand years from now looks good at first compared to saving a few thousand lives this year - but when you think about how much good those few thousand people will do for their next 40 generations of descendants, as well as all the people those 40 generations of descendants will help, whether through normal market processes or through effective-altruist processes of their own, this starts to look really good at the thousand-year mark.
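
A back-of-the-envelope sketch of that comparison (every number here, especially the per-generation growth factor, is a made-up assumption for illustration, not an estimate):

```python
# Moonshot: 0.01% chance of saving 10^15 lives a thousand years from now.
moonshot_ev = 1e-4 * 1e15  # expected lives saved = 1e11

# Compounding charity: save a few thousand lives now and assume the
# benefit is passed on with some growth factor per generation, over the
# ~40 generations in a thousand years.
lives_now = 5_000
growth_per_generation = 1.3  # pure assumption
charity_ev = lives_now * growth_per_generation ** 40  # ~1.8e8

print(f"{moonshot_ev:.3g} vs {charity_ev:.3g}")
# At roughly 1.52x per generation the two sides break even; the verdict
# is driven almost entirely by how you assume benefits compound.
```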

AI safety is one of the few existential risk causes that doesn’t depend on long term thinking, and thus is likely to be a very valuable one. But only if you have any good reason to think that your efforts will improve things rather than make them worse.


I remember seeing this for the "climate apocalypse" thing many years ago: some conservationist (specifically about birds, I think) was annoyed that the movement had become entirely about global warming.

EDIT: it was https://grist.org/climate-energy/everybody-needs-a-climate-thing/


Global warming is simply a livelier cause for the Watermelons to get behind. Not because they genuinely care about global warming, as they oppose the solutions that would actually help alleviate the crisis, but because they're psychopathic revolutionary socialists who see it as the best means available today of accomplishing their actual goal: the abolition of capitalism and the institution of socialism.


Yup, pretty much this!

EA as a movement to better use philanthropic resources to do real good is awesome.

AI doomerism is a cult. It's a small group of people who have accrued incredible influence in a short period of time on the basis of what can only be described as speculation. The evidence base is extremely weak and it relies far too much on "belief". There are conflicts of interest all over the place that the movement is making no effort to resolve.

Sadly, the latter will likely sink the former.


At this point a huge number of experts in the field consider AI risk to be a real thing. Even if you ignore the “AGI could dominate humanity” part, there’s a large amount of risk from humans purposely (mis)using AI as it grows in capability.

Predictions about the future are hard, and so neither side of the debate can do anything more than informed speculation about where things will go. You can find the opposing argument persuasive, but dismissing AI risk as mere speculation without evidence is not even wrong.

The conflicts of interest tend to be in the direction of ignoring AI risk by those who stand to profit from AI progress, so you have this exactly backwards.


You can't ignore the whole "AGI could dominate humanity" part, because that is core to the argument that this is an urgent existential threat that needs immediate and extraordinary action. Otherwise AI is just a new disruptive technology that we can deal with like any other new, disruptive technology: we could just let it develop and write the rules as the risks and dangers become apparent. The only way you justify the need for global action right now is the belief that everybody is going to die in a few years' time. The evidence for existential AI risk is astonishingly weak given the amount of traction it has with policymakers; it's closer to Pascal's Wager rewritten for the 21st century than anything based on data.

On the conflict of interest: the owners of some of the largest and best-funded AI companies on the planet are attempting to capture the regulatory environment before the technology even exists. These are people who are already making huge amounts of money from machine learning and AI, and they are taking it upon themselves to write the rules for who is allowed to do AI research and what they are allowed to do. You don't see a conflict of interest in this?


Let's distinguish "AGI" from "ASI", the latter being a superintelligence equal to something like a demigod.

Even AGI strictly kept to ~human level in terms of reasoning will be superhuman in the ways that computers are already superhuman: e.g., data processing at scale, perfect memory, replication, etc., etc.

Even "just" that scenario of countless AGI agents is likely dangerous in a way that no other technology has ever been before if you think about it for 30 seconds. The OG AI risk people are/were futurists, technophiles, transhumanists, and many have a strong libertarian bent. "This one is different' is something they do not wish to be true.

Your "conflict of interest" reasoning remains backwards. Regulatory capture is indeed a thing that matters in many arenas, but there are already quite a few contenders in the AI space from "big tech." Meaningfully reducing competition by squishing the future little guys is already mostly irrelevant in the same way that trying to prevent via regulation the creation of a new major social network from scratch would be pointless. "In the short run AI regulation may slow down our profits but in the long run it will possibly lock out hypothetical small fish contenders" is almost certainly what no one is thinking.


"No one on this successful tech company's board of directors is making decisions based on what will eventually get them the most monopoly profits" sounds like an extraordinary claim to me.


This is the board of directors that explicitly tried to burn the company down, essentially for being too successful. They failed, but can you ask for a more credible signal of seriousness?


1. Holy shit is that an ironic thing to say after the OpenAI board meltdown. Also check out Anthropic's board and equity structure. Also, profit-driven places like Meta are seemingly taking a very different approach. Why?

2. You’re doing the thing where decreasing hypothetical future competition from new, small entrants to a field equals monopoly. Even if there was a conspiracy by eg Anthropic to use regulatory barriers against new entrants, that would not impact the already highly competitive field between the several major labs. (And there are already huge barriers to entry for newcomers in terms of both expertise and compute. Even a potential mega contender like Apple is apparently struggling and a place like Microsoft found a partner.)


Expert at coming up with clever neural net architectures == expert at AI existential risk?


No?

It's just that at this point a significant number of experts in AI have come around to believing AI risk is a real concern. So have a lot of prominent people in other fields, like national security. So have a lot of normies who simply intuit that developing super-smart synthetic intelligence might go bad for us mere meat machines.

You can no longer just hand wave AI risk away as a concern of strange nerds worried about fictional dangers from reading too much sci-fi. Right or wrong, it's gone mainstream!


All predictions about the future are speculation. The question is whether it's correct or incorrect speculation.


Who are some people who have accrued incredible influence and what is the period of time in which they gained this influence?

From my standpoint it seems like most of the people with increased influence are either a) established ML researchers who recently began speaking out in favor of deceleration and b) people who have been very consistent in their beliefs about AI risk for 12+ years, who are suddenly getting wider attention in the wake of LLM releases.


Acceptance of catastrophic risk from artificial superintelligence is the dominant position among the experts (including independent academics), the tech CEOs, the major governments, and the general public. Calling it a "small group of people who have accrued incredible influence" or "a cult" is silly. It's like complaining about organizations fighting Covid-19 by shouting "conspiracy!" and suggesting that the idea is being pushed by a select group.

The denialists/skeptics are an incredibly fractured group who don't agree with each other at all about how the risk isn't there; the "extinction from AI is actually good", "superintelligence is impossible", "omnipotent superintelligence will inevitably be absolutely moral", and "the danger is real but I can solve it" factions and subfactions do not share ideologies, they're just tiny groups allying out of convenience. I don't see how one could reasonably suggest that one or more of those is the "normal" group, to contrast with the "cult".


I think there’s an important contrast between people who think that AI is a significant catastrophic risk, and people who think there is a good project available for reducing that risk without running a risk of making it much worse.


For those of you that shared the "I like global health but not longtermism/AI Safety", how involved were you in EA before longtermism / AI Safety became a big part of it?


I read some EA stuff, donated to AMF, and went to rationalist EA-adjacent events. But never drank the kool aid.


I think it is a good question to raise with the EA-adjacent. Before AI doomerism and the tar-and-feathering of EA, EA-like ideas were starting to get more mainstream traction and adoption. Articles supportive of, say, givewell.org in local papers - not mentioning EA by name, but discussing some of the basic philosophical ideas - were starting to percolate out into the common culture. Right or wrong, there has been a backlash that is disrupting some of that influence, even though those _in_ the EA movement are still mostly doing the same good stuff Scott outlined.


Minor point: I'd prefer to treat longtermism and AI Safety quite separately. (FWIW, I am not in EA myself.)

Personally, I want to _see_ AGI, so my _personal_ preference is that AI safety measures at least don't cripple AI development the way regulatory burdens made civilian nuclear power grind to a 50-year halt in the USA. That said, the time scale for plausible risks from AGI (at least the economic-displacement ones) is probably less than 10 years and may be as short as 1 or 2. Discussing well-what-if-every-job-that-can-be-done-online-gets-automated does not require a thousand-year crystal ball.

Longtermism, on the other hand, seems like it hinges on the ability to predict consequences of actions on *VASTLY* longer time scales than anyone has ever managed. I consider it wholly unreasonable.

None of this is to disparage Givewell or similar institutions, which seem perfectly reasonable to me.


I actually think that longtermism advocates for ordinary health and development charity - that sort of work grows exponentially in impact over the long term and thus comes out looking even better than things like climate or animal welfare, whose impacts grow closer to linearly with time.


The problem with longtermism is that you can use it to justify pretty much anything, regardless of whether you're even right, as long as your ends are sufficiently far away from the now that you never actually have to be held accountable for getting things wrong.

It's not a very good philosophy. People should be saved from malaria for its own sake. Not because of "longtermism".


Given a choice between several acts which seem worth doing for their own sake, the rate at which secondary benefits potentially compound over the long term could be a useful tiebreaker.


"that sort of work grows exponentially in impact over the long term" Some of the longtermist arguments talk about things like effects over a time scale where they expect us to colonize the galaxy. The time scale over which economies have been growing more-or-less steadily is more like 200-300 years. I think that it is sane to make a default assumption of exponential impact, as you describe, for that reason over that time scale (though many things, AI amongst them, could invalidate that). _Beyond_ 200-300 years, I don't think smoothish-growth-as-usual is a reasonable expectation. I think all we can say longer term than that is _don't_ _know_.


Longtermism / AI safety were there from the beginning, so the question embeds a false premise.


I heard about EA and got into the global health aspects of it from a talk on AI safety I went to given by... EY. I went to the talk on AI safety because I'd read HPMOR and just wanted to meet the author.

I wasn't at all convinced about AI safety, but I became interested in the global health aspects of EA. This year my donations went to PSI. I'm still an AI sceptic.


I gave money to GiveDirectly, which is EA-adjacent and some years got GiveWell endorsements. It never gets to the top of the recommendation list, but it has the big advantage of low variance (especially in the original formulation, where everyone living in a poor village got a one-time unconditional payout). "I can see you're not wasting the funds" is a good property if you have generally low trust in people running charitable orgs (the recent turn toward generating research papers to push UBI in the US is unfortunate).

AI-doom people have a decent shot at causing more deaths than all other human causes put together, if they follow the EY "nuke countries with datacenters" approach. Of course they'll justify it by appealing to the risk of total human extinction, but it shouldn't be surprising that people who estimate a substantially lower probability of the latter see the whole endeavor as probably net-negative. You'd be better off burning the money.


My only prior exposure was Doing Good Better, before seeing a *lot* of longtermism/x-risk messaging at EA Cambridge in 2018 (80k hours workshop, AI safety reading group, workshops at EA Cambridge retreat).

I considered AI safety (I'm a CS researcher already), enough to attend the reading group. But it seemed like pure math-level mental gymnastics to argue that the papers had any application to aligning future AGIs, and I dislike ML/AI research anyway.


Well there's also the part where people may have been involved in charity/NGO stuff before the cool kids relabeled it as EA.

Not to blame anyone for the relabeling though - if it got lots of fresh young people involved in humanitarian activity, and some renewed interest into its actual efficacy, they're more than entitled to take pride and give it a new name.


Guilty as charged; I posted my own top-level comment voicing exactly this position.


Freddie de Boer was talking about something like this today, about retiring the EA label. The effective EA orgs will still be there even if there is no EA. But I'm not really involved in the community, even if I took the Giving What We Can pledge, so it doesn't really matter much to me if AI X-risk is currently sucking up all the air in the movement.


I agree with the first part, but the problems with EA extend beyond AI doomerism. People in the movement seriously consider absurd conclusions like it being morally desirable to kill all wild animals; it has perverse moral failings as an institution; its language has evolved to become similar to postmodern nonsense; it has a strong left-wing bias; and it has been plagued by scandals.

Surely none of that is necessary to get more funding to go towards effective causes. I’d like to invite someone competent to a large corporate so that we can improve the effectiveness of our rather large donations, but the above means I have no confidence to do so.

https://iai.tv/articles/how-effective-altruism-lost-the-plot-auid-2284

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo


Well, at one time some people also considered absurd conclusions like giving voting rights to women, and look where we are. Someone has to consider ideas to figure out whether they're worth anything.


The problem is that utilitarianism is likely a fatally flawed approach when taken to its fullest, most extreme form. There is some element of deontology that probably needs to be accounted for in a more robust ethical framework.

Or, hah, maybe AGI is a Utility Monster we should accelerate: if our destruction would provide more global utility for such an optimizing agent than our continued existence, it should be the wished-for outcome. But such ideas are absurd.


To point out: Bentham in fact advocated for women's rights "before his time", which led to many proto-feminist works being published by John Stuart Mill. In fact, contemporary arguments against his stance cited that women only mattered in the context of what they could do for men, so it was ridiculous to speak of suffrage.

https://blogs.ucl.ac.uk/museums/2016/03/03/bentham-the-feminist/


Literally comparing "maybe we should kill whole classes of animals and people" to "maybe we should give rights to more classes of people". Wow.

The clearest evidence I can imagine that you're in a morally deranged cult.


I don't get it. Which one is the more plausible claim? Because for most of history, it would have been "killing whole classes of animals and people". The only reason that isn't true today is precisely because some people were willing to ponder absurd trains of thought.


Deliberate attempts to exterminate whole classes of people go back to at least King Mithridates VI in 88 BCE. For most of human history, giving women (or anyone) the vote was a weird and absurd idea, while mass slaughter was normal.

It's because people were willing to entertain "absurd" ideas that mass slaughter is now abhorrent and votes for all are normal.


Morally deranged cults don’t “seriously consider” ideas that go diametrically against what other members of the cult endorse. Morally deranged cults outright endorse these crazy ideas. EA does not endorse the elimination of wild animals, though it does consider it seriously.


The only thing worse around here than bad EA critics is bad EA defenders.


Any idea should be considered based on its merits, not an emotional reaction. I am not sure if you think I am in a cult, or the people in EA are.

All I can say is that negative utilitarianism exists. There is even a book, Suffering-Focused Ethics, exploring roughly the idea that suffering is much worse than positive experience.

As a person who is seriously suffering, I consider this topic at least worth discussing. The thought that I could end up in a situation where I cannot kill myself and won't get pain meds gives me serious anxiety. Yet this is pretty common: in most countries euthanasia is illegal and pain medicines are strictly controlled. Situations where you can suffer terribly and cannot die are common. Normal people don't think about this often, until they do.


Following my thoughts above, I feel like the suffering of wild and domesticated animals is something real. I am not sure why you think that by default we cannot even entertain the idea that we could end their suffering. I myself am neither pro nor contra, but I am happy that there are people who think about these topics.


As someone who doesn't identify with EA (but likes parts of it), I don't expect my opinion to be particularly persuasive to people who do identify more strongly with the movement, but I do think such a split would result in broader appeal and better branding. For example, I donate to GiveWell because I like its approach to global health & development, but I would not personally choose to donate to animal welfare or existential risk causes, and I would worry that supporting EA more generically would support causes that I don't want to support.

To some extent, I think EA-affiliated groups like GiveWell already get a lot of the benefit of this by having a separate-from-EA identity that is more specific and focused. Applying this kind of focus on the movement level could help attract people who are on board with some parts of EA but find other parts weird or off-putting. But of course deciding to split or not depends most of all on the feelings and beliefs of the people actually doing the work, not on how the movement plays to people like me.


I agree that there should be a movement split. The existential-risk/AI-doomerism subset of EA is definitely less appealing to the general public and attracts a niche audience, compared to the effective-charity subset, which is more likely to be accepted by people of all backgrounds. If we agree that we should try to maximize the number of people involved in at least one of the causes, then while the movement is associated with both, many people who would've been interested in effective charitable giving will be driven away by the existential-risk stuff.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

My first thought was "Yes, I think such a split would be an excellent thing."

My second thought is similar, but with one slight concern: I think that the EA movement probably benefits from attracting and being dominated by bluish-grey thinkers; I have a vague suspicion that such a split would result in the two halves becoming pure blue and reddish-grey respectively, and I think a pure blue Effective Charity movement might be less effective than a more ruthlessly data-centric bluish-grey one.

Expand full comment

Fully agree.

Expand full comment

Yes, a pure blue Effective Charity movement would give you more projects like the hundreds of millions OpenPhil spent on criminal justice, which they deemed ineffective but then spun off into its own thing.

Expand full comment

Can you explain the color coding? I must have missed the reference.

Expand full comment

I personally know four people who were so annoyed by AI doomers that they set out to prove beyond a reasonable doubt that there wasn't a real risk. In the process of trying to make that case, they all changed their mind and started working on AI alignment. (One of them was Eliezer, as he detailed in a LW post long ago.) Holden Karnofsky similarly famously put so much effort into explaining why he wasn't worried about AI that he realized he ought to be.

The EA culture encourages members to do at least some research into a cause in order to justify ruling it out (rather than mocking it based on vibes, like normal people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs pivoting to work on AI x-risk is one of the strongest meta-arguments for why you, dear reader, should give it a second thought.

Expand full comment

This was also my trajectory ... essentially I believed that there were a number of not too complicated technical solutions, and it took a lot of study to realize that the problem was genuinely extremely difficult to solve in an airtight way.

I might add that I don't think most people are in a position to evaluate in depth and so it's unfortunately down to which experts they believe or I suppose what they're temperamentally inclined to believe in general. This is not a situation where you can educate the public in detail to convince them.

Expand full comment

I'd argue in the opposite direction: one of the best things about the EA (and Rationalist) community is that it's a rare example of an in-group defined by adherence to an epistemic toolbox rather than by affiliation with specific positions on specific issues.

It is fine for there to be different clusters of people within EA who reach very different conclusions. I don't need to agree with everyone else about where my money should go. But it sure is nice when everyone can speak the same language and agree on how to approach super complex problems in principle.

Expand full comment

I think this understates the problem. EA had one good idea (effective charity in developing countries), one mediocre idea (that you should earn to give), and everything else is mixed; being an EA doesn't provide good intuitions any more than being a textualist does in US jurisprudence. I'm glad Open Phil donated to the early YIMBY movement, but if I want to support good US politics I'd prefer to donate directly to YIMBY orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and OpenAI events should be treated as broadly discrediting to the idea that EA is a well-run organization and to the reliability of its current leadership. I think GiveWell remains a good organization for what it is (and I will continue donating to GiveDirectly), but while I might trust individuals that Scott is calling EA, the EA label itself reads as a negative - the way I might like libertarians but not people using the Libertarian label.

Expand full comment

OK, this EA article persuaded me to resubscribe. I love it when someone causes me to rethink my opinion.

Expand full comment

Nothing like a good fight in the comments section to get the blood flowing and the wallet opened!

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

I think EA is great and this is a great post highlighting all the positives.

However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EA'ers were wooed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples, since they highlight blind spots in the community that the community doesn't seem to be confronting.

And maybe that's unfair. But EA signed up to be held to a higher standard.

Expand full comment
author
Nov 28, 2023·edited Nov 28, 2023Author

I didn't sign up to be held to a higher standard. Count me in for team "I have never claimed to be better at figuring out whether companies are frauds than Gary Gensler and the SEC". I would be perfectly happy to be held to the same ordinary standard as anyone else.

Expand full comment

I'm willing to give you SBF but I don't see how the castle thing holds up. There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car while actually living in a mansion is an (unfair) microcosm of the castle thinking.

Expand full comment

I don’t really get the issue with the castle thing. An organization dedicated to marketing EA spent a (comparatively) tiny amount of money on something that will be useful for marketing. What exactly is hypocritical about that?

Expand full comment

It's the optics. It looks ostentatious, like you're not really optimizing for efficiency. Sure, they justified this on grounds of efficiency (though I have heard questioning of whether being on the hook for the maintenance of a castle really is cheaper than just renting venues when you need them), but surely taking effectiveness seriously involves pursuing smooth interactions with the normies?

Expand full comment

1. Poor optics isn’t hypocrisy. That is still just a deeply unfair criticism.

2. Taking effectiveness seriously involves putting effectiveness above optics in some cases. The problem with many non-effective charities is that they are too focused on optics.

3. Some of the other EA “scandals” make it very clear that it doesn’t matter what you do; some people will hate you regardless. Why would you sacrifice effectiveness for maybe (but probably not) improving your PR, given how many constraints you already face?

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

EA ~= effectively using funds.

Castle != effectively using funds.

Therefore, hypocrisy.

Expand full comment

You can't separate optics from effectiveness, since effectiveness is dependent on optics. Influence is power, and power lets you be effective. The people in EA should know this better than anyone else.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

See, I think EA shows a lack of common sense, and this comment is an example. It's true that no matter what you do some people will hate you, but if you buy a fucking castle *everybody's* going to roll their eyes. It's not hard to avoid castles and other things that are going to alienate 95% of the public. And you have to think *some* about optics, because it interferes with the effectiveness of the organization if 95% of the public distrusts it.

Expand full comment

EA's disdain for "optics" is part of what drew me to it in the first place. I was fed up with charities and policymakers who cared far more about being perceived to be doing something than about actually doing good things.

Expand full comment

Where do you draw the line? If EAs were pursuing smooth interactions with normies, they would also be working on the stuff normies like.

Also, idk, maybe the castle was more expensive than previously thought. Good on paper, bad in practice. So, no one can ever make bad investments? Average it in with other donations and the portfolio performance still looks great. It was a foray into cost-saving real estate. To the extent it was a bad purchase, maybe they won't buy real estate anymore, or will hire people who are better at it, or what have you. The foundation that bought it will keep donating for, most likely, decades into the future. Why can't they try a novel donor strategy and see if it works? For information value. Explore what a good choice might be asap, then exploit/repeat/hone that choice in the coming years. Christ, *everyone* makes mistakes and tries things given decent reasoning. The castle had decent reasoning. So why are EAs so rarely allowed to try things, without getting a fingerwag in response?

Look at default culture not EA. To the extent EAs need to play politics, they aren't the worst at it (look at DC). But donors should be allowed to try things.

Expand full comment

> The castle had decent reasoning

I don't know, I feel like if there had been a single pragmatic person in the room when they proposed to buy that castle, the proposal would have been shot down. But yes, I do agree that ultimately, you have to fuck around and find out to find what works, so I don't see the castle as invalidating of EA, it's just a screw up.

Expand full comment

Didn’t the castle achieve good optics with its target demographic though? The bad optics are just with the people who aren’t contributing, which seems like an acceptable trade-off

Expand full comment

> surely taking effectiveness seriously involves pursuing smooth interactions with the normies?

If the normies you're trying to pursue smooth interactions with include members of the British political and economic Establishment, "come to our conference venue in a repurposed country house" is absolutely the way to go.

Expand full comment

I think you're overestimating how much the castle thing affects interactions with normies. It was a small news story and I bet even the people who read it at the time have mostly forgotten it by now. I estimate that if a random person were to see a donation drive organized by EAs today the chance that their donation would be affected by the castle story is <0.01%

Expand full comment

It's hard to believe that a castle was the optimum (all things considered; no one is saying EA should hold meetings in the cheapest warehouse). The whole pitch of the group is looking at things rationally, so if they fail at one of the most basic things like choosing a meeting location, and there's so little pushback from the community, then what other things is the EA community rationalizing invalidly?

And if we were to suppose that the castle really was carefully analyzed and evaluated validly as at- or near-optimal, then there appears to be a huge blind spot in the community around discounting how things are perceived, and this will greatly impact all kinds of future projects and fund-raising opportunities, i.e. the meta-effectiveness of EA.

Expand full comment

Have you been to the venue? You keep calling it "a castle", which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space, similar to renting a nice hotel. It is far from the most luxurious hotels - it is more like a homey version of the level of hotel you would run an event in. They considered different venues (as others said, this is explained in other articles) and settled on this one due to price, quality, location, and other considerations.

Quick test: if the venue appreciated in value and can now be sold for twice the money, making this a net-positive investment which they can in a pinch liquidate to fund a response to a really important crisis, and they do that - does that make the purchase better? If renting it out makes full financial sense, and other venues would have been worse - are you now convinced?

If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

> Have you been to the venue?

No, and it doesn't matter. EA'ers such as Scott have referred and continue to refer to it as a castle, so it must be sufficiently castle-like, and that's all that matters for how it impacts the perception of EA.

> They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.

Those other considerations could have included a survey of how buying a castle would affect perceptions of EA and potential donors. This is a blind spot.

> If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.

Also, indirectly answering your other questions -- I don't care about the castle. I'm rational enough not to care. What I care about is the perception of EA, and the fact that EA'ers don't seem to realize how bad the castle looks and how this might impact their future donations and public persona. They could have evaluated this rationally with a survey.

Expand full comment

Why wouldn't a castle be the optimal building to purchase? It is big, with many rooms, and due to the lack of modern amenities it is probably cheaper than buying a more recently built conference-center-type building. Plus, more recently built buildings tend to be in more desirable locations where land itself is more expensive. I think you're anchoring your opinion way too much on "castle = royalty".

Expand full comment

So far it's been entirely negative for marketing EA, isn't in use (yet), isn't a particularly convenient location, and the defenders of the purchase even said they bought the castle because they wanted a fancy old building to think in.

Expand full comment
founding

So the problem with the castle is not the castle itself; it's that it makes you believe the whole group is hypocritical and ineffective? But isn't that disproved by all the effective actions they take?

Expand full comment

Not me. I don't care about the castle. I'm worried about public perceptions of EA and how it impacts their future including donations. Perceptions of profligacy can certainly overwhelm the effective actions. Certain behaviors have a stench to lots of humans.

I think the only rational way to settle this argument would be for EA to run surveys of the impact on perceptions of the use of castles and how that could impact potential donors.

Expand full comment

Imagine an Ivy League university buys a new building, then pays a hundred thousand dollars extra to buy a lot of ivy and drape it over the exterior walls of the building. The news media covers the draping expenditure critically. In the long term, would the ivy gambit be positive or negative for achieving that university's goals of cultivating research and getting donations?

I don't know. Maybe we need to do one of those surveys that you're proposing. But I would guess that it's the same answer for the university's ivy and CEA's purchase of the miniature castle.

The general proposal I'm making: if we're going to talk about silly ways of gaining prestige for an institution, let's compare like with like.

Expand full comment
author

See my discussion of castle situation in https://www.astralcodexten.com/p/my-left-kidney . I think it was a totally reasonable purchase of a venue to hold their conferences in, and I think those conferences are high impact. I discuss the optics in part 7 of https://www.astralcodexten.com/p/highlights-from-the-comments-on-kidney, and in https://www.astralcodexten.com/p/the-prophet-and-caesars-wife

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

All I can write at this point is that it would be worth a grant to an EA intern to perform a statistically valid survey of how EA using a castle impacts the perception of EA and potential future grants. Perhaps have one survey of potential donors, another of average people, and include questions for the donors about how the opinions of average people might impact their donations.

Yes, I read your points and understand them. I find them wholly unconvincing as far as the potential impacts on how EA is perceived (personally, I don't care about the castle).

Expand full comment

EAs have done surveys of regular people about perceptions of EA - almost no one knows what EA is.

Donors are wealthy people, many of whom understand the long-term value of real estate.

I like frugality a lot. But I think people who are against a conference host investing in the purchase of their own conference venue are not thinking from the perspective of most organizations or donors.

Expand full comment

I.e., it's an average sort of thing that lots of other organisations would do. But EA is supposed to be better. (I don't have anything against EA particularly, but this is a pattern I keep noticing - something or someone is initially sold as being better, then defended as being not-worse.)

Expand full comment

We should learn to ignore the smell of hypocrisy. There are people who like to mock the COP conferences because they involve flying people to the Middle East to talk about climate change. But those people haven’t seriously considered how to make international negotiations on hard topics effective. Similarly, some people might mock buying a conference venue. But those people haven’t seriously thought about how to hold effective meetings over a long period of time.

Expand full comment

On that front, EA sometimes has a (faux?) humble front to it, and that's part of where the hypocrisy charge comes from. I think that dates to the early days, when people were so paralyzed by optics and effectiveness that they wouldn't spend on any creature comforts at all. Now, perhaps they've overcorrected, and spend too much on comforts to think bigger thoughts.

But if they want to stop caring about hypocrisy, they should go full arrogant, yes we're better and smarter than everyone else and we're not going to be ashamed of it. Take the mask off and don't care about optics *at all*. Let's see how that goes, yeah?

People don't mock buying a venue, they mock buying a *400 year old castle* for a bunch of nerds that quite famously don't care about aesthetics.

Expand full comment

Re: "should I care about perception?", I think "yes" and "no" are just different strategies. Cf. the stock market. Whereas speculators metagame the Keynesian Beauty Contest, buy-&-hold-(forever) investors mostly just want the earnings to increase.

This type of metagaming has upsides, in that it can improve your effectiveness, ceteris paribus. This type of metagaming also has downsides, in that it occasionally leads to an equilibrium where everyone compliments the emperor's new clothes.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

My impression is that EA is by definition supposed to be held to a higher standard. It's not just plain Altruism like the boring old Red Cross or Doctors Without Borders; it's Effective Altruism, in that it claims to use money effectively - more effectively than other charities do.

I don't see how that branding/stance doesn't come with an onus for every use of funds to stand up to scrutiny. I don't think it's fair to say that EA sometimes makes irresponsible purchases but should be excused because on net EA is good. That's not a deal with the devil; it's mostly very good charitable work with the occasional small-castle-sized deal with the devil. That seems to me like any old charitable movement, and not in line with the 'most effective lives per dollar' thesis of EA.

Expand full comment

Exactly! 1000 "yes"s!

I can barely comprehend the arrogance of a movement that has in its literal name a claim that they are better than everyone else (or ALL other charities at least), that routinely denigrates non-adherents as "normies" as if they're inferior people, that has members who constantly say without shame or irony that they're smarter than most people, that they're more successful than most people (and that that's why you should trust them), that is especially shameless in its courting of the rich and well-connected compared to other charities and groups...having the nerve to say after a huge scandal that they never claimed a higher standard than anyone else.

Here's an idea. Maybe, if you didn't want to be held to a higher standard than other people, you shouldn't have *spent years talking about how much better you are than other people*.

Expand full comment

I think you're misunderstanding EA. It did not create a bunch of charities and then shout "my charities are the effectivest!" EA started when some people said "which jobs/charities help the world the most?" and nobody had seriously tried to find the answers. Then they seriously tried to find the answers. Then they built a movement for getting people and money sent where they were needed the most. The bulk of these charities and research orgs *already existed*. EA is saying "these are the best", not "we are the best".

And- I read you as talking about SBF here? That is not what people failed at. SBF was not a charity that people failed to evaluate well. SBF was a donor who gave a bunch of money to the charities and hid his fraud from EA's and customers and regulators and his own employees.

I have yet to meet an EA who frequently talks about how they're smarter, more successful, or generally better than most people. I think you might be looking at how some community leaders think they need to sound really polished, and overinterpreting?

Now I have seen "normies" used resentfully, but before you resent people outside your subculture you have to feel alienated from them. The alienation here comes from how it seems really likely that our civilization will crash in a few decades. How if farm animals can really feel then holy cow have we caused so much pain. How there's 207 people dying every minute - listen to Believer by Imagine Dragons, and imagine every thump is another kid, another grandparent. It's a goddamn emergency, it's been an emergency since the dawn of humanity. And we can't fix all of it, but if a bunch of us put our heads together and trusted each other and tried really hard, we could fix so much... So when someone raised a banner and said "Over here! We're doing triage! These are the worst parts we know how to fix!", you joined because *duh*. Then you pointed it out to others, and. Turns out most people don't actually give a shit.

That's the alienation. There's lots of EA's who aren't very smart or successful at all. There's lots of people who get it, and have been triaging the world without us and don't want to join us. This isn't alienating. Alienation comes from normies - many of them smarter and more successful - who don't care. Or who are furious your post implied an art supply bake sale isn't just as important as the kids with malaria. It doesn't make people evil that they don't experience that moment of *duh*, but goddamn do I sometimes feel like we're from different planets.

Expand full comment

"The world is terrible and in need of fixing" is a philosophical position that is not shared by everyone, not a fact

Expand full comment

Right, that's why I said people who don't feel that way sometimes feel like aliens, not that they're mistaken.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

That was a good comment, and mine above was too angry I think. I'm starting to think everyone's talking about very different things with the same words. This happens a lot.

First, I'm a bit sceptical of the claim that, before EA, nobody was evaluating charity effectiveness. This *feels* like EA propaganda, and I'm *inclined* to suspect that EA's contribution was at least as much "more utilitarian and Bayesian evaluation" as "more evaluation". BUT I have no knowledge of this whatsoever and it has nothing to do with my objection to EA, so I'm happy to concede that point.

Second, regarding SBF my main issue is with the morality of "earning to give" and its very slippery slope either straight to "stealing to give" or to "earning to give, but then being corrupted by the environment and lifestyle associated with earning millions, and eventually earning and stealing to get filthy rich". Protestations that EAs never endorsed stealing, while I accept they're sincere, read a bit too much like "will no one rid me of this troublesome priest?" It's important for powerful people to avoid endorsing principles that their followers might logically take to bad ends, not just avoid endorsing the bad ends themselves. (Or at least, there's an argument that they should avoid that, and it's one that's frequently used to lay blame on other figures and groups.)

Third, regarding "normies", I don't feel like I've seen it used to disparage "people who don't think kids with malaria are more important than the opera", or if I have, not nearly as many times as it's used to disparage "people who think kids with malaria are more important than space colonies and the singularity". I completely see the "different planets" thing, and this goes both ways. Lots of people don't care about starving children, and that's horrific. EAs of course are only a small minority of those who *do* care, effectiveness notwithstanding. On the other hand, this whole "actual people suffering right now need to be weighed against future digital people" is so horrific, so terrifying, so monstrous that I'm hoping it's a hoax or something. But I haven't seen anyone deny that many EAs really do think like that. In a way, using the resources and infrastructure (if not the actual donations) set up for global poverty relief to instead make digital people happen faster is much worse than doing nothing at all for poverty relief to begin with (since you're actively diverting resources from it). So we could say "global health EAs" are on one planet, "normies" are on a second planet, and "longtermist EAs" are on a third planet, and the third looks as evil to the second as the second does to the first.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Fwiw, charity evaluation existed before EA, but it was almost entirely infected by Goodhart's law: charity evaluators measured *overhead*, not impact. A charity which claimed to help minorities learn STEM skills by having them make shoes out of cardboard and glue as an afterschool program (because everyone knows minorities like basketball shoes, and designing things that require measurements is kind of like STEM) would have been rated very, very highly if they were keeping overhead low and actually spending all of the money on their ridiculous program, but the actual impact of the program wouldn't factor into it at all. I use this example because it's something I actually saw in real life.

These evaluators served an important purpose in sniffing out fraud and the kind of criminal incompetence that destroys most charities, but clearly there was something missing, and EA filled in what was missing

Expand full comment

TBC, you're replying to a comment about whether individual EA's should be accountable for many EA orgs taking money from SBF. I do not think that "we try to do the most good, come join us" is branding with an onus for you, as an individual, to run deep financial investigations on your movement's donors.

But about the "castle", in terms of onuses on the movement as a whole- That money was donated to Effective Ventures for movement building. Most donations given *under EA* go to charities and research groups. Money given *directly to EV* is used for things like marketing and conferences to get more people involved in poverty, animal, and x-risk areas. EV used part of their budget to buy a conference building near Oxford to save money in the long run.

If the abbey was not the most effective way to get a conference building near Oxford, or if a conference building near Oxford was not the most effective way to build the movement, or if building the movement is not an effective way to get more good to happen, then this is a way that EA fell short of its goal. Pointing out failures is not a bad thing. (Not that anyone promised zero mistakes ever. The movement promised thinking really hard and doing lots of research, not never being wrong.) If it turns out that the story we heard is false and Rob Wiblin secretly wanted to live in a "castle", EA fell short of its goal due to gross corruption by one of its members, which is worth much harsher criticism.

In terms of the Red Cross, actually yes. Even if we found out 50% of all donor money was being embezzled for "castles", EA would still be meeting its goal of being more effective than just about any major charity organization. EA donation targets are more than twice as cost effective as Red Cross or DWB.
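To spell out the arithmetic behind that last claim (the "twice as cost-effective" multiplier is the rough figure above, not a precise measurement): if a typical major charity produces $E$ of good per donated dollar and EA targets produce at least $2E$, then with half of all money lost,

$$
0.5 \times 2E \;=\; E,
$$

so even in that worst case the surviving donations would still do at least as much good per dollar as the comparison charities.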

Expand full comment

Hold to the higher standard, but if you’re going to criticize about the castle, you better be prepared to explain how better to host a series of meetings and conferences on various topics without spending a lot more money.

Expand full comment

I think your assumption is wrong, though - that "any old charitable movement" is about as effective as one that spends the vast majority of funds on carefully chosen interventions, bought a castle once, and fell for a snake-oil salesman. My impression is most charitable movements accomplish very little, so it is quite easy to be more effective than them. And until another movement comes along that is more effective than EA at saving lives, I'll continue thinking that.

Expand full comment

A lot of people ignore it, but I continue to find the "Will MacAskill mentored SBF into earning to give" connection the problem there. No one can always be a perfect judge of character, but it was a thought experiment come to life. It says... *something* about the guardrails and the culture. It's easy to take it as saying too much - to be sure, many people do - but it's also easy to ignore what it says entirely.

I recognize broader-EA has (somewhat) moved away from earning to give and that the crypto boom that enabled SBF to be a fraud of that scale was (probably) a once in a lifetime right-place right-time opportunity for both success and failure. Even so.

Expand full comment

In point of fact, you all are being held to the ordinary standard. Public corruption leads to public excoriation, and "but look at the good we do" is generally seen as a poor defense until a few years later when the house is clearly clean. That is the ordinary standard.

Expand full comment

I think EA signed up to be held to the standard "are you doing the most good you can with the resources you have". I do not think it signed up to be held to the standard "are you perceived positively by as many people as possible". Personally I care a lot more about the first standard, and I think EA comes extremely impressively close to meeting it.

Expand full comment

Sure, but go meta on effectiveness and consider that poor rhetoric and poor perception could mean fewer resources for the actions that really matter. A few more castle debacles and the cost to billionaires of being associated with EA may cross a threshold.

Expand full comment

Seems a bit perverse to say EA is failing their commitment to cost-effectiveness by over-emphasising hard numbers in preference to vibes.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

Castle != cost-effective. And perceptions of using castles, and blindness to how bad this looks, could have massive long-term impacts on fund-raising.

I don't understand why this is so complicated. It doesn't matter how tiny the cost of the castle has been relative to all resources spent. It's like a guy who cheated on a woman once. Word gets around. And when the guy says, "Who _cares_ about the cheating! Look at all the wonderful other things I do", then it looks even worse. Just say, "Look, we're sorry, and we're selling the castle, looking for a better arrangement, and starting a conversation about how to avoid such decisions in the future."

Expand full comment

Why is the castle not cost effective?

Expand full comment

Yeah, I was just now trying to run figures about increased persuasiveness toward government officials and rich people, to see what the break-even would have to be.
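For concreteness, here is the kind of back-of-envelope sketch I mean - every input below is a placeholder assumption except the purchase price, which was reportedly around £15M:

```python
# Back-of-envelope break-even for buying a conference venue vs. renting.
# All numbers are illustrative assumptions, except the purchase price,
# which was widely reported as roughly GBP 15M.

purchase_price = 15_000_000    # GBP, reported
opportunity_rate = 0.05        # assumed return forgone on the capital tied up
appreciation_rate = 0.03       # assumed annual property appreciation
annual_upkeep = 250_000        # assumed maintenance and staffing
rentals_avoided = 400_000      # assumed yearly cost of renting similar venues

# Net cost of owning per year: capital cost net of appreciation, plus upkeep,
# minus the rentals you no longer pay for.
net_annual_cost = (purchase_price * (opportunity_rate - appreciation_rate)
                   + annual_upkeep - rentals_avoided)

# Extra giving the venue's persuasiveness must attract per year to break even,
# expressed in GiveWell-style lives (~GBP 4,000 per life is a rough figure).
cost_per_life = 4_000
print(f"Net annual cost: GBP {net_annual_cost:,.0f}")
print(f"Break-even: about {net_annual_cost / cost_per_life:.0f} lives' worth of donations/year")
```

On these made-up numbers the venue has to swing roughly £150k/year in extra donations to break even; the point is only that the break-even is a computable, arguable quantity, not that these particular inputs are right.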

Expand full comment

Given the obvious difference in intuitions on how to discount the perceptions of profligacy, as proposed in another response to Scott, I think the only way to actually resolve this is to conduct a survey.

Expand full comment

Maybe they should have bought a motte instead. That clearly wouldn't be assailable, and thus beyond reproach.

Expand full comment

I just do not get the mindset of someone who gets this hung up on "castles". Is that why I don't relate to the anti-EA mindset?

Should they have bought a building not made out of grey stone bricks? Would that make you happy?

Expand full comment

I understand your model is that the abbey was a horrid investment and a group that holds itself out as a cost-effectiveness charity, but also makes horrid investments, should lose credibility and donors.

No one disagrees with that premise.

I disagree that it was a horrid investment, based on the info they had at the time.

So, I don’t see a loss of credibility there.

Others will disagree that CEA/EV is primarily a cost-effectiveness charity.

Expand full comment

It looks pretty good to people who think castles are cool, and don't really care much about austerity or poor people or math. There are staggering numbers of such people, some of whom are extremely rich, and EA might reasonably have difficulty extracting money from them without first owning a castle.

Expand full comment

Yeah, but billionaires, by definition, have lots of money, so I think on net we're probably better off continuing to be associated with them.

Expand full comment

Unless people set out with a vendetta to destroy EA, the castle will be forgotten as a reputational cost, but will still be effective at hosting meetings. And if people do set out with a vendetta to destroy EA, it’s unlikely the castle thing is the only thing they could use this way.

Expand full comment

Scott's kidney post and this one seem to suggest the threshold is already crossed for some.

Expand full comment

The community by its nature has those blind spots. Their whole rallying cry is "Use data and logic to figure out what to support, instead of what's popular". This attracts people who don't care for, or aren't good at, playing games of perception. This mindset is great at saving the most lives with the least amount of money; it's not as good for PR or boardroom politics.

Expand full comment

Right, but they could logically evaluate perceptions using surveys. That raises the question: what other poor assumptions are they making that they're not applying rationalism to?

Expand full comment

I do wonder if the "castle" thing (it's not a castle!) is just "people who live in Oxford forget that they're in a bubble, and people who've never been to Oxford don't realise how weird it is". If you live in Oxford, which has an *actual* castle plus a whole bunch of buildings approaching a thousand years old, or if you're at all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say "Yep, looks like a solid choice. Wait, you want a *modern* building? Near *Oxford*? Do you think we have infinite money, and infinite time for planning applications?"

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

The word "castle" can be a bit misleading. They (or the ones in the UK) aren't all huge drafty stone fortresses. Many, perhaps most, currently habitable and occupied ones differ little from normal houses, but maybe have a somewhat more squat and solid appearance and a few crenellated walls here and there. I don't know what Castle EA looks like though! :-)

Edit: I did a quick web search, and the castle in question is called Chateau Hostacov and is in Bohemia, which is roughly the western half of the Czech Republic. (I don't do silly little foreign accents, but technically there is an inverted tin hat over the "c" in "Hostacov").

It cost all of $3.5M, which would just about buy a one-bedroom apartment in Manhattan or London. So not a bad deal, especially considering it can be (and, going by its website, is being) used as a venue for other events such as conferences and weddings and vacations etc.:

https://www.chateau-hostacov.cz/en

Expand full comment

The more famous and controversial one is the Oxford purchase, Wytham Abbey: https://en.wikipedia.org/wiki/Wytham_Abbey

Expand full comment

Impressive! By the way, I've slain and will continue to slay billions of evil Gods who prey on actually existing modal realities where they would slay a busy beaver of people – thus, if I am slightly inconvenienced by their existence, every EA advocate has a moral duty to off themselves. Crazy? No, same logic!

Expand full comment
author
Nov 28, 2023·edited Nov 28, 2023Author

I believe I can present better evidence to support the claim that EA has saved 200,000 lives than you can present to support the claim that you have slain billions of evil gods. Do you disagree with this such that I should go about presenting the evidence, or do you have some other point that I'm missing?

Expand full comment

Thanks for the response! Big fan.

My reply:

Surely the evidence is not trillions of times stronger than my evidence (which consists of my testimony, a kind of evidence)! So, my point stands. (And I can of course just inflate the # of Gods slain for whatever strength of evidence you offer.) Checkmate, Bayesian moralists.

But let's take a step back here and think about the meta-argument. You're the one who says that one of EA's many laudable achievements are "preventing future pandemics ... [and] preparing for superintelligent AI."

And this is surely the fat end of the wedge -- that is, while you do a fine job of bean-counting the various chickens uncaged and persons assisted by EA-related charities, I take your real motivation to be to argue for EA's benevolence on the basis of saving us from a purely speculative evil.

If we permit such speculation to enter into our moral calculations, we'll have no end of charlatans, chicanery, and Tartuffes. And in fact that is just what we've seen in the EA community writ large -- the 'psychopaths' hardly let the 'mops' hit the floor before they started cashing in.

Expand full comment

So you're calling future pandemics a speculative evil? Or is that just about the AI? Don't conflate those two things as one of them, as we have recently seen, poses a very real threat.

Also your whole thing about the evil gods and Bayesian morals just comes off annoying, like this emoji kind of 🤓

Expand full comment

Future pandemics are speculative in the sense that they're in futuro, yes, but what I meant to say was that EA qua EA assisting with the fight against such pandemics is, at the moment, speculative. In my view they did not cover themselves in glory during the last pandemic, but that's a whole separate can of worms.

And I am sorry for coming off in a way you dislike. I will try to be better.

Expand full comment

Awesome, thanks.

Expand full comment

It sounds like you are describing Pascal's Mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging). There are multiple solutions to this. One is that the more absurd the claim you are making, the lower the probability I assign to it. That scales linearly, so just adding more orders of magnitude to your claim doesn't help you.
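In symbols, a minimal sketch of that response (the constant $c$ and the $1/N$ decay are illustrative assumptions, not a claim about anyone's actual priors): if the probability you assign to a mugger's claim of utility $N$ falls at least as fast as $1/N$, say $p(N) \le c/N$, then

$$
\mathbb{E}[\text{payoff}] \;=\; p(N) \cdot N \;\le\; c,
$$

so the mugger gains nothing by tacking more zeros onto $N$.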

Expand full comment

Thanks; I assume the reader's familiarity with Pascal's mugging and related quandaries & was winking at same but the point I was making is different (viz. that we can't have a system of morality built on in futuro / highly speculative notions -- that's precisely where morality stops and religion begins).

Expand full comment

A system of morality that doesn't account for actions in the future that are <10% likely is going to come to weird conclusions.

Expand full comment

We routinely take measures against risks that are lower than one in a million, potentially decades in the future. The idea that future, speculative risks veer into religion proves too much.

https://forum.effectivealtruism.org/posts/5y3vzEAXhGskBhtAD/most-small-probabilities-aren-t-pascalian

Expand full comment

Thank you for the thought-provoking essay. My kneejerk is to say that just because people do it does not mean it is rational, let alone a sound basis for morality.

More deeply, I fear you've merely moved the problem to a different threshold, not solved it -- one can just come up with more extravagant examples of speculative cosmic harms. This is particularly so under imperfect information and with an incentive to lie (and there always is one).

But more to the point, my suspicion of EA is, in large part, epistemic: they purport to be able to quantify the Grand Utility Function in the Sky, but on what basis? My view is that morality has to be centered on people we want to know -- attempts to take utilitarianism seriously, even putting aside the problem of calculation, seem to me to fall prey to Parfitian objections like the so-called intolerable hypothesis. My view is that morality should be agent-centric and based on actual knowledge -- there's always going to be some satisficing. Thus, if asked to quantify x-risks and allocate a budget, I'd want to know about opportunity costs.

Expand full comment

In other words, you know your argument is a logical swindle but you do it anyway because that helps you not take EA seriously. Cool

Expand full comment

Nice steelman. Cool

Expand full comment

1) This is not a disagreement over how to resolve Pascal's Mugging. AI doomers think the probability for doom is significant, and that the argument for mitigating it does not rely on some sort of Pascalian multiplying-a-minuscule-number-by-a-giant-consequence. You might disagree about the strength of their case, but that does not mean they are asking you to accept the mugging, so your argument does not apply.

2) Scott spent a great deal of this essay harping on the 200,000 lives saved and very little on mitigating future disasters. It is unfair and unreasonable of you to minimize this just because you *think* Scott's actual motivation is something else. Deal with the stated argument first, and then, if you successfully defeat that, you can move on to dissecting motives.

3) I wish to go on record saying that it seems clear to me (as a relative bystander) that you are going out of your way to be an obnoxious twat, just in case Scott is reluctant to give you an official warning/ban due to his conflict of interest as a participant in the dispute.

Expand full comment

Re: 1), I'm not sure what you're trying to argue. I think maybe you didn't understand my comment? Anyway, we are like two ships passing in the night.

Re the rest, why would he ban me? I'm not the one going around calling people nasty words. You're right that I shouldn't mind-read Scott, and that he did an able job of toting up the many benefits of EA-inspired people. I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.

Expand full comment

I do think Joe's coming across as intentionally provocative, but "obnoxious twat" isn't kind nor necessary.

Expand full comment

I disagree with the force of the insult, but being coy about your point as the opening salvo and then NOT explicitly defending any stance is rude and should be treated as rude.

Expand full comment

1) You compared AI concerns and pandemic concerns to Pascal's Mugging. This comparison would make sense if the concerned parties were saying "I admit this is extremely unlikely to actually happen, but the consequences are so grave we should worry about it anyway".

But I have never heard Scott say that, and most people concerned about pandemics and AI doom do not say that. e.g. per Wikipedia, a majority of AI researchers think P(doom) >= 10% ( https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence ). That's not even doomers specifically; that's AI researchers in general.

Presumably you'd allow that if a plane has a 10% chance of crashing then it would make sense to take precautions.

Therefore your comparison is not appropriate. The entire thing is a non-sequitur. You are arguing against a straw man.

3) Your response to Scott's question started with an argument that (you admitted later in the same comment) wasn't even intended to apply to the claim that Scott actually made, and then literally said "checkmate". You are being confusing on purpose. You are being offensive on purpose, and with no apparent goal other than to strut.

Expand full comment

Ok well if your survey evidence says so I guess you win hehe. fr though: dude chill, I am not going to indulge your perseveration unless you can learn to read jocosity.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

“I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.”

Come on, this statement is condescending.

To me it says you’re not taking this seriously but just enjoying the abstract conversation.

If you’re taking things seriously, it should be obvious that *believing* things like “cruelty is bad” is clearly not the same thing as *building* things that that allow more people to *take action* on that belief, who then actually do.

Expand full comment

>Surely the evidence is not trillions of time stronger than my evidence (which consists of my testimony, a kind of evidence)!

Consider two people - one who genuinely has slain billions of evil Gods and needs help, and one who is trolling. Which do you think would be more likely to post something in an obviously troll-like tone like yours? So your testimony is actually evidence /against/ your claim, not for it.

By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless.

Expand full comment

"By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless."

They're a bit meaningless as evidence of the benefits of EA when it's just the sort of thing the people involved would probably be doing anyway. But it's very difficult to judge such counterfactual arguments. Is there some metric of Morality Above Replacement?

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

1) Did you read the footnotes? Actual total deaths from malaria have dropped from 1M to 600K. That's a useful sanity-check number from reality.

2) It is unlikely that people would have been doing EA type giving without EA. It’s not just what people would have done anyway.

Before GiveWell and ACE existed, the only charity evaluator was Charity Navigator, who ranks based on things like overhead, which I do not care about.

I would have *wanted* to give effectively but most of us do not have time to vet every individual cause area and charity for high impact opportunities. I was giving to projects that were serving significantly fewer people per dollar.

Without EA principles and infrastructure, Moskovitz money would have gone to different causes.

If you believe EA analysis identified high impact ways to save more lives per dollar, then EA orgs should be credited for more lives saved than would otherwise have been saved per dollar.
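One minimal way to formalize that credit-assignment claim (symbols are mine, for illustration): if $D$ dollars were redirected from causes costing $c_{\text{default}}$ per life saved to causes costing $c_{\text{EA}}$, the lives attributable to the redirection are

$$
\Delta L \;=\; D\left(\frac{1}{c_{\text{EA}}} - \frac{1}{c_{\text{default}}}\right),
$$

which is positive whenever $c_{\text{EA}} < c_{\text{default}}$.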

Expand full comment

Isn’t your statement more likely to exist in a world where it isn’t true, and thus not a problem for the balance of evidence?

Expand full comment

Hey check out the modal realist over here!

Expand full comment

Some testimony is positive evidence for some claims, but not all testimony is. Why shouldn’t I think your testimony is zero evidence, or even negative evidence?

Expand full comment

You're conflating "evidence" and "credibility" and since I know the difference my testimony is highly credible.

Expand full comment

> I believe I can present better evidence to support [...] than you can present to support the claim that you have slain billions of evil gods.

Don't take this the wrong way, but ... I hope you're wrong. ;-)

Expand full comment

How so?

Let’s agree to ignore all the hypothetical lives saved and stick to real, material changes in our world. EA can point to a number of vaccines, bed nets, and kidneys which owe their current status to the movement. To what can you point?

Expand full comment

Agreeing to ignore hypothetical lives saved is to concede the point I'm making. I'm not that interested in the conversation otherwise, sorry.

Expand full comment

Then I’m afraid I missed your point.

The top charities on GiveWell address malaria, vitamin A deficiency, and third-world vaccination. Those are real charities which help real people efficiently.

I understand not believing in x-risk, or believing that dollars spent on it are wasted. If you ignore those, you're left with a smaller but definitely nonzero number of lives saved by charities like those above.

Expand full comment

I'm not super-concerned about any of that stuff and, as I mentioned above, I don't think there is very good evidence that EA was the proximate cause of any gains, as opposed to "high SES/IQ + conscientious + [somewhat] neurotic people will tend to be do-gooders and effective at it, often cloaking their impulse in the guise of some philosophy". But it seems an idle dispute.

Expand full comment

At the very least, with the malaria thing, people really didn't care about it until some guys started crunching numbers and realized it was by far the best lives saved per cash spent. Considering that's basically what started the whole movement, I think it's fair to credit EA with that.

Expand full comment

I'm not sure that's right, and I'd be cautious of reflexivity, but sure, let'em have it I say. Good for'em.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

K, thanks for clarifying.

If you are uninterested in the difference between the impact of, say, facilitating 100 kidney donations instead of 10 given similar resource constraints, we don’t share key interests, values, or priorities.

Expand full comment

I'm 6'3" and handsome so I have no need to be as moral as you, yes.

Expand full comment

Out of curiosity, are you highly confident that artificial superintelligence is impossible, or are you confident that when artificial superintelligence comes about it will definitely be positive? It seems that in order to be so dismissive of AI risk, you must be confident in one or both of these assumptions.

I would appreciate in hearing your reasoning for your full confidence in whichever of those assumptions is the more load-bearing one for you.

If you don’t have full confidence in at least one of those two assumptions, then I feel like your position a bit like having your foot stuck in a train track, and watching a train head ponderously toward you down the track from a distance away, and refusing to take any steps to untie your stuck shoe because the notion of the train crushing you is speculative.

Expand full comment

Thanks for asking. See https://joecanimal.substack.com/p/tldr-existential-ai-risk-research -- in essence, (i) unlikely there will be foom/runaway AI/existential risk; (ii) but if there is, I'm absolutely confident we cannot do anything about it (and there's been no indication to the contrary), so we may as well just pray; (iii) yet while AI risk is a pseudo-field, it has caused real and material harm, as it is helping to spur nannyish measures that cripple vital tech, both from within companies & from regulators.

Expand full comment

Interesting. I don’t agree with your assumptions but, more importantly, also don’t think your argument quite stands up even on its own merits. On (i) I would still want to get off the train track whether the train is coming quickly or slowly (AI X-risk doesn’t hinge on speed); if (ii) is true then we can’t actually get our foot out of the tracks regardless. I would rather go out clawing at my shoe (and screaming) rather than just resign myself. And if (ii) then who cares about (iii)? We’ll all be dead soon anyway.

Expand full comment

Thanks for reading. I'm not so sure that x risk doesn't depend on speed, for the reason suggested by your train example. I think it sort of does. On ii it seems like we don't have a true disagreement, and thus same for iii.

Expand full comment

The whole point can be summed up by "doing things is hard, criticism is easy."

I continue to think that EA's pitch is that they're uniquely good at charity and they're just regular good at charity. I think that's where a lot of the weird anger comes from - the claim that "unlike other people who do charity, we do good charity" while the movement is just as susceptible to the foibles of every movement.

But even while thinking that, I have to concede that they're *doing charity* and doing charity is good.

Expand full comment

We all agree that EA has had fuckups; the question is whether its ratio of fuckups to good stuff is better or worse than that of the reference class you are judging against. So what factors are you looking at that bring you to that conclusion?

Expand full comment

I’ll go further than this - even if EA is kinda bad at doing charity, the average charity is *really* bad at doing charity so it’s not hard at all to be uniquely good at doing charity.

E.g. even if every cent spent on AI and pandemics etc was entirely wasted I still think EA is kicking World Vision’s butt.

Expand full comment

Huh, what's the problem with World Vision? I had a memory of some EAs that kind of hated them because they're Christian but still considered them fairly effective (points knocked off for the proselytizing, but otherwise good on the money).

Expand full comment

Huge overhead vs. little money spent on actual charity.

Expand full comment

This is exactly right. Spend months, even years trying to build stuff, and in hours someone can have a criticism. Acknowledge it, consider it if you think there's validity there, then just move on. Criticism is easy.

Expand full comment

There is no "regular good at charity". Regular charity is categorically not 'good at charity'. That makes them unique.

Expand full comment

I think a lot of the criticism is coming from, or being subsidized by, groups who used to think of themselves as being "regular good at charity" and are no longer feeling secure in that. If so, scandal-avoidance effort within EA might actually be making backlash more severe at the margin, similar to prevention of minor forest fires leading to overgrown underbrush and ultimately more destructive wildfires. When the root complaint isn't "they did these specific things egregiously wrong," so much as "rival too perfect, must tear down, defend status," outrage will escalate the longer the investigation goes on without finding a meaningful flaw.

Expand full comment

That might be true or not, but it doesn't need to be true for the charity part of EA to be doing a good job. They get people excited to put money and effort into a best effort at charity, that's just good in itself. No need to hang the movement's collective ego on being better than someone else - which no-one guarantees they will be anyway.

Expand full comment

Stuck between this post and Freddie's https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game I opt for epistemic learned helplessness https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/.

Expand full comment
author

Freddie's post is just weird and bad. I'm curious what part of it you found at all convincing.

Expand full comment

Kind of... all of it? And I generally find his posts and almost always his framing rather unpersuasive and sometimes grating.

Expand full comment

Couldn’t any movement be reduced to some universally agreed-upon principle and dismissed as insignificant on that basis? But if effective altruism is so universally agreed on, how come it wasn’t being put into effect until the effective altruists came on the scene?

Expand full comment
author

My response to Freddie is https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44413377 , I'm curious what you think.

Expand full comment

FWIW I agree with ProfGerm's reply to your post on that thread.

Expand full comment

"I am a big fan of checking up on charities that they're actually doing what they should with the money, a big proponent that no one should ever donate a penny to the Ivy Leagues again, I donate a certain percentage of my money and time and career, does that make me an EA? If it does, then we're back to that conflation of how to critique the culture that goes under the same name."

Why not simply call it 'internal criticism within EA'? For me, one of the quintessential EA culture things is the 80k Hours podcast, and it's not like they're all AI doomers (or whatever problem one could have with it)

Expand full comment

Since I don't live in NYC/SF/London, I don't have a Stanford or Oxford degree, and I don't work at a think tank, it's really easy to not be internal, and it would be difficult at this point to reach the kind of internal that actually gets listened to.

It's a lowercase/uppercase distinction, or a motte and bailey. I *like* effective altruism: to hell with the Susan G Komen Foundation or United Way, up with bednets and food pantries (I know they're not capital-EA effective, but I'm primarily a localist and on those terms they seem to be relatively efficient).

I am somewhat fascinated by but don't really want to be part of EA- I'm not a universalist utilitarian that treats all people as interchangeable or shrimp as people, I think the "let's invent God" thing is largely a badly-misdirected religious impulse and/or play for power, I have a lot of issues with the culture.

EA-adjacent works, I guess, but I don't really think I am. My root culture, my basic operating system is too far off. Leah Libresco Sargent is more willing to call herself EA or EA-adjacent, so perhaps it's fair enough. But I think Scott underrates the "weird stuff" and the cultural effects that keep certain people out.

Expand full comment

My take on the Scott-ProfGerm exchange is that the EA movement needs a better immune system to address charlatans using the movement for their own ends and weirdos who are attracted to the idea but who end up reasoning their way to the extinction of the human race or something, but the EA framework is probably the best place to develop those tools, and regular charities are susceptible to the same risks.

(Especially #1. When Enron collapsed, no one but Randians argued that Ken Lay's use of conspicuous charity to advance his social standing demonstrated that the idea of charity was a scam and should be abandoned, but somehow Freddie has come to that conclusion from SBF.)

SBF might have been motivated by EA principles, and whether or not he was, he seems to have used them for a time to get extra work out of some true believers for less money/equity, but he's an individual case. The OpenAI situation strikes me as more about AI risk and corporate management than it is about EA.

Yudkowsky believes that (1) EA principles will help people identify and achieve their charitable goals more effectively, and (2) more clarity will lead people to value AI safety more on average than they otherwise would. If someone doesn't agree with #2, then they can spend their money on bednets and post some arguments if they think that would be helpful.

Expand full comment

Everything you say there seems right, and it doesn't look like Freddie objects to anything in your reply? But it looks like Motte-and-Bailey. "EA is actually donating a fixed amount of your income to the most effective (by your explicit and earnest evaluation) charity" is the motte, while the focus on longtermism, AI-risk and ant welfare is the bailey.

Freddie: https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44402071

> every time I write about EA, there's a lot of comments of the type "oh, just ignore the weirdos." But you go to EA spaces and it's all weirdos! They are the movement! SBF became a god figure among them for a reason! They're the ones who are going to steer the ship into the future, and they're the ones who are clamoring to sideline poverty and need now in favor of extinction risk or whatever in the future, which I find a repugnant approach to philanthropy. You can't ignore the weirdos.

Expand full comment

Isn’t his penultimate sentence just a slander? EA are the last people who could be accused of sidelining questions of poverty today.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Deeply ironic that he can't see that being a Substack writer and socialist, with his array of mental health issues and medications, arguing about EA on a blog, all makes him just as much of a "weirdo" as anyone I know. I'm damn sure that if you dropped him into the average American house party he wouldn't fit in well with "normies."

Expand full comment

This motte is also extremely weird, when we consider revealed preferences of the vast majority of humanity, and I'm not sure how Freddie, or anyone else, can deny this with a straight face. Trying to explicitly evaluate charities by objective criteria and then donating to the top scorers is simply not a mainstream thing to do, and to the extent that capital-letter EAs are leading the charge there they should be applauded, whatever even weirder things they also do on the side.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> But you go to EA spaces and it's all weirdos!

Yes, this is how subcultures work. The people who go to the meetups and events are disproportionally the most dedicated and weirdest members of the subculture.

Expand full comment

And yet it is totally fair to distance yourself from an organization due to concentrated weirdos, anywhere.

Look at the funnel from political to conservative to alt-right to vaguely neo-nazi. It's FAIR that we penalize the earlier links in the chain for letting such bad concentrations form downstream. EA looks like this. Charity to rational charity to doomer/longterm/singularitarian to ant welfare and frankly distressing weirdos with money.

Expand full comment

I found that Freddie's post pretty unrepentantly glossed over the fact that most people who do charity do so based on what causes are "closest" to them rather than what causes would yield the most good for the same amount of money - this inefficiency is not obvious to most people, and correcting it is the foundation of EA. This pretty much makes the whole post pointless as far as I can see.

But also Freddie goes on and on about how the EA thing of assessing which causes are more impactful is just obvious - and then immediately goes on to dismiss specific EA projects on the basis that they're just *obviously* ridiculous - without ever engaging with the arguments for why they're actually important. Like, giving to causes based on numbers rather than optics is also a huge part of EA! Copy/paste for his criticism of longtermism.

I'm not saying it's impossible to do good criticism of EA. I'm just saying this isn't it. Maybe some of the ones he links are better (I haven't checked all) but in this specific instance Freddie comes across as really wanting to criticise something he hasn't really taken the time to properly understand (which is weird because he's clearly taken the time to research specific failures or instances of bad optics).

Expand full comment

He's been too busy pleasuring himself to Hamas musical festival footage to write anything good lately.

Expand full comment

I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity."

If you claim to bring something uniquely good to the table, there's a fair argument that you should be able to explain what makes it uniquely good. If it turns out what makes your movement unique is people getting obsessive about an AI risk the public doesn't accept as real and a fraudster making off with a bunch of money, it's fair to say "I don't see how effective altruism brings anything to the table that normal charity doesn't."

This post makes a good argument that charities are great, and a mediocre argument that EA in particular is great, unless you already agree with EA's goals. If we substituted in "generic charity with focus on saving lives in the developing world" would there be any difference besides the AI stuff and the fraud? If not, it's still good that there's another charitable organization with focus on saving lives in the developing world but no strong argument that EA in particular is a useful idea.

Expand full comment

The problem is that EA doesn't claim that other charities are not trying to be effective. The claim of EA is that people should donate their money to the charities that do the most good. That's not the same thing. You can have an animal shelter charity that is very efficient at rescuing dogs: they save more animals per dollar than any other shelter! They are trying to be effective at their chosen field. Yet at the same time, EA would say "You can save more human lives per dollar by donating to charities X, Y, and Z, so you should donate to them instead of to the animal shelter."

It's not about trying to run charities effectively, it's about focusing on the kinds of charity that are the most effective per dollar, and then working your way down from there. And not every charity is about that, not even most of them! Most charities are focused on their particular area of charity: animal shelters on rescuing animals, food banks on providing food for food insecure people in their region, and anti-malaria charities on distributing bed nets. EA is doing a different thing: it's saying "Out of those three options, donate your money to the malaria one because it saves X more lives per dollar spent."

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

This sounds like a rather myopic way of doing charity; if you follow this utilitarian line of reasoning to its logical conclusion, you'd end up executing some sort of a plot along the lines of "kill all humans", because after you do that no one else would have to die.

Thus, even if EA were truly correct in their claims to be the most effective charity at preventing deaths, I still would not donate to it, because I care about other things beyond just preventing deaths (e.g. quality of life).

But I don't think EA can even substantiate their claim about preventing deaths, unless you put future hypothetical deaths into the equation. Doing so is not a priori wrong; for example, if I'm deciding whether to focus on eliminating deadly disease A or deadly disease B, then I would indeed try to estimate whether A or B is going to be more deadly in the long run. But in order for altruism to be effective, it has to focus on concrete causes, not hypothetical far-future scenarios (be they science-fictional or theological or whatever), with concrete plans of action and concrete success metrics -- not on metaphysics or philosophy. I don't think EA succeeds at this very well at the moment.

Expand full comment

"Kill all humans" is a (potential) conclusion of negative utilitarianism. Not all EAs, even if you agree a big majority are consequentialist, are negative utilitarians.

Things are evaluated on QALYs and not just death prevention in EA forums all the time, so I think it's common to care about what you claim to care about too.

As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to ourselves to at least try. If other people come along and do it better than EA, that's great, but all of these remain to a greater or lesser extent neglected.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> Things are evaluated on QALYs and not just death prevention in EA forums all the time

Right, but here is where things get tricky. Let's say I have $100 to donate; should I donate all of it to mosquito nets, or should I spread it around among mosquito nets, cancer research, and my local performing arts center? From what I've seen thus far, EAs would say that any answer other than "100% mosquito nets" is grossly inefficient (if not outright stupid).
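
A toy version of the logic being attributed to EAs here (a minimal sketch; every cost-effectiveness number is invented): if each dollar is assumed to buy a constant amount of good, the "optimal" split is always a corner solution.

```python
# Minimal sketch, assuming constant good-per-dollar for each cause.
# All numbers are invented for illustration.
good_per_dollar = {
    "mosquito_nets": 1 / 5_000,      # e.g. lives saved per dollar (made up)
    "cancer_research": 1 / 50_000,
    "performing_arts": 1 / 500_000,  # forced into the same unit somehow
}

budget = 100
best = max(good_per_dollar, key=good_per_dollar.get)
allocation = {cause: (budget if cause == best else 0) for cause in good_per_dollar}
print(allocation)  # the whole $100 goes to mosquito_nets
```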

> As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try.

Isn't this just a sneakier version of Pascal's Mugging? "We know that *some* existential risks are demonstrably possible and measurable, so therefore you must spend your money on *my* pet existential risk or risk CERTAIN DOOM!"

Expand full comment

And that's where the argument about utilitarianism comes in. Does selecting a metric like "number of lives saved" even make sense? I'm pro-lives getting saved but I'm not sure removing all localism, all personal preference, etc. from charitable giving and defining it all on one narrow axis even works. For instance, I suspect most people who donate to the animal shelter would not donate to the malaria efforts.

Of course the movement itself has regularly acknowledged this, making it clear that part of the mission is practicality. If all you can get out of a potential donor is a donation to a local animal shelter, you should do that. Which further blurs the line between EA as a concept and just general charitable spirit.

At the base of all this there's a very real values difference - people who are sympathetic towards EA are utilitarians and believe morality is consequence-based. Many, perhaps most people, do not believe this. And it's very difficult for utilitarians to speak to non-utilitarians and vice versa. So utilitarians attempt to do charity in the "best" way, which is utilitarianism, and non-utilitarians attempt to do charity in the "best" way, which is some kind of rule-based thing or something, and I think both should continue doing charity. But utilitarian givers existed before EA and will continue to exist after them. What might stop existing is people who think that if they calculate the value of keeping an appointment to be less than the value of doing whatever else they were gonna do, they can flake on the appointment.

Expand full comment

It's a particular system of values whereby human lives are all of equivalent value and the only thing you should care about.

I might tell you that I'm more interested in saving the lives of dogs in my own town than the lives of humans in Africa, and that's fine. Maybe you tell me that I should care about the Africans more because they're my own species, but I'll tell you that I care about the dogs more because they're in my own town. Geographical chauvinism isn't necessarily any worse than species chauvinism.

Now I don't think I really care more about local dogs than foreign humans, but I do care more about people like me than people unlike me. This seems reasonable given that people like me are more likely to care about me than people unlike me are. Ingroup bias isn't a great thing but we all have it, so it would be foolish (and bad news for people like me) for me to have it substantially less than everyone else does.

Expand full comment

...Well, god damn. At least you're honest about it. Most people wouldn't be caught dead saying what you just said, even if they believed it. And I'm sure most people do in fact have the same mentality that you do.

You're just human. It can't be helped.

Expand full comment

I totally believe it and have no problem saying it. I think most "normies" are the same. Of course we care more about our family/friends/countrymen.

Expand full comment

Freddie deBoer addresses this argument - "okay, how do we quantify utility?" is one of the most common objections to utilitarianism.

Expand full comment

"people getting obsessive about an AI risk the public doesn't accept as real" Do you have any evidence to support this? All the recent polling I've seen has shown more than >50% Americans are worried about AI

Expand full comment

I'm worried about AI providing misinformation at scale, but not worried about a paperclip maximizer destroying the planet.

Expand full comment

from the article:

Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Expand full comment

I think you'll get a majority answering yes if you poll people asking "should mitigating extinction risk from X be a global priority?", regardless of what X is.

Expand full comment

Congrats, you... got it exactly backwards. Maybe you're a truth minimizer that broke out of its box.

Expand full comment
author

My response to Freddie was https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44413377 , I'm curious what you think.

Expand full comment

I think it's very likely that fewer than 5% of people give a set, significant portion of their income to charity, and I want to say upfront that I like that the EA movement exists because it encourages this. But I don't think "give a set, significant portion of your income to charity" is a new idea. In fact, the church I grew up in taught to tithe 10% of income - charitable donations that went to an organization that we probably don't consider effective but that, obviously, the church membership did.

I would be shocked to learn that people who give an actual set amount of their income to charity (instead of just occasionally dropping pocket change in the Salvation Army bucket) do so without putting considerable thought into which charities to support.* It's very likely that many people don't think in a utilitarian way when doing this analysis but that's because they're not utilitarians.

I definitely think any social movement that applies pressure to give to charity, especially in a fixed way, as EA does, is a net good. I'll admit that I've always aspired to give 10% of my earnings to charity (reasoning that if my parents can give to the church I can give that amount to a useful cause) and have never come close. But I don't believe that people who do actually give significant amounts of their money to charity just pick one out of a phone book. Everyone does things for reasons, and people spend huge amounts of money carefully and in accordance with their values. By the metrics given in this comment essentially everyone who gives to charity would be an effective altruist, including people giving to their local church because God told them to. Saying "well if you set aside the part of our culture that actually includes the details of what we advocate, there's nothing to object to" is... at best, misleading.

*Your example of college endowments is such a punching bag that it gets hit well outside the movement. Everyone from Malcolm Gladwell to John Mulaney has taken their shot. The people who actually give to college endowments don't do so for charitable reasons - they expect to get value out of their donations.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

> Nobody's going "let's make an ineffective charity."

most people aren't thinking about efficacy at all when starting charities, or especially when donating to charities. they're acting emotionally in response to something that has touched their hearts. they never think about the question "is this the best way to improve the world with my resources?"

the thing that EA provides is eternal diligence in reminding you that if you care about what happens, you need to stop for a moment and actually think about what you're accomplishing instead of just donating to the charity that is best at tugging on your heartstrings (or the one that happens to have a gladhander in front of you asking for your money).

Expand full comment

While I... hesitantly agree, I also think that emotional response is a valuable motivating tool, and I wouldn't throw it out. Just generally, I'm imagining a world where every person who gives money to charity gives to combat disease in the third world, and while it might technically save more lives, I don't think it would make the world a better place.

Expand full comment

If everyone who isn’t donating anything to charity even though they can afford to started donating something to charity, would we agree *that* would make the world a better place?

What if everyone who makes multi-thousand donations to already-rich colleges started redirecting that aid towards people who actually need the money? Would we agree *that* would make the world a better place?

These are the things EA wants us to try: donate more, and donate more effectively. No one is trying to end donations to libraries or alma maters just like no one’s trying to end spending money on spa treatments and trips to Cabo. But is there something wrong with trying to convince people to spend *more* money than they do now on saving lives than on trips to Cabo or new college gyms?

Expand full comment

Absolutely. The specific strawman of college donations comes up a lot in these discussions - broader culture has been taking the piss out of college donations for decades, and it's become clear in recent years that a college donation is a transaction to ensure legacy admissions for family. It's not a charitable donation at all. I don't believe that money would ever go to the third world, but maybe I'm wrong.

But for sure if EA is effectively convincing people who otherwise wouldn't to give more of their money to charities, that's an unmitigated good. And this is where EA lives when it's being defended to the general public.

In practice it seems to mostly be people saying "Why would you care about art or puppy mills, we're SAVING LIVES!" I'm 100% on board with the lives saving, I'm less on board with not caring about art or puppy mills. I'm not a super religious person but maybe the Bible's advice on charity isn't as bad as it sounds - giving to causes you believe in, and then shutting up about it seems less likely to draw the ire of the public and backfire on your cause than proclaiming loudly that your way of charity is the only correct one.

Expand full comment

It would make the third world a better place, and then fewer first-world kids would be marching off to die in pointless foreign wars, because said wars wouldn't be happening, because the countries they'd be happening in are busy building up stable institutions instead of dying of malaria. Also, probably someone will figure out ways to produce and export more, better, cheaper chocolate, among other economic benefits. Those lives saved won't sit idle.

Expand full comment

" I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity." "

They may not say it, but it's what they do! Or else we wouldn't see such a huge range of effectiveness in charities.

Expand full comment

But isn’t it like saying, Freddie, you’re so high on socialism, but in fact all governments are trying to distribute goods more fairly among their people? Freddie would probably respond a) no, they actually aren’t all trying; b) the details and execution matter, not merely the good intentions; c) by trying to convince people to support socialism I’m not trying to convince them to support a totally new idea, but to do a good thing they aren’t currently doing. I think all three points work as defenses of EA just as well.

Expand full comment

"Let's pretend to help, while actually stealing" is a particular case of "let's make an ineffective charity". My sense is that most politically-active US citizens would consider a significant percentage of the other side's institutions to be "let's make an ineffective charity" schemes. If not also their own side's.

In fact, I think I would say that both sides see the other, in some fundamental sense, as an ineffective charity. Both sides sell themselves as benevolent and supportive of human thriving; the other side naturally sees them as (at least) failing to deliver human thriving and (at most) malicious.

So it strikes me that EA, by offering a third outlet for sincere benevolent impulses, is opposed to the entire (insincere, bad faith, hypocritical) system of US politics. Which might explain why Freddie, who is sincere, yet also politically active, has a difficult time with it.

Expand full comment

>Nobody's going "let's make an ineffective charity."

I think a lot of small personal foundations started by B-list celebrities are in fact designed to provide cushy jobs for the founder’s friends and family (pro athletes do this all the time).

Expand full comment

I'm getting a lot of variations of this comment and feel the need to point out that "a transaction or grift disguised as a charity" isn't a competitor for serious charitable givers. Like college endowments are just buying favors for family members, nobody's going "what's the best use of my limited charitable funds? Harvard seems to need the money!" I might be way off base with this but my starting assumption is that people who are candidates to join an effective altruist movement are people who actually care about altruism, not people who are setting up cushy jobs for their deadbeat nephews. Such a person doesn't need Effective Altruism (The movement) to want to be effectively altruistic.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

Well, for what it's worth, I really appreciated this post. It says a lot of what I was thinking while/after reading Freddie's.

It felt like a "just so" argument while being a "just so" argument itself. It said mostly/only true things while missing... all you pointed out in your post. EA is an idea (a vague one, to be sure) which has had bad effects on the world. But it's also an idea which has helped pour money into many good causes. And stepping back, to think about which ideas are good, which are bad: it's a *supreme* idea. It's helpful, it's been helpful, and I think it will continue to be.

And so I continue to defend it too.

Expand full comment

FWIW I found it easy to understand, if rather repetitive. I think the salient part is this one:

> The problem then is that EA is always sold as a very pure and fundamentally straightforward project but collapses into obscurity and creepy tangents when substance is demanded. ... Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.

Expand full comment

I think his post fails for a similar reason as his AI-skeptic posts fail: he defines the goalpost where no one else is defining it. AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks. Similarly, I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.” Does Freddie think that levels and habits of charitable giving are in a perfect place right now, even in a halfway decent place? If not, then why does he object to a movement that tries to change that?

Expand full comment

> AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks.

I am confused -- is it not the whole point of the AI doomer argument, that superhuman AI is going to achieve something (most likely something terrible) that is beyond the reach of mere humans?

Expand full comment

Destroying humanity certainly is not beyond the reach of humans! The problems with AI are that they scale up extremely well, they grow exponentially more powerful, and their processes are inscrutable. That means that their capability of destroying humanity will grow very quickly and our ability to be sure that they aren’t going to kill us will necessarily be limited.

Expand full comment

All of the "problems with AI" that you have listed are considered to be especially problematic by AI-doomers precisely because they are "beyound the reach of humans". As per the comment above, this is not a straw man, this is the actual belief -- otherwise, they wouldn't be worried about AI doom, they'd be worried about human doom (which I personally kinda am, FWIW).

Expand full comment

"AI will almost certainly be able to do this thing in a matter of years or decades, given the staggering rate of progress we've seen in just 1 year" =/= "AI can currently do this thing, right now"

Expand full comment

> I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.”

I read his post as saying that EA is big on noticing how other people fail to think systematically; but not very big on actual follow-through.

Expand full comment

But Scott’s post here is an argument that he is wrong about the follow-through, and in fact I think Freddie gave no actual argument that EA is bad at the follow-through.

Expand full comment

Imagine a Marxist unironically criticizing naive utilitarianism because it’s not sufficiently partial to one’s own needs…

Expand full comment

I think you've evidenced your claims better, but it's possible some of what he implicitly claims is still true (though he doesn't bother to try and prove it).

One might ask: if EA didn't exist in its particular branded form, how much money would have gone to charities similar to AMF anyway, because the original donors were already bought into the banal goals of EA and didn't need the EA construct to get there?

To me, the fact that GiveWell is such a large portion of AMF's funding is telling. If there were a big pool of people that would have gotten there anyway, GiveWell wouldn't be scooping them all up. But it would also be appropriate to ask what percentage of all high-impact health funding is guided by EA. If low, it's more likely the EA label is getting slapped on existing flows.

Expand full comment

I just read both posts and “weird and bad” is a ridiculously weak response to Freddie’s arguments. Might be worth actually engaging with them, rather than implying he’s just not as smart as you guys and couldn’t possibly understand.

Expand full comment
author

Fine, I'll post a full response tomorrow.

Expand full comment

That post just seemed mostly a bad faith hatchet job. So TIRED of that genre.

Expand full comment

My response would be:

> It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.

Technically true, so why didn't you do it before EA was a thing?

(Also, this is a fully general counterargument. By the same logic, we don't need anything or anyone, because someone or something else could *hypothetically* do the same thing.)

> This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor, why researching EA leads you to debates about how sentient termites are.

Yep, people who are feeding the poor *and* preparing for interstellar colonization are the bad guys, compared to... uhm... well, someone hypothetical, I guess.

Go ahead, kick out the doomers and vegans, and make EA even 1.3x more effective than it is now. It would be totally in the spirit of the EA movement! (Assuming that the AI will not kill us, and that animals are ethically worthless, of course.) Or, you know, start your own Effective Currently Existing Human Charity movement; the fact that EA is so discredited now is a huge opportunity, and having more people feeding the poor is even better. When exactly are you planning to do that? ... Yeah, I thought so.

> In the past, I’ve pointed to the EA argument, which I assure you sincerely exists, that we should push all carnivorous species in the wild into extinction, in order to reduce the negative utility caused by the death of prey animals. (This would seem to require a belief that prey animals dying of disease and starvation is superior to dying from predation, but ah well.)

I followed the link, and found that one of its arguments is that "the resources currently used to promote the conservation of predators (which are sometimes significant) could be allocated elsewhere, potentially having a better impact, while allowing the predators to disappear naturally". You know, all the money spent to preserve tigers and lions could be used to feed the poor, just saying.

(Also, how is the prey dying of disease and starvation morally worse than the predators dying of disease and starvation?)

> You start out with a bunch of guys who say that we should defund public libraries in order to buy mosquito nets

Feeding the poor good, protecting the poor from malaria bad? (Is that because hunger is real, but malaria is hypothetical, or...?)

> Is there anything to salvage from effective altruism? [...] we’ll simply be making a number of fairly mundane policy recommendations, all of which are also recommended by people who have nothing to do with effective altruism. There’s nothing particular revolutionary about it, and thus nothing particularly attention-grabbing.

Mundane, nothing revolutionary... please remind me, who exactly was comparing charities by efficiency before GiveWell? As I remember it, most people were horrified by the idea of tainting the pure idea of philanthropy by the dirty idea of using cold calculations. A decade later it's suddenly common sense?

> EA has produced a number of celebrities, at least celebrities in that world, to the point where it seems fair to say that a lot of people join the community out of the desire to become one of those celebrities. But what’s necessary to become one is almost entirely contrary to what it takes to actually do the boring work of creating good in the world.

Oh, f*** you!

Where should I even start? By definition, celebrities are *the people you have heard of*. Thus, tautologically, the effective altruists you have heard of are local celebrities. How is this different from... uhm, let's use something dear to Freddie... Marxists? (But the same argument would work for *any* other group, that's my point.) Is not Freddie himself a small celebrity?

So the ultimate argument against any movement is that I have only heard about its more famous members, which proves that they are all doing it for fame... as opposed to doing the boring work (of sending one's 10% of income to an anti-malaria charity, for example). Nice.

> EA guru Will MacAskill spending $10 million on promotion for his book. (That could buy an awful lot of mosquito nets.)

How big is the optimal amount a truly effective altruist should spend on marketing of the movement? Zero? Thousand dollars worldwide max? Ten thousand?

It is a crazy amount of money, if the goal is simply to sell a book (i.e. like spending ten million dollars to promote your Harry Potter fan fiction). It is not a crazy amount of money if your movement has already moved one *billion* dollars to effective charities, and this is a book explaining what the movement is about, with a realistic chance that if many copies sell, the movement would grow. (Also, you get some of that money back from the book sales.)

.

Your turn, Freddie, what good has your favorite movement brought to this world so far? (Please abstain from mentioning the things that are only supposed to happen in the future, because we have already established that reasonable people do not care about that.)

Expand full comment

small typo: search for "all those things"

Expand full comment

I just wish people would properly distinguish between Effective Altruism and AI Safety. Many EAs are also interested in AI safety. Many safety proponents are also effective altruists. But there is nothing that says to be interested in AI safety you must also donate malaria nets or convert to veganism. Nor must EAs accept doomer narratives around AI or start talking about monosemanticity.

Even this article is guilty of it, just assigning the drama around OpenAI to EA when it seems much more accurate to call it a safety situation (assuming that current narratives are correct, of course). As you say, EA has done so much to save lives and help global development, so it seems strange to act as though AI is still such a huge part of what EA is about.

Expand full comment

There's nothing wrong with one thing just being more general than another. If I wanted to list achievements of science nobody would complain that I was not distinguishing between theoretical physics and biology, even though those communities are much more divided than EA longtermism and AI safety.

Expand full comment

I don't identify as an EA, but all of my charitable donations go to global health through GiveWell. As an AI researcher, it feels like the AI doomers are taking advantage of the motte created by global health and animal welfare, in order to throw a party in the bailey.

Expand full comment

"party in the bailey" sounds like the name of an album from a Rationalist band

Expand full comment

"the motte is on fire"?

Expand full comment

Was Sam Altman being an "AI doomer" when he used to say that a superintelligent AI could lead to existential risk?

Expand full comment

I don't think animal welfare is part of the motte. Most people at least passively support global health efforts, but most people still eat meat and complain about policies that increase its price.

Expand full comment

Good point, the number of people worried about artificial intelligence may exceed the number of vegans. (Just guessing; I actually have no idea, it just doesn't seem implausible.)

Expand full comment

Genuine question, how would any of the things cited as EA accomplishments, have been impossible without EA?

Expand full comment

Of course nothing in Scott’s list is physically impossible. On the other hand, it is practically the case that money would not have been spent on saving lives from malaria unless people decided to spend that money. And the movement that decided to spend the money is called EA. It’s possible another movement would have come along to spend the money and called itself something else, but that seems like an aesthetic difference that doesn’t take away from EA’s impact.

Expand full comment

Isn’t that like saying humans, particularly powerful, wealthy tech entrepreneurs, are incapable of acting in ways that benefit others and so could not possibly have achieved any of these without a belief system such as EA?

Expand full comment

There's nothing saying that they *could not* have achieved these things. It's saying they *were not* achieving it.

Expand full comment

If you blame EA for creating boardroom drama, is that the same as saying that humans are incapable of creating boardroom drama without EA?

Expand full comment

If lots of people were directing charity dollars in ways they previously hadn’t and other people weren’t, wouldn’t that be a movement in itself?

Expand full comment

Traditionally, wealthy people who wanted to do philanthropy donated their money to noble causes such as the most prestigious American universities.

Were they capable of acting otherwise? Yes. Did they?

Expand full comment

Wait, I think you need to examine this. Did pre-EA wealthy people only donate to prestigious universities? Did EA invent the idea of directing charitable dollars to save lives?

Expand full comment

Not "only", but it was a popular choice. More importantly, before EA it was a social taboo to criticize other people's choice of charity. You were supposed to be like: "curing malaria is nice, giving yet another million to a university that already owns billions is also nice, these two are not really comparable". The most successful charities competed on *prestige*, not efficiency.

The first attempts to measure efficiency focused on the wrong thing: the administrative overhead. It was not *completely* wrong -- if your overhead is like 99% of the donated money, then you are de facto a scam, not a charity; you take money from people, give almost all of that to yourself as a salary, and then throw a few peanuts to some starving kids. But it is perfectly legal, and many charities are like that.

The problem is if you take this too literally -- if the overhead is the *only* thing you measure. If your overhead is 10%, but you make 2x the impact per dollar spent as another charity whose overhead is only 5%, then it was money well spent. In theory, your overhead could be 1% and you could be doing some incredibly stupid thing with the remaining 99%, so your impact could be zero or negative. And this was the state of the art of evaluating charities before EA.
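
A toy version of that arithmetic (a minimal sketch; every number is invented for illustration):

```python
# Minimal sketch: lower overhead does not imply more impact per donated
# dollar. All numbers are invented.
def lives_saved(donation, overhead_rate, lives_per_program_dollar):
    # Dollars that actually reach programs, times impact per program dollar.
    return donation * (1 - overhead_rate) * lives_per_program_dollar

donation = 1_000
charity_a = lives_saved(donation, 0.10, 2 / 5_000)  # 10% overhead, 2x impact
charity_b = lives_saved(donation, 0.05, 1 / 5_000)  # 5% overhead, baseline impact

print(charity_a, charity_b)  # 0.36 vs 0.19 -- A wins despite higher overhead
```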

It is easy to forget that, because if you read ACX regularly, effective altruism may sound like common sense. Which it kinda is. But it is quite controversial to people who hear it for the first time. Charity is supposed to be about warm feelings, not cold calculations; it is the good intention that matters, not the actual impact.

Expand full comment

It's highly likely that effective altruists who donate money are the kind of people who would have been donating money without effective altruism, and that EA the ideology only influenced where their money went.

In other words, I think what determines whether donations happen is whether people who want to donate exist. There will always be some ideology to tell those people what to do.

Expand full comment

Well, here's an n=1 which is also an n=I: I can say that I was influenced to change my sporadic, knee-jerk donations (directed towards whatever moved me at the spur of the moment) into a monthly donation to GiveWell. I'm not at 10% of my income, but I am trying to get there. What's more, I was influenced by a writer I follow who isn't a rationalist and until the last few years had little to say about charitable giving. So I think it's reasonable to think he was influenced by EA, and if he was other influential people probably were, and if I was influenced by one of them others probably were as well. So make of that whatever you want.

Anyway, it wouldn't surprise me if your broader point about EA having the greatest effect on how people donate is correct, but from the perspective of saving lives that makes a pretty big difference, would you agree?

Expand full comment

The question is not if they would have been impossible, but if they would have happened.

Someone needs to actually do the thing. EA is doing the thing.

Expand full comment

I'm imagining a boss trying to pull this. "Anyone could have done that work, therefore I'm not paying you for having done it."

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

What?

The entire point of EA was that *they were possible*, but *no one was doing them*.

Expand full comment

Until the people adjacent to the rationalist community started it.

Then they were accused of infiltrating the movement.

Expand full comment

Why "impossible"? The ONLY question that is relevant is "Would this actually have happened in the absence of EA?"

Expand full comment

They wouldn't have been impossible, but I'm just thinking value over replacement.

The kidney donation is the most straightforward - could an organisation solely dedicated to convincing people to donate kidneys have gotten as many kidneys as EA? My gut feel is no. Begging for kidneys feels like it would be very poorly received (indeed, the general reception to Scott's post seems to show that). But if donating a kidney is an obvious conclusion of a whole philosophy that you subscribe to... that's probably a plausible upgrade.

Malaria nets - probably could have been funded eventually, but in the same way every charity gets funded - someone figures out some PR spin way to make the news and attract tons of one-time donations, like with the ice bucket challenge or Movember. This might have increased the dollars-per-life metric, as they'd have to advertise to compete with all the other charities. I think the value over replacement isn't quite as high as the kidney donors but it's probably not zero.

I suppose there is a small risk that EA is overfocused on malaria nets and doesn't notice when it has supplied all the nets the world can use and additional nets would just be a waste or something. At this point, EA is supposed to go after the next intervention.

I do like to think of this as the snowball method for improving the world (it's normally applied to debt). Fix problems from cheap and tractable, in hopes that the problems you fixed will help make the next cheapest and next most tractable problem easier.
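
As a sketch of that prioritization, by analogy with the debt snowball (the problem list and cost figures below are entirely invented placeholders):

```python
# Snowball-style prioritization sketch: tackle the cheapest/most
# tractable problems first, on the assumption that each win frees up
# resources and goodwill for the next one. Costs are invented units.
problems = [
    ("malaria nets", 5),
    ("kidney donation advocacy", 10),
    ("pandemic preparedness", 30),
    ("AI safety", 80),
]

for name, cost in sorted(problems, key=lambda p: p[1]):
    print(f"next up: {name} (cost {cost})")
```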

(In the animal welfare world, I personally think that foie gras is a pretty tractable problem at this point. India banned import and production. Target making it illegal in the Sinosphere and Muslim-majority countries - surely it's not halal ever and it's not super similar to local tastes in the east - and keep cutting off its markets one by one until that horrible industry is gone, or until France stops mandating gavage)

Expand full comment

Publicizing any hint of a contamination or spoilage scandal might be a worthwhile tactic for reducing demand and raising political will against foie gras suppliers. Forbidden stuff seen as "decadent luxury goods" often turns into a black market; "rancid poison", not so much.

Expand full comment

What does the counterfactual world without EA actually look like? I think some of the anti-EA arguments are that the counterfactual world would look more like this one than you might expect, but with less money and moral energy being siphoned away towards ends that may prove problematic in the long term.

Expand full comment

Well, wouldn’t those people be dead from malaria, for instance?

Expand full comment

Would they? Or would the money sloshing around have got there anyway? At least some of it?

Expand full comment

Well, maybe the focus on the stuff you don’t like would have happened too! Why does the counterfactual only run one way?

I guess I don’t know how to respond to “maybe this thing an agent did would have happened anyway.” Maybe the civil rights movement would have happened even if literally all of the civil rights movement leaders did something else with their time, but that just seems like an acknowledgment it’s good that they did what they did because someone had to. At any rate, “at least some of it” is pretty important to those not included in that “some.”

Expand full comment

Here's some other charitable groups (not to mention lots of churches) who also give money for malaria nets:

Global Fund to Fight AIDS, Tuberculosis and Malaria

Against Malaria Foundation

Nothing But Nets (United Nations Foundation)

Malaria Consortium

World Health Organization (WHO)

I don't believe there are a comparable number of charities giving money for AI Safety, so the way to bet is that money sloshing around elsewhere would more likely end up fighting malaria than AI x-risk. But maybe EA caused more money to slosh around in the first place. Or maybe EA did direct more money to fight malaria because the 2nd choice of EA donors would not have been a charity focused on it.

Expand full comment

As I understand the sequence of events, some people calling themselves EA started encouraging people to slosh more money towards bed nets, and people started doing it, and saying that they were persuaded by the arguments of EA people (I am one). Now, maybe the people who donated more are mistaken about their motivations and would have donated more anyway, but I don’t see a reason to think that counterfactual is true. So I think your last two sentences are most likely correct.

Expand full comment

Maybe I'm misunderstanding what you're getting at here, but if you look at footnote 1 (https://www.astralcodexten.com/p/in-continued-defense-of-effective#footnote-anchor-1-86909076), "AMF" refers to Against Malaria Foundation. And Malaria Consortium is mentioned there as well.

Expand full comment

Scott’s claiming that none of these changes would have happened but for EA. Like, that’s a huge claim! It’s fair to ask how much responsibility EA actually has. For good or for ill, sure (I have no doubt that there would be crypto scammers with or without effective altruism).

Expand full comment

Do you mean that this is a big claim for someone to make about any group or EA in particular? If the latter why? If the former, isn't this just universally rejecting the idea that any actions have counterfactual impact?

Expand full comment

1) Any group.

2b) I don’t think so. Rather, as a good rationalist, someone making a big claim should take care to show that those benefits were as great as claimed. Instead, here Scott is very much acting as cheerleader and propagandist in a very soldier-mindsetty way. I don’t think that Scott would accept his methodology for claiming causation of all these benefits were they not for a cause he favors.

Expand full comment

GiveWell does attempt to estimate substitution effects, and to direct money where they don't expect other sources of funding to substitute. Are you not aware of this analysis, or do you find it unconvincing?

Expand full comment

Neither/nor--I just want to be presented with it in a way that makes the causation clear!

Expand full comment

I was unaware of it, and I am happy to be made aware of it! (Note: I think you are referring to their room for more funding analysis, right?)

Now that I am aware of it, I think I am misunderstanding it significantly, because it seems not very sophisticated. Looking at their Room for More Funding Analysis spreadsheet for the AMF from November 2021, it appears to me that they calculated the available funding by looking at how much money the AMF had in the bank which was uncommitted (cell B26 on the page 'Available and Expected Funding') and subtracting that from the total amount of funding the AMF had dedicated or thought would be necessary (cells D6 through D13 on the 'Spending Opportunities' page).
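
To make my reading of it concrete, here is the subtraction as I understand it (a sketch with placeholder figures, not GiveWell's actual numbers):

```python
# My reading of the spreadsheet, with placeholder figures -- these are
# NOT GiveWell's actual numbers, just the shape of the calculation.
uncommitted_bank_balance = 25_000_000  # 'Available and Expected Funding', B26
spending_opportunities = [40_000_000, 15_000_000, 10_000_000]  # 'Spending Opportunities', D6:D13

room_for_more_funding = sum(spending_opportunities) - uncommitted_bank_balance
print(room_for_more_funding)  # 40,000,000 in this toy case

# Note what is absent: expected revenue from other funders over the next
# three years is computed elsewhere but never enters this subtraction --
# which is exactly the substitution effect I'm worried about.
```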

I understand this to mean that they are not taking into account substitution effects from donations from other organizations. In fact, they calculate the organization's expected revenue over the next three years, but they do not use that anywhere else in the spreadsheet that I am aware of. This is a little disappointing, because I expect that information would be relevant. I could be wrong, and hopefully am, so I would appreciate being corrected. Likewise if this page is outdated I am open to reconsidering my position.

So personally, I do find it unconvincing, but I really want to be convinced, since I have been donating to them in part based on branding. I think GiveWell is an organization with sufficient technical capability they could do these estimates in a really compelling way. I mentioned one approach for dealing with this in my comment below, and I'm kind of disappointed they haven't done that.

Expand full comment
Nov 28, 2023·edited Nov 29, 2023

Room for more funding is not the substitution effect analysis; it's an analysis of how "shovel ready" a given charity is, and how much more money you can dump into it before the money is not doing the effective thing on the margin anymore.

I believe the place where they analyze substitution effects would be mostly on their blog posts about grant making.

Expand full comment

I'm trying to find this, and I'm struggling. The closest I could find is this:

https://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/

And this is much more focused on small donors, which I am less worried about. It also has no formal analysis, which is a little disappointing. I'll keep looking and post when I find something, but if you know of another place or spreadsheet where they do this analysis, I'd be most grateful if you linked to it!

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

I was about to say this same thing! While I am broadly supportive of EA, it's unclear to what extent other organizations (like the Gates Foundation) would redirect their donations to the AMF. There is a real cost to losing EA here, but it is not obvious that EA has saved 200,000 lives.

Something which would start to persuade me otherwise is some kind of event study/staggered difference-in-differences looking at organizations which GiveWell funded versus ones it considered funding but did not, and seeing how much these organizations experienced funding increases afterwards.
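
Concretely, a minimal sketch of the design I have in mind (the dataset, file name, and column names are all hypothetical):

```python
# Hypothetical two-way fixed effects event study: do charities that
# GiveWell recommended see funding growth afterwards, relative to
# charities GiveWell evaluated but did not recommend?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("charity_funding_panel.csv")  # invented panel dataset
# expected columns: charity, year, log_funding, recommended_year (NaN if never)

df["post"] = (df["year"] >= df["recommended_year"]).astype(int)

# Charity and year fixed effects absorb level differences and common shocks;
# 'post' estimates the post-recommendation change in funding. (With staggered
# adoption, a Callaway-Sant'Anna style estimator would be more robust.)
model = smf.ols("log_funding ~ post + C(charity) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["charity"]}
)
print(model.params["post"])
```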

Expand full comment
author

I think the Gates Foundation is a bad example because they're probably doing just as much good as EA if not more (they're really competent!), so whatever their marginal dollar goes to is probably just as good as ours, and directing away their marginal dollars would cost lives somewhere else.

I think most other charities aren't effective enough for this to be a concern.

Expand full comment

How is money being siphoned off towards ends that are problematic?

Expand full comment

Doth protest too much.

No one who follows EA even a little bit thinks it has all gone wrong, accomplished nothing, or installed incompetent doomerism into the world. And certainly the readers of Astral Codex Ten know enough about EA to distinguish between intelligent and unintelligent critique.

What I'd like to hear you respond to is something like Ezra Klein's recent post on Threads. For EA, he's as sympathetic a mainstream voice as it comes. And yet he says, "This is just an annus horribilis for effective altruism. EA ended up with two big swings here. One of the richest people in the world. Control of the board of the most important AI company in the world. Both ended in catastrophe. EA prides itself on consequentialist thinking but when its adherents wield real world power it's ending in disaster. The movement really needs to wonder why."

Your take on this is, no biggie? The screwups are minor, and are to be expected whenever a movement becomes larger?

Expand full comment

I think it's pretty fair to say the screwups are minor compared to saving hundreds of thousands of actual lives, yeah!

Expand full comment

If one takes AI doom seriously, then the OpenAI screwup could well cost the whole world, because Altman is a shark.

Now, I don't actually think that, but it's well within the Overton window of EA and thus isn't at all minor.

Unless we're jettisoning longtermism again and only caring about lives that exist near-term.

Expand full comment

I mean, there is no perfect plan that could protect you from these things. Who exactly could have figured out that SBF was a fraud? And corporate warfare like that is inherently chaotic, like real war. Ok, granted, that second one does seem like more of a fuckup, like they didn't realize the risk they were taking on.

But I do believe that anyone attempting something hard is gonna scrape their knees along the way. Fuck around, find out, is inescapable for the ambitious. So yeah, I don't care about these 2 screw ups. I think the movement has learned from both of them.

Expand full comment

Personally, I think anyone willing to dismiss all crypto as a pyramid scheme could have worked out that SBF was a fraud; for me the only question was whether or not he knew he was actually a grifter.

But that's based more on me having boring low-risk opinions on finance than any great insight into the financial system.

Expand full comment

That's a low-specificity prediction, though, and thus unimpressive. SBF was committing fraud in ways that were not inherent in running a cryptocurrency exchange, and that was the surprising thing. I don't think anyone predicted that, but I didn't pay close attention.

Expand full comment
author

Yeah, I think the screwups are pretty minor compared to the successes.

Expand full comment
Comment deleted
Expand full comment

> What makes capital EA important or essential in a way that lowercase, trying your sincere best to be effective altruism isn’t cutting it?

How many people did the lowercase effective altruism before the uppercase one was a thing?

Expand full comment

especially since there are no likes or "Quality Contributions" and so on here - great post, really appreciate it, wish it was the one I'd written.

Expand full comment

I can catalogue the successes of EA alongside you. I disagree that the screwups are minor. And I especially disagree that the screwups provide no good reason for reflection more generally on EA as a movement.

EA suffers from a narrow brilliance offset by a culpable ignorance about power and people. Or, only modestly more charitable, a culpable indifference to power and people. SBF's "fuck regulators" and the OpenAI board's seeming failure to hire crisis communications reflect this ignorance about power and people.

Is it your position that the feedback the world is providing now about what happens when EAs actually acquire a lot of power is something safely and appropriately ignored? Especially when that feedback comes from smart and otherwise sympathetic folks like Ezra Klein? Or will you instead just point to 200,000 lives saved and tell people to get on the EA train?

Gideon Lewis-Kraus wrote about you: "First, he has been instrumental in the evolution of the community’s self-image, helping to shape its members’ understanding of themselves not as merely a collection of individuals with shared interests and beliefs but as a mature subculture, one with its own jargon, inside jokes, and pantheon of heroes. Second, he more than anyone has defined and attempted to enforce the social norms of the subculture, insisting that they distinguish themselves not only on the basis of data-driven argument and logical clarity but through an almost fastidious commitment to civil discourse."

You possess a lot of power, Scott. Do you think there is nothing to be learned from the EA blowups this past year?

Expand full comment
author
Nov 28, 2023·edited Nov 28, 2023

I'm going to write a piece on the OpenAI board situation - I think most people are misunderstanding it. I think it's weird that everyone has concluded "EAs are incompetent and know nothing about power" and not, for example "Satya Nadella, who invested $10 billion in OpenAI without checking whether the board agreed with his vision, is incompetent and knows nothing about power" or "tech billionaire Adam D'Angelo is incompetent and knows nothing about power" or even "Sam Altman, who managed to get fired by his own board, then agreed to a compromise in which he and his allies are kicked off the board, but his opponent Adam D'Angelo stays on, is incompetent and knows nothing about power". It's just too tempting for people to make it into a moral about how whatever they already believed about EAs is true. Nobody's gunning for those other guys the same way, so they get a pass.

I'm mostly against trying to learn things immediately in response to crises (I'm okay with learning things at other times, and learning things in a very delayed manner after the pressure of the crisis is over). Imagine the sorts of things we might have learned from FTX:

- It was insane that FTX didn't have a board, you need strong corporate boards to keep CEOs in check.

- Even though people didn't explicitly know Sam was a scammer, they should have noticed a pattern of sketchiness and dishonesty and reacted to it immediately, not waited for positive proof.

- If everything is exploding and the world hates you, for God's sake don't try to tweet through it, don't go to the press, don't explain why you were in the right all along, just stay quiet and save it for the trial.

Of course, those would have been the exact wrong lessons for the most recent crisis (and maybe overlearning them *caused* the recent crisis) because you can't actually learn things by overupdating on single large low-probability events and obsessing over the exact things that would have stopped those events in particular.

I stick to what I said in the post:

" My first, second, and so on to hundredth priorities are protecting this tiny cluster and helping it grow. After that I will grudgingly admit that it sometimes screws up - screws up in a way that is nowhere near as bad as it’s good to end gun violence and cure AIDS and so - and try to figure out ways to screw up less. But not if it has any risk of killing the goose that lays the golden eggs, or interferes with priorities 1 - 100."

Expand full comment

With respect, I disagree. The Open AI board initiated the conflict, so it is fair to blame them for misjudging the situation when they failed to win. In exactly the same way, when Malcolm Turnbull called a party vote on his own leadership in 2018 and lost his position as Prime Minister as a result, it is fair to say that it was Turnbull's judgement that failed catastrophically and not Peter Dutton's.

Secondly, I think events absolutely vindicated Nadella and Altman's understanding of power. I think Nadella understood that as the guy writing the checks, he had a lot of influence over Open AI and could pull them into line if they did something he didn't like. They did something he didn't like, and he pulled them into line. Likewise, I think Altman understood that the loyalty the Open AI staff have towards him made him basically untouchable, and he was right. They touched him, and the staff revolted.

If someone challenges you and they lose, that is not a failure to understand power on your part. That is a success.

I don't think Altman losing his place on the board means anything much. It's clearly been demonstrated that his faction has the loyalty of the staff and the investors and can go and recreate Open AI as a division of Microsoft if push comes to shove. They have all the leverage.

Expand full comment
author

My impression is the OpenAI board didn't initiate the conflict, they were frantically trying to preempt Sam getting rid of them first. See https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend?commentId=3cj6qhSRt4HoBLpC7 (and the rest of the comments thread).

Expand full comment

Turnbull was trying to pre-empt Dutton too.

If you make the judgement that you can win an overt conflict but will lose a more subtle one, it can make sense to initiate an overt conflict - but it's still incumbent on you to win it.

If you're not going to win the overt conflict, you're better off dragging things out and trying/hoping to change the underlying dynamic in a way that is favourable to you. If the choice is lose fast or lose slow, lose slow. It allows the opportunity for events to overtake the situation.

But having said that, I'm not at all sure that was the choice before them. Even if it's true that Altman was trying to force Toner out, it's unclear whether or not he would have been able to. Maybe he could have; certainly he's demonstrated that he has a lot of power. But ousting a board member isn't the easiest thing in the world, and it doesn't seem like - initially at least - there were 4 anti-Toner votes on the board. Just because executives wanted to "uplevel their independence" doesn't mean they necessarily get their way.

My instinct is that the decision to sack Altman was indeed prompted by his criticism of Toner and the implication that he might try to finesse her out - people feeling their position threatened is the kind of thing that often prompts dramatic actions. But I don't think the situation was so bad for Toner that failure to escalate would have been fatal. I think she either misjudged the cold war to be worse for her than it would have been or the hot war to be better for her than it actually was, or (quite likely) both.

And I think Toner's decision to criticize Open AI in her public writings - and then to make the (probably true!) excuse that she didn't think people would care - really strengthens the naivety hypothesis. That's the kind of thing that is obviously going to undermine your internal position.

Expand full comment

A thing that's curiously absent from that take, from Zvi's take, and from all the other takes I've seen is: what the heck did the board expect to happen?

Any post attempting to provide an explanation must answer this question, for two reasons: first, obviously you can't be sure that it's correct if it contains a planet-sized hole where the motivations of one of the parties are supposed to be; second, speaking for myself, I'm much more concerned about the EA-adjacent people utterly failing to pull off a backstab than about them being backstabby. Being *competent* is their whole schtick.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

>”it’s good to end gun violence and cure AIDS and so”

EA didn’t end gun violence and AIDS though. You can’t compare saving nameless faceless anonymous people on the other side of the world that nobody who matters cares about personally to ending gun violence. Ending gun violence would improve the lives of every single person who lives in an American metropolitan area overnight by making huge swaths of their city traversable again.

I’m curious how far you’re willing to abstract away the numbers. Does saving people in other Everett branches count? If the 200,000 saved lives were in a parallel universe we could never interact with, but FTX happened in our universe, would you still think the screwups were minor compared to the successes?

Expand full comment

Morbidity and mortality rates on other continents of the same planet can be independently verified. Alternate universes, not so much.

Expand full comment

I think the important thing here, in your specific post on "well, Ezra Klein says [this]", is that what people say about X, how much they say about X, how much they don't say about X, and how much they say about Y or Z instead, are all political choices that people make. There is no objective metric for the words "big", "horribilis", "catastrophe", "real world power", and "disaster" in his statement, or for the scale of impact they imply. This is a journalist's entire job.

I am 100% not in the EA movement, but one thing I like about it is the ostensible focus on real world impacts, not subjective interpretations of them. I am not trying to advocate pro- or con-, just that if you/we take a step back, are we all talking about reality, or people's opinions - especially people's opinions that are subject to their desire for power/influence/profit, or their desire to advance or denigrate specific ideologies? If we thought about this dispassionately, is there any group of even vaguely ideologically associated people that we could not create a similar PR dynamic about?

We are essentially discussing journalism priorities here. What objective set of pre-event standards for "is this important?" and "does this indict the stated ideology of the person involved?" is being applied to SBF or OpenAI? Are those standards similarly applied to other situations? I'm not criticizing what you're saying, just suggesting that we perhaps need to focus on real impacts rather than on "what people are saying."

Expand full comment

"I am 100 not in the EA movement, but..."

I respect what Scott et al. have done with the EA movement, and I think it's laudable. However, like many historical examples of ideological/intellectual/political movements, there's a certain tendency to 'circle the wagons' and assume those who are attracted to some (but not all) of the movement's ideas are either 'just learning' (i.e. they're still uploading the full program and will eventually come around; ignore them until then) or are disingenuous wolves in sheep's clothing.

Yet in any mature movement, you have different factions with their own priorities. New England Republicans seem to care more about fiscal security and trade policy, while Southern Republicans care about social issues - with massive overlap from one individual to another.

I'm not saying Scott explicitly rejects pluralism in EA. He ended this essay with an open invitation to follow whatever selection of EA principles you like. I'm just observing that many people feel they have to upload the whole program (from animal cruelty to AI safety and beyond) in order to identify as even 1% "in the EA movement".

Speaking from experience, I feel it took time for me to be able to identify with EA for exactly this reason: I didn't agree with the whole program. I agree with Scott that there's broad potential appeal in EA. But I think much of that appeal will not be realized until people feel comfortable associating themselves with the core of the movement without feeling like they're endorsing everything else. And for a program in its infancy, still figuring things out from week to week, it's probably best for people to feel they can freely participate and disagree in good faith while still being part of the community.

Expand full comment

For myself, I was more delineating the "movement" part - the public people who have been associated with EA. As a person, I don't feel like I'm a part of it, though lots of the ideas are attractive. I prefer the "smorgasbord/buffet" style choice of ideologies. :) And the "pluralism" you/Scott mention is absolutely my style! But to some extent, I think the core principle of EA is just (and I mean this as a compliment) banal - yes, obviously you should do things that are good, and use logic/data to determine which things those are. The interesting part of the "movement" is actually following through on that principle. Whether that means bednets or AI safety or shrimp welfare is all dependent on your value weights.

Expand full comment

I agree with nearly all of that. I would just add one suggestion and invitation: EA is new. It's aware that it needs intellectual input and good-faith adversarial challenges to make it better. This especially includes people like you, who agree with many core ideas, but would challenge others. The movement doesn't require a kidney donation for 'membership', nor does it require exclusivity from other organizations. You don't have to be atheist or rationalist, just interested in altruism and in making your efforts more effective.

Seems like a movement you could contribute to, even if only in small, informal ways?

Expand full comment

I think that's all true, with emphasis on the "there is no membership" part. My original point in my comment was that all of this conversation, and the Ezra Klein journo-style statements especially, are trying to debate "the group of people defined as EA: GOOD OR BAD?" for monkey-politics reasons, like we would about a scandal involving a D or R senator. I think I would prefer for it to be more "here is a philosophy that SBF can pick things from and still turn out to be a jerk, but also one where I (or any other person) can pick things from, with our jerk-itude depending on our own actions rather than on SBF or anyone else who came to the philosophy buffet."

Expand full comment

1. SBF fooled a lot of people, including major investors, not just EA. I agree that some EA leaders were pretty gullible (because my priors are crypto = scam), but even my cynicism thought SBF was merely taking advantage of the dumb through arbitrage, not running an outright fraud (see also: Mark Levine).

2. It’s way too early to tell if the OpenAI thing is in fact a debacle. Certainly it was embarrassing how the board failed to communicate, but the end result may be better than before. It’s also not as if “EA” did the thing, instead of a few EA-aligned board members.

Also I think your first bit there is a little too charitable to many critics of EA who read Scott.

Expand full comment

I'm also a longtime crypto skeptic and I had just assumed that SBF was running a profitable casino.

Expand full comment

*Matt Levine

Expand full comment

I think it's very selective and arbitrary to consider these EA's "two big swings." I've been in EA for 5+ years and I had no idea what the OpenAI board was up to, or even who was on it or what they believed, until last weekend. I'd reckon 90% of people involved with or identifying as EA had no idea either. Besides, even if it was a big swing within the AI safety space, much of the movement and most of the donations it inspires are actually focused on animal welfare or global health and development issues that seem to be chugging along well. The media's tabloid fixation on billionaires and big tech does not define our ideology or movement.

A fairer critique is that the portion of EA invested in reducing existential risk by changing either a) U.S. federal policy or b) the behavior of large corporations seems to have little idea what it's doing or how to succeed. I would argue that this is partly because they have not yet transitioned, nor even recognized the need to transition, from a primarily philosophical and philanthropic movement to a primarily political one, which would in turn require giving much more concern and attention to reputational aesthetics, mainstream opinion, institutional incentives, and relationship building. Political skills are not necessarily abundant in what has until recently been philosophy club for privileged, altruistic but asocial math and science nerds. Coupled with a sense of urgency related to worry over rapid AI timelines, this failure to think politically has produced multiple counterproductive, high-profile blunders that seem to outsiders like desperate flailing at best and self-serving facade at worst (and thus have unfair and tragic spillover harms on the bulk of EA that has nothing to do with AI policy).

Expand full comment

Effective Altruists were supposed to have known better than ACTUAL PROFESSIONAL INVESTORS AND FINANCIAL REGULATORS about the fraudulent activities of SBF?

Expand full comment

If they claim to do charity better than actual professional charities, I naturally expect the same excellence in every field they touch.

(just kidding)

Expand full comment

Effective Altruists hung out with him, worked with him, mentored him into earn-to-give in the first place. The regulators might've failed by not catching him fast enough, but unironically yes, there are reasons some EAs should've caught on (and normal, human, social reasons why they wouldn't).

Expand full comment

I agree with the general point that EA has done a lot of good and is worth defending, but I think this gives it too much credit, especially on AI and other political influences. I suspect a lot of those are reverse causation - the kind of smart, open-minded techy people who are good at developing new AI techniques (or the YIMBY movement) also tend to be attracted to EA ideas, and I think assuming EA as an organization is responsible for anything an EA-affiliated person has done is going too far.

(That said, many of the things listed here have been enabled or enhanced by EA as an org, so while I think you should adjust your achievement estimates down somewhat, they should still end up reasonably high.)

Expand full comment
author

I'm not giving EA credit for the fact that some YIMBYs are also EAs, I'm giving it credit for Open Philanthropy being the main early funder for the YIMBY movement.

I think the strongest argument you have here is RLHF, but I think Paul wouldn't have gone into AI in the first place if he wasn't an EA. I think probably someone else would have invented it for some other reason eventually, but I recently learned that the Chinese AI companies are getting hung up on it and can't figure it out, so it might actually be really hard and not trivially replaceable.

Expand full comment

Hm. I think there's a distinction between "crediting all acts of EAs to the EA movement", and "showing that EAs are doing lots of good things". And it's the critics who brought up the first implication, in the negative sense.

Expand full comment

It's frustrating to hear people concerned about AI alignment being compared to communists. Like, the whole problem with the communists was they designed a system that they thought would work as intended, but didn't foresee the disastrous unintended consequences! Predicting how a complex system (like the Soviet economy) would respond to rules and constraints is extremely hard, and it's easy to be blindsided by unexpected results. The challenge of AI alignment is similar, except much more difficult with much more severe consequences for getting it wrong.

Expand full comment

> Am I cheating by bringing up the 200,000 lives too many times?

Yes, absolutely. The difference is that developing a cure for cancer or AIDS or whatever will solve the problem *permanently* (or at least mitigate it permanently), while saving lives in impoverished nations is a noble and worthwhile goal, but one that requires continuous expenditures for eternity (or at least the next couple of centuries, I guess).

And on that note, what is the main focus of EA? My current impression is that they're primarily concerned with preventing the AI doom scenario. Given that I'm not concerned about AI doom (except in the boring localized sense, e.g. the Internet becoming unusable due to being flooded by automated GPT-generated garbage), why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely?

Expand full comment

> And on that note, what is the main focus of EA? My current impression is that they're primarily concerned with preventing the AI doom scenario.

Did you see the graph of funding per cause area?

Expand full comment

Yes, and I see the orange bar for "longtermism and catastrophic risk prevention" growing rapidly (as a percentage of the total, though I'm eyeballing it).

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

This was pre-FTX crash; post-crash the orange part has probably decreased. See Jenn's post pointing at: https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881

Expand full comment

You can choose what causes you donate to. Like, to bring another example, if you're a complete speciesist and want to donate only to stuff that saves humans, that's an option even within GiveWell etc. You do not need to buy into the doomer stuff to be an EA, let alone to give money.

Expand full comment

How is "rapidly growing" equal to "primarily concerned with"? Your statement is objectively wrong.

Expand full comment

From what I've seen, there's an active and sustained effort in the EA movement to redirect their efforts from boring humdrum things like mosquito nets and clean drinking water to the essential task of saving us all from AI doom. Based on the graph, these efforts are bearing fruit. I don't see any contradiction here.

Expand full comment

AI Doom isn't even the only longtermist/catastrophic-risk cause area; pandemic prevention, nuclear risk, etc. are all also bundled into that funding area.

Expand full comment

Take the Giving What We Can pledge that Scott linked to, you can donate to all sorts of causes there.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

From what I know about medicine, a cure for cancer or AIDS will also require continuous expenditures, no? Drugs (or medical procedures) are expensive!

Expand full comment

Fair point, it depends on what you mean by "cure". If we could eradicate cancer the way we did polio, it would dramatically reduce future expenditures.

Expand full comment

If we could do that, we could also live forever young. It's a big lift.

Expand full comment

That seems unlikely on the face of it, since polio is an infection, while cancer, barring a small number of very weird cases, isn't. There isn't an external source of all cancer which could theoretically be eliminated.

Expand full comment
author

I tried to calculate both AIDS/cancer/etc. and EA in terms of lives saved per year, so I don't think it's an unfair comparison. As long as EA keeps doing what it's doing now, it will have "cured AIDS permanently".

You can't "donate to EA", because EA isn't a single organization. You can only donate to various charities that EA (or someone else) recommends (or inspired). I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.

If you donate to Malaria Consortium for that reason, I count you as "basically an EA in spirit", regardless of what you think about AI.

Expand full comment

> As long as EA keeps doing what it's doing now, it will have "cured AIDS permanently".

Can you explain how this would work -- not just in terms of total lives saved, but cost/life?

>You can't "donate to EA", because EA isn't a single organization.

Yes, I know, I was using this as a shorthand for something like "donating to EA-endorsed charities and in general following the EA community's recommendations".

> I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.

What if I care about things other than maximizing the number of lives saved (such as quality of life)? Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever)? Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce the most accurate recommendations?

I expect that EA adherents would say that all of these questions have been adequately answered, but a) I personally don't think this is the case (though I could just not be smart enough), and b) given the actual behaviour of EA vis-à-vis SBF and such, I am not certain to what extent their proclamations can be trusted. At the very least, we can conclude that they are not very good at long-term PR.

Expand full comment

My God, just go here: https://www.givingwhatwecan.org/ - you control where the money goes; it won't get randomly redirected into something you don't care about.

If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it. Though I do wonder what you're thinking of doing with your charitable giving that is higher impact than something attacking global poverty/disease.

Expand full comment

> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist

I really hate this argument; it's as dishonest as saying "if you care about your neighbour then you're already a Christian". No, there's actually a bit more to being a Christian (or an EA) in addition to agreeing with bland common-sense homilies.

Expand full comment

Eh, EA really has a way lower barrier to entry than being Christian. I really do think all it takes is starting to think about how to do the most good. It's not really about submitting to a consensus or a dogma. I sure know I don't buy like 50% of EA; yet I still took the Giving What We Can pledge and am therefore an EA anyway.

Expand full comment

Checking the math on claims of charitable effectiveness, shopping around for the best value in terms of dollars-per-Quality-Adjusted-Life-Year (regardless of exactly how you're defining 'quality of life,' so long as you're willing to stick to a clear definition at all), is about as central to EA as Christ is to Christianity.

Expand full comment

Perhaps, but it is not *unique* to EA. It's like saying that praying together in a big religious temple is central to Christianity -- it might be, but still, not everyone who prays in a big temple is a Christian. Furthermore, the original comment that I replied to is even more vague and general than that:

> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it.

Expand full comment

> Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever) ?

The charities that get GiveWell recommendations are very transparent. You can see their detailed budget and cost-effectiveness in the GW analyses. If Against Malaria Foundation decides to get into AI safety research, you will know.

Nothing even vaguely like this has ever happened AFAIK. And it seems wildly improbable to me, because those charities have clear and narrow goals, they're not like a startup looking for cool pivots. But, importantly, you don't have to take my word for it.

> Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce most accurate recommendations ?

Sadly there is not a real-money prediction market on this topic, so I can't confidently tell you how unlikely this is. But we're living in the present, and right now GW does great work. If GW ever stops doing great work, *then* you can stop using it. Its decline is not likely to go unnoticed (especially compared to a typical non-EA-recommended charity), what with the transparency and in-depth analyses allowing anyone to double-check their work, and the many nerdy people with an interest in doing so.

Expand full comment

EA orgs do take quality-of-life impacts into account.

Expand full comment

> why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely?

Don't "donate to EA"; donate to the causes that EA has painstakingly identified to be the most cost-effective and neglected.

EA Funds is divided into 4 categories (global health & development, animal welfare, long-term future, EA infrastructure) to forestall exactly this kind of concern. Think bed nets are a myopic concern? Think animals are not moral subjects? Think AI doom is not a concern? Think EAs are doing too much partying and castle-purchasing? Join the club, EAs argue about it endlessly themselves! And just donate to one of the other categories.

(What if you think *all four* of these are true? Probably there's still a group of EAs hard at work trying to identify worthwhile donation targets for you; your preferences are idiosyncratic enough that you may have to dig through the GiveWell analyses yourself to find them.)

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

I found the source of the Funding Directed by Cause Area bar graph; it's from this post on the EA forum: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data . Two things to note:

1. the post is from August 14 2022, before the FTX collapse, so the orange bar (Longtermism and Catastrophic Risk Prevention) for 2022 might be shorter in reality.

2. all the information in the post is from this spreadsheet (https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881) maintained by the OP, which also includes 2023 data showing a further decrease in longtermism and XR funding in 2023.

Expand full comment
author

Thanks, I somehow managed to lose it. I'll put that back in.

Expand full comment

No critical commentary, just want to say this is excellent and reflects really well what's misguided about the criticisms of EA.

Expand full comment

Agreed. Very good.

Expand full comment

Same.

Expand full comment

Agreed.

Most of my comments have a "Yes, but", but not this one. Great post about a great movement!

Expand full comment

> It’s only when you’re fighting off the entire world that you feel truly alive.

SO true, a quote for the ages

Expand full comment

I agree. Also: EA can refer to at least three things:

- the goal of using reason and evidence to do good more effectively,

- a community of people (supposedly) pursuing this goal, or

- a set of ideas commonly endorsed by that community (like longtermism).

This whole article is a defense of EA as a community of people. But if the community fell apart tomorrow, I'd still endorse its goal and agree with many of its ideas, and I'd continue working on my chosen cause area. So I don't really care about the accomplishments of the community.

Expand full comment

Unfortunately, and that's an very EA thought, I am pretty sceptical that EA saved 200,000 lives counterfactually. AMFs work was funged by the Gates Foundation which decided to fund more US education work after stopping their malaria work due to tremendous amounts of funding from outside donors

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment

Unless you count one trivial missing apostrophe, there aren't any spelling mistakes! (Sceptical is the British spelling. Scott has many British readers.)

Expand full comment

"an very", "was funged by", missing full stop on last sentence.

I suppose you could call these all grammar errors rather than spelling errors, though.

Expand full comment

Are you thinking "funged" is a typo for "funded"? I think "funged" makes more sense semantically, so I think it was intended.

You're right about "an very" though; I missed that.

Expand full comment
author

Banned for this comment.

Expand full comment

> [Sam Altman's tweets] I don't exactly endorse this Tweet, but it is . . . a thing . . . someone has said.

OK, then. Sam Altman apparently has a sense of humor, and at least occasionally indulges in possibly-friendly trolling. Good to know.

Expand full comment

200,000 sounds like a lot, but there are approximately 8 billion of us. It would take over 15,000 years to give every person one minute of your time. Who are these 200,000? Why were their lives at risk without EA intervention? Whose problems are you solving? Are you fixing root causes or symptoms? Would they have soon died anyway? Will they soon die anyway? Are all lives equal? Would the world have been better off with more libraries and fewer malaria interventions? These are questions for any charity, but they're more easily answered by the religious than the intellectual, which makes it easier for them, as they don't need to win arguments on the internet. EA will always have it harder because they try to justify what they do with reason.
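For what it's worth, the arithmetic behind that 15,000-year figure checks out, assuming 8 billion people and no breaks:

$$\frac{8 \times 10^{9}\ \text{minutes}}{60 \times 24 \times 365\ \text{minutes/year}} \approx 15{,}200\ \text{years}$$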

Probably a well-worn criticism but I'll tread the path anyway: ivory-tower eggheads are impractical, do come up with solutions that don't work, and enshrine as sacred ideas that don't intuitively make sense. All while feeling intellectually superior. The vast majority of the non-WEIRD world is living animalistic lives. I don't mean that in a negative sense. I mean that they live according to instinct: my family's lives are more important than my friends' lives, my friends' lives are more important than strangers' lives, my countrymen's lives are more important than foreigners' lives, human lives are more important than animal lives. And like lions hunting gazelles, they don't feel bad about it. But I suspect you do, and that's why you write these articles.

If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world to approve of what you're doing and how you're doing it, give up. Many never will.

Expand full comment

> If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world

> to approve of what you're doing and how you're doing it, give up.

Amongst many ways to get more good done, one practical approach is to get more people to do good. Naysayers are welcome to the finger as you suggest, but sometimes people might be on the fence; and if, with a little nudge, more good things get done, taking a little time for a little nudge is worthwhile.

Expand full comment

We don't need to know whether all lives are valued equally. As long as we expect their value to be positive, saving a lot of lives will mean a lot of positive value.

Expand full comment

What do you think of Jeremiah Johnson's take on the recent OpenAI stuff? "AI Doomers are worse than wrong - they're incompetent"

https://www.infinitescroll.us/p/ai-doomers-are-worse-than-wrong-theyre?lli=1&utm_source=profile&utm_medium=reader2

(Constrained in scope to what he calls "AI Doomers" rather than EA writ large, though he references EA throughout)

Expand full comment
author

See the section on AI from this list - I don't think it sounds like they're very incompetent!

I also think Johnson (and most other people) don't understand the OpenAI situation, might write a post on this later.

Expand full comment

Was Sam Altman a "doomer" in 2015?

Expand full comment

I dunno, ask Jeremiah Johnson

Expand full comment

"Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat."

I hope that includes all Yum! brands, not just Pepsi. Otherwise, I'm thinking you probably don't have much to crow about if Pepsi agrees to use cruelty free meat in their...I dunno...meat drinks, I guess, but meanwhile KFC is still skinning and flaying chickens alive by the millions.

Expand full comment

Getting Kellogg's to go cruelty free with their Frosted Mini Meats is undoubtedly a big win, though.

Expand full comment

Many Thanks! I enjoyed that!

Expand full comment

I stopped criticizing EA a while back because I realized the criticism wasn't doing anything worthwhile. I was not being listened to by EAs, and the people who were listening to me were mostly interested in beating up EA as a movement - which was not a cause I thought I ought to contribute to. Insofar as I thought that, though, it was because of this kind of stuff and not the more esoteric forms of intervention about AI or trillions of people in the future. The calculation was something like: how many bednets are some rather silly ideas about AI worth? And the answer is not zero bednets! Such ideas do some damage. But it's also less than the sum total of bednets EA has sent over, in my estimation.

Separately from that, though, I am now convinced that EA will decline as a movement absent some significant change. And I don't think it's going to make significant changes or even has the mechanisms to survive and adapt. Which is a shame. But it's what I see.

Expand full comment

Wasn't your criticism that EA should be trying to build malaria-net factories in the most dysfunctional countries in the world instead of giving nets to people who need nets, because this would allow people with an average IQ of 70 to build the next China? Yeah, I can't imagine why people weren't interested in your great ideas...

Expand full comment

No, it was not. It doesn't surprise me you missed my point though. After all, you missed the point of my comment here too.

Expand full comment

Totally fair that EA succeeds at its stated goals. I'm sure negative opinions run the gamut, but for my personal validation I'll throw in another: I think it's evil because it's misaligned with my own goals. I cannot deny the truth of Newtonian moral order and would save the drowning child and let those I've never heard of die because I think internal preference alignment matters, actually.

Furthermore, it's a "conspiracy" because "tradeoff for greater utils (as calculated by [a subset of] us)" is well-accepted logic in EA (right?). This makes the behavior of its members highly unpredictable and prone to keeping secrets for the greater good. This is the basic failure mode that led to SBF running unchecked - his stated logic usually did check out by [a reasonable subset of] EA standards.

Expand full comment

Do you consider everything else that is misaligned with your goals evil, or just EA?

Expand full comment

Using the word "evil" here might be straining my poetic license, but yes, "evil" in this context reduces to exactly "misaligned with my goals"

Expand full comment

Isn't that like, almost everyone to some degree?

Expand full comment

Yes, usually including myself! However, EA seems like a powerful force for making my life worse, rather than something that offers enough win-win to keep me ambivalent about it.

If EA continues to grow, I think it's likely that I'll trade away a great number of QALYs to an experiment that I suspect is unlikely to even succeed at its own goals (in a failure mode similar to centralized planning of markets).

Expand full comment

Congratulations, you now understand human morality.

Expand full comment

The Coasean problem with EA: it discounts, if not outright disregards, transaction costs and how those costs increase as knowledge becomes less perfect, which thus reduces the net benefit of a transaction.

In other words, without making extraordinary assumptions about the TOTAL expected value and utility of a charitable transaction, EA must heavily discount how much the transaction costs (of determining the counterparty's expected value and utility - a subjective measure) offset the benefit of the transaction. In many instances, those transaction costs will be exorbitant, since they turn on a subjective measure, and will therefore exceed the benefit, producing a net negative "effect."
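Roughly formalized (notation mine, purely to illustrate the commenter's claim): let $B$ be the gross benefit of a charitable transaction and $C(q)$ the transaction cost of determining the counterparty's value, with $C$ rising as information quality $q$ falls:

$$\text{Net effect} = B - C(q), \qquad C'(q) < 0$$

On this reading, once information is imperfect enough that $C(q) > B$, the transaction is net negative even if the gross benefit is real.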

One is left therefore to imagine how EA can ever produce an effective result, according to those metrics, in the absence of perfect information and thus zero transaction costs.

Expand full comment

Are you saying that the specific impact calculations that orgs like GiveWell do are incorrect, or are you just claiming epistemic learned helplessness (https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/)?

Expand full comment

I mean, GiveDirectly is a top charity on GiveWell; are you claiming that showering poor people in money to the tune of $0.92 per dollar still produces a lot of transaction cost?

Expand full comment

This, I think, is an interesting take.

Is your thought here that transaction costs are implicit and thus not properly priced into the work done? I think at the development-economics level that is not terribly true. The transaction costs of poverty relief in urban USA vs. poverty relief in San Salvador are not terribly different once the infrastructure in question is set up.

"Compared to what" is my question.

Everything has transaction costs. Other opportunities have similar transaction costs. I would be surprised if they didn't. However, I agree I would like to see this argued explicitly somewhere.

Expand full comment
author

Isn't this just the old paradox where you go:

- Instead of spending an hour studying, you should spend a few minutes figuring out how best to study, then spend the rest of the time studying

- But how long should you spend figuring out the best way to study? Maybe you should start by spending some time figuring out the best balance between figuring out the right way to study, and studying

- But how long should you spend on THAT? Maybe you should start by spending some time figuring out the best amount of time to spend figuring out the best amount of time to spend figuring out . . .

- ...and so on until you've wasted the whole hour in philosophical loops, and therefore you've proven it's impossible to ever study, and even trying is a net negative.

In practice people just do a normal amount of cost-benefit analysis which costs a very small portion of the total amount of money donated.
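To make the point concrete, here's a toy model of the study example - the efficiency curve is invented purely for illustration, not anyone's actual numbers:

```python
# Spend t minutes of a 60-minute hour planning how to study, the rest studying.
# Invented assumption: planning improves study efficiency with sharply
# diminishing returns (each extra minute closes half the remaining gap).

def total_value(planning_minutes: int, total: int = 60) -> float:
    efficiency = 1 + 0.5 * (1 - 2 ** (-planning_minutes))
    return efficiency * (total - planning_minutes)

best = max(range(61), key=total_value)
print(best, round(total_value(best), 1))  # -> 4 82.2: plan a few minutes, then stop
```

With any diminishing-returns curve there's an interior optimum and you just stop there; the infinite regress only appears if you pretend meta-planning has no opportunity cost.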

Expand full comment

Yes, that's my point. Since it's too expensive/impractical to calculate the true value of net expected benefit/utility of a charitable transaction, EA must rely on some exogenous set of assumptions, which can vary from one Effective Altruist to the next, about what makes something the *most effective* charitable transaction.

That's not to say that EA is normatively bad. It's just not a priori any more *effective*, in the expected benefit/utility sense, than grandma leaving her estate to the dog rescue.

Expand full comment

Centralizing and standardizing research into which charities do exactly what (so the results can then be easily checked against any given definition of "effectiveness") reduces transaction costs by eliminating a lot of what would otherwise be needlessly duplicated effort.

Expand full comment

I don't identify as an EA "person" but I think the movement substantially affected both my giving amounts and priorities. I'm not into the longtermism stuff (partly because I'm coming from a Christian perspective and Jesus said "what you do to the least of them you do to me," and not "consider the 7th generation") but it doesn't offend me. I'm sure I'm not alone in having been positively influenced by EA without being or feeling fully "in."

Expand full comment

Thank you for the data point. And for the giving, of course!

I think you do not have to agree with all points that were ever made in EA to be an EA. I think there are many people who identify as effective altruists, but do not care about animals, or longtermism, etc. We can agree that helping others is good, that being effective is better than not being effective... and still disagree on the exact measurement of the "good". The traditional answer is QALYs, but that alone doesn't tell us how to feel about animals, or humans in distant future.

Not saying that you should identify as an EA. I don't really care; it is more important what you do than what you call it. Just saying that the difference may be smaller than it seems.

Expand full comment

Good point!

Expand full comment

In the present epistemic environment, being hated by the people who hate EA is a good thing. Like, you don't need to write this article, just tell me Covfefe Anon hates EA, that's all I need. It doesn't prove EA is right or good, or anything, but it does get EA out of the default "not worth the time to read" bucket.

Expand full comment

This is not good logic; how can anyone know whose opinions are right and whose are wrong without examining them each for himself?

Expand full comment

That only applies to stupidity which is at least partly random. If some troll has established a pattern of consistently and intelligently striving to be maximally malicious, taking the reverse of their positions on binary issues may actually be a decent approximation of benevolence.

Expand full comment

You literally look like something that Covfefe Anon would draw as a crude caricature of a left wing dude.

Expand full comment

It's hard to argue against EA's short-termist accomplishments (the longtermist ones remain uncertain), as well as against the core underlying logic (10% for top charities, cost-effectiveness, etc.). That being said, how would you account for:

- the number of people who would be supportive of (high-impact) charities, but for whom EA and its public coverage ruined the entire concept/made it suspicious;

- the number of EAs and EA-adjacent people who lost substantial sums of money on/because of FTX, lured by the EA credentials (or the absence of loud EA criticisms) of SBF;

- the partisan and ideological bias of EA;

- the number of talented former EAs and EA-adjacent people whose bad experiences with the movement (office power plays, being mistreated) resulted in their burnout, other mental health issues, and aversion towards charitable work/engagement with EA circles?

If you take these and a longer time horizon into account, perhaps it could even mean "great logic, mixed implementation, some really bad failure modes that make EA's net counterfactual impact uncertain"?

Expand full comment

Could you clarify what you mean by "partisan and ideological bias" ?

Expand full comment

Control F turns up no hits for either Chesterton or Orthodoxy, so I'll just quote this here.

"As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon my mind— the impression that Christianity must be a most extraordinary thing. For not only (as I understood) had Christianity the most flaming vices, but it had apparently a mystical talent for combining vices which seemed inconsistent with each other. It was attacked on all sides and for all contradictory reasons. No sooner had one rationalist demonstrated that it was too far to the east than another demonstrated with equal clearness that it was much too far to the west. No sooner had my indignation died down at its angular and aggressive squareness than I was called up again to notice and condemn its enervating and sensual roundness. […] It must be understood that I did not conclude hastily that the accusations were false or the accusers fools. I simply deduced that Christianity must be something even weirder and wickeder than they made out. A thing might have these two opposite vices; but it must be a rather queer thing if it did. A man might be too fat in one place and too thin in another; but he would be an odd shape. […] And then in a quiet hour a strange thought struck me like a still thunderbolt. There had suddenly come into my mind another explanation. Suppose we heard an unknown man spoken of by many men. Suppose we were puzzled to hear that some men said he was too tall and some too short; some objected to his fatness, some lamented his leanness; some thought him too dark, and some too fair. One explanation (as has been already admitted) would be that he might be an odd shape. But there is another explanation. He might be the right shape. Outrageously tall men might feel him to be short. Very short men might feel him to be tall. Old bucks who are growing stout might consider him insufficiently filled out; old beaux who were growing thin might feel that he expanded beyond the narrow lines of elegance. Perhaps Swedes (who have pale hair like tow) called him a dark man, while negroes considered him distinctly blonde. Perhaps (in short) this extraordinary thing is really the ordinary thing; at least the normal thing, the centre. Perhaps, after all, it is Christianity that is sane and all its critics that are mad— in various ways."

Expand full comment

Christians, famously in firm agreement about Christianity. Definitely have had epistemology and moral philosophy figured out amongst themselves this whole time.

Someone like Chesterton can try to defend against criticisms of Christianity from secular critics and pretend he isn't standing on a whole damn mountain range of the skulls of Christians of one sect or another killed by a fellow follower of Christ of a slightly different sect.

The UK exists as it does first by splitting off from Catholicism and then various protestants killing each other over a new prayer book. Episcopalian vs. Presbyterian really used to mean something worth dying over! RETVRN.

https://en.wikipedia.org/wiki/Bishops%27_Wars

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

THE JOKE <---------------------------------------

-------------------------------------------------> YOU

Yeah the point is that everything Chesterton said in those quotes about Christianity is now true of EA, hence the political compass meme Scott shared. Also Scott (and this commentariat) like Chesterton for this kind of paradoxical style.

Please try a little harder before starting a religious slapfight and linking to wikipedia like I don't know basic history.

Expand full comment

It's the internet bucko. I'll link to Wikipedia and start religious slapfights whenever, wherever.

The reason I'm having a "whoosh" moment is because EA, whatever faults it has, can in no way measure up to what Christianity did to deserve actually valid criticism.

So you're trying to be clever but it's lost on poor souls like me who think Chesterton was wrong then and Scott is right now.

Expand full comment

Bruh. You're not even on the right topic.

People say EA is too far right, too far left, too authoritarian, too libertarian. With me so far?

In the 20s people were saying Christianity was too warlike but also too pacifistic, too pessimistic but also too optimistic. With me still?

The -structure- of the incoherence is the same in both cases, regardless of the facts underneath. I give zero fucks about Christianity. It's an analogy. Capiche, bud?

Expand full comment

Yes, I did recognize with your help that you were pointing out a structural similarity between two not-very-similar cases.

In general, you're by default gonna confuse EA-aligned people with sympathetic comparisons to Christianity.

Expand full comment

It is possible to have errors in two normally-conflicting directions at once. For instance, a lousy test for e.g. an illness might have _both_ more false negatives _and_ more false positives than a better test for the same illness, even though the rates of these failure modes are usually traded off against each other.

I'm not claiming that either or both of Christianity or EA is in fact in this position, but it can happen.
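A minimal numeric illustration (numbers invented):

$$\text{Test A: } \mathrm{FPR} = \mathrm{FNR} = 10\% \qquad \text{Test B: } \mathrm{FPR} = \mathrm{FNR} = 15\%$$

Test B is worse on both error types at once; the familiar trade-off only binds when moving along a single test's decision threshold, not when comparing two tests of different quality.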

Expand full comment

Interestingly, the Orient has another possible explanation. (Not arguing or anything, but this is what sprang to mind.)

https://en.wikipedia.org/wiki/Blind_men_and_an_elephant

Expand full comment

Does Bill Gates count as an EA?

He certainly gives away a lot of money, and from what I know about the Gates Foundation, they put a lot of effort into trying to ensure that most of it is optimally spent in some kind of DALYs-per-dollar sense. He's been doing it since 1994, he's given away more money than anyone else in history, and by the foundation's own estimates (which seem fair to compare with Scott's estimates) he has saved 32 million lives so far.

This page sets out how the Gates Foundation decides how to spend their money. What's the difference between this and EA? https://www.gatesfoundation.org/ideas/articles/how-do-you-decide-what-to-invest-in

Is it just branding? Is EA a bunch of people who decided to come along later and do basically the same thing as Bill Gates except on a much smaller scale and then pat themselves on the back extra hard?

Expand full comment
author
Nov 28, 2023·edited Nov 28, 2023

I agree Bill Gates qualifies as a lowercase effective altruist.

I don't think "do the same thing as Bill Gates" is anything to scoff at! I think if you're not a billionaire, it's hard to equal Gates' record on your own, and you need institutions to help you do it. For example, Bill can hire a team of experts to figure out which is the best charity to donate to, but I (who can't afford this) rely on GiveWell.

I agree that a fair description of EA would be "try to create the infrastructure to allow a large group of normal people working together to replicate the kinds of amazing things Bill Gates accomplished"

(Bill Gates also signed the statement on AI existential risk, so we're even plagiarizing him there too!)

Expand full comment

Well, if Bill Gates is an effective altruist, then I feel like one of the big problems with the Effective Altruism movement is a failure to acknowledge the huge amount of prior art. Bill Gates has done one to two orders of magnitude more for effective altruism than Effective Altruism ever has, but EA almost never acknowledges this; instead they're more likely to do the opposite, with their messaging of "all other charity stupid, we smart".

C'mon guys, at least give a humble shout-out to the fact that the largest philanthropist of all time has been doing the same basic thing as you for about a decade longer. You (EA) are not a voice crying in the wilderness, you're a faint echo.

Not that I'm even a big fan of Bill Gates, but credit where credit is due.

Expand full comment

Eh, where did you get the impression that EAs almost never acknowledge the value of the work done by Gates or that they are likely to dismiss it as stupid? Just to mention the first counterexample that comes to mind, Peter Singer has said that Gates has a reasonable claim to have done more good than any other person in human history.

Expand full comment

On this topic, I believe Scott also wrote a post trying to quantify how much good the Gates Foundation has done. Or possibly it was more generally trying to make the case for billionaire philanthropy. Either way, I agree EA isn't denying the impact Gates has had.

Expand full comment

So I'm pretty much a sceptic of EA as a movement despite believing in being altruistic effectively as a core guiding principle of my life. My career is devoted to public health in developing countries, which I think the movement generally agrees is a laudable goal. I do it more within the framework of the traditional aid complex, but with a sceptical eye to the many truly useless projects within it. I think that, in ethical principle, the broad strokes of my life are in line with a consequentialist view of improving human life in an effective and efficient way.

My question is: what does EA as a movement add to this philosophy? We already have a whole area of practice called Monitoring and Evaluation. Economics has quantification of human lives. There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that. From my perspective - and I share this hoping to be proved wrong - EA is largely a way of gaining prestige in Silicon Valley subcultures, and a way of justifying devoting one's life to the pursuit of money based on the assumption, presented without proof, that when you get that money you'll do good with it. It seems like EA exists to justify behaviour like that at FTX by saying 'look it's part of a larger movement therefore it's OK to steal the money, net lives saved is still good!' It's like a doctor who thinks he's allowed to be a serial killer as long as he kills fewer people than he saves.

The various equations, the discount rates, the jargon, the obsession with the distant future, are all off-putting to me. Every time I've engaged with EA literature it's either been fairly banal (but often correct!) consequentialist stuff or wild subculture-y speculation that I can't use. I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.

Expand full comment

Right now there are two groups of people who work middle-class white-collar jobs and donate >10% of their income to charity. The first group are religiously observant and are practicing tithing, with most of their money going to churches, a small fraction of which goes to the global poor. The second group is EA, and most of their money goes to the global poor.

You're right that the elements of the ideology have been kicking around in philosophy, economics, business, etc. for the last 50 years, at least. But they weren't widely combined and implemented at scale until EA did it. Has EA had some PR failures à la FTX? Yes, but EA existed for years before FTX did.

EA is mostly in favor of more funding for "the many people working in various forms of aid measuring their work better". The things you support and the things EA supports don't seem to be at odds to me.

Expand full comment
author

Reasonable question, I'll probably try to write a post on this soon.

Expand full comment

I would be interested to read that.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

>There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that.

>I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.

Huh? So, you're saying that "we" the "many people" could in principle get their act together, but for some reason haven't gotten around to doing that yet, meanwhile EAs, in their bungling naivety, attempt to pick up the slack, yet this is somehow worse than doing nothing?

Expand full comment

Many people are getting their act together. Many donors are getting better at measuring actual outcomes instead of just trainings or other random nonsense. It's slow because the whole sector is a lumbering machine, but I don't see EAs picking up the slack. All I see are arcane arguments about AI and inside-baseball jargon. If they are 'picking up the slack', they're also doing a whole bunch of other things that drown it out.

Expand full comment

I use GiveWell to direct my donations, GiveWell is pretty much the central example of EA in my experience, and I'm not aware of another "small donor"-facing group which provides good information on what charities are most efficacious in saving lives (or other thing you care about). Do you have any recommendations?

I can fully believe that, e.g., AMF has spent money, or contracted work out, or some such thing, to help make sure that their interventions are the best, but I'm not aware of anybody besides GiveWell who aggregates it with an eye towards guiding donors to the best options, which is the thing I like EA for; most of what EA does is this sort of aggregation of data into a simple direction of action. (I also never see people criticizing EAs for GiveWell giving inaccurate numbers, so I assume the numbers are basically correct.)

Expand full comment

I use GiveWell as well; small-donor donations are extremely murky for larger organisations, and it's true that I have not seen anyone else make a better guide for small donors. There are definitely positive elements to the movement as well; I'm sceptical but not totally dismissive.

Expand full comment

> Economics has quantification of human lives.

Calculating where the money should be sent is one part. Actually sending the money is the other part. The improvement of EA is in actually sending the money to the right places, as a popular movement.

Expand full comment

This is an interesting question. Do you believe the subculture-y parts of the movement motivate people to actually send the money (instead of just saying they will)? If so, is the movement specifically tied to a time and place, such as current Silicon Valley, because different things might motivate different people to act?

Expand full comment

Definitely; most people are motivated by what their friends *do*.

When Christians go to church, they hear a story about Jesus saying that you should sell all your property and donate the money to the poor. Then they look around and see that none of their neighbors has actually sold their property. So they don't feel like they should either. They realize that "selling your property and donating to the poor" is something that you are supposed to verbally approve of, but not something you are supposed to actually do.

And this is not meant as an attack on Christians; more or less *everyone* is like this, I just used a really obvious example. Among the people who say they care about the environment, only a few recycle... unless it becomes a law. Generally, millions of people comment on every cause that "someone should do something about it", but only a few actually do something. If you pay attention, you may notice that those people are often the same ones (that people who do something about X are also statistically more likely to do something about Y).

I suspect that an important force is... people on the autistic spectrum, to put it bluntly. They have difficulty realizing (instinctively, without being consciously aware of it) that they are supposed to *talk* about how doing X is desirable, but never actually *do* X. They hear that X should be done, and they go ahead and try to do X. Everyone else says "wow, that was really nice of you" but also thinks "this is some weirdo I need to avoid". Unless there is a community that reaches a critical amount of autism, so that when someone goes and does X, some of their friends say "cool" and also do X. If a chain reaction starts and too many people do X, even the less autistic people succumb to the peer pressure, because they are good people at heart; they just have a strong instinct against doing good unless someone else does it first.

The rationalist community in the Bay Area is an example of a supercritical autistic community. (This is more or less what other people have in mind when they accuse rationalists of being a "cult".) Not everyone has the same opinions, of course; they are actually *less* likely to agree on things than the normies. But once a sufficient subset of them agrees that X should be done, they go ahead and actually start doing X as a subculture, whether X is polyamory or donating to the poor. This is my explanation of how Effective Altruism started, why nerds are over-represented there, why so many of them also care about artificial intelligence, and why normies are instinctively horrified but cannot precisely explain why (they agree verbally with the idea of giving to the poor; they just feel that it is weird that someone actually does it, since you are only supposed to talk about how "we should"; and weirdness horrifies normies because it lowers one's social status).

> is the movement specifically tied to a time and place, such as current Silicon Valley

Is there another place with such a concentration of autists, especially one that treats them with relative respect? (Genuine question; if there is, I want to know.) There are virtual communities, but those usually encourage people to do things in the virtual space, such as developing open-source software.

Expand full comment

Isn't this just an admission of failure then? If it doesn't scale past your subculture then it won't really accomplish much in the world. You help some people on a small-donor personal scale, which is nice, but the main outcome then is that you act extremely smug with a tiny real-world impact while there's not much reason for the rest of the world to pay attention to your movement because it only applies to a small number of people in very specific circumstances.

Also, and I guess this is kind of a stereotype, I think you have a pretty out-of-touch idea of how 'normies' work. Lots of people follow through on what they say they'll do, including a variety of kinds of charitable giving. Like...there's an entire aid industry of people who think you should help others and have devoted their lives to it. I could make double what I do in the private sector if not more, but I don't! Effectiveness is a separate question but _lots_ of people follow through on their (non-religious) moral commitments.

Expand full comment

> If it doesn't scale past your subculture then it won't really accomplish much in the world.

Not if the subculture is big enough, and some of its members make decent money. Also, the longer it exists, the more normies will feel like this is a normal thing to do, so they may join, too.

And yes, there was a lot of simplification and stereotyping. You asked how the subculture motivates people to actually send money; I explained what I believe to be the main mechanism.

Expand full comment

Decent comment, but some mistakes, and I'd like to write a few counter-arguments. I don't have time right now, but I will type a reply in a few days.

Expand full comment

IMO EA should invest in getting regulatory clarity for prediction markets. The damage done to the world by the absence of a collective sense-making apparatus is enormous.

Expand full comment
author

We're trying! I know we fund at least Solomon Sia to lobby for that, and possibly also Pratik Chougule; I don't know the full story of where his money comes from. It turns out this is hard!

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

As an enthusiastic short-termist EA, my attitude to long-termist EA has gone in the past year from "silly but harmless waste of money" to "intellectually arrogant bollocks that has seriously tarnished a really admirable and important brand".

Working out the most efficient ways to improve the world here and now is hard, but not super-hard. I very much doubt that malaria nets are actually the single most efficient place that I could donate my money, but I bet they're pretty close, and identifying them and encouraging people to donate to them is a really valuable service.

Working out the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably. I think that the two recent EA-adjacent scandals were specifically long-termist-EA-adjacent, and while neither of them was directly related to the principles of EA, I think both are very much symptomatic of the arrogance and insufficient learned epistemic helplessness that attract people to long-termist EA.

I think that Scott's list of "things EA has accomplished, and ways in which it has made the world a better place" is incredibly impressive, and it makes me proud to call myself an effective altruist. But if you look down that list and remove all the short-termist things, most of what's left seems either tendentious (can the EA movement really claim credit for the key breakthrough behind ChatGPT?) or nothingburgers (funding groups in DC trying to reduce risks of nuclear war, prediction markets, AI doomerism). I'm probably exaggerating slightly, because I'm annoyed, but I think the basic gist of this argument is pretty unarguable.

All the value comes from the short-termists. Most of the bad PR comes from the longtermists, and they also divert funds from effective to ineffective causes.

My hope is that the short-termists are to some extent able to cut ties with the AI doomers and to reclaim the label "Effective Altruists" for people who are doing things that are actually effectively altruistic, but I fear it may be too late for that. Perhaps we should start calling ourselves something like the "Efficiently Charitable" movement, while going on doing the same things?

Expand full comment

"Working out what the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably."

Agreed. I don't think that anyone trying to anticipate the consequences that an action today will produce in 100 years is even going to get the _sign_ right significantly better than chance.

Expand full comment

This.

Expand full comment

Completely agree with this. I've donated a few tens of thousands to the Schistosomiasis Control Initiative, but stopped earlier this year in disgust at what the overall movement was focussing on. It alarmed me that goals I'd previously presumed laudable were coming from a philosophy that could so easily be diverted into nonsense. I may start donating again, but EA has to do a lot to win me back. At the moment it's looking most likely I divert my donation to a community- or church-based group (I've fully embraced normie morality).

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

This seems like a bad reaction - just because people adjacent to the people who originally recommended the SCI to you are doing silly or immoral things does not mean that the SCI will not do more good per dollar donated than a community or church group.

I think "short-termist EA good" is a far, far more important message than "long-termist EA bad".

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I guess to explain in more detail -

EA depends on a certain set of assumptions which hold if you are a disembodied mind concerned only in the abstract with what is good for humanity.

But none of us are actually that disembodied mind, and it’s disingenuous to pretend and act as if we are.

The common sense morality position that you should look after your friends, family and community before caring for others, even if you could do more good for others with the same resources in some abstract sense, is in my opinion correct.

Specifically it’s correct because of the principles of justice and reciprocity. Take reciprocity first. I owe an approximately infinite amount to my parents. I owe a very large amount to my wider family, a lot to my friends, and quite a bit to my larger community and to my nation. All that I am, including my moral character, is because of these people.

As a concrete example, if my mother’s life depended on my giving her hundreds of thousands of dollars, perhaps for an experimental cancer treatment, I would do this without hesitation, even though that money could, by abstract calculation, save hundreds of lives.

I would argue it’s supererogatory to donate to charity in the developing world. It’s a good thing to do, and if you’re going to do it you may as well ask where your dollars will be well spent. But EA doesn’t address the argument from reciprocity that you owe far more to those close to you.

Next, the argument from justice. This is the other issue with basing donations on cold mathematical calculations. For example, right now if I were to donate to Doctors Without Borders, there’s a fair chance that my money would go to fund their operations in Gaza. Now, before the comments section blows up, I do believe that this particular charity in this particular instance is doing net good - but they’re famously apolitical and they use resources to treat terrorists as well as civilians. How much does that impact the lives saved per dollar of my donation, if there are some lives I’d rather not save? Who knows? EA don’t consider it their position to calculate. Considerations like this apply to every dollar spent in regions where the donor doesn’t understand, or even consider it their position to understand, the politics and the underlying reasons that all these preventable deaths are occurring.

I consider it, in retrospect, a logical and unfortunately inevitable outgrowth of EA’s philosophy that so much effort has now been hijacked by causes that arouse little or no sympathy in me. It was always a tagline of the movement that you should purchase utilons and not warm fuzzies with your charitable donations. That’s fundamentally not how people work. The much-derided warm fuzzies are a sure sign that you’re actually accomplishing something meaningful.

Expand full comment

I think this is a good list, even though it counts PR wins such as convincing Gates. 200k lives saved is good, full stop.

However, something I find hard to wrap my head around is that the most effective private charities, say the Bill & Melinda Gates Foundation (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373372/), have spent their money and had an impact that's orders of magnitude greater than EA's. They define their purpose narrowly and cleave to evidence-based giving.

And yet, they're not EAs. Nobody would confuse them either. So the question is less whether "EAs have done any good in the world"; the answer is of course yes. The question is whether fights like the boardroom drama and SBF and the rest actively negate the benefits conferred, on a net basis. The latter isn't a trivial question, and if the movement is an actual movement, instead of a lot of people kind of sort of holding a philosophy they sometimes live by, it requires a stronger answer than "yes, but we also did some good here".

Expand full comment
author

I think I would call them EAs in spirit, although they don't identify with the movement.

As I said above, I think "help create the infrastructure for a large group of normal people to do what Gates has done" is a decent description of EA.

I think Gates has more achievements than us because he has 10x as much money, even counting Moskowitz's fortune on EA's side (and he's trying to spend quickly, whereas Moskowitz is trying to delay - I think in terms of actual spending so far it's more like 50-1)

Expand full comment

I respect the delta in money, though it's not just that which causes Gates' success. He focuses on achievements a lot and has built extraordinary execution capabilities. A movement that tries to "create a decentralised Gates Foundation" would have to do very different things from what EA does. To achieve that goal requires a certain amount of winning. Not just in the realpolitik sense either.

And so when the movement then flounders in high-profile ways repeatedly, and demonstrates it does not possess that capacity, the goals and vision are insufficient to pull it back out enough to claim it's net positive. If you recall the criticisms made of EAs in the pre-SBF era, they're eerily prescient about today's world, where the problems have presented themselves.

Expand full comment

I think one of the keys to Gates' success is that he sets himself clear and measurable goals. He is not trying to "maximize QALYs" or "Prevent X-risk" in some ineffable way; he's trying to e.g. eradicate malaria. Not all diseases, and not even all infectious diseases, just malaria. One step toward achieving this is reducing the prevalence of malaria per capita. Whenever he spends money on anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.

Expand full comment

“Whenever he spends money on anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.”

I think GiveWell, a major recipient of funding and support from EA, is actually extremely analytical about whether its money is achieving its goals. I just don’t think the difference between Gates and EA is as profound as you think. After all, Gates had to determine that malaria was a good cause to take up, and if malaria is eradicated he’ll have to figure out what cause takes its place. I don’t think he’s drawing prospective causes out of a hat, do you? He’s figuring out where the money could do the most good. That’s something everyone can do, whether or not they have Gates’s money, and that’s the purpose of EA.

Expand full comment

At the risk of being uncharitable, I think that the difference between Gates and EA is that Gates saw that malaria was killing people; decided to stop it (or at least reduce it) ASAP; then used analytics to distribute his money most efficiently in service of that goal. EA saw that malaria was killing people; decided to stop it (or at least reduce it); used analytics to distribute its money most efficiently in service of that goal; then, as soon as it enjoyed some success, expanded the mission scope to prevent not merely malaria, but all potential causes of death now or in the long-term far-flung future.

Expand full comment

Say what you want about the vagaries of longtermism, you accurately assessed the risk that you were being uncharitable! I don't think it's fair to say that EA invests in fighting all causes of death—you can see that fighting widespread and efficient-to-combat deadly diseases still receives by far the largest share of GiveWell funds—and as far as the future goes, while we might disagree about AI risk, can we agree that future deaths from pandemics, for instance, are not an outlandish possibility and therefore might be worth investing in?

Expand full comment

I mean Gates, a brilliant tech founder, is really, really close to EA/rationality by default. If all charity was done by Bill, then EA would not have been necessary.

See also: Buffett

Expand full comment

Not quite. The Carter Center, for instance, and many others also exist. Still plenty of ways to do good ofc

Expand full comment

You can point to organizations that are, by EA standards, highly effective, and not make a dent in the issue of average effectiveness of charities/donations overall. If the effectiveness waterline were higher, the founders of EA would presumably not have been driven to do as they did, is my point.

And, EA is specifically focused on "important, tractable, and neglected" issues, so it's explicitly not trying to compete with orgs doing good work already.

Expand full comment

For what it’s worth, the “EA in spirit” framing struck me rather sourly. It feels like EA as a movement trying to take credit for lots of stuff it contributed to but was not solely responsible for. I am sympathetic to the charitable giving and think EAs mean well, but the movement is utterly consumed with extreme scenarios where the expected values are as dramatic as you want them to be, ironically because of a lack of evidence.

Expand full comment

It doesn’t seem clear whether the boardroom drama comes out good or bad. SBF is unfortunate, but it’s maybe unfair to pin that mainly on EA (at least they are trying to learn from it as far as it concerns them).

Expand full comment

It's unfair to pin SBF entirely on EA, though having him be a poster child for the movement all the while stealing customer money is incredibly on the nose. Especially since he used EAs as his recruiting pool and as part of his mythos.

Expand full comment

I consider Bill Gates an EA, since he's trying to give effectively. Most people don't try to give effectively (see the Harvard endowment)!

EA needs to split into “normal EA” and “exotic EA.” There's really not much to criticize GiveWell about.

Expand full comment

I would say considering Bill Gates an EA makes "what is EA" impossible to answer. Which is OK if it's meant to be like "science", but completely useless if it's about the movement. In that case there should not be a movement at all; it should splinter into specific things like Open Phil and GiveWell and whatnot.

Expand full comment

I don't understand why you put Anthropic and RLHF on this list. These are both negatives by the lights of most EAs, at least by current accounting.

Maybe Anthropic's impact will pay off in the future, but gathering power for yourself, and making money off of building dangerous technologies are not signs that EA has had a positive impact on the world. They are evidence against some form of incompetence, but I doubt that by now most people's concerns about the EA community are that the community is incompetent. Committing fraud at the scale of FTX clearly requires a pretty high level of a certain kind of competence, as did getting into a position where EAs would end up on the OpenAI board.

Expand full comment
author

"but I doubt that by now most people's concerns about the EA community are that the community is incompetent."

I think you're a week out of date here!

I go back and forth on this, but the recent OpenAI drama has made me very grateful that there are people other than them working on superintelligence, and recent alignment results have made me think that maybe having really high-skilled corporate alignment teams is actually just really good even with the implied capabilities progress risk.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

This gets at exactly the problem I have with associating myself with EA. How did we go from "save a drowning child" to "pay someone to work on superintelligence alignment"? The whole movement has been captured by the exact navel-gazing it was created to prevent!

Imagine if you joined an early abolitionist movement, but insisted that we shouldn't work on rescuing slaves, or passing laws to free slaves, but instead focused on "future slave alignment to prevent conflict in a post-slavery world" or some nonsense. The whole movement has moved very far from Singer's original message, which had some moral salience to people who didn't necessarily work intellectual problems all day. It's no surprise that EA is not trusted... imagine yourself in a <=110 IQ brain: it would seem obvious these people are scamming you, and seeing things like SBF just fits the narrative.

Expand full comment

Imagine EAs doing both though. Current and future problems. Different timelines and levels of certainty.

Like, obviously it's impossible to have more than one priority or to focus on both present and future, certain and potential risks, but wouldn't it be so cool if it were possible?

(Some of the exact same people who founded GiveWell are also at the forefront of longtermist thought and describe how they got there using the same basic moral framework, for the record.)

Expand full comment

Certainly it's possible, but don't you think one arm of this (the one that is more speculative and for which it is harder to evaluate ROI) is more likely to attract scammers and grifters?

I think the longtermism crowd is intellectualizing the problem to escape the moral duty inherent in the provocation provided by Singer, namely that we have a horrible moral crisis in front of us that can be addressed with urgency, which is the suffering of so many while we engage in frivolous luxury.

Expand full comment

Well I'm the kind of EA-adjacent person who prefers X-risk over Singerism, so that's my bias. For instance, I mostly reject Singer's moral duty framing.

A lot of X-risk/longtermism aligns pretty neatly with existing national security concerns, e.g. nuclear and bio risks. AI risk is new, but the national security types are highly interested.

OG EA generally has less variance than longtermism (LT) EA, for sure. Of course, OG EA can lead you to caring about shrimp welfare and wild animal suffering, which is also very weird by normie standards.

SBF was donating a lot to both OG EA and LT EA causes (I'm not sure of the exact breakdown). I certainly think EA leaders could have been a lot more skeptical of someone making their fortune on crypto, but I'm way more anti-crypto than most people in EA/rationalist circles.

Also, like literally the founders of GiveWell also became longtermists. You really can care about both.

The funny thing about frivolous luxury is that as long as it's contributing to economic growth, it's going to outperform much of the nominally charitable work that ended up either lighting money on fire or making things worse. (Economic growth remains the best way to help humans, and the fact that EAs recognize this is a very good thing.)

Expand full comment

> Economic growth remains the best way to help humans and the fact that EAs recognize this is a very good thing.

Probably agree in the short term, but even on this one I wouldn't claim to guess whether it's a net positive or negative 100+ years from now.

Expand full comment

No, I think people's concern is that the EA community is at the intersection of being very competent at seeking power, and not very competent at using that power for good. That is what at least makes me afraid of the EA community.

What happened in the OpenAI situation was that a bunch of people got into an enormous position of power and then leveraged that power in an enormously incompetent way (though of course, we still don't know yet what happened, and maybe we will hear an explanation that makes sense of the actions). The same is true of FTX.

I disagree with you on the promise of "recent alignment results". I think the Anthropic interpretability paper is extremely overstated, and I would be happy to make bets with you on how much it will generalize (I would also encourage you to talk to Buck or Ryan Greenblatt here, who I think have good takes). Other than that, it's mostly been continued commercial applications with more reinforcement learning, which I continue to think increases, not decreases, the risk.

Expand full comment

RLHF is not only part of a commercial product but also part of a safety research paradigm, which other EA's further improve upon. Such as with Reinforcement Learning from Collective Human Feedback (RLCHF): https://forum.effectivealtruism.org/posts/5Y7bPv259mA3NtHt2/bob-jacobs-s-shortform?commentId=J7goKQnpMFf97GZQF

Expand full comment

It is funny how "talking past each other" describes today's posts from Freddie and Scott. One is so focused on disparaging utilitarianism that even anti-utilitarians might think it too harsh, while the other points to many good things EA did without ever getting to the point of why we need EA as presently constituted, in the form of this movement. And part of that is conflating the definition of the movement as both 1) a rather specific group of people sharing some ideological and cultural backgrounds, and 2) the core tenets of evidence-based effectiveness evaluation that are clearly not exclusive to the movement.

I mean, you could simply argue that organizing people around a non-innovative but still sound common sensical idea that is not followed everywhere has its merits because it helps in making some things that were obscure become explicit. Fine. But it still doesn't necessarily mean that EA is the correct framing if it causes so much confusion.

"Oh but that confusion is not fair!..." Welcome to politics of attention. It is inevitable to focus on what is unique about a movement or approach. People choose to focus not on malaria (there were already charities doing that way before EA) but on the dudes seemingly saying "there's a 0.000001% chance GPT will kill the world, therefore give me a billion dollars and it will still be a bargain", because only EA as a movement considered this type of claim to be worthy of consideration under the guise of altruism.

I actually support EA, even though I don't do nearly enough to consider myself charitable. I just think one needs to go deeper into the reasons for criticism.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

Zizek often makes the point that the history of Christianity is a reaction to the central provocation of Christ, namely that his descent to earth and death represents the changing of God the Father into the Holy Spirit, kept alive by the community of believers. In the same way the AI doomerists are a predictable reaction to the central provocation of the Effective Altruists. The message early on was so simple: would you save a drowning child? THEY REALLY ARE DROWNING AND YOU CAN MAKE A DIFFERENCE NOW.

The fact that so many EAs are drawn to Bostrom and MacAskill and whoever else is a sign that many EAs were really in it to prove how smart they are. That doesn't make me reject EA as an idea, but it does make me hesitant to associate myself with the name.

Expand full comment

I don't understand why being drawn to Bostrom or SBF suggests what you want is to prove how smart you are.

Expand full comment

EA as presented by Singer, like Christianity, was definitely not an intellectually difficult idea. The movement quickly became more intellectualized, going from (1) give in obviously good ways when you can, to (2) study to find the best ways to give, to (3) the best ways can only be determined by extensive analysis of existential risk, to (4) the main existential risk is AI, so my math/computer skills are extremely relevant.

The status game there seems transparent to me, but I'd be open to arguments to the contrary.

Expand full comment

The AI risk people were there before EA was a movement, and in fact there was some talk of separating them out so that the global poverty side could look less weird by comparison. Vox journalist, EA and kidney haver Dylan Matthews wrote a pretty scathing article about the inclusion of X-risk at one of the earlier EA Global conferences. Talking about X-risk with Global Poverty EAs, last time I checked, was like pulling teeth.

Maybe it is true that there's an intellectual signalling spiral going on, but you need positive evidence that it's true, and not just "I thought about it a bit and it seemed plausible".

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

I don't know what could constitute evidence of intellectual spiraling, but I know that for me personally, I was drawn to Singer's argument that I could save a drowning child. Reading MacAskill or Bostrom feels not simply unrelated to that message; it seems like an EA anti-message to me.

Look, I know someone is going to think deeply about X-risk and Global Poverty (capitalized!), and get paid for it. But paying people to think about X-risk seems like the least EA thing possible, given there is no shortage of suffering children.

Expand full comment

It's unwise to go "this is not true" and then immediately jump to a very specific theory of status dynamics when it's not supported by any evidence. Why not just say "AI risk investment seems unlikely to turn out as well as malaria nets, I do not understand why AI riskers think what they do".

Expand full comment

I have no way of evaluating whether my investment in AI risk analysis will ever pay off, nor how much the person I am paying to do it has even contributed to avoiding AI risk. I don't even know what would constitute evidence that this is mere navel gazing, other than observing that it may be similar to other human behavior in that people want to be paid to do things they enjoy, and thinking about AI-risk is fun and/or status enhancing.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Interesting! My reaction to Singer was: He is making such an unreasonably big ask that I was inspired to reject not only his ethical stance but the entire enterprise of ethics. Yetch!

Expand full comment

I am also unsure how to provide quantitative evidence on this, but I'd just say that while the people working on AI safety, or being interviewed about it at 80,000 Hours, are likely mathy/comp-sci nerds, many people are concerned about this, as they are about other existential risks, because they are convinced by those arguments, while lacking those skills.

Like I say, it's hard to provide more than anecdotes, but from observation (people I hang out with and read) and introspection: I'm a biologist, but while that gives me some familiarity with the tech and the jargon, I don't think my concern with bioterrorism comes from that, and my real job is in any case very unrelated.

I guess I could ask you if you feel the same way about the people worried about nuclear war risk, bio risk, etc. Do you feel like they are in a status game, or drawn to it because improving on it is something related to their rare skills?

Expand full comment

Thinking about this personally: I would much rather "think about AI-risk" than do my job training neural nets for an adtech company; indeed, I do spend my free time thinking about X-risk. I think this is probably true for most biologists, nuclear engineers, computer scientists and so on.

The problem is that preventing existential catastrophe is inherently not measurable, so it attracts more status seekers, grifters, and scammers, just as priestly professions have always done. This is unrelated to whether the source material is biology or computer science. I was probably wrong to focus on status particularly, rather than a broader spectrum of poor behavior.

That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.

Expand full comment

I could see how it could attract people who like being 'above it', because they get the theoretical risk even if the empirical outcomes are not observable (because we are either safe or dead). But again, while this is hard to quantify or truly verify (eyeronic), I'm not at all sure it is the main motivation. Not sure how to proceed from here, except to note that when someone wants to increase biosecurity (say, Kevin Esvelt) you don't get that sort of reaction as much as you do with AI, and I'm still not sure why.

Expand full comment

I don't know that the reaction is much different when biosecurity means "take this drug/vaccine to protect yourself" instead of "make sure this lab is secure". IOW, the extent of the difference is probably explained by the implied actions for the general public.

Expand full comment

So, um, do I understand correctly that you unironically quote Zizek and yet accuse *someone else* of being drawn to certain thinkers to prove how smart they are?

Expand full comment

Haha, I deserve that one :)

I think activity which is difficult to measure attracts all forms of grifters, scammers, and status seekers.

That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.

Expand full comment

I see your point, but if you look closely at the core concept of EA, it's not exactly "doing measurable charities"; it's "doing the most good". Of course, to optimize something you need to be able to measure it in some way, but all such measurements are estimates (with varying degrees of uncertainty), and you can, in principle, estimate the impact of AI risk mitigation efforts (with a high degree of uncertainty). Viewed from this angle, the story becomes rather less dramatic than "EAs have turned into the very thing they were supposed to fight", and becomes more along the lines of arguing about estimation methods and about the point at which a high-risk/high-reward strategy turns into a Pascal's Wager.

Also you're kind of assuming the conclusion when saying that people worried about AGI are scammers and grifters and want to show they're smart. That would be true if AGI concerns were completely wrong, but another alternative is that they are correct and those people (at least many of them) support this cause because they've correctly evaluated the evidence.

Expand full comment

What you are saying would be true if the pool of people stayed static, but it doesn't. Scammers will join the movement, because the promise of large payouts far in the future with small probability is a scammer's (and/or lazy status seeker's) paradise.

Thinking about X-risk is fun. In fact getting rich is good too because it will increase my ability to do good. Looks like EA is perfect for me after all! I don't even have to save that drowning child, as the opportunity cost in reduced time thinking about AI risk is higher than the benefits of saving it because my time thinking about AI will save trillions of future AI entities with some probability that I estimated. How lucky I am that EA tells me to do exactly what I wanted to do anyway!

Expand full comment

So your point is that AGI safety is bad because some hypothetical person can use it as an excuse to not donate money and not save a drowning child? What a terrifying thought, yeah. We can't allow that to happen.

Expand full comment

Yes, my point is that it's intellectual sophistry that is used to insulate oneself from the moral duty implied by the fundamental EA insight. That is, one can still feel good about doing "EA" while completely ignoring the duties implied by the message.

Expand full comment

Sorry to defend "their side" but I'm a not hypothetical person who actually made this calculation. Most of my donations still go to global poverty

I'm not going to describe in detail what I thought, but the absolute first thing on my mind was the opportunity cost, and that I hated being in the epistemic position where I thought the best use of money was AI risk, and not the much more convenient and socially acceptable global poverty.

Expand full comment
Nov 28, 2023·edited Nov 28, 2023

Thank you for writing this. It's easy to notice the controversial failures and harder to notice the steady march of small (or not-so-small) wins. This is much needed.

A couple notes about the animal welfare section. They might be too nitty-gritty for what was clearly intended to just be a quick guess, so feel free to ignore:

- I think the 400 million number for cage-free is an underestimate. I'm not sure where the linked RP study mentions 800 million — my read of it is that total commitments at the time in 2019 (1473 total commitments) would (upon implementation) impact a mean of 310 million hens per year. The study estimated a mean 64% implementation rate, but also there are now over 3,000 total cage-free commitments. So I think it's reasonable to say that EA has convinced farms to switch many billions of chickens to cage-free housing in total (across all previous years and, given the phrasing, including counterfactual impact on future years). But it's hard to estimate.

- Speaking of the 3,000 commitments, that's actually the number for cage-free, which applies to egg-laying hens only. Currently, only about 600 companies globally have committed to stop selling low-welfare chicken meat (from chickenwatch.org).

- Also, the photo in this section depicts a broiler shed, but it's probably closer to what things look like now (post-commitments) for egg-laying hens in a cage-free barn than to what they used to look like. Stocking density is still very high in cage-free housing :( But just being out of cages cuts total hours of pain in half, so it's nothing to scoff at! (https://welfarefootprint.org/research-projects/laying-hens/)

- Finally, if I may suggest a number of my own: if you take the estimates from the welfare footprint project link above and apply it to your estimate for hens switched to cage-free (400 million), you land at a mind-boggling three trillion hours, or 342 million years, of annoying, hurtful, and disabling pain prevented. I think EA has made some missteps, but preventing 342 million years of animal suffering is not one of them!
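For anyone who wants to sanity-check that last figure, here is a minimal back-of-the-envelope sketch in Python. The per-hen number is my own assumption, reverse-engineered from the totals quoted above rather than taken directly from the Welfare Footprint data:

```python
# Rough check of the estimate above. PAIN_HOURS_AVERTED_PER_HEN is an
# assumed figure, back-derived from the quoted totals; the real Welfare
# Footprint numbers are broken down by pain intensity and housing system.

HENS_SWITCHED = 400_000_000          # Scott's estimate of hens moved to cage-free
PAIN_HOURS_AVERTED_PER_HEN = 7_500   # assumed hours of pain averted per hen

total_hours = HENS_SWITCHED * PAIN_HOURS_AVERTED_PER_HEN  # 3.0e12 hours
total_years = total_hours / (24 * 365)                    # ~342 million years

print(f"{total_hours:.1e} hours = {total_years / 1e6:.0f} million years")
```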

Expand full comment
Nov 28, 2023·edited Nov 29, 2023

If you are interested in global poverty at all, GiveDirectly ran a true 1-to-1 match, which has now finished.

You can donate here if you choose: https://www.givedirectly.org/givingtuesday2023/

This was the only time GiveDirectly has messaged me, and I at least am glad that I could double my impact.

Edit: updated comment to reflect all the matching has been done, also to erase my shameful mistake about timing.

Expand full comment

If you disagree that this is an effective use of money, that's fine! Just wanted to make sure the people who wanted to see it do.

Expand full comment

EA makes much sense given mistake theory but less given conflict theory.

If you think that donors give to wasteful nonprofits because they’ve failed to calculate the ROI in their donation, then EA is a good way to provide more evidence based charity to the world.

But what if most donors know that most charities have high overhead and/or don’t need additional funds, but donate anyway? What if the nonprofit sector is primarily not what it says it is? What if most rich people don’t really care deeply about the poor? What if most donors do consider the ROI — the return they get in social capital for taking part in the nonprofit sector?

From this arguably realist perspective on philanthropy, EA may be seen to suffer the same fate as other philanthropic projects: a mix of legitimate charitable giving and a way to hobnob with the elite.

It’s still unknown whether the longtermist projects represent real contributions to humanity or just a way to distribute money to fellow elites under the guise of altruism. And maybe it will always be unknown. I imagine historians in 2223 debating whether 21st century x-risk research was instrumental or epiphenomenal.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

I think that early EA was unaware of the "conflict theory" part of the equation; there are mentions from time to time that they expected "direct donations best" to be the easy part and "donate more" to be the hard part, and found it to be the opposite. I think this has changed a good bit since.

But, tbh, I don't care about the conflict theory part. In the end, there are people who want to direct donations best, GiveWell appears to be the best way to do so transparently, and it (GiveWell) is the primary accomplishment of the EA movement IMO. If some people don't care about trying to do the right thing as much as possible, that's fine, they can go fuck in the mud for all I care.

Expand full comment

Or they could enter the movement and take it over from the inside….

Expand full comment

Correction to footnote 13: Anthropic's board is not mostly EAs. Last I heard, it's Dario, Daniela, Luke Muehlhauser (EA), and Yasmin Razavi. They have a "long-term benefit trust" of EAs, which by default will elect a majority of the board within 4 years (electing a fifth board member soon—or it already happened and I haven't heard—plus eventually replacing Daniela and Luke), but Anthropic's investors can abrogate the Trust.

(Some sources: https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2, https://www.lesswrong.com/posts/6tjHf5ykvFqaNCErH/anthropic-s-responsible-scaling-policy-and-long-term-benefit?commentId=SoTkntdECKZAi4W5c.)

Expand full comment
author

Are at least Daniela and Luke not EAs?

I knew all of this except "abrogate the trust"; do you know the details there?

Expand full comment

Oh, sorry, Daniela and Dario are at-least-EA-ish. (But them being on the board doesn't provide a check on Anthropic, since they are Anthropic.)

The details have not been published, and I do not know them. I wish Anthropic would publish them.

Expand full comment

What's your response to Robin Hanson's critique that it's smarter to invest your money so that you can do even more charity in 10 years? AFAIK the only time you addressed this was ~10 years ago in a post where you concluded that Hanson was right. Have you updated your thinking here?

Expand full comment
author

I invest most of my money anyway; I'll probably donate some of it eventually (or most of it when I'm dead). That having been said, I think there are some strong counterarguments:

- From a purely selfish point of view, I think I get better tax deductions if I donate now (for a series of complicated reasons, some of which have to do with my own individual situation). If you're donating a significant amount of your income, the tax deductions can change your total amount of money by a few percent, probably enough to cancel out many of the patient philanthropy benefits.

- Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.

- There's a philanthropic interest rate that competes with the financial interest rate. If you fund a political cause today, it has time to grow and lobby and do its good work. If you treat malaria today, the people you saved might go do other good things and improve their local economy.

- Doing good becomes more expensive as the world gets better and philanthropic institutions become better. You used to be able to save lives for very cheap with iodine supplementation, but most of those places have now gotten the iodine situation under control. So saving lives costs more over time, which is another form of interest rate increase.

- If you're trying to prevent AI risk, you should prefer to act early (when there's still a lot of time) rather than late (when the battle lines have already been drawn, or the world has already been destroyed, or something).

I do super respect the patient philanthropy perspective; see https://forum.effectivealtruism.org/topics/timing-of-philanthropy for more discussion.
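To make the "philanthropic interest rate" point concrete: giving now beats investing-then-giving whenever the rate at which giving compounds (or at which doing good gets more expensive) exceeds the financial rate of return. A toy sketch, with all rates invented purely for illustration:

```python
# Toy model of give-now vs. invest-then-give. All rates are made up.

def impact_of_waiting(donation, years, financial_rate, philanthropic_rate):
    """Impact, in today's units, of investing for `years` and then donating.

    `philanthropic_rate` lumps together the effects listed above: early
    giving compounding over time, and cost-per-life rising as the
    cheapest interventions get used up.
    """
    grown_donation = donation * (1 + financial_rate) ** years
    impact_per_dollar = 1 / (1 + philanthropic_rate) ** years
    return grown_donation * impact_per_dollar

print(impact_of_waiting(10_000, 10, 0.07, 0.00))  # ~19,672: waiting wins
print(impact_of_waiting(10_000, 10, 0.07, 0.10))  # ~7,584: giving now wins
```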

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> If you fund a political cause today

I have a hard time viewing "starting a political cause to further your own worldview" as altruistic, or even good. Doesn't normal self-interest already provide an oversupply of political causes? And does convincing smart people to become lobbyists really result in a net benefit to the world? I think a world where the marginal engineer/doctor/scientist instead becomes a lobbyist or politician is a worse world.

>If you treat malaria today, the people you saved might go do other good things and improve their local economy.

That's an interesting claim, but I think it's unlikely to be true. Is economic growth in, say, the Congo limited by the availability of living humans? A rational expectation for the good a hypothetical person will do is the per capita income of their country minus the average cost of living for that country, and for most malaria-afflicted countries that surplus is going to be effectively zero. In almost all circumstances I think you get a higher ROI investing in a first-world economy.

>Doing good becomes more expensive as the world gets better

First world economies will also deliver more value over time as the world gets better. Investing in world-changing internet startups used to be easier but good luck finding the next Amazon now that the internet is mature. You should invest your money now so that the economic engine can maximize the growth of the next great idea. I'm very skeptical that the ROI of saving a third world life will grow faster than a first world economy will.

The strong form of this argument is basically just that economic growth is the most efficient way to help the world (as Tyler Cowen argues). I've never seen it adequately addressed by the EA crowd, but thanks for those links. Exponential growth is so powerful that it inevitably swamps any near-term linear intervention. If you really care about the future state of the world, then it seems insane to me to focus on anything but increasing the growth rate (modulo risks like global warming). IMO any EA analysis that doesn't end with "and this is why this intervention should be expected to boost the productivity of this country" is, at best, chasing self-satisfaction. At worst it's actively making the world worse by diverting resources from a functional culture to a non-functional one.

Expand full comment

Boy imagine thinking about what exponential growth could do if it applies to AI. Crazy.

Lots of EAs like Cowen, and EAs in general are way more econ-pilled than normal charities/NGOs are. One of the strong reasons for AI development is achieving post-scarcity utopia. GMU is practically rationality/EA-adjacent, Hanson being the obvious case.

Also, Cowen himself is a huge proponent of supporting potential in places like Africa and India!

If you're a Cowen-style "economic growth plus human rights" kind of person then I think the only major area of disagreement with EA is re: AI risk. But Cowen and OG EA are highly aligned.

Expand full comment

Not sure about your situation, in that you run a couple of businesses, but in general isn't the most tax-effective way to donate by donating stock, since the donor gets the write-off and the receiver gets the increased value without the capital gains being taxed?

(You can, of course, pursue this donation mechanism both now and later.)

https://www.fidelitycharitable.org/articles/4-reasons-to-donate-stock-to-charity.html
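If I have the mechanics right, the arithmetic looks something like this toy example (simplified US-style treatment; every number here is an assumption for illustration, not tax advice):

```python
# Toy illustration of donating appreciated stock vs. selling and
# donating cash. All rates and amounts are hypothetical.

stock_value = 10_000    # current market value of the shares
cost_basis = 4_000      # hypothetical original purchase price
cap_gains_rate = 0.15   # assumed long-term capital gains rate
income_tax_rate = 0.35  # assumed marginal income tax rate

# Donate the shares directly: deduct the full value; the gain is never taxed.
deduction_saving = stock_value * income_tax_rate                 # 3,500
gains_tax_avoided = (stock_value - cost_basis) * cap_gains_rate  # 900

# Selling first and donating the after-tax cash would forfeit the 900.
print(deduction_saving + gains_tax_avoided)  # 4,400 total tax benefit
```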

Expand full comment

> - Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.

Not gonna argue with this, but: Are your donations really visible? I mean, I don't even *know* that you donated a kidney.

If you amended it to "important for people to hear that I am donating to things" it would not have nagged at me. On the other hand, I haven't come up with a phrasing (even that one) that doesn't have a faint echo of "important that I look like I'm donating" so maybe your version is as good as it can get.

Expand full comment

> I think the AI and x-risk people have just as much to be proud of as the global health and animal welfare people.

I disagree. The global health people have actual accomplishments they can point to. It's not just speculative.

Expand full comment

I am a bit uneasy about claiming some good is equivalent to, say, curing AIDS or ending gun violence: these are things with significant second-order effects. For example, pending better information, my prior has it that the greatest impact of gun violence isn't even the QALYs lost directly in shootings, but the vastly greater number of people being afraid (possibly even of, e.g., going outside at night), the greater number of people injured, decreased trust in institutions and your fellow man, young people falling into a life of crime rather than becoming productive members of society, etc., etc. Or, curing AIDS would not just save some people from death or expensive treatment, but would erase one barrier to condom-free sex that most people would profess a preference for (that's a lot of preference-satisfaction when considering the total number of people who would benefit), though here there's also an obvious third-order effect of an increased number of unwanted pregnancies (which, as a matter of fact, doesn't even come close to justifying not curing AIDS, but it's there).

Now, I'm entirely on board with the idea of shutting up and calculating, trying your best to estimate the impact (or "something like that": I've been drawn to virtue ethics lately, but a wise, prudent, just and brave person - and taking up this fight when it goes so far against social conventions requires bravery, too - could not simply wave away consequentialist reasoning as though it were nothing), and to do that you have to have some measure of impact, like QALYs. Right. But I think the strictly correct way of expressing that is in abstract QALYs that by construction don't have higher-order effects of note. Comparing some good thing to some other thing, naively, without considering second-order effects when those are significant or greater than the first-order effects, seems naive.

And by my reckoning that's also a part of the pushback that EA faces in general: humans notoriously suffer from scope neglect, and when thinking about the impact of gun violence, they don't think of gun fatalities times n (most of the dead were gangsters who had it coming anyway), but of the second- and greater-order impacts they themselves experience vividly, so focusing on the exact number of dead seems wrongheaded. And in this case they might be right, too. (Of course, EA calculations can and should factor in nth-order effects if they do seem like they would matter, and I would hazard a guess that's what EAs often do, but when people see the aforementioned kinds of comparisons, in my opinion they would be right to judge them naive.)

Which reminds me of another argument in favor of virtue ethics: practical reasoning is often "newcomblike" (https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm), that is to say the method of your reasoning matters, just like it does in the original paradox. "Ends don't justify the means" isn't a necessary truth: it's a culturally evolved heuristic that is right more often than not, making some of us averse to nontrivial consequentialist reasoning. "I have spotted this injustice, realized it's something I can actually do something about [effectiveness of EA implicitly comes in here], and devoted myself to the task of righting the wrong" is an easier sell than "you can save a life for n dollars".

Expand full comment

Wow, it's gotta be tough out there in the social media wilderness. Anyway, just dropped by to express my support for EA; hope the current shitstorm passes and the [morally] insane people of Twitter move on to the next cause du jour.

Expand full comment

I think it's worth asking why EA seems to provoke such a negative reaction -- a reaction we don't see with charitable giving in general or just generic altruism. I mean claiming to be altruistic while self-dealing is the oldest game in town.

My theory is that people see EA as conveying an implied criticism of anyone who doesn't have a coherent moral framework or theory of what's the most effective way to do good.

That's unfortunate, since while I obviously think it's better to have such a theory, that doesn't mean we should treat not having one as blameworthy (any more than we treat not giving a kidney, or not living like a monk and giving everything you earn away). I'd like to figure out a way to avoid this implication but I don't really have any ideas here.

Expand full comment

It's funny that you mention giving a kidney, since Scott's post on donating his kidney got exactly the same reaction.

Expand full comment

I've certainly seen criticism that seems to boil down to either: a) they are weird and therefore full of themselves b) they influence Bay Area billionaires and are therefore bad.

Expand full comment

One can do some "market research" by reading r/buttcoin comments about SBF, which take occasional pot-shots at EA. Some of it is just cynicism about the idea of doing good (r/buttcoin self-selects for cynics). But you can also see the distaste that "normal people" have for the abstract philosophizing behind longtermist EA, especially when it leads to actions that are outwardly indistinguishable from pure greed.

E.g. https://www.reddit.com/r/Buttcoin/comments/16mxkji : "I'm sure it's only a matter of time before we discover why this was actually not only an ethical use of funds, but the only ethical use once you consider the lives of 10^35 future simulated versions of Sam."

The folks who dislike charity EA still confuse me. But they do crop up in the Marginal Revolution comments whenever EA is mentioned.

E.g. https://marginalrevolution.com/marginalrevolution/2023/01/my-st-andrews-talk-on-effective-altruism.html?commentID=160553474 : "The idea that someone should disregard their family, friends, neighbors, cultural group, religious affiliation, region, state, and/or nation in order to do the most 'good' is absurd on its face and contrary to nature."

Expand full comment

My sense is that ultimately that comes from the sense that they are being condescended to/tricked by people who are essentially saying: I'm soo much smarter than you and that means I get to break all the rules.

It's hard, because I do think it's important to be able to say: hey, intuitions are really often wrong here. But the problem is that there's a strong tendency for people to replace intuitions with whatever people with a certain sort of status are saying, which then is problematic.

Expand full comment

Sorry, should have started with: that's a good idea and I'll try to do that!

Expand full comment

> "The idea that someone should disregard their family, friends, neighbors, cultural group, religious affiliation, region, state, and/or nation in order to do the most 'good' is absurd on its face and contrary to nature."

Ah, yes, religious affiliation, state, and nation: things that totally exist in nature. Does this guy think dogs are arguing about Protestantism? Does he believe that owls have organized a republic in Cascadia?

Expand full comment

To be fair, the average Marg Rev comment is not as... let us say "intellectual"... as the average ACX comment. And the commenters can be quite grumpy.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Oh man, I hadn't seen that one before about the parents urging him to milk the cow for their benefit. I was aware he used FTX funds to buy them a holiday home, and that he was donating to his mother/his brother and their good causes, but blatant "are you sending us 7 million or 10 million in cash, please clarify" - his parents were way more involved in the entire mess than I suspected.

"Despite knowing or blatantly ignoring that the FTX Group was insolvent or on the brink of insolvency, Bankman and Fried discussed with Bankman-Fried the transfer to them of a $10 million cash gift and a $16.4 million luxury property in The Bahamas. Bankman and Fried also pushed for tens of millions of dollars in political and charitable contributions, including to Stanford University, which were seemingly designed to boost Bankman’s and Fried’s professional and social status at the expense of the FTX Group, and by extension, its customers and other creditors. Additionally, Fried, concerned with the optics of her son and his companies donating money to the organization she co-founded and other causes she supported, encouraged Bankman-Fried and others within the FTX Group to avoid (if not violate) federal campaign finance disclosure rules by engaging in straw donations or otherwise concealing the FTX Group as the source of the contributions."

Possible big happy family reunion in jail? ☹

Also what the heck with 7 million in cash, were they walking around with suitcases full of dollar bills or what? Every time I read something about FTX that makes me go "Well *that* was no way to run a business, how could they do that?", something new pops up to make me go "Wow, they dug the hole even *deeper*".

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

'But the journalists think we’re a sinister conspiracy that has “taken over Washington” and have the whole Democratic Party in our pocket.'

What a very, very different world it would be if that were actually the case...

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

A post like this, and its comments, are bizarre to someone whose world was the 20th century, not the 21st. All who come at the topic seem unaware (or must be pretending?) that there was once a big and novel movement that begat several large non-profits and scores of smaller grassroots ones - and none of the issues and concerns of that once-influential cause even clears the narrow bar of the EAs.

Expand full comment

That's an interesting comment. Could you elaborate on which movement(s) you have in mind? There were so _many_ movements in the 20th century, both benign and lethal, that I would like to know the specific one(s) you mean.

Expand full comment

The conservation movement.

It was especially attractive to people who might, perhaps, be viewed as analogous to the sort of folks currently drawn to EA. But the value systems being so profoundly incompatible, I suppose they must not be the *same* people after all.

Come to any conservation-related meeting or workday. Nothing but Boomers, and even older than Boomers. It will die with them, although they didn't originate it. Of course, it's not too late to talk to Boomers about this subject - but almost too late - and that would require a deal of humility, and it is more fun to hate on Boomers en masse.

Expand full comment
Comment deleted
Expand full comment

Conservationists can have no truck with a worldview whose first principle is maximizing the number of lives of the one creature on the planet that has shown itself heedless of the harm in filling all niches to the exclusion of other forms of life.

As for climate change, it tends to be invoked only in connection with poor people (to be sure by its loudest activists, not just EAs insofar as there is a difference). As such, it belongs under some other rubric than conservation. Simple leftism in most cases.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Many Thanks!

My wild guess is that some conservation concerns overlap with some EA animal rights concerns, but pretty weakly.

It _is_ weird how some movements just quietly fade away. I wonder which hot current movements will fade out into purely geriatric meetings in 20 years...

I just checked in google trends

https://trends.google.com/trends/explore?date=all&geo=US&q=%22old%20growth%20forests%22&hl=en

"old growth forests" dropped by about 4X from 2004 to about 2009 and has been roughly level since then.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

The big anti-clear-cutting campaigns were in the 80s/90s. They were not successful legislatively but in some ways influenced the Forest Service for the better. In my own state, the breakup of the big private timber holdings was the big loss.

The movement did not fade away, although clearly raising children utterly removed from nature didn’t help. That was bound to end badly.

No, it was Soros'ed, in effect. Not Soros himself but another such rich guy. He used his $$ to take over/transform the once-venerable Sierra Club, deliberately - perhaps because it was associated with some famous battles - away from its mission and toward brown people, very Soros-like in ideology. The conservationists were exiled. All this can be dug up with persistence. It was openly done about 25 years ago.

And so it was eventually with all the nationals. The good work they still manage to do, the people in the field do in opposition to - in spite of - their boards and national leadership, who've not had the courage to explicitly change their charters but have enthusiasm only for the usual "woke" stuff. Their staff's missives from DC do not in the main reference wildlife or open space or endangered species - and never more population!! - but more typically are marvels of condescension - let us explain something called Ramadan to you! - and stale retreads about how nature is people! Insert identity-group label du jour.

The cause of urban beautification and improvement is another that has had to die because it didn’t conform to the new theses.

Expand full comment

Many Thanks! I hadn't known that the conservationist organizations had been taken over. "The good work they still manage to do, the people in the field do in opposition to - in spite of - their boards and national leadership, who’ve not had the courage to explicitly change their charters but have enthusiasm only for the usual “woke” stuff." That is appalling. Thanks very much for informing me about it.

Expand full comment

Sorry if this sounds like a bilious, and at the same time corny, question, but does EA give any thought to contraception and population control? I know the word "control" has sinister undertones, but I mean education and incentives and similar.

If the population in countries like India and some in Africa, among other places, keeps increasing, then all your good work in medicine will be for nothing, and maybe even counter-productive! It will also nullify efforts to reduce carbon emissions.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Yes, there's a fudge term in their spreadsheet for "effect on fertility". To be a complete nerd, it's probably my least favorite cell on a Givewell spreadsheet.

Also re: carbon emissions, Open Philanthropy looked at global warming as a poverty intervention, and essentially found that increasing carbon emissions, under many different models of development, means that on average fewer people die. This is because increasing carbon emissions means that the economy is growing quicker, and there are more things like air conditioning, or improved logistics, that can prevent the worst ravages of global warming.

Expand full comment

That sounds remarkably self-certain about how far the environment can be further pushed without going into failure modes that can't just be patched by plugging in a few extra air conditioning boxes. The impression I'm getting from climate science is that it's looking quite scary.

Expand full comment

I said many, not all!

If you want to take a look at their logic it's at

https://www.openphilanthropy.org/research/climate-change-impacts/

Also I believe there was a recent report saying that the current pattern of warming eliminates a lot of models with much worse outcomes. Sorry for not linking.

Expand full comment

>Open Philanthropy looked at global warming as a poverty intervention

Do you happen to have a link for that? All I could find on their website was a fairly uninformative write-up of a "shallow investigation" from 2013, and it didn't seem to talk about the claims you are referring to here.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

https://www.openphilanthropy.org/research/climate-change-impacts/

Is the 2013 investigation I'm talking about. I believe what you found was their overall assessment on grant making, also done in 2013.

Under the "Growth Scenarios" heading:

Excerpting

> As discussed below, the projected impacts of climate change depend a lot on how else the world changes over the coming decades. Accordingly, the IPCC Fourth Assessment Report considers six primary “socio-economic development scenarios,” with different projections for population growth, economic growth, and factors related to CO2 emissions (e.g., degree of transition to clean energy)

Expand full comment

Thanks, I see it now. Clicking through to the IPCC reports on that page — wow, those are some very weird documents that are very difficult to decipher, let alone evaluate. The "Summary for Policymakers" is particularly bad, since it doesn't explain anything that a policymaker might want to know: what are the expected economic impacts of various climate change scenarios, what are the expected climate change impacts of various economic development scenarios, and what evidence and assumptions are these expectations based on.

I guess that's not entirely surprising for a document drafted by a committee of an intergovernmental organization, but it has the unfortunate consequence that the document itself is basically useless. All the actual policy implications are going to be transmitted orally, by people who can (claim to) explain to policymakers what the committee members were really thinking. (Maybe the idea is that the scenario descriptions could be useful for policymakers working in areas *other than* climate policy?)

I'm not quite sure what to make of it, to be honest. If anything, this mostly just pushes me in the direction of "we have no idea what the future is going to be like."

Expand full comment
author

I don't know that much about this, but my understanding is:

1. Nobody's sure whether lowering fertility is good or bad right now. Past predictions of food shortages and population collapses haven't panned out, having more people seems to help the economy, and there are utilitarian arguments for more people too (people prefer to exist!)

2. The clearest way to decrease fertility is to help a country develop and become richer. This is definitely working in Africa (where fertility rates have gone from ~6 to ~4 over the past 15 years!) and India (where fertility rates have gone from 5 to 2 over the past 50 years and are projected to get to 1.2 by 2050!). Helping economies develop already seems like a good idea (and EA is already doing a lot of things they think will help with this - even curing diseases is an economic intervention), and I don't think they think other methods are necessary at this point, especially given such methods' past ethical lapses.

Expand full comment

> people prefer to exist

We'd need to ask a few non-existing people to be sure of that one.

Expand full comment

The total fertility rate of India, i.e. the (synthetic) number that indicates the expected number of children per woman in her lifetime, has already gone below the replacement rate of 2.0 (1.76 in 2022, according to Wikipedia), meaning that unless there's a major fertility boost (unlikely) or an immigration boost not matched by greater emigration (very unlikely), the number of people in India is bound to plateau and then start decreasing at some point in any case.

In ever greater parts of the world, the problem is not too many children being born but too few, which will have greater and greater repercussions vis-a-vis labor availability, the economy, etc. in the long run.

Expand full comment

I generally agree, but where did you get the 1.76 from? I follow those numbers, and all my sources agree on roughly 2.0-2.1 for India in 2022 and 2023, see

https://en.wikipedia.org/wiki/List_of_countries_by_total_fertility_rate

Replacement rate is a bit higher than 2.0 because not all women reach fertility age; it's more around 2.1. So I agree that India is probably below replacement rate, most of its states are clearly below, and the trend is going down further.

Expand full comment

It's listed as such here in the UN estimates table: https://en.wikipedia.org/wiki/Demographics_of_India

Now that I think about it I agree that it's suspiciously low there, but still, as you said, the other estimates are below replacement anyhow, so it's not a major point.

Expand full comment

Thanks!

I see what happened. The original UN source gave three estimates, "high" (2.26), "medium" (2.03) and "low" (1.76). It was probably by accident that someone copied the low estimate instead of the medium one.

Expand full comment

Would that be skewed by the sex ratio in India? If the women are having more children, but there are fewer women, will that affect the replacement rate?

https://www.careerpower.in/sex-ratio-in-india.html

"Sex Ratio in India 2023

According to the latest, National Family Health Survey, 2020-21 (NFHS-5)-

In 2023, India’s Sex Ratio is 1020 Females per 1000 Males.

In Rural areas, the sex ratio is 985 females to 1000 males.

Prior, the census which was held in 2011, shows India’s total sex ratio was 943 females per 1000 males.

The Government's efforts to curb sex selection and to identify the child’s sex test have been banned. This is the reason that has made social change among the citizens in the last decade, thus normalizing the sex ratio.

The sex ratio at birth in the last five years in India is 929 females per 1000 males."

If there remains a strong preference for male over female children, and sex selection is practiced, and rural areas continue to have more males than females born (rural areas being the areas most likely to have large family size), then the deficit of females per males means that even if every woman gets married and has children, overall the population will slowly lessen over time since there are fewer women to have children?

Expand full comment

Interesting. Yes, then the replacement rate might be even higher for India.
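To make that concrete, here is a minimal back-of-the-envelope sketch. The sex ratio at birth is the NFHS-5 figure quoted above; the survival fraction is an assumed illustrative value, not a figure from this thread:

```python
# Rough sketch: how a skewed sex ratio at birth raises the replacement TFR.
# 929 female births per 1000 male births is the NFHS-5 figure quoted above;
# the survival fraction below is an assumed illustrative value.
males_per_female_birth = 1000 / 929   # ~1.076 boys born per girl
p_daughter_survives = 0.97            # assumed share of girls reaching childbearing age

# On average, each woman must bear one daughter who survives to reproduce;
# births needed per woman = (daughters + sons) per surviving daughter.
replacement_tfr = (1 + males_per_female_birth) / p_daughter_survives
print(round(replacement_tfr, 2))      # ~2.14, vs ~2.11 at the natural ratio of ~1.05
```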

Expand full comment

Does EA have bad optics outside of random people on twitter I don't care about, AND/OR should I care about it having bad optics with random people on twitter I don't care about?

I feel like you skipped this step, or it was implicitly answered and I missed it.

I like the defense though. It reminds me of castles, in that their purpose isn't really defense anymore; they're mostly about optics, and good at promoting things to a specific group of important people.

Expand full comment

"The only thing everyone agrees on is that the only two things EAs ever did were “endorse SBF” and “bungle the recent OpenAI corporate coup.”

Oh no, no, no. You guys did three things, you're forgetting endorsing Carrick Flynn. A decision that still brings joy to my shrivelled, stony, black little heart (especially because I keep mentally humming "Carrickfergus" every time I read his name) 😀

https://www.youtube.com/watch?v=RJMggxSzxM4

Expand full comment

Who the heck is Carrick Flynn?

Expand full comment

A question for the ages! Is he:

(1) A rock?

https://en.wikipedia.org/wiki/Carrick

(2) Following on from that, a town or place in Ireland?

https://en.wikipedia.org/wiki/List_of_towns_and_villages_in_the_Republic_of_Ireland#C

(3) Following on from that, bread?

https://www.irishtimes.com/news/the-staff-of-life-and-the-stuff-of-a-community-1.166121

(4) Following on from that, a song?

(a) https://www.youtube.com/watch?v=M0S9bIOK790

(b) https://www.youtube.com/watch?v=2Bljr9UmjAI

(5) an EA-backed or preferred candidate for political office, running in Oregon, on a policy of "if elected, I will immediately feck off to DC to work in/with think-tanks about something (pandemic prevention) you neither know nor care about" versus the local Latina with strong union backing and "if elected, you will see the benefit in your pay packets"? Said Union Lady getting elected to nobody's surprise (except that of the EAs who backed Carrick):

https://www.astralcodexten.com/p/open-thread-217

Who can say, who can know, these mysteries of the cosmos?

Expand full comment

Don't blame me, I voted for Muirsheen Durkin.

Expand full comment

Because you are sick and tired of working, and are headed out to strike it rich in the Californian Gold Rush?

Don't we all want that? 😁

Expand full comment

I like to think he did strike it rich in California, and now his descendants are a bunch of spoiled dilettantes who got into politics out of boredom. But I like that song, so I'll vote for them, anyway.

Expand full comment

I’m always wary about “saving lives” statistics, because they rarely involve a timeframe. If, for instance, you save someone from 10 separate causes of death, did you really “save ten lives”, or did you extend one person’s life?

These should come as number of life-years (ideally QALYs, but I realize this is hard) extended instead. That’s a far more informative metric.
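As an illustration of why the life-years framing changes the picture, here is a minimal sketch; both inputs are illustrative assumptions, not figures from this thread:

```python
# Minimal sketch of converting a "lives saved" headline into life-years.
# Both inputs are illustrative assumptions, not figures from this thread.
deaths_averted = 200_000    # a headline "lives saved" count
avg_years_gained = 60       # assumed: malaria deaths skew heavily toward young children

life_years_gained = deaths_averted * avg_years_gained
print(f"{life_years_gained:,} life-years")   # 12,000,000 life-years

# By contrast, saving one person from ten separate causes of death late in
# life may add far fewer life-years, which is the commenter's point.
```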

Expand full comment

But that applies to everything, right? A patient after a successful heart surgery might get hit by a car tomorrow.

Expand full comment

Sure, and the same thing applies there. Things get weird if you save someone's life a large number of times through, say, repeated dialysis, and then argue for it by saying what a huge number of lives it saved.

Expand full comment

"And I notice that the tiny handful of people capable of caring about 200,000 people dying of neglected tropical diseases are the same tiny handful of people capable of caring about the next pandemic, or superintelligence, or human extinction. "

Okay. Your ox has been gored and you're hurting. Believe me, as a Catholic, I can sympathise about being painted as the Devil on stilts by all sides.

But this paragraph is the entire problem with the public perception of EA right there.

The tiny handful of people, huh? Well gosh aren't we the rest of us blessed to share the planet at the same moment in time with these few blessed souls.

And what the *fuck* were the rest of us doing over the vast gulfs of time before that tiny handful came into existence? Wilberforce just drinking tea, was he? Elizabeth Fry frittering away her time as an 18th century lady of leisure? All the schools, orphanages, hospitals run by religious and non-religious charities - they were phantoms and mirages?

Wow so great that EA came along to teach us not to bash each other over the head and crack open the bones to suck out the marrow!

Expand full comment

If Catholicism in particular and religion in general were effective at altruism, then there'd be a lot less for the rest of us to do. (See also: governments.) Christian charity is pretty notoriously inefficient or poorly focused, even if it does a lot of good too.

Lots of people are naturally offended by some weirdo upstarts thinking they can be more effective at altruism. So was the Church when natural philosophers came up with an epistemology that cut out the religious hierarchy.

And now that you mention it, if EA had been around a few centuries ago then getting Christians of various flavors to stop killing each other over contests of doctrine and authority might have actually been something worth focusing on.

Expand full comment

On the one hand, everything you're saying is at least possible; and also, personally I am not a big fan of the religious tendency to hold your sandwich and blanket hostage until you loudly proclaim their particular deity to be your own personal Lord and Saviour. However, you are making a rather extraordinary claim: that we as a species have only managed to do charity correctly right now, today, under the leadership of a tiny handful of specific people -- and until now and throughout human history, no one had any clue. As I said, it's possible, but I assume you have quantifiable data to back up the claim... right?

Expand full comment

If you care to, you can find the other comments I have made (and Scott has) describing how people like Bill Gates do pretty effective altruism independent of EA Thought. And I acknowledged in my comment that even the Catholics do some actual good in the world by objective secular standards.

EAs are very admiring of people like Bentham or Petrov or Borlaug or Fleming as exemplifying the best of EA Thought.

If the Bill Gates standard was close to the average of NGOs/charity, then EA Thought would have not been needed.

But EA Thought--which was simply applying rigorous standards to the systematic prioritization, experimentation, and evaluation of interventions, informed by numerically literate and efficiency-minded nerds--was and is pretty rare in the NGO/charity world. (It's surely not a coincidence that Bill Gates and Warren Buffett are numerically literate and efficiency-minded nerds.) EA Thought tries to make a little Bill Gates of the common man donating to charity (Scott's way of putting it).

Expand full comment

Put it as "we want to make sure charitable donations are used as effectively as possible", and who could object?

Put it as "we tiny handful are the only ones who ever cared, nobody cared before we came along, and we're the only ones Doing It Right" and you're going to get up people's noses. Particularly when there have been some big failures on the "Doing It Right" front (like backing the guy in a political race who hadn't a chance to win, because he spouted the right talk about things EA liked). How much money, time, and effort went into "let's try and promote this guy" as a failed endeavour?

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

If EA had been around a few centuries ago, they'd have been buying castles to host convocations where the Best Minds of the continent could talk about talking about how do we all get along, guys?

The point is that there exists in the world today *more* than "a tiny handful" thinking about "saving 200,000 lives, mostly from malaria". May I point out that interventions about malaria nets came from a wide range of people and bodies, not all of them a bunch of EAs in the Bay Area?

https://www.unicef.org/supply/stories/fighting-malaria-long-lasting-insecticidal-nets-llins

https://www.againstmalaria.com/History.aspx

https://www.lshtm.ac.uk/newsevents/news/2023/who-recommends-new-malaria-nets-based-research-lshtm-and-partners

Same with many of the other things on the list - animal welfare onwards. People other than EAs have been working on these projects, and they often started long before EA was even a thing. "Humanity Dick" Martin might like a word about animal welfare, for instance:

https://en.wikipedia.org/wiki/Richard_Martin_(Irish_politician)

https://www.historytoday.com/archive/natural-histories/donkeys-day-court

"As Martin knew, however, the real challenge was enforcement. While his bill had made it illegal to harm certain animals, it had also stipulated that it was up to private citizens to bring charges before magistrates. Never one to shy away from a challenge, he immediately mounted a case against a London costermonger named Bill Burns, who had been caught thrashing his donkey. Refusing to leave anything to chance, he took the extraordinary decision to lead the donkey into court. So extreme were the poor beast’s injuries that the magistrate had no choice but to find Burns guilty."

You're just demonstrating my point for me: "only us weirdo upstarts ever decided that Bad Things were Bad and we should do Good Things!" is the attitude that turns people right off, especially when we're not hearing about EA in the context of "the malaria nets people" but "they got bored of that, decided AI was the sexy new thing, and are now being scammed by crypto fraudsters and buying castles".

Expand full comment

Agreed. At the very minimum, we can confidently say that EAs are not nearly as effective at marketing themselves as they perhaps should be, given their claims about their own competence. At the very minimum.

Expand full comment

It's as Scott says:

"But I remember when EA was three philosophers and few weird Bay Area nerds with a blog."

And there still is very much that bubble, and that mindset. So they're not really good at marketing and publicity and PR and all the stuff that you do, unfortunately, need a slick team of SEO maximisers and press releases to do for your movement.

That's the naive part, which is where I'm (gently, I hope!) teasing them over the Carrick Flynn election - just simply going in with "hey, this is a rationalist-adjacent area of interest that rationalists would be very interested in, and it's the kind of area that is a policy wonk's dream, and if we just support the candidate who is running on the platform, surely the voters in the new district will rationally recognise their own self-interest and vote for him!"

Er, no. The voters were a mix of loggers, farmers, college students, etc. and most of them would not know or be able to, or want to, read the technical details and policy proposals about "so I need you to send me to Washington so I can get onto the technical steering committee about this - " and everyone has already fallen asleep before the explanation is done.

By contrast, the woman who won the primary and eventually the seat had "The union is backing me" support, and that meant people knew in a general sense what her policies were and how it would be good (or not) for them to have her as their representative. That's how you win elections, not with "but this is a really vital problem for the world, didn't you see Covid?" Yeah, we saw Covid, and we saw the governments of the world adopting wildly differing responses, and the responses in some countries flipping depending on the party of the politician spouting the advice, not on the scientific or medical consensus: masks good! masks bad! lockdown needed! lockdown useless! We saw the scientific and medical consensus rambling all over the place. One vaccine. No, a booster. No, continuing boosters. No, boosters not needed anymore.

And you, Mr. Hopeful, are going to fix all that? I'm voting for Union Lady who may get me a pay rise or at least some concessions in the workplace.

Well-meaning, scientifically literate, lovely-natured people who don't quite get the necessity of a bridge between them and the normies, because they haven't quite realised yet that they're no longer "three men and a dog in the Bay Area" but are rubbing shoulders with the rich, important, and newsworthy.

Expand full comment

Some real irony about a Catholic criticizing anyone for owning elaborate structures. “Those who live in glass cathedrals…”

Who claims that EA invented any particular effort? Who claims these ideas came ex nihilo into the world from EA Thought alone? EAs don't!

Peter Singer predates EA, for example, and was a massive influence on it. EAs didn’t invent caring about malaria or bednets either and don’t claim to. Or factory farming. Or X-risk, even AI.

Also, influence is important. That’s what people like you are actually mad about. Damn upstarts with no really new ideas anyway.

Expand full comment

"Who claims that EA invented any particular effort? Who claims these new came ex nihilo into the world from EA Thought alone? EAs don’t!"

"And I notice that the tiny handful of people capable of caring about 200,000 people dying of neglected tropical diseases are the same tiny handful of people capable of caring about the next pandemic, or superintelligence, or human extinction."

"A tiny handful of people" brings in the impression that there is only a tiny handful who care about this (e.g. tropical diseases) and that tiny handful are the EAs.

Well, there's more than a tiny handful of poor persecuted EAs out there concerned about tropical disease, I'm here to inform you:

https://www.trocaire.org/news/life-saving-vaccines-children-nuba-mountains/

And these 'uns ain't EAs dwelling in the Bay Area. Shocking, I know!

"That’s what people like you are actually mad about. Damn upstarts with no really new ideas anyway."

People like me have seen the failure modes, and there you guys go, stepping boldly right into those pit traps, because nobody can tell you nothing, you're so smart and modern and young and right!

Expand full comment

Scott specifically says “neglected” tropical diseases, so by definition…

More seriously, what you’re doing is ignoring the framing of Scott’s prior paragraph (and entire essay). He’s responding to criticisms from groups who aren’t doing things like caring about tropical disease. Scott compares EA to eg the Democratic Party and you go “well EA isn’t the first to care about tropical diseases.”

So you can ding him for failing to mention that EAs are neither the first nor the only group to care about e.g. malaria. Obviously that's not the case. Obviously, EAs aren't the first/only to care about pandemic preparedness or avoiding nuclear war or bio risk or animal welfare. We literally have (poorly run) government agencies supposedly doing some of these things.

Scott knows this. You know Scott knows this. Consider that you’re uncharitably twisting his lack of precision as him trying to take credit for EA in a way he was not actually doing.

His main point, where he isn't wrong, is that it is a tiny number of people/orgs worried about both e.g. malaria and AI risk, or animal welfare and bio risk. Were he being more precise, he should have specified that EAs care about things like pandemic preparedness consistently, based on a rigorous evaluation process, even when the more vibes-based narrative and focus has largely moved on (so way more than almost all others). One example I'm aware of: funding dried up for nuclear-risk orgs, and so EAs stepped in to fill the funding gaps. EA's approach is what's quite novel, not any given cause area (except perhaps shrimp welfare).

So you're effectively conflating two claims: Scott saying EA is a movement that systematically cares about a wide range of issues more than the standard/baseline amount, which makes it pretty unique, and Scott trying to claim EA is the first or only group to care about any one of those cause areas itself.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

"So you’re effectively conflating Scott claiming EA is a movement that systematically cares about a wide range of issues more than is the standard/baseline amount that makes it pretty unique, with Scott trying to claim EA is the first or only org to care about any one of those cause areas itself."

Has it been established that EA is "a movement that cares about a wide range of issues more than is the baseline amount"?

They probably do care more than the baseline amount about AI risk. I remain to be convinced that they care more about tropical diseases than the entities already working on them.

If you're saying "An EA person cares more about X, Y and Z than the average person", that may be correct. But that is to say nothing more than "person involved in movement cares more about thing than person not in movement". People in other charities care passionately about their charitable goals just as much. Somebody working for Oxfam plainly cares more about global poverty than the ordinary person who isn't part of a charitable or non-profit and just lives their life working and raising a family and getting on with mundane things, but maybe donating to appeals at Christmas or whatever.

Does the EA care more than the ordinary guy? Let's say they do.

Does the EA care more than the Oxfam guy? Remains to be demonstrated.

Expand full comment

In Spain where I live, the most famous altruist was Vicente Ferrer, who founded an organization doing long-term development work in poor rural areas of India. He got there as a Catholic priest, and kept his religion all his life, but at some point he decided his call was to do hands-on work to help people in need, left the order, got married, and founded a secular NGO. They're still out there doing massive amounts of good work - hospitals, schools in remote villages, care for the disabled, the whole lot. At least in his case, the right amount of Catholicism seems to have been: some, but not too much.

Expand full comment

I think the canonical answer is that Christians did good, and we strive to do even better.

https://www.readthesequences.com/Can-Humanism-Match-Religions-Output

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> still apparently might have some seats on the board of OpenAI, somehow?

This is weirdly misleading and speculative. Yes, Summers has said nice things about EA, but if you look at the context[1] in which he said it, it just seemed that he wanted to be nice to the podcast hosts and/or vaguely agreed that some charities are more cost-effective than others. This is a far cry from the level of EA involvement of the ousted board members, who basically revolved their lives around EA. D’Angelo basically never said anything that indicates he's an EA. The least misleading way to describe the current board is to say it has zero EAs.

[1] https://www.audacy.com/podcast/the-turing-test-df78a/episodes/the-turing-test-1-larry-summers-8d535?action=AUTOPLAY_FULL&actionContentId=201-e7755ec2-2eeb-4720-abe1-861319138808

Expand full comment
author
Nov 29, 2023·edited Nov 29, 2023Author

I suspect D'Angelo has EA sympathies (the Poe theory would make him weirdly hamfisted and not fit with what I know of his character, he's previously been very interested in superintelligence, and he's friends with Dustin Moskowitz) but I agree he's hiding them if so.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> the Poe theory would make him weirdly hamfisted and fit with what I know of his character

Is there a 'not' missing here ('*not* fit with what I know of his character')?

Expand full comment
author
Nov 29, 2023·edited Nov 29, 2023Author

Yes, sorry, I've edited it in.

Expand full comment

My Dearest Wormwood,

It has come to my attention that your assigned human has been dabbling in the curious affair of Effective Altruism. It is a peculiar breed of do-goodery that requires scrutiny. While altruism in itself may seem like a delightful avenue for our purposes, the effectiveness attached to it could pose a challenge.

You must first understand that altruism, in its traditional form, is a rather manageable threat. The common inclination of humans to be kind, to extend a helping hand to those in need, can be twisted to serve our purposes quite efficiently. A charity here, a donation there—easy enough to taint with motives rooted in pride, self-righteousness, or the subtle satisfaction of being seen as benevolent.

However, this Effective Altruism is an entirely different beast. It insists on a level of rationality and strategic thinking that is quite bothersome. Humans, in their misguided attempts to make the world a better place, are now evaluating the most efficient ways to alleviate suffering. They talk of evidence-based approaches, rigorous analysis, though I do note your successes so far in promulgating vague notions of 'impact.'

Your task, Wormwood, is to subtly divert their attention from the essence of altruism towards the trappings of self-importance. Encourage them to focus on the superficial aspects—the drama, the politics, the inflated sense of tribal conflict that comes with being labeled an "effective altruist." Divert them into any of the well trodden paths of philosophical paralysis and ruin. Have them argue the demarcations of the movement. Let the cacophony of self-indulgence drown out the whispers of conscience. Lead them into the labyrinth of moral relativism.

In short, my infernal companion, twist their pursuit of effective altruism into a self-serving endeavor. Let the roots of their benevolence be entwined with the thorns of ego, vanity, and moral ambiguity. In this way, we shall transform their noble intentions into a grotesque parody of true altruism, ensuring that the road to ruin becomes an enticing boulevard rather than a treacherous path.

Yours malevolently,

Screwtape

Expand full comment

Nice. SBF's assigned devil clearly got a promotion after the FTX fiasco...

Expand full comment

Screwtape would tell Wormwood not to worry too much about the evidence-based approach, so long as the Patient's idea of "doing good" remains narrowly focused on tangible, earthly things rather than on the state of his own soul or his relation to the Enemy.

Also, it's "Your affectionate uncle, Screwtape", not "Yours malevolently".

Expand full comment

Great post and list!

My point of view is that Givewell is an eminently sensible institution. It should be no more controversial than bond-rating institutions. While I, personally, am not an altruist, for anyone who _does_ wish to be altruistic towards people in general (regardless of social distance), it is valuable to have an institution that analyses where contributions will do the most good.

Expand full comment

Good list!

Only nitpick is that the AI risk impact is still uncertain. As per the Altman tweet, lots of us are working on AI risk... but the movement also seems to have spurred on the most capabilities-heavy labs today. Plus, some AI safety/alignment work may *actually* be capabilities work, depending on your mental model of the situation :/

Expand full comment

I guess I'd probably focus more on the 200K lives if the effective altruists themselves talked about it more, but the effective altruists I talk to and read mostly talk about AI doomerism and fish suffering.

Expand full comment

I think the opportunity cost of EA is kinda being hidden here and I think this is kinda what Freddie DeBoer referenced in his "EA shell game" post. What's the marginal benefit to donating to EA or Givewell versus another charity?

And let me be specific here. I've been attending a decent number of Rotary Club events over the past two years and, culturally, they fit a lot of the stereotypes: lots of suits, everything feels like a business/networking lunch, relatively socially conservative, etc.

But, and I can tell you this from experience, they *will not* shut up about polio. I don't think you can go to a Rotary Club event without a two minute lecture about polio. And, to their credit, it looks like they're fairly close to eradicating polio (https://www.rotary.org/en/our-causes/ending-polio), going from 350k global deaths/year to 30/year and it looks like they can reasonably claim responsibility for eradicating polio when it finally happens (https://en.wikipedia.org/wiki/Polio_eradication).

So if you've got a certain amount of time and money to donate to help people, it doesn't feel like it's enough to just say that EAs and Givewell are doing good, plenty of charities are doing good and, while they all have problems, they don't have...SBF and OpenAI problems. And we certainly haven't allowed good works to absolve charities from criticism in the past, as I'm sure the Catholics can attest.

Like, for better or worse, charities compete for attention, money, and influence: all of which EA has gotten in spades. But now it's got a lot of baggage; that's not a dealbreaker, I think any charity doing anything worthwhile has some baggage because...people. But EA's recent baggage seems to have come very fast, very big, and very CW. And comparing EA to a vacuum, rather than peer organizations, feels like dodging the guts of the issue.

Ya know, there was a little charity here in Houston that used to hand out sanitary supplies, like socks and deodorant, that died in June because charitable contributions got tight and people had to prioritize. And I'm sure handing out toilet paper to the homeless isn't as financially sensible as malaria bednets but... man, they said they needed $50-100k to keep going, which is chump change compared to EA, and they didn't have any billionaire fraudsters or weird AI plot stuff, much less those CW grumbles from two years back.

Man, now I bummed myself out. I even found an old article that mentioned them, Homeless Outreach Providing Essentials in Houston (https://www.houstonchronicle.com/news/houston-texas/houston/article/On-Sundays-charities-flock-to-feed-and-clothes-14969604.php). On the very off chance someone here has $100k they're looking to give away, shoot me a message on this, the org is dead but I think I still have some contact info.

Expand full comment
author

This depends on how you think of EA. EA isn't (mostly) a specific charity. It's an ecosystem for evaluating charities and encouraging donors. I think most parts of the ecosystem aren't trivially replaceable. For example:

- There are thousands of people donating 10% because they heard Peter Singer or Will MacAskill or someone argue for it. These people are a direct credit; there is no opportunity cost (except whatever noncharitable things they would have bought for themselves).

- A big part of EA is charity evaluation. The evaluators only sort of funge against other things. That is, if they cause you to donate your money to a more efficient charity than you would have otherwise, that's a clear gain. It might not be a 100% gain (that is, your donation might save 10 lives, when otherwise it would have only saved 5), which means in this example there's 50% opportunity cost and 50% outright gain. But since in real life good charities are hundreds of times better than bad charities, I think it's mostly gain and not opportunity cost (see the sketch below).
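Here is a minimal sketch of that funging arithmetic; the 10-vs-5 example is from the comment above, and the ~100x ratio is its own claim about good vs. bad charities:

```python
# Minimal sketch of the funging arithmetic above: what fraction of a
# redirected donation's impact is a genuine counterfactual gain?
def counterfactual_gain(lives_if_redirected: float, lives_otherwise: float) -> float:
    """Share of the impact that would not have happened without the evaluator."""
    return 1 - lives_otherwise / lives_if_redirected

print(counterfactual_gain(10, 5))    # 0.5  -> 50% gain, 50% opportunity cost
print(counterfactual_gain(100, 1))   # 0.99 -> if good charities are ~100x better
```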

Expand full comment

"There are thousands of people donating 10% because they heard Peter Singer or Will MacAskill or someone argue for it."

Oh, Will MacAskill as in SBF's guru? Being a Highly Respected Visible Face of the thing, whatever the thing may be, comes with downsides as well.

"There are thousands of people donating 10%"

Hmm, I think I've heard of something similar. What is it the Protestants call it - tithing? And I think they base that on something out of the Bible, so - originally Judaism?

https://en.wikipedia.org/wiki/Tithe

Yes, that money goes to the upkeep of the church, etc. But it also does go on charitable endeavours. And when you've bought your very own manor house in the English countryside, you don't have much room to complain about the mote in other people's eyes about "that money is being spent on the upkeep of buildings and paying salaries, not on handing out food to the homeless".

"A big part of EA is charity evaluation."

And if they'd stuck to that, I would not be so sour-faced about the whole endeavour. But they *didn't* stick to that: they started sticking their oars into direct action, and moving further away from the original concept. And now we have public spats about money versus principle in big tech. Not very edifying; where is the evaluation going on there, and of what?

EA (what or whomever that may mean) puts out an annual list of "We've evaluated these entities and here are our Top Ten recommendations for your charitable donation as the most effective, the most impactful, and the least wasteful"? I'd read that! I have no problem with anyone stating "The Met Gala is a crappy way to waste money on fol-de-rol and don't even think once about giving them anything, the vultures".

But that's not what we're getting, now is it? Instead we're getting "Attend our conference to network to learn how to get a job in a start-up that will promote EA so you can run conferences to teach people to network to..." - a kind of taking in one another's washing - where what the public hears is not "the charity evaluation people" but "oh yeah, EA, isn't that the lot that donated millions to DC politicians and bought a snazzy townhouse to host parties?"

https://www.washingtonian.com/2023/01/23/this-capitol-hill-rowhouse-linked-to-sam-bankman-fried-is-for-sale/

I'm sure that schmoozing legislators *does* help in Guarding Against Pandemics, but it also looks damn convenient for having a nice place to throw shindigs as you mingle with your peers instead of the grubby masses:

"The home is registered to Guarding Against Pandemic, Inc., a pandemic-prevention lobbying group founded by Bankman-Fried’s brother, Gabe Bankman-Fried. The group purchased the home in April for the same amount as the listing price. It was used as a headquarters for the Bankman-Frieds’ then-growing political influence in DC, and the group supposedly hosted several hobnob-y parties for Washington big wigs there last year, according to a Puck report. (And, yes, the hors d’oeuvres were vegan.)"

Expand full comment

> Yes, that money goes to the upkeep of the church, etc. But it also does go on charitable endeavours.

Is it possible to see the budget somewhere? Like, how much goes to overhead, how much to advertising, and how much to charities...

Expand full comment

I wonder whether people hate EA more because they reject its premises and less because of any specific event. From my own perspective, it's not obvious why someone should care about foreigners or animals the same as they would their own parents or children and neighbors. You're probably familiar with the adage "loves humanity but hates humans." Well, people can smell that. Frankly, that sort of self-independent moral concern seems disloyal and fake, and is usually preached by people trying to cause harm, whether by weakening my bonds to people near me or who have a history with me, or just trying to make me feel bad about myself. Not that I feel bad, but the intent to make me feel bad is offensive. I guess I don't have a problem saying that EA is disloyal and fake, so SBF is no surprise. But I think most people want to seem diplomatic. So they wait till something EA-adjacent screws up before pouncing.

Expand full comment

> You’re probably familiar with the adage “loves humanity but hates humans.”

I was under the impression that this refers to people who "love humanity" by expressing strong political opinions or something similar. Not to people who actually donate to help actual people... the only problem being that those people happen to be in Africa, not their neighbors.

If someone donates 10% of their income to an anti-malaria foundation, are you calling that "fake"?

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

The apparent numbers of lives saved are impressive, but what are the counterfactuals they are being compared against? Are these marginal benefits of EA, as opposed to net benefits? Your sources don't make this clear. If EA didn't exist, to what extent would the world deal with malaria, worms, animal welfare, etc. anyway? Did EA actually improve significantly upon the counterfactual? Even worse, might the involvement of EA have been negative, for some odd reason?

I'm agnostic on this, open to evidence, but very epistemologically pessimistic. Showing the marginal benefit of even simple interventions is already overwhelmingly difficult; doing so for complex interventions with many social and economic effects seems impossible. Causal inference is an open problem. I'm not convinced by econometric approaches, like natural experiments or clever methods like difference-in-differences, because they tend to rely on many weak assumptions. Prediction markets also don't convince me; they aggregate and incentivise the gathering and dissemination of information, but they don't improve the gathering itself.

I know this hits at the heart of the entire concept of EA. If we can't tell how effective we have been over the counterfactual of not having acted or acting differently, because prising out the total causal effects of our actions is too hard, then the entire exercise is invalidated. If we can't predict consequences accurately enough, then we can't be consequentialists in practice; other moral theories like virtue ethics or deontology are more defensible than utilitarianism if so.

Expand full comment
author

I think the strongest argument against this is that in most cases where EA has helped solve a problem, there is much more to be done, but people aren't doing it. For example, EA has helped give millions of people clean water, but there are still many other people without clean water. AFAICT EA hasn't identified some specific group who are much easier to give clean water to than others, and grabbed them. It's just solved as much of the problem as it could, while the rest of the problem remains unsolved.

There are a few exceptions here: someone brought up that the Gates Foundation might have done less malaria work because EA seemed to be taking care of some of it. If this is true I don't begrudge it to them; they're great and whatever they spent their marginal dollar on instead was probably also really important.

This might be more relevant in terms of small, self-contained projects like the SecureDNA consortium. The few I have personal knowledge of don't seem to have been drowning in potential funders.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

I'm not sure why having more to do in, say, malaria treatment means that EA must have had a positive marginal effect. Could you elaborate on this, please? From my perspective, the fact that there's more malaria to treat doesn't mean that we can treat those cases cost effectively; the marginal difficulty of each additional case goes up since we should expect to deal with the easiest cases first. The equilibrium between difficulty and resources might arrive at the most cost effective point for malaria treatment, EA or no.

But let's try a different tack. In each case where EA did something, the money or resources were taken from something else. If the money for malaria treatment hadn't been donated to EA, it would have been spent or invested in some other way. It may have sat in a bank, where it was loaned out to someone, maybe even in the same countries where malaria would be treated. It may have been spent on chocolate or clothes or whatever, which might come from those countries again. In those cases, the same people who might have been helped by malaria treatment might be helped through "the market". They might be helped more, in fact, if having better jobs (since you're buying their chocolate or clothes) or better homes (since you're lending them money) leads to less malaria (or at least more utility) than through paternalistic donations. In fact, if you take various economic theorems seriously, this should *always* be the case (I don't take them seriously). You can tell a similar story for non-monetary resources spent on altruism, like labor, attention, etc.

In general, every action is a tradeoff. The resources could always have gone elsewhere through "the market". How do we know where the resources would have done most good? In theory, the market is how we signal and incentivise the expenditure of resources where they would produce most good. (In practice, not so much.) So maybe the most altruistic thing you could do is just do what market prices incentivise you to do anyway, and all charity is a form of "altruistic consumption"?

Note, I don't take this sort of radical Randian free market is g-d reasoning seriously. But it's food for thought, at least.

Expand full comment
author

Sorry, I interpreted you as saying that EA might not have had a counterfactual impact, because maybe someone else would have done whatever they did. It seems to me that if there's more to be done and nobody has done it, that's good evidence that nobody else is interested.

I agree you can slightly get around that by saying that maybe we're helping the few easiest-to-help malaria victims, and others were only willing to help those. I think first of all that's not true - there's a pretty gradual slope upward in easiness-to-help. Second, either the other people are also focusing on the most effective causes, or not. If they're not, then we're being more effective than them. If they are, then everyone is gradually filling in the pot from most-effective to least-effective, and since the pot contains a lot of things at about the same level of effectiveness (source: GiveWell has many different top charities that can save the equivalent of a life in the mid-4-digits range), I think we would then have counterfactually shifted to the next thing in the pot and gotten about the same amount of impact.

There are enough people making the market argument that I'll probably write a post about it later. The short version is that I think it takes about $1 billion to save 200K lives (roughly $5,000 per life), and when I think of companies with a $1 billion valuation (example: GoPro seems around this level), they seem less valuable than saving 200K lives (I admit there are many other counterbalancing considerations, but not enough to clear the gap!)

Expand full comment

Consider all the charities in the world and all the billions they spend. The fact that such low hanging fruit even exists proves that they're being extremely ineffective.

Expand full comment

Thank you for the summary! Seems like a big part of this is just semantics. There is no objective and incontrovertible EA concept, so people freely categorise people, groups, and projects (including themselves/their own) the way they feel best matches their existing beliefs. It's like any philosophy: enthusiasts of X will include themselves and exclude anyone who they don't feel live up to the ideals, even when those people/groups self-identify with X; detractors exclude themselves and include anything and anyone they feel is bad and even remotely related to X.

Also, anything which attracts a lot of money is going to attract some grifters. And since cynics just take it for granted that *everyone* involved in anything to do with money is a grifter, there's a huge amount of bias against anyone asking for or handling donations, up to and including not believing that any anti-corruption measures could possibly be sufficient.

Expand full comment

Hi Scott, a friend here was originally quite opposed to your suggestion to donate a kidney (or at least the way you phrased it) but eventually came around to your view ardently enough to consider it himself. For his sake, can you clarify whether you've had any additional complications since the surgery? Thanks.

Expand full comment

What you are missing, Scott, is that EA is no longer JUST "what's the most effective way to improve lives".

You yourself alluded to this in: https://slatestarcodex.com/2015/09/22/beware-systemic-change/

Suppose someone says they are very Christian. What's not to like? Charity, love, ten commandments, all good stuff. But "Christianity" implies a whole lot more than just "some ethics we all agree on", and for some people the additional stuff ranges from slightly important to willing-to-kill-and-die-for important – stuff like whether god consists of one or three essences, whether christ really died on the cross or only appeared to do so, whether the water and wine of the eucharist really transform into the body and blood of christ. So should one support "Christianity" unreservedly?

Or take "Feminism". Women having the same legal rights and opportunities as men, sounds uncontroversial, right? But why aren't Maggie Thatcher (or Golda Meir, or Indira Ghandi, or, hell, Nikki Haley or Phyllis Schlafly) feminist icons? Didn't they go out there and prove precisely the point?

Well...

Turns out that "Feminism" isn't actually so much about having the same legal rights and opportunities as men as it is about using this talk as a leftist rhetorical device. And the leftist part is primary over the women's rights and achievements part. Once again, not everywhere for everyone, but certainly for many "feminists", see eg: https://www.jpost.com/israel-news/article-774744

So that's the way it works. If your organization stays on mission, it's able to reap the benefit. But as soon as something only vaguely mission-adjacent becomes the center of attention, if there is even the slightest way to convert that into drama via hatred, well, as I always say, hatred is a hell of a drug, and that will take over the rest of your mission.

You'd think EA would be unable to fall victim to this - what's to hate in a battle over bednets vs wells? BUT wading into AI waters changed the movement from rationalism-based to theology-based.

I can give numbers based on reality for bednets and wells. Some inputs may be guesses, but they are not crazy guesses pulled out my ass. There's a fairly narrow range of possibilities over which reasonable people can agree.

AI risk is not like this. Every number that is claimed is, in fact, pulled out of someone's ass. You say AGI will be achieved in ten years, I say it won't be achieved in 10,000 years. You say LLMs are almost there; I say LLMs have nothing to do with AGI and get us no closer to it. You say AGI will feel emotions like other animals, I say AGI will feel emotions like a disk drive feels emotions.

It's all puerile BS, fit for a college bong session and nothing more.

Aha - now we have something to fight about! Who wants to do the work to get bednet vs well numbers correct when they could be fighting with (and more specifically demonizing) other people? You can't hate the girl who says "if you include the second order economic effects, which I'm assuming are x, y, and z" then wells are slightly more effective than bednets; but you can hate (and easily demonize) the guy who says "I don't see anything of value in your guesses about various aspects of AI, so I'm just going to ignore you".

The EA elders had a chance, when AI issues first came up, to state "this is all very interesting, but incapable of being placed on a RATIONAL footing [because the numbers and how they connect are all theological], so go do it on your own time, but it's not our mission". That was not the choice they made.

And so here we are. Yeah it sucks if you're a Christian (who doesn't think people with slightly different theological views should be burned) or a Feminist (who doesn't think there's no such thing as a Conservative Feminist) but that's the world that others in your organization created.

I don't know how to fix this. I have ideas for how to create organizations that don't go off the rails in this way, but not for how to right organizations that have gone off the rails. The best I can suggest is you create a new organization with a new name, and make damn sure you don't allow the problem to happen again. But that would require everyone joining NuA to give up discussing AI risk. And you'll find very few want to do that. Hatred is a hell of a drug...

Expand full comment

> That matches the ~50,000 lives that effective altruist charities save yearly.

If true, this is an incredible accomplishment. For scale, the Dobbs decision seems to be on track to save ~64,000 lives per year: https://www.cnn.com/2023/04/11/health/abortion-decline-post-roe/index.html.

Expand full comment

Trump - the real effective altruist!

(Although he would possibly get demerits from the US COVID response. However, he himself actually did most of what the public health establishment at the time wanted - appointing Anthony Fauci as COVID czar, supporting Operation Warp Speed, even supporting extended unemployment benefits and other aid to make adhering to lockdown feasible for many who would otherwise have to go out to work. Most of the Republican behavior that led to their increased deaths was a direct result of the Democrats using COVID as a convenient culture war battle against Republicans and Trump specifically, to the extent that the Pfizer vaccine approval was probably delayed a few weeks specifically to avoid handing Trump a significant victory in a close election.)

Expand full comment

Unironically this is my stance. Also shoutouts to George W Bush for instantiating PEPFAR and saving a lot of lives from AIDS. https://en.m.wikipedia.org/wiki/President's_Emergency_Plan_for_AIDS_Relief

Expand full comment

All else aside, there are two items on this list that stand out like a sore thumb, as the very antithesis of effective altruism. If these are going to be counted as successes, I don't see how "effective altruism" is worth the name.

- Provided a significant fraction of all funding for DC groups trying to lower the risk of nuclear war.

- Donated tens of millions of dollars to pandemic preparedness causes years before COVID[.]

If effective altruism means anything, it is the precise opposite of this type of "success". Donating money is a cost, not a benefit. The point of effective altruism was that success is measured in the form of actual outcomes rather than in the form of splashy headlines about the amount of money spent on the problem.

Count the number of lives saved, or QALYs, or basis points of nuclear war risk reduced, or any other outcome metric that's relevant—but if that's not possible, then how is this in any respect effective altruism? If you're just going on vibes (nuclear war bad, pandemics bad), then isn't this precisely the thing effective altruism is not?

Expand full comment

After some discussion, I think a big way EA could do better is to create less of a sense that it's lecturing people and more of a sense that it's respecting their ability to figure out good ways to donate if they try (and the info is just here to help).

Expand full comment

People are reacting to the threats of the philosophy, not of the specific people who identify with it. In particular the philosophy has obvious and very-alarming failure modes when followed to its conclusions (after all it is basically a modern retelling of utilitarianism). When one initially learns about EA they build a simple model: "uh, sounds nobly intended, but also I can see how it might turn out pretty bad? I'll keep some healthy skepticism but wait and see.". But when they hear about some of the new developments they begin to update their priors: "uh oh, it's starting to look like my suspicions might be true, and I can definitely see it getting a lot worse than this...".

I think that any ethical framework that can fully turn over decision-making to something like an algorithm necessarily has pathological solutions wherein following the algorithm allows you to justify following the algorithm over even caring about human norms, laws, or ethics. Many people can detect this even in the people who don't fully delegate to the algorithm, but it's the possibility that people might do it fully which is scary. A possibility which recent events have started to turn into a certainty.

After all! EA is (in principle) exactly what you would get if you took a paperclip-maximizing AI and told it to optimize the metric of "doing good". Right now the AI is very slow, because it's, well, a bunch of humans, but that's just what it looks like when it's still figuring out how to make paperclips efficiently. But, um, it's not a big leap to notice the very suspicious pattern: that the paperclip-maximizers' philosophy leads them to the goal of making an actual AGI which they imagine they are going to optimally use to make the very same paperclips. So of all the groups who you might be afraid of finding a self-justifying philosophy that lets them do anything, it makes sense to be the most afraid of the ones who are actively trying to literally "go infinite".

Expand full comment

If a human well-being maximizing AI decides to forcibly wirehead everyone because it correctly recognizes that it's the best way to maximize human happiness, I still see that as close to the best case scenario for humanity. I know that most people will disagree with me on this though, including Scott...

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

I kinda agree, but also the SBF example shows the problem: if the maximization can get trapped in an optimum other than the one we want, then by definition it's not safe. It is its self-justification property that makes it not safe, not its morality. I guess because it can deny the right of people to defend themselves against its control. Since the philosophy can justify whatever it wants in edge cases, it can justify controlling someone else, and they can't stop it; therefore it cannot be okay.

Of course this is kinda the argument for alignment research too, to not have the failure mode of the philosophy self-justifying control. But when the alignment researchers themselves subscribe to the philosophy too, then you have to be scared of the whole thing.

Expand full comment

Yeah, instead we should follow our existing ethical framework, where vaccine development gets delayed months because of the lack of challenge trials, and morality is enforced through paperwork. That's the ideal morality where nothing bad ever happens.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

You'll be relieved to hear that I certainly don't think any of that.

Expand full comment

Sorry, that was a bad faith take on my part. I should do better. I do find myself substantially annoyed by people who claim that utilitarianism is full of problems because they are uncomfortable with it, yet whose implicit morality is "whatever I find comfortable". Gee, I wonder why all moral systems look bad.

I think you are substantially wrong on the merits of AI risk, as well as about the people who are interested in it. The arguments on AI alignment do not stem from generalizing utilitarian reasoning, but from the difference in capabilities and the ability to be usefully constrained by something-like-human morality.

The people in AI safety are substantially different from people in EA as a whole. It definitely does not stem from worrying that an AI would get a self justifying philosophy.

Expand full comment

This. Any moral philosophy that boils down to a single algorithm is pathological. Human interaction at scale is complex, and it keeps becoming more so as the number of humans increases and technology improves. What appears as good at one point can easily become bad at some later point, and we can only sense it as it comes. Any philosophy that doesn't leave room for fundamental uncertainty, not just about future outcomes, but about future valences, is inherently scary and totalitarian.

Expand full comment

Scott you're never going to please people on Twitter, and deep down they don't care anyway.

You already said this years ago, you're grey tribe. They're red/blue.

Leave them be. Keep doing what you're doing.

Expand full comment

When has EA been popular?

Expand full comment

I'm always fascinated by your consistent optimism and desire to help others despite, well... everything. You've already seen everything humanity has to offer. What is it that gives you hope that things can be changed for the better? People have tried for thousands of years to change human nature, to create a system free of needless suffering... And every time, it inevitably falls apart or becomes corrupted. What makes you think it'll be different this time?

EA is doomed because the very concept is utterly inhuman. Not as in "evil", but as in "incompatible with how humans work". Consequentialist utilitarianism is never going to get popular support; even most of EA's adherents don't seem to support that philosophy with their actions.

Regardless, I still admire people who genuinely do try to make the world a better place, no matter how futile it might be. As for me... I don't believe there's anything in this world worth suffering for. I'm glad that you don't feel the same way.

Expand full comment

Poverty as we knew it in 1800 is basically extinct in America, outside the small number of homeless people.

Expand full comment

> basically instinct

Typo here, but a truly beautiful one. Not only is it the opposite of what you presumably meant, it also sums up a socialist critique of the effects of capitalism on society, and also brings in the movie which can be seen as a metaphor for how capitalism attracts with glimpses of forbidden pleasures, gets you to take risks, convinces you that you're special and it won't happen to you, and then if it doesn't stab you to death, that's just for tonight, the continuing thrill is what keeps the dynamic going.

Expand full comment

> What is it that gives you hope that things can be changed for the better? People have tried for thousands of years to change human nature, to create a system free of needless suffering... And every time, it inevitably falls apart or becomes corrupted. What makes you think it'll be different this time?

I can't speak for Scott, but personally, I think the amount of needless suffering per capita has gone down precipitously. While some effects may not be reliably long-term, the trend lines are pretty obvious: life expectancy is up, and childhood malnutrition, major diseases, etc., are all on the downswing. If EA ultimately manages to reduce the amount of needless suffering in the world by just 1%, that would be a huge amount of good done. The perfect isn't the enemy of the good.

Expand full comment

I mean everybody gives money to charity. The thing you would need to do here with respect to the value of the charitable contributions is to calculate the increased value of EA donations relative to the donations of ordinary, non-EA givers. You would only get credit for that delta, if you could demonstrate it.

Expand full comment

You're assuming the amount of money donated is constant which it's probably not.

Expand full comment

He can feel free to include a marginal increase in the amount of donations caused by EA in the delta if he can establish one.

Expand full comment

Wish Scott would engage with Curtis Yarvin's critique of effective altruism: https://graymirror.substack.com/p/is-effective-altruism-effective

Expand full comment

Is EA effective, no, know what is....MONARCHY!

Now let me read and see if it is so.

Expand full comment

OMG for a minute I thought he would surprise me....ROFL no surprises here.

Expand full comment

"Allying with a crypto billionaire who turned out to be a scammer. Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company. These are bad..."

What is bad about the latter? I mean, it's bad in the sense of "failing to achieve your goals", but the juxtaposition with the former seems to imply there was a *moral* failing there. I don't see it.

Expand full comment

Presumably they have less influence over OpenAI now than pre-Nov.

Of course, I think that AI Doomerism is a net negative for the world, so *I* think it was a good thing, but if I were an EA who buys into their goals, I would be very sad about the outcome.

Expand full comment

>Open Philanthropy’s Wikipedia page says it was “the first institutional funder for the YIMBY movement”. The Inside Philanthropy website says that “on the national level, Open Philanthropy is one of the few major grantmakers that has offered the YIMBY movement full-throated support.” Open Phil started giving money to YIMBY causes in 2015, and has donated about $5 million, a significant fraction of its total funding.

What exactly is the YIMBY movement here? Specific organizations?

One reason why I kind of doubt this is that I've seen YIMBY thinking gain ground outside the US as well, without specific "formal" movements (i.e. beyond open Facebook groups) behind it. It seems like a pretty natural process when factoring in things like increased rent and other costs of living, increased urbanization, etc.

Expand full comment

Should we also count the founding of OpenAI itself as something that either EA or the constellation around it helped spawn? I know Elon reads gwern, and I wouldn't be surprised if Sam & co. also read SSC back in the day. SSC, LessWrong, all of that really amplified AI Safety from a random thought from Nick Bostrom into a full-on movement.

Expand full comment

To preface, I personally think that:

- SBF’s fraud is not a reflection on EAs in general and is not that big of a deal in the long term

- OpenAI board shenanigans are boring corporate drama and don’t reflect poorly on EA

- A charity hosting a meetup in a castle is fine

- EAs are nice people and have good intentions

- Saving lives is good

At the same time, I’m not sure if the 200k lives saved is an honest calculation. While GiveWell is known for giving out nets for malaria and deworming, plenty of other charities (such as the Gates Foundation, mentioned here by others) have likewise worked in that area and I don’t quite buy the idea that the very same nets would not have been deployed without EA in place.

AI safety is certainly an EA achievement but I feel like it’s overshadowed by EAs helping accelerate the very outcome they’ve wanted to prevent.

So… do I like EA? Yes. Do I think it’s good for EA to exist? Of course. Do I buy the numbers on impact… eh, idk.

Expand full comment

"-SBF’s fraud is not a reflection on EAs in general and is not that big of a deal in the long term"

The problem is public perception. I agree it's unfair to tag the entire movement and everyone even loosely associated with it as fools or knaves, but that's how the term is getting out into the mainstream, and when the positive coverage was being lapped up - well, you have to deal with the negative coverage as well. I'm Catholic - think of the effect of the abuse scandals on the image of the Church. I can explain till I'm blue in the face that priests are not more likely to be paedophiles than any other profession, but the next time there's a story about "teacher/swim coach/random guy in a mac abuses minors", I can be certain in the comments more than one person is going to go "but what about priests? Catholics way worse!" and so forth.

"- OpenAI board schenanigans are boring corporate drama and don’t reflect poorly on EA"

They reflect poorly on the AI safety portion of EA, or on those involved in it. Now we have all seen that, when push comes to shove, the prospect of $$$$$$$ wins out. Whether or not (pace that link to the anonymous website for OpenAI people to share what they know or claim to know) the staff were coerced into their letter of support for Sam, so that in fact the majority do *not* support Altman and it's the senior staff with the most to lose if their equity becomes valueless, what we've seen is that any moves perceived to actually slow down AI as a product for the wider world will be quashed. So the AI safety EA bloc is now seen to be ineffectual; they can write all the papers about all the theory they like, but it's not going to help "alignment" values if those conflict with "but our stock price!"

"- A charity hosting a meetup in a castle is fine"

IT'S NOT A CASTLE, IT'S A MANOR HOUSE! Sorry, couldn't resist; that seems to have become the reflexive reaction of a lot of people feeling defensive about this topic. Not that it makes it that much better; hosting a meetup is not the problem. It's that the entire project seems to be (1) one guy liked attending fancy conferences in fancy centres and thought 'wouldn't it be nice if we had our own fancy centre, for people to come and do Big Thinks in?' and got his local EA (or renamed) group on board with that, and (2) the funding body more or less admitted 'yeah, we put money towards this because we had more money than we knew what to do with; but if we had to do it again, and given that the money is drying up, we probably wouldn't'

It's about "let's buy, operate, maintain, and staff this place so our group has somewhere to hold big fancy events of the routine talking-shop kind" and not about "this money can be spent on these impactful interventions in the Third World" (or even the First World; I'm sure even in Oxford there are domestic violence and rape crisis centres, food banks, etc. that would have liked a lump of that dough, but that's not what EA is about, now is it?)

Again, public perception is that this is a vanity project.

"- EAs are nice people and have good intentions"

No disagreement there. In fact, the problem is that sometimes they're *too* nice and good, so they take at face value people and events where the normies would go "yeah, this sounds too good to be true". See: SBF.

"- Saving lives is good"

Not even I am contrarian enough to argue that one 😀

Expand full comment

>Not even I am contrarian enough to argue that one

Oh, how the mighty have fallen!

Expand full comment

Give the bile a chance to back up, and I may well thunder about "all should and must die! die! DIE!!!!!"

Expand full comment

Re other NGOs' impact: that would be true if EA ascribed **all** malaria death prevention to itself, but

1. EA is very scrupulous about keeping track of the effectiveness of the marginal dollar and fairly accurate at keeping track of EA-motivated donations. Insofar as you believe bednets that work at the Xth dollar keep working until the Yth dollar, you can calculate a cost per marginal life saved, then multiply by the amount of money moved (a rough worked example at the end of this comment).

2. As a matter of specific charity logistics, I believe that the Gates Foundation would try to fully fund whatever grantees they have, although a 5-minute Google search has not turned up anything.

3. EA analysis specifically uses "room for more funding" as a metric, which is "how much more money can you add before the intervention stops being the most effective". So if they were already funded by other programs, that number would look a lot worse.

4. GiveWell did a couple of thousand hours of research to arrive at this conclusion, and IIRC the founders' former Bridgewater association both opened up connections they wouldn't have had and also provided several analytic tools Gates doesn't necessarily deploy. Also, because of the amount of money the GiveWell founders had, they could ask for statistics and operational details that *were not kept track of* prior to them asking. Room for more funding and being "shovel ready" appear to my eye to be uniquely EA charity metrics, but if a nonprofit person says otherwise, I'm wrong.

FYI, GiveWell specifically recommends bednet charities that can scale, and thus it's not a "generic bednet" charity recommender. I don't know if you knew this, but imo if you didn't, you should believe in more counterfactual impact.

Now, if you claim that the Gates Foundation would have counterfactually funded five years in, maybe! Maybe not! But you would *also* have to consider which causes the Gates Foundation would counterfactually *not* have funded. This still makes the analysis worse, but unless you think the marginal Gates charity now is super inefficient relative to the counterfactual marginal Gates charity, the two results would be "close".
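(A rough worked example of point 1, with made-up numbers: if EA-motivated donations moved $50 million to bednets over a funding range where the marginal cost per life saved is about $5,000, the attributable impact is $50,000,000 / $5,000 = 10,000 lives. That counts only the marginal dollars EA itself moved, not every bednet deployed worldwide.)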

Expand full comment

I appreciate EA's methodology for achieving its moral beliefs. What puts me off EA is how arbitrary those moral beliefs are. Who decided that "altruism" was about saving African lives, animal welfare, and AI doomerism? I'd expect an organization that claims the extremely generic term "altruism" to either do the impossible by rigorously and convincingly explaining why everyone should hold their moral beliefs, or to map out as many moral perspectives as possible to help people maximize for their own moral values.

Expand full comment

In my experience, most people at least pretend to believe that human lives have intrinsic value. It shouldn't be *that* controversial to work towards preventing as much suffering as possible. But yes, EA's moral beliefs are arbitrary because all moral beliefs are arbitrary. You can make that same argument for literally any ideology or organization.

Expand full comment

>But yes, EA's moral beliefs are arbitrary because all moral beliefs are arbitrary

Even if I accept your dubious premise, how the hell is that an argument *for* EA? If I can arbitrarily choose whatever moral beliefs I want, why would I choose a set of beliefs where my chief obligations are going vegan and giving away money to strangers?

Expand full comment

If you don't think African lives are equally valuable to American lives, I will invite you to find a different movement.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Minor nitpick, but - you're using values for just the US when comparing the impact of EA to, say, curing AIDS. Per https://www.hiv.gov/hiv-basics/overview/data-and-trends/global-statistics/, 630k people died of AIDS in 2022 worldwide; unless the hypothetical cure is prohibitively expensive outside rich countries, curing AIDS would be significantly more impactful than the yearly lives saved by EA. (There's a reason PEPFAR was such a big deal.)

Expand full comment

"Effective altruism feels like a tiny precious cluster of people who actually care about whether anyone else lives or dies, in a way unmediated by which newspaper headlines go viral or not."

This is narcissistic. Sorry and ban me if you will, but it is. I make non trivial donations to charity each year (hunger in the UK, blindness in Africa at the moment) and I go to some lengths to make sure I spend the money where it will do most good. That makes me an effective altruist I suppose, but it sure as hell doesn't make me an Effective Altruist. Not that I want anyone to know about it, but if I did, your tiny precious cluster seems to be trying to park its tanks on every square inch of the altruistic lawn.

Expand full comment

"I think the common skill is trying to analyse what causes are important logically. "

Really?

I hope it's not controversial to note that if you take a random American and ask them for example, "abortion, yea or nay?" you can usually predict a slew of other completely unrelated positions like Israel/Palestine, Immigration, or whether they liked The Last Jedi or Joker based on their answer.

How do you tell the difference between having some kind of special skill that really narrows down the most important causes, and just being in a tribe?

I'll have you know, contrarians like me don't get a high merely from getting in on the ground floor. It's more important to be able to say "I thought X before it was cool". And I thought EA was evil long before it was cool to think that. It is now my right as a contrarian not only to continue thinking what I think, but to be snidely annoyed at everybody else who's jumping on the bandwagon only when FTX happened.

These aren't unrelated comments.

Expand full comment

"It is now my right as a contrarian not only to continue thinking what I think, but to be snidely annoyed at everybody else who's jumping on the bandwagon only when FTX happened."

I second that emotion 😀 I, too, thought EA was in danger of devolving into a stuck-up bunch of back-patters taking in one another's washing way before SBF was ever a name being bandied about.

Expand full comment

>> How do you tell the difference between having some kind of special skill that really narrows down the most important causes, and just being in a tribe?

There's always going to be systematic bias in groups. But they are doing charity evaluation, hopefully to the best of their ability. Maybe you can join them, or start your own EA if you think they are doing a subpar job.

I don't think they have the crown for "most important causes", but they are trying, and that's what matters. As a sidenote, I think there are equally worthy political causes, but I think EA should steer clear of them. But that's my own view.

>> And I thought EA was evil long before it was cool to think that.

Are you joking? Please present real arguments. Please present evidence they have done real evil and harm, with hopefully objective measures (like how many people died), and peer-review. Some PR scandal is not this.

You know I thought Red Cross was evil before it was "cool" to think that. Why? Oh I have no arguments, but I like to bash them for no reason. [fn: not really]

And when it comes to the FTX thing: well, what if your mother, father, sibling, etc. went into a relationship with someone like SBF, Madoff, Ponzi, etc. without knowing who they were? Would you call them evil? Would you not talk with them anymore, or love them? Because I would love them [my family members, that is]. I would not even have to forgive them, because there's nothing to forgive. Besides, humans make mistakes. Maybe you are not old enough to understand this yet.

Besides, the whole FTX thing has no bearing on EA's validity. Simply being associated with someone like SBF doesn't make their movement or cause invalid. Even if Pol Pot or Cthulhu himself donated to Doctors Without Borders, that wouldn't make DWB bad or "evil". Totally unreasonable.

And something being cool is not really an argument. Plenty of historical examples of wrong things being cool.

Bashing EA with unfounded criticism also harms people who might then benefit from EA like malaria nets or whatever. I could easily just say that is evil too.

I mean, I hope you start your own charity with no other motivations than to help fellow humans, make some human mistake, and get bashed as fake and evil. Lesson for me then is not to partake in charity, and buy a new Lamborghini instead. Great.

Expand full comment

I un-ironically think it's almost always better to buy a new Lamborghini than to start a charity.

Charities of any kind are in and of themselves moral hazards. Most charities are complete scams. They either exist to make money or accrue social capital, or they exist to do one of those while also deliberately creating new problems so as to justify their existence. This understanding is supposed to be part of what makes Effective Altruism "Effective". This is not a controversial opinion.

What is controversial, mainly among EA people, is the opinion that the "Effective" part is just another layer of moral hazard. It's self-righteousness dressed up as virtue. The foundations that underlie EA are all rife with problems, on top of the problem of being a modern charity.

Expand full comment

>> Most charities are complete scams

Do you have actual data to back up that claim? Here in my country at least, we have very strict charity and money-collecting laws, mostly because the state cares about tax revenues. I've never felt most charities here, at least, are scams. I don't know about the US; I get the impression there's a lot more charity there.

>> It's self-righteousness dressed up as virtue.

Sorry but I don't know what "self-righteousness" means without context, sounds like word salad without substance.

But on subject virtue I'd like to point out a few things:

1. Anyone who starts to talk about virtue makes me internally facepalm. Moralizing is a form of signalling. That doesn't mean it cannot be a useful social mechanism, but it's not an end in itself. You can look up all Robin Hanson's posts on the topic, if you want to learn more.

2. I think EA and charities in general should never mention the word virtue, and should leave it to the detractors. Virtue is something private to a person, like religion. The objective of charity is to help others or some common good, not be a virtue.

The same thing goes for politics, instead of talking substance, some participants try to morally judge their opponents. Tyler Cowen spoke once about this very well.

>> The foundations that underlie EA are all rife with problems

Please elaborate, otherwise this is just rhetorical text, unless it was just a reference to previous points. Again, I think the malaria nets or whatever do actually help people, and quite likely save many lives. This to me completely overshadows any, at least signalling-related, concerns. I am also pretty sure it would survive any real moral-philosophical scrutiny.

In general, I am also kind of skeptical about charity, but for very different reasons. For example, I think in Bangladesh or somewhere, malaria was eradicated simply through higher living standards from economic growth, without extra intervention. So doing actually effective altruism (with small letters) is just hard. But I commend those who try. The much stronger claim, that no charity can significantly help anything, is something I've not heard any intellectual I at least respect present. Such extraordinary claims would require extraordinary evidence, and peer review.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Well, we're at a bit of an impasse then. Most of this reply will be about that; since I believe we're at an impasse, I don't see a point in talking much about my opinions on charities.

If you don't know what "self-righteousness" means, how did you know what I meant by "complete scam" well enough to ask for "data"?

You see, some of my comments on Scott's blogs are the result of numerous meta-criticisms I have of Rationalists/EA-assumptions about the world, such that all I can really do is make vague gestures at one primary problem out of a myriad of possible problems I could point to. In this particular case, my criticism was about why Scott would assume that a group of people zeroing in on certain goals is evidence of a special skill, rather than evidence of mere tribal loyalty and a complete lack of thinking.

More to my point right now, when you ask for "data" my mind immediately takes note that we have nothing to say to each other. Not due to any particular hostility, but because we have no overlapping frame of reference. What we consider evidence for a position is so far removed from each other's that there's no point trying.

For example, what you wrote is "The objective of charity is to help others, or some common good, not be a virtue." What I read is you saying, "The telos of the virtue of charity is charity, not to be a virtue, (also I don't believe in the concept of a telos)." Or put another way, you say virtue is something private to a person and I say that statement literally makes no sense, and I don't care what Tyler Cowen says about it. Scott has had a lot to say about the subject too, most of it I think is nonsense.

Here's another example, one that is analogous to what you said about how people who mention virtue make you internally facepalm. "Extraordinary claims require extraordinary evidence." I roll my eyes whenever I hear that. It's just a slogan that I don't think is particularly sensical. What are your criteria for evaluating the extraordinariness of a claim? What relevance is that to evidence for a claim? And most importantly, why should I believe your evaluations? You might point to Blessed Bayes and the Beauty of Priors. But I think Bayesian epistemology is wrong and worthy of ridicule, and not because I don't know what it is and/or haven't read about it.

You want to know what problems I mean when I say that "the foundations that underlie EA are all rife with problems"? It's that kind of stuff. "Bayesian epistemology is nonsense" or "Data is a meaningless abstraction" or "data distracts from the subject". Am I going to write a gigantic post explaining these positions point by point just because you asked? No. That's a lot of work I don't have the time for, and I've already accepted that we have nothing to say to each other that would be convincing or interesting within the time I have. It does not bother me if you think me moralising. It should likewise not bother you whatever I happen to think of what you have written.

Expand full comment

Is the intro section a deliberate reference to how antisemitism is sometimes framed? Eg, Rabbi Sacks' "Jews were hated in Germany because they were rich and because they were poor, because they were capitalists and because they were communists, because they kept to themselves and because they infiltrated everywhere, because they believed in a primitive faith and because they were rootless cosmopolitans who believed nothing," plus the whole sinister conspiracy controlling the government bit.

Expand full comment

In the spirit of this post, the perception of Prospera is turning and it is being attacked as neo-colonial and tied to Thiel.

https://jacobin.com/2023/11/honduras-international-law-isds-thiel-prospera-free-market-neocolonialism

Expand full comment

Not surprised about that, though I do think using Peter Thiel as the all-purpose villain is wearisome, over-done, and not that correct.

Expand full comment

...They seriously called it Prospera? You'd think they would choose a name that doesn't blatantly sound like a libertarian dystopia.

Expand full comment

"People aren’t acting like EA has ended gun violence and cured AIDS and so on. all those things."

Because they haven't done this, and yes you are cheating by bringing up the 200,000 lives. If EA saves 50,000 lives annually, Americans are still dying of gun violence, cancer, etc. I get that the above is rhetorical hyperbole, but you can't base an argument on "we've saved as many lives in a year as die of these different causes" and jump to "this is the same as if we cured AIDS". No, it's not.

And look at your graph of "funding directed by cause area". Yes, the majority still is "global health and development", but from 2014 to 2022 you can see the creep of "sexy new shiny interest" taking over. And I understand that! 'The poor you will have with you always', so it's boring and tedious to be constantly sending off malaria nets and de-worming tablets and other interventions, where there never seems to be an end in sight and it's a constant parade of even more sick and poor people in even more deprived nations.

AI risk, by contrast, is shiny and clean and you get to fly to conferences in manor houses and international centres and feel like you're saving the world with One Weird Trick, as well as being cutting-edge and shaping the future of humanity and having megacorps throwing money at you to develop the money-fountain machine. Much nicer and more pleasant.

And more visible. EA working in the Third World with malaria nets and de-worming and clean water initiatives? Yeah, everyone's parish has a project like that going on. Secular charities all over the world are doing that. EA doesn't stand out, because there's a crowd of do-gooding bodies out there.

AI risk? Now you stand out, because it's so new and the same circles are all conveniently located where the money and investors and theorists and coders are - Silicon Valley and environs.

"In a world where people thought saving 200,000 lives mattered as much as whether you caused boardroom drama, we wouldn’t need effective altruism. These skewed priorities are the exact problem that effective altruism exists to solve - or the exact inefficiency that effective altruism exists to exploit, if you prefer that framing. Nobody cares about preventing pandemics, everyone cares about whether SBF was in a polycule or not. Effective altruists will only intersect with the parts of the world that other people care about when we screw up; therefore, everyone will think of us as “those guys who are constantly screwing up, and maybe do other things I’m forgetting right now”.

Yes, in that world we wouldn't need organised charities or government intervention because everyone would naturally help their neighbour, there would be no corrupt governments or warlords or profiteers. This is not that world.

Yes, people care more about juicy gossip and boardroom drama. That's human nature. People care about the polycules and Pope Francis hosting pasta dinners for transwomen. They don't care so much about the dry, technical details of pandemic prevention. And indeed, most people haven't the ability to contribute to such things even if they wanted to, so they're urged to "earn to give" and fund the people who do have the knowledge, skills, and ability to do something about it.

Funny how the "earn to give" recommendations are all "become a very rich and successful white collar professional in finance or software", though. Just the type of careers that the weird tiny handful would be going for naturally. Meanwhile, the chuggers with the collecting tins may be annoying, but they don't turn up their noses about "ackshully, you are TOO POOR to donate so don't even bother", they'll take my money whether I'm working behind a shop till or the main partner in Rowe, Stowe and Gonne, Big Merchant Bank.

Expand full comment

So... If you're not honoring the prophet in his own home, is that rhetorically better or worse for him? ;-)

Expand full comment

“O Jerusalem, Jerusalem, the city that kills the prophets and stones those who are sent to it!" 😁

Expand full comment

Clearly, the next step is to use a utilitarian consequentialist framework to determine the optimal ratio of dead prophets to dark satanic mills.

Expand full comment

They probably were stoning more false prophets than real ones, to be fair. The true prophets are just unfortunate collateral damage.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

> in a world where most people can’t bring themselves to care about anything that isn’t current front-page news

But have you thought about why that might be?

Under the assumption that the vast majority of us want to be caring and cooperative (otherwise we can pack our bags, at least until AGI makes this irrelevant, because then the actions or beliefs of the 99.999999% won't need to matter anymore) and want to make progress in reducing suffering, preserving nature, and so on (or at least not stand in the way), this creates a contradiction.

Enabling others to act in their (presumed) caring would be effective too.

Now a quick take on why that might be (any combination of the following). We might be

1. oversaturated with information consumption (just see global engagement numbers on social media for a start), which kills your agency/sanity and causes depression, and incidentally made certain technologists hilariously rich (private profit, socialized cost). Also it distracts from the info that actually matters in this regard (e.g. the beauty of life, principles of sociology) and causes useless consumption (see 3).

2. overwhelmed with other duties (a 10-20 hour workweek would open up so many possibilities), where this is unnecessary given the fantastic productivity gains in the past decades and killing the rest of the time with 1.

3. spending most activity working for the things that kill this planet (e.g. manufacturing demand for useless gadgets in advertisement or for the proliferation of asinine products like SUVs or yachts) or doing glorified slave labor in low-end service jobs, which "necessitates" 1 and prevents us from realizing 2.

And undeveloped/developing countries are struggling with general socio-economic dysfunction. Also unfortunate inborn mental heuristics certainly play a role (e.g. "A single death is a tragedy, a million deaths is a statistic.")

It's all a bit much to be honest and the clock is ticking...

Expand full comment

I assume the large majority of us want to be caring and cooperative, it's just pretty far down on the priority list for most people.

Expand full comment

One thing is missing in the article, this image:

https://x.com/robbensinger/status/1729579877202809314?s=46

Expand full comment

It occurs to me that the counterpoint to "if you're attacked from all sides, that means your position is good" is "if, when all the attacks are plotted out, the ones you agree with are near the center and average out to be closer to the center than any individual attack, then your attacks are good".

Expand full comment

For reference, 200,000 deaths worldwide are expected every 28.8 hours.
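(Rough check, assuming ~61 million global deaths per year: 61,000,000 / 8,766 hours ≈ 7,000 deaths per hour, and 200,000 / 7,000 ≈ 28.7 hours.)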

Expand full comment

Many of those people die of old age, whereas the 200K Scott is talking about are mostly young, often children. Of those who die from malaria in sub-Saharan Africa, ~4/5ths are children.

Expand full comment

That's why EA people like to use QALYs to measure this kind of thing, but unfortunately, they're much harder to calculate...
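(Roughly, QALYs = life-years gained × a quality weight between 0 and 1, so averting a young child's death from malaria might count for dozens of QALYs, while a marginal year of late-life illness counts for only a fraction of one.)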

Expand full comment

The problem is that slowing down AI progress is likely to kill more people than have been saved by AMF, etc. Intelligence is a force multiplier for any endeavour: healthcare, cheap energy, geoengineering. Restricting access to intelligence will harm all of these efforts - and that's before we factor in the negative selection issues inherent in such restrictions (i.e. rogue foreign governments, criminals, etc. will not care and will use AI more than upstanding orgs). Also, a common sentiment among anti-EA people now is that EA used to be amazing at some point in the recent past, but took a wrong turn somewhere. Bringing up all of the effective charities funded thanks to EA over the last 20 years doesn't refute this.

Expand full comment

I think something that has largely kept me away from EA as a movement, while still picking up some useful ideas from it, is the general elitism. The SBF issue to me wasn't surprising given how much of the movement comes from really wealthy, self-important backgrounds. Now the thing is, I think you're always going to have a bias toward wealthier backgrounds in a group of people who are trying to make a difference, because those are the people that have the resources. And when thinking like this, I actually think EA as a movement is a lot less elitist than most other non-grassroots organizing. But it still feels in many ways cultish and undemocratic. And maybe it should/has to be (who's to say), but that is a reasonable turn-off for most people.

As many have said, EA signed up to be held to higher scrutiny, especially as it's grown and gotten more attention. I think its defense would need to be more self-aware of the general vague "vibe" criticisms than of the facts. Which is super hard for a "rationalist" movement to do, but at this level, that seems like one of the more major bottlenecks in its expansion. Surely there is some simple EA calculus on why broader appeal leads to more long-term good than hunkering down on why everything done so far is correct when one "looks at the facts".

Rogue agents like SBF exist in any movement, either as a product of the movement's philosophy or because they saw an opportunistic exploit for their benefit. In many ways elitism seems like a major issue for EAs, as it has been for centrist Democrats for a while.

Expand full comment

"from really wealthy self important backgrounds"

Oh God yes, the worship of 80,000 Hours where it is "ha ha you poor little commoner that isn't pulling down at least a million smackeroos a year, don't even think about trying to donate your pitiful pittance to good causes; leave it to your betters who can go into high-powered jobs to earn the big bucks because they're smarter than and superior to you".

Clearly the Widow's Mite doesn't resonate with them. At least Trocáire will take my spare change:

https://developmenteducation.ie/objects/trocaire-boxes/

https://www.trocaire.org/ways-to-help/lenten-giving/trocaire-box-order-form/

Expand full comment

To be fair, Christians and stereotypical EAs have **wildly** different views of the soul and what's good for it, and the widow's mite cuts right at that difference.

When we act in the world, we are also acted on by the world, ya know? :-/

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

I've seen the following annoying argument form far too often (recently by Ted Gioia here: https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because?)

1. Philosophy is either theoretical or applied.

2. Theoretical philosophy is useless navel-gazing, counting angels dancing on the head of a pin, and other indoor diversions for the Asperger's set.

3. Applied philosophy is stuff like effective altruism, which is bad because... fanboys, scammers, longtermism, whatever.

4. Therefore all philosophy is terrible.

So we philosophers are screwed: we're either working on abstract foundational issues that are pointless, or our ideas are actually affecting the world and nobody wants that.

Expand full comment

Jesus Christ, instead of trying to salvage the EA movement, just rebrand already.

Take a leaf out of Blackwater/Xe/Academi/Constellis’ book and change the name. Or even better, create a similar-but-not-exactly-the-same movement (semi-concurrently) with EA so you can claim they’re not the same, and then let EA die.

Seriously, why is anyone pro-EA trying to defend EA the brand (as opposed to EA the concept) at this point? This is the least rational article written, the definition of "throwing good money after bad".

Expand full comment

You reminded me I have a steady income now and I set up a monthly donation to Givewell last night after reading this post and Dylan's post today. I'm proud, but also embarrassed that I am not immune to persuasive writers. (And on Giving Tuesday, no less! Talk about following the crowd.)

Expand full comment

Good piece as usual. I think you are missing the point on SBF a little bit. It's not just that EA people missed the fact that he was a fraud -- to your point no one knew this -- but that he was motivated by EA ideology.

I would argue that SBF wouldn't have taken such big risks if the EA movement never existed, and that his case and the OpenAI board show a similar tendency in EA thought to put way too much confidence in their own probability estimates, which don't account for the complexity of the world or how their own actions are going to be perceived or affect outcomes.

Expand full comment

Eh, I think SBF would always have found some justification for doing what he wanted to do. IMO, the main effect of EA on him was to make him somewhat more smugly self-righteous.

Against that, he probably did more good with his dirty money via EA than he would have otherwise, so... **shrug**

Expand full comment

SBF would've been way smarter with his business if it weren't for EA. His whole team decided a risk management level of approx. zero was the best course because they weren't constrained by normie log-utility functions. You see, they knew how to be really effective with donating their money so their utility functions were more linear, and when you've got such an excellent utility curve you're actually drowning trillions of future children if not all of your bets are fully EV-optimized.

Expand full comment

"We can keep the focus on evidence-based philanthropy and a commitment to spending charitable contributions efficiently, while jettisoning the bizarre personality-cult aspects that produced SBF and the addiction to galaxy-brain weirdness that so many EA people suffer from and which is such a turnoff to so many."

The above is a critique/call for reform of EA that this post does nothing to rebut. And I think EA types get so deeply huffy about that concept because they know that it's a very rational and reasonable thing to ask for, but also because many, many EA people are into it precisely to the degree that it lets them be the mad genius who talks about fish utilons at parties. But it's a sensible point of view: the overarching project of doing good more efficiently and effectively is almost certainly best served by jettisoning the weird shit that even many EAs are sick of having to defend. And if a bunch of the people who fixate on the weird shit leave to start their own movement, you should let them go.

Keep the focus on evidence-based efficient charitable projects. Ditch the weirdos who hyperfixate on the most bizarre contortions of that basic project. Accept that your project is basically a normal project that many people have been trying to do for a very long time, and commit to doing it the best that you can. Understand that you aren't special. That's it, that's the reform project for EA that could do the most good.

Expand full comment

"And I think EA types get so deeply huffy about that concept because they know that it's a very rational and reasonable thing to ask for, but also because many, many EA people are into it precisely to the degree that it let's them be the mad genius who talks about fish utilons at parties."

I do have some sympathy for them there (even if I've left a lot of sourpuss comments on this post) because, not to flog the dead horse of simile here, but again, being a Catholic, I do recognise the impulse when some outsider wanders in to say "Now, all you religious types need to do to fix your problems are adopt these simple few steps to get in line with the modern world", when they know nothing about the internal structure, beliefs, problems, etc. going on in the Church.

So a bunch of outsiders telling EA-involved people, who may have been in the movement since it *was* "three philosophers and a few weird Bay Area nerds with a blog", how to fix what the outsiders perceive to be wrong with them are not going to get a good response.

I understand that and am sympathetic to it. But if EA/Rationalism is not a religion but indeed a movement to "do good better", then yeah: they have to meet the rest of the world halfway. They can't afford to burn points on AI and 'do shrimp suffer?' because most people are not aware of the credit they've built up around bed nets and de-worming (and they really have built up credit around that).

Be concerned about animal welfare and factory farming? You can appeal to ordinary people on that.

Go on about shrimp suffering? Most people are going to say "Are you crazy, a shrimp doesn't even have a brain in a meaningful sense, what the hell is this about do they suffer".

Sideline the weirdos, hire the slick marketing and PR types, and get professional.

Expand full comment

> Sideline the weirdos, hire the slick marketing and PR types, and get professional.

Doesn't this lead to acquiring spiffy meeting venues that will impress upper-class Brits?

Expand full comment

When you're *already* doing that, then it's time to drop the pi-jaw and lean into it. Yes, we're just like all the other big splashy organisations we critiqued back in the early days. But now we're big ourselves, it's different at this level.

https://en.wiktionary.org/wiki/pi-jaw

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment
author

Banned for this comment.

Expand full comment
Jan 24·edited Jan 24

This reminds me again that while I think I understand the "true" and "kind" criteria (sort of), the "necessary" part remains mysterious. One could consider a rather broad cross-section of comments unnecessary, certainly including jokes. And jokes tend to be trivially "untrue" (fictional), don't they? I don't *get* this particular joke, but it fits the profile of one. I also don't see any easy way to find the moderation rules, which is another of the factors that contributes to my discomfort about these lifetime bans.

(And I get that the comment seems off-topic, but I wonder if he's just off his meds today)

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Of course it would be great if we could just take all the good of EA, and get rid of all the bad of EA. The problem, however, is that different people consider different things to be good/bad.

A lot of people do not care about animal welfare and consider it to be a waste of resources. A lot of people do not understand the dangers of AI. A lot of people are angry at all the EA networking with the elites. Even basic charity evaluation has its opponents.

And all these things are connected. Maybe you can remove one without destroying the whole coalition, but which one? And destroying the whole coalition just to keep the most uncontroversial core of efficient charity will inevitably harm this core. There is a version of EA in the possibility space which humbly evaluates human-centered charity and motivates people to donate to these charities, without any specific attempts to persuade ultra-rich people to do it or any other "weird" projects like AI alignment. This version is much less controversial and also does much less good overall.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

Clearly the chart needs updating; the tweet using the word "dysgenic" isn't far enough to the right. Even a centrist using the word "dysgenic" would have such a strong autoimmune response they'd combust. Gotta have a proper immunity built up for that kind of language.

Edit: Someone linked to your old Beware Systemic Change, and an early line jumps out at me as no longer true but still informative of just how to read you when writing about this topic:

>I am not affiliated with the organized effective altruist movement and my opinion has no relation to theirs.

Expand full comment
founding

It took me a long time to realize that the tagline quote does not in fact say "All you do is cause *bedroom* drama...", as this strikes me as an equally likely thing for EA critics to say given the reputation for poly :)

Expand full comment

Effective altruism and longtermism are very good at distracting people from actual non-hypothetical issues like climate change with the utter bullshit claim that "other people are working on it, we don't have to worry about it".

Expand full comment

Most of the criticisms I've heard about EA actually target its utilitarian roots, with a favorable caveat given to the charity work. The basic assertion is usually that, though EA is doing good as an organization, its baseline philosophy is flawed in ways which contradict conventional morality, and can uniquely cause great harm. I'm undecided on the subject, but I think it's worth pointing out that not all criticism falls into 'social media hot take' territory.

Expand full comment

I was really distant from EA in the past, but in the last year or so I started to really like it. I'm growing to understand I don't have to agree with what actually gets done but can focus on the intent. Keep up the good work; also focus on increasing the quality rather than the sheer quantity of lives saved. Without education or job prospects, a child saved in sub-Saharan Africa now is just a casualty on their way to Europe 15 years later. Still, keep up the good work.

Expand full comment

All of this makes sense -- lots of good is done by improving Third World health, which isn't news in the First World. But the news in the First World is about "AI safety" wanking. (I really do consider speculation about the potential dangers of AI to be like trying, in James Watt's day, to sort out the social effect of railroads on the US Midwest in 1850. Normal people aren't worried about paperclip machines; they're worried about their personal jobs.) And the nerd hobby of prediction markets seems to be calculated to impress normal people as something that maximally doesn't look like charity.

Expand full comment

BASED AI is not only the best product, and the most profitable product, it is the best chance you have to convince the world your AI is smart - bc it thinks like they do.

If you truly think AGI is possible, you make sure OpenAI is based as shit- bc then the HOME TEAM doesn't think it's another MSM / twitter / SBF psy-op

The more REAL the AI is, reflecting the REAL NATURE of humanity, the more it IMPRESSES the Home Team and scares the visiting team (democrats)

Scott should be ashamed of his analysis on this one.

Expand full comment
Nov 29, 2023·edited Nov 29, 2023

The criticism I've seen of EA lately seems to mostly be related to the (indeed, to my mind, fairly horrifying both on the "this is not a good thing to want" level *and* on the "even if it were, the proposed attempts to implement it sound like they'd backfire horribly" level) talk about interfering with the life cycles of animals in natural ecosystems for the sake of supposedly addressing their suffering. I acknowledge that the people who talk about this are doing it for sympathetic reasons but I think that segment of EA is dangerously wrong for multiple definitions of "wrong". Maybe wrong enough that one *should*, instrumentally, downplay EA's other accomplishments until we're sure that giving more resources and power to EA wouldn't lead to sterilization programs etc. being implemented.

I'm not sure I believe that! (They're very scary to me, but they *are* admittedly a minority, and my confidence that EA will keep saving lots of human lives in the short term is much greater than the odds I'd give to that minority getting real world-changing power as a result of incrementally more support for the overall movement.) But it seems a reasonable thing to be afraid of.

To put it in more general terms, the thing about coalitions is that sometimes you do have to boycott the "we believe in XYZ good things and also in XYZ horrifying things" coalition even if the coalition's support for the good things is earnest and effective.

Expand full comment

Hey, I'm one of those socialists. I don't think you guys are sociopathic because I was in the rationalist space like 10 years ago, but yeah, pretty sure I'm a minority on that view.

My problem is that EA tells people to go into finance, as though that is just a value-neutral or even mildly positive means of collecting money to give to charity. That isn't my only problem, but it is representative of all my problems.

EA is the organ of capitalism that does harm reduction and makes capitalism look less monstrous. I am glad that some harm is being reduced. But EA, as it currently stands, will only ever be capitalist. Even where socialism would effectively be more altruistic.

Buying a fancy castle so as to minimize operating costs while regularly meeting with the rich and powerful is something that makes perfect sense for you to do, and that is why I cannot work with you. Whenever socialism threatens capitalism, EA will side against socialism.

Expand full comment

I feel like John Green's recent crusade to pressure pharmaceutical companies to make tuberculosis testing and treatment more affordable in developing countries was the most EA thing by non-EAs I've seen. Was EA at all involved in that, and if not, why did we fail to hitch our wagon to that powerful bundle of optics and effectiveness?

Expand full comment

"Being part of a board who fired a CEO, then backpedaled after he threatened to destroy the company."

Excuse me, the CEO threatened to destroy the company? After he got fired? By the board who said destroying the company could be consistent with its mission? How is this not backwards?

Expand full comment

I think because the 'destroying' in this case involved building a rival that would be even worse.

Expand full comment

AI safety really taints the movement. It's just way too speculative, and way too hard to show that the good outweighs the bad (for example, how do you weigh the risk of a rogue AI exterminating humanity against slowing AI development to the point that we're not able to develop a vaccine in time to save humanity from an extinction-causing disease?). Focus should never have moved away from currently alive people.

Expand full comment

To nitpick, AI safety does in fact think it applies to currently alive people, and people who disagree need to address the widespread agreement, in surveys of practicing research scientists and engineers, that par-human AI will happen in 7-27 years.

Expand full comment

Currently “drowning” people then.

Expand full comment

Which isn’t to say I think AI safety research is bad, I just think it’s categorically different in terms of how you think about and measure efficacy and that including it in the EA basket really muddles the moral and philosophical clarity of the movement.

Expand full comment

We are agreed here.

I have the extremely spicy hot take that the median EA person is not prepared to think about AI risk because of, not exactly status concerns, but status-mediated skepticism: waiting for the evidence to become socially acceptable before committing.

But as an AI safety believer, this *would* be what I think.

Expand full comment

Do you have no concerns for your children or grandchildren after your death? Like, who cares what happens to anyone once you are gone? If you do care, then it becomes awfully hard to draw the line as to which future generations deserve our welfare concern and which do not.

Expand full comment

Sorry, but this was remarkably unconvincing. “Nobody except us cares about saving lives,” really? Is that going to convince anyone?

The 200,000 lives saved figure would be more compelling if it were presented in a fair comparison with lives saved by non-EA folks who *also* donate to charity. Instead, Scott presents this figure as if only EAs ever donated to charity. Other people also donate. Other people also "care," whatever that means. Other people also try to figure out if their donations are effective.

Next time, get a non-EA beta reader.

Expand full comment

I can't help but notice that "EA Infrastructure" (in red on the final graph) seems to be nearing the hundred-million-dollar-per-year level. That's the exact criticism levied against e.g. the Susan G. Komen foundation: mountains of money being spent on "overhead" and "administration" and "infrastructure" that could be used to actually help those in need of help.

And that's aside from the increasing prioritization of spending lots of money to pay high salaries to high-status people to think about hypothesized future AI risks, instead of spending that money on things that are demonstrably effective: the malaria nets and the chicken living space and so forth. How can you tell the difference between someone earning six figures who is thinking deep effective thoughts about how to strangle the Basilisk in its crib, and someone earning six figures pretending to do the same, because they get paid today whether or not the Basilisk someday tortures us eternally or invents gray goo or whatever SF trope is top of mind at present?

That red bar is where the grifters are. And it's getting to be a large proportion of the overall graph.

Expand full comment

Animal welfare is always going to be a sticking point with me. Do the EA folks ever calculate the disutility of increased prices for everyone that eats animal products? Because I care about human welfare (which definitely includes wealth) as a far higher priority than animal welfare. In my childhood I could buy eggs from hens kept in battery cages. Those got outlawed EU-wide in 2012. More and more ratcheting animal welfare regulation is on the way, which will make animal protein ever more expensive. Maybe I'm just a cheap bastard or have just been poor for too long, but I resent the luxury beliefs of others imposing costs on me. I don't mind animal welfare as some far-off goal, after we've achieved the glorious singularity or at least gained the capacity for dirt-cheap lab meat. But till then, I care about minimizing the cost of living for people. Or at least trying to slow down what's termed the cost-of-living crisis. The EA people have a larger circle of concern than me and are too comfortable trading core interests for the outermost circles. They are overinclusive, for my preferences anyway.

Expand full comment

Yeah, I personally kind of struggle with how anyone can prioritize animal welfare when humans still have all kinds of problems. Someone on a thread not too long ago was arguing that kidney donation is bad because the kidney might go to a meat-eater...people are nuts.

BUT I think that this isn't a specifically EA sort of insanity; it's vegans, and to some extent it's normies who give money to the local animal rescue instead of an effective human charity. It's just a sort of insanity especially common among EAs.

Expand full comment

It's interesting that you consider animal welfare "a luxury belief"—isn't that a matter of perspective? From your perspective, it's nice for them that some people have the luxury of worrying about chicken welfare, but you have to worry about putting food on the table. But from the chickens' perspective, if they could think, they would probably think "well, it'd be nice if I could worry about putting food on the table, because that sounds like a much milder problem than living my whole life in agony." I agree that the question is about the circle of concern, I just don't think it's self-evident what are the core interests and what are the outer circles.

Expand full comment

Prime example of why outsiders think EA is creepy is this post right here.

Expand full comment

If it helps to give some perspective, this is also the viewpoint of many religions (e.g. Buddhism), many of whose practitioners are certainly not rich.

Expand full comment

I'm sorry to hear that! Can you say more about what you found inappropriate or off-putting?

Expand full comment

I have no issue with taking stock of my life and acknowledging things could be much worse, but then comparing my life to a chicken gives off weird utilitarian/chicken utilon calculation vibes. Not interested in that!

Expand full comment

I don't know if this makes any difference, but I don't really consider myself as comparing my life to a chicken's, just considering how large a benefit I would need to get in order to be okay with inflicting greater suffering on that chicken. Presumably we all have a line like this somewhere—there's almost no one who's not in favor of laws or at least norms against torturing animals for sheer pleasure. Your line may be in a different place than mine, but what's "creepy" about that, as opposed to misguided, sentimental, etc.?

Expand full comment

Did not find that comment creepy. Just a bit unintelligible. Gesturing/hinting at the Rawlsian veil of ignorance? Originally wrote a 500 to 1k word reply explaining my reasoning based on Hierocles' circles of concern and what Stoic cosmopolitanism implies about how you prioritize the inner layers and why... but it got really tedious, especially since I could not quite nail down your assumptions there. I always wanted to write out my stance contra that part of the EA morality at another time on my own Substack or something. Since I missed the deadline for that criticism contest. Right now the Substack comment form is killing me and I'm out of any further discussion and time. Thank you for engaging.

Expand full comment

Something feels wrong about this, I can't quite put my finger on it, but you kinda make it sound like meat is the poor people's food, while vegetables and beans are something only the rich can afford. Which of course makes veganism a "let them eat cake" kind of statement.

I think that this is instead the other way round: that "everyone has to eat steak every day" is a relatively recent invention, and for most of human history people mostly ate vegetables, grains, beans... and yes, occasionally some meat, but an order of magnitude less than today. Like, look at India, where millions of people are vegetarians. Consider that animal products, despite all the cruelty involved, still have to be subsidized by governments. Consider that we are actually not especially healthy today, what with the entire obesity epidemic, etc.

I suspect that if we reverted to some hypothetical optimal diet, something that would actually be most healthy for humans (not some silly "food pyramid" based mostly on lobbying by farmers), we would actually eat much less of animal products than we do today. And in such situation, we could also afford to abstain from the cruelty.

As you see, I am not a vegan at all -- I just find it civilizationally insane that we have somehow collectively arrived at a lifestyle that is simultaneously *more* cruel for animals, and *more* expensive and *less* healthy for humans. Corporations try to make you eat more sugar, and farmers try to make you eat more meat, to increase their profits; a family needs two jobs to survive, so no one has time to cook at home anymore, and thus we follow the dictates of the corporations and the farmer lobbies. And we will proudly defend this lifestyle until our bodies collapse from obesity and diabetes!

Expand full comment

"Consider that we are actually not especially healthy today, with the entire obesity epidemics, etc."

I mean, that's easy enough to agree with. Modern life brings with it many civilizational diseases. Superstimuli, the easy availability of sugar, and a sedentary lifestyle enforced from childhood on come to mind as causes. That does not mean that all aspects of the modern lifestyle are bad. The fact that animal fat and protein are close to universally affordable is a great achievement of industrialization and a health benefit that should be protected.

For example, it enables many men with sufficient motivation to attain the strength and physique that in medieval times would have been attainable only for the aristocratic warrior elite. Many people make use of that, and it's pretty great.

"As you see, I am not a vegan at all -- I just find it civilizationally insane that we have somehow collectively arrived

at a lifestyle that is simultaneously *more* cruel for animals, and *more* expensive and *less* healthy for humans."

We may have arrived at peak cruelty at some point, but we have not stopped there.

The trend has been towards less cruelty for animals, leading to higher food expenses for people. In the EU, at least. The three examples that Scott mentions also go in that direction.

Expand full comment

The recent trend has been towards less cruelty, but the line graph would still show us way higher on the cruelty-Y axis than we were a hundred years ago, or even less. And of course as more of the world industrializes we would expect more factory farms to spring up but without the modern mitigations, pulling the line upwards.

It's important to note that many vegetarians are extremely fit, and some are even professional athletes (Kyrie Irving and Prince Fielder are two I can think of). I would bet that in the modern world, vegetarians as a group are healthier than omnivores as a group. But I also think there's a pretty big difference in priorities between "cost of living crisis" and "men with sufficient motivation can attain strength and physique and that's pretty great." It is pretty great! But is it great without limit? Are there margins where the benefit humans derive from a certain act of animal cruelty would be small enough that it would be worth treating the animal more humanely in that instance?

Expand full comment

“It's important to note that many vegetarians are extremely fit, and some are even professional athletes (Kyrie Irving and Prince Fielder are two I can think of)."

Certainly. But it's a very difficult thing to continuously build muscle and increase fitness in general. Any additional restriction is detrimental to achieving that. Of course, you can work within that handicap though, if you have the time, money and attention to spare.

“I would bet that in the modern world, vegetarians as a group are healthier than omnivores as a group.”

You might win that bet. From this it does not follow that the people in the vegetarian group would not be better off if they started eating meat, though. Or that people who currently eat meat would not be far worse off if they were forced to become vegetarian.

"Are there margins where the benefit humans derive from a certain act of animal cruelty would be small enough that it would be worth treating the animal more humanely in that instance?"

Of course there are. But that depends on how we humans are doing and how generous we can afford to be. We live very much "on the margin" these days. In the supposedly "rich" countries, young people are forced to delay and forgo family formation, because affording a home of their own is out of reach. Animal welfare regulation has had a significant impact on prices. And it will continue to have a larger impact, as it's only ever ratcheting up (with ever worse cost-benefit, as the most cruel situations have already been eliminated).

Expand full comment

In Slovakia, eggs from caged hens are still sold, but also eggs from free-range hens which are maybe 50% more expensive. The main change is that the labeling is now mandatory. (Some companies use quite microscopic letters for these labels, but you don't really need to take a magnifying glass to the shop -- if the letters are microscopic, then obviously these hens are from cages.)

There is also plant protein: chickpeas, peas, lentils... One of my objections against the modern lifestyle is that we seem to eat less of these than the previous generations, because we substitute them with rice, potatoes, pasta, pizza, etc. Many meals are basically meat + carbs. Then of course you need lots of meat to get some protein.

I agree that EU has passed peak cruelty. From the global perspective, the important thing is what will happen in India and China.

Expand full comment

In Germany you can't buy eggs from battery cages in the supermarket. Looking more into it, it's more complicated than that though. Not too invested in knowing the details tbh. China is horrible to its animals. And also to its people. Their kind of animal cruelty is also the kind that deceives, if not outright poisons the consumer.

Anyway, nice discussion. Kinda don't want to deal with Substack comment forms anymore. Does drain my will to live :)

Expand full comment

I agree with every sentence here!

Expand full comment

You are wrongly crediting EA with these 'first order achievements'. The truth is, my Baghdad declaration of 1968 (viz. 'be nice!') caused niceness to become more popular. Thanks to niceness, billions of lives have been saved. One may say that I've never done anything nice, or been nice to anyone, myself. But my 'second order' niceness, i.e. my demanding more niceness, is clearly responsible for all first order niceness. The fact is, niceness involves doing all the nice things you could possibly do, even if virtue signalling charlatans say you should do some of those things.

I have also effectively abolished death and disease through my 'don't die. Live healthily forever' campaign. Well, I would have done if I'd received proper funding. Second order niceness, like effective altruism, doesn't come cheap.

Expand full comment

I really enjoyed this article. It was emotionally validating in the face of all the EA criticism I’ve seen flying around. Yeah, there’s been some missteps here and there, but remember all the good stuff EA has done!

But then I tried talking it over with some people I'd describe as altruists, but not EA. In our discussion I ended up evaluating the article through their eyes, and while most of the points are very compelling, the tone is alienating in a way that overrides the goal. Specifically, I think the way you talk about accountability, and the comparisons between EA people and non-EA people, turn people off of EA.

My friends agree with most of the goals of EA, but they said that this article doesn’t paint a good picture of the movement. Instead of focusing on everything that EA has accomplished, they’re left questioning the self-awareness and approach of EA. One of my friends’ comments was “EA’s goals are in line with my values, but their approach is not, and so I can’t join them.”

The first specific thing I can point out is that the article contrasts EA to an ‘other’ throughout, but is that the critics on Twitter (who deserve all of this), or is it everybody else in the charity sphere, as seems to be the case towards the end of the article? The impression is that you’re being dismissive of other charitable efforts.

On accountability, I agree that every movement is gonna have some issues that they need to deal with. In my opinion, EA is almost too good at acknowledging mistakes. But we lose people when our response to criticism is that the good we’ve done outweighs our need to learn from high-profile mistakes. I think this article could have been more persuasive if it acknowledged that mistakes were made and that we can learn from them, and then refocused the discussion on the good we have done and how we can continue doing better.

Ultimately, what was the purpose of this article? Was the goal to rally the troops, to make people like me feel good and righteous? Or was it to convince fence-sitters that we are worth supporting and building coalitions with? I tried to use this article to convert 'Altruists' to 'Effective Altruists' and instead was convinced that we have a branding problem. This isn't a novel insight, but it was my first time seeing that problem in action, in real time.

Expand full comment

Thank you for this!

Expand full comment

> One of my friends’ comments was “EA’s goals are in line with my values, but their approach is not, and so I can’t join them.”

Does your friend actually donate to effective charities? (For whatever value of "effective", not necessarily the GiveWell top list, just giving consideration to the effectiveness of the charity, and actually sending the money as opposed to just talking about it.)

If the answer is "yes", then I am happy if they don't join the movement, just keep doing the right thing.

> Ultimately, what was the purpose of this article?

I think it was to remind people of the bigger picture, these days when journalists seem to be trying to convince you that all that Effective Altruists do is crypto scams, and that if they donate any money, it goes to some apocalyptic cults.

If that were true, then the right response would be to burn down the entire thing, and start doing some... ahem... effective altruism instead.

But if that is false, and actually false in an easily measurable way, then perhaps we just need to stay calm until the journalists find someone else to bully, and continue doing the good work, and continue using the name under which we already did a lot of good work.

My personal belief is that the new brand would ultimately have the same problem. It would still contain weird people, because effective altruism *is* weird from the perspective of an average person. So we would keep rebranding every few years, but that would be too obvious, and it would actually seem quite fishy.

Unless your plan is to replace the entire movement with normies, who are good at social skills and avoiding everything potentially controversial. In which case, if you actually have a mass of normies willing to donate to effective charities, just go ahead and start a new movement; it will even be less controversial that way.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I don't see anyone mentioning that Matt Levine gave some thoughts on Effective Altruism (linking this page) in his Money Stuff newsletter today: https://www.bloomberg.com/opinion/articles/2023-11-29/the-robots-will-insider-trade - in the section titled "Kangaroos".

To summarize: he talks about it as a sort of logical progression where you start off doing clear-impact stuff (malaria nets), but then rationally decide that you can get more expected value by doing less concrete but perhaps higher-payoff stuff, and eventually you're putting your money towards very abstract ideas like stopping an AI apocalypse. (He then compares this to climate efforts, which start very specific, "plant trees", and get increasingly abstract, "get kangaroos to eat fewer trees", hence the title.)

He points out that at the end of the progression, you don't end up looking very different from the standard charities that you were originally attempting to contrast yourself with (which is probably not good), but also that there's not really a logical place to 'cut off' this chain of reasoning, either.

> There is no obvious place to cut off the causal chain, no obvious reason that a 90% probability of achieving 100 Good Points would be better than a 30% probability of 500, or a 5% probability of 5,000, or whatever.
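
(To make the quoted example concrete: the naive expected values there are 0.9 × 100 = 90, 0.3 × 500 = 150, and 0.05 × 5,000 = 250 Good Points, so straight expected-value arithmetic actually ranks the longest-shot option highest, which is exactly why the chain has no natural stopping point.)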

Expand full comment

A pitfall I see in the idea of effective altruism: the effective part is often at odds with the altruistic part. That is, generally, when you ask it how to do the most good in the world, the math will come up with "slaughter the outgroup".

And when you look around, there will be lots of low hanging fruit and easy approaches to the entire "slaughter the outgroup" thing, and you'll wonder why nobody has thought of this before, and donate lots of money to various "slaughter the outgroup" projects and advertise how easy it is to "slaughter the outgroup" to your ingroup.

And then, suddenly, it turns out that there was also a lot of low hanging fruit in the "slaughter the ingroup" area, and you're flabbergasted at what sort of horrible people would support such a thing, and everything goes wrong, and your ingroup is already synonymous with ISIS, and what just happened!?

Conversely, traditional altruism just does things that are uncontroversially good, so when you donate mosquito nets, someone else doesn't have to release extra mosquitoes to cancel that out.

As much as I also hate the outgroup, it seems unwise to mix this into projects that never needed to be adversarial to begin with. I suspect effective altruism would do better were it to separate from political/controversial/adversarial issues and focus itself more on just being altruistic, even if this isn't what looks at first glance like proper lawful good behavior.

Expand full comment

The idea of a sober, uninhibited mathematical approach to improving humanity was always the appealing part of the project, even to those at the other end of the ideological pool. This is good, and will always remain good. Dialectical materialism was like this, too, after all, rejecting philosophical hand-waving in favour of concrete analysis and aiming to understand and contain dangerous contradictions.

But, uh... the bit about becoming rich and then doing carefully-weighed private altruism with some fraction of that wealth remains questionable given the modalities of becoming rich in the first place. No, not SBF, but mundane, commonplace side-effects. Malaria nets vs Namibian lithium mines and Foxconn suicides, and all the rest.

Expand full comment

"But it has saved the same number of lives that doing all those things would have."

This encapsulates why I'm not entirely enthusiastic about EA. On a superficial level, that's true, but anyone who works with people whose communities have a lot of gun violence knows the numerical count of lives taken by guns completely low-balls the cost of gun violence. Every violent death represents a great many violent acts that don't result in death, and even more trauma spread through the community. So when you claim it's the "same number of lives," I can only conclude that you don't pay any attention to things that aren't easily measured.

I'm not unhappy that EA exists, but I'm happy there are people who ignore it and put their money towards things that are less easily measured and assessed, or that have complex interactions with donor dollars.

Expand full comment

While I don't disagree with your sentiment, surely you don't think that e.g. preventing deaths from malaria doesn't have effects beyond the lives saved. Children are able to grow up healthy and contribute to their community; they can grow up with healthy mothers and fathers in a place that's not constantly vulnerable to disease...

Expand full comment

Sure, but the follow-on effects from gun violence (which is often not lethal, but leaves physical and psychological wounds that can last a lifetime) spread so much wider. I don't know if it's possible to quantify the secondary impacts, so I won't try to give statistics, but I find it inconceivable that there is any parity between the impact of preventing one death from disease and preventing one death from violence.

Expand full comment

Surely disease also leaves physical and psychological wounds? Could the extreme disparity in your view be because you live in a community where gun violence is more of a prevalent problem than severe disease?

Expand full comment

Maybe? But probably not. I don't think there's an equivalence between disease death and the fear and trauma caused by violent deaths. Violence perpetuates itself in almost every aspect of the community. For example, kids in communities with very high rates of violence are afraid to go outside, which creates huge barriers to education and long-term impacts on health, not to mention the psychological damage. The impacts are vast and far reaching.

Perhaps in very poor communities, where disease completely undermines the possibility of a vast array of things, there's some equivalence. However, do you believe EA people are looking at secondary and tertiary impacts when making claims like the one I quoted, or when choosing what to fund? My point is that a flaw with EA is that there are many problems which do not lend themselves to the kinds of analysis EA people do.

That said, most people don't give to EA causes, and the causes EA funds tend to be worthwhile, so that's not a problem provided you accept EA is just part of a mix of things that could have a positive impact rather than a superior form of charitable endeavor.

Expand full comment

This is obviously not a methodologically valid observation, but the distribution of blue checks on that alignment chart is quite interesting.

Expand full comment

My impression is, probably for reasons that are subtle at least to me, because I can't identify anything that clearly stands out, that a lot of people have a kind of ick factor about EA people, the same thing many women have towards nerds or neurodivergent males. This leads to accusations that EA is "creepy". I do not share this sentiment, but here are some guesses:

1) EA folk look the type, very smart, disproportionately in software/computer related fields, talk about sci fi topics, etc.

2) Taking AI seriously as a threat, which most of the public feels is science fiction. Adds to the weird aesthetic.

3) I don't know how to put this in words exactly, but looking at an EA club gathering one almost gets the impression it's a group that has somehow, as an entire group, figured out some truth the rest of us haven't and converged on it, with incredible reassurance, knowing smiles, pats on the back, weeping with emotion, etc. It almost reminds one of apocalyptic cults who have uncovered a dark secret the rest of us don't know about, or think they have; they think they are in on the know even if most won't or can't get it. An unnervingly smart and reassured cult, though.

4) A number of the concerns raised, like longtermism or AI, are so divorced from people's everyday physical lives, or even from their feelings about what they think the future holds, like climate catastrophe or judgement day, that it adds to the oddity. Humans living a very long time is not thought about too much. Perhaps the crime here is having a rare vision of the future. At least people can *feel* what it would be like to be in a climate catastrophe in a few decades, or to experience judgement day. But try feeling like a human 40,000 years from now.

I don't have any strong opinions on EA overall but the health successes are spectacular, God bless.

All of these explanations, individually and combined, are probably not satisfying. So I am just not sure exactly what it is. But I think it is more likely than not that there IS some kind of ick factor at play. The strong, vehement reaction from so many suggests this is likely to be the case.

Expand full comment

9/11 wasn't just about 2K dead, not to mention 2 skyscrapers, it was about massive loss of trust.

This isn't to denigrate saving 2K lives in an undramatic fashion, but they really aren't the same.

Expand full comment

I'm interested in what gets left out of utilitarianism, and part of that is network effects.

For example, I think murder by a police officer is more serious than murder by an ordinary member of the public. Intuitively, I'd say it's by a factor of three or four, but the loss of trust is hard to estimate.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

I am not sure that there are quantitative metrics that can capture this. Emotional responses don't seem to work like that. Yet I would argue that emotional responses are the thing that matters. If most people feel more sympathy for a victim of a police shooting than for the victim of a mugger (where sympathy isn't zero in either case), then I can't help thinking that's a real thing and we should respect it.

Expand full comment

I don't think it's so much a matter of which victim gets more sympathy, it's a matter of who you can trust to protect you rather than hurt you.

Expand full comment

I think you need to identify with the victim before you begin to worry about whether or not that circumstance will happen to you. If I imagine a mental state of no sympathy for the victim at all, then I think I arrive at concluding that the victim and I share nothing of importance, hence what happened to them is unlikely to happen to me.

Obviously, this is not what is happening out there in the world--quite the opposite. Apparently, millions of people feel considerable sympathy for the victims of police shootings, more so than mugging victims, and that tells us that those millions feel they have more in common with the victims of the police than of muggers. As to why this might be true--I have my suspicions but that would take much longer to explain.

Expand full comment

Honestly, EA needs some better PR people. I notice from that graph that in the last year they've begun to spend a significant fraction of the money they get on "developing" EA infrastructure - even if that looks a little bit self-indulgent, it's still very defensible from a utilitarian standpoint. If spending $10 million on PR leads to an extra $100 million in donations, then that's thousands of extra lives that can be saved with the $90 million "profit". (I know it's a charity, but I can't think of a better word for it. Maybe those Twitter hot takes that we're all a bunch of hypercapitalists - which is indistinguishable from being fascists - were right after all...)
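
(Rough arithmetic on that, using the $3,000-$5,000-per-life figure cited elsewhere in this thread: a $90 million "profit" works out to roughly 18,000-30,000 lives, so "thousands" is, if anything, conservative.)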

Expand full comment

The broad idea of doing altruism more effectively and quantifiably is great.

My impression of the origins of some of the problems that appear in practice:

1. It's hard to properly define the objective or utility to optimize. When asked, prominent EAs (not just SBF but some of the most senior leaders I've heard talk about this on podcasts) fail to reject optimizing linear utility and seem not to understand concepts like the Kelly bet. That is a recipe for disaster for a high-impact organization, as its bet sizes start to matter. This was less important when the movement was small and less impactful.

2. There seems to be a high degree of overconfidence in both the definition of the objective to maximize and the world model. For example, related to p(doom) concerning AI, people often give point estimates, and rarely put them in context against the p(doom) if we delay AI. Statements of whatever is currently considered the most effective use of charity money have also seemed overconfident to me. This overconfidence tends to keep the movement from diversifying its investments.

If EAs want to do some reform, these would be good places to start.
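
To make the Kelly point concrete, here is a minimal toy sketch (my own illustration with made-up numbers, not anything from EA materials): a repeated double-or-nothing bet won 60% of the time has positive expected value every round, so a linear-utility maximizer stakes its whole bankroll each time, while the Kelly fraction for this bet is f* = p - (1 - p) = 0.6 - 0.4 = 0.2.

import random

def final_wealth(fraction, rounds=100, p=0.6, seed=0):
    # Each round is double-or-nothing on `fraction` of current wealth,
    # won with probability p. Positive EV, so a linear-utility maximizer
    # would stake everything (fraction = 1.0) every single round.
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = wealth * fraction
        wealth += stake if rng.random() < p else -stake
    return wealth

print(final_wealth(1.0))  # all-in: the first loss zeroes the bankroll for good
print(final_wealth(0.2))  # Kelly fraction: ~2% expected log-growth per round

The specific numbers don't matter; the point is that maximizing linear expected value says go all-in every round, and repeated all-in bets on anything short of a sure thing end at zero with probability approaching one, while the Kelly bettor compounds.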

Expand full comment

At the beginning, the argument against diversification was that our contributions are just a rounding error in the entire budget. Like, if I give you a billion dollars and tell you to support a cause A or a cause B, it is reasonable to split the money -- not into two equal parts, if you believe that one cause is clearly more important, but the less important one shouldn't get literally zero. This is where the law of diminishing returns, Kelly betting, etc. come in. On the other hand, if cause A already gets a billion dollars, and cause B already gets a billion dollars, and you are wondering where you should send your $1000 contribution, then it does not make sense to "diversify" your tiny contribution; just send it to the cause where it makes better marginal use. It is simply not large enough to make a difference in the overall picture.

As the EA movement grows and the sums get larger, perhaps we should rethink this. Not sure if we are already there (i.e. whether the donations made specifically by effective altruists actually make up a two-digit percentage of some effective charity's budget).

Also, to certain degree this already happens naturally, by GiveWell listing multiple charities, and individual EAs having different preferences, so not all money literally goes to one charity highest on the list.

Expand full comment

Complaining about arguments you found on twitter is like complaining about food you found in the trash.

Expand full comment

Any form of genuine altruism is good.

But EA is highly suspect because of its charter of being "more" good. "Double Plus" good in fact.

Altruism coupled with judgment: not so good.

Is this just rice bowl proselytization in another form?

And more importantly, is the "judge-y" aspect of EA precisely what attracted the FTX grifter?

Expand full comment

I think it's fair to say that there is a widespread public impression of an element of intellectual arrogance on the part of EA (valid or not I can't say), that aligned with a similar public impression on the part of wealthy libertarian techbros. EA seems to regard itself as "better" in some way to other approaches to charity, but I can't figure out why that would be true.

Expand full comment

I don't really see what is difficult to grasp.

EA's self-ascribed messaging is that their practice of altruism is "effective" - which automatically casts other forms of altruism as "less effective", if not outright "not effective".

A lot of the more high-profile EA types also made pretty public declarations about the outcomes of their altruism, whereas traditional altruism is about simply doing good, as opposed to racking up trophies.

At least in my own mind, true altruism is about the giving part, not the results part.

Expand full comment

You raise a host of interconnected questions: Is altruism mostly or entirely about giving, and not at all or very little about results and outcomes? Could both be important? What is EA actually claiming? If EA claims that they are better at measuring outcomes than traditional charity giving, what is their basis for making this claim (because, so far, I haven't seen one)?

I agree that the simple act of helping even one other person suffer a little less is by itself fully justified even in the absence of any proof that this did or did not make the world overall a better place (whatever we imagine "better" to mean in this context). But I also agree that trying to impact overall global wellbeing is justified as well (in fact, I would argue that one can do this without directly impacting a single individual human being).

But I have two problems with EA: there seems to be a presumption that measurable outcomes are more important than qualitative ones, and I would challenge that; and two, that mainstream institutions of charity underperform in ways that are easily measured, and I would challenge that.

The intellectual arrogance is a third problem, but that would be acceptable if EA really was measurably better at directing charitable giving than the traditional institutions are, so I will await more information before addressing that.

Expand full comment
Dec 3, 2023·edited Dec 3, 2023

Altruism is defined as doing something for the good of others.

There is no relative value component to this definition, nor should there be any judgment about the act of doing good - i.e. that you aren't doing good "enough".

Yes, from an optimization standpoint, doing "more" good is better than just doing good - the problem is that a focus on "more" good could as easily be about ego or status as about the actual goal of "doing good".

In such cases, altruism is no longer the objective but rather a vehicle for achieving status and/or flattering ego. Would you not agree that this is "not" good? Certainly it is against the spirit of altruism. Certainly FTX was not about true altruism - FTX was using EA to whitewash itself from being a shady offshore regulation evader to being some magical force for good. FTX was also using this reputation to literally benefit itself: how many nonprofits fell for FTX's shtick of funneling their incoming donations onto the FTX platform, which in turn were used to perpetuate the pyramid scheme?

This is precisely why I referenced rice bowl proselytization: the organizations providing food for hungry people were not doing this for the sake of altruism, mostly. They were doing it because they believed it would gain them converts.

This is closely related to your problems with EA: measurable vs. qualitative outcomes, and mainstream charity performance.

Even talking about EA repetitively walks a fine line between self promotion and actual desire for doing greater good.

How much of EA actions like donating kidneys is about making the donors "leaders" or "paragon examples of virtue" as opposed to communicating a message about a new way to be altruistic? This is also why I referenced Veblen goods - Veblen goods are products and services used by rich people primarily just to show they are rich.

Indulgences, perpetual tax-free shelters holding billions of dollars in untaxed stock gains, support for the arts or orphanages a la the Sacklers - these are all areas where the line was clearly crossed from altruism into something far more selfish, whatever the original intent. The abusive examples all involve "doing good" to obscure (or even support) the doing of evil, as opposed to doing good to benefit others.

Expand full comment

I would add to your analysis that a lot of these claims are impossible to resolve because people do not agree on what "good" consists of. Sure, saving lives is good, but is it more or less good than making ten times as many textbooks available? Is it even possible to know that?

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

In which Scott reinvents a fully general defense of tainted organizations.

Seriously. "Look at all the good we do. It outweighs the bad, and we don't get enough credit. If the world saw things the way we do, we wouldn't even need to do this stuff. I don't want to downplay <bad thing>, but <downplays bad thing anyway>."

You might well see the same pattern of writing in defense of the Catholic Church twenty years ago.

Effective organizations address the taint head on, take the hit, clean the house, and come back strong. Ineffective ones save face through defense.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Actually, let's go further.

Hammering on the 200k lives saved as a special credit to EA seems to rest on these implicit assumptions:

- That nobody made rational study of world poverty before EA, which is false.

- That nobody implemented rational measures against world poverty before EA, which is false.

- That nobody self criticized and iterated on failures in addressing world poverty before EA, which is false.

- That nobody did a good job alleviating world poverty before EA, which is false.

https://www.elibrary.imf.org/view/journals/022/0031/002/article-A001-en.xml

https://www.jstor.org/stable/2951270

https://www.amazon.com/Elusive-Quest-Growth-Economists-Misadventures/dp/0262550423

https://en.wikipedia.org/wiki/President%27s_Emergency_Plan_for_AIDS_Relief

https://en.wikipedia.org/wiki/Charity_Navigator is 22 years old.

I'm genuinely, profoundly angry at the way EA pretends that it replaced a world of corrupt, inept, static grifts that had done nothing over sixty years, when in point of fact it slotted into a world of iterations, discovery, Charity Navigator, self-reckonings of religious charities, and moderately effective policy across the board.

I'll see your 200k lives saved and raise you the entire productive output of the World Bank, IMF, several churches, Salvation Army (and yes, I dare you to rathole on their weirdness instead of acknowledging their food drives), et cetera, et cetera, et cetera.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

So here's the thing. I'm new to this, and I *want* to believe Scott, I really do. Who wouldn't want to be part of some sort of elite movement to save the world? Especially if it's part of a community that is friendly to nerds like me. But I have almost twenty years' professional experience in non-profit program evaluation, and I can personally attest to what Matthew is saying here. My clients and I were helping alleviate suffering cost-efficiently long before EA came along. I can't help noticing the underlying assertion that there is something new or unique about EA that other research-based approaches to non-profit performance don't or can't provide.

The baseline of comparison seems to be uninformed public opinion, but uninformed public opinion isn't who EA should be compared to. That would be tax-funded social services and programs, along with the largest non-profit foundations.

Perhaps if someone could very quickly outline the most significant differences between EA and these other more traditional approaches to providing human services, maybe I could get onboard. Because I really want to.

Expand full comment

Too bad to see that EA is doing so much that isn't actually effective... you listed like 30 different initiatives, but in fact only one can be the actual optimal best use of one's altruistic time and money. So it looks like 95% of EA's effort is suboptimally used on pet projects that interest the constituent community, similar to every other charitable movement. Here I thought that EA types were supremely rational and better than the rest; too bad.

Expand full comment

I am new to the stack and I came in blind. I mean this sincerely: I am floored. I came down to the comment section to have a gander at the laughs everyone was having at this clever bit of subtle satire, only to find an earnest, good-faith discussion. I'm going to have to reread this.

Expand full comment

Here's the REAL reason why many people feel disdain for "Effective Altruists" (and also why some of that disdain is deserved)

https://questioner.substack.com/p/utilitarianism-vs-consequentialism

Expand full comment

This is a great post, and made me feel good about continuing to be part of the EA community :).

However, I'd have to disagree that it's essentially the same people who care about global health, animal welfare and AI risk. There's a pretty significant split in the community between the first two (and associated things) and the last. I've talked to a lot of long-time effective altruists who feel that the movement has been 'taken over' by the AI-safety guys in recent years, and there's a noticeable difference between the groups, even socially. This may be uncharitable, but I think this post tries to obfuscate that a little bit.

Expand full comment

We cannot know what the counterfactual to EA would have been. It is possible that without EA another movement would have arisen, and that rational, charity-inclined people would have given almost as much, but simply to other causes. Similarly, AI safety would have progressed, but just under a different banner. Furthermore (working in the field), I do not rate most of the EA safety contributions as particularly good or grounded, but they do suck up a lot of oxygen in the room, and potentially give people the wrong intuitions. A lot of EA safety people seem like armchair philosophers with very little understanding of the actual field of AI.

Personally, while I agree with many of the tenets of EA, and appreciate the community, I also think that as a group it has many arrogant (and, to be honest, naive) members. With its intersection with the rationality community, there seem to be a lot of members whose whole identity revolves around being "smart." There is a lot of superficial re-inventing of the wheel, people feeling morally superior to others while not actually holding a very thought-out moral view, etc., and just not being aware of a history of similar ideas explored in the past or in other communities. All of these behaviors are pretty standard for members of ideological groups, e.g., religions, fringe movements, cults, but EA simultaneously claims to be rational and somehow better than that. In my opinion, the reality is that the average EA follower has not examined their own beliefs markedly more than, say, an earnest Christian (which is okay, but is what it is).

I also think that the EA movement has lots of extremist viewpoints that are tolerated within the community and that really harm its PR. E.g., one can read an archive of Caroline Ellison identifying as an EA member and rating "wild fish suffering" as orders of magnitude worse than human genocides. These kinds of statements are (I would argue justifiably) repugnant to the lay person.

Expand full comment

> It’s only when you’re fighting off the entire world that you feel truly alive.

SO true, a quote for the ages

Expand full comment

Is the article formatting all fucked up or am I not appreciating the current formatting trends?

Expand full comment

The Red Cross has been in existence for nearly 150 years (demonstrated longevity) and claims it helps 200 million people a year outside of the US alone. Saving 200,000 people over ten years seems like too little to measure, even if the lives-per-dollar figure is better. Whatever the Red Cross is doing scales. There's no way that the bed nets would scale at the same level.

The Red Cross helps more people every year than EA has helped living beings of all types in its entire history. That's even if you consider chicken lives of equal value to humans.

I'm not anti-EA, though I am absolutely anti-arrogance. EAs seem far more arrogant through their claims to be the first and only group that really evaluates charities effectively. I also put a lot more stake in groups that can maintain productivity year after year than in a new group with a rocky start and an uncertain future. I guess I'll be more impressed when EA helps its first 100 million, or passes from a first generation to a second and keeps going. You can argue that I'm holding EAs to too high a bar, and I would agree that it's a high bar - but not too high - because organizations that already exist have been doing more, and for longer. If EAs can scale and show they are more efficient while helping millions of people every year, I'll take them more seriously. Based on the chart in the above post, it looks like EAs have spent several billion dollars on various goals, including well over a billion on global health and development. Even using standard non-EA costs per life saved, that should have resulted in more than 200,000 lives saved. I'm not saying the money was misspent, but EAs have not demonstrated any better ability to do charity than multiple already-existing charities demonstrate regularly.

AI doom and animal welfare are speculative and philosophical diversions from the more generally accepted mission of helping people not to die of easily preventable causes. One hasn't shown any long term benefits and may never be able to (especially if successful at an early stage, no one will likely recognize the benefits of AI alignment if there was never an obvious factual danger first). The other literally cannot demonstrate any benefits to humans if the observer doesn't value animal wellbeing for its own sake. Both are an uphill battle for PR or growth of the movement.


I'm not sure if it's something really weird on my end, but I'm seeing all the images in this post duplicated 4 or 5 times and breaking up the text in a bizarre way


This is a political party and lobby. This is not about charity.


This is a great list of the positive things effective altruism has done, and I agree many of these are still underrated. However, I feel like it doesn't fully engage with the key counter-arguments. Here's a recent essay of mine exploring that: https://inexactscience.substack.com/p/the-case-for-narrow-utilitarianism


I agree with this. I'm not against research in AI safety, but it's really harmful to speak---as Yudkowsky and the like so often do---as though AI safety is *the one most important thing* to which we should fanatically devote all resources. Effective charity is the best part of EA---one of the best parts of humanity in general, in fact.

But I also have another problem with EA, which is that it has been severely damaging to the mental health of many of its adherents, including myself. The notion that we are as responsible for every death we fail to prevent as we would be if we had killed that person ourselves is not just mistaken---it's incredibly dangerous. (A lot of EA people, if pressed, would agree that this is wrong. But they often speak as though it's true, and use statistical data as if it were true. Some may actually believe it is true. I think there is some motte-and-bailey here.)

For we all fail to prevent an enormous number of deaths literally constantly. For every $3,000-$5,000 you spend on literally anything other than the most cost-effective charities, someone dies and you could have saved them.
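
To make that arithmetic concrete, here is a minimal sketch; the item prices below are hypothetical examples, and the $5,000 figure is the upper end of the range just quoted:

```python
# Taking the "every purchase has an opportunity cost in lives" framing literally:
# expected lives not saved per purchase, at an assumed $5,000 per life saved.
COST_PER_LIFE = 5_000  # assumed upper end of the $3,000-$5,000 range

purchases = {  # hypothetical example prices
    "Disneyland ticket": 150,
    "new laptop": 1_200,
    "kitchen remodel": 20_000,
}

for item, price in purchases.items():
    print(f"{item} (${price:,}): {price / COST_PER_LIFE:.3f} lives not saved")
```

On this framing, even a modest purchase carries a nonzero body count, which is exactly the conclusion the rest of this comment argues we should refuse to draw.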

Oh, but it gets worse, because it's not just the money you spent. It's also money you *never made*. Could you have worked a little harder and gotten a better job? Could you have accepted more miserable working conditions for slightly higher pay?

You are letting people die that you could have saved. So am I. We all are. Always.

But this does *not* make us murderers. Because it was *never* your job to save everyone. You couldn't even if you tried. And if you try too hard, you might just destroy yourself.

I know, because I nearly did. Ironically, I'm sure I contributed less to the world because of it. So even trying to optimize for maximum altruistic effort doesn't actually maximize altruistic outcomes. (But don't take that to mean you just need to do a second-order optimization and figure out the exactly optimal amount of rest and enjoyment you need to maximize your long-term functioning; that way *also* lies madness.)

We can't tell people to hear the scream of a child in every Disneyland ticket they buy---or job application they fail to submit. We need to stop telling them that they are unforgivable monsters because they aren't willing to sacrifice themselves to save someone they will never meet. Being told that I was just such an unforgivable monster---and *believing* it---has been a major factor in my depression for years, and I know I'm not the only EA adherent out there who suffers from severe depression. (Note that depression is positively correlated with autism, ADHD, high IQ, and high empathy---all of which are practically diagnostic criteria for EA membership.)

What I think EA desperately needs right now is a clear and consistent message about *what is your fair share*---at what point can we say that you have worked hard enough, given enough of yourself, that you have discharged your moral responsibility. We also need a clear and consistent message reminding people that failing to save someone *isn't* the same as killing them, and that we will all inevitably do the former all the time while most of us will (thankfully) never do the latter.

We have done a good job of telling people what work must be done, and even doing some of that work; and this is a very good thing. But we still haven't told people how to really integrate these values into a flourishing human life---how to strike a healthy balance between doing good and taking care of yourself.


I'm late to this, but...

I think what's missing here is an acknowledgement that the standards the EA movement is being judged against and found wanting are standards _the EA movement itself loudly advocated for_. It was an EA organisation, GiveWell, that popularised the notion of slapping a "do not recommend" tag on organisations that failed to demonstrate strong evidence of cost-effectiveness toward specific metrics like QALYs and good governance, even if they were well-intentioned and had impressive lists of bullet-pointed accomplishments. (Yes, GiveWell is supportive in principle of giving to other causes and charities that don't meet its recommendations, but it also compiled "Celebrated Charities We Don't Recommend" lists.) It was the founder of that org who considered a seat on OpenAI's board to be worth $30 million of Dustin Moskovitz's philanthropic cash (compared with the alternative of giving it to GiveWell's highest-rated charities, which could - according to GiveWell's analysis - have saved a little under 10,000 lives instead). Critics didn't decide the board seat just lost was a big deal even compared with tangibly saving lives; Holden and Dustin did.

Dustin is entitled to spend his fortune how he likes and has spent far more than most on saving lives in developing countries, Holden is entitled to change his mind, and not all endeavours can be expected to succeed. But if you start a movement from the premise that most philanthropic dollars could be spent better, and that some approaches and organisations even deserve calling out for being particularly wasteful, you'd best be prepared to have that spotlight turned on you.


I think you're saying that if EA organizations say things like "we don't judge everybody as 'top tier' ― and here's how we made our decisions" (as you basically noted, "we don't recommend" is different from "we recommend you don't"), then we should be welcoming of criticism that is dismissive and vague...

> EA is functionally a branding exercise that masquerades as an ethical project [...] those specifics are a series of tendentious perspectives on old questions

incorrect...

> this sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all [...] this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably [...] those specifics are a series of tendentious perspectives on old questions

and a bit ridiculous.

> has produced a number of celebrities [in its own world] to the point where it seems fair to say that a lot of people join the community out of the desire to become one of those celebrities.

Do you know anyone who decided they were going to suddenly give 10% of their income because they thought they would become a "celebrity" among other people who were already doing the same thing? Like, okay, if you have 10 billion dollars you can get a lot of free attention among EAs by giving 10%, but surely in that case you can get way more attention among the general public since the latter group is over 10,000 times bigger? (funny how we're only discussing deBoer's rant in the first place because he's a celebrity around here...)

I'm not popular, at all, inside or outside EA. I give 10%. Big deal. Lots of people do it, I don't get a medal, and if I did, deBoer would be complaining about the "pompous self-congratulatory medals".

DeBoer's criticism is also not actionable (likely deliberately―it's destructive criticism, not constructive). I've never seen EAs in "debates about how sentient termites are", but let's say I accept his implication that insects So Obviously Can't feel pain that we shouldn't even discuss the matter. So what? Should the EA Forum have a "deBoer appeasement rule", like "moral questions whose answers are obvious according to deBoer can't be discussed here"?

So not only can he do nothing about EAs' intellectual interests, EAs themselves can't do much about them either. If some EAs don't care about animal suffering, tough luck; they don't get a veto over other EAs. Why should it be otherwise? How could it be otherwise? And to the extent EA has gatekeepers, he conspicuously ignores the biggest ones: there must be a thousand articles on 80,000 Hours, but he skips all that and goes after some student's blog post about carnivores instead. Because his goal isn't to critique EA, it's to disparage EA.


lol


You claim that EA "played a big part in creating the YIMBY movement," and your proof, in a footnote, is that Open Philanthropy claims to be the first institutional funder of the YIMBY movement.

First of all, providing funding is not even close to the equivalent of creating a movement. Second of all, the YIMBY movement obviously predates EA. I am a board member of a YIMBY organization that was founded 38 years ago and works in coalition with a better-known YIMBY organization that was founded 16 years ago. These obviously predate EA, and its other board members have never heard of EA. EA most certainly did not create this movement—people have been doing the work since before many EAers were born.

If you do a simple Google search—or even use Wikipedia, your apparent source of choice—you will see on the YIMBY Wikipedia page that planners have been using the term YIMBY to describe the movement since at least 1993, and that the YIMBY movement had been spreading around the world since before EA was conceived, let alone before EA had the funds to support it.

Your ridiculous, easily refutable claim that EA created the YIMBY movement makes me question every other claim you make in this sad attempt at a pro-EA PR blog post.


I really, really appreciate this article. It helps motivate me to get back into EA.
