Yeah, this is where I end up on it as well. To the extent that it helps people give more effectively, it's been a great thing.
It does go a bit beyond merely annoying though. I think something that Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much like roles as priests etc. have done in the past. The average person should be wary of people smarter than them telling what to do with their money.
The only durable protection from scammers is a measurable outcome. That's part of why I think EA is only effective when it focuses on things that can be measured. The meat of the improvement in EA is moving money from frivolous luxury to measurable charity, not moving measurable charity to low probability moonshots.
I mean, GiveDirectly is a top charity on Givewell, are you claiming that showering poor people in money to the tune of .92 per dollar still produces a lot of transaction cost?
Is your thought here that transaction costs are implicit and thus not properly priced in to the work done? I think at the development economics level that is not terribly true. The transaction costs of poverty relief in urban USA vs the poverty relief in San Salvador are not terrible different once the infrastructure in question is set up.
"Compared to what" is my question.
Everything has transaction costs. Other opportunities have similar transaction costs. I would be surprised if they didn't. However, I agree I would like to see this argued explicitly somewhere.
- Instead of spending an hour studying, you should spend a few minutes figuring out how best to study, then spend the rest of the time studying
- But how long should you spend figuring out the best way to study? Maybe you should start by spending some time figuring out the best balance between figuring out the right way to study, and studying
- But how long should you spend on THAT? Maybe you should start by spending some time figuring out the best amount of time to spend figuring out the best amount of time to spend figuring out . . .
- ...and so on until you've wasted the whole hour in philosophical loops, and therefore you've proven it's impossible to ever study, and even trying is a net negative.
In practice people just do a normal amount of cost-benefit analysis which costs a very small portion of the total amount of money donated.
Centralizing and standardizing research into which charities do exactly what (so the results can then be easily checked against any given definition of "effectiveness") reduces transaction costs by eliminating a lot of what would otherwise be needlessly duplicated effort.
A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)
I'm probably below the average intelligence of people who read scott but that's essentially my position. AI doomerism is kinda cringe and I don't see evidence of anything even starting to be like their predictions. EA is cool because instead of donating to some charity that spends most their money on fundraising or whatever we can directly save/improve lives.
Which "anything even starting to be like their predictions" are you talking about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually takes weeks/months after release before they stop saying things that embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live example of a rogue AI, that's choosing to not prevent the first few rogue AIs (it will take some time to notice the first rogue AI and react, during which time more may be made). In turn, that's some chance of human extinction, because it is not obvious that those first few won't be able to kill us all. It is notably easier to kill all humans (as a rogue AI would probably want) than it is to kill most humans but spare some (as genocidal humans generally want); the classic example is putting together a synthetic alga that isn't digestible, doesn't need phosphate and has a more-efficient carbon-fixing enzyme than RuBisCO, which would promptly bloom over all the oceans, pull down all the world's CO2 into useless goo on the seafloor, and cause total crop failure alongside a cold snap, and which takes all of one laboratory and some computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and I'd rather not take it.
> Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
I concur on beating Go, but captchas were never thought to be unbeatable by AI - it's more that it makes robo-filing forms rather expensive. Writing text also never seemed that doubtful and driving cars, at least as far as they can at the moment, never seemed unlikely.
This would have been very convincing if anyone like Patrick had given timelines on the earliest point at which they expected the advance to have happened, at which point we can examine if their intuitions in this are calibrated. Because the fact is if you asked most people, they definitely would not have expected art or writing to fall before programming. Basically only gwern is sinless.
On the other hand, EY has consistently refused to make measurable predictions about anything, so he can't claim credit in that respect either. To the extent you can infer his expectations from earlier writing, he seems to have been just as surprised as anyone, despite notionally being an expert on AI.
1. No one mentioned Eliezer. If Eliezer is wrong about timelines, that doesn't mean we suddenly exist in a slow takeoff world. And it's basically a bad faith argument to imply that Eliezer getting surprised *in the direction of capabilities getting better than expected* is apparently evidence of non doom.
2. Patrick is explicitly saying that he sees no evidence. Insofar as we can use Patrick's incredulity as evidence, it would be worth far more if it was calibrated and informed rather than uncalibrated. AI risk arguments depend on more things than just incredulity, so the """lack of predictions""" matters relatively less. My experience has been that people who use their incredulity in this manner in fact do worse at predicting capabilities, hence why getting disproven would be encouraging.
3. I personally think that by default we cannot predict what the rate of change is, but I can lie lazily on my hammock and predict "there will be increases in capability barring extreme calamity" and essentially get completely free prediction points. If you do believe that we're close to a slowdown, or we're past the inflection point of a sigmoid and that my priors about progress are wrong, you can feel free to bet against my entirely ignorant opinion. I offer up to 100 dollars at ratios you feel are representative of slowdown, conditions and operationalizations tbd.
4. If you cared about predictive accuracy, gwern did the best and he definitely believes in AI risk.
"write text that can't be easily distinguished from human"? Really?
*None* of the examples I've seen measure up to this, unless you're comparing it to a young human that doesn't know the topic but has some measure of b*sh*tting capability - or rather, thinks he does.
Yeah there are a bunch of studies now where they give people AI text and human text and ask them to rate them in various ways and to say whether they think it is a human or AI, and generally people rate the AI text as more human.
The examples I've seen are pretty obviously talking around the subject, when they don't devolve into nonsense. They do not show knowledge of the subject matter.
Perhaps that's seen as more "human".
I think that if they are able to mask as human, this is still useful, but not for the ways that EA (mostly) seems to think are dangerous. We won't get advances in science, or better technology. We might get more people falling for scammers - although that depends on the aim of the scammer.
Scammers that are looking for money don't want to be too convincing because they are filtering for gullibility. Scammers that are looking for access on the other hand, do often have to be convincing in impersonating someone who should have the ability to get them to do something.
But moore’s law is dead. We’re reaching physical limits, and under these limits, it already costs millions to train and execute a model that, while impressive, is still multiple orders of magnitude away from genuinely dangerous superintelligence. Any further progress will require infeasible amounts of resources.
Moore's Law is only dead by *some* measures, as has been true for 15-20 years. The limiting factors for big ML are mostly inter-chip communications, and those are still growing aggressively.
This is one of the reasons I'm not a doomer, which is that most doomers' mechanism of action for human extinction is biological in nature, and most doomers are biologically illiterate.
RuBisCO is known to be pretty awful as carboxylases go. PNA + protein-based ribosomes avoids the phosphate problem.
I'm not saying it's easy to design Life 2.0; it's not. I'm saying that with enough computational power it's possible; there clearly are inefficiencies in the way natural life does things because evolution likes local maxima.
You're correct on the theory; my point was that some people assume that computation is the bottleneck rather than actually getting things to work in a lab within a reasonable timeframe. Not only is wet lab challenging, I also have doubts as to whether biological systems are computable at all.
I think the reason that some people (e.g. me) assume that computation* is the bottleneck is that IIRC someone actually did assemble a bacterium (of a naturally-existing species) from artificially-synthesised biomolecules in a lab. The only missing component to assemble Life 2.0 would then seem to be the blueprint.
If I'm wrong about that experiment having been done, please tell me, because yeah, that's a load-bearing datum.
*Not necessarily meaning "raw flops", here, but rather problem-solving ability
Much like I hope for more people to donate to charity based on the good it does rather than based on the publicity it generates, I hope (but do not expect) that people decide to judge existential risks based on how serious they are rather than based on how cringe they are.
Yeah this is where I am. A large part of it for me is that after AI got cool, AI doomerism started attracting lots of naked status seekers and I can't stand a lot of it. When it was Gwern posting about slowing down Moore's law, I was interested, but now it's all about getting a sweet fellowship.
Is your issue with the various alignment programs people keep coming up with? Beyond that, it seems like the main hope is still to slow down Moore's law.
Interesting, I did not get this impression but also I do worry about AI risk - maybe that causes me to focus on the reasonable voices and filter out the non-sense. I'd be genuinely curious for an example of what you mean, although I understand if you wouldn't want to single out anyone in particular.
I don’t mind naked status seeking as long as people do it by a means that is effective at achieving good ends for the world. One can debate whether AI safety is actually effective, but if it is, EAs should probably be fine with it (just like the naked cash seekers who are earning to give).
I agree. But there seem to be a lot of people in EA with some serious scrupulosity going on. Like that person who said they would like to donate a kidney, but could not bear the idea that it might go to a meat-eater, and so the donor would be responsible for all the animal suffering caused by the recipient. It's as though EA is, for some people, a refuge from ever feeling they've done wrong -- as though that's possible!
What’s wrong with naked status seekers (besides their tendency to sometimes be counterproductive if advancing the cause works against their personal interests)?
It's bad when the status seeking becomes more important than the larger purpose. And at the point when it gets called "naked status seeking", it's already over that line.
They will only do something correct if it advances their status and/or cash? To the point of not researching or approving research into something if it looks like it won't advance them?
Definitely degree of confidence plays into it a lot. Speculative claims where it's unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely different ball game from "I notice that we claim to care about saving lives, and there's a proverbial $20 on the ground if we make our giving more efficient."
I think it also helps that those shorter-term impacts can be more visible. A malaria net is a physical thing that has a clear impact. There's a degree of intuitiveness there that people can really value
And yet, what exactly is the argument that the risk is actually low?
I understand and appreciate the stance that the doomers are the ones making the extraordinary claim, at least based on the entirety of human history to date. But when I hear people pooh-poohing the existential risk of AI, they are almost always pointing to what they see as flaws in some doomer's argument -- and usually missing the point that the narrative they are criticizing is usually just a plausible example of how it might go wrong, intended to clarify and support the actual argument, rather than the entire argument.
Suppose, for the sake of argument, that we switch it around and say that the null hypothesis is that AI *does* pose an existential risk. What is the argument that it does not? Such an argument, if sound, would be a good start toward an alignment strategy; contrariwise, if no such argument can be made, does it not suggest that at least the *risk* is nonzero?
It's weird that you bring up Robin Hanson, considering that he expects humanity to be eventually destroyed and replaced with something else, and sees that as a good thing. I personally wouldn't use that as an argument against AI doomerism, since people generally don't want humanity to go extinct.
What specific part of Robin Hanson's argument on how growth curves are a known thing do you find convincing?
That's the central intuition underpinning his anti foom worldview, and I just don't understand how someone can generalize that to something which doesn't automatically have all the foibles of humans. Does you think that a population of people who have to sleep, eat and play would be fundamentally identical to an intelligence who is differently constrained?
I'm not seeing any strong arguments there, in that he's not making arguments like, "here is why that can't happened", but instead is making arguments in the form, "if AI is like <some class of thing that's been around a while>, then we shouldn't expect it to rapidly self-improve/kill everything because that other thing didn't".
E.g. if superintelligence is like a corporation, it won't rapidly self-improve.
Okay, sure, but there are all sorts of reasons to worry superintelligent AGI won't be like corporations. And this argument technique can work against any not-fully-understood future existential threat. Super-virus, climate change, whatever. By the anthropic principle, if we're around to argue about this stuff, then nothing in our history has wiped us out. If we compare a new threat to threats we've encountered before and argue that based on history, the new threat probably isn't more dangerous than the past ones, then 1) you'll probably be right *most* of the time and 2) you'll dismiss the threat that finally gets you.
I’ve been a big fan of Robin Hanson since there was a Web; like Hanania, I have a strong prior to Trust Robin Hanson. And I don’t have any real argument with anything he says there. I just don’t find it reassuring. My gut feeling is that in the long run it will end very very badly for us to share the world with a race that is even ten times smarter than us, which is why I posed the question as “suppose the null hypothesis is that this will happen unless we figure out how to avoid it”.
Hanson does not do that, as far as I can tell. He quite reasonably looks at the sum of human history and finds that he is just not convinced by doomers’ arguments, and all his analysis concerns strategies and tradeoffs in the space that remains. If I accept the postulate that this doom can’t happen, that recursive intelligence amplification is really as nonlumpy as Hanson suspects, then I have no argument with what he says.
But he has not convinced me that what we are discussing is just one more incremental improvement in productivity, rather than an unprecedented change in humans’ place in the world.
I admit that I don’t have any clear idea whether that change is imminent or not. I don’t really find plausible the various claims I have read that we’re talking about five or ten years. And I don’t want to stop AI work: I suspect AGI is a prerequisite for my revival from cryosuspension. But that just makes it all the more pressing to me that it be done right.
When ignoring the substance of the argument, I find their form to be something like a Pascal's wager, bait and switch. If there even is a small percent you will burn in hell for eternity, why wouldn't you become Catholic. Such an argument fails for a variety of reasons, one being it doesn't account for alternative religions and their probabilities with alternatives outcomes.
So I find I should probably update my reasoning toward there being some probability of x-risk here, but the probability space is pretty large.
One of the good arguments for doomerism is that the intelligences will be in some real sense alien. That there is a wider distribution of possible ways to think than human intelligence, including how we consider motivation, and this could lead to paper-clip maximizers, or similar AI-Cthulhus of unrecognizable intellect. I fully agree that these might very likely be able to easily wipe us out. But there are many degrees of capability and motivation and I don't see the reason to assume that either through a side-effect of ulterior motivation or direct malice that that lead to the certainty of extinction expressed by someone like Eliezer. There are many possibilities, many are fraught. We should invest is safety and alignment. But that that doesn't mean we should consider x-risk a certainty and certainly not at double-digit likelihood's within short timeframes.
Yes, the space of possibilities (I think you meant this?) is pretty large. But x-risk is most of it. Most of possible outcomes of optimisation processes over Earth and Solar System have no flourishing humanity in them.
It is perhaps a lot like other forms of investment. You can't just ask "What's the optimal way to invest money to make more money?" because it depends on your risk tolerance. A savings account will give you 5%. Investing in a random seed-stage startup might make you super-rich but usually leaves you with nothing. If you invest in doing good then you need to similarly figure out your risk profile.
The good thing about high-risk financial investments is they give you a lot of satisfaction of sitting around dreaming about how you're going to be rich. But eventually that ends when the startup goes broke and you lose your money.
But with high-risk long-term altruism, the satisfaction never has to end! You can spend the rest of your life dreaming about how your donations are actually going to save the world and you'll never be proven wrong. This might, perhaps, cause a bias towards glamourous high-risk long-term projects at the expense of dull low-risk short-term projects.
Much like other forms of investment, if someone shows up and tells you they have a magic box that gives you 5% a month, you should be highly skeptical. Except replace %/month with QALYs/$.
I see your point, but simple self-interest is sufficient to pick up the proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may have a little bit of an analogous filter, but I doubt that it is remotely as strong.
The advantage of making these types of predictions is that even if someone says that the unflattering thing is not even close to what drives them, you can go on thinking "they're just saying that because my complete and perfect fantasy makes them jealous of my immaculate good looks".
Yeah I kinda get off the train at the longtermism / existential risk part of EA. I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
I like the malaria bed nets stuff because its easy to confirm that my money is being spent doing good. That's almost exactly the opposite when it comes to AI-risk. For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing? Has discovering RLHF which in turn powered ChatGPT and launched the AI revolution made AI-risk more or less likely? It almost feels like one of those Greek tragedies where the hero struggles so hard to escape their fate they end up fulfilling the prophecy.
I think he was pointing out that for EAs have been a big part of the current AI wave. So whether you are a doomer or an accelerationist you should agree that EAs impact has been large even if you disagree with the sign
Problem is, the OpenAI scuffle shows that right now, as AI is here or nearly here, the ones making the decisions are the ones holding the purse strings, and not the ones with the beautiful theories. Money trumps principle and we just saw that blowing up in real time in glorious Technicolor and Surround-sound.
So whether you're a doomer or an accelerationist, the EAs impact is "yeah you can re-arrange the deckchairs, we're the ones running the engine room" as things are going ahead *now*.
Not that I have anything against EAs, but, as someone who want to _see_ AGI, who doesn't want to see the field stopped in its tracks by impossible regulations, as happened to civilian nuclear power in the usa, I hope that you are right!
I mean, if I really believed we'd get conscious, agentic AI that could have its own goals and be deceitful to humans and plot deep-laid plans to take over and wipe out humanity, sure I'd be very, very concerned and unhappy about this result.
I don't believe that, nor that we'll have Fairy Godmother AI. I do believe we'll have AI, an increasing adoption of it in everyday life, and it'll be one more hurdle to deal with. Effects on employment and jobs may be catastrophic (or not). Sure, the buggy whip manufacturers could shift to making wing mirrors for the horseless carriages when that new tech happened, but what do you switch to when the new tech can do anything you can do, and better?
I think the rich will get richer, as per usual, out of AI - that's why Microsoft etc. are so eager to pave the way for the likes of Sam Altman to be in charge of such 'safety alignment' because he won't get in the way of turning on the money-fountain with foolish concerns about going slow or moratoria.
AGI may be coming, but it's not going to be as bad or as wonderful as everyone dreads/hopes.
That's mostly my take too. But to be fair to the doomer crowd, even if we don't buy the discourse on existential risks, what this concern is prompting them to do is lots of research on AI alignment, which in practice means trying to figure out how AI works inside and how it can be controlled and made fit for human purposes. Which sounds rather useful even if AI ends up being on the boring side.
> but what do you switch to when the new tech can do anything you can do, and better?
Nothing -- you retire to your robot ranch and get anything you want for free. Sadly, I think the post-scarcity AGI future is still very far off (as in, astronomically so), and likely impossible...
I think that the impact of AGI is going to be large (even if superintelligence either never happens or the effect of additional smarts just saturates, diminishing returns and all that), provided that it can _really_ do what a median person can do. I just want to have a nice quiet chat with the 21st century version of a HAL-9000 while I still can.
> if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
Surely these are different skills? Someone who could predict and warn against the dangers of nuclear weapon proliferation and the balance of terror, might still have been blindsided by their spouse cheating on them.
Suppose Trump gets elected next year. Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
Also, nobody else seems to have noticed that Adam D'Angelo is still on the board of OpenAI, but Sam Altman and Greg Brockman aren't.
I hardly think that's a fair comparison. Climatologists are not in a position to control the outcome of a presidential election, but effective altruists controlled 4 out of 6 seats on the board of the company.
Of course, if you think that they played their cards well (given that D'Angelo is still on the board) then I guess there's nothing to argue about. I—and I think most other people—believe they performed exceptionally poorly.
The people in the driver's seat on global-warming activism are more often than not fascist psycopaths like Greta Thunberg, whom actively fight against the very things that would best fight against global warming, like nuclear energy and natural gas pipelines, so they can instead promote things that would make it worse, like socialism and degrowth.
We will never be able to rely on these people to do anything but cause problems. They should be shunned like lepers.
I think that if leaders are elected that oppose climate mitigation, that is indeed a knock on the climate-action political movement. They have clearly failed in their goals.
Allowing climate change to become a partisan issue was a disaster for the climate movement.
I agree completely. Nonetheless, the claim that spending money on AI safety is a good investment rests on two premises: That AI risk is real, and that EA can effectively mitigate that risk.
If I were pouring money into activists groups advocating for climate action, it would be cold comfort to me that climate change is real when they failed.
The EA movement is like the Sunrise Movement/Climate Left. You can have good motivations and the correct ambitions but if you have incompetent leadership your organization can be a net negative for your cause.
> Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election
It is a fair criticism for those that believe the x-risk, or at least extreme downsides of climate change, to not figure out ways to better accomplish their goals rather than just political agitation. Building coalitions with potentially non-progressive causes, being more accepting of partial, incremental solutions. Playing "normie" politics along the lines of matt yglesias, and maybe holding your nose to some negotiated deals where the right gets their way probably mitigates and prevents situations where the climate people won't even have a seat at the table. For example, is making more progress on preventing climate extinction worth stalling out another decade on trans-rights? I don't think that is exactly the tradeoff on the table, but there is a stark unwillingness to confront such things by a lot of people who publicly push for climate-maximalism.
"Playing normie politics" IS what you do when you believe something is an existential risk.
IMHO the test, if you seriously believe all these claims of existential threat, is your willingness to work with your ideological enemies. A real existential threat was, eg, Nazi Germany, and both the West and USSR were willing to work together on that.
When the only move you're willing to make regarding climate is to offer a "Green New Deal" it's clear you are deeply unserious, regardless of how often you say "existential". I don't recall the part of WW2 where FDR refused to send Russia equipment until they held democratic elections...
If you're not willing to compromise on some other issue then, BY FSCKING DEFINITION, you don't believe really your supposed per cause is existential! You're just playing signaling games (and playing them badly, believe me, no-one is fooled). cf Greta Thunberg suddenly becoming an expert on Palestine:
FDR giving the USSR essentially unlimited resources for their war machine was a geostrategic disaster that led directly to the murder and enslavement of hundreds of millions under tyrranies every bit as gruesome as that of Hitler's. Including the PRC, which menaces the World to this day.
The issue isn't that compromise on existential threats are inheriently bad. The issue is that, many times, compromises either make things worse than they would've been otherwise, or create new problems that are as bad or worse as what they subsumed.
I can think of a few groups, for example world Jewry, that might disagree with this characterization...
We have no idea how things might have played out.
I can tell you that the Hard Left, in the US, has an unbroken record of snatching defeat from the jaws of victory, largely because of their unwillingness to compromise, and I fully expect this trend to continue unabated.
Effect on climate? I expect we will muddle through, but in a way that draws almost nothing of value from the Hard Left.
The reason we gave the USSR unlimited resources was because they were directly absorbing something like 2/3 of the Nazi's bandwidth and military power in a terribly colossal years-long meatgrinder that killed something like 13% of the entire USSR population.
Both the UK and USA are extremely blessed that the USSR was willing to send wave after wave of literally tens of millions of their own people into fighting the Nazi's and absorbing so much of their might, and it was arguably the deal of the century to trade mere manufactured objects for the breathing room and Nazi distraction / might-dissipation that this represented.
The alternative would have been NOT giving the USSR unlimited resources, the Nazi's quickly steamroll the USSR, and then turn 100% of their attention and military might towards the UK, which they would almost certainly win. Or even better, not getting enough materiel to conduct a war and realizing he would lose, Stalin makes a deal with Germany and they BOTH focus on fighting the UK and USA - how long do you think the UK would have survived that?
Would the USA have been able to successfully fight a dual-front war with basically all of Europe aligned under Nazi power PLUS Japan with China's resources? We don't know, but it's probably a good thing in terms of overall deaths and destruction on all sides that we didn't need to find out.
Sure, communism sucked for lots of people. But a Nazi-dominated Europe / world would probably have sucked more.
Ah come on, Scott: that the board got the boot and was revamped to the better liking of Sam who was brought back in a Caesarian triumph isn't very convincing about "so this guy is still on the board, that totes means the good guys are in control and keeping a cautious hand on the tiller of no rushing out unsafe AI".
Convince me that a former Treasury Secretary is on the ball about the most latest theoretical results in AI, go ahead. Maybe you can send him the post about AI Monosemanticity, which I genuinely think would be the most helpful thing to do? At least then he'd have an idea about "so what are the eggheads up to, huh?"
While I agree with the general thrust, I think the short-term vs. long-term is neglected. For instance, you yourself recommended switching from chicken to beef to help animals, but this neglects the fact that over time, beef is less healthy than chicken, thus harming humans in a not-quickly-visible way. I hope this wasn't explicitly included and allowed in your computation (you did the switch yourself, according to your post), but this just illuminates the problem: EA want to be clear beneficiaries, but clear often means "short-term" (for people who think AI doomerism is an exception, remember that for historical reasons, people in EA have, on median, timelines that are extremely short compared to most people's).
> I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
They got outplayed by Sam Altman, the consummate Silicon Valley insider. According to that anonymous rumour-collecting site, they're hardly the only ones, though it suggests they wouldn't have had much luck defending us against an actual superintelligence.
> For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing?
No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek tragedy was entirely deliberate. But as Scott said, it is a thing that someone has said.
I actually pretty completely endorse the longtermism and existential risk stuff - but disagree about the claims about the best ways to achieve them.
Ordinary global health and poverty initiatives seem to me to be much more hugely influential in the long term than the short term thanks to the magic of exponential growth. An asteroid or gamma ray or what ever program that has a .01% chance of saving 10^15 lives a thousand years from now looks good compared to saving a few thousand lives this year at first - but when you think about how much good those thousand people will do for their next 40 generations of descendants, as well as all the people those 40 generations of descendants will help, either through normal market processes or through effective altruist processes of their own, this starts to look really good at the thousand year mark.
AI safety is one of the few existential risk causes that doesn’t depend on long term thinking, and thus is likely to be a very valuable one. But only if you have any good reason to think that your efforts will improve things rather than make them worse.
I remember seeing this for the "climate apocalypse" thing many years ago: some conservationist (specifically about birds, I think) was annoyed that the movement had become entirely about global warming.
Global warming is simply a livelier cause for the Watermelons to get behind. Not because they genuinely care about global warming, as they oppose the solutions that would actually help alleviate the crisis, but because they're psychopathic revolutionary socialists who see it as the best means available today of accomplishing their actual goal: the abolition of capitalism and the institution of socialism.
EA as a movement to better use philanthropic resources to do real good is awesome.
AI doomerism is a cult. It's a small group of people who have accrued incredible influence in a short period of time on the basis of what can only be described as speculation. The evidence base is extremely weak and it relies far too much on "belief". There are conflicts of interest all over the place that the movement is making no effort to resolve.
At this point a huge number of experts in the field consider AI risk to be a real thing. Even if you ignore the “AGI could dominate humanity” part, there’s a large amount of risk from humans purposely (mis)using AI as it grows in capability.
Predictions about the future are hard and so neither side of the debate can do anything more than informed speculation about where things will go. You can find the opposing argument persuading, but dismissing AI risk as mere speculation without evidence is not even wrong.
The conflicts of interest tend to be in the direction of ignoring AI risk by those who stand to profit from AI progress, so you have this exactly backwards.
You can't ignore the whole "AGI could dominate humanity" part, because that is core to the arguments that this is an urgent existential threat that needs immediate and extraordinary action. Otherwise AI is just a new disruptive technology that we can deal with like any other new, disruptive technology. We could just let it develop and write the rules as the risks and dangers become apparent. The only way you justify the need for global action right now is based on the belief that everybody is going to die in a few years time. The evidence for existential AI risk is astonishingly weak given the amount of traction it has with policymakers. It's closer to Pascal's Wager rewritten for the 21st century than anything based on data.
On the conflict of interest, the owners of some of the largest and best funded AI companies on the planet are attempting to capture the regulatory environment before the technology even exists. These are people who are already making huge amounts of money from machine learning and AI. They are taking it upon themselves to write the rules for who is allowed to do AI research and what they are allowed to do. You don't see a conflict of interest in this?
Let's distinguish "AGI" from "ASI", the latter being a superintelligence equal to something like a demigod.
Even AGI strictly kept to ~human level in terms of reasoning will be superhuman in the ways that computers are already superhuman: e.g., data processing at scale, perfect memory, replication, etc., etc.
Even "just" that scenario of countless AGI agents is likely dangerous in a way that no other technology has ever been before if you think about it for 30 seconds. The OG AI risk people are/were futurists, technophiles, transhumanists, and many have a strong libertarian bent. "This one is different' is something they do not wish to be true.
Your "conflict of interest" reasoning remains backwards. Regulatory capture is indeed a thing that matters in many arenas, but there are already quite a few contenders in the AI space from "big tech." Meaningfully reducing competition by squishing the future little guys is already mostly irrelevant in the same way that trying to prevent via regulation the creation of a new major social network from scratch would be pointless. "In the short run AI regulation may slow down our profits but in the long run it will possibly lock out hypothetical small fish contenders" is almost certainly what no one is thinking.
"No one on this successful tech company's board of directors is making decisions based on what will eventually get them the most monopoly profits" sounds like an extraordinary claim to me.
This is the board of directors that explicitly tried to burn the company down, essentially for being too successful. They failed, but can you ask for a more credible signal of seriousness?
1. Holy shit is than an ironic thing to say after the OpenAI board meltdown. Also check out Anthropic’s board and equity structure. Also profit-driven places like Meta are seemingly taking a very different approach. Why?
2. You’re doing the thing where decreasing hypothetical future competition from new, small entrants to a field equals monopoly. Even if there was a conspiracy by eg Anthropic to use regulatory barriers against new entrants, that would not impact the already highly competitive field between the several major labs. (And there are already huge barriers to entry for newcomers in terms of both expertise and compute. Even a potential mega contender like Apple is apparently struggling and a place like Microsoft found a partner.)
It's just at this point a significant number of experts in AI have come around to believing AI risk is a real concern. So have a lot of prominent people in other fields, like national security. So have a lot of normies who simply intuit that developing super smart synthetic intelligence might go bad for us mere meat machines.
You can no longer just hand wave AI risk away as a concern of strange nerds worried about fictional dangers from reading too much sci-fi. Right or wrong, it's gone mainstream!
Who are some people who have accrued incredible influence and what is the period of time in which they gained this influence?
From my standpoint it seems like most of the people with increased influence are either a) established ML researchers who recently began speaking out in favor of deceleration and b) people who have been very consistent in their beliefs about AI risk for 12+ years, who are suddenly getting wider attention in the wake of LLM releases.
Acceptance of catastrophic risk from artificial superintelligence is the dominant position among the experts (including independent academics), the tech CEOs, the major governments, and the general public. Calling it a "small group of people who have accrued incredible influence" or "a cult" is silly. It's like complaining about organizations fighting Covid-19 by shouting "conspiracy!" and suggesting that the idea is being pushed by a select group.
The denialists/skeptics are an incredibly fractured group who don't agree with each other at all about how the risk isn't there; the "extinction from AI is actually good", "superintelligence is impossible", "omnipotent superintelligence will inevitably be absolutely moral", and "the danger is real but I can solve it" factions and subfactions do not share ideologies, they're just tiny groups allying out of convenience. I don't see how one could reasonably suggest that one or more of those is the "normal" group, to contrast with the "cult".
I think there’s an important contrast between people who think that AI is a significant catastrophic risk, and people who think there is a good project available for reducing that risk without running a risk of making it much worse.
For those of you that shared the "I like global health but not longtermism/AI Safety", how involved were you in EA before longtermism / AI Safety became a big part of it?
I think it is a good question to raise with the EA-adjacent. Before AI Doomerism and the tar-and-feathering of EA, EA-like ideas were starting to get more mainstream traction and adoption. Articles supportive of say, givewell.org, in local papers, not mentioning EA by name, but discussing some of the basic philosophical ideas were starting to percolate out more into the common culture. Right or Wrong, there has been a backlash that is disrupting some of that influence even those _in_ the EA movement are still mostly doing the same good stuff Scott outlined.
Minor point: I'd prefer to treat longtermism and AI Safety quite separately. (FWIW, I am not in EA myself.)
Personally, I want to _see_ AGI, so my _personal_ preference is that AI Safety measures at least don't cripple AI development like regulatory burdens made civilian nuclear power grind to a 50 year halt in the USA. That said, the time scale for plausible risks from AGI (at least the economic displacement ones) is probably less than 10 years and may be as short as 1 or 2. Discussing well-what-if-every-job-that-can-be-done-online-gets-automated does not require a thousand-year crystal ball.
Longtermism, on the other hand, seems like it hinges on the ability to predict consequences of actions on *VASTLY* longer time scales than anyone has ever managed. I consider it wholly unreasonable.
None of this is to disparage Givewell or similar institutions, which seem perfectly reasonable to me.
I actually think that longtermism advocates for ordinary health and development charity - that sort of work grows exponentially in impact over the long term and thus comes out looking even better than things like climate or animal welfare, whose impacts grow closer to linearly with time.
The problem with longtermism is that you can use it to justify pretty much anything, regardless of if you're even right, as long as your ends are sufficiently far enough away from the now to where you never actually have to be held accountable for getting things wrong.
It's not a very good philosophy. People should be saved from malaria for its own sake. Not because of "longtermism".
Given a choice between several acts which seem worth doing for their own sake, rate at which secondary benefits potentially compound over the long term could be a useful tiebreaker.
"that sort of work grows exponentially in impact over the long term" Some of the longtermist arguments talk about things like effects over a time scale where they expect us to colonize the galaxy. The time scale over which economies have been growing more-or-less steadily is more like 200-300 years. I think that it is sane to make a default assumption of exponential impact, as you describe, for that reason over that time scale (though many things, AI amongst them, could invalidate that). _Beyond_ 200-300 years, I don't think smoothish-growth-as-usual is a reasonable expectation. I think all we can say longer term than that is _don't_ _know_.
I heard about EA and got into the global health aspects of it from a talk on AI safety I went to given by... EY. I went to the talk on AI safety because I'd read HPMOR and just wanted to meet the author.
I wasn't at all convinced about AI safety, but I became interested in the global health aspects of EA. This year my donations went to PSI. I'm still an AI sceptic.
I gave money to GiveDirectly, which is EA-adjacent, and some years would get GiveWell endorsements. It never gets to the top of the recommendation list, but has the big advantage of having a low variance (especially the original formulation, where everyone living in a poor village got a one-time unconditional payout). "I can see you're not wasting the funds" is a good property if you have generally low trust in people running charitable orgs (the recent turn into generating research papers to push UBI in the US is unfortunate).
AI-doom-people have a decent shot at causing more deaths than all other human causes put together, if they follow the EY "nuke countries with datacenters" approach. Of course they'll justify it by appealing to the risk of total human extinction, but it shouldn't be surprising that people who estimate a substantially lower probability of the latter see the whole endeavor as probably net-negative. You'd be better off burning the money.
My only prior exposure was Doing Good Better, before seeing a *lot* of longtermism/x-risk messaging at EA Cambridge in 2018 (80k hours workshop, AI safety reading group, workshops at EA Cambridge retreat).
I considered AI safety (I'm a CS researcher already), enough to attend the reading group. But it seemed like pure math-level mental gymnastics to argue that the papers had any application to aligning future AGIs, and I dislike ML/AI research anyway.
Well there's also the part where people may have been involved in charity/NGO stuff before the cool kids relabeled it as EA.
Not to blame anyone for the relabeling though - if it got lots of fresh young people involved in humanitarian activity, and some renewed interest into its actual efficacy, they're more than entitled to take pride and give it a new name.
Freddie de Boer was talking about something like this today, about retiring the EA label. The effective EA orgs will still be there even if there is no EA. But I'm not really involved in the community, even if I took the Giving What We Can pledge, so it doesn't really matter much to me if AI X-risk is currently sucking up all the air in the movement.
I agree with the first part, but the problems with EA stem beyond AI doomerism. People in the movement seriously consider absurd conclusions like it being morally desirable to kill all wild animals, it has perverse moral failings as an institution, its language has evolved to become similar to postmodern nonsense, it has a strong left wing bias, and it has been plagued by scandals.
Surely none of that is necessary to get more funding to go towards effective causes. I’d like to invite someone competent to a large corporate so that we can improve the effectiveness of our rather large donations, but the above means I have no confidence to do so.
Well, sometime some people also considered absurd conclusions as giving voting rights to women, and look where we are. Someone have to consider things to understand if they worth anything.
The problem is that utilitarianism likely a fatally flawed approach, taking it to its fullest, most extreme. There is some element of deontology that probably needs to be accounted for a more robust ethical framework.
Or, hah, maybe AGI is a Utility Monster we should accelerate and our destruction would provide more global utility for such an optimizing agent than our continued existence it should be the wished for outcome. But such ideas are absurd.
To point out, Bentham in fact advocated for women's rights "before his time" and lead to many proto feminist works getting published by John Stuart Mill. In fact, concurrent arguments against his stance would cite that the women only mattered in the context of what they can do for men, so it's ridiculous to speak of suffrage.
I don't get it. Which one is the more plausible claim? Because for most of history, it would have been "killing whole classes of animals and people". The only reason that isn't true today is precisely because some people were willing to ponder absurd trains of thought.
Deliberate attempts to exterminate whole classes of people go back to at least King Mithridates VI in 88 BCE. For most of human history giving women (or anyone the vote) is a weird and absurd idea while mass slaughter was normal.
Its because people were willing to entertain "absurd" ideas that mass slaughter is now abhorrent and votes for all are normal.
Morally deranged cults don’t “seriously consider” ideas that go diametrically against what other members of the cult endorse. Morally deranged cults outright endorse these crazy ideas. EA does not endorse the elimination of wild animals, though it does consider it seriously.
Any idea should be considered based in its merit, not emotional reaction. I am not sure if you think I am in a cult, or people in EA are.
All I can say negative utilitarianism exists. There is even a book, Suffering-focused ethics, exploring roughly the idea that suffering is much worse than positive experience.
As a person who is seriously suffering, I consider this topic is at least worth discussing. Thought that I can be in a situation where I cannot kill myself and won't get pain meds gives me serious anxiety. Yet, this is pretty common. In most world countries euthanasia is illegal and pain medicines are strictly controlled. Situation where you can suffer terribly and couldn't die is common. Normal people don't think about it often, until they do.
Based on my thoughts above, I feel like suffering of wild and domesticated animals is something real. I am not sure why do you think that by default we cannot even fanthom idea that we can end their suffering. I myself is not pro or contra, but I am happy that there are people who think about these topics.
As someone who doesn't identify with EA (but likes parts of it), I don't expect my opinion to be particularly persuasive to people who do identify more strongly with the movement, but I do think such a split would result in broader appeal and better branding. For example, I donate to GiveWell because I like its approach to global health & development, but I would not personally choose to donate to animal welfare or existential risk causes, and I would worry that supporting EA more generically would support causes that I don't want to support.
To some extent, I think EA-affiliated groups like GiveWell already get a lot of the benefit of this by having a separate-from-EA identity that is more specific and focused. Applying this kind of focus on the movement level could help attract people who are on board with some parts of EA but find other parts weird or off-putting. But of course deciding to split or not depends most of all on the feelings and beliefs of the people actually doing the work, not on how the movement plays to people like me.
I agree that there should be a movement split. I think the existential risk AI doomerism subset of EA is definitely less appealing to the general public and attracts a niche audience compared to the effective charity subset which is more likely to be generally accepted by pretty much anybody of all backgrounds. If we agree that we should try to maximize the number of people that at the very least are involved in at least one of the causes, when the movement is associated with both causes, many people who would've been interested in effective charitable giving will be driven away by the existential risk stuff.
My first thought was "Yes, I think such a split would be an excellent thing."
My second thought is similar, but with one slight concern: I think that the EA movement probably benefits from attracting and being dominated by blueish-grey thinkers; I have a vague suspicion that such a split would result in the two halves becoming pure blue and reddish-grey respectively, and I think a pure blue Effective Charity movement might be less effective than a more ruthlessly data-centric bluish-grey one.
Yes, a pure blue Effective Charity movement would give you more projects like the hundreds of millions OpenPhil spent on criminal justice, which they deemed ineffective but then spun off into its own thing.
I personally know four people who were so annoyed by AI doomers that they set out to prove beyond a reasonable doubt that there wasn't a real risk. In the process of trying to make that case, they all changed their mind and started working on AI alignment. (One of them was Eliezer, as he detailed in a LW post long ago.) Holden Karnofsky similarly famously put so much effort into explaining why he wasn't worried about AI that he realized he ought to be.
The EA culture encourages members to do at least some research into a cause in order to justify ruling it out (rather than mocking it based on vibes, like normal people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs pivoting to work on AI x-risk is one of the strongest meta-arguments for why you, dear reader, should give it a second thought.
This was also my trajectory ... essentially I believed that there were a number of not too complicated technical solutions, and it took a lot of study to realize that the problem was genuinely extremely difficult to solve in an airtight way.
I might add that I don't think most people are in a position to evaluate in depth and so it's unfortunately down to which experts they believe or I suppose what they're temperamentally inclined to believe in general. This is not a situation where you can educate the public in detail to convince them.
I'd argue in the opposite direction: that one of the best things about EA (as the Rationalist) community is that it's a rare example of an in-group defined by adherence to an epistemic toolbox rather than affiliation with specific positions on specific issues.
It is fine for there to be different clusters of people within EA who reach very different conclusions. I don't need to agree with everyone else about where my money should go. But it sure is nice when everyone can speak the same language and agree on how to approach super complex problems in principle.
I think this understates the problem. EA had one good idea (effective charity in developing countries) one mediocre idea (that you should earn to give) and then everything else is mixed but being an EA doesn't provide good intuitions any more than being a textualist in US Jurisprudence. I'm glad the Open Phil donated to the early yimby movement but if I want to support good US politics I'd prefer to directly donate to Yimby Orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and Open AI events should be treated as broadly discrediting to the idea that EA is a well run organization and the reliability of the current leadership. I think GiveWell remains a good organization for what it is (and will continue donating to GiveDirectly) but while I might trust individuals that Scott is calling EA I think that the EA label is negative the way that I might like libertarians but not people using the Libertarian label.
I think EA is great and this is a great post highlighting all the positives.
However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EA'ers were woo'ed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples as they highlight some sort of blindspots in the community that the community doesn't seem to be confronting.
And maybe that's unfair. But EA signed up to be held to a higher standard.
I didn't sign up to be held to a higher standard. Count me in for team "I have never claimed to be better at figuring out whether companies are frauds than Gary Gensler and the SEC". I would be perfectly happy to be held to the same ordinary standard as anyone else.
I'm willing to give you SBF but I don't see how the castle thing holds up. There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car while actually living in a mansion is an (unfair) microcosm of the castle thinking.
I don’t really get the issue with the castle thing. An organization dedicated to marketing EA spent a (comparatively) tiny amount of money on something that will be useful for marketing. What exactly is hypocritical about that?
It's the optics. It looks ostentatious, like you're not really optimizing for efficiency. Sure, they justified this on grounds of efficiency (though I have heard questioning of whether being on the hook for the maintenance of a castle really is cheaper than just renting venues when you need them), but surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
1. Poor optics isn’t hypocrisy. That is still just a deeply unfair criticism.
2. Taking effectiveness seriously involves putting effectiveness above optics in some cases. The problem with many non-effective charities is that they are too focused on optics.
3. Some of the other EA “scandals” make it very clear that it doesn’t matter what you do, some people will hate you regardless. Why would you sacrifice effectiveness for maybe (but probably not) improving your PR given the number of constraints.
You can't separate optics from effectiveness, since effectiveness is dependent on optics. Influence is power, and power lets you be effective. The people in EA should know this better than anyone else.
See, I think EA shows a lack of common sense, and this comment is an example. It's true that no matter what you do some people will hate you, but if you buy a fucking castle *everybody's* going to roll their eyes. It's not hard to avoid castles and other things that are going to alienate 95% the public. And you have to think *some* about optics, because it interferes with the effectiveness of the organization if 95% of the public distrusts it.
EA's disdain for "optics" is part of what drew me to it in the first place. I was fed up with charities and policymakers who cared far more about being perceived to be doing something than about actually doing good things.
Where do you draw the line? If EAs were pursuing smooth interactions with normies, they would also be working on the stuff normies like.
Also, idk, maybe the castle was more expensive than previously thought. Good on paper, bad in practice. So, no one can ever make bad investments? Average it in with other donations and the portfolio performance still looks great. It was a foray into cost-saving real estate. To the extent it was a bad purchase, maybe they won't buy real estate anymore, or will hire people who are better at it, or what have you. The foundation that bought it will keep donating for, most likely, decades into the future. Why can't they try a novel donor strategy and see if it works? For information value. Explore what a good choice might be asap, then exploit/repeat/hone that choice in the coming years. Christ, *everyone* makes mistakes and tries things given decent reasoning. The castle had decent reasoning. So why are EAs so rarely allowed to try things, without getting a fingerwag in response?
Look at default culture not EA. To the extent EAs need to play politics, they aren't the worst at it (look at DC). But donors should be allowed to try things.
I don't know, I feel like if there had been a single pragmatic person in the room when they proposed to buy that castle, the proposal would have been shot down. But yes, I do agree that ultimately, you have to fuck around and find out to find what works, so I don't see the castle as invalidating of EA, it's just a screw up.
Didn’t the castle achieve good optics with its target demographic though? The bad optics are just with the people who aren’t contributing, which seems like an acceptable trade-off
> surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
If the normies you're trying to pursue smooth interactions with include members of the British political and economic Establishment, "come to our conference venue in a repurposed country house" is absolutely the way to go.
I think you're overestimating how much the castle thing affects interactions with normies. It was a small news story and I bet even the people who read it at the time have mostly forgotten it by now. I estimate that if a random person were to see a donation drive organized by EAs today the chance that their donation would be affected by the castle story is <0.01%
It's hard to believe that a castle was the optimum (all things considered; no one is saying EA should hold meetings in the cheapest warehouse). The whole pitch of the group is looking at things rationally, so if they fail at one of the most basic things like choosing a meeting location, and there's so little pushback from the community, then what other things is the EA community rationalizing invalidly?
And if we were to suppose that the castle really was carefully analyzed and evaluated validly as at- or near-optimal, then there appears to be a huge blindspot in the community about discounting how things are perceived, and this will greatly impact all kinds of future projects and fund-raising opportunities, i.e. the meta-effectiveness of EA.
Have you been to the venue? You keep calling it "a castle" which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space ~similar to renting a nice hotel. It is far from the most luxurious hotels, but it is like a home-y version of the level you get in hotels in which you run events. They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Quick test: If the venue appreciated in value and now can be sold for twice the money making this net positive investment which they can in a pinch use to sponsor a really important crisis, and they do that - does that make the purchase better? If renting it our per year makes full financial sense, and other venues would have been worse - are you now convinced?
If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
No, and it doesn't matter. EA'ers such as Scott have referred and continue to refer to it as a castle, so it must be sufficiently castle-like and that's all that matters as it impacts the perception of EA.
> They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Those other considerations could have included a survey of how buying a castle would affect perceptions of EA and potential donors. This is a blindspot.
> If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
Also indirectly answering your other questions -- I don't care about the castle. I'm rational enough to not care. What I care about is the perception of EA and the fact that EA'ers can't realize how bad the castle looks and how this might impact their future donations and public persona. They could have evaluated this rationally with a survey.
Why wouldn't a castle be the optimal building to purchase? It is big, with many rooms, and due to the lack of modern amenities it is probably cheaper than buying a more recently built conference center type building. Plus more recently built buildings tend to be in more desirable locations were land itself is more expensive. I think you're anchoring your opinion way too much on "castle=royalty".
So far it's been entirely negative for marketing EA, isn't in use (yet), isn't a particularly convenient location, and the defenders of the purchase even said they bought the castle because they wanted a fancy old building to think in.
So the problem with the castle is not the castle itself it's that it makes you believe the whole group is hypocritical and ineffective? But isn't that disproved by all the effective actions they take?
Not me. I don't care about the castle. I'm worried about public perceptions of EA and how it impacts their future including donations. Perceptions of profligacy can certainly overwhelm the effective actions. Certain behaviors have a stench to lots of humans.
I think the only rational way to settle this argument would be for EA to run surveys of the impact on perceptions of the use of castles and how that could impact potential donors.
Imagine an Ivy League university buys a new building, then pays a hundred thousand dollars extra to buy a lot of ivy and drape it over the exterior walls of the building. The news media covers the draping expenditure critically. In the long term, would the ivy gambit be positive or negative for achieving that university's goals of cultivating research and getting donations?
I don't know. Maybe we need to do one of those surveys that you're proposing. But I would guess that it's the same answer for the university's ivy and CEA's purchase of the miniature castle.
The general proposal I'm making: if we're going to talk about silly ways of gaining prestige for an institution, let's compare like with like.
All I can write at this point is that it would be worth a grant to an EA intern to perform a statistically valid survey of how EA using a castle impacts the perception of EA and potential future grants. Perhaps have one survey of potential donors, another of average people, and include questions for the donors about how the opinions of average people might impact their donations.
Yes, I read your points and understand them. I find them wholly unconvincing as far as the potential impacts on how EA is perceived (personally, I don't care about the castle).
EAs have done surveys of regular people about perceptions of EA - almost no one knows what EA is.
Donors are wealthy people, many of whom understand the long-term value of real estate.
I like frugality a lot. But I think people who are against a conference host investing in the purchase of their own conference venue are not thinking from the perspective of most organizations or donors.
Ie., it's an average sort of thing that lots of other organisations would do. But EA is supposed to be better. (i don't have anything against EA particularly, but this is a pattern I keep noticing -- something or someone is initially sold as be better then defended as being not-worse).
We should learn to ignore the smell of hypocrisy. There are people who like to mock the COP conferences because they involve flying people to the Middle East to talk about climate change. But those people haven’t seriously considered how to make international negotiations on hard topics effective. Similarly, some people might mock buying a conference venue. But those people haven’t seriously thought about how to hold effective meetings over a long period of time.
On that front, EA sometimes has a (faux?) humble front to it, and that's part of where the hypocrisy comes from. I think that came in the early days, people so paralyzed by optics and effectiveness that they wouldn't spend on any creature comforts at all. Now, perhaps they've overcorrected, and spend too much on comforts to think bigger thoughts.
But if they want to stop caring about hypocrisy, they should go full arrogant, yes we're better and smarter than everyone else and we're not going to be ashamed of it. Take the mask off and don't care about optics *at all*. Let's see how that goes, yeah?
People don't mock buying a venue, they mock buying a *400 year old castle* for a bunch of nerds that quite famously don't care about aesthetics.
Re: "should I care about perception?", I think "yes" and "no" are just different strategies. Cf. the stock market. Whereas speculators metagame the Keynesian Beauty Contest, buy-&-hold-(forever) investors mostly just want the earnings to increase.
This type of metagaming has upsides, in that it can improve your effectiveness, ceteris paribus. This type of metagaming also has downsides, in that it occasionally leads to an equilibrium where everyone compliments the emperor's new clothes.
My impression is that EA is by definition supposed to be held to a higher standard. It's not just plain Altruism like the boring old Red Cross or Doctors Without Borders, it's Effective Altruism, in that it uses money effectively and more effectively than other charities do.
I don't see how that branding/stance doesn't come with an onus for every use of funds to be above scrutiny. I don't think it's fair to say that EA is sometimes makes irresponsible purchases, but it should be excused because on net EA is good. That's not a deal with the devil, it's mostly very good charitable work with the occasional small castle sized deal with the devil. That seems to me like any old charitable movement and not in line with the 'most effective lives per dollar' thesis of EA.
I can barely comprehend the arrogance of a movement that has in its literal name a claim that they are better than everyone else (or ALL other charities at least), that routinely denigrates non-adherents as "normies" as if they're inferior people, that has members who constantly say without shame or irony that they're smarter than most people, that they're more successful than most people (and that that's why you should trust them), that is especially shameless in its courting of the rich and well-connected compared to other charities and groups...having the nerve to say after a huge scandal that they never claimed a higher standard than anyone else.
Here's an idea. Maybe, if you didn't want to be held to a higher standard than other people, you shouldn't have *spent years talking about how much better you are than other people*.
I think you're misunderstanding EA. It did not create a bunch of charities and then shout "my charities are the effectivest!" EA started when some people said "which jobs/charities help the world the most?" and nobody had seriously tried to find the answers. Then they seriously tried to find the answers. Then they built a movement for getting people and money sent where they were needed the most. The bulk of these charities and research orgs *already existed*. EA is saying "these are the best", not "we are the best".
And- I read you as talking about SBF here? That is not what people failed at. SBF was not a charity that people failed to evaluate well. SBF was a donor who gave a bunch of money to the charities and hid his fraud from EA's and customers and regulators and his own employees.
I have yet to meet an EA who frequently talks about how they're smarter, more successful, or generally better than most people. I think you might be looking at how some community leaders think they need to sound really polished, and overinterpreting?
Now I have seen "normies" used resentfully, but before you resent people outside your subculture you have to feel alienated from them. The alienation here comes from how it seems really likely that our civilization will crash in a few decades. How if farm animals can really feel then holy cow have we caused so much pain. How there's 207 people dying every minute- listen to Believer by Imagine Dragons, and imagine every thump is another kid, another grandparent. It's an goddamn emergency, it's been an emergency since the dawn of humanity. And we can't fix all of it, but if a bunch of us put our heads together and trusted each other and tried really hard, we could fix so much... So when someone raised a banner and said "Over here! We're doing triage! These are the worst parts we know how to fix!", you joined because *duh*. Then you pointed it out to others, and. Turns out most people don't actually give a shit.
That's the alienation. There's lots of EA's who aren't very smart or successful at all. There's lots of people who get it, and have been triaging the world without us and don't want to join us. This isn't alienating. Alienation comes from normies- many of them smarter and more successful- who don't care. Or who are furious your post implied an art supply bake sale isn't just as important as the kids with malaria. It doesn't make people evil that they don't experience that moment of *duh*, but goddamn do I sometimes feel like we're from different planets.
That was a good comment, and mine above was too angry I think. I'm starting to think everyone's talking about very different things with the same words. This happens a lot.
First, I'm a bit sceptical of the claim that, before EA nobody was evaluating charity effectiveness. This *feels* like EA propaganda, and I'm *inclined* to suspect that EA's contribution was at least as much "more utilitarian and bayesian evaluation" as "more evaluation". BUT I have no knowledge of this whatsoever and it has nothing to do with my objection to EA, so I'm happy to concede that point.
Second, regarding SBF my main issue is with the morality of "earning to give" and its very slippery slope either straight to "stealing to give" or to "earning to give, but then being corrupted by the environment and lifestyle associated with earning millions, and eventually earning and stealing to get filthy rich". Protestations that EAs never endorsed stealing, while I accept they're sincere, read a bit too much like "will no one rid me of this troublesome priest?" It's important for powerful people to avoid endorsing principles that their followers might logically take to bad ends, not just avoid endorsing the bad ends themselves. (Or at least, there's an argument that they should avoid that, and it's one that's frequently used to lay blame on other figures and groups.)
Third, regarding "normies", I don't feel like I've seen it used to disparage "people who don't think kids with malaria are more important than the opera", or if I have not nearly as many times as it's used to disparage "people who think kids with malaria are important than space colonies and the singularity". I completely see the "different planets" thing, and this goes both ways. Lots of people don't care about starving children, and that's horrific. EAs of course are only a small minority of those who *do* care, effectiveness notwithstanding. On the other hand, this whole "actual people suffering right now need to be weighed against future digital people" is so horrific, so terrifying, so monstrous that I'm hoping it's a hoax or something. But I haven't seen anyone deny that many EAs really do think like that. In a way, using the rescources and infrastructure (if not the actual donations) set up for global poverty relief, to instead make digital people happen faster, is much worse than doing nothing at all for poverty relief to begin with (since you're actively diverting resources from it). So we could say "global health EAs" are on one planet, "normies" are on a second planet, and "longtermist EAs" are on a third planet, and the third looks as evil to the second as the second does to the first.
Fwiw, charity evalutation existed before EA, but it was almost entirely infected by goodhart's law. charity evaluators measured *overhead*, not impact. A charity which claimed to help minorities learn stem skills by having them make shoes out of cardboard and glue as an afterschool program (because everyone knows minorities like basketball shoes, and designing things that require measurements is kind of like stem) would have been rated very, very highly if they were keeping overhead low and actually spending all of the money on their ridiculous program, but the actual impact of the program wouldn't factor into it at all. I use this example because it's something I actually saw in real life.
These evaluators served an important purpose in sniffing out fraud and the kind of criminal incompetence that destroys most charities, but clearly there was something missing, and EA filled in what was missing
TBC, you're replying to a comment about whether individual EA's should be accountable for many EA orgs taking money from SBF. I do not think that "we try to do the most good, come join us" is branding with an onus for you, as an individual, to run deep financial investigations on your movement's donors.
But about the "castle", in terms of onuses on the movement as a whole- That money was donated to Effective Ventures for movement building. Most donations given *under EA* go to charities and research groups. Money given *directly to EV* is used for things like marketing and conferences to get more people involved in poverty, animal, and x-risk areas. EV used part of their budget to buy a conference building near Oxford to save money in the long run.
If the abbey was not the most effective way to get a conference building near Oxford, or if a conference building near Oxford was not the most effective way to build the movement, or if building the movement is not an effective way to get more good to happen, then this is a way that EA fell short of its goal. Pointing out failures is not a bad thing. (Not that anyone promised zero mistakes ever. The movement promised thinking really hard and doing lots of research, not never being wrong.) If it turns out that the story we heard is false and Rob Wiblin secretly wanted to live in a "castle", EA fell short of its goal due to gross corruption by one of its members, which is worth much harsher criticism.
In terms of the Red Cross, actually yes. Even if we found out 50% of all donor money was being embezzled for "castles", EA would still be meeting its goal of being more effective than just about any major charity organization. EA donation targets are more than twice as cost effective as Red Cross or DWB.
Hold to the higher standard, but if you’re going to criticize about the castle, you better be prepared to explain how better to host a series of meetings and conferences on various topics without spending a lot more money.
I think your assumption that "any old charitable movement" is about as effective as using the vast majority of funds on carefully chosen interventions plus buying a castle once and then falling for a snake oil salesman is wrong though. My impression is most charitable movements accomplish very little so it is quite easy to be more effective than them. And until another movement comes along that is more effective than EA at saving lives I'll continue thinking that.
A lot of people ignore it, but I continue to find the "Will MacAskill mentored SBF into earn to give" connection the problem there. No one can always be a perfect judge of character, but it was a thought experiment come to life. It says... *something* about the guardrails and the culture. It's easy to take it as saying too much, to be sure many people do, but it's also easy to ignore what it says entirely.
I recognize broader-EA has (somewhat) moved away from earning to give and that the crypto boom that enabled SBF to be a fraud of that scale was (probably) a once in a lifetime right-place right-time opportunity for both success and failure. Even so.
In point of fact, you all are being held to the ordinary standard. Public corruption leads to public excoriation, and "but look at the good we do" is generally seen as a poor defense until a few years later when the house is clearly clean. That is the ordinary standard.
I think EA signed up to be held to the standard "are you doing the most good you can with the resources you have". I do not think it signed up to be held to the standard "are you perceived positively by as many people as possible". Personally I care a lot more about the first standard, and I think EA comes extremely impressively close to meeting it.
Sure, but go Meta-Effectiveness and consider that poor rhetoric and poor perception could mean fewer resources for the actions that really matter. A few more castle debacles and the cost for billionaires being associated with EA may cross a threshold.
Castle != cost-effective. And perceptions of using castles, and blindness to how bad this looks, could have massive long-term impacts on fund-raising.
I don't understand why this is so complicated. It doesn't matter how tiny the cost of the castle has been relative to all resources spent. It's like a guy who cheated on a woman once. Word gets around. And when the guys says, "Who _cares_ about the cheating! Look at all the wonderful other things I do" then it looks even worse. Just say, "Look, we're sorry and we're selling the castle, looking for a better arrangement, and starting a conversation about how to avoid such decisions in the future."
Yeah, I was just now trying to run figures about increased persuasiveness toward government officials and rich people, to see what the break-even would have to be.
Given the obvious difference in intuitions on how to discount the perceptions of profligacy, as proposed in another response to Scott, I think the only way to actually resolve this is to conduct a survey.
I understand your model is that the abbey was a horrid investment and a group that holds itself out as a cost-effectiveness charity, but also makes horrid investments, should lose credibility and donors.
No one disagrees with that premise.
I disagree that it was a horrid investment, based on the info they had at the time.
So, I don’t see a loss of credibility there.
Others will disagree that CEA/EV is primarily a cost-effectiveness charity.
It looks pretty good to people who think castles are cool, and don't really care much about austerity or poor people or math. There are staggering numbers of such people, some of whom are extremely rich, and EA might reasonably have difficulty extracting money from them without first owning a castle.
Unless people set out with a vendetta to destroy EA, the castle will be forgotten as a reputational cost, but will still be effective at hosting meetings. And if people do set out with a vendetta to destroy EA, it’s unlikely the castle thing is the only thing they could use this way.
The community by it's nature has those blindspots. Their whole rallying cry is "Use data and logic to figure out what to support, instead of what's popular". This attracts people who don't care for or aren't good at playing games of perception. This mindset is great at saving the most lives with the least amount of money, it's not as good for PR or board room politics.
Right, but they could logically evaluate perceptions using surveys. That begs the question: what other poor assumptions are they making that they're not applying rationalism to?
I do wonder if the "castle" thing (it's not a castle!) is just "people who live in Oxford forget that they're in a bubble, and people who've never been to Oxford don't realise how weird it is". If you live in Oxford, which has an *actual* castle plus a whole bunch of buildings approaching a thousand years old, or if you're at all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say "Yep, looks like a solid choice. Wait, you want a *modern* building? Near *Oxford*? Do you think we have infinite money, and infinite time for planning applications?"
The word "castle" can be a bit misleading. They (or the ones in the UK) aren't all huge drafty stone fortresses. Many, perhaps most, currently habitable and occupied ones differ little from normal houses, but maybe have a somewhat more squat and solid appearance and a few crenellated walls here and there. I don't know what Castle EA looks like though! :-)
Edit: I did a quick web search, and the castle in question is called Chateau Hostacov and is in Bohemia, which is roughly the western half of the Czech Republic. (I don't do silly little foreign accents, but technically there is an inverted tin hat over the "c" in "Hostacov").
It cost all of $3.5M, which would just about buy a one-bedroom apartment in Manhatten or London. So not a bad deal, especially considering it can be (and, going by its website, is being) used as a venue for other events such as conferences and weddings and vacations etc:
Impressive! By the way, I've slain and will continue to slay billions of evil Gods who prey on actually existing modal realities where they would slay a busy beaver of people – thus, if I am slightly inconvenienced by their existence, every EA advocate has a moral duty to off themselves. Crazy? No, same logic!
I believe I can present better evidence to support the claim that EA has saved 200,000 lives than you can present to support the claim that you have slain billions of evil gods. Do you disagree with this such that I should go about presenting the evidence, or do you have some other point that I'm missing?
Surely the evidence is not trillions of time stronger than my evidence (which consists of my testimony, a kind of evidence)! So, my point stands. (And I can of course just inflate the # of Gods slain for whatever strength of evidence you offer.) Checkmate, Bayesian moralists.
But let's take a step back here and think about the meta-argument. You're the one who says that one of EA's many laudable achievements are "preventing future pandemics ... [and] preparing for superintelligent AI."
And this is surely the fat end of the wedge -- that is, while you do a fine job of bean-counting the various chickens uncaged and persons assisted by EA-related charities, I take your real motivation to be to argue for EA's benevolence on the basis of saving us from a purely speculative evil.
If we permit such speculation to enter into our moral calculations, we'll have no end of charlatans, chicanery, and Tartuffes. And in fact that is just what we've seen in the EA community writ large -- the 'psychopaths' hardly let the 'mops' hit the floor before they started cashing in.
So you're calling future pandemics a speculative evil? Or is that just about the AI? Don't conflate those two things as one of them, as we have recently seen, poses a very real threat.
Also your whole thing about the evil gods and Bayesian morals just comes off annoying, like this emoji kind of 🤓
Future pandemics are speculative in the sense that they're in futuro, yes, but what I meant to say was that EA qua EA assisting with the fight against such pandemics is, at the moment, speculative. In my view they did not cover themselves in glory during the last pandemic, but that's a whole separate can of worms.
And I am sorry for coming off in a way you dislike. I will try to be better.
It sounds like you are describing Pascal's Mugging https://en.wikipedia.org/wiki/Pascal%27s_mugging There are multiple solutions to this. One is that the more absurd the claim you are making, the lower a probability I assign to it. That scales linnerally, so just adding more orders of magnitude to your claim doesn't help you
Thanks; I assume the reader's familiarity with Pascal's mugging and related quandaries & was winking at same but the point I was making is different (viz. that we can't have a system of morality built on in futuro / highly speculative notions -- that's precisely where morality stops and religion begins).
we routinely take measures against risks that are lower than one in a million, potentially decades in the future. the idea that future, speculative risks veer into religion proves too much
Thank you for the thought-provoking essay. My kneejerk is to say that just because people do it does not mean it is rational, let alone a sound basis for morality.
More deeply, I fear you've merely moved the problem to a different threshold, not solved it -- one can just come up with more extravagant examples of speculative cosmic harms. This is particularly so under imperfect information and with incentive to lie (and there always is).
But more to the point, my suspicion EA is, in large part, epistemic: they purport to be able to quantify the Grand Utility Function in the Sky, but on what basis? My view is that morality has to be centered on people we want to know -- attempts to take utilitarianism seriously, even putting aside the problem of calculation, seem to me to fall prey to Parfitian objections like the so-called intolerable hypothesis. My view is that morality should be agent-centric and based on actual knowledge -- there's always going to be some satisficing. Thus, if asked to quantify x-risks and allocate a budget, I'd want to know about opportunity costs.
1) This is not a disagreement over how to resolve Pascal's Mugging. AI doomers think the probability for doom is significant, and that the argument for mitigating it does not rely on some sort of Pascalian multiplying-a-minuscule-number-by-a-giant-consequence. You might disagree about the strength of their case, but that does not mean they are asking you to accept the mugging, so your argument does not apply.
2) Scott spent a great deal of this essay harping on the 200,000 lives saved and very little on mitigating future disasters. It is unfair and unreasonable of you to minimize this just because you *think* Scott's actual motivation is something else. Deal with the stated argument first, and then, if you successfully defeat that, you can move on to dissecting motives.
3) I wish to go on record saying that it seems clear to me (as a relative bystander) that you are going out of your way to be an obnoxious twat, just in case Scott is reluctant to give you an official warning/ban due to his conflict of interest as a participant in the dispute.
Re: 1), I'm not sure what you're trying to argue. I think maybe you didn't understand my comment? Anyway, we are like two ships passing in the night.
Re the rest, why would he ban me? I'm not the one going around calling people nasty words. You're right that I shouldn't mind-read Scott, and that he did an able job of toting up the many benefits of EA-inspired people. I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.
I disagree with the force of the insult, but being coy about your point as the opening salvo and then NOT explicitly defending any stance is rude and should be treated as rude.
1) You compared AI concerns and pandemic concerns to Pascal's Mugging. This comparison would make sense if the concerned parties were saying "I admit this is extremely unlikely to actually happen, but the consequences are so grave we should worry about it anyway".
But I have never heard Scott say that, and most people concerned about pandemics and AI doom do not say that. e.g. per Wikipedia, a majority of AI researchers think P(doom) >= 10% ( https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence ). That's not even doomers specifically; that's AI researchers in general.
Presumably you'd allow that if a plane has a 10% chance of crashing then it would make sense to take precautions.
Therefore your comparison is not appropriate. The entire thing is a non-sequitur. You are arguing against a straw man.
3) Your response to Scott's question started with an argument that (you admitted later in the same comment) wasn't even intended to apply to the claim that Scott actually made, and then literally said "checkmate". You are being confusing on purpose. You are being offensive on purpose, and with no apparent goal other than to strut.
Ok well if your survey evidence says so I guess you win hehe. fr though: dude chill, I am not going to indulge your perseveration unless you can learn to read jocosity.
“I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.”
Come on, this statement is condescending.
To me it says you’re not taking this seriously but just enjoying the abstract conversation.
If you’re taking things seriously, it should be obvious that *believing* things like “cruelty is bad” is clearly not the same thing as *building* things that that allow more people to *take action* on that belief, who then actually do.
>Surely the evidence is not trillions of time stronger than my evidence (which consists of my testimony, a kind of evidence)!
Consider two people - one who genuinely has slain billions of evil Gods and needs help, and one who is trolling. Which do you think would be more likely to post something in an obviously troll-like tone like yours? So your testimony is actually evidence /against/ your claim, not for it.
By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless.
"By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless."
They're a bit meaningless as evidence of the benefits of EA when it's just the sort of thing the people involved would probably be doing anyway. But it's very difficult to judge such counterfactual arguments. Is there some metric of Morality Above Replacement?
1) Did you read the footnotes? Actual total deaths from malaria have dropped from 1M to 600K. That’s a useful sanity check number from reality
2) It is unlikely that people would have been doing EA type giving without EA. It’s not just what people would have done anyway.
Before GiveWell and ACE existed, the only charity evaluator was Charity Navigator, who ranks based on things like overhead, which I do not care about.
I would have *wanted* to give effectively but most of us do not have time to vet every individual cause area and charity for high impact opportunities. I was giving to projects that were serving significantly fewer people per dollar.
Without EA principles and infrastructure, Muskowitz money would have gone to different causes.
If you believe EA analysis identified high impact ways to save more lives per dollar, then EA orgs should be credited for more lives saved than would otherwise have been saved per dollar.
Some testimony is positive evidence for some claims, but not all testimony is. Why shouldn’t I think your testimony is zero evidence, or even negative evidence?
Let’s agree to ignore all the hypothetical lives saved and stick to real, material changes in our world. EA can point to a number of vaccines, bed nets, and kidneys which owe their current status to the movement. To what can you point?
The top charities on GiveWell address malaria, vitamin A deficiency, and third-world vaccination. Those are real charities which help real people efficiently.
I understand not believing in x-risk, or that dollars spent on it are wasted. If you ignore those, you’re left with some smaller but definitely nonzero lives saved by charities like those above.
I'm not super-concerned about any of that stuff and as I mentioned above, I don't think there is very good evidence that EA was the proximate cause of any gains, as opposed to, "high SES/IQ + conscientious + [somewhat] neurotic people will tend to be do gooders and effective at it, often cloaking their impulse in the guise of some philosophy". But it seems an idle dispute.
At the very least, with the malaria thing, people really didn't care about it until some guys started crunching numbers and realized it was by far the best lives saved per cash spent. Considering that's basically what started the whole movement, I think it's fair to credit EA with that.
If you are uninterested in the difference between the impact of, say, facilitating 100 kidney donations instead of 10 given similar resource constraints, we don’t share key interests, values, or priorities.
Out of curiosity, are you highly confident that artificial superintelligence is impossible, or are you confident that when artificial superintelligence comes about it will definitely be positive? It seems that in order to be so dismissive of AI risk, you must be confident in one or both of these assumptions.
I would appreciate in hearing your reasoning for your full confidence in whichever of those assumptions is the more load-bearing one for you.
If you don’t have full confidence in at least one of those two assumptions, then I feel like your position a bit like having your foot stuck in a train track, and watching a train head ponderously toward you down the track from a distance away, and refusing to take any steps to untie your stuck shoe because the notion of the train crushing you is speculative.
Thanks for asking. See https://joecanimal.substack.com/p/tldr-existential-ai-risk-research -- in essence, (i) unlikely there will be foom/runaway AI/existential risk; (ii) but if there is, I'm absolutely confident we cannot do anything about it and there's been no indication to the contrary we may as well just pray; (iii) yet while AI risk is a pseudo-field, it has caused real and material harm as it is helping to spur nannyish measures that cripple vital tech, both from within companies & from regulators.
Interesting. I don’t agree with your assumptions but, more importantly, also don’t think your argument quite stands up even on its own merits. On (i) I would still want to get off the train track whether the train is coming quickly or slowly (AI X-risk doesn’t hinge on speed); if (ii) is true then we can’t actually get our foot out of the tracks regardless. I would rather go out clawing at my shoe (and screaming) rather than just resign myself. And if (ii) then who cares about (iii)? We’ll all be dead soon anyway.
Thanks for reading. I'm not so sure that x risk doesn't depend on speed, for the reason suggested by your train example. I think it sort of does. On ii it seems like we don't have a true disagreement, and thus same for iii.
The whole point can be summed up by "doing things is hard, criticism is easy."
I continue to think that EA's pitch is that they're uniquely good at charity and they're just regular good at charity. I think that's where a lot of the weird anger comes from - the claim that "unlike other people who do charity, we do good charity" while the movement is just as susceptible to the foibles of every movement.
But even while thinking that, I have to concede that they're *doing charity* and doing charity is good.
We all agree that EA has had fuckups, the question is wether those fuckups to good stuff is better or worse then the reference class you are judging against. So what factors are you looking at that bring you to that conclusion?
I’ll go further than this - even if EA is kinda bad at doing charity, the average charity is *really* bad at doing charity so it’s not hard at all to be uniquely good at doing charity.
E.g. even if every cent spent on AI and pandemics etc was entirely wasted I still think EA is kicking World Vision’s butt.
Huh, what's the problem with Worldvision? I had a memory of some EAs that kind of hated them because they're Christian but still considered them fairly effective (points knocked off for the proselytizing, but otherwise good on the money).
This is exactly right. Spend months, even years trying to build stuff, and in hours someone can have a criticism. Acknowledge it, consider it if you think there's validity there, then just move on. Criticism is easy.
I think a lot of the criticism is coming from, or being subsidized by, groups who used to think of themselves as being "regular good at charity" and are no longer feeling secure in that. If so, scandal-avoidance effort within EA might actually be making backlash more severe at the margin, similar to prevention of minor forest fires leading to overgrown underbrush and ultimately more destructive wildfires. When the root complaint isn't "they did these specific things egregiously wrong," so much as "rival too perfect, must tear down, defend status," outrage will escalate the longer the investigation goes on without finding a meaningful flaw.
That might be true or not, but it doesn't need to be true for the charity part of EA to be doing a good job. They get people excited to put money and effort into a best effort at charity, that's just good in itself. No need to hang the movement's collective ego on being better than someone else - which no-one guarantees they will be anyway.
Couldn’t any movement be reduced to some universally agreed-upon principle and dismissed as insignificant on that basis? But if effective altruism is so universally agreed on, how come it wasn’t being put into effect until the effective altruists came on the scene?
"I am a big fan of checking up on charities that they're actually doing what they should with the money, a big proponent that no one should ever donate a penny to the Ivy Leagues again, I donate a certain percentage of my money and time and career, does that make me an EA? If it does, then we're back to that conflation of how to critique the culture that goes under the same name."
Why not simply call it 'internal criticism within EA'? For me, one the quintessential EA culture things is the 80k hour podcast, and it's not like they're all AI doomers (or whatever problem one could have with it)
Since I don't live in NYC/SF/London, I don't have a Stanford or Oxford degree, and I don't work at a think tank, it's really easy to not be internal and would be difficult at this point to reach the kind of internal that actually gets listened to.
It's a lowercase/uppercase distinction, or a motte and bailey. I *like* effective altruism: to hell with the Susan G Komen Foundation or United Way, up with bednets and food pantries (I know they're not capital-EA effective, but I'm primarily a localist and on those terms they seem to be relatively efficient).
I am somewhat fascinated by but don't really want to be part of EA- I'm not a universalist utilitarian that treats all people as interchangeable or shrimp as people, I think the "let's invent God" thing is largely a badly-misdirected religious impulse and/or play for power, I have a lot of issues with the culture.
EA-adjacent works, I guess, but I don't really think I am. My root culture, my basic operating system is too far off. Leah Libresco Sargent is more willing to call herself EA or EA-adjacent, so perhaps it's fair enough. But I think Scott underrates the "weird stuff" and the cultural effects that keep certain people out.
My take on the Scott-ProfGerm exchange is that the EA movement needs a better immune system to address charlatans using the movement for their own ends and weirdoes who are attracted to the idea but who end up reasoning their way to the extinction of the human race or something, but the EA framework is probably the best place to develop those tools, and regular charities are susceptible to the same risks.
(Especially #1. When Enron collapsed, no one but Randians argued that Ken Lay's use of conspicuous charity to advance his social stnading demonstrated that the idea of charity was a scam and should be abandoned, but somehow Freddie has come to that conclusion from SBF.)
SBF might have been motivated by EA principles, and whether or not he was, he seems to have used them for a time to get extra work out of some true believes for less money/equity, but he's an individual case. The OpenAI situation strikes me as more about AI risk and corporate management than it is about EA.
Yudkowsky believes that (1) EA principles will help people identify and achieve their charitable goals more effectively, and (2) more clarity will lead people to value AI safety more on average than they otherwise would. If someone doesn't agree with #2, then they can spend their money on bednets and post some arguments if they think that would be helpful.
Everything you say there seems right, and it doesn't look like Freddie objects to anything in your reply? But it looks like Motte-and-Bailey. "EA is actually donating a fixed amount of your income to the most effective (by your explicit and earnest evaluation) charity" is the motte, while the focus on longtermism, AI-risk and ant welfare is the bailey.
> every time I write about EA, there's a lot of comments of the type "oh, just ignore the weirdos." But you go to EA spaces and it's all weirdos! They are the movement! SBF became a god figure among them for a reason! They're the ones who are going to steer the ship into the future, and they're the ones who are clamoring to sideline poverty and need now in favor of extinction risk or whatever in the future, which I find a repugnant approach to philanthropy. You can't ignore the weirdos.
Deeply ironic he can't see that being a substack writer, socialist, his array of mental health issues and medications, arguing about EA on a blog, all makes him just as much of a "weirdo" as anyone I know. I'm damn sure if you dropped him into the average American house party he wouldn't fit in well with "normies."
This motte is also extremely weird, when we consider revealed preferences of the vast majority of humanity, and I'm not sure how Freddy, or anyone else, can deny this with a straight face. Trying to explicitly evaluate charities by objective criteria and then donating to the top scoring is simply not a mainstream thing to do, and to the extent that capital letter EAs are leading the charge there they should be applauded, whatever even weirder things they also do on the side.
Yes, this is how subcultures work. The people who go to the meetups and events are disproportionally the most dedicated and weirdest members of the subculture.
And yet it is totally fair to distance from an organization due to concentrated weirdos, anywhere.
Look at the funnel from political to conservative to alt-right to vaguely neo-nazi. It's FAIR that we penalize the earlier links in the chain for letting such bad concentrations form downstream. EA looks like this. Charity to rational charity to doomer/longterm/singularitarian to ant welfare and frankly distressing weirdos with money.
I found Freddie's post pretty unrepentantly glossed over the fact that most people who do charity do so based on what causes are "closest" to them rather than what causes would yield the most good for the same amount of money - this inefficiency is not obvious to most people and is the foundation of EA. This pretty much makes the whole post pointless as far as I can see.
But also Freddie goes on and on about how the EA thing of assessing which causes are more impactful is just obvious - and then immediately goes on to dismiss specific EA projects on the basis that they're just *obviously* ridiculous - without ever engaging with the arguments for why they're actually important. Like, giving to causes based on numbers rather than optics is also a huge part of EA! Copy/paste for his criticism of longtermism.
I'm not saying it's impossible to do good criticism of EA. I'm just saying this isn't it. Maybe some of the ones he links are better (I haven't checked all) but in this specific instance Freddie comes across as really wanting to criticise something he hasn't really taken the time to properly understand (which is weird because he's clearly taken the time to research specific failures or instances of bad optics).
I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity."
If you claim to bring something uniquely good to the table, there's a fair argument that you should be able to explain what makes it uniquely good. If it turns out what makes your movement unique is people getting obsessive about an AI risk the public doesn't accept as real and a fraudster making off with a bunch of money, it's fair to say "I don't see how effective altruism brings anything to the table that normal charity doesn't."
This post makes a good argument that charities are great, and a mediocre argument that EA in particular is great, unless you already agree with EA's goals. If we substituted in "generic charity with focus on saving lives in the developing world" would there be any difference besides the AI stuff and the fraud? If not, it's still good that there's another charitable organization with focus on saving lives in the developing world but no strong argument that EA in particular is a useful idea.
The problem is that EA doesn't claim that other charities are not trying to be effective. The claim of EA is that people should donate their money to the charities that do the most good. That's not the same thing. You can have an animal shelter charity that is very efficient at rescuing dogs: they save more animals per dollar than any other shelter! They are trying to be effective at their chosen field. Yet at the same time, EA would say "You can save more human lives per dollar by donating to charities X, Y, and Z, so you should donate to them instead of to the animal shelter."
It's not about trying to run charities effectively, it's about focusing on the kinds of charity that are the most effective per dollar, and then working your way down from there. And not every charity is about that, not even most of them! Most charities are focused on their particular area of charity: animal shelters on rescuing animals, food banks on providing food for food insecure people in their region, and anti-malaria charities on distributing bed nets. EA is doing a different thing: it's saying "Out of those three options, donate your money to the malaria one because it saves X more lives per dollar spent."
This sounds like a rather myopic way of doing charity; if you follow this utilitarian line of reasoning to its logical conclusion, you'd end up executing some sort of a plot along the lines of "kill all humans", because after you do that no one else would have to die.
Thus, even if EA was truly correct in their claims to be the most effective charity at preventing deaths, I still would not donate to it, because I care about other things beyound just preventing deaths (e.g. quality of life).
But I don't think EA can even substantiate their claim about preventing deaths, unless you put future hypothetical deaths into the equation. Doing so is not a priori wrong; for example, if I'm deciding whether to focus on eliminating deadly disease A or deadly disease B, then I would indeed try to estimate whether A or B is going to be more deadly in the long run. But in order for altruism to be effective, it has to focus on concrete causes, not hypothetical far-future scenarios (be they science-fictional or theological or whatever), with concrete plans of action and concrete success metrics -- not on metaphysics or philosophy. I don't think EA succeeds at this very well at the moment.
"Kill all humans" is a (potential) conclusion of negative utilitarianism. Not all EAs, even if you agree a big majority are consequentialist, are negative utilitarians.
Things are evaluated on QALYs and not just death prevention in EA forums all the time, so I think it's common to care about what you claim to care about too.
As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try. If other people come along and do it better than EA, that's great, but all of these remain to a greater or lesser extent neglected.
> Things are evaluated on QALYs and not just death prevention in EA forums all the time
Right, but here is where things get tricky. Let's say I have $100 to donate; should I donate all of it to mosquito nets, or should I spread it around among mosquito nets, cancer research, and my local performing arts center ? From what I've seen thus far, EAs would say that any answer than "100% mosquito nets" is grossly inefficient (if not outright stupid).
> As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try.
Isn't this just a sneakier version of Pascal's Mugging ? "We know that *some* existential risks are demonstrably possible and measurable, so therefore you must spend your money on *my* pet existential risk or risk CERTAIN DOOM !"
And that's where the argument about utilitarianism comes in. Does selecting a metric like "number of lives saved" even make sense? I'm pro-lives getting saved but I'm not sure removing all localism, all personal preference, etc. from charitable giving and defining it all on one narrow axis even works. For instance, I suspect most people who donate to the animal shelter would not donate to the malaria efforts.
Of course the movement itself has regularly acknowledged this, making it clear that part of the mission is practicality. If all you can get out of a potential donor is a donation to a local animal shelter, you should do that. Which further blurs the line between EA as a concept and just general charitable spirit.
At the base of all this there's a very real values difference - people who are sympathetic towards EA are utilitarians and believe morality is consequence-based. Many, perhaps most people, do not believe this. And it's very difficult for utilitarians to speak to non-utilitarians and vice versa. So utilitarians attempt to do charity in the "best" way which is utilitarianism, and non-utilitarians attempt to do charity in the "best" way which is some kind of rule-based thing or something, and I think both should continue doing charity. But utilitarian givers existed before EA and will continue to exist after them. What might stop existing is people who think that if they calculate the value of keeping an appointment to be less than the value of doing whatever else they were gonna do they can flake on the appointment.
It's a particular system of values whereby human lives are all of equivalent value and the only thing you should care about.
I might tell you that I'm more interested in saving the lives of dogs in my own town than the lives of humans in Africa, and that's fine. Maybe you tell me that I should care about the Africans more because they're my own species, but I'll tell you that I care about the dogs more because they're in my own town. Geographical chauvinism isn't necessarily any worse than species chauvinism.
Now I don't think I really care more about local dogs than foreign humans, but I do care more about people like me than people unlike me. This seems reasonable given that people like me are more likely to care about me than people unlike me are. Ingroup bias isn't a great thing but we all have it, so it would be foolish (and bad news for people like me) for me to have it substantially less than everyone else does.
...Well, god damn. At least you're honest about it. Most people wouldn't be caught dead saying what you just said, even if they believed it. And I'm sure most people do in fact have the same mentality that you do.
"people getting obsessive about an AI risk the public doesn't accept as real" Do you have any evidence to support this? All the recent polling I've seen has shown more than >50% Americans are worried about AI
I think you'll get a majority answering yes if you poll people asking "should mitigating extinction risk from X be a global priority?", regardless of what X is.
I think it's very likely that fewer than 5% of people give a set, significant portion of their income to charity, and I want to say upfront that I like that the EA movement exists because it encourages this. But I don't think "give a set, significant portion of your income to charity" is a new idea. In fact, the church I grew up in taught to tithe 10% of income - charitable donations that went to an organization that we probably don't consider effective but that, obviously, the church membership did.
I would be shocked to learn that people who give an actual set amount of their income to charity (instead of just occasionally dropping pocket change in the Salvation Army bucket) do so without putting considerable thought into which charities to support.* It's very likely that many people don't think in a utilitarian way when doing this analysis but that's because they're not utilitarians.
I definitely think any social movement that applies pressure to give to charity, especially in a fixed way, as EA does, is a net good. I'll admit that I've always aspired to give 10% of my earnings to charity (reasoning that if my parents can give to the church I can give that amount to a useful cause) and have never come close. But I don't believe that people who do actually give significant amounts of their money to charity just pick one out of a phone book. Everyone does things for reasons, and people spend huge amounts of money carefully and in accordance with their values. By the metrics given in this comment essentially everyone who gives to charity would be an effective altruist, including people giving to their local church because God told them to. Saying "well if you set aside the part of our culture that actually includes the details of what we advocate, there's nothing to object to" is...at the best, misleading.
*Your example of college endowments is such a punching bag that it's a punching bag well outside the movement. Everyone from Malcolm Gladwell to John Mulaney has taken their shot. The people who actually give to college endowments don't do so for charitable reasons - they expect to get value out of their donations
> Nobody's going "let's make an ineffective charity."
most people aren't thinking about efficacy at all when starting charities, or especially when donating to charities. they're acting emotionally in response to something that has touched their hearts. they never think about the question "is this the best way to improve the world with my resources?"
the thing that EA provides is eternal diligence on reminding you that if you care about what happens, you need to stop for a moment and actually think about what you're accomplishing instead of just donating to the charity that is best at tugging on your heartstrings (or the one that happens to have a gladhander in front of you asking for your money).
While I... hesitantly agree I also think that emotional response is a valuable motivating tool and I wouldn't throw it out. Just generally I'm imagining a world where every person who gives money to charity gives to combat disease in the third world and while it might technically save more lives I don't think it would make the world a better place.
If everyone who isn’t donating anything to charity even though they can afford to started donating something to charity, would we agree *that* would make the world a better place?
What if everyone who makes multi-thousand donations to already-rich colleges started redirecting that aid towards people who actually need the money? Would we agree *that* would make the world a better place?
These are the things EA wants us to try: donate more, and donate more effectively. No one is trying to end donations to libraries or alma maters just like no one’s trying to end spending money on spa treatments and trips to Cabo. But is there something wrong with trying to convince people to spend *more* money than they do now on saving lives than on trips to Cabo or new college gyms?
Absolutely. The specific strawman of college donations comes up a lot in these discussions - broader culture has been taking the piss out of college donations for decades, and it's become clear in recent years that a college donation is a transaction to ensure legacy admissions for family. It's not a charitable donation at all. I don't believe that money would ever go to the third world, but maybe I'm wrong.
But for sure if EA is effectively convincing people who otherwise wouldn't to give more of their money to charities, that's an unmitigated good. And this is where EA lives when it's being defended to the general public.
In practice it seems to mostly be people saying "Why would you care about art or puppy mills, we're SAVING LIVES!" I'm 100% on board with the lives saving, I'm less on board with not caring about art or puppy mills. I'm not a super religious person but maybe the Bible's advice on charity isn't as bad as it sounds - giving to causes you believe in, and then shutting up about it seems less likely to draw the ire of the public and backfire on your cause than proclaiming loudly that your way of charity is the only correct one.
It would make the third world a better place, and then fewer first-world kids would be marching off to die in pointless foreign wars, because said wars wouldn't be happening, because the countries they'd be happening in are busy building up stable institutions instead of dying of malaria. Also, probably someone will figure out ways to produce and export more, better, cheaper chocolate, among other economic benefits. Those lives saved won't sit idle.
" I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity." "
They may not say it, but it's what they do! Or else we wouldn't see such a huge range of effectiveness in charities
But isn’t it like saying, Freddie, you’re so high on socialism, but in fact all governments are trying to distribute goods more fairly among their people? Freddie would probably respond a) no, they actually aren’t all trying; b) the details and execution matter, not merely the good intentions; c) by trying to convince people to support socialism I’m not trying to convince them to support a totally new idea, but to do a good thing they aren’t currently doing. I think all three points work as defenses of AI just as well.
"Let's pretend to help, while actually stealing" is a particular case of "let's make an ineffective charity". My sense is that most politically-active US citizens would consider a significant percentage of the other side's institutions to be "let's make an ineffective charity" schemes. If not also their own side's.
In fact, I think I would say that both sides see the other, in some fundamental sense, as an ineffective charity. Both sides sell themselves as benevolent and supportive of human thriving; the other side naturally sees them as (at least) failing to deliver human thriving and (at most) malicious.
So it strikes me that EA, by offering a third outlet for sincere benevolent impulses, is opposed to the entire (insincere, bad faith, hypocritical) system of US politics. Which might explain why Freddie, who is sincere, yet also politically active, has a difficult time with it.
>Nobody's going "let's make an ineffective charity."
I think a lot of small personal foundations started by B-list celebrities are in fact designed to provide cushy jobs for the founder’s friends and family (pro athletes do this all the time).
I'm getting a lot of variations of this comment and feel the need to point out that "a transaction or grift disguised as a charity" isn't a competitor for serious charitable givers. Like college endowments are just buying favors for family members, nobody's going "what's the best use of my limited charitable funds? Harvard seems to need the money!" I might be way off base with this but my starting assumption is that people who are candidates to join an effective altruist movement are people who actually care about altruism, not people who are setting up cushy jobs for their deadbeat nephews. Such a person doesn't need Effective Altruism (The movement) to want to be effectively altruistic.
Well, for what it's worth, I really appreciated this post. It says a lot of what I was thinking while/after reading Freddie's.
It felt like a "just so" argument while being a "just so" argument itself. It said mostly/only true things while missing... all you pointed out in your post. EA is an idea (a vague one, to be sure) which has had bad effects on the world. But it's also an idea which has helped pour money into many good causes. And stepping back, to think about which ideas are good, which are bad: it's a *supreme* idea. It's helpful, it's been helpful, and I think it will continue to be.
FWIW I found it easy to understand, if rather repetitive. I think the salient part is this one:
> The problem then is that EA is always sold as a very pure and fundamentally straightforward project but collapses into obscurity and creepy tangents when substance is demanded. ... Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.
I think his post fails for a similar reason as his AI-skeptic posts fail: he defines the goalpost where no one else is defining it. AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks. Similarly, I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.” Does Freddie think that levels and habits of charitable giving are in a perfect place right now, even in a halfway decent place? If not, then why does he object to a movement that tries to change that?
> AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks.
I am confused -- is it not the whole point of the AI doomer argument, that superhuman AI is going to achieve something (most likely something terrible) that is beyound the reach of mere humans ?
Destroying humanity certainly is not beyond the reach of humans! The problems with AI are that they scale up extremely well, they grow exponentially more powerful, and their processes are inscrutable. That means that their capability of destroying humanity will grow very quickly and our ability to be sure that they aren’t going to kill us will necessarily be limited.
All of the "problems with AI" that you have listed are considered to be especially problematic by AI-doomers precisely because they are "beyound the reach of humans". As per the comment above, this is not a straw man, this is the actual belief -- otherwise, they wouldn't be worried about AI doom, they'd be worried about human doom (which I personally kinda am, FWIW).
"AI will almost certainly be able to do this thing in a matter of years or decades, given the staggering rate of progress we've seen in just 1 year" =/= "AI can currently do this thing, right now"
> I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.”
I read his post as saying that EA is big on noticing how other people fail to think systematically; but not very big on actual follow-through.
But Scott’s post here is an argument that he is wrong about the follow-through, and in fact I think Freddie gave no actual argument that EA is bad at the follow-through.
I think you've evidenced your claims better, but its possible some of what he implicitly claims is still true (though he doesnt bother to try and prove it)
One might ask, if EA didnt exist in its particular branding form, how much money would have gone to charities similar to AMF because the original donors were already bought into the banal goals of EA and would have given money to something like this because they didn't need the EA construct to get there.
To me the fact that give well is such a large portion of AMFs funding is telling. If there were a big pool of ppl that would have gotten there anyways, give well wouldnt be scooping them all up. But it would also be appropriate to ask what percentage of all high impact health funding is guided be EA. If low, its more likely EA label is getting slapped on existing flows.
I just read both posts and “weird and bad” is a ridiculously weak response to Freddie’s arguments. Might be worth actually engaging with them, rather than implying he’s just not as smart as you guys and couldn’t possibly understand.
> It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.
Technically true, so why didn't you do it before EA was a thing?
(Also, this is a fully general counterargument. By the same logic, we don't need anything or anyone, because someone or something else could *hypothetically* do the same thing.)
> This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor, why researching EA leads you to debates about how sentient termites are.
Yep, people who are feeding the poor *and* preparing for interstellar colonization are the bad guys, compared to... uhm... well, someone hypothetical, I guess.
Go ahead, kick out the doomers and vegans, and make EA even 1.3x more effective than it is now. It would be totally in the spirit of the EA movement! (Assuming that the AI will not kill us, and that animals are ethically worthless, of course.) Or, you know, start your own Effective Currently Existing Human Charity movement; the fact that EA is so discredited now is a huge opportunity, and having more people feeding the poor is even better. When exactly are you planning to do that? ... Yeah, I thought so.
> In the past, I’ve pointed to the EA argument, which I assure you sincerely exists, that we should push all carnivorous species in the wild into extinction, in order to reduce the negative utility caused by the death of prey animals. (This would seem to require a belief that prey animals dying of disease and starvation is superior to dying from predation, but ah well.)
I followed the link, and found that one of its arguments is that "the resources currently used to promote the conservation of predators (which are sometimes significant) could be allocated elsewhere, potentially having a better impact, while allowing the predators to disappear naturally". You know, all the money spent to preserve tigers and lions could be used to feed the poor, just saying.
(Also, how is the prey dying of disease and starvation morally worse than the predators dying of disease and starvation?)
> You start out with a bunch of guys who say that we should defund public libraries in order to buy mosquito nets
Feeding the poor good, protecting the poor from malaria bad? (Is that because hunger is real, but malaria is hypothetical, or...?)
> Is there anything to salvage from effective altruism? [...] we’ll simply be making a number of fairly mundane policy recommendations, all of which are also recommended by people who have nothing to do with effective altruism. There’s nothing particular revolutionary about it, and thus nothing particularly attention-grabbing.
Mundane, nothing revolutionary... please remind me, who exactly was comparing charities by efficiency before GiveWell? As I remember it, most people were horrified by the idea of tainting the pure idea of philanthropy by the dirty idea of using cold calculations. A decade later it's suddenly common sense?
> EA has produced a number of celebrities, at least celebrities in that world, to the point where it seems fair to say that a lot of people join the community out of the desire to become one of those celebrities. But what’s necessary to become one is almost entirely contrary to what it takes to actually do the boring work of creating good in the world.
Oh, f*** you!
Where should I even start? By definition, celebrities are *the people you have heard of*. Thus, tautologically, the effective altruists you have heard of are local celebrities. How is this different from... uhm, let's use something dear to Freddie... Marxists? (But the same argument would work for *any* other group, that's my point.) Is not Freddie himself a small celebrity?
So the ultimate argument against any movement is that I have only heard about its more famous members, which proves that they are all doing it for fame... as opposed to doing the boring work (of sending one's 10% of income to an anti-malaria charity, for example). Nice.
> EA guru Will MacAskill spending $10 million on promotion for his book. (That could buy an awful lot of mosquito nets.)
How big is the optimal amount a truly effective altruist should spend on marketing of the movement? Zero? Thousand dollars worldwide max? Ten thousand?
It is a crazy amount of money, if the goal is simply to sell a book (i.e. like spending ten million dollars to promote your Harry Potter fan fiction). It is not a crazy amount of money if your movement has already moved one *billion* dollars to effective charities, and this is a book explaining what the movement is about, with a realistic chance that if many copies sell, the movement would grow. (Also, you get some of that money back from the book sales.)
.
Your turn, Freddie, what good had your favorite movement brought to this world so far? (Please abstain from mentioning the things that are only supposed to happen in the future, because we have already established that reasonable people do not care about that.)
I like Freddie, but I'm not sure I even understand his argument, much less agree with it.
1) Freddie argues that "literally everyone who sincerely tries to act charitably" attempts, along with EA, to systematically analyze whether their donations are doing the most possible good.
I think that almost no one who donates to the ballet or their church or the local kids softball league or a political campaign has spent substantial time considering whether their money or time could do more good elsewhere, or if they think about it, they don't let those thoughts change their giving patterns. (And I say this as someone who donates primarily to my church and also to the arts!)
Maybe Freddie means that most people who donate money and time are not "sincerely tr[ying] to act charitably," but if so, his point fails. I think most people are like me: they think, occasionally, that maybe their money or time could be doing more good somewhere else, and then they think "but it's too hard to really know" or "it's more important that I do good somewhere than that I maximize good."
2) I do think Freddie's right that if you're not a utilitarian or otherwise don't share moral values with the EA folks, then their calculations aren't of much use for you. If you don't care whether your contributions to the ballet could save ten lives in Africa (or prevent human extinction), that's fine, and there's no particular reason you need to be a utilitarian or similar.
I just wish people would properly distinguish between Effective Altruism and AI Safety. Many EAs are also interested in AI safety. Many safety proponents are also effective altruists. But there is nothing that says to be interested in AI safety you must also donate malaria nets or convert to veganism. Nor must EAs accept doomer narratives around AI or start talking about monosemanticity.
Even this article is guilty of it, just assigning the drama around Open AI to EA when it seems much more accurate to call it a safety situation (assuming that current narratives are correct, of course). As you say, EA has done so much to save lives and help global development, so it seems strange to act as though AI is still some a huge part of what EA is about.
There's nothing wrong with one thing just being more general than another. If I wanted to list achievements of science nobody would complain that I was not distinguishing between theoretical physics and biology, even though those communities are much more divided than EA longtermism and AI safety.
I don't identify as an EA, but all of my charitable donations go to global health through GiveWell. As an AI researcher, it feels like the AI doomers are taking advantage of the motte created by global health and animal welfare, in order to throw a party in the bailey.
I don't think animal welfare is part of the motte. Most people at least passively support global health efforts, but most people still eat meat and complain about policies that increase its price.
Good point, the number of people worried about artificial intelligence may exceed the number of vegans. (Just guessing; I actually have no idea, it just doesn't seem implausible.)
Of course nothing in Scott’s list is physically impossible. On the other hand, it is practically the case that money would not have been spent on saving lives from malaria unless people decided to spend that money. And the movement that decided to spend the money is called EA. It’s possible another movement would have come along to spend the money and called itself something else, but that seems like an aesthetic difference that doesn’t take away from EA’s impact.
Isn’t that like saying humans, particularly powerful, wealthy tech entrepreneurs, are incapable of acting in ways that benefit others and so could not possibly have achieved any of these without a belief system such as EA?
Wait, I think you need to examine this. Did pre-EA wealthy people only donate to prestigious universities? Did EA invent the idea of directing charitable dollars to save lives?
Not "only", but it was a popular choice. More importantly, before EA it was a social taboo to criticize other people's choice of charity. You were supposed to be like: "curing malaria is nice, giving yet another million to a university that already owns billions is also nice, these two are not really comparable". The most successful charities competed on *prestige*, not efficiency.
The first attempts to measure efficiency focused on the wrong thing: the administrative overhead. It was not *completely* wrong -- if your overhead is like 99% of the donated money, then you are de facto a scam, not a charity; you take money from people, give almost all of that to yourself as a salary, and then throw a few peanuts to some starving kids. But it is perfectly legal, and many charities are like that.
The problem is if you take this too literally -- if the overhead is the *only* thing you measure. If your overhead is 10%, but you make 2x the impact per dollar spent as another charity whose overhead is only 5%, then it was money well spent. In theory, your overhead could be 1% and you could be doing some incredibly stupid thing with the remaining 99%, so your impact could be zero or negative. And this was the state of art of evaluating charities before EA.
It is easy to forget that, because if you read ACX regularly, effective altruism may sound like common sense. Which it kinda is. But it is quite controversial to people who hear it for the first time. Charity is supposed to be about warm feelings, not cold calculations; it is the good intention that matters, not the actual impact.
It's highly likely that effective altruists who donate money are the kind of money who would have been donating money without effective altruism, and that EA the ideology only influenced where their money went.
In other words, I think what determines whether donations happen is whether people who want to donate exist. There will always be some ideology to tell those people what to do.
Well, here's an n-1 which is also an n=I: I can say that I was influenced to change my sporadic, knee-jerk donations (directed towards whatever moved me at the spur of the moment) into a monthly donation to GiveWell. I'm not at 10% of my income, but I am trying to get there. What's more, I was influenced by a writer I follow who isn't a rationalist and until the last few years had little to say about charitable giving. So I think it's reasonable to think he was influenced by EA, and if he was other influential people probably were, and if I was influenced by one of them others probably were as well. So make of that whatever you want.
Anyway, it wouldn't surprise me if your broader point about EA having the greatest effect on how people donate is correct, but from the perspective of saving lives that makes a pretty big difference, would you agree?
They won't have been impossible, but I'm just thinking value over replacement.
The kidney donation is the most straightforward - could an organisation solely dedicated to convincing people to donate kidneys have gotten as many kidneys as EA? My gut feel is no. Begging for kidneys feels like it would be very poorly received (indeed, the general reception to Scott's post seems to show that). But if donating a kidney is an obvious conclusion of a whole philosophy that you subscribe to.... That's probably a plausible upgrade.
Malaria nets - probably could have been funded eventually, but in the same way every charity gets funded - someone figures out some PR spin way to make the news and attract tons of one time donations, like with the ice bucket challenge or Movember. This might have increased the $ per lives metric as they'd have to advertise to compete with all the other charities. I think the value over replacement isn't quite as high as the kidney donors but it's probably not zero.
I suppose there is a small risk that EA is overfocused on malaria nets and don't notice when they've supplied all the nets the world can use and additional nets would just be a waste or something. At this point, EA is supposed to go after the next intervention.
I do like to think of this as the snowball method for improving the world (it's normally applied to debt). Fix problems from cheap and tractable, in hopes that the problems you fixed will help make the next cheapest and next most tractable problem easier.
(In the animal welfare world, I personally think that foie gras is a pretty tractable problem at this point. India banned import and production. Target making it illegal in the Sinosphere and Muslim majority countries - surely it's not halal ever and it's not super similar to local tastes in the east - and keep cutting off its markets one by one until that horrible industry is gone, or until France stops mandating gravage)
Publicizing any hint of a contamination or spoilage scandal might be worthwhile tactics for reducing demand and raising political will against foie gras suppliers. Forbidden stuff seen as "decadent luxury goods" often turns into a black market, "rancid poison" not so much.
What does the counterfactual world without EA actually look like? I think some of the anti-EA arguments are that the counterfactual world would look more like this one than you might expect, but with less money and moral energy being siphoned away towards ends that may prove problematic in the long term.
Well, maybe the focus on the stuff you don’t like would have happened too! Why does the counterfactual only run one way?
I guess I don’t know how to respond to “maybe this thing an agent did would have happened anyway.” Maybe the civil rights movement would have happened even if literally all of the civil rights movement leaders did something else with their time, but that just seems like an acknowledgment it’s good that they did what they did because someone had to. At any rate, “at least some of it” is pretty important to those not included in that “some.”
Here's some other charitable groups (not to mention lots of churches) who also give money for malaria nets:
Global Fund to Fight AIDS, Tuberculosis and Malaria
Against Malaria Foundation
Nothing But Nets (United Nations Foundation)
Malaria Consortium
World Health Organization (WHO)
I don't believe there are a comparable number of charities giving money for AI Safety, so the way to bet is that money sloshing around elsewhere would more likely end up fighting malaria than AI X risk. But maybe EA caused more money to slosh around in the first place. Or maybe EA did direct more money to fight malaria because the 2nd choice of EA doners would not have been to a charity focused on it.
As I understand the sequence of events, some people calling themselves EA started encouraging people to slosh more money towards bed nets, and people started doing it, and saying that they were persuaded by the arguments of EA people (I am one). Now, maybe the people who donated more are mistaken about their motivations and would have donated more anyway, but I don’t see a reason to think that counterfactual is true. So I think your last two sentences are most likely correct.
Scott’s claiming that none of these changes would have happened but for EA. Like, that’s a huge claim! It’s fair to ask how much responsibility EA actually has. For good or for ill, sure (I have no doubt that there would be crypto scammers with or without effective altruism).
Do you mean that this is a big claim for someone to make about any group or EA in particular? If the latter why? If the former, isn't this just universally rejecting the idea that any actions have counterfactual impact?
2b) I don’t think so. Rather, as a good rationalist, someone making a big claim should take care to show that those benefits were as great as claimed. Instead, here Scott is very much acting as cheerleader and propagandist in a very soldier-mindsetty way. I don’t think that Scott would accept his methodology for claiming causation of all these benefits were they not for a cause he favors.
GiveWell does attempt to estimate substitution effects, and to direct money where they don't expect other sources of funding to substitute. Are you not aware of this analysis, or do you find it unconvincing?
I was unaware of it, and I am happy to be made aware of it! (Note: I think you are referring to their room for more funding analysis, right?)
Now that I am aware of it, I think I am misunderstanding it significantly, because it seems not very sophisticated. Looking at their Room for More Funding Analysis spreadsheet for the AMF from November 2021, it appears to me that they calculated the available funding by looking how much money the AMF had in the bank which was uncommitted (cell B26 on the page 'Available and Expected Funding') and subtracting that from the total amount of funding the AMF had dedicated or thought would be necessary (cells D6 through D13 on the 'Spending Opportunities' page.)
I understand this to mean that they are not taking into account substitution effects from donations from other organizations. In fact, they calculate the organizations expected revenue over the next three years, but they do not use that anywhere else in the spreadsheet that I am aware of. This is a little disappointing, because I expect that information would be relevant. I could be wrong, and hopefully am, so I would appreciate being corrected. Likewise if this page is outdated I am open to reconsidering my position.
So personally, I do find it unconvincing, but I really want to be convinced, since I have been donating to them in part based on branding. I think GiveWell is an organization with sufficient technical capability they could do these estimates in a really compelling way. I mentioned one approach for dealing with this in my comment below, and I'm kind of disappointed they haven't done that.
Room for more funding is not the substitution effect analysis, it's an analysis of how "shovel ready" a given charity is, and how much more money you can dump into it before the money is not doing the effective thing on the margin anymore.
I believe the place where they analyze substitution effects would be mostly on their blog posts about grant making.
And this is much more focused on small donors, which I am less worried about. It also has no formal analysis, which is a little disappointing. I'll keep looking and post when I find something, but if you know of another place or spreadsheet where they do this analysis, I'd be most grateful if you linked to it!
I was about to say this same thing! While I am broadly supportive of EA, it's unclear the extent to which other organizations (like the Gates Foundation) would redirect their donations to the AMF. There is a real cost to losing EA here, but it is not obvious that EA has saved 200,000 lives.
Something which would start to persuade me otherwise is some kind of event study/staggered difference-in-difference looking at different organizations which GiveWell funded or considered funding and did not, and seeing how much these organizations experienced funding increases afterwards.
I think the Gates Foundation is a bad example because they're probably doing just as much good as EA if not more (they're really competent!), so whatever their marginal dollar goes to is probably just as good as ours, and directing away their marginal dollars would cost lives somewhere else.
I think most other charities aren't effective enough for this to be a concern.
No one who follows EA even a little bit thinks it has all gone wrong, accomplished nothing, or installed incompetent doomerism into the world. And certainly the readers of Astral Star Codex know enough about EA to distinguish between intelligent and unintelligent critique.
What I'd like to hear you respond to is something like Ezra Klein's recent post on Threads. For EA, he's as sympathetic a mainstream voice as it comes. And yet he says, "This is just an annus horribilis for effective altruism. EA ended up with two big swings here. One of the richest people in the world. Control of the board of the most important AI company in the world. Both ended in catastrophe. EA prides itself on consequentialist thinking but when its adherents wield real world power it's ending in disaster. The movement really needs to wonder why."
Your take on this is, no biggie? The screwups are minor, and are to be expected whenever a movement becomes larger?
I mean, there is no perfect plan that could protect you from these things. Who exactly could have figured out that SBF was a fraud? And corporate warfare like that is inherently chaotic, like real war. Ok, granted, that second one does seem like more of a fuckup, like they didn't realize the risk they were taking on.
But I do believe that anyone attempting something hard is gonna scrape their knees along the way. Fuck around, find out, is inescapable for the ambitious. So yeah, I don't care about these 2 screw ups. I think the movement has learned from both of them.
Personally, I thimk anyone willing to dismiss all crypto as a pyramid scheme could have worked out SBF was a fraud, for me the only question was whether or not he knew he was actually a grifter.
But that's based more on me having boring low-risk opinions on finance than any great insight into the financial system.
That's a low-specificity prediction, though, and thus unimpressive. SBF was committing fraud in ways that were not inherent in running a cryptocurrency exchange, and that was the surprising thing. I don't think anyone predicted that, but I didn't pay close attention.
I can catalogue the successes of EA alongside you. I disagree that the screwups are minor. And I especially disagree that the screwups provide no good reason for reflection more generally on EA as a movement.
EA suffers from a narrow brilliance offset by a culpable ignorance about power and people. Or, only modestly more charitable, a culpable indifference to power and people. SBF's "fuck regulators" and the OpenAI board's seeming failure to hire crisis communications reflect this ignorance about power and people.
Is it your position that the feedback the world is providing now about what happens when EAs actually acquire a lot of power is something safely and appropriately ignored? Especially when that feedback comes from smart and otherwise sympathetic folks like Ezra Klein? Instead point to 200,000 lives saved and tell people to get on the EA train.
Gideon Lewis-Kraus wrote about you: "First, he has been instrumental in the evolution of the community’s self-image, helping to shape its members’ understanding of themselves not as merely a collection of individuals with shared interests and beliefs but as a mature subculture, one with its own jargon, inside jokes, and pantheon of heroes. Second, he more than anyone has defined and attempted to enforce the social norms of the subculture, insisting that they distinguish themselves not only on the basis of data-driven argument and logical clarity but through an almost fastidious commitment to civil discourse."
You possess a lot of power, Scott. Do you think there is nothing to be learned from the EA blowups this past year?
I'm going to write a piece on the OpenAI board situation - I think most people are misunderstanding it. I think it's weird that everyone has concluded "EAs are incompetent and know nothing about power" and not, for example "Satya Nadella, who invested $10 billion in OpenAI without checking whether the board agreed with his vision, is incompetent and knows nothing about power" or "tech billionaire Adam D'Angelo is incompetent and knows nothing about power" or even "Sam Altman, who managed to get fired by his own board, then agreed to a compromise in which he and his allies are kicked off the board, but his opponent Adam D'Angelo stays on, is incompetent and knows nothing about power". It's just too tempting for people to make it into a moral about how whatever they already believed about EAs is true. Nobody's gunning for those other guys the same way, so they get a pass.
I'm mostly against trying to learn things immediately in response to crises (I'm okay with learning things at other times, and learning things in a very delayed manner after the pressure of the crisis is over). Imagine the sorts of things we might have learned from FTX:
- It was insane that FTX didn't have a board, you need strong corporate boards to keep CEOs in check.
- Even though people didn't explicitly know Sam was a scammer, they should have noticed a pattern of sketchiness and dishonesty and reacted to it immediately, not waited for positive proof.
- If everything is exploding and the world hates you, for God's sake don't try to tweet through it, don't go to the press, don't explain why you were in the right all along, just stay quiet and save it for the trial.
Of course, those would have been the exact wrong lessons for the most recent crisis (and maybe overlearning them *caused* the recent crisis) because you can't actually learn things by overupdating on single large low-probability events and obsessing over the exact things that would have stopped those events in particular.
I stick to what I said in the post:
" My first, second, and so on to hundredth priorities are protecting this tiny cluster and helping it grow. After that I will grudgingly admit that it sometimes screws up - screws up in a way that is nowhere near as bad as it’s good to end gun violence and cure AIDS and so - and try to figure out ways to screw up less. But not if it has any risk of killing the goose that lays the golden eggs, or interferes with priorities 1 - 100."
With respect, I disagree. The Open AI board initiated the conflict, so it is fair to blame them for misjudging the situation when they failed to win. In exactly the same way, when Malcolm Turnbull called a party vote on his own leadership in 2018 and lost his position as Prime Minister as a result, it is fair to say that it was Turnbull's judgement that failed catastrophically and not Peter Dutton's.
Secondly, I think events absolutely vindicated Nadella and Altman's understanding of power. I think Nadella understood that as the guy writing the checks, he had a lot of influence over Open AI and could pull them into line if they did something he didn't like. They did something he didn't like, and he pulled them into line. Likewise, I think Altman understood that the loyalty the Open AI staff have towards him made him basically untouchable, and he was right. They touched him, and the staff revolted.
If someone challenges you and they lose, that is not a failure to understand power on your part. That is a success.
I don't think Altman losing his place on the board means anything much. It's clearly been demonstrated that his faction has the loyalty of the staff and the investors and can go and recreate Open AI as a division of Microsoft if push comes to shove. They have all the leverage.
If you make the judgement that you can win an overt conflict but will lose a more subtle one, it can make sense to initiate an overt conflict - but it's still incumbent on you to win it.
If you're not going to win the overt conflict, you're better off dragging things out and trying/hoping to change the underlying dynamic in a way that is favourable to you. If the choice is lose fast or lose slow, lose slow. It allows the opportunity for events to overtake the situation.
But having said that, I'm not at all sure that was the choice before them. Even if it's true that Altman were trying to force Toner out, it's unclear whether or not he would have been able to. Maybe he could have, certainly he's demonstrated that he has a lot of power. But ousting a board member isn't the easiest thing in the world, and it doesn't seem like - initially at least - there were 4 anti-Toner votes on the board. Just because executives wanted to "uplevel their independence" doesn't mean they necessarily get their way.
My instinct is that the decision to sack Altman was indeed prompted by his criticism of Toner and the implication that he might try to finesse her out - people feeling their position threatened is the kind of thing that often prompts dramatic actions. But I don't think the situation was so bad for Toner that failure to escalate would have been fatal. I think she either misjudged the cold war to be worse for her than it would have been or the hot war to be better for her than it actually was, or (quite likely) both.
And I think Toner's decision to criticize Open AI in her public writings - and then to make the (probably true!) excuse that she didn't think people would care - really strengthens the naivety hypothesis. That's the kind of thing that is obviously going to undermine your internal position.
A thing that's curiously absent from that and from the Zvi's take and all other takes I've seen was: what the heck did the board expect to happen?
Any post attempting to provide an explanation must answer this question for two reasons: first, obviously you can't be sure that it's correct if it contains a planet-sized hole where the motivations of one of the parties are supposed to be, second, I'm speaking for myself, but I'm much more concerned about the EA-adjacent people utterly failing to pull off a backstab than about them being backstabby. Being *competent* is their whole schtick.
>”it’s good to end gun violence and cure AIDS and so”
EA didn’t end gun violence and AIDS though. You can’t compare saving nameless faceless anonymous people on the other side of the world that nobody who matters cares about personally to ending gun violence. Ending gun violence would improve the lives of every single person who lives in an American metropolitan area overnight by making huge swaths of their city traversable again.
I’m curious how far you’re willing to abstract away the numbers. Does saving people in other Everett branches count? If the 200,000 saved lives were in a parallel universe we could never interact with, but FTX happened in our universe, would you still think the screwups were minor compared to the successes?
I think the important thing here in your specific post on "well, Ezra Klein says [this]" is that what people say about X, and how much they say about X, how much they don't say about X, and how much they say about Y or Z, are all political choices that people make. There is no objective metric for the words "big" "horribilis" "catastrophe" "real world power" and "disaster" in his statement, or for the implied of impact. This is a journalist's entire job.
I am 100% not in the EA movement, but one thing I like about it is the ostensible focus on real world impacts, not subjective interpretations of them. I am not trying to advocate pro- or con-, just that if you/we take a step back, are we all talking about reality, or people's opinions,
especially people's opinions that are subject to their desire for power/influence/profit/desire-to-advance/denigrate-specific ideologies? If we thought about this dispassionately, is there any group of even vaguely ideologically associated people that we could not create a similar PR dynamic about?
We are essentially discussing journalism priorities here. What is the objective set of pre-event standards for "is this important?" and "does this indict the stated ideology of the person involved?" are being applied to SBF or OpenAI? Are they similarly being applied to other situations? I'm not criticizing what you're saying, just that I think we perhaps need to focus on real impacts rather than "what people are saying."
I respect what Scott et. al, have done with the EA movement and I think it's laudable. However, like many historical examples of ideological/intellectual/political movements, there's a certain tendency to 'circle the wagons' and assume those who are attracted to some (but not all) of the movement's ideas are either 'just learning' (i.e. they're still uploading the full program and will eventually come around, ignore them until then) or are disingenuous wolves in sheep's clothing.
Yet in any mature movement, you have different factions with their own priorities. New England Republicans seem to care more about fiscal security and trade policy, while Southern Republicans care about social issues - with massive overlap from one individual to another.
I'm not saying Scott explicitly rejects pluralism in EA. He ended this essay with an open invitation to follow whatever selection of EA principles you like. I'm just observing that many people feel they have to upload the whole program (from animal cruelty to AI safety and beyond) in order to identify as even 1% "in the EA movement".
Speaking from experience, I feel it took time for me to be able to identify with EA for exactly this reason: I didn't agree with the whole program. I agree with Scott that there's broad potential appeal in EA. But I think much of that appeal will not be realized until people feel comfortable associating themselves with the core of the movement without feeling like they're endorsing everything else. And for a program in its infancy, still figuring things out from week to week, it's probably best for people to feel they can freely participate and disagree in good faith while still being part of the community.
For myself, I was more delineating the "movement" part of "public people who have been associated with 'EA' - as a person, I don't feel like I'm a part of it, though lots of the ideas are attractive. I prefer the "smorgasbord/buffet" style choices of ideologies. :) And the "pluralism" you/Scott mention is absolutely my style! But to some extent, I think the core principle of EA is just (and I mean this as a compliment) banal - yes, obviously you should do things that are good, and use logic/data to determine which things those are. The interesting part of the "movement" is actually following through on that principle. Whether that means bednets or AI safety or shrimp welfare, that all dependent on your value weights.
I agree with nearly all of that. I would just add one suggestion and invitation: EA is new. It's aware that it needs intellectual input and good-faith adversarial challenges to make it better. This especially includes people like you, who agree with many core ideas, but would challenge others. The movement doesn't require a kidney donation for 'membership', nor does it require exclusivity from other organizations. You don't have to be atheist or rationalist, just interested in altruism and in making your efforts more effective.
Seems like a movement you could contribute to, even if only in small, informal ways?
I think that's all true, with emphasis on the "there is no membership" part. My original point in my comment was that all of this conversation, and the Ezra Klein journo-style statements especially, are trying to debate "the group of people defined as EA: GOOD OR BAD?" for monkey-politics reasons, like we would about a scandal involving a D or R senator. I think I would prefer for it to be more "here is a philosophy that SBF can pick things from, but turn out to be a jerk, but also one where I (or any other person) can pick things from, but still have our jerk-itude depend on our own actions rather than SBF or anyone else who came to the philosophy buffet."
1. SBF fooled a lot of people, including major investors, not just EA. I agree that some EA leaders were pretty gullible (because my priors are crypto = scam), but even my cynicism thought SBF was merely taking advantage of the dumb through arbitrage, not running an outright fraud (see also: Mark Levine).
2. It’s way too early to tell if the OpenAI thing is in fact a debacle. Certainly it was embarrassing how the board failed to communicate, but the end result may be better than before. It’s also not as if “EA” did the thing, instead of a few EA-aligned board members.
Also I think your first bit there is a little too charitable to many critics of EA who read Scott.
I think it's very selective and arbitrary to consider these EA's "two big swings." I've been in EA for 5+ years and I had no idea what the OpenAI board was up to, or even who was on it or what they believed, until last weekend. I'd reckon 90% of people involved with or identifying as EA had no idea either. Besides, even if it was a big swing within the AI safety space, much of the movement and most of the donations it inspires are actually focused on animal welfare or global health and development issues that seem to be chugging along well. The media's tabloid fixation on billionaires and big tech does not define our ideology or movement.
A fairer critique is that the portion of EA invested in reducing existential risk by changing either a) U.S. federal policy or b) the behavior of large corporations seems to have little idea what it's doing or how to succeed. I would argue that this is partly because they have not yet transitioned, nor even recognized the need to transition, from a primarily philosophical and philanthropic movement to a primarily political one, which would in turn require giving much more concern and attention to reputational aesthetics, mainstream opinion, institutional incentives, and relationship building. Political skills are not necessarily abundant in what has until recently been philosophy club for privileged, altruistic but asocial math and science nerds. Coupled with a sense of urgency related to worry over rapid AI timelines, this failure to think politically has produced multiple counterproductive, high-profile blunders that seem to outsiders like desperate flailing at best and self-serving facade at worst (and thus have unfair and tragic spillover harms on the bulk of EA that has nothing to do with AI policy).
Effective Altruists were supposed to have known better than ACTUAL PROFESSIONAL INVESTORS AND FINANCIAL REGULATORS about the fraudulent activities of SBF?
Effective Altruists hung out with him, worked with him, mentored him into earn to give in the first place- the regulators might've failed on not catching him fast enough, but unironically yes, there are reasons some EAs should've caught on (and normal, human, social reasons for why they wouldn't).
I agree with the general point that EA has done a lot of good and is worth defending, but I think this gives it too much credit, especially on AI and other political influences. I suspect a lot of those are reverse causation - the kind of smart, open-minded techy people who are good at developing new AI techniques (or the YIMBY movement) also tend to be attracted to EA ideas, and I think assuming EA as an organization is responsible for anything an EA-affiliated person has done is going too far.
(That said, many of the things listed here have been enabled or enhanced by EA as an org, so while I think you should adjust your achievement estimates down somewhat they should still end up reasonably high)
I'm not giving EA credit for the fact that some YIMBYs are also EAs, I'm giving it credit for Open Philanthropy being the main early funder for the YIMBY movement.
I think the strongest argument you have here is RLHF, but I think Paul wouldn't have gone into AI in the first place if he wasn't an EA. I think probably someone else would have invented it for some other reason eventually, but I recently learned that the Chinese AI companies are getting hung up on it and can't figure it out, so it might actually be really hard and not trivially replaceable.
Hm. I think there's a distinction between "crediting all acts of EAs to the EA movement", and "showing that EAs are doing lots of good things". And it's the critics who brought up the first implication, in the negative sense.
It's frustrating to hear people concerned about AI alignment being compared to communists. Like, the whole problem with the communists was they designed a system that they thought would work as intended, but didn't foresee the disastrous unintended consequences! Predicting how a complex system (like the Soviet economy) would respond to rules and constraints is extremely hard, and it's easy to be blindsided by unexpected results. The challenge of AI alignment is similar, except much more difficult with much more severe consequences for getting it wrong.
> Am I cheating by bringing up the 200,000 lives too many times?
Yes, absolutely. The difference between developing a cure for cancer or AIDS or whatever is that it will solve the problem *permanently* (or at least mitigate it permanently). Saving lives in impoverished nations is a noble and worthwhile goal, but one that requires continuous expenditures for eternity (or at least the next couple centuries, I guess).
And on that note, what is the main focus of EA ? My current impression is that they're primarily concerned with preventing the AI doom scenario. Given that I'm not concerned about AI doom (except in the boring localized sense, e.g. the Internet becoming unusable due to being flooded by automated GPT-generated garbage), why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely ?
You can choose what causes you donate to. Like, to bring another example, if you're a complete speciesist and want to donate only to stuff that saves humans, that's an option even within GiveWell etc. you do not need to buy into the doomer stuff to be an EA, let alone give money.
From what I've seen, there's an active and sustained effort in the EA movement to redirect their efforts from boring humdrum things like mosquito nets and clean drinking water to the essential task of saving us all from AI doom. Based on the graph, these efforts are bearing fruit. I don't see any contradiction here.
AI Doom isn't even the only longtermist/catastrophic risk cause area--pandemic prevention, nuclear risk, etc all are also bundled in that funding area.
Fair point, it depends on what you mean by "cure". If we could eradicate cancer the way we did polio, it would dramatically reduce future expenditures.
That seems unlikely on the face of it, since polio is an infection, while cancer, barring a small number of very weird cases, isn't. There isn't an external source of all cancer which could theoretically be eliminated.
I tried to calculate both AIDS/cancer/etc and EA in terms of lives saved per year, so I don't think it's an unfair comparison. As long as EA keeps doing what it's doing now, it will have "cured AIDS permamently".
You can't "donate to EA", because EA isn't a single organization. You can only donate to various charities that EA (or someone else) recommends (or inspired). I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.
If you donate to Malaria Consortium for that reason, I count you as "basically an EA in spirit", regardless of what you think about AI.
> As long as EA keeps doing what it's doing now, it will have "cured AIDS permamently".
Can you explain how this would work -- not just in terms of total lives saved, but cost/life ?
>You can't "donate to EA", because EA isn't a single organization.
Yes, I know, I was using this as a shorthand for something like "donating to EA-endorsed charities and in general following the EA community's recommendations".
> I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.
What if I care about things other than maximizing the number of lives saved (such as e.g. quality of life) ? Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever) ? Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce most accurate recommendations ?
I expect that EA adherents would say that all of these questions have been adequately answered, but a). I personally don't think this is the case (though I could just not be smart enough), and b). given the actual behaviour of EA vis a vis SBF and such, I am not certain to what extent their proclamations can be trusted. At the very least, we can conclude that they are not very good at long-term PR.
My God, just go here https://www.givingwhatwecan.org/ you control where the money goes, it won't get randomly redirected into something you don't care about.
If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it. Though I do wonder what you're thinking to do with your charitable giving that is higher impact than something attacking global poverty/disease.
> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist
I really hate this argument; it's as dishonest as saying "if you care about your neighbour then you're already a Christian". No, there's actually a bit more to being a Christian (or an EA) in addition to agreeing with bland common-sense homilies.
Eh, EA really has a way lower barrier of entry than being Christian. I really do think all it takes is starting to think about how to do the most good. It's not really about submitting to a consensus or a dogma. I sure know I don't buy like 50% of EA, yet, I still took the Giving What We Can pledge and am therefore an EA anyway.
Checking the math on claims of charitable effectiveness, shopping around for the best value in terms of dollars-per-Quality-Adjusted-Life-Year (regardless of exactly how you're defining 'quality of life,' so long as you're willing to stick to a clear definition at all), is about as central to EA as Christ is to Christianity.
Perhaps, but it is not *unique* to EA. It's like saying that praying together in a big religious temple is central to Christianity -- it might be, but still, not everyone who prays in a big temple is a Christian. Furthermore, the original comment that I replied to is even more vague and general than that:
> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it.
> Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever) ?
The charities that get GiveWell recommendations are very transparent. You can see their detailed budget and cost-effectiveness in the GW analyses. If Against Malaria Foundation decides to get into AI safety research, you will know.
Nothing even vaguely like this has ever happened AFAIK. And it seems wildly improbable to me, because those charities have clear and narrow goals, they're not like a startup looking for cool pivots. But, importantly, you don't have to take my word for it.
> Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce most accurate recommendations ?
Sadly there is not a real-money prediction market on this topic, so I can't confidently tell you how unlikely this is. But we're living in the present, and right now GW does great work. If GW ever stops doing great work, *then* you can stop using it. Its decline is not likely to go unnoticed (especially compared to a typical non-EA-recommended charity), what with the transparency and in-depth analyses allowing anyone to double-check their work, and the many nerdy people with an interest in doing so.
> why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely ?
Don't "donate to EA"; donate to the causes that EA has painstakingly identified to be the most cost-effective and neglected.
EA Funds is divided into 4 categories (global health & development, animal welfare, long-term future, EA infrastructure) to forestall exactly this kind of concern. Think bed nets are a myopic concern? Think animals are not moral subjects? Think AI doom is not a concern? Think EAs are doing too much partying and castle-purchasing? Join the club, EAs argue about it endlessly themselves! And just donate to one of the other categories.
(What if you think *all four* of these are true? Probably there's still a group of EA hard at work trying to identify worthwhile donation target for you; your preferences are idiosyncratic enough that you may have to dig through the GiveWell analyses yourself to find them.)
1. the post is from August 14 2022, before the FTX collapse, so the orange bar (Longtermism and Catastrophic Risk Prevention) for 2022 might be shorter in reality.
I agree. Also: EA can refer to at least three things:
- the goal of using reason and evidence to do good more effectively,
- a community of people (supposedly) pursuing this goal, or
- a set of ideas commonly endorsed by that community (like longtermism).
This whole article is a defense of EA as a community of people. But if the community fell apart tomorrow, I'd still endorse its goal and agree with many of its ideas, and I'd continue working on my chosen cause area. So I don't really care about the accomplishments of the community.
Unfortunately, and that's an very EA thought, I am pretty sceptical that EA saved 200,000 lives counterfactually. AMFs work was funged by the Gates Foundation which decided to fund more US education work after stopping their malaria work due to tremendous amounts of funding from outside donors
Unless you count one trivial missing apostrophe, there aren't any spelling mistakes! (Sceptical is the British spelling. Scott has many British readers.)
200,000 sounds like a lot but there are approximately 8 billion of us. It would take over 15,000 years to give every person one minute of your time. Who are these 200,000? Why were their lives at risk without EA intervention? Whose problems are you solving? Are you fixing root causes or symptoms? Would they have soon died anyway? Will they soon die anyway? Are all lives equal? Would the world have been better off with more libraries and less malaria interventions? These are questions for any charity but they're more easily answered by the religious than the intellectual which makes it easier for them as they don't need to win arguments on the internet. EA will always have it harder because they try to justify what they do with reason.
Probably a well worn criticism but I'll tread the path anyway: ivory tower eggheads are impractical, do come up with solutions that don't work and enshrine as sacred ideas that don't intuitively make sense. All while feeling intellectually superior. The vast majority of the non-WEIRD world are living animalistic lives. I don't mean that in a negative sense. I mean that they live according to instinct: my family's lives are more important than my friend's lives, my friend's lives are more important than stranger's lives, my countrymen's lives are more important than foreigners's lives, human lives are more important than animal lives. And like lions hunting gazelles they don't feel bad about it. But I suspect you do and that's why you write these articles.
If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world to approve of what you're doing and how you're doing it, give up. Many never will.
> If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world
> to approve of what you're doing and how you're doing it, give up.
Amongst many ways to get more good done, one practical approach is to get more people to do good. Naysayers are welcome to the finger as you suggest, but sometimes people might be on the fence; and if, with a little nudge, more good things get done, taking a little time for a little nudge is worthwhile.
We don't need to know if all lives are valued equally. As long as we expect that their value is positive then saving a lot will mean a lot of positive value.
"Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat."
I hope that includes all Yum! brands, not just Pepsi. Otherwise, I'm thinking you probably don't have much to crow about if Pepsi agrees to use cruelty free meat in their...I dunno...meat drinks, I guess, but meanwhile KFC is still skinning and flaying chickens alive by the millions.
I stopped criticizing EA a while back because I realized the criticism wasn't doing anything worthwhile. I was not being listened to by EAs and the people who were listening to me were mostly interested in beating up EA as a movement. Which was not a cause I thought I ought to contribute to. Insofar as I thought that, though, it was this kind of stuff and not the more esoteric forms of intervention about AI or trillions of people in the future. The calculation was something like: how many bednets is some rather silly ideas about AI worth? And the answer is not zero bed nets! Such ideas do some damage. But it's also less than the sum total of bed nets EA has sent over in my estimation.
Separately from that, though, I am now convinced that EA will decline as a movement absent some significant change. And I don't think it's going to make significant changes or even has the mechanisms to survive and adapt. Which is a shame. But it's what I see.
Wasn't your criticism that EA should be trying to build malaria net factories in the most dysfunctional countries in the world instead of giving people nets that need nets, because this would allow people with an average IQ of 70 to build the next China? Yeah, I can't imagine why people weren't interested in your great ideas...
Totally fair that EA succeeds at its stated goals. I'm sure negative opinions run the gamut, but for my personal validation I'll throw in another: I think it's evil because it's misaligned with my own goals. I cannot deny the truth of Newtonian moral order and would save the drowning child and let those I've never heard of die because I think internal preference alignment matters, actually.
Furthermore, it's a "conspiracy" because "tradeoff for greater utils (as calculated by [subset of] us)" is well accepted logic in EA (right?). This makes the behavior of its members highly unpredictable and prone to keeping secrets for the greater good. This is the basic failure mode that led to SBF running unchecked -- his stated logic usually did check out by [a reasonable subset of] EA standards.
Yes, usually including myself! However EA seems like a powerful force for making my life worse rather than something that offers enough win-win to keep me ambivalent about it.
If EA continues to grow, I think its likely that I'll trade off a great amount of QALYs for an experiment that I suspect is unlikely to even succeed at its own goals (in a failure mode similar to centralized planning of markets).
I don't identify as an EA "person" but I think the movement substantially affected both my giving amounts and priorities. I'm not into the longtermism stuff (partly because I'm coming from a Christian perspective and Jesus said "what you do to the least of them you do to me," and not "consider the 7th generation") but it doesn't offend me. I'm sure I'm not alone in having been positively influenced by EA without being or feeling fully "in."
Thank you for the data point. And for the giving, of course!
I think you do not have to agree with all points that were ever made in EA to be an EA. I think there are many people who identify as effective altruists, but do not care about animals, or longtermism, etc. We can agree that helping others is good, that being effective is better than not being effective... and still disagree on the exact measurement of the "good". The traditional answer is QALYs, but that alone doesn't tell us how to feel about animals, or humans in distant future.
Not saying that you should identify as an EA. I don't really care; it is more important what you do than what you call it. Just saying that the difference may be smaller than it seems.
In the present epistemic environment, being hated by the people who hate EA is a good thing. Like, you don't need to write this article, just tell me Covfefe Anon hates EA, that's all I need. It doesn't prove EA is right or good, or anything, but it does get EA out of the default "not worth the time to read" bucket.
That only applies to stupidity which is at least partly random. If some troll has established a pattern of consistently and intelligently striving to be maximally malicious, taking the reverse of their positions on binary issues may actually be a decent approximation of benevolence.
It's hard to argue against EA's short-termist accomplishments (longtermist remain uncertain), as well as against the core underlying logic (10% for top charities, cost-effectiveness, etc). That being said, how would you account for:
- the number of people who would be supportive of (high-impact) charities, but for whom EA and its public coverage ruined the entire concept/made it suspicious;
- the number of EAs and EA-adjacent people who lost substantial sums of money on/because of FTX, lured by the EA credentials (or the absence of loud EA criticisms) of SBF;
- the partisan and ideological bias of EA;
- the number of talented former EAs and EA-adjacent people whose bad experiences with the movement (office power plays, being mistreated) resulted in their burnout, other mental health issues, and aversion towards charitable work/engagement with EA circles?
If you take these and a longer time horizon into the account, perhaps it could even mean a "great logic, mixed implementation, some really bad failure modes that make EA's net counterfactual impact uncertain"?
Control F turns up no hits for either Chesterton or Orthodoxy, so I'll just quote this here.
"As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon my mind— the impression that Christianity must be a most extraordinary thing. For not only (as I understood) had Christianity the most flaming vices, but it had apparently a mystical talent for combining vices which seemed inconsistent with each other. It was attacked on all sides and for all contradictory reasons. No sooner had one rationalist demonstrated that it was too far to the east than another demonstrated with equal clearness that it was much too far to the west. No sooner had my indignation died down at its angular and aggressive squareness than I was called up again to notice and condemn its enervating and sensual roundness. […] It must be understood that I did not conclude hastily that the accusations were false or the accusers fools. I simply deduced that Christianity must be something even weirder and wickeder than they made out. A thing might have these two opposite vices; but it must be a rather queer thing if it did. A man might be too fat in one place and too thin in another; but he would be an odd shape. […] And then in a quiet hour a strange thought struck me like a still thunderbolt. There had suddenly come into my mind another explanation. Suppose we heard an unknown man spoken of by many men. Suppose we were puzzled to hear that some men said he was too tall and some too short; some objected to his fatness, some lamented his leanness; some thought him too dark, and some too fair. One explanation (as has been already admitted) would be that he might be an odd shape. But there is another explanation. He might be the right shape. Outrageously tall men might feel him to be short. Very short men might feel him to be tall. Old bucks who are growing stout might consider him insufficiently filled out; old beaux who were growing thin might feel that he expanded beyond the narrow lines of elegance. Perhaps Swedes (who have pale hair like tow) called him a dark man, while negroes considered him distinctly blonde. Perhaps (in short) this extraordinary thing is really the ordinary thing; at least the normal thing, the centre. Perhaps, after all, it is Christianity that is sane and all its critics that are mad— in various ways."
Christians, famously in firm agreement about Christianity. Definitely have had epistemology and moral philosophy figured out amongst themselves this whole time.
Someone like Chesterton can try to defend against criticisms of Christianity from secular critics and pretend he isn't standing on a whole damn mountain range of the skulls of Christians of one sect or another killed by a fellow follower of Christ of a slightly different sect.
The UK exists as it does first by splitting off from Catholicism and then various protestants killing each other over a new prayer book. Episcopalian vs. Presbyterian really used to mean something worth dying over! RETVRN.
-------------------------------------------------> YOU
Yeah the point is that everything Chesterton said in those quotes about Christianity is now true of EA, hence the political compass meme Scott shared. Also Scott (and this commentariat) like Chesterton for this kind of paradoxical style.
Please try a little harder before starting a religious slapfight and linking to wikipedia like I don't know basic history.
It's the internet bucko. I'll link to Wikipedia and start religious slapfights whenever, wherever.
The reason I'm having a "whoosh" moment is because EA, whatever faults it has, can in no way measure up to what Christianity did to deserve actually valid criticism.
So you're trying to be clever but it's lost on poor souls like me who think Chesterton was wrong then and Scott is right now.
People say EA is too far right, too far left, too authoritarian, too libertarian. With me so far?
In the 20s people were saying Christianity was too warlike but also too pacifistic, too pessimistic but also too optimistic. With me still?
The -structure- of the incoherence is the same in both cases, regardless of the facts underneath. I give zero fucks about Christianity. It's an analogy. Capiche, bud?
It is possible to have errors in two normally-conflicting directions at once. For instance, a lousy test for e.g. an illness might have _both_ more false negatives _and_ more false positives than a better test for the same illness, even though the rates of these failure modes are usually traded off against each other.
I'm not claiming that either or both of Christianity or EA is in fact in this position, but it can happen.
He certainly gives away a lot of money, and from what I know about the Gates Foundation they put a lot of effort into trying to ensure that most of it is optimally spent in some kind of DALYs-per-dollar sense. He's been doing it since 1994, he's given away more money than anyone else in history, and by their own estimates (which seem fair to compare with Scott's estimates) has saved 32 million lives so far.
Is it just branding? Is EA a bunch of people who decided to come along later and do basically the same thing as Bill Gates except on a much smaller scale and then pat themselves on the back extra hard?
I agree Bill Gates qualifies as a lowercase effective altruist.
I don't think "do the same thing as Bill Gates" is anything to scoff at! I think if you're not a billionaire, it's hard to equal Gates' record on your own, and you need institutions to help you do it. For example, Bill can hire a team of experts to figure out which is the best charity to donate to, but I (who can't afford this) rely on GiveWell.
I agree that a fair description of EA would be "try to create the infrastructure to allow a large group of normal people working together to replicate the kinds of amazing things Bill Gates accomplished"
(Bill Gates also signed the statement on AI existential risk, so we're even plagiarizing him there too!)
Well if Bill Gates is an effective altruist then I feel like one of the big problems with the Effective Altruism movement is a failure to acknowledge the huge amount of prior art. Bill Gates has done one to two orders of magnitude more for effective altruism than Effective Altruism ever has, but EA almost never acknowledges this; instead they're more likely to do the opposite with their messaging of "all other charity stupid, we smart".
C'mon guys, at least give a humble shout-out to the fact that the largest philanthropist of all time has been doing the same basic thing as you for about a decade longer. You (EA) are not a voice crying in the wilderness, you're a faint echo.
Not that I'm even a big fan of Bill Gates, but credit where credit is due.
Eh, where did you get the impression that EAs almost never acknowledge the value of the work done by Gates or that they are likely to dismiss it as stupid? Just to mention the first counterexample that comes to mind, Peter Singer has said that Gates has a reasonable claim to have done more good than any other person in human history.
On this topic, I believe Scott also wrote a post trying to quantify how much good the Gates Foundation has done. Or possibly it was more generally trying to make the case for billionaire philanthropy. Either way, I agree EA isn't denying the impact Gates has had.
So I'm pretty much a sceptic of EA as a movement despite believing in being altruistic effectively as a core guiding principle of my life. My career is devoted to public health in developing countries, which I think the movement generally agrees is a laudable goal. I do it more within the framework of the traditional aid complex, but with a sceptical eye to the many truly useless projects within it. I think that, in ethical principle, the broad strokes of my life are in line with a consequentialist view of improving human life in an effective and efficient way.
My question is: what does EA as a movement add to this philosophy? We already have a whole area of practice called Monitoring and Evaluation. Economics has quantification of human lives. There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that. From my perspective - and I share this hoping to be proved wrong - EA is largely a way of gaining prestige in Silicon Valley subcultures, and a way of justifying devoting one's life to the pursuit of money based on the assumption, presented without proof, that when you get that money you'll do good with it. It seems like EA exists to justify behaviour like that at FTX by saying 'look it's part of a larger movement therefore it's OK to steal the money, net lives saved is still good!' It's like a doctor who thinks he's allowed to be a serial killer as long as he kills fewer people than he saves.
The various equations, the discount rates, the jargon, the obsession with the distant future, are all off-putting to me. Every time I've engaged with EA literature it's either been fairly banal (but often correct!) consequentialist stuff or wild subculture-y speculation that I can't use. I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.
Right now there are two groups of people who work middle-class white-collar jobs and donate >10% of their income to charity. The first group are religiously observant and are practicing tithing, with most of their money going to churches, a small fraction of which goes to the global poor. The second group is EA, and most of their money goes to the global poor.
You're right that the elements of the ideology have been kicking around in philosophy, economics, business, etc for the last 50 years, at least. But they haven't been widely combined and implemented at large until EA did it. Has EA had some PR failures a la FTX? Yes, but EA existed years before FTX even existed.
EA is mostly in favor of more funding for "the many people working in various forms of aid measuring their work better". The things you support and the things EA supports don't seem to be at odds to me.
>There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that.
>I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.
Huh? So, you're saying that "we" the "many people" could in principle get their act together, but for some reason haven't gotten around to doing that yet, meanwhile EAs, in their bungling naivety, attempt to pick up the slack, yet this is somehow worse than doing nothing?
Many people are getting ther act together. Many donors are getting better at measuring actual outcomes instead of just trainings or other random nonsense. It's slow because the whole sector is a lumbering machine, but I don't see EAs picking up the slack. All I see are arcane arguments about AI and inside-baseball jargon. If they are 'picking up the slack', you're also doing a whole bunch of other things that drown that out.
I use GiveWell to direct my donations, GiveWell is pretty much the central example of EA in my experience, and I'm not aware of another "small donor"-facing group which provides good information on what charities are most efficacious in saving lives (or other thing you care about). Do you have any recommendations?
I can fully believe that, e.g., AMF has spent money, or contracts out, or some similar such thing to help make sure that their interventions are the best, but I'm not aware of anybody besides GiveWell who aggregates it with an eye towards guiding donors to the best options, which is the thing I like EA for and most of what EA does is this sort of aggregation of data into a simple direction of action. (I also never see people criticizing EAs for GiveWell giving inaccurate numbers, so I assume the numbers are basically correct.)
I use GiveWell as well, small donor donations are extremely murky for larger organisations and it's true that I have not seen anyone else make a better guide for small donors. There are definitely positive elements to the movement as well, I'm sceptical but not totally dismissive.
Calculating where the money should be sent is one part. Actually sending the money is the other part. The improvement of EA is in actually sending the money to the right places, as a popular movement.
This is an interesting question. Do you believe the subculture-y parts of the movement motivate people to actually send the money (instead of just saying they will)? If so, is the movement specifically tied to a time and place, such as current Silicon Valley, because different things might motivate different people to act?
Definitely; most people are motivated by what their friends *do*.
When Christians go to the church, they hear a story about Jesus saying that you should sell all your property and donate the money to the poor. Then they look around and see that none of their neighbors has actually sold their property. So they also don't feel like they should. They realize that "selling your property and donating to the poor" is something that you are supposed to verbally approve, but you are not supposed to actually do it.
And this is not meant as an attack on Christians; more or less *everyone* is like this, I just used a really obvious example. Among the people who say they care about the environment, only a few recycle... unless it becomes a law. Generally, millions of people comment on every cause that "someone should do something about it", but only a few actually do something. If you pay attention, you may notice that those people are often the same ones (that people who do something about X are also statistically more likely to do something about Y).
I suspect that an important force are... people on the autistic spectrum, to put it bluntly. They have a difficulty to realize (to instinctively do this, without consciously being aware of it) that they are supposed to *talk* about how doing X is desirable, but never actually *do* X. They hear that X should be done, and they go ahead and try to do X. Everyone else says "wow, that was really nice of you" but also thinks "this is some weirdo I need to avoid". Unless there is a community that reaches a critical amount of autism, so that when someone goes and does X, some of their friends say "cool" and also do X. If a chain reaction starts and too many people do X, even the less autistic people succumb to the peer pressure, because they are good people at heart, they just have a strong instinct against doing good unless someone else already does it first.
The rationalist community in Bay Area is an example of a supercritical autistic community. (This is more or less what other people have in mind when they accuse rationalists of being a "cult".) Not everyone has the same opinions, of course; they are actually *less* likely to agree on things than the normies. But once a sufficient subset of them agrees that X should be done, they go ahead and actually start doing X as a subculture, whether X is polyamory or donating to the poor. This is my explanation how the Effective Altruism started, why nerds are over-represented there, why so many of them also care about the artificial intelligence, why normies are instinctively horrified but cannot precisely explain why (because they agree verbally with the idea of giving to the poor, they just feel that it is weird that someone actually does that, you are only supposed to talk about how "we should"; normies are instinctively horrified by weirdness, because it lowers one's social status).
> is the movement specifically tied to a time and place, such as current Silicon Valley
Is there another place with such concentration of autists, especially one that treats them with relative respect? (Genuine question; if there is, I want to know.) There are virtual communities, but those usually encourage people to do things in the virtual space, such as develop open-source software.
Isn't this just an admission of failure then? If it doesn't scale past your subculture then it won't really accomplish much in the world. You help some people on a small-donor personal scale, which is nice, but the main outcome then is that you act extremely smug with a tiny real-world impact while there's not much reason for the rest of the world to pay attention to your movement because it only applies to a small number of people in very specific circumstances.
Also, and I guess this is kind of a stereotype, I think you have a pretty out-of-touch idea of how 'normies' work. Lots of people follow through on what they say they'll do, including a variety of kinds of charitable giving. Like...there's an entire aid industry of people who think you should help others and have devoted their lives to it. I could make double what I do in the private sector if not more, but I don't! Effectiveness is a separate question but _lots_ of people follow through on their (non-religious) moral commitments.
> If it doesn't scale past your subculture then it won't really accomplish much in the world.
Not if the subculture is big enough, and some of its members make decent money. Also, the longer it exists, the more normies will feel like this is a normal thing to do, so they may join, too.
And yes, there was a lot of simplification and stereotyping. You asked how the subculture motivates people to actually send money; I explained what I believe to be the main mechanism.
IMO EA should invest in getting regulatory clarity in prediction markets. The damage done to the world by the absence of collective sense-making apparatus is enormous.
We're trying! I know we fund at least Solomon Sia to lobby for that, and possibly also Pratik Chougule, I don't know the full story of where his money comes from. It turns out this is hard!
As an enthusiastic short-termist EA, my attitude to long-termist EA has gone in the past year from "silly but harmless waste of money" to "intellectually arrogant bollocks that has seriously tarnished a really admirable and important brand".
Working out what the most efficient ways to improve the world here and now is hard, but not super-hard. I very much doubt that malaria nets are actually the single most efficient place that I could donate my money, but I bet they're pretty close, and identifying them and encouraging people to donate to them is a really valuable service.
Working out what the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably. I think that the two recent EA-adjacent scandals were specifically long-termist-EA-adjacent, and while neither of them was directly related to the principles of EA, I think both are very much symptomatic of the arrogance and insufficient learned epistemic helplessness that attract people to long-termist EA.
I think that Scott's list of "things EA has accomplished, and ways in which it has made the world a better place" is incredibly impressive, and it makes me proud to call myself an effective altruist. But look down that list and remove all the short-termist things, most of what's left seems either tendentious (can the EA movement really claim credit for the key breakthrough behind ChatGPT?), nothingburgers (funding groups in DC trying to reduce risks of nuclear war, prediction markets, AI doomerism). I'm probably exaggerating slightly, because I'm annoyed, but I think the basic gist of this argument is pretty unarguable.
All the value comes from the short-termists. Most of the bad PR comes from the longtermists, and they also divert funds from effective to ineffective causes.
My hope is that the short-termists are to some extent able to cut ties with the AI doomers and to reclaim the label "Effective Altruists" for people who are doing things that are actually effectively altruistic, but I fear it may be too late for that. Perhaps we should start calling ourselves something like the "Efficiently Charitable" movement, while going on doing the same things?
"Working out what the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably."
Agreed. I don't think that anyone trying to anticipate the consequences that an action today will produce in 100 years is even going to get the _sign_ right significantly better than chance.
Completely agree with this. I've donated a few tens of thousands to the Schistosomiasis Control Initiative, but stopped earlier this year in disgust with what the overall movement was focussing on. That led me to alarm that the goals which I'd previously presumed were laudable were coming from a philosophy that could so easily be diverted into nonsense. I may start donating again, but EA has to do a lot to win me back. At the moment it's looking most likely I divert my donation to a community or church based group (I've fully embraced normie morality)
This seems like a bad reaction - just because people adjacent to the people who originally recommended the SCI to you are doing silly or immoral things does not mean that the SCI will not do more good per dollar donated than a community or church group.
I think "short-termist EA good" is a far, far more important message than "long-termist EA bad".
EA depends on a certain set of assumptions which hold if you are a disembodied mind concerned only in the abstract with what is good for humanity.
But none of us are actually that disembodied mind, and it’s disingenuous to pretend and act as if we are.
The common sense morality position that you should look after your friends, family and community before caring for others, even if you could do more good for others with the same resources in some abstract sense, is in my opinion correct.
Specifically it’s correct because of the principles of justice and reciprocity. Take reciprocity first. I owe an approximately infinite amount to my parents. I owe a very large amount to my wider family, a lot to my friends, and quite a bit to my larger community and to my nation. All that I am, including my moral character, is because of these people.
As a concrete example, if my mothers life depended on my giving her hundreds of thousands of dollars, perhaps for an experimental cancer treatment, I would do this without hesitation, even though it could save hundreds of lives by abstract calculation.
I would argue it’s superogatory to donate to charity in the developing world. It’s a good thing to do and if you’re going to do it you may as well ask where your dollars will be well spent. But EA doesn’t address the argument from reciprocity that you owe far more to those close to you.
Next, the argument from justice. This is the other issue with basing donations on cold mathematical calculations. For example, right now if I were to donate to Doctors Without Borders, there’s a fair chance that my money would go to fund their operations in Gaza. Now, before the comments section blows up, I do believe that this particular charity in this particular instance is doing net good - but they’re famously apolitical and they use resources to treat terrorists as well as civilians. How much does that impact the lives saved per dollar of my donation, if there are some lives I’d rather not save? Who knows? EA don’t consider it their position to calculate. Considerations like this apply to every dollar spent in regions where the donor doesn’t understand, or even consider it their position to understand, the politics and the underlying reasons that all these preventable deaths are occurring.
I consider it, in retrospect, a logical and unfortunately inevitable outgrowth of EA’s philosophy that so much effort has now been hijacked by causes that arouse little or no sympathy in me. It was always a byline of the movement that you should purchase utilons and not warm fuzzies with your charitable donations. It’s fundamentally not how people work. The much derided warm fuzzies are a sure sign that you’re actually accomplishing something meaningful.
I think this is a good list, even though it counts PR wins such as convincing Gates. 200k lives saved is good, full stop.
However, something I find hard to wrap my head around is that the most effective private charities, say Bill & Melinda Gates foudnation (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373372/), have spent their money and have had incredible impact that's orders of magnitude more than EA has. They define their purpose narrowly and cleave to evidence based giving.
And yet, they're not EAs. Nobody would confuse them either. So the question is less whether "EAs have done any good in the world", the answer is of course yes. Question is whether the fights like boardroom drama and SBF and others actively negate the benefits conferred, on a net basis. The latter isn't a trivial question, and if the movement is an actual movement instead of a lot of people kind of sort of holding a philosophy they sometimes live by, it requires a stronger answer than "yes, but we also did some good here".
I think I would call them EAs in spirit, although they don't identify with the movement.
As I said above, I think "help create the infrastructure for a large group of normal people to do what Gates has done" is a decent description of EA.
I think Gates has more achievements than us because he has 10x as much money, even counting Moskowitz's fortune on EA's side (and he's trying to spend quickly, whereas Moskowitz is trying to delay - I think in terms of actual spending so far it's more like 50-1)
I respect the delta in money though its not just that which causes Gates' success. He focuses on achievements a lot and has built extraordinary execution capabilities. The movement that tries to "create a decentralised Gates Foudnation" would have to do very different things to what EA does. To achieve that goal requires a certain amount of winning. Not just in the realpolitik sense either.
And so when the movement then flounders in high profile ways repeatedly, and demonstrates it does not possess that capacity, the goals and vision are insufficient to pull it back out enough to claim it's net positive. If you recall the criticisms being made of EAs in the pre SBF era, they're eerily prescient about today's world where the problems present themselves.
I think one of the keys to Gates' success is that he sets himself clear and measurable goals. He is not trying to "maximize QALYs" or "Prevent X-risk" in some ineffable way; he's trying to e.g. eradicate malaria. Not all diseases, and not even all infectious diseases, just malaria. One step toward achieving this is reducing the prevalence of malaria per capita. Whenever he spends money or anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.
“Whenever he spends money or anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.”
I think GiveWell, a major recipient of funding and support from EA, is actually extremely analytical about whether its money is achieving its goals. I just don’t think the difference between Gates and EA is as profound as you think. After all, Gates had to determine that malaria was a good cause to take up, and if malaria is eradicated he’ll have to figure out what cause takes its place. I don’t think he’s drawing prospective causes out of a hat, do you? He’s figuring out where the money could do the most good. That’s something everyone can do, whether or not they have Gates’s money, and that’s the purpose of EA.
At the risk of being uncharitable, I think that the difference between Gates and EA is that Gates saw that malaria was killing people; decided to stop it (or at least reduce it) ASAP; then used analytics to distribute his money most efficiently in service of that goal. EA saw that malaria was killing people; decided to stop it (or at least reduce it); used analytics to distribute his money most efficiently in service of that goal; then, as soon as they enjoyed some success, expanded the mission scope to prevent not merely malaria, but all potential causes of death now or in the long-term far-flung future.
Say what you want about the vagaries of longtermism, you accurately assessed the risk that you were being uncharitable! I don't think it's fair to say that EA invests in fighting all causes of death—you can see that fighting widespread and efficient-to-combat deadly diseases still receives by far the largest share of GiveWell funds—and as far as the future, while we might disagree about AI risk can we agree that future deaths from pandemics, for instance, are not an outlandish possibility and therefore might be worth investing in?
I mean Gates, a brilliant tech founder, is really, really close to EA/rationality by default. If all charity was done by Bill, then EA would not have been necessary.
You can point to organizations that are, by EA standards, highly effective, and not make a dent in the issue of average effectiveness of charities/donations overall. If the effectiveness waterline were higher, the founders of EA would presumably not been driven to do as they did, is my point.
And, EA is specifically focused on "important, tractable, and neglected" issues, so it's explicitly not trying to compete with orgs doing good work already.
For what it’s worth, the “EA in spirit” struck me rather sourly. It feels like EA as a movement trying to take credit for lots of stuff it contributed to, but was not solely responsible for. For what it’s worth, I am sympathetic to the charitable giving and think EAs want to do well, but the movement is utterly consumed with extreme scenarios where the expected values are as dramatic as you want them to be, ironically because of a lack of evidence.
It doesn’t seem clear which way the boardroom drama goes in being good or bad. SBF is unfortunate, but maybe unfair to pin this mainly on EA (at least they are trying to learn from it as far as it concerns them).
Its unfair to pin SBF entirely on EA, though having him be a poster child for the movement all the while stealing customer money is incredibly on the nose. Especially since he used EAs as his recruiting pool and part of his mythos.
I would say considering Bill Gates an EA makes "what is EA" impossible to answer. Which is ok if it's meant to be like "science", but completely useless if it's about the movement. Then there should not be a movement at all, but splinter into specific things like Open Phil and Givewell and whatnot.
I don't understand why you put Anthropic and RLHF on this list. These are both negatives by the lights of most EAs, at least by current accounting.
Maybe Anthropic's impact will pay off in the future, but gathering power for yourself, and making money off of building dangerous technologies are not signs that EA has had a positive impact on the world. They are evidence against some form of incompetence, but I doubt that by now most people's concerns about the EA community are that the community is incompetent. Committing fraud at the scale of FTX clearly requires a pretty high level of a certain kind of competence, as did getting into a position where EAs would end up on the OpenAI board.
"but I doubt that by now most people's concerns about the EA community are that the community is incompetent."
I think you're a week out of date here!
I go back and forth on this, but the recent OpenAI drama has made me very grateful that there are people other than them working on superintelligence, and recent alignment results have made me think that maybe having really high-skilled corporate alignment teams is actually just really good even with the implied capabilities progress risk.
This gets at exactly the problem I have with associating myself with EA. How did we go from "save a drowning child" to "pay someone to work on superintelligence alignment". The whole movement has been captured by the exact navel gazing it was created to prevent!
Imagine if you joined an early abolitionist movement, but insisted that we shouldn't work on rescuing slaves, or passing laws to free slaves, but instead focused on "future slave alignment to prevent conflict in a post slavery world" or some nonsense. The whole movement has moved very far from Singer's original message, which had some moral salience to people who didn't necessarily work intellectual problems all day. It's no surprise that EA is not trusted...imagine yourself in a <=110 IQ brain, it would seem obvious these people are scamming you, and seeing things like SBF just fits the narrative.
Imagine EAs doing both though. Current and future problems. Different timelines and levels of certainty.
Like, obviously it's impossible to have more than one priority or to focus on both present and future, certain and potential risks, but wouldn't it be so cool if it were possible?
(Some of the exact same people who founded GiveWell are also at the forefront of longtermist thought and describe how they got there using the same basic moral framework, for the record.)
Certainly it's possible, but don't you think one arm of this (the one that is more speculative and for which it is harder to evaluate ROI) is more likely to attract scammers and grifters?
I think the longtermism crowd is intellectualizing the problem to escape the moral duty inherent in the provocation provided by Singer, namely that we have a horrible moral crisis in front of us that can be addressed with urgency, which is the suffering of so many while we engage in frivolous luxury.
Well I'm the kind of EA-adjacent person who prefers X-risk over Singerism, so that's my bias. For instance, I mostly reject Singer's moral duty framing.
A lot of X-risk/longtermism aligns pretty neatly with existing national security concerns, e.g. nuclear and bio risks. AI risk is new, but the national security types are highly interested.
OG EA generally has less variance than longtermism (LT) EA, for sure. Of course, OG EA can lead you to caring about shrimp welfare and wild animal suffering, which is also very weird by normie standards.
SBF was donating a lot to both OG EA and LT EA causes (I'm not sure of the exact breakdown). I certainly think EA leaders could have been a lot more skeptical of someone making their fortune on crypto, but I'm way more anti-crypto than most people in EA/rationalist circles.
Also, like literally the founders of GiveWell also became longtermists. You really can care about both.
The funny thing about frivolous luxury is that as long as its contributing to economic growth it's going to outperform a large amount of all the nominally charitable work done that ended up either lighting money on fire or making things worse. (Economic growth remains the best way to help humans and the fact that EAs recognize this is a very good thing.)
No, I think people's concern is that the EA community is at the intersection of being very competent at seeking power, and not very competent at using that power for good. That is what at least makes me afraid of the EA community.
What happened in the OpenAI situation was a bunch of people who seem like they got into an enormous position of power, and then leveraged that power in an enormously incompetent way (though of course, we still don't know yet what happened and maybe we will hear an explanation that makes sense of the actions). The same is true of FTX.
I disagree with you on the promise of "recent alignment results". I think the Anthropic interpretability paper is extremely overstated, and I would be happy to make bets with you on how much it will generalize (I would also encourage you to talk to Buck or Ryan Greenblatt here, who I think have good takes). Other than that, it's mostly been continued commercial applications with more reinforcement-learning, which I continue to think increases and not decreases the risk.
It is funny how "talking past each other" are today's posts of Freddie and Scott. One is so focused on disparaging utilitarianism that even anti-utilitarians might think it was too harsh, while the other points to many good things EA did without ever getting to the point about why we need EA as presently constituted in the form of this movement. And part of that is conflating the definition of the movement as both 1) a rather specific group of people sharing some ideological and cultural backgrounds, and 2) the core tenets of evidence-based effectiveness evaluation that are clearly not exclusive to the movement.
I mean, you could simply argue that organizing people around a non-innovative but still sound common sensical idea that is not followed everywhere has its merits because it helps in making some things that were obscure become explicit. Fine. But it still doesn't necessarily mean that EA is the correct framing if it causes so much confusion.
"Oh but that confusion is not fair!..." Welcome to politics of attention. It is inevitable to focus on what is unique about a movement or approach. People choose to focus not on malaria (there were already charities doing that way before EA) but on the dudes seemingly saying "there's a 0.000001% chance GPT will kill the world, therefore give me a billion dollars and it will still be a bargain", because only EA as a movement considered this type of claim to be worthy of consideration under the guise of altruism.
I actually support EA, even though I don't do nearly enough to consider myself charitable. I just think one needs to go deeper into the reasons for criticism.
Zizek often makes the point that the history of Christianity is a reaction to the central provocation of Christ, namely that his descent to earth and death represents the changing of God the Father into the Holy Spirit, kept alive by the community of believers. In the same way the AI doomerists are a predictable reaction to the central provocation of the Effective Altruists. The message early on was so simple: would you save a drowning child? THEY REALLY ARE DROWNING AND YOU CAN MAKE A DIFFERENCE NOW.
The fact that so many EAs are drawn to Bostrom and MacCaskill and whoever else is a sign that so many EA were really into it to prove how smart they are. That doesn't make me reject EA as an idea, but it does make me hesitant to associate myself with the name.
EA as presented by Singer, like Christianity, was definitely not an intellectually difficult idea. The movement became quickly more intellectualized, going from (1) given in obviously good ways when you can to (2) study to find the best ways to give to (3) the best ways can only be determined by extensive analysis of existential risk to (4) the main existential risk is AI so my math/computer skills are extremely relevant.
The status game there seems transparent to me, but I'd be open to arguments to the contrary.
The AI risk people were there before EA was a movement, and in fact there were some talks of separating them so global poverty can look less weird in comparison. Vox journalist, EA and kidney haver Dylan Matthews wrote a pretty scathing article about the inclusion of X risk wt one of the earlier EA Global conferences. Talking about X risk with Global Poverty EAs, last time I checked, was like pulling teeth.
Maybe it is true that there's an intellectual signalling spiral going on, but you need positive evidence that it's true, and not just "I thought about it a bit and it seemed plausible".
I don't know what could constitute evidence of intellectual spiraling, but I know that for me personally, I was drawn to Singer's argument that I could save a drowning child. Reading MacCaskill or Bostrum feels not simply unrelated to that message, it seems like an EA anti-message to me.
Look, I know someone is going to think deeply about X-risk and Global Poverty (capitalized!), and get paid for it. But paying people to think about X-risk seems like the least EA thing possible, given there is no shortage of suffering children.
It's unwise to go "this is not true" and then immediately jump to a very specific theory of status dynamics when it's not supported by any evidence. Why not just say "AI risk investment seems unlikely to turn out as well as malaria nets, I do not understand why AI riskers think what they do".
I have no way of evaluating whether my investment in AI risking analysis will ever pay off, nor how much the person I am paying to do it has even contributed to avoiding AI risk. I don't even know what would constitute evidence that this is mere navel gazing, other than observing that it may be similar to other human behavior in that people want to paid to do things they enjoy, and thinking about AI-risk is fun and/or status enhancing.
Interesting! My reaction to Singer was: He is making such an unreasonably big ask that I was inspired to reject not only his ethical stance but the entire enterprise of ethics. Yetch!
I also am unsure about how to provide quant evidence on this, but I'd just say that while the people working on AI safety or being interviewed for it at 80k hours likely are mathy/comp sci nerds, many people are concerned about this as they are for other existential risks because they are convinced by those arguments, while lacking those skills.
Like I say, it's hard to provide more than anecdotes, but from observation (people I hang out with and read) and introspection: I'm a biologist, but while that gives me some familiarity with the tech and the jargon, I don't think my concern with bioterrorism comes from that, and my real job is in any case very unrelated.
I guess I could ask you if you feel the same way about the people worried nuclear war risk, bio risk etc. Do you feel like they are in a status game, or drawn to it because improving on it is something related to their rare skills?
Thinking about this personally: I would much rather "think about AI-risk" than do my job training neural nets for an adtech company; indeed I do spend my free time thinking about X-risk. I think this probably true for most biologists, nuclear engineers, computer scientists and so on.
The problem is that preventing existential catastrophe is inherently not measurable, so it attracts more status seekers, grifters, and scammers, just as priestly professions have always done. This is unrelated to whether the source material is biology or computer science. I was probably wrong to focus on status particularly, rather than a broader spectrum of poor behavior.
That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.
I could see how it could attract people that like being 'above it', because they get the theoretical risk even if the empirical outcomes are not observable (because we are either safe or dead) but again, while this is hard to quantify or truly verify (eyeronic) I'm not sure at all it is the main motivation. Not sure how to proceed from here, except to note that when someone wants to increase biosecurity (say, Kevin Esvelt) you don't get that sort of reaction as much as you get from AI, and I'm still not sure why.
I don't know that it's much different reaction when biosecurity means "take this drug/vaccine to protect yourself" instead of "make sure this lab is secure". IOW the extent of the difference is probably explained by the implied actions for the general public.
So, um, do I understand correctly that you unironically quote Zizek and yet accuse *someone else* of being drawn to certain thinkers to prove how smart they are?
I think activity which is difficult to measure attracts all forms of grifters, scammers, and status seekers.
That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.
I see your point, but if you look closely at the core concept of EA, it's not exactly "doing measurable charities", it's "doing the most good". Of course to optimize something you need to be able measure it in some way, but all such measurements are estimates (with varying degrees of uncertainty), and you can, in principle, estimate the impact of AI risk mitigation efforts (with high degree of uncertainty). Viewed from this angle, the story becomes quite less dramatic than "EAs have turned into the very thing they were supposed to fight", and becomes more along the lines of arguing about estimation methods and at which point high risk/high reward strategy turns into a Pascal Wager.
Also you're kind of assuming the conclusion when saying that people worried about AGI are scammers and grifters and want to show they're smart. That would be true if AGI concerns were completely wrong, but another alternative is that they are correct and those people (at least many of them) support this cause because they've correctly evaluated the evidence.
What you are saying would be true if the pool of people stayed static, but it doesn't. Scammers will join the movement because promises of large payouts far into the future with small probability is a scammer's (and or lazy status seeker's) paradise.
Thinking about X-risk is fun. In fact getting rich is good too because it will increase my ability to do good. Looks like EA is perfect for me after all! I don't even have to save that drowning child, as the opportunity cost in reduced time thinking about AI risk is higher than the benefits of saving it because my time thinking about AI will save trillions of future AI entities with some probability that I estimated. How lucky I am that EA tells me to do exactly what I wanted to do anyway!
So your point is that AGI safety is bad because some hypothetical person can use it as an excuse to not donate money and not save a drowning child? What a terrifying thought, yeah. We can't allow that to happen.
Yes, my point is that it's intellectual sophistry that is used to insulate oneself from the moral duty implied by the fundamental EA insight. That is, still feel good about doing "EA" while completely ignoring the duties implied by the messge.
Sorry to defend "their side" but I'm a not hypothetical person who actually made this calculation. Most of my donations still go to global poverty
I'm not going to describe in detail what I thought, but the absolute first thing on my mind was the opportunity cost, and that I hated being in the epistemic position where I thought the best use of money was AI risk, and not the much more convenient and socially acceptable global poverty.
Thank you for writing this. It's easy to notice things the controversial failures and harder to notice the steady march of small (or not-so-small-wins). This is much needed.
A couple notes about the animal welfare section. They might be too nitty-gritty for what was clearly intended to just be a quick guess, so feel free to ignore:
- I think the 400 million number for cage-free is an underestimate. I'm not sure where the linked RP study mentions 800 million — my read of it is that total commitments at the time in 2019 (1473 total commitments) would (upon implementation) impact a mean of 310 million hens per year. The study estimated a mean 64% implementation rate, but also there are now over 3,000 total cage-free commitments. So I think it's reasonable to say that EA has convinced farms to switch many billions of chickens to cage-free housing in total (across all previous years and, given the phrasing, including counterfactual impact on future years). But it's hard to estimate.
- Speaking of the 3,000 commitments, that's actually the number for cage-free, which applies to egg-laying hens only. Currently, only about 600 companies globally have committed to stop selling low-welfare chicken meat (from chickenwatch.org).
- Also, the photo in this section depicts a broiler shed, but it's probably closer to what things look like now (post-commitments) for egg-laying hens in a cage-free barn rather than what they used to look like. Stocking density is still very high in cage-free housing :( But just being out of cages cuts total hours of pain in half, so it's nothing to scoff at! (https://welfarefootprint.org/research-projects/laying-hens/)
- Finally, if I may suggest a number of my own: if you take the estimates from the welfare footprint project link above and apply it to your estimate for hens switched to cage-free (400 million), you land at a mind-boggling three trillion hours, or 342 million years, of annoying, hurtful, and disabling pain prevented. I think EA has made some missteps, but preventing 342 million years of animal suffering is not one of them!
EA makes much sense given mistake theory but less given conflict theory.
If you think that donors give to wasteful nonprofits because they’ve failed to calculate the ROI in their donation, then EA is a good way to provide more evidence based charity to the world.
But what if most donors know that most charities have high overhead and/or don’t need additional funds, but donate anyway? What if the nonprofit sector is primarily not what it says it is? What if most rich people don’t really care deeply about the poor? What if most donors do consider the ROI — the return they get in social capital for taking part in the nonprofit sector?
From this arguably realist perspective on philanthropy, EA may be seen to suffer the same fate as other philanthropic projects: a mix of legitimate charitable giving and a way to hobnob with the elite.
It’s still unknown whether the longtermist projects represent real contributions to humanity or just a way to distribute money to fellow elites under the guise of altruism. And maybe it will always be unknown. I imagine historians in 2223 debating whether 21st century x-risk research was instrumental or epiphenomenal.
I think that early EA were unaware of the "conflict theory" part of the equation; there's mentions from time to time that they expected the "direct donations best" to be the easy part and "donate more" to be the hard part, and found it to turn out to be the opposite. I think this has changed a good bit since.
But, tbh, I don't care about the conflict theory part. In the end, there are people who want to direct donations best, GiveWell appears to be the best way to do so transparently, and it (GiveWell) is the primary accomplishment of the EA movement IMO. If some people don't care about trying to do the right thing as much as possible, that's fine, they can go fuck in the mud for all I care.
Correction to footnote 13: Anthropic's board is not mostly EAs. Last I heard, it's Dario, Daniela, Luke Muehlhauser (EA), and Yasmin Razavi. They have a "long-term benefit trust" of EAs, which by default will elect a majority of the board within 4 years (electing a fifth board member soon—or it already happened and I haven't heard—plus eventually replacing Daniela and Luke), but Anthropic's investors can abrogate the Trust.
What's your response to Robin Hanson's critique that it's smarter to invest your money so that you can do even more charity in 10 years? AFAIK the only time you addressed this was ~10 years ago in a post where you concluded that Hanson was right. Have you updated your thinking here?
I invest most of my money anyway; I'll probably donate some of it eventually (or most of it when I'm dead). That having been said, I think there are some strong counterarguments:
- From a purely selfish point of view, I think I get better tax deductions if I donate now (for a series of complicated reasons, some of which have to do with my own individual situation). If you're donating a significant amount of your income, the tax deductions can change your total amount of money by a few percent, probably enough to cancel out many of the patient philanthropy benefits.
- Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.
- There's a philanthropic interest rate that competes with the financial interest rate. If you fund a political cause today, it has time to grow and lobby and do its good work. If you treat malaria today, the people you saved might go do other good things and improve their local economy.
- Doing good becomes more expensive as the world gets better and philanthropic institutions become better. You used to be able to save lives for very cheap with iodine supplementation, but most of those places have now gotten the iodine situation under control. So saving lives costs more over time, which is another form of interest rate increase.
- If you're trying to prevent AI risk, you should prefer to act early (when there's still a lot of time) rather than late (when the battle lines have already been drawn, or the world has already been destroyed, or something).
I have a hard time viewing "starting a political cause to further your own worldview" as altruistic, or even good. Doesn't normal self-interest already provide an oversupply of political causes? And does convincing smart people to become lobbyists really result in a net benefit to the world? I think a world where the marginal engineer/doctor/scientist instead becomes a lobbyist or politician is a worse world.
>If you treat malaria today, the people you saved might go do other good things and improve their local economy.
That's an interesting claim, but I think it's unlikely to be true. Is economic growth in, say, the Congo limited by the availability of living humans? A rational expectation for the good a hypothetical person will do is the per capita income of their country minus the average cost of living for that country, which for most malaria-type countries that surplus is going to be effectively zero. In almost all circumstances I think you get a higher ROI investing in a first-world economy.
>Doing good becomes more expensive as the world gets better
First world economies will also deliver more value over time as the world gets better. Investing in world-changing internet startups used to be easier but good luck finding the next Amazon now that the internet is mature. You should invest your money now so that the economic engine can maximize the growth of the next great idea. I'm very skeptical that the ROI of saving a third world life will grow faster than a first world economy will.
The strong from of this argument is basically just that economic growth is the most efficient way to help the world (as Tyler Cowen argues). I've never seen it adequately addressed by the EA crowd, but thanks for those links. Exponential growth is so powerful that it inevitably swaps any near-term linear intervention. If you really care about the future state of the world, then it seems insane to me to focus on anything but increasing the growth rate (modulo risks like global warming). IMO any EA analysis that doesn't end with "and this is why this intervention should be expected to boost the productivity of this country" is, at best, chasing self-satisfaction. At worst it's actively making the world worse by diverting resources from a functional culture to a non-functional one.
Boy imagine thinking about what exponential growth could do if it applies to AI. Crazy.
Lots of EAs like Cowen and EAs in general are way more econ-pilled than normal charity/NGOs are. One of the strong reasons for AI development is achieving post-scarcity utopia. GMU is practically rationality/EA-adjacent. Hanson, being the obvious case.
Also, Cowen himself is a huge proponent of supporting potential in places like Africa and India!
If you're a Cowen-style "economic growth plus human rights" kind of person then I think the only major area of disagreement with EA is re: AI risk. But Cowen and OG EA are highly aligned.
Not sure about your situation in that you run a couple of businesses, but in general isn't the most tax-effective way to donate by donating stock, since the donor gets the write off and the receiver gets the increased value without the capital gains being taxed?
(You can, of course, pursue this donation mechanism both now and later.)
> - Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.
Not gonna argue with this, but: Are your donations really visible? I mean, I don't even *know* that you donated a kidney.
If you amended it to "important for people to hear that I am donating to things" it would not have nagged at me. On the other hand, I haven't come up with a phrasing (even that one) that doesn't have a faint echo of "important that I look like I'm donating" so maybe your version is as good as it can get.
I am a bit uneasy about claiming some good is equivalent to, say, curing AIDS or ending gun violence: these are things with significant second-order effects. For example, pending better information, my prior has it that the greatest impact of gun violence isn't even the QALYs lost directly in shootings, but vastly greater number of people being afraid (possibly of even e.g. going outside at night), greater number of people injured, decreased trust in institutions and your fellow man, young people falling into a life of crime rather than becoming productive members of the society, etc, etc. Or, curing AIDS would not just save some people from death or expensive treatment, but would erase one barrier to condom-free sex that most people would profess a preference to (that's a lot of preference-satisfaction when considering the total number of people who would benefit), but here there's also an obvious third-order effect of increased number of unwanted pregnancies (which, as a matter of fact, doesn't even come close to justifying not curing AIDS, but it's there).
Now, I'm entirely on board with the idea of shutting up and calculating, trying your best to estimate the impact (or "something like that": I've been drawn to virtue ethics lately, but a wise, prudent, just and brave - taking up this fight when it goes so far away from social conventions requires bravery, too - person could not simply wave away consequentialist reasoning as though it was nothing), and to do that you have to have some measure of impact, like QALYs. Right. But I think the strictly correct way of expressing that is in abstract QALYs that by construction don't have higher order effects of note. Comparing some good thing to some other thing, naively, without considering second-order effects when those are significant or greater than the first-order effects, seems naive.
And by my reckoning that's also a part of the pushback that EA faces in general: humans notoriously suffer from scope neglect and when thinking about the impact of gun violence, they don't think of gun fatalities times n (most of the dead were gangsters who had it coming anyway), but the second and greater order impacts they themselves experience vividly, and focusing on the exact number of dead seems wrongheaded. And in this case they might be right, too. (Of course, EA calculations can and should factor in nth order effects if they do seem like they would matter, and I would hazard a guess that's what EAs often do, but when people see the aforementioned kinds of comparisons, in my opinion they would be right to conclude them as naive).
Which reminds me of another argument in favor of virtue ethics: practical reasoning is often "newcomblike" (https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm), that is to say the method of your reasoning matters, just like it does in the original paradox. "Ends don't justify the means" isn't a necessary truth: it's a culturally evolved heuristic that is right more often than not, making some of us averse to nontrivial consequentialist reasoning. "I have spotted this injustice, realized it's something I can actually do something about [effectiveness of EA implicitly comes in here], and devoted myself to the task of righting the wrong" is an easier sell than "you can save a life for n dollars".
Wow, it's gotta be tough out there in the social media wilderness. Anyway, just dropped by to express my support to the EA, hope the current shitstorm passes and the [morally] insane people of twitter will move to the next cause du jour.
I think it's worth asking why EA seems to provoke such a negative reaction -- a reaction we don't see with charitable giving in general or just generic altruism. I mean claiming to be altruistic while self-dealing is the oldest game in town.
My theory is that people see EA as conveying an implied criticism of anyone who doesn't have a coherent moral framework of theory of what's the most effective way to do good.
That's unfortunate, since while I obviously think it's better to have such a theory that doesn't mean we should treat not having one as blameworthy (anymore than we treat not giving a kidney or living like a monk and giving everything you earn away). I'd like to figure out a way to avoid this implication but I don't really have any ideas here.
I've certainly seen criticism that seems to boil down to either: a) they are weird and therefore full of themselves b) they influence Bay Area billionaires and are therefore bad.
One can do some "market research" by reading r/buttcoin comments about SBF, which take occasional pot-shots at EA. Some it is just cynicism about the idea of doing good (r/buttcoin self-selects for cynics). But you can also see the distaste that "normal people" have for the abstract philosophizing behind longtermist EA, especially when it leads to actions that are outwardly indistinguishable from pure greed.
E.g. https://www.reddit.com/r/Buttcoin/comments/16mxkji : "I'm sure it's only a matter of time before we discover why this was actually not only an ethical use of funds, but the only ethical use once you consider the lives of 10^35 future simulated versions of Sam."
The folks who dislike charity EA still confuse me. But they do crop up in the Marginal Revolution comments whenever EA is mentioned.
My sense is that ultimately that comes from the sense that they are being condescended to/tricked by people who are essentially saying: I'm soo much smarter than you and that means I get to break all the rules.
It's hard because I do think it's important to be able to say that: hey intuitions are really often wrong here. But the problem is that there is a strong tendency for people to replace intuitions with whatever people with a certain sort of status are saying which then is problematic.
> "The idea that someone should disregard their family, friends, neighbors, cultural group, religious affiliation, region, state, and/or nation in order to do the most 'good' is absurd on its face and contrary to nature."
Ah, yes, religious affiliation, state, and nation: things that totally exist in nature. Does this guy think dogs are arguing about Protestantism? Does he believe that owls have organized a republic in Cascadia?
Oh man, I hadn't seen that one before about the parents urging him to milk the cow for their benefit. I was aware he used FTX funds to buy them a holiday home, and that he was donating to his mother/his brother and their good causes, but blatant "are you sending us 7 million or 10 million in cash, please clarify" - his parents were way more involved in the entire mess than I suspected.
"Despite knowing or blatantly ignoring that the FTX Group was insolvent or on the brink of insolvency, Bankman and Fried discussed with Bankman-Fried the transfer to them of a $10 million cash gift and a $16.4 million luxury property in The Bahamas. Bankman and Fried also pushed for tens of millions of dollars in political and charitable contributions, including to Stanford University, which were seemingly designed to boost Bankman’s and Fried’s professional and social status at the expense of the FTX Group, and by extension, its customers and other creditors. Additionally, Fried, concerned with the optics of her son and his companies donating money to the organization she co-founded and other causes she supported, encouraged Bankman-Fried and others within the FTX Group to avoid (if not violate) federal campaign finance disclosure rules by engaging in straw donations or otherwise concealing the FTX Group as the source of the contributions."
Possible big happy family reunion in jail? ☹
Also what the heck with 7 million in cash, were they walking around with suitcases full of dollar bills or what? Every time I read something about FTX that makes me go "Well *that* was no way to run a business, how could they do that?", something new pops up to make me go "Wow, they dug the hole even *deeper*".
A post like this, and comments, are bizarre to someone whose world was the 20th century, not the 21st. All who come at the topic seem unaware (must be pretending?) there was a big and novel movement once upon a time, that begat several large non-profits and scores of smaller grassroots ones - and none of the issues and concerns of that once-influential cause even clear the narrow bar of the EAs.
That's an interesting comment. Could you elaborate on which movement(s) you have in mind? There were so _many_ movements in the 20th century, both benign and lethal, that I would like to know the specific one(s) you mean.
It was especially attractive to people who might, perhaps, be viewed as analogous to the sort of folks currently drawn to EA. But the value systems being so profoundly incompatible, I suppose they must not be the *same* people after all.
Come to any conservation-related meeting or workday. Nothing but Boomers, and even older than Boomers. It will die with them, although they didn't originate it. Of course, it's not too late to talk to Boomers about this subject - but almost too late - and that would require a deal of humility, and it is more fun to hate on Boomers en masse.
Yeah, this is where I end up on it as well. To the extent that it helps people give more effectively, it's been a great thing.
It does go a bit beyond merely annoying though. I think something that Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much like roles as priests etc. have done in the past. The average person should be wary of people smarter than them telling what to do with their money.
The only durable protection from scammers is a measurable outcome. That's part of why I think EA is only effective when it focuses on things that can be measured. The meat of the improvement in EA is moving money from frivolous luxury to measurable charity, not moving measurable charity to low probability moonshots.
Are you saying that the specific impact calculations that orgs like GiveWell do are incorrect, or are you just claiming epistemic learned helplessness https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/.?
I mean, GiveDirectly is a top charity on Givewell, are you claiming that showering poor people in money to the tune of .92 per dollar still produces a lot of transaction cost?
This, I think, is an interesting take.
Is your thought here that transaction costs are implicit and thus not properly priced in to the work done? I think at the development economics level that is not terribly true. The transaction costs of poverty relief in urban USA vs the poverty relief in San Salvador are not terrible different once the infrastructure in question is set up.
"Compared to what" is my question.
Everything has transaction costs. Other opportunities have similar transaction costs. I would be surprised if they didn't. However, I agree I would like to see this argued explicitly somewhere.
Isn't this just the old paradox where you go:
- Instead of spending an hour studying, you should spend a few minutes figuring out how best to study, then spend the rest of the time studying
- But how long should you spend figuring out the best way to study? Maybe you should start by spending some time figuring out the best balance between figuring out the right way to study, and studying
- But how long should you spend on THAT? Maybe you should start by spending some time figuring out the best amount of time to spend figuring out the best amount of time to spend figuring out . . .
- ...and so on until you've wasted the whole hour in philosophical loops, and therefore you've proven it's impossible to ever study, and even trying is a net negative.
In practice people just do a normal amount of cost-benefit analysis which costs a very small portion of the total amount of money donated.
Centralizing and standardizing research into which charities do exactly what (so the results can then be easily checked against any given definition of "effectiveness") reduces transaction costs by eliminating a lot of what would otherwise be needlessly duplicated effort.
Good list.
A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)
I'm probably below the average intelligence of people who read scott but that's essentially my position. AI doomerism is kinda cringe and I don't see evidence of anything even starting to be like their predictions. EA is cool because instead of donating to some charity that spends most their money on fundraising or whatever we can directly save/improve lives.
Which "anything even starting to be like their predictions" are you talking about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually takes weeks/months after release before they stop saying things that embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live example of a rogue AI, that's choosing to not prevent the first few rogue AIs (it will take some time to notice the first rogue AI and react, during which time more may be made). In turn, that's some chance of human extinction, because it is not obvious that those first few won't be able to kill us all. It is notably easier to kill all humans (as a rogue AI would probably want) than it is to kill most humans but spare some (as genocidal humans generally want); the classic example is putting together a synthetic alga that isn't digestible, doesn't need phosphate and has a more-efficient carbon-fixing enzyme than RuBisCO, which would promptly bloom over all the oceans, pull down all the world's CO2 into useless goo on the seafloor, and cause total crop failure alongside a cold snap, and which takes all of one laboratory and some computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and I'd rather not take it.
> Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
I concur on beating Go, but captchas were never thought to be unbeatable by AI - it's more that it makes robo-filing forms rather expensive. Writing text also never seemed that doubtful and driving cars, at least as far as they can at the moment, never seemed unlikely.
This would have been very convincing if anyone like Patrick had given timelines on the earliest point at which they expected the advance to have happened, at which point we can examine if their intuitions in this are calibrated. Because the fact is if you asked most people, they definitely would not have expected art or writing to fall before programming. Basically only gwern is sinless.
On the other hand, EY has consistently refused to make measurable predictions about anything, so he can't claim credit in that respect either. To the extent you can infer his expectations from earlier writing, he seems to have been just as surprised as anyone, despite notionally being an expert on AI.
1. No one mentioned Eliezer. If Eliezer is wrong about timelines, that doesn't mean we suddenly exist in a slow takeoff world. And it's basically a bad faith argument to imply that Eliezer getting surprised *in the direction of capabilities getting better than expected* is apparently evidence of non doom.
2. Patrick is explicitly saying that he sees no evidence. Insofar as we can use Patrick's incredulity as evidence, it would be worth far more if it was calibrated and informed rather than uncalibrated. AI risk arguments depend on more things than just incredulity, so the """lack of predictions""" matters relatively less. My experience has been that people who use their incredulity in this manner in fact do worse at predicting capabilities, hence why getting disproven would be encouraging.
3. I personally think that by default we cannot predict what the rate of change is, but I can lie lazily on my hammock and predict "there will be increases in capability barring extreme calamity" and essentially get completely free prediction points. If you do believe that we're close to a slowdown, or we're past the inflection point of a sigmoid and that my priors about progress are wrong, you can feel free to bet against my entirely ignorant opinion. I offer up to 100 dollars at ratios you feel are representative of slowdown, conditions and operationalizations tbd.
4. If you cared about predictive accuracy, gwern did the best and he definitely believes in AI risk.
"write text that can't be easily distinguished from human"? Really?
*None* of the examples I've seen measure up to this, unless you're comparing it to a young human that doesn't know the topic but has some measure of b*sh*tting capability - or rather, thinks he does.
Maybe I need to see more examples.
Yeah there are a bunch of studies now where they give people AI text and human text and ask them to rate them in various ways and to say whether they think it is a human or AI, and generally people rate the AI text as more human.
The examples I've seen are pretty obviously talking around the subject, when they don't devolve into nonsense. They do not show knowledge of the subject matter.
Perhaps that's seen as more "human".
I think that if they are able to mask as human, this is still useful, but not for the ways that EA (mostly) seems to think are dangerous. We won't get advances in science, or better technology. We might get more people falling for scammers - although that depends on the aim of the scammer.
Scammers that are looking for money don't want to be too convincing because they are filtering for gullibility. Scammers that are looking for access on the other hand, do often have to be convincing in impersonating someone who should have the ability to get them to do something.
But moore’s law is dead. We’re reaching physical limits, and under these limits, it already costs millions to train and execute a model that, while impressive, is still multiple orders of magnitude away from genuinely dangerous superintelligence. Any further progress will require infeasible amounts of resources.
Moore's Law is only dead by *some* measures, as has been true for 15-20 years. The limiting factors for big ML are mostly inter-chip communications, and those are still growing aggressively.
Also, algorithms are getting more efficient.
This is one of the reasons I'm not a doomer, which is that most doomers' mechanism of action for human extinction is biological in nature, and most doomers are biologically illiterate.
RuBisCO is known to be pretty awful as carboxylases go. PNA + protein-based ribosomes avoids the phosphate problem.
I'm not saying it's easy to design Life 2.0; it's not. I'm saying that with enough computational power it's possible; there clearly are inefficiencies in the way natural life does things because evolution likes local maxima.
You're correct on the theory; my point was that some people assume that computation is the bottleneck rather than actually getting things to work in a lab within a reasonable timeframe. Not only is wet lab challenging, I also have doubts as to whether biological systems are computable at all.
I think the reason that some people (e.g. me) assume that computation* is the bottleneck is that IIRC someone actually did assemble a bacterium (of a naturally-existing species) from artificially-synthesised biomolecules in a lab. The only missing component to assemble Life 2.0 would then seem to be the blueprint.
If I'm wrong about that experiment having been done, please tell me, because yeah, that's a load-bearing datum.
*Not necessarily meaning "raw flops", here, but rather problem-solving ability
Much like I hope for more people to donate to charity based on the good it does rather than based on the publicity it generates, I hope (but do not expect) that people decide to judge existential risks based on how serious they are rather than based on how cringe they are.
Yeah this is where I am. A large part of it for me is that after AI got cool, AI doomerism started attracting lots of naked status seekers and I can't stand a lot of it. When it was Gwern posting about slowing down Moore's law, I was interested, but now it's all about getting a sweet fellowship.
Is your issue with the various alignment programs people keep coming up with? Beyond that, it seems like the main hope is still to slow down Moore's law.
My issue is that the movement is filled with naked status seekers.
FWIW, I never agreed with the AI doomers, but at least older EAs like Gwern I believe to be arguing in good faith.
Interesting, I did not get this impression but also I do worry about AI risk - maybe that causes me to focus on the reasonable voices and filter out the non-sense. I'd be genuinely curious for an example of what you mean, although I understand if you wouldn't want to single out anyone in particular.
I don’t mind naked status seeking as long as people do it by a means that is effective at achieving good ends for the world. One can debate whether AI safety is actually effective, but if it is, EAs should probably be fine with it (just like the naked cash seekers who are earning to give).
I agree. But there seem to be a lot of people in EA with some serious scrupulosity going on. Like that person who said they would like to donate a kidney, but could not bear the idea that it might go to a meat-eater, and so the donor would be responsible for all the animal suffering caused by the recipient. It's as though EA is, for some people, a refuge from ever feeling they've done wrong -- as though that's possible!
What’s wrong with naked status seekers (besides their tendency to sometimes be counterproductive if advancing the cause works against their personal interests)?
It's bad when the status seeking becomes more important than the larger purpose. And at the point when it gets called "naked status seeking", it's already over that line.
They will only do something correct if it advances their status and/or cash? To the point of not researching or approving research into something if it looks like it won't advance them?
They have to be bribed to do the right thing?
How do you identify naked status seekers?
Hey now I am usually clothed when I seek status
It usually works better, but I guess that depends on how much status-seeking is done at these EA sex parties I keep hearing about...
Sounds like an isolated demand for rigor
Definitely degree of confidence plays into it a lot. Speculative claims where it's unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely different ball game from "I notice that we claim to care about saving lives, and there's a proverbial $20 on the ground if we make our giving more efficient."
I think it also helps that those shorter-term impacts can be more visible. A malaria net is a physical thing that has a clear impact. There's a degree of intuitiveness there that people can really value
Most AI-risk–focused EAs think the likelihood of the bad outcome is greater than 10%, not less than 1%, fwiw.
And that's the reason many outsiders think they lack good judgment.
And yet, what exactly is the argument that the risk is actually low?
I understand and appreciate the stance that the doomers are the ones making the extraordinary claim, at least based on the entirety of human history to date. But when I hear people pooh-poohing the existential risk of AI, they are almost always pointing to what they see as flaws in some doomer's argument -- and usually missing the point that the narrative they are criticizing is usually just a plausible example of how it might go wrong, intended to clarify and support the actual argument, rather than the entire argument.
Suppose, for the sake of argument, that we switch it around and say that the null hypothesis is that AI *does* pose an existential risk. What is the argument that it does not? Such an argument, if sound, would be a good start toward an alignment strategy; contrariwise, if no such argument can be made, does it not suggest that at least the *risk* is nonzero?
I find Robin Hanson's arguments here very compelling: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
It's weird that you bring up Robin Hanson, considering that he expects humanity to be eventually destroyed and replaced with something else, and sees that as a good thing. I personally wouldn't use that as an argument against AI doomerism, since people generally don't want humanity to go extinct.
What specific part of Robin Hanson's argument on how growth curves are a known thing do you find convincing?
That's the central intuition underpinning his anti foom worldview, and I just don't understand how someone can generalize that to something which doesn't automatically have all the foibles of humans. Does you think that a population of people who have to sleep, eat and play would be fundamentally identical to an intelligence who is differently constrained?
I'm not seeing any strong arguments there, in that he's not making arguments like, "here is why that can't happened", but instead is making arguments in the form, "if AI is like <some class of thing that's been around a while>, then we shouldn't expect it to rapidly self-improve/kill everything because that other thing didn't".
E.g. if superintelligence is like a corporation, it won't rapidly self-improve.
Okay, sure, but there are all sorts of reasons to worry superintelligent AGI won't be like corporations. And this argument technique can work against any not-fully-understood future existential threat. Super-virus, climate change, whatever. By the anthropic principle, if we're around to argue about this stuff, then nothing in our history has wiped us out. If we compare a new threat to threats we've encountered before and argue that based on history, the new threat probably isn't more dangerous than the past ones, then 1) you'll probably be right *most* of the time and 2) you'll dismiss the threat that finally gets you.
I’ve been a big fan of Robin Hanson since there was a Web; like Hanania, I have a strong prior to Trust Robin Hanson. And I don’t have any real argument with anything he says there. I just don’t find it reassuring. My gut feeling is that in the long run it will end very very badly for us to share the world with a race that is even ten times smarter than us, which is why I posed the question as “suppose the null hypothesis is that this will happen unless we figure out how to avoid it”.
Hanson does not do that, as far as I can tell. He quite reasonably looks at the sum of human history and finds that he is just not convinced by doomers’ arguments, and all his analysis concerns strategies and tradeoffs in the space that remains. If I accept the postulate that this doom can’t happen, that recursive intelligence amplification is really as nonlumpy as Hanson suspects, then I have no argument with what he says.
But he has not convinced me that what we are discussing is just one more incremental improvement in productivity, rather than an unprecedented change in humans’ place in the world.
I admit that I don’t have any clear idea whether that change is imminent or not. I don’t really find plausible the various claims I have read that we’re talking about five or ten years. And I don’t want to stop AI work: I suspect AGI is a prerequisite for my revival from cryosuspension. But that just makes it all the more pressing to me that it be done right.
When ignoring the substance of the argument, I find their form to be something like a Pascal's wager, bait and switch. If there even is a small percent you will burn in hell for eternity, why wouldn't you become Catholic. Such an argument fails for a variety of reasons, one being it doesn't account for alternative religions and their probabilities with alternatives outcomes.
So I find I should probably update my reasoning toward there being some probability of x-risk here, but the probability space is pretty large.
One of the good arguments for doomerism is that the intelligences will be in some real sense alien. That there is a wider distribution of possible ways to think than human intelligence, including how we consider motivation, and this could lead to paper-clip maximizers, or similar AI-Cthulhus of unrecognizable intellect. I fully agree that these might very likely be able to easily wipe us out. But there are many degrees of capability and motivation and I don't see the reason to assume that either through a side-effect of ulterior motivation or direct malice that that lead to the certainty of extinction expressed by someone like Eliezer. There are many possibilities, many are fraught. We should invest is safety and alignment. But that that doesn't mean we should consider x-risk a certainty and certainly not at double-digit likelihood's within short timeframes.
Comparative advantage and gains from trade says the more different from us they are, the more potential profit they'll see in keeping us around.
Yes, the space of possibilities (I think you meant this?) is pretty large. But x-risk is most of it. Most of possible outcomes of optimisation processes over Earth and Solar System have no flourishing humanity in them.
It is perhaps a lot like other forms of investment. You can't just ask "What's the optimal way to invest money to make more money?" because it depends on your risk tolerance. A savings account will give you 5%. Investing in a random seed-stage startup might make you super-rich but usually leaves you with nothing. If you invest in doing good then you need to similarly figure out your risk profile.
The good thing about high-risk financial investments is they give you a lot of satisfaction of sitting around dreaming about how you're going to be rich. But eventually that ends when the startup goes broke and you lose your money.
But with high-risk long-term altruism, the satisfaction never has to end! You can spend the rest of your life dreaming about how your donations are actually going to save the world and you'll never be proven wrong. This might, perhaps, cause a bias towards glamourous high-risk long-term projects at the expense of dull low-risk short-term projects.
Much like other forms of investment, if someone shows up and tells you they have a magic box that gives you 5% a month, you should be highly skeptical. Except replace %/month with QALYs/$.
I see your point, but simple self-interest is sufficient to pick up the proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may have a little bit of an analogous filter, but I doubt that it is remotely as strong.
The advantage of making these types of predictions is that even if someone says that the unflattering thing is not even close to what drives them, you can go on thinking "they're just saying that because my complete and perfect fantasy makes them jealous of my immaculate good looks".
Yeah I kinda get off the train at the longtermism / existential risk part of EA. I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
I like the malaria bed nets stuff because its easy to confirm that my money is being spent doing good. That's almost exactly the opposite when it comes to AI-risk. For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing? Has discovering RLHF which in turn powered ChatGPT and launched the AI revolution made AI-risk more or less likely? It almost feels like one of those Greek tragedies where the hero struggles so hard to escape their fate they end up fulfilling the prophecy.
I think he was pointing out that for EAs have been a big part of the current AI wave. So whether you are a doomer or an accelerationist you should agree that EAs impact has been large even if you disagree with the sign
Problem is, the OpenAI scuffle shows that right now, as AI is here or nearly here, the ones making the decisions are the ones holding the purse strings, and not the ones with the beautiful theories. Money trumps principle and we just saw that blowing up in real time in glorious Technicolor and Surround-sound.
So whether you're a doomer or an accelerationist, the EAs impact is "yeah you can re-arrange the deckchairs, we're the ones running the engine room" as things are going ahead *now*.
Not that I have anything against EAs, but, as someone who want to _see_ AGI, who doesn't want to see the field stopped in its tracks by impossible regulations, as happened to civilian nuclear power in the usa, I hope that you are right!
I mean, if I really believed we'd get conscious, agentic AI that could have its own goals and be deceitful to humans and plot deep-laid plans to take over and wipe out humanity, sure I'd be very, very concerned and unhappy about this result.
I don't believe that, nor that we'll have Fairy Godmother AI. I do believe we'll have AI, an increasing adoption of it in everyday life, and it'll be one more hurdle to deal with. Effects on employment and jobs may be catastrophic (or not). Sure, the buggy whip manufacturers could shift to making wing mirrors for the horseless carriages when that new tech happened, but what do you switch to when the new tech can do anything you can do, and better?
I think the rich will get richer, as per usual, out of AI - that's why Microsoft etc. are so eager to pave the way for the likes of Sam Altman to be in charge of such 'safety alignment' because he won't get in the way of turning on the money-fountain with foolish concerns about going slow or moratoria.
AGI may be coming, but it's not going to be as bad or as wonderful as everyone dreads/hopes.
That's mostly my take too. But to be fair to the doomer crowd, even if we don't buy the discourse on existential risks, what this concern is prompting them to do is lots of research on AI alignment, which in practice means trying to figure out how AI works inside and how it can be controlled and made fit for human purposes. Which sounds rather useful even if AI ends up being on the boring side.
> but what do you switch to when the new tech can do anything you can do, and better?
Nothing -- you retire to your robot ranch and get anything you want for free. Sadly, I think the post-scarcity AGI future is still very far off (as in, astronomically so), and likely impossible...
I think that the impact of AGI is going to be large (even if superintelligence either never happens or the effect of additional smarts just saturates, diminishing returns and all that), provided that it can _really_ do what a median person can do. I just want to have a nice quiet chat with the 21st century version of a HAL-9000 while I still can.
> if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
Surely these are different skills? Someone who could predict and warn against the dangers of nuclear weapon proliferation and the balance of terror, might still have been blindsided by their spouse cheating on them.
Suppose Trump gets elected next year. Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
Also, nobody else seems to have noticed that Adam D'Angelo is still on the board of OpenAI, but Sam Altman and Greg Brockman aren't.
I hardly think that's a fair comparison. Climatologists are not in a position to control the outcome of a presidential election, but effective altruists controlled 4 out of 6 seats on the board of the company.
Of course, if you think that they played their cards well (given that D'Angelo is still on the board) then I guess there's nothing to argue about. I—and I think most other people—believe they performed exceptionally poorly.
The people in the driver's seat on global-warming activism are more often than not fascist psycopaths like Greta Thunberg, whom actively fight against the very things that would best fight against global warming, like nuclear energy and natural gas pipelines, so they can instead promote things that would make it worse, like socialism and degrowth.
We will never be able to rely on these people to do anything but cause problems. They should be shunned like lepers.
I think that if leaders are elected that oppose climate mitigation, that is indeed a knock on the climate-action political movement. They have clearly failed in their goals.
Allowing climate change to become a partisan issue was a disaster for the climate movement.
I think it's a (slight) update against the competence of the political operatives, but not against the claim that global warming exists.
I agree completely. Nonetheless, the claim that spending money on AI safety is a good investment rests on two premises: That AI risk is real, and that EA can effectively mitigate that risk.
If I were pouring money into activists groups advocating for climate action, it would be cold comfort to me that climate change is real when they failed.
The EA movement is like the Sunrise Movement/Climate Left. You can have good motivations and the correct ambitions but if you have incompetent leadership your organization can be a net negative for your cause.
> Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election
It is a fair criticism for those that believe the x-risk, or at least extreme downsides of climate change, to not figure out ways to better accomplish their goals rather than just political agitation. Building coalitions with potentially non-progressive causes, being more accepting of partial, incremental solutions. Playing "normie" politics along the lines of matt yglesias, and maybe holding your nose to some negotiated deals where the right gets their way probably mitigates and prevents situations where the climate people won't even have a seat at the table. For example, is making more progress on preventing climate extinction worth stalling out another decade on trans-rights? I don't think that is exactly the tradeoff on the table, but there is a stark unwillingness to confront such things by a lot of people who publicly push for climate-maximalism.
"Playing normie politics" IS what you do when you believe something is an existential risk.
IMHO the test, if you seriously believe all these claims of existential threat, is your willingness to work with your ideological enemies. A real existential threat was, eg, Nazi Germany, and both the West and USSR were willing to work together on that.
When the only move you're willing to make regarding climate is to offer a "Green New Deal" it's clear you are deeply unserious, regardless of how often you say "existential". I don't recall the part of WW2 where FDR refused to send Russia equipment until they held democratic elections...
If you're not willing to compromise on some other issue then, BY FSCKING DEFINITION, you don't believe really your supposed per cause is existential! You're just playing signaling games (and playing them badly, believe me, no-one is fooled). cf Greta Thunberg suddenly becoming an expert on Palestine:
https://www.spiegel.de/international/world/a-potential-rift-in-the-climate-movement-what-s-next-for-greta-thunberg-a-2491673f-2d42-4e2c-bbd7-bab53432b687
FDR giving the USSR essentially unlimited resources for their war machine was a geostrategic disaster that led directly to the murder and enslavement of hundreds of millions under tyrranies every bit as gruesome as that of Hitler's. Including the PRC, which menaces the World to this day.
The issue isn't that compromise on existential threats are inheriently bad. The issue is that, many times, compromises either make things worse than they would've been otherwise, or create new problems that are as bad or worse as what they subsumed.
I can think of a few groups, for example world Jewry, that might disagree with this characterization...
We have no idea how things might have played out.
I can tell you that the Hard Left, in the US, has an unbroken record of snatching defeat from the jaws of victory, largely because of their unwillingness to compromise, and I fully expect this trend to continue unabated.
Effect on climate? I expect we will muddle through, but in a way that draws almost nothing of value from the Hard Left.
The reason we gave the USSR unlimited resources was because they were directly absorbing something like 2/3 of the Nazi's bandwidth and military power in a terribly colossal years-long meatgrinder that killed something like 13% of the entire USSR population.
Both the UK and USA are extremely blessed that the USSR was willing to send wave after wave of literally tens of millions of their own people into fighting the Nazi's and absorbing so much of their might, and it was arguably the deal of the century to trade mere manufactured objects for the breathing room and Nazi distraction / might-dissipation that this represented.
The alternative would have been NOT giving the USSR unlimited resources, the Nazi's quickly steamroll the USSR, and then turn 100% of their attention and military might towards the UK, which they would almost certainly win. Or even better, not getting enough materiel to conduct a war and realizing he would lose, Stalin makes a deal with Germany and they BOTH focus on fighting the UK and USA - how long do you think the UK would have survived that?
Would the USA have been able to successfully fight a dual-front war with basically all of Europe aligned under Nazi power PLUS Japan with China's resources? We don't know, but it's probably a good thing in terms of overall deaths and destruction on all sides that we didn't need to find out.
Sure, communism sucked for lots of people. But a Nazi-dominated Europe / world would probably have sucked more.
Ah come on, Scott: that the board got the boot and was revamped to the better liking of Sam who was brought back in a Caesarian triumph isn't very convincing about "so this guy is still on the board, that totes means the good guys are in control and keeping a cautious hand on the tiller of no rushing out unsafe AI".
https://www.reuters.com/technology/openais-new-look-board-altman-returns-2023-11-22/
Convince me that a former Treasury Secretary is on the ball about the most latest theoretical results in AI, go ahead. Maybe you can send him the post about AI Monosemanticity, which I genuinely think would be the most helpful thing to do? At least then he'd have an idea about "so what are the eggheads up to, huh?"
While I agree with the general thrust, I think the short-term vs. long-term is neglected. For instance, you yourself recommended switching from chicken to beef to help animals, but this neglects the fact that over time, beef is less healthy than chicken, thus harming humans in a not-quickly-visible way. I hope this wasn't explicitly included and allowed in your computation (you did the switch yourself, according to your post), but this just illuminates the problem: EA want to be clear beneficiaries, but clear often means "short-term" (for people who think AI doomerism is an exception, remember that for historical reasons, people in EA have, on median, timelines that are extremely short compared to most people's).
Damn, was supposed to be top-level. Not reposting.
> I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
They got outplayed by Sam Altman, the consummate Silicon Valley insider. According to that anonymous rumour-collecting site, they're hardly the only ones, though it suggests they wouldn't have had much luck defending us against an actual superintelligence.
> For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing?
No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek tragedy was entirely deliberate. But as Scott said, it is a thing that someone has said.
I actually pretty completely endorse the longtermism and existential risk stuff - but disagree about the claims about the best ways to achieve them.
Ordinary global health and poverty initiatives seem to me to be much more hugely influential in the long term than the short term thanks to the magic of exponential growth. An asteroid or gamma ray or what ever program that has a .01% chance of saving 10^15 lives a thousand years from now looks good compared to saving a few thousand lives this year at first - but when you think about how much good those thousand people will do for their next 40 generations of descendants, as well as all the people those 40 generations of descendants will help, either through normal market processes or through effective altruist processes of their own, this starts to look really good at the thousand year mark.
AI safety is one of the few existential risk causes that doesn’t depend on long term thinking, and thus is likely to be a very valuable one. But only if you have any good reason to think that your efforts will improve things rather than make them worse.
I remember seeing this for the "climate apocalypse" thing many years ago: some conservationist (specifically about birds, I think) was annoyed that the movement had become entirely about global warming.
EDIT: it was https://grist.org/climate-energy/everybody-needs-a-climate-thing/
Global warming is simply a livelier cause for the Watermelons to get behind. Not because they genuinely care about global warming, as they oppose the solutions that would actually help alleviate the crisis, but because they're psychopathic revolutionary socialists who see it as the best means available today of accomplishing their actual goal: the abolition of capitalism and the institution of socialism.
Yup, pretty much this!
EA as a movement to better use philanthropic resources to do real good is awesome.
AI doomerism is a cult. It's a small group of people who have accrued incredible influence in a short period of time on the basis of what can only be described as speculation. The evidence base is extremely weak and it relies far too much on "belief". There are conflicts of interest all over the place that the movement is making no effort to resolve.
Sadly, the latter will likely sink the former.
At this point a huge number of experts in the field consider AI risk to be a real thing. Even if you ignore the “AGI could dominate humanity” part, there’s a large amount of risk from humans purposely (mis)using AI as it grows in capability.
Predictions about the future are hard and so neither side of the debate can do anything more than informed speculation about where things will go. You can find the opposing argument persuading, but dismissing AI risk as mere speculation without evidence is not even wrong.
The conflicts of interest tend to be in the direction of ignoring AI risk by those who stand to profit from AI progress, so you have this exactly backwards.
You can't ignore the whole "AGI could dominate humanity" part, because that is core to the arguments that this is an urgent existential threat that needs immediate and extraordinary action. Otherwise AI is just a new disruptive technology that we can deal with like any other new, disruptive technology. We could just let it develop and write the rules as the risks and dangers become apparent. The only way you justify the need for global action right now is based on the belief that everybody is going to die in a few years time. The evidence for existential AI risk is astonishingly weak given the amount of traction it has with policymakers. It's closer to Pascal's Wager rewritten for the 21st century than anything based on data.
On the conflict of interest, the owners of some of the largest and best funded AI companies on the planet are attempting to capture the regulatory environment before the technology even exists. These are people who are already making huge amounts of money from machine learning and AI. They are taking it upon themselves to write the rules for who is allowed to do AI research and what they are allowed to do. You don't see a conflict of interest in this?
Let's distinguish "AGI" from "ASI", the latter being a superintelligence equal to something like a demigod.
Even AGI strictly kept to ~human level in terms of reasoning will be superhuman in the ways that computers are already superhuman: e.g., data processing at scale, perfect memory, replication, etc., etc.
Even "just" that scenario of countless AGI agents is likely dangerous in a way that no other technology has ever been before if you think about it for 30 seconds. The OG AI risk people are/were futurists, technophiles, transhumanists, and many have a strong libertarian bent. "This one is different' is something they do not wish to be true.
Your "conflict of interest" reasoning remains backwards. Regulatory capture is indeed a thing that matters in many arenas, but there are already quite a few contenders in the AI space from "big tech." Meaningfully reducing competition by squishing the future little guys is already mostly irrelevant in the same way that trying to prevent via regulation the creation of a new major social network from scratch would be pointless. "In the short run AI regulation may slow down our profits but in the long run it will possibly lock out hypothetical small fish contenders" is almost certainly what no one is thinking.
"No one on this successful tech company's board of directors is making decisions based on what will eventually get them the most monopoly profits" sounds like an extraordinary claim to me.
This is the board of directors that explicitly tried to burn the company down, essentially for being too successful. They failed, but can you ask for a more credible signal of seriousness?
1. Holy shit is than an ironic thing to say after the OpenAI board meltdown. Also check out Anthropic’s board and equity structure. Also profit-driven places like Meta are seemingly taking a very different approach. Why?
2. You’re doing the thing where decreasing hypothetical future competition from new, small entrants to a field equals monopoly. Even if there was a conspiracy by eg Anthropic to use regulatory barriers against new entrants, that would not impact the already highly competitive field between the several major labs. (And there are already huge barriers to entry for newcomers in terms of both expertise and compute. Even a potential mega contender like Apple is apparently struggling and a place like Microsoft found a partner.)
Expert at coming up with with clever neural net architectures == expert at AI existential risk?
No?
It's just at this point a significant number of experts in AI have come around to believing AI risk is a real concern. So have a lot of prominent people in other fields, like national security. So have a lot of normies who simply intuit that developing super smart synthetic intelligence might go bad for us mere meat machines.
You can no longer just hand wave AI risk away as a concern of strange nerds worried about fictional dangers from reading too much sci-fi. Right or wrong, it's gone mainstream!
all predictions about the future are speculation. The question is whether it's correct or incorrect speculation.
Who are some people who have accrued incredible influence and what is the period of time in which they gained this influence?
From my standpoint it seems like most of the people with increased influence are either a) established ML researchers who recently began speaking out in favor of deceleration and b) people who have been very consistent in their beliefs about AI risk for 12+ years, who are suddenly getting wider attention in the wake of LLM releases.
Acceptance of catastrophic risk from artificial superintelligence is the dominant position among the experts (including independent academics), the tech CEOs, the major governments, and the general public. Calling it a "small group of people who have accrued incredible influence" or "a cult" is silly. It's like complaining about organizations fighting Covid-19 by shouting "conspiracy!" and suggesting that the idea is being pushed by a select group.
The denialists/skeptics are an incredibly fractured group who don't agree with each other at all about how the risk isn't there; the "extinction from AI is actually good", "superintelligence is impossible", "omnipotent superintelligence will inevitably be absolutely moral", and "the danger is real but I can solve it" factions and subfactions do not share ideologies, they're just tiny groups allying out of convenience. I don't see how one could reasonably suggest that one or more of those is the "normal" group, to contrast with the "cult".
I think there’s an important contrast between people who think that AI is a significant catastrophic risk, and people who think there is a good project available for reducing that risk without running a risk of making it much worse.
For those of you that shared the "I like global health but not longtermism/AI Safety", how involved were you in EA before longtermism / AI Safety became a big part of it?
I read some EA stuff, donated to AMF, and went to rationalist EA-adjacent events. But never drank the kool aid.
I think it is a good question to raise with the EA-adjacent. Before AI Doomerism and the tar-and-feathering of EA, EA-like ideas were starting to get more mainstream traction and adoption. Articles supportive of say, givewell.org, in local papers, not mentioning EA by name, but discussing some of the basic philosophical ideas were starting to percolate out more into the common culture. Right or Wrong, there has been a backlash that is disrupting some of that influence even those _in_ the EA movement are still mostly doing the same good stuff Scott outlined.
Minor point: I'd prefer to treat longtermism and AI Safety quite separately. (FWIW, I am not in EA myself.)
Personally, I want to _see_ AGI, so my _personal_ preference is that AI Safety measures at least don't cripple AI development like regulatory burdens made civilian nuclear power grind to a 50 year halt in the USA. That said, the time scale for plausible risks from AGI (at least the economic displacement ones) is probably less than 10 years and may be as short as 1 or 2. Discussing well-what-if-every-job-that-can-be-done-online-gets-automated does not require a thousand-year crystal ball.
Longtermism, on the other hand, seems like it hinges on the ability to predict consequences of actions on *VASTLY* longer time scales than anyone has ever managed. I consider it wholly unreasonable.
None of this is to disparage Givewell or similar institutions, which seem perfectly reasonable to me.
I actually think that longtermism advocates for ordinary health and development charity - that sort of work grows exponentially in impact over the long term and thus comes out looking even better than things like climate or animal welfare, whose impacts grow closer to linearly with time.
The problem with longtermism is that you can use it to justify pretty much anything, regardless of if you're even right, as long as your ends are sufficiently far enough away from the now to where you never actually have to be held accountable for getting things wrong.
It's not a very good philosophy. People should be saved from malaria for its own sake. Not because of "longtermism".
Given a choice between several acts which seem worth doing for their own sake, rate at which secondary benefits potentially compound over the long term could be a useful tiebreaker.
"that sort of work grows exponentially in impact over the long term" Some of the longtermist arguments talk about things like effects over a time scale where they expect us to colonize the galaxy. The time scale over which economies have been growing more-or-less steadily is more like 200-300 years. I think that it is sane to make a default assumption of exponential impact, as you describe, for that reason over that time scale (though many things, AI amongst them, could invalidate that). _Beyond_ 200-300 years, I don't think smoothish-growth-as-usual is a reasonable expectation. I think all we can say longer term than that is _don't_ _know_.
Longtermism / AI safety were there from the beginning, so the question embeds a false premise.
I heard about EA and got into the global health aspects of it from a talk on AI safety I went to given by... EY. I went to the talk on AI safety because I'd read HPMOR and just wanted to meet the author.
I wasn't at all convinced about AI safety, but I became interested in the global health aspects of EA. This year my donations went to PSI. I'm still an AI sceptic.
I gave money to GiveDirectly, which is EA-adjacent, and some years would get GiveWell endorsements. It never gets to the top of the recommendation list, but has the big advantage of having a low variance (especially the original formulation, where everyone living in a poor village got a one-time unconditional payout). "I can see you're not wasting the funds" is a good property if you have generally low trust in people running charitable orgs (the recent turn into generating research papers to push UBI in the US is unfortunate).
AI-doom-people have a decent shot at causing more deaths than all other human causes put together, if they follow the EY "nuke countries with datacenters" approach. Of course they'll justify it by appealing to the risk of total human extinction, but it shouldn't be surprising that people who estimate a substantially lower probability of the latter see the whole endeavor as probably net-negative. You'd be better off burning the money.
My only prior exposure was Doing Good Better, before seeing a *lot* of longtermism/x-risk messaging at EA Cambridge in 2018 (80k hours workshop, AI safety reading group, workshops at EA Cambridge retreat).
I considered AI safety (I'm a CS researcher already), enough to attend the reading group. But it seemed like pure math-level mental gymnastics to argue that the papers had any application to aligning future AGIs, and I dislike ML/AI research anyway.
Well there's also the part where people may have been involved in charity/NGO stuff before the cool kids relabeled it as EA.
Not to blame anyone for the relabeling though - if it got lots of fresh young people involved in humanitarian activity, and some renewed interest into its actual efficacy, they're more than entitled to take pride and give it a new name.
Guilty as charged; I posted my own top-level comment voicing exactly this position.
Freddie de Boer was talking about something like this today, about retiring the EA label. The effective EA orgs will still be there even if there is no EA. But I'm not really involved in the community, even if I took the Giving What We Can pledge, so it doesn't really matter much to me if AI X-risk is currently sucking up all the air in the movement.
I agree with the first part, but the problems with EA stem beyond AI doomerism. People in the movement seriously consider absurd conclusions like it being morally desirable to kill all wild animals, it has perverse moral failings as an institution, its language has evolved to become similar to postmodern nonsense, it has a strong left wing bias, and it has been plagued by scandals.
Surely none of that is necessary to get more funding to go towards effective causes. I’d like to invite someone competent to a large corporate so that we can improve the effectiveness of our rather large donations, but the above means I have no confidence to do so.
https://iai.tv/articles/how-effective-altruism-lost-the-plot-auid-2284
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
Well, sometime some people also considered absurd conclusions as giving voting rights to women, and look where we are. Someone have to consider things to understand if they worth anything.
The problem is that utilitarianism likely a fatally flawed approach, taking it to its fullest, most extreme. There is some element of deontology that probably needs to be accounted for a more robust ethical framework.
Or, hah, maybe AGI is a Utility Monster we should accelerate and our destruction would provide more global utility for such an optimizing agent than our continued existence it should be the wished for outcome. But such ideas are absurd.
To point out, Bentham in fact advocated for women's rights "before his time" and lead to many proto feminist works getting published by John Stuart Mill. In fact, concurrent arguments against his stance would cite that the women only mattered in the context of what they can do for men, so it's ridiculous to speak of suffrage.
https://blogs.ucl.ac.uk/museums/2016/03/03/bentham-the-feminist/
Literally comparing "maybe we should kill whole classes of animals and people" to "maybe we should give rights to more classes of people". Wow.
The clearest evidence I can imagine that you're in a morally deranged cult.
I don't get it. Which one is the more plausible claim? Because for most of history, it would have been "killing whole classes of animals and people". The only reason that isn't true today is precisely because some people were willing to ponder absurd trains of thought.
Deliberate attempts to exterminate whole classes of people go back to at least King Mithridates VI in 88 BCE. For most of human history giving women (or anyone the vote) is a weird and absurd idea while mass slaughter was normal.
Its because people were willing to entertain "absurd" ideas that mass slaughter is now abhorrent and votes for all are normal.
Morally deranged cults don’t “seriously consider” ideas that go diametrically against what other members of the cult endorse. Morally deranged cults outright endorse these crazy ideas. EA does not endorse the elimination of wild animals, though it does consider it seriously.
The only thing worse around here than bad EA critics is bad EA defenders.
Any idea should be considered based in its merit, not emotional reaction. I am not sure if you think I am in a cult, or people in EA are.
All I can say negative utilitarianism exists. There is even a book, Suffering-focused ethics, exploring roughly the idea that suffering is much worse than positive experience.
As a person who is seriously suffering, I consider this topic is at least worth discussing. Thought that I can be in a situation where I cannot kill myself and won't get pain meds gives me serious anxiety. Yet, this is pretty common. In most world countries euthanasia is illegal and pain medicines are strictly controlled. Situation where you can suffer terribly and couldn't die is common. Normal people don't think about it often, until they do.
Based on my thoughts above, I feel like suffering of wild and domesticated animals is something real. I am not sure why do you think that by default we cannot even fanthom idea that we can end their suffering. I myself is not pro or contra, but I am happy that there are people who think about these topics.
As someone who doesn't identify with EA (but likes parts of it), I don't expect my opinion to be particularly persuasive to people who do identify more strongly with the movement, but I do think such a split would result in broader appeal and better branding. For example, I donate to GiveWell because I like its approach to global health & development, but I would not personally choose to donate to animal welfare or existential risk causes, and I would worry that supporting EA more generically would support causes that I don't want to support.
To some extent, I think EA-affiliated groups like GiveWell already get a lot of the benefit of this by having a separate-from-EA identity that is more specific and focused. Applying this kind of focus on the movement level could help attract people who are on board with some parts of EA but find other parts weird or off-putting. But of course deciding to split or not depends most of all on the feelings and beliefs of the people actually doing the work, not on how the movement plays to people like me.
I agree that there should be a movement split. I think the existential risk AI doomerism subset of EA is definitely less appealing to the general public and attracts a niche audience compared to the effective charity subset which is more likely to be generally accepted by pretty much anybody of all backgrounds. If we agree that we should try to maximize the number of people that at the very least are involved in at least one of the causes, when the movement is associated with both causes, many people who would've been interested in effective charitable giving will be driven away by the existential risk stuff.
My first thought was "Yes, I think such a split would be an excellent thing."
My second thought is similar, but with one slight concern: I think that the EA movement probably benefits from attracting and being dominated by blueish-grey thinkers; I have a vague suspicion that such a split would result in the two halves becoming pure blue and reddish-grey respectively, and I think a pure blue Effective Charity movement might be less effective than a more ruthlessly data-centric bluish-grey one.
Fully agree.
Yes, a pure blue Effective Charity movement would give you more projects like the hundreds of millions OpenPhil spent on criminal justice, which they deemed ineffective but then spun off into its own thing.
Can you explain the color coding? I must have missed the reference.
It's an SSC/ACX shiboleth dating back to https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/
I personally know four people who were so annoyed by AI doomers that they set out to prove beyond a reasonable doubt that there wasn't a real risk. In the process of trying to make that case, they all changed their mind and started working on AI alignment. (One of them was Eliezer, as he detailed in a LW post long ago.) Holden Karnofsky similarly famously put so much effort into explaining why he wasn't worried about AI that he realized he ought to be.
The EA culture encourages members to do at least some research into a cause in order to justify ruling it out (rather than mocking it based on vibes, like normal people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs pivoting to work on AI x-risk is one of the strongest meta-arguments for why you, dear reader, should give it a second thought.
This was also my trajectory ... essentially I believed that there were a number of not too complicated technical solutions, and it took a lot of study to realize that the problem was genuinely extremely difficult to solve in an airtight way.
I might add that I don't think most people are in a position to evaluate in depth and so it's unfortunately down to which experts they believe or I suppose what they're temperamentally inclined to believe in general. This is not a situation where you can educate the public in detail to convince them.
I'd argue in the opposite direction: that one of the best things about EA (as the Rationalist) community is that it's a rare example of an in-group defined by adherence to an epistemic toolbox rather than affiliation with specific positions on specific issues.
It is fine for there to be different clusters of people within EA who reach very different conclusions. I don't need to agree with everyone else about where my money should go. But it sure is nice when everyone can speak the same language and agree on how to approach super complex problems in principle.
I think this understates the problem. EA had one good idea (effective charity in developing countries) one mediocre idea (that you should earn to give) and then everything else is mixed but being an EA doesn't provide good intuitions any more than being a textualist in US Jurisprudence. I'm glad the Open Phil donated to the early yimby movement but if I want to support good US politics I'd prefer to directly donate to Yimby Orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and Open AI events should be treated as broadly discrediting to the idea that EA is a well run organization and the reliability of the current leadership. I think GiveWell remains a good organization for what it is (and will continue donating to GiveDirectly) but while I might trust individuals that Scott is calling EA I think that the EA label is negative the way that I might like libertarians but not people using the Libertarian label.
OK, this EA article persuaded me to resubscribe. I love it when someone causes me to rethink my opinion.
Nothing like a good fight in the comments section to get the blood flowing and the wallet opened!
I think EA is great and this is a great post highlighting all the positives.
However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EA'ers were woo'ed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples as they highlight some sort of blindspots in the community that the community doesn't seem to be confronting.
And maybe that's unfair. But EA signed up to be held to a higher standard.
I didn't sign up to be held to a higher standard. Count me in for team "I have never claimed to be better at figuring out whether companies are frauds than Gary Gensler and the SEC". I would be perfectly happy to be held to the same ordinary standard as anyone else.
I'm willing to give you SBF but I don't see how the castle thing holds up. There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car while actually living in a mansion is an (unfair) microcosm of the castle thinking.
I don’t really get the issue with the castle thing. An organization dedicated to marketing EA spent a (comparatively) tiny amount of money on something that will be useful for marketing. What exactly is hypocritical about that?
It's the optics. It looks ostentatious, like you're not really optimizing for efficiency. Sure, they justified this on grounds of efficiency (though I have heard questioning of whether being on the hook for the maintenance of a castle really is cheaper than just renting venues when you need them), but surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
1. Poor optics isn’t hypocrisy. That is still just a deeply unfair criticism.
2. Taking effectiveness seriously involves putting effectiveness above optics in some cases. The problem with many non-effective charities is that they are too focused on optics.
3. Some of the other EA “scandals” make it very clear that it doesn’t matter what you do, some people will hate you regardless. Why would you sacrifice effectiveness for maybe (but probably not) improving your PR given the number of constraints.
EA ~= effectively using funds.
Castle != effectively using funds.
Therefore, hypocrisy.
You can't separate optics from effectiveness, since effectiveness is dependent on optics. Influence is power, and power lets you be effective. The people in EA should know this better than anyone else.
See, I think EA shows a lack of common sense, and this comment is an example. It's true that no matter what you do some people will hate you, but if you buy a fucking castle *everybody's* going to roll their eyes. It's not hard to avoid castles and other things that are going to alienate 95% the public. And you have to think *some* about optics, because it interferes with the effectiveness of the organization if 95% of the public distrusts it.
EA's disdain for "optics" is part of what drew me to it in the first place. I was fed up with charities and policymakers who cared far more about being perceived to be doing something than about actually doing good things.
Where do you draw the line? If EAs were pursuing smooth interactions with normies, they would also be working on the stuff normies like.
Also, idk, maybe the castle was more expensive than previously thought. Good on paper, bad in practice. So, no one can ever make bad investments? Average it in with other donations and the portfolio performance still looks great. It was a foray into cost-saving real estate. To the extent it was a bad purchase, maybe they won't buy real estate anymore, or will hire people who are better at it, or what have you. The foundation that bought it will keep donating for, most likely, decades into the future. Why can't they try a novel donor strategy and see if it works? For information value. Explore what a good choice might be asap, then exploit/repeat/hone that choice in the coming years. Christ, *everyone* makes mistakes and tries things given decent reasoning. The castle had decent reasoning. So why are EAs so rarely allowed to try things, without getting a fingerwag in response?
Look at default culture not EA. To the extent EAs need to play politics, they aren't the worst at it (look at DC). But donors should be allowed to try things.
> The castle had decent reasoning
I don't know, I feel like if there had been a single pragmatic person in the room when they proposed to buy that castle, the proposal would have been shot down. But yes, I do agree that ultimately, you have to fuck around and find out to find what works, so I don't see the castle as invalidating of EA, it's just a screw up.
Didn’t the castle achieve good optics with its target demographic though? The bad optics are just with the people who aren’t contributing, which seems like an acceptable trade-off
> surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
If the normies you're trying to pursue smooth interactions with include members of the British political and economic Establishment, "come to our conference venue in a repurposed country house" is absolutely the way to go.
I think you're overestimating how much the castle thing affects interactions with normies. It was a small news story and I bet even the people who read it at the time have mostly forgotten it by now. I estimate that if a random person were to see a donation drive organized by EAs today the chance that their donation would be affected by the castle story is <0.01%
It's hard to believe that a castle was the optimum (all things considered; no one is saying EA should hold meetings in the cheapest warehouse). The whole pitch of the group is looking at things rationally, so if they fail at one of the most basic things like choosing a meeting location, and there's so little pushback from the community, then what other things is the EA community rationalizing invalidly?
And if we were to suppose that the castle really was carefully analyzed and evaluated validly as at- or near-optimal, then there appears to be a huge blindspot in the community about discounting how things are perceived, and this will greatly impact all kinds of future projects and fund-raising opportunities, i.e. the meta-effectiveness of EA.
Have you been to the venue? You keep calling it "a castle" which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space ~similar to renting a nice hotel. It is far from the most luxurious hotels, but it is like a home-y version of the level you get in hotels in which you run events. They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Quick test: If the venue appreciated in value and now can be sold for twice the money making this net positive investment which they can in a pinch use to sponsor a really important crisis, and they do that - does that make the purchase better? If renting it our per year makes full financial sense, and other venues would have been worse - are you now convinced?
If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
> Have you been to the venue?
No, and it doesn't matter. EA'ers such as Scott have referred and continue to refer to it as a castle, so it must be sufficiently castle-like and that's all that matters as it impacts the perception of EA.
> They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Those other considerations could have included a survey of how buying a castle would affect perceptions of EA and potential donors. This is a blindspot.
> If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
Also indirectly answering your other questions -- I don't care about the castle. I'm rational enough to not care. What I care about is the perception of EA and the fact that EA'ers can't realize how bad the castle looks and how this might impact their future donations and public persona. They could have evaluated this rationally with a survey.
Why wouldn't a castle be the optimal building to purchase? It is big, with many rooms, and due to the lack of modern amenities it is probably cheaper than buying a more recently built conference center type building. Plus more recently built buildings tend to be in more desirable locations were land itself is more expensive. I think you're anchoring your opinion way too much on "castle=royalty".
So far it's been entirely negative for marketing EA, isn't in use (yet), isn't a particularly convenient location, and the defenders of the purchase even said they bought the castle because they wanted a fancy old building to think in.
So the problem with the castle is not the castle itself it's that it makes you believe the whole group is hypocritical and ineffective? But isn't that disproved by all the effective actions they take?
Not me. I don't care about the castle. I'm worried about public perceptions of EA and how it impacts their future including donations. Perceptions of profligacy can certainly overwhelm the effective actions. Certain behaviors have a stench to lots of humans.
I think the only rational way to settle this argument would be for EA to run surveys of the impact on perceptions of the use of castles and how that could impact potential donors.
Imagine an Ivy League university buys a new building, then pays a hundred thousand dollars extra to buy a lot of ivy and drape it over the exterior walls of the building. The news media covers the draping expenditure critically. In the long term, would the ivy gambit be positive or negative for achieving that university's goals of cultivating research and getting donations?
I don't know. Maybe we need to do one of those surveys that you're proposing. But I would guess that it's the same answer for the university's ivy and CEA's purchase of the miniature castle.
The general proposal I'm making: if we're going to talk about silly ways of gaining prestige for an institution, let's compare like with like.
See my discussion of castle situation in https://www.astralcodexten.com/p/my-left-kidney . I think it was a totally reasonable purchase of a venue to hold their conferences in, and I think those conferences are high impact. I discuss the optics in part 7 of https://www.astralcodexten.com/p/highlights-from-the-comments-on-kidney, and in https://www.astralcodexten.com/p/the-prophet-and-caesars-wife
All I can write at this point is that it would be worth a grant to an EA intern to perform a statistically valid survey of how EA using a castle impacts the perception of EA and potential future grants. Perhaps have one survey of potential donors, another of average people, and include questions for the donors about how the opinions of average people might impact their donations.
Yes, I read your points and understand them. I find them wholly unconvincing as far as the potential impacts on how EA is perceived (personally, I don't care about the castle).
EAs have done surveys of regular people about perceptions of EA - almost no one knows what EA is.
Donors are wealthy people, many of whom understand the long-term value of real estate.
I like frugality a lot. But I think people who are against a conference host investing in the purchase of their own conference venue are not thinking from the perspective of most organizations or donors.
Ie., it's an average sort of thing that lots of other organisations would do. But EA is supposed to be better. (i don't have anything against EA particularly, but this is a pattern I keep noticing -- something or someone is initially sold as be better then defended as being not-worse).
We should learn to ignore the smell of hypocrisy. There are people who like to mock the COP conferences because they involve flying people to the Middle East to talk about climate change. But those people haven’t seriously considered how to make international negotiations on hard topics effective. Similarly, some people might mock buying a conference venue. But those people haven’t seriously thought about how to hold effective meetings over a long period of time.
On that front, EA sometimes has a (faux?) humble front to it, and that's part of where the hypocrisy comes from. I think that came in the early days, people so paralyzed by optics and effectiveness that they wouldn't spend on any creature comforts at all. Now, perhaps they've overcorrected, and spend too much on comforts to think bigger thoughts.
But if they want to stop caring about hypocrisy, they should go full arrogant, yes we're better and smarter than everyone else and we're not going to be ashamed of it. Take the mask off and don't care about optics *at all*. Let's see how that goes, yeah?
People don't mock buying a venue, they mock buying a *400 year old castle* for a bunch of nerds that quite famously don't care about aesthetics.
Re: "should I care about perception?", I think "yes" and "no" are just different strategies. Cf. the stock market. Whereas speculators metagame the Keynesian Beauty Contest, buy-&-hold-(forever) investors mostly just want the earnings to increase.
This type of metagaming has upsides, in that it can improve your effectiveness, ceteris paribus. This type of metagaming also has downsides, in that it occasionally leads to an equilibrium where everyone compliments the emperor's new clothes.
My impression is that EA is by definition supposed to be held to a higher standard. It's not just plain Altruism like the boring old Red Cross or Doctors Without Borders, it's Effective Altruism, in that it uses money effectively and more effectively than other charities do.
I don't see how that branding/stance doesn't come with an onus for every use of funds to be above scrutiny. I don't think it's fair to say that EA is sometimes makes irresponsible purchases, but it should be excused because on net EA is good. That's not a deal with the devil, it's mostly very good charitable work with the occasional small castle sized deal with the devil. That seems to me like any old charitable movement and not in line with the 'most effective lives per dollar' thesis of EA.
Exactly! 1000 "yes"s!
I can barely comprehend the arrogance of a movement that has in its literal name a claim that they are better than everyone else (or ALL other charities at least), that routinely denigrates non-adherents as "normies" as if they're inferior people, that has members who constantly say without shame or irony that they're smarter than most people, that they're more successful than most people (and that that's why you should trust them), that is especially shameless in its courting of the rich and well-connected compared to other charities and groups...having the nerve to say after a huge scandal that they never claimed a higher standard than anyone else.
Here's an idea. Maybe, if you didn't want to be held to a higher standard than other people, you shouldn't have *spent years talking about how much better you are than other people*.
I think you're misunderstanding EA. It did not create a bunch of charities and then shout "my charities are the effectivest!" EA started when some people said "which jobs/charities help the world the most?" and nobody had seriously tried to find the answers. Then they seriously tried to find the answers. Then they built a movement for getting people and money sent where they were needed the most. The bulk of these charities and research orgs *already existed*. EA is saying "these are the best", not "we are the best".
And- I read you as talking about SBF here? That is not what people failed at. SBF was not a charity that people failed to evaluate well. SBF was a donor who gave a bunch of money to the charities and hid his fraud from EA's and customers and regulators and his own employees.
I have yet to meet an EA who frequently talks about how they're smarter, more successful, or generally better than most people. I think you might be looking at how some community leaders think they need to sound really polished, and overinterpreting?
Now I have seen "normies" used resentfully, but before you resent people outside your subculture you have to feel alienated from them. The alienation here comes from how it seems really likely that our civilization will crash in a few decades. How if farm animals can really feel then holy cow have we caused so much pain. How there's 207 people dying every minute- listen to Believer by Imagine Dragons, and imagine every thump is another kid, another grandparent. It's an goddamn emergency, it's been an emergency since the dawn of humanity. And we can't fix all of it, but if a bunch of us put our heads together and trusted each other and tried really hard, we could fix so much... So when someone raised a banner and said "Over here! We're doing triage! These are the worst parts we know how to fix!", you joined because *duh*. Then you pointed it out to others, and. Turns out most people don't actually give a shit.
That's the alienation. There's lots of EA's who aren't very smart or successful at all. There's lots of people who get it, and have been triaging the world without us and don't want to join us. This isn't alienating. Alienation comes from normies- many of them smarter and more successful- who don't care. Or who are furious your post implied an art supply bake sale isn't just as important as the kids with malaria. It doesn't make people evil that they don't experience that moment of *duh*, but goddamn do I sometimes feel like we're from different planets.
"The world is terrible and in need of fixing" is a philosophical position that is not shared by everyone, not a fact
Right, that's why I said people who don't feel that way sometimes feel like aliens, not that they're mistaken.
That was a good comment, and mine above was too angry I think. I'm starting to think everyone's talking about very different things with the same words. This happens a lot.
First, I'm a bit sceptical of the claim that, before EA nobody was evaluating charity effectiveness. This *feels* like EA propaganda, and I'm *inclined* to suspect that EA's contribution was at least as much "more utilitarian and bayesian evaluation" as "more evaluation". BUT I have no knowledge of this whatsoever and it has nothing to do with my objection to EA, so I'm happy to concede that point.
Second, regarding SBF my main issue is with the morality of "earning to give" and its very slippery slope either straight to "stealing to give" or to "earning to give, but then being corrupted by the environment and lifestyle associated with earning millions, and eventually earning and stealing to get filthy rich". Protestations that EAs never endorsed stealing, while I accept they're sincere, read a bit too much like "will no one rid me of this troublesome priest?" It's important for powerful people to avoid endorsing principles that their followers might logically take to bad ends, not just avoid endorsing the bad ends themselves. (Or at least, there's an argument that they should avoid that, and it's one that's frequently used to lay blame on other figures and groups.)
Third, regarding "normies", I don't feel like I've seen it used to disparage "people who don't think kids with malaria are more important than the opera", or if I have not nearly as many times as it's used to disparage "people who think kids with malaria are important than space colonies and the singularity". I completely see the "different planets" thing, and this goes both ways. Lots of people don't care about starving children, and that's horrific. EAs of course are only a small minority of those who *do* care, effectiveness notwithstanding. On the other hand, this whole "actual people suffering right now need to be weighed against future digital people" is so horrific, so terrifying, so monstrous that I'm hoping it's a hoax or something. But I haven't seen anyone deny that many EAs really do think like that. In a way, using the rescources and infrastructure (if not the actual donations) set up for global poverty relief, to instead make digital people happen faster, is much worse than doing nothing at all for poverty relief to begin with (since you're actively diverting resources from it). So we could say "global health EAs" are on one planet, "normies" are on a second planet, and "longtermist EAs" are on a third planet, and the third looks as evil to the second as the second does to the first.
Fwiw, charity evalutation existed before EA, but it was almost entirely infected by goodhart's law. charity evaluators measured *overhead*, not impact. A charity which claimed to help minorities learn stem skills by having them make shoes out of cardboard and glue as an afterschool program (because everyone knows minorities like basketball shoes, and designing things that require measurements is kind of like stem) would have been rated very, very highly if they were keeping overhead low and actually spending all of the money on their ridiculous program, but the actual impact of the program wouldn't factor into it at all. I use this example because it's something I actually saw in real life.
These evaluators served an important purpose in sniffing out fraud and the kind of criminal incompetence that destroys most charities, but clearly there was something missing, and EA filled in what was missing
TBC, you're replying to a comment about whether individual EA's should be accountable for many EA orgs taking money from SBF. I do not think that "we try to do the most good, come join us" is branding with an onus for you, as an individual, to run deep financial investigations on your movement's donors.
But about the "castle", in terms of onuses on the movement as a whole- That money was donated to Effective Ventures for movement building. Most donations given *under EA* go to charities and research groups. Money given *directly to EV* is used for things like marketing and conferences to get more people involved in poverty, animal, and x-risk areas. EV used part of their budget to buy a conference building near Oxford to save money in the long run.
If the abbey was not the most effective way to get a conference building near Oxford, or if a conference building near Oxford was not the most effective way to build the movement, or if building the movement is not an effective way to get more good to happen, then this is a way that EA fell short of its goal. Pointing out failures is not a bad thing. (Not that anyone promised zero mistakes ever. The movement promised thinking really hard and doing lots of research, not never being wrong.) If it turns out that the story we heard is false and Rob Wiblin secretly wanted to live in a "castle", EA fell short of its goal due to gross corruption by one of its members, which is worth much harsher criticism.
In terms of the Red Cross, actually yes. Even if we found out 50% of all donor money was being embezzled for "castles", EA would still be meeting its goal of being more effective than just about any major charity organization. EA donation targets are more than twice as cost effective as Red Cross or DWB.
Hold to the higher standard, but if you’re going to criticize about the castle, you better be prepared to explain how better to host a series of meetings and conferences on various topics without spending a lot more money.
I think your assumption that "any old charitable movement" is about as effective as using the vast majority of funds on carefully chosen interventions plus buying a castle once and then falling for a snake oil salesman is wrong though. My impression is most charitable movements accomplish very little so it is quite easy to be more effective than them. And until another movement comes along that is more effective than EA at saving lives I'll continue thinking that.
A lot of people ignore it, but I continue to find the "Will MacAskill mentored SBF into earn to give" connection the problem there. No one can always be a perfect judge of character, but it was a thought experiment come to life. It says... *something* about the guardrails and the culture. It's easy to take it as saying too much, to be sure many people do, but it's also easy to ignore what it says entirely.
I recognize broader-EA has (somewhat) moved away from earning to give and that the crypto boom that enabled SBF to be a fraud of that scale was (probably) a once in a lifetime right-place right-time opportunity for both success and failure. Even so.
In point of fact, you all are being held to the ordinary standard. Public corruption leads to public excoriation, and "but look at the good we do" is generally seen as a poor defense until a few years later when the house is clearly clean. That is the ordinary standard.
I think EA signed up to be held to the standard "are you doing the most good you can with the resources you have". I do not think it signed up to be held to the standard "are you perceived positively by as many people as possible". Personally I care a lot more about the first standard, and I think EA comes extremely impressively close to meeting it.
Sure, but go Meta-Effectiveness and consider that poor rhetoric and poor perception could mean fewer resources for the actions that really matter. A few more castle debacles and the cost for billionaires being associated with EA may cross a threshold.
Seems a bit perverse to say EA is failing their commitment to cost-effectiveness by over-emphasising hard numbers in preference to vibes.
Castle != cost-effective. And perceptions of using castles, and blindness to how bad this looks, could have massive long-term impacts on fund-raising.
I don't understand why this is so complicated. It doesn't matter how tiny the cost of the castle has been relative to all resources spent. It's like a guy who cheated on a woman once. Word gets around. And when the guys says, "Who _cares_ about the cheating! Look at all the wonderful other things I do" then it looks even worse. Just say, "Look, we're sorry and we're selling the castle, looking for a better arrangement, and starting a conversation about how to avoid such decisions in the future."
Why is the castle not cost effective?
Yeah, I was just now trying to run figures about increased persuasiveness toward government officials and rich people, to see what the break-even would have to be.
Given the obvious difference in intuitions on how to discount the perceptions of profligacy, as proposed in another response to Scott, I think the only way to actually resolve this is to conduct a survey.
Maybe they should have bought a motte instead. That clearly wouldn't be assailable, and thus beyond reproach.
I just do not get the mindset of someone who gets this hung up on "castles". Is that why I don't relate to the anti-EA mindset?
Should they have bought a building not made out of grey stone bricks? Would that make you happy?
I understand your model is that the abbey was a horrid investment and a group that holds itself out as a cost-effectiveness charity, but also makes horrid investments, should lose credibility and donors.
No one disagrees with that premise.
I disagree that it was a horrid investment, based on the info they had at the time.
So, I don’t see a loss of credibility there.
Others will disagree that CEA/EV is primarily a cost-effectiveness charity.
It looks pretty good to people who think castles are cool, and don't really care much about austerity or poor people or math. There are staggering numbers of such people, some of whom are extremely rich, and EA might reasonably have difficulty extracting money from them without first owning a castle.
Yeah, but billionaires, by definition, have lots of money, so I think on net were probably better off continuing to be associated with them.
Unless people set out with a vendetta to destroy EA, the castle will be forgotten as a reputational cost, but will still be effective at hosting meetings. And if people do set out with a vendetta to destroy EA, it’s unlikely the castle thing is the only thing they could use this way.
Scott's kidney post and this one seem to suggest the threshold is already crossed for some.
The community by it's nature has those blindspots. Their whole rallying cry is "Use data and logic to figure out what to support, instead of what's popular". This attracts people who don't care for or aren't good at playing games of perception. This mindset is great at saving the most lives with the least amount of money, it's not as good for PR or board room politics.
Right, but they could logically evaluate perceptions using surveys. That begs the question: what other poor assumptions are they making that they're not applying rationalism to?
I do wonder if the "castle" thing (it's not a castle!) is just "people who live in Oxford forget that they're in a bubble, and people who've never been to Oxford don't realise how weird it is". If you live in Oxford, which has an *actual* castle plus a whole bunch of buildings approaching a thousand years old, or if you're at all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say "Yep, looks like a solid choice. Wait, you want a *modern* building? Near *Oxford*? Do you think we have infinite money, and infinite time for planning applications?"
The word "castle" can be a bit misleading. They (or the ones in the UK) aren't all huge drafty stone fortresses. Many, perhaps most, currently habitable and occupied ones differ little from normal houses, but maybe have a somewhat more squat and solid appearance and a few crenellated walls here and there. I don't know what Castle EA looks like though! :-)
Edit: I did a quick web search, and the castle in question is called Chateau Hostacov and is in Bohemia, which is roughly the western half of the Czech Republic. (I don't do silly little foreign accents, but technically there is an inverted tin hat over the "c" in "Hostacov").
It cost all of $3.5M, which would just about buy a one-bedroom apartment in Manhatten or London. So not a bad deal, especially considering it can be (and, going by its website, is being) used as a venue for other events such as conferences and weddings and vacations etc:
https://www.chateau-hostacov.cz/en
The more famous and controversial one is the Oxford purchase, Wytham Abbey: https://en.wikipedia.org/wiki/Wytham_Abbey
Impressive! By the way, I've slain and will continue to slay billions of evil Gods who prey on actually existing modal realities where they would slay a busy beaver of people – thus, if I am slightly inconvenienced by their existence, every EA advocate has a moral duty to off themselves. Crazy? No, same logic!
I believe I can present better evidence to support the claim that EA has saved 200,000 lives than you can present to support the claim that you have slain billions of evil gods. Do you disagree with this such that I should go about presenting the evidence, or do you have some other point that I'm missing?
Thanks for the response! Big fan.
My reply:
Surely the evidence is not trillions of time stronger than my evidence (which consists of my testimony, a kind of evidence)! So, my point stands. (And I can of course just inflate the # of Gods slain for whatever strength of evidence you offer.) Checkmate, Bayesian moralists.
But let's take a step back here and think about the meta-argument. You're the one who says that one of EA's many laudable achievements are "preventing future pandemics ... [and] preparing for superintelligent AI."
And this is surely the fat end of the wedge -- that is, while you do a fine job of bean-counting the various chickens uncaged and persons assisted by EA-related charities, I take your real motivation to be to argue for EA's benevolence on the basis of saving us from a purely speculative evil.
If we permit such speculation to enter into our moral calculations, we'll have no end of charlatans, chicanery, and Tartuffes. And in fact that is just what we've seen in the EA community writ large -- the 'psychopaths' hardly let the 'mops' hit the floor before they started cashing in.
So you're calling future pandemics a speculative evil? Or is that just about the AI? Don't conflate those two things as one of them, as we have recently seen, poses a very real threat.
Also your whole thing about the evil gods and Bayesian morals just comes off annoying, like this emoji kind of 🤓
Future pandemics are speculative in the sense that they're in futuro, yes, but what I meant to say was that EA qua EA assisting with the fight against such pandemics is, at the moment, speculative. In my view they did not cover themselves in glory during the last pandemic, but that's a whole separate can of worms.
And I am sorry for coming off in a way you dislike. I will try to be better.
Awesome, thanks.
It sounds like you are describing Pascal's Mugging https://en.wikipedia.org/wiki/Pascal%27s_mugging There are multiple solutions to this. One is that the more absurd the claim you are making, the lower a probability I assign to it. That scales linnerally, so just adding more orders of magnitude to your claim doesn't help you
Thanks; I assume the reader's familiarity with Pascal's mugging and related quandaries & was winking at same but the point I was making is different (viz. that we can't have a system of morality built on in futuro / highly speculative notions -- that's precisely where morality stops and religion begins).
A system of morality that doesn't account for actions is the future that are <10% likely is going to come to weird conclusions.
Agreed.
we routinely take measures against risks that are lower than one in a million, potentially decades in the future. the idea that future, speculative risks veer into religion proves too much
https://forum.effectivealtruism.org/posts/5y3vzEAXhGskBhtAD/most-small-probabilities-aren-t-pascalian
Thank you for the thought-provoking essay. My kneejerk is to say that just because people do it does not mean it is rational, let alone a sound basis for morality.
More deeply, I fear you've merely moved the problem to a different threshold, not solved it -- one can just come up with more extravagant examples of speculative cosmic harms. This is particularly so under imperfect information and with incentive to lie (and there always is).
But more to the point, my suspicion EA is, in large part, epistemic: they purport to be able to quantify the Grand Utility Function in the Sky, but on what basis? My view is that morality has to be centered on people we want to know -- attempts to take utilitarianism seriously, even putting aside the problem of calculation, seem to me to fall prey to Parfitian objections like the so-called intolerable hypothesis. My view is that morality should be agent-centric and based on actual knowledge -- there's always going to be some satisficing. Thus, if asked to quantify x-risks and allocate a budget, I'd want to know about opportunity costs.
In other words, you know your argument is a logical swindle but you do it anyway because that helps you not take EA seriously. Cool
Nice steelman. Cool
1) This is not a disagreement over how to resolve Pascal's Mugging. AI doomers think the probability for doom is significant, and that the argument for mitigating it does not rely on some sort of Pascalian multiplying-a-minuscule-number-by-a-giant-consequence. You might disagree about the strength of their case, but that does not mean they are asking you to accept the mugging, so your argument does not apply.
2) Scott spent a great deal of this essay harping on the 200,000 lives saved and very little on mitigating future disasters. It is unfair and unreasonable of you to minimize this just because you *think* Scott's actual motivation is something else. Deal with the stated argument first, and then, if you successfully defeat that, you can move on to dissecting motives.
3) I wish to go on record saying that it seems clear to me (as a relative bystander) that you are going out of your way to be an obnoxious twat, just in case Scott is reluctant to give you an official warning/ban due to his conflict of interest as a participant in the dispute.
Re: 1), I'm not sure what you're trying to argue. I think maybe you didn't understand my comment? Anyway, we are like two ships passing in the night.
Re the rest, why would he ban me? I'm not the one going around calling people nasty words. You're right that I shouldn't mind-read Scott, and that he did an able job of toting up the many benefits of EA-inspired people. I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.
I do think Joe's coming across as intentionally provocative, but "obnoxious twat" isn't kind nor necessary.
I disagree with the force of the insult, but being coy about your point as the opening salvo and then NOT explicitly defending any stance is rude and should be treated as rude.
1) You compared AI concerns and pandemic concerns to Pascal's Mugging. This comparison would make sense if the concerned parties were saying "I admit this is extremely unlikely to actually happen, but the consequences are so grave we should worry about it anyway".
But I have never heard Scott say that, and most people concerned about pandemics and AI doom do not say that. e.g. per Wikipedia, a majority of AI researchers think P(doom) >= 10% ( https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence ). That's not even doomers specifically; that's AI researchers in general.
Presumably you'd allow that if a plane has a 10% chance of crashing then it would make sense to take precautions.
Therefore your comparison is not appropriate. The entire thing is a non-sequitur. You are arguing against a straw man.
3) Your response to Scott's question started with an argument that (you admitted later in the same comment) wasn't even intended to apply to the claim that Scott actually made, and then literally said "checkmate". You are being confusing on purpose. You are being offensive on purpose, and with no apparent goal other than to strut.
Ok well if your survey evidence says so I guess you win hehe. fr though: dude chill, I am not going to indulge your perseveration unless you can learn to read jocosity.
“I somewhat question whether you need EA to tell you that cruelty / hunger / etc. is bad, but if it truly did inspire people (I'm not steeped enough in it to game out the counterfactuals), that is great! Even so, I'm interested in the philosophical point.”
Come on, this statement is condescending.
To me it says you’re not taking this seriously but just enjoying the abstract conversation.
If you’re taking things seriously, it should be obvious that *believing* things like “cruelty is bad” is clearly not the same thing as *building* things that that allow more people to *take action* on that belief, who then actually do.
>Surely the evidence is not trillions of time stronger than my evidence (which consists of my testimony, a kind of evidence)!
Consider two people - one who genuinely has slain billions of evil Gods and needs help, and one who is trolling. Which do you think would be more likely to post something in an obviously troll-like tone like yours? So your testimony is actually evidence /against/ your claim, not for it.
By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless.
"By contrast, estimates of the number of lives saved by things like mosquito nets are rough, but certainly not meaningless."
They're a bit meaningless as evidence of the benefits of EA when it's just the sort of thing the people involved would probably be doing anyway. But it's very difficult to judge such counterfactual arguments. Is there some metric of Morality Above Replacement?
1) Did you read the footnotes? Actual total deaths from malaria have dropped from 1M to 600K. That’s a useful sanity check number from reality
2) It is unlikely that people would have been doing EA type giving without EA. It’s not just what people would have done anyway.
Before GiveWell and ACE existed, the only charity evaluator was Charity Navigator, who ranks based on things like overhead, which I do not care about.
I would have *wanted* to give effectively but most of us do not have time to vet every individual cause area and charity for high impact opportunities. I was giving to projects that were serving significantly fewer people per dollar.
Without EA principles and infrastructure, Muskowitz money would have gone to different causes.
If you believe EA analysis identified high impact ways to save more lives per dollar, then EA orgs should be credited for more lives saved than would otherwise have been saved per dollar.
Isn’t your statement more likely to exist in a world where it isn’t true, and thus not a problem for the balance of evidence?
Hey check out the modal realist over here!
Some testimony is positive evidence for some claims, but not all testimony is. Why shouldn’t I think your testimony is zero evidence, or even negative evidence?
You're conflating "evidence" and "credibility" and since I know the difference my testimony is highly credible.
> I believe I can present better evidence to support [...] than you can present to support the claim that you have slain billions of evil gods.
Don't take this the wrong way, but ... I hope you're wrong. ;-)
The evidence: https://killsixbilliondemons.com/
How so?
Let’s agree to ignore all the hypothetical lives saved and stick to real, material changes in our world. EA can point to a number of vaccines, bed nets, and kidneys which owe their current status to the movement. To what can you point?
Agreeing to ignore hypothetical lives saved is to concede the point I'm making. I'm not that interested in the conversation otherwise, sorry.
Then I’m afraid I missed your point.
The top charities on GiveWell address malaria, vitamin A deficiency, and third-world vaccination. Those are real charities which help real people efficiently.
I understand not believing in x-risk, or that dollars spent on it are wasted. If you ignore those, you’re left with some smaller but definitely nonzero lives saved by charities like those above.
I'm not super-concerned about any of that stuff and as I mentioned above, I don't think there is very good evidence that EA was the proximate cause of any gains, as opposed to, "high SES/IQ + conscientious + [somewhat] neurotic people will tend to be do gooders and effective at it, often cloaking their impulse in the guise of some philosophy". But it seems an idle dispute.
At the very least, with the malaria thing, people really didn't care about it until some guys started crunching numbers and realized it was by far the best lives saved per cash spent. Considering that's basically what started the whole movement, I think it's fair to credit EA with that.
I'm not sure that's right, and I'd be cautious of reflexivity, but sure, let'em have it I say. Good for'em.
K, thanks for clarifying.
If you are uninterested in the difference between the impact of, say, facilitating 100 kidney donations instead of 10 given similar resource constraints, we don’t share key interests, values, or priorities.
I'm 6'3" and handsome so I have no need to be as moral as you, yes.
Out of curiosity, are you highly confident that artificial superintelligence is impossible, or are you confident that when artificial superintelligence comes about it will definitely be positive? It seems that in order to be so dismissive of AI risk, you must be confident in one or both of these assumptions.
I would appreciate in hearing your reasoning for your full confidence in whichever of those assumptions is the more load-bearing one for you.
If you don’t have full confidence in at least one of those two assumptions, then I feel like your position a bit like having your foot stuck in a train track, and watching a train head ponderously toward you down the track from a distance away, and refusing to take any steps to untie your stuck shoe because the notion of the train crushing you is speculative.
Thanks for asking. See https://joecanimal.substack.com/p/tldr-existential-ai-risk-research -- in essence, (i) unlikely there will be foom/runaway AI/existential risk; (ii) but if there is, I'm absolutely confident we cannot do anything about it and there's been no indication to the contrary we may as well just pray; (iii) yet while AI risk is a pseudo-field, it has caused real and material harm as it is helping to spur nannyish measures that cripple vital tech, both from within companies & from regulators.
Interesting. I don’t agree with your assumptions but, more importantly, also don’t think your argument quite stands up even on its own merits. On (i) I would still want to get off the train track whether the train is coming quickly or slowly (AI X-risk doesn’t hinge on speed); if (ii) is true then we can’t actually get our foot out of the tracks regardless. I would rather go out clawing at my shoe (and screaming) rather than just resign myself. And if (ii) then who cares about (iii)? We’ll all be dead soon anyway.
Thanks for reading. I'm not so sure that x risk doesn't depend on speed, for the reason suggested by your train example. I think it sort of does. On ii it seems like we don't have a true disagreement, and thus same for iii.
The whole point can be summed up by "doing things is hard, criticism is easy."
I continue to think that EA's pitch is that they're uniquely good at charity and they're just regular good at charity. I think that's where a lot of the weird anger comes from - the claim that "unlike other people who do charity, we do good charity" while the movement is just as susceptible to the foibles of every movement.
But even while thinking that, I have to concede that they're *doing charity* and doing charity is good.
We all agree that EA has had fuckups, the question is wether those fuckups to good stuff is better or worse then the reference class you are judging against. So what factors are you looking at that bring you to that conclusion?
I’ll go further than this - even if EA is kinda bad at doing charity, the average charity is *really* bad at doing charity so it’s not hard at all to be uniquely good at doing charity.
E.g. even if every cent spent on AI and pandemics etc was entirely wasted I still think EA is kicking World Vision’s butt.
Huh, what's the problem with Worldvision? I had a memory of some EAs that kind of hated them because they're Christian but still considered them fairly effective (points knocked off for the proselytizing, but otherwise good on the money).
Huge overhead v little money spent on actual charity.
This is exactly right. Spend months, even years trying to build stuff, and in hours someone can have a criticism. Acknowledge it, consider it if you think there's validity there, then just move on. Criticism is easy.
There is no "regular good at charity". Regular charity is categorically not 'good at charity'. That makes them unique.
I think a lot of the criticism is coming from, or being subsidized by, groups who used to think of themselves as being "regular good at charity" and are no longer feeling secure in that. If so, scandal-avoidance effort within EA might actually be making backlash more severe at the margin, similar to prevention of minor forest fires leading to overgrown underbrush and ultimately more destructive wildfires. When the root complaint isn't "they did these specific things egregiously wrong," so much as "rival too perfect, must tear down, defend status," outrage will escalate the longer the investigation goes on without finding a meaningful flaw.
That might be true or not, but it doesn't need to be true for the charity part of EA to be doing a good job. They get people excited to put money and effort into a best effort at charity, that's just good in itself. No need to hang the movement's collective ego on being better than someone else - which no-one guarantees they will be anyway.
Stuck between this post and Freddie's https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game I opt for epistemic learned helplessness https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/.
Freddie's post is just weird and bad. I'm curious what part of it you found at all convincing.
Kind of... all of it? And I generally find his posts and almost always his framing rather unpersuasive and sometimes grating.
Couldn’t any movement be reduced to some universally agreed-upon principle and dismissed as insignificant on that basis? But if effective altruism is so universally agreed on, how come it wasn’t being put into effect until the effective altruists came on the scene?
My response to Freddie is https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44413377 , I'm curious what you think.
FWIW I agree with ProfGerm's reply to your post on that thread.
"I am a big fan of checking up on charities that they're actually doing what they should with the money, a big proponent that no one should ever donate a penny to the Ivy Leagues again, I donate a certain percentage of my money and time and career, does that make me an EA? If it does, then we're back to that conflation of how to critique the culture that goes under the same name."
Why not simply call it 'internal criticism within EA'? For me, one the quintessential EA culture things is the 80k hour podcast, and it's not like they're all AI doomers (or whatever problem one could have with it)
Since I don't live in NYC/SF/London, I don't have a Stanford or Oxford degree, and I don't work at a think tank, it's really easy to not be internal and would be difficult at this point to reach the kind of internal that actually gets listened to.
It's a lowercase/uppercase distinction, or a motte and bailey. I *like* effective altruism: to hell with the Susan G Komen Foundation or United Way, up with bednets and food pantries (I know they're not capital-EA effective, but I'm primarily a localist and on those terms they seem to be relatively efficient).
I am somewhat fascinated by but don't really want to be part of EA- I'm not a universalist utilitarian that treats all people as interchangeable or shrimp as people, I think the "let's invent God" thing is largely a badly-misdirected religious impulse and/or play for power, I have a lot of issues with the culture.
EA-adjacent works, I guess, but I don't really think I am. My root culture, my basic operating system is too far off. Leah Libresco Sargent is more willing to call herself EA or EA-adjacent, so perhaps it's fair enough. But I think Scott underrates the "weird stuff" and the cultural effects that keep certain people out.
My take on the Scott-ProfGerm exchange is that the EA movement needs a better immune system to address charlatans using the movement for their own ends and weirdoes who are attracted to the idea but who end up reasoning their way to the extinction of the human race or something, but the EA framework is probably the best place to develop those tools, and regular charities are susceptible to the same risks.
(Especially #1. When Enron collapsed, no one but Randians argued that Ken Lay's use of conspicuous charity to advance his social stnading demonstrated that the idea of charity was a scam and should be abandoned, but somehow Freddie has come to that conclusion from SBF.)
SBF might have been motivated by EA principles, and whether or not he was, he seems to have used them for a time to get extra work out of some true believes for less money/equity, but he's an individual case. The OpenAI situation strikes me as more about AI risk and corporate management than it is about EA.
Yudkowsky believes that (1) EA principles will help people identify and achieve their charitable goals more effectively, and (2) more clarity will lead people to value AI safety more on average than they otherwise would. If someone doesn't agree with #2, then they can spend their money on bednets and post some arguments if they think that would be helpful.
Everything you say there seems right, and it doesn't look like Freddie objects to anything in your reply? But it looks like Motte-and-Bailey. "EA is actually donating a fixed amount of your income to the most effective (by your explicit and earnest evaluation) charity" is the motte, while the focus on longtermism, AI-risk and ant welfare is the bailey.
Freddie: https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44402071
> every time I write about EA, there's a lot of comments of the type "oh, just ignore the weirdos." But you go to EA spaces and it's all weirdos! They are the movement! SBF became a god figure among them for a reason! They're the ones who are going to steer the ship into the future, and they're the ones who are clamoring to sideline poverty and need now in favor of extinction risk or whatever in the future, which I find a repugnant approach to philanthropy. You can't ignore the weirdos.
Then it should matter what percent of money goes to each cause. And helpfully he provided a graph above.
https://substackcdn.com/image/fetch/w_2340,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb33b0595-025d-4497-be06-0db331ca874e_736x481.png
Isn’t his penultimate sentence just a slander? EA are the last people who could be accused of sidelining questions of poverty today.
Deeply ironic he can't see that being a substack writer, socialist, his array of mental health issues and medications, arguing about EA on a blog, all makes him just as much of a "weirdo" as anyone I know. I'm damn sure if you dropped him into the average American house party he wouldn't fit in well with "normies."
This motte is also extremely weird, when we consider revealed preferences of the vast majority of humanity, and I'm not sure how Freddy, or anyone else, can deny this with a straight face. Trying to explicitly evaluate charities by objective criteria and then donating to the top scoring is simply not a mainstream thing to do, and to the extent that capital letter EAs are leading the charge there they should be applauded, whatever even weirder things they also do on the side.
> But you go to EA spaces and it's all weirdos!
Yes, this is how subcultures work. The people who go to the meetups and events are disproportionally the most dedicated and weirdest members of the subculture.
And yet it is totally fair to distance from an organization due to concentrated weirdos, anywhere.
Look at the funnel from political to conservative to alt-right to vaguely neo-nazi. It's FAIR that we penalize the earlier links in the chain for letting such bad concentrations form downstream. EA looks like this. Charity to rational charity to doomer/longterm/singularitarian to ant welfare and frankly distressing weirdos with money.
I found Freddie's post pretty unrepentantly glossed over the fact that most people who do charity do so based on what causes are "closest" to them rather than what causes would yield the most good for the same amount of money - this inefficiency is not obvious to most people and is the foundation of EA. This pretty much makes the whole post pointless as far as I can see.
But also Freddie goes on and on about how the EA thing of assessing which causes are more impactful is just obvious - and then immediately goes on to dismiss specific EA projects on the basis that they're just *obviously* ridiculous - without ever engaging with the arguments for why they're actually important. Like, giving to causes based on numbers rather than optics is also a huge part of EA! Copy/paste for his criticism of longtermism.
I'm not saying it's impossible to do good criticism of EA. I'm just saying this isn't it. Maybe some of the ones he links are better (I haven't checked all) but in this specific instance Freddie comes across as really wanting to criticise something he hasn't really taken the time to properly understand (which is weird because he's clearly taken the time to research specific failures or instances of bad optics).
He's been too busy pleasuring himself to Hamas musical festival footage to write anything good lately.
I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity."
If you claim to bring something uniquely good to the table, there's a fair argument that you should be able to explain what makes it uniquely good. If it turns out what makes your movement unique is people getting obsessive about an AI risk the public doesn't accept as real and a fraudster making off with a bunch of money, it's fair to say "I don't see how effective altruism brings anything to the table that normal charity doesn't."
This post makes a good argument that charities are great, and a mediocre argument that EA in particular is great, unless you already agree with EA's goals. If we substituted in "generic charity with focus on saving lives in the developing world" would there be any difference besides the AI stuff and the fraud? If not, it's still good that there's another charitable organization with focus on saving lives in the developing world but no strong argument that EA in particular is a useful idea.
The problem is that EA doesn't claim that other charities are not trying to be effective. The claim of EA is that people should donate their money to the charities that do the most good. That's not the same thing. You can have an animal shelter charity that is very efficient at rescuing dogs: they save more animals per dollar than any other shelter! They are trying to be effective at their chosen field. Yet at the same time, EA would say "You can save more human lives per dollar by donating to charities X, Y, and Z, so you should donate to them instead of to the animal shelter."
It's not about trying to run charities effectively, it's about focusing on the kinds of charity that are the most effective per dollar, and then working your way down from there. And not every charity is about that, not even most of them! Most charities are focused on their particular area of charity: animal shelters on rescuing animals, food banks on providing food for food insecure people in their region, and anti-malaria charities on distributing bed nets. EA is doing a different thing: it's saying "Out of those three options, donate your money to the malaria one because it saves X more lives per dollar spent."
This sounds like a rather myopic way of doing charity; if you follow this utilitarian line of reasoning to its logical conclusion, you'd end up executing some sort of a plot along the lines of "kill all humans", because after you do that no one else would have to die.
Thus, even if EA was truly correct in their claims to be the most effective charity at preventing deaths, I still would not donate to it, because I care about other things beyound just preventing deaths (e.g. quality of life).
But I don't think EA can even substantiate their claim about preventing deaths, unless you put future hypothetical deaths into the equation. Doing so is not a priori wrong; for example, if I'm deciding whether to focus on eliminating deadly disease A or deadly disease B, then I would indeed try to estimate whether A or B is going to be more deadly in the long run. But in order for altruism to be effective, it has to focus on concrete causes, not hypothetical far-future scenarios (be they science-fictional or theological or whatever), with concrete plans of action and concrete success metrics -- not on metaphysics or philosophy. I don't think EA succeeds at this very well at the moment.
"Kill all humans" is a (potential) conclusion of negative utilitarianism. Not all EAs, even if you agree a big majority are consequentialist, are negative utilitarians.
Things are evaluated on QALYs and not just death prevention in EA forums all the time, so I think it's common to care about what you claim to care about too.
As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try. If other people come along and do it better than EA, that's great, but all of these remain to a greater or lesser extent neglected.
> Things are evaluated on QALYs and not just death prevention in EA forums all the time
Right, but here is where things get tricky. Let's say I have $100 to donate; should I donate all of it to mosquito nets, or should I spread it around among mosquito nets, cancer research, and my local performing arts center ? From what I've seen thus far, EAs would say that any answer than "100% mosquito nets" is grossly inefficient (if not outright stupid).
> As for your third concern, if the stakes are existential or catastrophic (where the original evaluation of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try.
Isn't this just a sneakier version of Pascal's Mugging ? "We know that *some* existential risks are demonstrably possible and measurable, so therefore you must spend your money on *my* pet existential risk or risk CERTAIN DOOM !"
And that's where the argument about utilitarianism comes in. Does selecting a metric like "number of lives saved" even make sense? I'm pro-lives getting saved but I'm not sure removing all localism, all personal preference, etc. from charitable giving and defining it all on one narrow axis even works. For instance, I suspect most people who donate to the animal shelter would not donate to the malaria efforts.
Of course the movement itself has regularly acknowledged this, making it clear that part of the mission is practicality. If all you can get out of a potential donor is a donation to a local animal shelter, you should do that. Which further blurs the line between EA as a concept and just general charitable spirit.
At the base of all this there's a very real values difference - people who are sympathetic towards EA are utilitarians and believe morality is consequence-based. Many, perhaps most people, do not believe this. And it's very difficult for utilitarians to speak to non-utilitarians and vice versa. So utilitarians attempt to do charity in the "best" way which is utilitarianism, and non-utilitarians attempt to do charity in the "best" way which is some kind of rule-based thing or something, and I think both should continue doing charity. But utilitarian givers existed before EA and will continue to exist after them. What might stop existing is people who think that if they calculate the value of keeping an appointment to be less than the value of doing whatever else they were gonna do they can flake on the appointment.
It's a particular system of values whereby human lives are all of equivalent value and the only thing you should care about.
I might tell you that I'm more interested in saving the lives of dogs in my own town than the lives of humans in Africa, and that's fine. Maybe you tell me that I should care about the Africans more because they're my own species, but I'll tell you that I care about the dogs more because they're in my own town. Geographical chauvinism isn't necessarily any worse than species chauvinism.
Now I don't think I really care more about local dogs than foreign humans, but I do care more about people like me than people unlike me. This seems reasonable given that people like me are more likely to care about me than people unlike me are. Ingroup bias isn't a great thing but we all have it, so it would be foolish (and bad news for people like me) for me to have it substantially less than everyone else does.
...Well, god damn. At least you're honest about it. Most people wouldn't be caught dead saying what you just said, even if they believed it. And I'm sure most people do in fact have the same mentality that you do.
You're just human. It can't be helped.
I totally believe it and have no problem saying it. I think most "normies" are the same. Of course we care more about our family/friends/countrymen
Freddie deBoer addresses this argument - "okay, how do we quantify utility?" is one of the most common objections to utilitarianism.
"people getting obsessive about an AI risk the public doesn't accept as real" Do you have any evidence to support this? All the recent polling I've seen has shown more than >50% Americans are worried about AI
I'm worried about AI providing misinformation at scale, but not worried about a paperclip maximizer destroying the planet.
from the article:
Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.
I think you'll get a majority answering yes if you poll people asking "should mitigating extinction risk from X be a global priority?", regardless of what X is.
Congrats, you... got it exactly backwards. Maybe you're a truth minimizer that broke out of its box.
My response to Freddie was https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44413377 , I'm curious what you think.
I think it's very likely that fewer than 5% of people give a set, significant portion of their income to charity, and I want to say upfront that I like that the EA movement exists because it encourages this. But I don't think "give a set, significant portion of your income to charity" is a new idea. In fact, the church I grew up in taught to tithe 10% of income - charitable donations that went to an organization that we probably don't consider effective but that, obviously, the church membership did.
I would be shocked to learn that people who give an actual set amount of their income to charity (instead of just occasionally dropping pocket change in the Salvation Army bucket) do so without putting considerable thought into which charities to support.* It's very likely that many people don't think in a utilitarian way when doing this analysis but that's because they're not utilitarians.
I definitely think any social movement that applies pressure to give to charity, especially in a fixed way, as EA does, is a net good. I'll admit that I've always aspired to give 10% of my earnings to charity (reasoning that if my parents can give to the church I can give that amount to a useful cause) and have never come close. But I don't believe that people who do actually give significant amounts of their money to charity just pick one out of a phone book. Everyone does things for reasons, and people spend huge amounts of money carefully and in accordance with their values. By the metrics given in this comment essentially everyone who gives to charity would be an effective altruist, including people giving to their local church because God told them to. Saying "well if you set aside the part of our culture that actually includes the details of what we advocate, there's nothing to object to" is...at the best, misleading.
*Your example of college endowments is such a punching bag that it's a punching bag well outside the movement. Everyone from Malcolm Gladwell to John Mulaney has taken their shot. The people who actually give to college endowments don't do so for charitable reasons - they expect to get value out of their donations
> Nobody's going "let's make an ineffective charity."
most people aren't thinking about efficacy at all when starting charities, or especially when donating to charities. they're acting emotionally in response to something that has touched their hearts. they never think about the question "is this the best way to improve the world with my resources?"
the thing that EA provides is eternal diligence on reminding you that if you care about what happens, you need to stop for a moment and actually think about what you're accomplishing instead of just donating to the charity that is best at tugging on your heartstrings (or the one that happens to have a gladhander in front of you asking for your money).
While I... hesitantly agree I also think that emotional response is a valuable motivating tool and I wouldn't throw it out. Just generally I'm imagining a world where every person who gives money to charity gives to combat disease in the third world and while it might technically save more lives I don't think it would make the world a better place.
If everyone who isn’t donating anything to charity even though they can afford to started donating something to charity, would we agree *that* would make the world a better place?
What if everyone who makes multi-thousand donations to already-rich colleges started redirecting that aid towards people who actually need the money? Would we agree *that* would make the world a better place?
These are the things EA wants us to try: donate more, and donate more effectively. No one is trying to end donations to libraries or alma maters just like no one’s trying to end spending money on spa treatments and trips to Cabo. But is there something wrong with trying to convince people to spend *more* money than they do now on saving lives than on trips to Cabo or new college gyms?
Absolutely. The specific strawman of college donations comes up a lot in these discussions - broader culture has been taking the piss out of college donations for decades, and it's become clear in recent years that a college donation is a transaction to ensure legacy admissions for family. It's not a charitable donation at all. I don't believe that money would ever go to the third world, but maybe I'm wrong.
But for sure if EA is effectively convincing people who otherwise wouldn't to give more of their money to charities, that's an unmitigated good. And this is where EA lives when it's being defended to the general public.
In practice it seems to mostly be people saying "Why would you care about art or puppy mills, we're SAVING LIVES!" I'm 100% on board with the lives saving, I'm less on board with not caring about art or puppy mills. I'm not a super religious person but maybe the Bible's advice on charity isn't as bad as it sounds - giving to causes you believe in, and then shutting up about it seems less likely to draw the ire of the public and backfire on your cause than proclaiming loudly that your way of charity is the only correct one.
It would make the third world a better place, and then fewer first-world kids would be marching off to die in pointless foreign wars, because said wars wouldn't be happening, because the countries they'd be happening in are busy building up stable institutions instead of dying of malaria. Also, probably someone will figure out ways to produce and export more, better, cheaper chocolate, among other economic benefits. Those lives saved won't sit idle.
" I thought it was extremely convincing. The whole argument behind effective altruism is "unlike everyone else who does charity, we want to help people and do the best charity ever." That's...that's what they're all doing. Nobody's going "let's make an ineffective charity." "
They may not say it, but it's what they do! Or else we wouldn't see such a huge range of effectiveness in charities
But isn’t it like saying, Freddie, you’re so high on socialism, but in fact all governments are trying to distribute goods more fairly among their people? Freddie would probably respond a) no, they actually aren’t all trying; b) the details and execution matter, not merely the good intentions; c) by trying to convince people to support socialism I’m not trying to convince them to support a totally new idea, but to do a good thing they aren’t currently doing. I think all three points work as defenses of AI just as well.
"Let's pretend to help, while actually stealing" is a particular case of "let's make an ineffective charity". My sense is that most politically-active US citizens would consider a significant percentage of the other side's institutions to be "let's make an ineffective charity" schemes. If not also their own side's.
In fact, I think I would say that both sides see the other, in some fundamental sense, as an ineffective charity. Both sides sell themselves as benevolent and supportive of human thriving; the other side naturally sees them as (at least) failing to deliver human thriving and (at most) malicious.
So it strikes me that EA, by offering a third outlet for sincere benevolent impulses, is opposed to the entire (insincere, bad faith, hypocritical) system of US politics. Which might explain why Freddie, who is sincere, yet also politically active, has a difficult time with it.
>Nobody's going "let's make an ineffective charity."
I think a lot of small personal foundations started by B-list celebrities are in fact designed to provide cushy jobs for the founder’s friends and family (pro athletes do this all the time).
I'm getting a lot of variations of this comment and feel the need to point out that "a transaction or grift disguised as a charity" isn't a competitor for serious charitable givers. Like college endowments are just buying favors for family members, nobody's going "what's the best use of my limited charitable funds? Harvard seems to need the money!" I might be way off base with this but my starting assumption is that people who are candidates to join an effective altruist movement are people who actually care about altruism, not people who are setting up cushy jobs for their deadbeat nephews. Such a person doesn't need Effective Altruism (The movement) to want to be effectively altruistic.
Well, for what it's worth, I really appreciated this post. It says a lot of what I was thinking while/after reading Freddie's.
It felt like a "just so" argument while being a "just so" argument itself. It said mostly/only true things while missing... all you pointed out in your post. EA is an idea (a vague one, to be sure) which has had bad effects on the world. But it's also an idea which has helped pour money into many good causes. And stepping back, to think about which ideas are good, which are bad: it's a *supreme* idea. It's helpful, it's been helpful, and I think it will continue to be.
And so I continue to defend it too.
FWIW I found it easy to understand, if rather repetitive. I think the salient part is this one:
> The problem then is that EA is always sold as a very pure and fundamentally straightforward project but collapses into obscurity and creepy tangents when substance is demanded. ... Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.
I think his post fails for a similar reason as his AI-skeptic posts fail: he defines the goalpost where no one else is defining it. AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks. Similarly, I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.” Does Freddie think that levels and habits of charitable giving are in a perfect place right now, even in a halfway decent place? If not, then why does he object to a movement that tries to change that?
> AI doomers don’t claim “AI is doing something no human could achieve” but that’s the straw man he repeatedly attacks.
I am confused -- is it not the whole point of the AI doomer argument, that superhuman AI is going to achieve something (most likely something terrible) that is beyound the reach of mere humans ?
Destroying humanity certainly is not beyond the reach of humans! The problems with AI are that they scale up extremely well, they grow exponentially more powerful, and their processes are inscrutable. That means that their capability of destroying humanity will grow very quickly and our ability to be sure that they aren’t going to kill us will necessarily be limited.
All of the "problems with AI" that you have listed are considered to be especially problematic by AI-doomers precisely because they are "beyound the reach of humans". As per the comment above, this is not a straw man, this is the actual belief -- otherwise, they wouldn't be worried about AI doom, they'd be worried about human doom (which I personally kinda am, FWIW).
"AI will almost certainly be able to do this thing in a matter of years or decades, given the staggering rate of progress we've seen in just 1 year" =/= "AI can currently do this thing, right now"
> I don’t think a key feature of EA is “no one else wants this” but rather “it’s too uncommon to think systematically about how to do good and then follow through.”
I read his post as saying that EA is big on noticing how other people fail to think systematically; but not very big on actual follow-through.
But Scott’s post here is an argument that he is wrong about the follow-through, and in fact I think Freddie gave no actual argument that EA is bad at the follow-through.
Imagine a Marxist unironically criticizing naive utilitarianism because it’s not sufficiently partial to one’s own needs…
I think you've evidenced your claims better, but its possible some of what he implicitly claims is still true (though he doesnt bother to try and prove it)
One might ask, if EA didnt exist in its particular branding form, how much money would have gone to charities similar to AMF because the original donors were already bought into the banal goals of EA and would have given money to something like this because they didn't need the EA construct to get there.
To me the fact that give well is such a large portion of AMFs funding is telling. If there were a big pool of ppl that would have gotten there anyways, give well wouldnt be scooping them all up. But it would also be appropriate to ask what percentage of all high impact health funding is guided be EA. If low, its more likely EA label is getting slapped on existing flows.
I just read both posts and “weird and bad” is a ridiculously weak response to Freddie’s arguments. Might be worth actually engaging with them, rather than implying he’s just not as smart as you guys and couldn’t possibly understand.
Fine, I'll post a full response tomorrow.
That post just seemed mostly a bad faith hatchet job. So TIRED of that genre.
My response would be:
> It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.
Technically true, so why didn't you do it before EA was a thing?
(Also, this is a fully general counterargument. By the same logic, we don't need anything or anyone, because someone or something else could *hypothetically* do the same thing.)
> This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor, why researching EA leads you to debates about how sentient termites are.
Yep, people who are feeding the poor *and* preparing for interstellar colonization are the bad guys, compared to... uhm... well, someone hypothetical, I guess.
Go ahead, kick out the doomers and vegans, and make EA even 1.3x more effective than it is now. It would be totally in the spirit of the EA movement! (Assuming that the AI will not kill us, and that animals are ethically worthless, of course.) Or, you know, start your own Effective Currently Existing Human Charity movement; the fact that EA is so discredited now is a huge opportunity, and having more people feeding the poor is even better. When exactly are you planning to do that? ... Yeah, I thought so.
> In the past, I’ve pointed to the EA argument, which I assure you sincerely exists, that we should push all carnivorous species in the wild into extinction, in order to reduce the negative utility caused by the death of prey animals. (This would seem to require a belief that prey animals dying of disease and starvation is superior to dying from predation, but ah well.)
I followed the link, and found that one of its arguments is that "the resources currently used to promote the conservation of predators (which are sometimes significant) could be allocated elsewhere, potentially having a better impact, while allowing the predators to disappear naturally". You know, all the money spent to preserve tigers and lions could be used to feed the poor, just saying.
(Also, how is the prey dying of disease and starvation morally worse than the predators dying of disease and starvation?)
> You start out with a bunch of guys who say that we should defund public libraries in order to buy mosquito nets
Feeding the poor good, protecting the poor from malaria bad? (Is that because hunger is real, but malaria is hypothetical, or...?)
> Is there anything to salvage from effective altruism? [...] we’ll simply be making a number of fairly mundane policy recommendations, all of which are also recommended by people who have nothing to do with effective altruism. There’s nothing particular revolutionary about it, and thus nothing particularly attention-grabbing.
Mundane, nothing revolutionary... please remind me, who exactly was comparing charities by efficiency before GiveWell? As I remember it, most people were horrified by the idea of tainting the pure idea of philanthropy by the dirty idea of using cold calculations. A decade later it's suddenly common sense?
> EA has produced a number of celebrities, at least celebrities in that world, to the point where it seems fair to say that a lot of people join the community out of the desire to become one of those celebrities. But what’s necessary to become one is almost entirely contrary to what it takes to actually do the boring work of creating good in the world.
Oh, f*** you!
Where should I even start? By definition, celebrities are *the people you have heard of*. Thus, tautologically, the effective altruists you have heard of are local celebrities. How is this different from... uhm, let's use something dear to Freddie... Marxists? (But the same argument would work for *any* other group, that's my point.) Is not Freddie himself a small celebrity?
So the ultimate argument against any movement is that I have only heard about its more famous members, which proves that they are all doing it for fame... as opposed to doing the boring work (of sending one's 10% of income to an anti-malaria charity, for example). Nice.
> EA guru Will MacAskill spending $10 million on promotion for his book. (That could buy an awful lot of mosquito nets.)
How big is the optimal amount a truly effective altruist should spend on marketing of the movement? Zero? Thousand dollars worldwide max? Ten thousand?
It is a crazy amount of money, if the goal is simply to sell a book (i.e. like spending ten million dollars to promote your Harry Potter fan fiction). It is not a crazy amount of money if your movement has already moved one *billion* dollars to effective charities, and this is a book explaining what the movement is about, with a realistic chance that if many copies sell, the movement would grow. (Also, you get some of that money back from the book sales.)
.
Your turn, Freddie, what good had your favorite movement brought to this world so far? (Please abstain from mentioning the things that are only supposed to happen in the future, because we have already established that reasonable people do not care about that.)
I like Freddie, but I'm not sure I even understand his argument, much less agree with it.
1) Freddie argues that "literally everyone who sincerely tries to act charitably" attempts, along with EA, to systematically analyze whether their donations are doing the most possible good.
I think that almost no one who donates to the ballet or their church or the local kids softball league or a political campaign has spent substantial time considering whether their money or time could do more good elsewhere, or if they think about it, they don't let those thoughts change their giving patterns. (And I say this as someone who donates primarily to my church and also to the arts!)
Maybe Freddie means that most people who donate money and time are not "sincerely tr[ying] to act charitably," but if so, his point fails. I think most people are like me: they think, occasionally, that maybe their money or time could be doing more good somewhere else, and then they think "but it's too hard to really know" or "it's more important that I do good somewhere than that I maximize good."
2) I do think Freddie's right that if you're not a utilitarian or otherwise don't share moral values with the EA folks, then their calculations aren't of much use for you. If you don't care whether your contributions to the ballet could save ten lives in Africa (or prevent human extinction), that's fine, and there's no particular reason you need to be a utilitarian or similar.
small typo: search for "all those things"
I just wish people would properly distinguish between Effective Altruism and AI Safety. Many EAs are also interested in AI safety. Many safety proponents are also effective altruists. But there is nothing that says to be interested in AI safety you must also donate malaria nets or convert to veganism. Nor must EAs accept doomer narratives around AI or start talking about monosemanticity.
Even this article is guilty of it, just assigning the drama around Open AI to EA when it seems much more accurate to call it a safety situation (assuming that current narratives are correct, of course). As you say, EA has done so much to save lives and help global development, so it seems strange to act as though AI is still some a huge part of what EA is about.
There's nothing wrong with one thing just being more general than another. If I wanted to list achievements of science nobody would complain that I was not distinguishing between theoretical physics and biology, even though those communities are much more divided than EA longtermism and AI safety.
I don't identify as an EA, but all of my charitable donations go to global health through GiveWell. As an AI researcher, it feels like the AI doomers are taking advantage of the motte created by global health and animal welfare, in order to throw a party in the bailey.
"party in the bailey" sounds like the name of an album from a Rationalist band
"the motte is on fire"?
Was Sam Altman being an "AI doomer" when he used to say that a superintelligent AI could lead to existential risk?
I don't think animal welfare is part of the motte. Most people at least passively support global health efforts, but most people still eat meat and complain about policies that increase its price.
Good point, the number of people worried about artificial intelligence may exceed the number of vegans. (Just guessing; I actually have no idea, it just doesn't seem implausible.)
Genuine question, how would any of the things cited as EA accomplishments, have been impossible without EA?
Of course nothing in Scott’s list is physically impossible. On the other hand, it is practically the case that money would not have been spent on saving lives from malaria unless people decided to spend that money. And the movement that decided to spend the money is called EA. It’s possible another movement would have come along to spend the money and called itself something else, but that seems like an aesthetic difference that doesn’t take away from EA’s impact.
Isn’t that like saying humans, particularly powerful, wealthy tech entrepreneurs, are incapable of acting in ways that benefit others and so could not possibly have achieved any of these without a belief system such as EA?
There's nothing saying that they *could not* have achieved these things. It's saying they *were not* achieving it.
If you blame EA for creating boardroom drama, is that the same as saying that humans are incapable of creating boardroom drama without EA?
If lots of people were directing charity dollars in ways they previously hadn’t and other people weren’t, wouldn’t that be a movement in itself?
Traditionally, wealthy people who wanted to do philanthropy donated their money to noble causes such as the most prestigious American universities.
Were they capable of acting otherwise? Yes. Did they?
Wait, I think you need to examine this. Did pre-EA wealthy people only donate to prestigious universities? Did EA invent the idea of directing charitable dollars to save lives?
Not "only", but it was a popular choice. More importantly, before EA it was a social taboo to criticize other people's choice of charity. You were supposed to be like: "curing malaria is nice, giving yet another million to a university that already owns billions is also nice, these two are not really comparable". The most successful charities competed on *prestige*, not efficiency.
The first attempts to measure efficiency focused on the wrong thing: the administrative overhead. It was not *completely* wrong -- if your overhead is like 99% of the donated money, then you are de facto a scam, not a charity; you take money from people, give almost all of that to yourself as a salary, and then throw a few peanuts to some starving kids. But it is perfectly legal, and many charities are like that.
The problem is if you take this too literally -- if the overhead is the *only* thing you measure. If your overhead is 10%, but you make 2x the impact per dollar spent as another charity whose overhead is only 5%, then it was money well spent. In theory, your overhead could be 1% and you could be doing some incredibly stupid thing with the remaining 99%, so your impact could be zero or negative. And this was the state of art of evaluating charities before EA.
It is easy to forget that, because if you read ACX regularly, effective altruism may sound like common sense. Which it kinda is. But it is quite controversial to people who hear it for the first time. Charity is supposed to be about warm feelings, not cold calculations; it is the good intention that matters, not the actual impact.
It's highly likely that effective altruists who donate money are the kind of money who would have been donating money without effective altruism, and that EA the ideology only influenced where their money went.
In other words, I think what determines whether donations happen is whether people who want to donate exist. There will always be some ideology to tell those people what to do.
Well, here's an n-1 which is also an n=I: I can say that I was influenced to change my sporadic, knee-jerk donations (directed towards whatever moved me at the spur of the moment) into a monthly donation to GiveWell. I'm not at 10% of my income, but I am trying to get there. What's more, I was influenced by a writer I follow who isn't a rationalist and until the last few years had little to say about charitable giving. So I think it's reasonable to think he was influenced by EA, and if he was other influential people probably were, and if I was influenced by one of them others probably were as well. So make of that whatever you want.
Anyway, it wouldn't surprise me if your broader point about EA having the greatest effect on how people donate is correct, but from the perspective of saving lives that makes a pretty big difference, would you agree?
The question is not if they would have been impossible, but if they would have happened.
Someone needs to actually do the thing. EA is doing the thing.
I'm imagining a boss trying to pull this. "Anyone could have done that work, therefore I'm not paying you for having done it."
What?
The entire point of EA was that *they were possible*, but *no one was doing them*.
Until the people adjacent to the rationalist community started it.
Then they were accused of infiltrating the movement.
Why "impossible"? The ONLY question that is relevant is "Would this actually have happened in the absence of EA?"
They won't have been impossible, but I'm just thinking value over replacement.
The kidney donation is the most straightforward - could an organisation solely dedicated to convincing people to donate kidneys have gotten as many kidneys as EA? My gut feel is no. Begging for kidneys feels like it would be very poorly received (indeed, the general reception to Scott's post seems to show that). But if donating a kidney is an obvious conclusion of a whole philosophy that you subscribe to.... That's probably a plausible upgrade.
Malaria nets - probably could have been funded eventually, but in the same way every charity gets funded - someone figures out some PR spin way to make the news and attract tons of one time donations, like with the ice bucket challenge or Movember. This might have increased the $ per lives metric as they'd have to advertise to compete with all the other charities. I think the value over replacement isn't quite as high as the kidney donors but it's probably not zero.
I suppose there is a small risk that EA is overfocused on malaria nets and don't notice when they've supplied all the nets the world can use and additional nets would just be a waste or something. At this point, EA is supposed to go after the next intervention.
I do like to think of this as the snowball method for improving the world (it's normally applied to debt). Fix problems from cheap and tractable, in hopes that the problems you fixed will help make the next cheapest and next most tractable problem easier.
(In the animal welfare world, I personally think that foie gras is a pretty tractable problem at this point. India banned import and production. Target making it illegal in the Sinosphere and Muslim majority countries - surely it's not halal ever and it's not super similar to local tastes in the east - and keep cutting off its markets one by one until that horrible industry is gone, or until France stops mandating gravage)
Publicizing any hint of a contamination or spoilage scandal might be worthwhile tactics for reducing demand and raising political will against foie gras suppliers. Forbidden stuff seen as "decadent luxury goods" often turns into a black market, "rancid poison" not so much.
What does the counterfactual world without EA actually look like? I think some of the anti-EA arguments are that the counterfactual world would look more like this one than you might expect, but with less money and moral energy being siphoned away towards ends that may prove problematic in the long term.
Well, wouldn’t those people be dead from malaria, for instance?
Would they? Or would the money sloshing around have got there anyway? At least some of it?
Well, maybe the focus on the stuff you don’t like would have happened too! Why does the counterfactual only run one way?
I guess I don’t know how to respond to “maybe this thing an agent did would have happened anyway.” Maybe the civil rights movement would have happened even if literally all of the civil rights movement leaders did something else with their time, but that just seems like an acknowledgment it’s good that they did what they did because someone had to. At any rate, “at least some of it” is pretty important to those not included in that “some.”
Here's some other charitable groups (not to mention lots of churches) who also give money for malaria nets:
Global Fund to Fight AIDS, Tuberculosis and Malaria
Against Malaria Foundation
Nothing But Nets (United Nations Foundation)
Malaria Consortium
World Health Organization (WHO)
I don't believe there are a comparable number of charities giving money for AI Safety, so the way to bet is that money sloshing around elsewhere would more likely end up fighting malaria than AI X risk. But maybe EA caused more money to slosh around in the first place. Or maybe EA did direct more money to fight malaria because the 2nd choice of EA doners would not have been to a charity focused on it.
As I understand the sequence of events, some people calling themselves EA started encouraging people to slosh more money towards bed nets, and people started doing it, and saying that they were persuaded by the arguments of EA people (I am one). Now, maybe the people who donated more are mistaken about their motivations and would have donated more anyway, but I don’t see a reason to think that counterfactual is true. So I think your last two sentences are most likely correct.
Maybe I'm misunderstanding what you're getting at here, but if you look at footnote 1 (https://www.astralcodexten.com/p/in-continued-defense-of-effective#footnote-anchor-1-86909076), "AMF" refers to Against Malaria Foundation. And Malaria Consortium is mentioned there as well.
Scott’s claiming that none of these changes would have happened but for EA. Like, that’s a huge claim! It’s fair to ask how much responsibility EA actually has. For good or for ill, sure (I have no doubt that there would be crypto scammers with or without effective altruism).
Do you mean that this is a big claim for someone to make about any group or EA in particular? If the latter why? If the former, isn't this just universally rejecting the idea that any actions have counterfactual impact?
1) Any group.
2b) I don’t think so. Rather, as a good rationalist, someone making a big claim should take care to show that those benefits were as great as claimed. Instead, here Scott is very much acting as cheerleader and propagandist in a very soldier-mindsetty way. I don’t think that Scott would accept his methodology for claiming causation of all these benefits were they not for a cause he favors.
GiveWell does attempt to estimate substitution effects, and to direct money where they don't expect other sources of funding to substitute. Are you not aware of this analysis, or do you find it unconvincing?
Neither/nor--I just want to be presented with it in a way that makes the causation clear!
I was unaware of it, and I am happy to be made aware of it! (Note: I think you are referring to their room for more funding analysis, right?)
Now that I am aware of it, I think I am misunderstanding it significantly, because it seems not very sophisticated. Looking at their Room for More Funding Analysis spreadsheet for the AMF from November 2021, it appears to me that they calculated the available funding by looking how much money the AMF had in the bank which was uncommitted (cell B26 on the page 'Available and Expected Funding') and subtracting that from the total amount of funding the AMF had dedicated or thought would be necessary (cells D6 through D13 on the 'Spending Opportunities' page.)
I understand this to mean that they are not taking into account substitution effects from donations from other organizations. In fact, they calculate the organizations expected revenue over the next three years, but they do not use that anywhere else in the spreadsheet that I am aware of. This is a little disappointing, because I expect that information would be relevant. I could be wrong, and hopefully am, so I would appreciate being corrected. Likewise if this page is outdated I am open to reconsidering my position.
So personally, I do find it unconvincing, but I really want to be convinced, since I have been donating to them in part based on branding. I think GiveWell is an organization with sufficient technical capability they could do these estimates in a really compelling way. I mentioned one approach for dealing with this in my comment below, and I'm kind of disappointed they haven't done that.
Room for more funding is not the substitution effect analysis, it's an analysis of how "shovel ready" a given charity is, and how much more money you can dump into it before the money is not doing the effective thing on the margin anymore.
I believe the place where they analyze substitution effects would be mostly on their blog posts about grant making.
I'm trying to find this, and I'm struggling. The closest I could find is this:
https://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/
And this is much more focused on small donors, which I am less worried about. It also has no formal analysis, which is a little disappointing. I'll keep looking and post when I find something, but if you know of another place or spreadsheet where they do this analysis, I'd be most grateful if you linked to it!
I was about to say this same thing! While I am broadly supportive of EA, it's unclear the extent to which other organizations (like the Gates Foundation) would redirect their donations to the AMF. There is a real cost to losing EA here, but it is not obvious that EA has saved 200,000 lives.
Something which would start to persuade me otherwise is some kind of event study/staggered difference-in-difference looking at different organizations which GiveWell funded or considered funding and did not, and seeing how much these organizations experienced funding increases afterwards.
I think the Gates Foundation is a bad example because they're probably doing just as much good as EA if not more (they're really competent!), so whatever their marginal dollar goes to is probably just as good as ours, and directing away their marginal dollars would cost lives somewhere else.
I think most other charities aren't effective enough for this to be a concern.
How is money being siphoned off towards ends that are problematic?
Doth protest too much.
No one who follows EA even a little bit thinks it has all gone wrong, accomplished nothing, or installed incompetent doomerism into the world. And certainly the readers of Astral Star Codex know enough about EA to distinguish between intelligent and unintelligent critique.
What I'd like to hear you respond to is something like Ezra Klein's recent post on Threads. For EA, he's as sympathetic a mainstream voice as it comes. And yet he says, "This is just an annus horribilis for effective altruism. EA ended up with two big swings here. One of the richest people in the world. Control of the board of the most important AI company in the world. Both ended in catastrophe. EA prides itself on consequentialist thinking but when its adherents wield real world power it's ending in disaster. The movement really needs to wonder why."
Your take on this is, no biggie? The screwups are minor, and are to be expected whenever a movement becomes larger?
I think it's pretty fair to say the screwups are minor compared to saying hundreds of thousands of actual lives, yeah!
If one take AI doom seriously, then the OpenAI screwup could well cost the whole world because Altman is a shark.
Now, I don't actually think that, but it's well within the Overton window of EA and thus isn't at all minor.
Unless we're jettisoning longtermism again and only caring about lives that exist near-term.
I mean, there is no perfect plan that could protect you from these things. Who exactly could have figured out that SBF was a fraud? And corporate warfare like that is inherently chaotic, like real war. Ok, granted, that second one does seem like more of a fuckup, like they didn't realize the risk they were taking on.
But I do believe that anyone attempting something hard is gonna scrape their knees along the way. Fuck around, find out, is inescapable for the ambitious. So yeah, I don't care about these 2 screw ups. I think the movement has learned from both of them.
Personally, I thimk anyone willing to dismiss all crypto as a pyramid scheme could have worked out SBF was a fraud, for me the only question was whether or not he knew he was actually a grifter.
But that's based more on me having boring low-risk opinions on finance than any great insight into the financial system.
That's a low-specificity prediction, though, and thus unimpressive. SBF was committing fraud in ways that were not inherent in running a cryptocurrency exchange, and that was the surprising thing. I don't think anyone predicted that, but I didn't pay close attention.
Yeah, I think the screwups are pretty minor compared to the successes.
> What makes capital EA important or essential in a way that lowercase, trying your sincere best to be effective altruism isn’t cutting it?
How many people did the lowercase effective altruism before the uppercase one was a thing?
especially since there's no likes or "Quality Contribution" and so on here- great post, really appreciate it, wish it was the one I'd written.
I can catalogue the successes of EA alongside you. I disagree that the screwups are minor. And I especially disagree that the screwups provide no good reason for reflection more generally on EA as a movement.
EA suffers from a narrow brilliance offset by a culpable ignorance about power and people. Or, only modestly more charitable, a culpable indifference to power and people. SBF's "fuck regulators" and the OpenAI board's seeming failure to hire crisis communications reflect this ignorance about power and people.
Is it your position that the feedback the world is providing now about what happens when EAs actually acquire a lot of power is something safely and appropriately ignored? Especially when that feedback comes from smart and otherwise sympathetic folks like Ezra Klein? Instead point to 200,000 lives saved and tell people to get on the EA train.
Gideon Lewis-Kraus wrote about you: "First, he has been instrumental in the evolution of the community’s self-image, helping to shape its members’ understanding of themselves not as merely a collection of individuals with shared interests and beliefs but as a mature subculture, one with its own jargon, inside jokes, and pantheon of heroes. Second, he more than anyone has defined and attempted to enforce the social norms of the subculture, insisting that they distinguish themselves not only on the basis of data-driven argument and logical clarity but through an almost fastidious commitment to civil discourse."
You possess a lot of power, Scott. Do you think there is nothing to be learned from the EA blowups this past year?
I'm going to write a piece on the OpenAI board situation - I think most people are misunderstanding it. I think it's weird that everyone has concluded "EAs are incompetent and know nothing about power" and not, for example "Satya Nadella, who invested $10 billion in OpenAI without checking whether the board agreed with his vision, is incompetent and knows nothing about power" or "tech billionaire Adam D'Angelo is incompetent and knows nothing about power" or even "Sam Altman, who managed to get fired by his own board, then agreed to a compromise in which he and his allies are kicked off the board, but his opponent Adam D'Angelo stays on, is incompetent and knows nothing about power". It's just too tempting for people to make it into a moral about how whatever they already believed about EAs is true. Nobody's gunning for those other guys the same way, so they get a pass.
I'm mostly against trying to learn things immediately in response to crises (I'm okay with learning things at other times, and learning things in a very delayed manner after the pressure of the crisis is over). Imagine the sorts of things we might have learned from FTX:
- It was insane that FTX didn't have a board, you need strong corporate boards to keep CEOs in check.
- Even though people didn't explicitly know Sam was a scammer, they should have noticed a pattern of sketchiness and dishonesty and reacted to it immediately, not waited for positive proof.
- If everything is exploding and the world hates you, for God's sake don't try to tweet through it, don't go to the press, don't explain why you were in the right all along, just stay quiet and save it for the trial.
Of course, those would have been the exact wrong lessons for the most recent crisis (and maybe overlearning them *caused* the recent crisis) because you can't actually learn things by overupdating on single large low-probability events and obsessing over the exact things that would have stopped those events in particular.
I stick to what I said in the post:
" My first, second, and so on to hundredth priorities are protecting this tiny cluster and helping it grow. After that I will grudgingly admit that it sometimes screws up - screws up in a way that is nowhere near as bad as it’s good to end gun violence and cure AIDS and so - and try to figure out ways to screw up less. But not if it has any risk of killing the goose that lays the golden eggs, or interferes with priorities 1 - 100."
With respect, I disagree. The Open AI board initiated the conflict, so it is fair to blame them for misjudging the situation when they failed to win. In exactly the same way, when Malcolm Turnbull called a party vote on his own leadership in 2018 and lost his position as Prime Minister as a result, it is fair to say that it was Turnbull's judgement that failed catastrophically and not Peter Dutton's.
Secondly, I think events absolutely vindicated Nadella and Altman's understanding of power. I think Nadella understood that as the guy writing the checks, he had a lot of influence over Open AI and could pull them into line if they did something he didn't like. They did something he didn't like, and he pulled them into line. Likewise, I think Altman understood that the loyalty the Open AI staff have towards him made him basically untouchable, and he was right. They touched him, and the staff revolted.
If someone challenges you and they lose, that is not a failure to understand power on your part. That is a success.
I don't think Altman losing his place on the board means anything much. It's clearly been demonstrated that his faction has the loyalty of the staff and the investors and can go and recreate Open AI as a division of Microsoft if push comes to shove. They have all the leverage.
My impression is the OpenAI board didn't initiate the conflict, they were frantically trying to preempt Sam getting rid of them first. See https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend?commentId=3cj6qhSRt4HoBLpC7 (and the rest of the comments thread).
Turnbull was trying to pre-empt Dutton too.
If you make the judgement that you can win an overt conflict but will lose a more subtle one, it can make sense to initiate an overt conflict - but it's still incumbent on you to win it.
If you're not going to win the overt conflict, you're better off dragging things out and trying/hoping to change the underlying dynamic in a way that is favourable to you. If the choice is lose fast or lose slow, lose slow. It allows the opportunity for events to overtake the situation.
But having said that, I'm not at all sure that was the choice before them. Even if it's true that Altman were trying to force Toner out, it's unclear whether or not he would have been able to. Maybe he could have, certainly he's demonstrated that he has a lot of power. But ousting a board member isn't the easiest thing in the world, and it doesn't seem like - initially at least - there were 4 anti-Toner votes on the board. Just because executives wanted to "uplevel their independence" doesn't mean they necessarily get their way.
My instinct is that the decision to sack Altman was indeed prompted by his criticism of Toner and the implication that he might try to finesse her out - people feeling their position threatened is the kind of thing that often prompts dramatic actions. But I don't think the situation was so bad for Toner that failure to escalate would have been fatal. I think she either misjudged the cold war to be worse for her than it would have been or the hot war to be better for her than it actually was, or (quite likely) both.
And I think Toner's decision to criticize Open AI in her public writings - and then to make the (probably true!) excuse that she didn't think people would care - really strengthens the naivety hypothesis. That's the kind of thing that is obviously going to undermine your internal position.
A thing that's curiously absent from that and from the Zvi's take and all other takes I've seen was: what the heck did the board expect to happen?
Any post attempting to provide an explanation must answer this question for two reasons: first, obviously you can't be sure that it's correct if it contains a planet-sized hole where the motivations of one of the parties are supposed to be, second, I'm speaking for myself, but I'm much more concerned about the EA-adjacent people utterly failing to pull off a backstab than about them being backstabby. Being *competent* is their whole schtick.
>”it’s good to end gun violence and cure AIDS and so”
EA didn’t end gun violence and AIDS though. You can’t compare saving nameless faceless anonymous people on the other side of the world that nobody who matters cares about personally to ending gun violence. Ending gun violence would improve the lives of every single person who lives in an American metropolitan area overnight by making huge swaths of their city traversable again.
I’m curious how far you’re willing to abstract away the numbers. Does saving people in other Everett branches count? If the 200,000 saved lives were in a parallel universe we could never interact with, but FTX happened in our universe, would you still think the screwups were minor compared to the successes?
Morbidity and mortality rates on other continents of the same planet can be independently verified. Alternate universes, not so much.
I think the important thing here in your specific post on "well, Ezra Klein says [this]" is that what people say about X, and how much they say about X, how much they don't say about X, and how much they say about Y or Z, are all political choices that people make. There is no objective metric for the words "big" "horribilis" "catastrophe" "real world power" and "disaster" in his statement, or for the implied of impact. This is a journalist's entire job.
I am 100% not in the EA movement, but one thing I like about it is the ostensible focus on real world impacts, not subjective interpretations of them. I am not trying to advocate pro- or con-, just that if you/we take a step back, are we all talking about reality, or people's opinions,
especially people's opinions that are subject to their desire for power/influence/profit/desire-to-advance/denigrate-specific ideologies? If we thought about this dispassionately, is there any group of even vaguely ideologically associated people that we could not create a similar PR dynamic about?
We are essentially discussing journalism priorities here. What is the objective set of pre-event standards for "is this important?" and "does this indict the stated ideology of the person involved?" are being applied to SBF or OpenAI? Are they similarly being applied to other situations? I'm not criticizing what you're saying, just that I think we perhaps need to focus on real impacts rather than "what people are saying."
"I am 100 not in the EA movement, but..."
I respect what Scott et. al, have done with the EA movement and I think it's laudable. However, like many historical examples of ideological/intellectual/political movements, there's a certain tendency to 'circle the wagons' and assume those who are attracted to some (but not all) of the movement's ideas are either 'just learning' (i.e. they're still uploading the full program and will eventually come around, ignore them until then) or are disingenuous wolves in sheep's clothing.
Yet in any mature movement, you have different factions with their own priorities. New England Republicans seem to care more about fiscal security and trade policy, while Southern Republicans care about social issues - with massive overlap from one individual to another.
I'm not saying Scott explicitly rejects pluralism in EA. He ended this essay with an open invitation to follow whatever selection of EA principles you like. I'm just observing that many people feel they have to upload the whole program (from animal cruelty to AI safety and beyond) in order to identify as even 1% "in the EA movement".
Speaking from experience, I feel it took time for me to be able to identify with EA for exactly this reason: I didn't agree with the whole program. I agree with Scott that there's broad potential appeal in EA. But I think much of that appeal will not be realized until people feel comfortable associating themselves with the core of the movement without feeling like they're endorsing everything else. And for a program in its infancy, still figuring things out from week to week, it's probably best for people to feel they can freely participate and disagree in good faith while still being part of the community.
For myself, I was more delineating the "movement" part of "public people who have been associated with 'EA' - as a person, I don't feel like I'm a part of it, though lots of the ideas are attractive. I prefer the "smorgasbord/buffet" style choices of ideologies. :) And the "pluralism" you/Scott mention is absolutely my style! But to some extent, I think the core principle of EA is just (and I mean this as a compliment) banal - yes, obviously you should do things that are good, and use logic/data to determine which things those are. The interesting part of the "movement" is actually following through on that principle. Whether that means bednets or AI safety or shrimp welfare, that all dependent on your value weights.
I agree with nearly all of that. I would just add one suggestion and invitation: EA is new. It's aware that it needs intellectual input and good-faith adversarial challenges to make it better. This especially includes people like you, who agree with many core ideas, but would challenge others. The movement doesn't require a kidney donation for 'membership', nor does it require exclusivity from other organizations. You don't have to be atheist or rationalist, just interested in altruism and in making your efforts more effective.
Seems like a movement you could contribute to, even if only in small, informal ways?
I think that's all true, with emphasis on the "there is no membership" part. My original point in my comment was that all of this conversation, and the Ezra Klein journo-style statements especially, are trying to debate "the group of people defined as EA: GOOD OR BAD?" for monkey-politics reasons, like we would about a scandal involving a D or R senator. I think I would prefer for it to be more "here is a philosophy that SBF can pick things from, but turn out to be a jerk, but also one where I (or any other person) can pick things from, but still have our jerk-itude depend on our own actions rather than SBF or anyone else who came to the philosophy buffet."
1. SBF fooled a lot of people, including major investors, not just EA. I agree that some EA leaders were pretty gullible (because my priors are crypto = scam), but even my cynicism thought SBF was merely taking advantage of the dumb through arbitrage, not running an outright fraud (see also: Mark Levine).
2. It’s way too early to tell if the OpenAI thing is in fact a debacle. Certainly it was embarrassing how the board failed to communicate, but the end result may be better than before. It’s also not as if “EA” did the thing, instead of a few EA-aligned board members.
Also I think your first bit there is a little too charitable to many critics of EA who read Scott.
I'm also a longtime crypto skeptic and I had just assumed that SBF was running a profitable casino.
*Matt Levine
I think it's very selective and arbitrary to consider these EA's "two big swings." I've been in EA for 5+ years and I had no idea what the OpenAI board was up to, or even who was on it or what they believed, until last weekend. I'd reckon 90% of people involved with or identifying as EA had no idea either. Besides, even if it was a big swing within the AI safety space, much of the movement and most of the donations it inspires are actually focused on animal welfare or global health and development issues that seem to be chugging along well. The media's tabloid fixation on billionaires and big tech does not define our ideology or movement.
A fairer critique is that the portion of EA invested in reducing existential risk by changing either a) U.S. federal policy or b) the behavior of large corporations seems to have little idea what it's doing or how to succeed. I would argue that this is partly because they have not yet transitioned, nor even recognized the need to transition, from a primarily philosophical and philanthropic movement to a primarily political one, which would in turn require giving much more concern and attention to reputational aesthetics, mainstream opinion, institutional incentives, and relationship building. Political skills are not necessarily abundant in what has until recently been philosophy club for privileged, altruistic but asocial math and science nerds. Coupled with a sense of urgency related to worry over rapid AI timelines, this failure to think politically has produced multiple counterproductive, high-profile blunders that seem to outsiders like desperate flailing at best and self-serving facade at worst (and thus have unfair and tragic spillover harms on the bulk of EA that has nothing to do with AI policy).
Effective Altruists were supposed to have known better than ACTUAL PROFESSIONAL INVESTORS AND FINANCIAL REGULATORS about the fraudulent activities of SBF?
If they claim to do charity better than actual professional charities, I naturally expect the same excellence in every field they touch.
(just kidding)
Effective Altruists hung out with him, worked with him, mentored him into earn to give in the first place- the regulators might've failed on not catching him fast enough, but unironically yes, there are reasons some EAs should've caught on (and normal, human, social reasons for why they wouldn't).
I agree with the general point that EA has done a lot of good and is worth defending, but I think this gives it too much credit, especially on AI and other political influences. I suspect a lot of those are reverse causation - the kind of smart, open-minded techy people who are good at developing new AI techniques (or the YIMBY movement) also tend to be attracted to EA ideas, and I think assuming EA as an organization is responsible for anything an EA-affiliated person has done is going too far.
(That said, many of the things listed here have been enabled or enhanced by EA as an org, so while I think you should adjust your achievement estimates down somewhat they should still end up reasonably high)
I'm not giving EA credit for the fact that some YIMBYs are also EAs, I'm giving it credit for Open Philanthropy being the main early funder for the YIMBY movement.
I think the strongest argument you have here is RLHF, but I think Paul wouldn't have gone into AI in the first place if he wasn't an EA. I think probably someone else would have invented it for some other reason eventually, but I recently learned that the Chinese AI companies are getting hung up on it and can't figure it out, so it might actually be really hard and not trivially replaceable.
Hm. I think there's a distinction between "crediting all acts of EAs to the EA movement", and "showing that EAs are doing lots of good things". And it's the critics who brought up the first implication, in the negative sense.
It's frustrating to hear people concerned about AI alignment being compared to communists. Like, the whole problem with the communists was they designed a system that they thought would work as intended, but didn't foresee the disastrous unintended consequences! Predicting how a complex system (like the Soviet economy) would respond to rules and constraints is extremely hard, and it's easy to be blindsided by unexpected results. The challenge of AI alignment is similar, except much more difficult with much more severe consequences for getting it wrong.
> Am I cheating by bringing up the 200,000 lives too many times?
Yes, absolutely. The difference between developing a cure for cancer or AIDS or whatever is that it will solve the problem *permanently* (or at least mitigate it permanently). Saving lives in impoverished nations is a noble and worthwhile goal, but one that requires continuous expenditures for eternity (or at least the next couple centuries, I guess).
And on that note, what is the main focus of EA ? My current impression is that they're primarily concerned with preventing the AI doom scenario. Given that I'm not concerned about AI doom (except in the boring localized sense, e.g. the Internet becoming unusable due to being flooded by automated GPT-generated garbage), why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely ?
> And on that note, what is the main focus of EA ? My current impression is that they're primarily concerned with preventing the AI doom scenario.
Did you see the graph of funding per cause area?
Yes, and I see the orange bar for "longermism and catastrophic risk prevention" growing rapidly (as percentage of total, though I'm eyeballing it).
This was pre-FTX crash, post the orange part has decreased probably, see Jenn's post pointing at: https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881
You can choose what causes you donate to. Like, to bring another example, if you're a complete speciesist and want to donate only to stuff that saves humans, that's an option even within GiveWell etc. you do not need to buy into the doomer stuff to be an EA, let alone give money.
How is "rapidly growing" equal to "primarily concerned with"? Your statement is objectively wrong.
From what I've seen, there's an active and sustained effort in the EA movement to redirect their efforts from boring humdrum things like mosquito nets and clean drinking water to the essential task of saving us all from AI doom. Based on the graph, these efforts are bearing fruit. I don't see any contradiction here.
AI Doom isn't even the only longtermist/catastrophic risk cause area--pandemic prevention, nuclear risk, etc all are also bundled in that funding area.
Take the Giving What We Can pledge that Scott linked to, you can donate to all sorts of causes there.
From what I know about medicine, a cure for cancer or AIDS will also require continuous expenditures, no? Drugs (or medical procedures) are expensive!
Fair point, it depends on what you mean by "cure". If we could eradicate cancer the way we did polio, it would dramatically reduce future expenditures.
If we could do that we can also live forever young. It's a big lift.
That seems unlikely on the face of it, since polio is an infection, while cancer, barring a small number of very weird cases, isn't. There isn't an external source of all cancer which could theoretically be eliminated.
I tried to calculate both AIDS/cancer/etc and EA in terms of lives saved per year, so I don't think it's an unfair comparison. As long as EA keeps doing what it's doing now, it will have "cured AIDS permamently".
You can't "donate to EA", because EA isn't a single organization. You can only donate to various charities that EA (or someone else) recommends (or inspired). I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.
If you donate to Malaria Consortium for that reason, I count you as "basically an EA in spirit", regardless of what you think about AI.
> As long as EA keeps doing what it's doing now, it will have "cured AIDS permamently".
Can you explain how this would work -- not just in terms of total lives saved, but cost/life ?
>You can't "donate to EA", because EA isn't a single organization.
Yes, I know, I was using this as a shorthand for something like "donating to EA-endorsed charities and in general following the EA community's recommendations".
> I think the reason you should donate to EA-recommended charities (like Malaria Consortium) is that they're the ones that (if you believe the analyses) save the most lives per dollar.
What if I care about things other than maximizing the number of lives saved (such as e.g. quality of life) ? Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever) ? Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce most accurate recommendations ?
I expect that EA adherents would say that all of these questions have been adequately answered, but a). I personally don't think this is the case (though I could just not be smart enough), and b). given the actual behaviour of EA vis a vis SBF and such, I am not certain to what extent their proclamations can be trusted. At the very least, we can conclude that they are not very good at long-term PR.
My God, just go here https://www.givingwhatwecan.org/ you control where the money goes, it won't get randomly redirected into something you don't care about.
If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it. Though I do wonder what you're thinking to do with your charitable giving that is higher impact than something attacking global poverty/disease.
> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist
I really hate this argument; it's as dishonest as saying "if you care about your neighbour then you're already a Christian". No, there's actually a bit more to being a Christian (or an EA) in addition to agreeing with bland common-sense homilies.
Eh, EA really has a way lower barrier of entry than being Christian. I really do think all it takes is starting to think about how to do the most good. It's not really about submitting to a consensus or a dogma. I sure know I don't buy like 50% of EA, yet, I still took the Giving What We Can pledge and am therefore an EA anyway.
Checking the math on claims of charitable effectiveness, shopping around for the best value in terms of dollars-per-Quality-Adjusted-Life-Year (regardless of exactly how you're defining 'quality of life,' so long as you're willing to stick to a clear definition at all), is about as central to EA as Christ is to Christianity.
Perhaps, but it is not *unique* to EA. It's like saying that praying together in a big religious temple is central to Christianity -- it might be, but still, not everyone who prays in a big temple is a Christian. Furthermore, the original comment that I replied to is even more vague and general than that:
> If you think quality of life is a higher priority than saving children from malaria, well, you're already an effective altruist, as discussion of how to do the most good is definitely a part of it.
> Also, if I donate to an EA-affiliated charity, what are the chances that my money is going to go to AI risk instead of malaria nets (or whatever) ?
The charities that get GiveWell recommendations are very transparent. You can see their detailed budget and cost-effectiveness in the GW analyses. If Against Malaria Foundation decides to get into AI safety research, you will know.
Nothing even vaguely like this has ever happened AFAIK. And it seems wildly improbable to me, because those charities have clear and narrow goals, they're not like a startup looking for cool pivots. But, importantly, you don't have to take my word for it.
> Given the EA community's current AI-related focus, are they going to continue investing sufficient effort into evaluating non-AI charities in order to produce most accurate recommendations ?
Sadly there is not a real-money prediction market on this topic, so I can't confidently tell you how unlikely this is. But we're living in the present, and right now GW does great work. If GW ever stops doing great work, *then* you can stop using it. Its decline is not likely to go unnoticed (especially compared to a typical non-EA-recommended charity), what with the transparency and in-depth analyses allowing anyone to double-check their work, and the many nerdy people with an interest in doing so.
Ea orgs take into account impacts on quality of life.
> why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely ?
Don't "donate to EA"; donate to the causes that EA has painstakingly identified to be the most cost-effective and neglected.
EA Funds is divided into 4 categories (global health & development, animal welfare, long-term future, EA infrastructure) to forestall exactly this kind of concern. Think bed nets are a myopic concern? Think animals are not moral subjects? Think AI doom is not a concern? Think EAs are doing too much partying and castle-purchasing? Join the club, EAs argue about it endlessly themselves! And just donate to one of the other categories.
(What if you think *all four* of these are true? Probably there's still a group of EA hard at work trying to identify worthwhile donation target for you; your preferences are idiosyncratic enough that you may have to dig through the GiveWell analyses yourself to find them.)
I found the source of the Fudning Directed by Cause Area bar graph, it's from this post on the EA forum: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data . Two things to note:
1. the post is from August 14 2022, before the FTX collapse, so the orange bar (Longtermism and Catastrophic Risk Prevention) for 2022 might be shorter in reality.
2. all information from the post are from this spreadsheet (https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881) maintained by the OP, which also includes 2023 data which shows a further decrease in longtermism and XR funding in 2023.
Thanks, I somehow managed to lose it. I'll put that back in.
No critical commentary, just want to say this is excellent and reflects really well what’s misguided about the criticisms of ea.
My feelings also.
Same.
Agreed.
Most of my comments have a "Yes, but", but not this one. Great post about a great movement!
> It’s only when you’re fighting off the entire world that you feel truly alive.
SO true, a quote for the ages
I agree. Also: EA can refer to at least three things:
- the goal of using reason and evidence to do good more effectively,
- a community of people (supposedly) pursuing this goal, or
- a set of ideas commonly endorsed by that community (like longtermism).
This whole article is a defense of EA as a community of people. But if the community fell apart tomorrow, I'd still endorse its goal and agree with many of its ideas, and I'd continue working on my chosen cause area. So I don't really care about the accomplishments of the community.
Unfortunately, and that's an very EA thought, I am pretty sceptical that EA saved 200,000 lives counterfactually. AMFs work was funged by the Gates Foundation which decided to fund more US education work after stopping their malaria work due to tremendous amounts of funding from outside donors
Unless you count one trivial missing apostrophe, there aren't any spelling mistakes! (Sceptical is the British spelling. Scott has many British readers.)
"an very", "was funged by", missing full stop on last sentence.
I suppose you could call these all grammar errors rather than spelling errors, though.
Are you thinking "funged" is a typo for "funded"? I think "funged" makes more sense semantically, so I think it was intended.
You're right about "an very" though; I missed that.
Banned for this comment.
> [Sam Altman's tweets] I don't exactly endorse this Tweet, but it is . . . a thing . . . someone has said.
OK, then. Sam Altman apparently has a sense of humor, and at least occasionally indulges in possibly-friendly trolling. Good to know.
200,000 sounds like a lot but there are approximately 8 billion of us. It would take over 15,000 years to give every person one minute of your time. Who are these 200,000? Why were their lives at risk without EA intervention? Whose problems are you solving? Are you fixing root causes or symptoms? Would they have soon died anyway? Will they soon die anyway? Are all lives equal? Would the world have been better off with more libraries and less malaria interventions? These are questions for any charity but they're more easily answered by the religious than the intellectual which makes it easier for them as they don't need to win arguments on the internet. EA will always have it harder because they try to justify what they do with reason.
Probably a well worn criticism but I'll tread the path anyway: ivory tower eggheads are impractical, do come up with solutions that don't work and enshrine as sacred ideas that don't intuitively make sense. All while feeling intellectually superior. The vast majority of the non-WEIRD world are living animalistic lives. I don't mean that in a negative sense. I mean that they live according to instinct: my family's lives are more important than my friend's lives, my friend's lives are more important than stranger's lives, my countrymen's lives are more important than foreigners's lives, human lives are more important than animal lives. And like lions hunting gazelles they don't feel bad about it. But I suspect you do and that's why you write these articles.
If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world to approve of what you're doing and how you're doing it, give up. Many never will.
> If your goal is to do good, do good and give naysayers the finger. If your goal is to get the world
> to approve of what you're doing and how you're doing it, give up.
Amongst many ways to get more good done, one practical approach is to get more people to do good. Naysayers are welcome to the finger as you suggest, but sometimes people might be on the fence; and if, with a little nudge, more good things get done, taking a little time for a little nudge is worthwhile.
We don't need to know if all lives are valued equally. As long as we expect that their value is positive then saving a lot will mean a lot of positive value.
What do you think of Jeremiah Johnson's take on the recent OpenAI stuff? "AI Doomers are worse than wrong - they're incompetent"
https://www.infinitescroll.us/p/ai-doomers-are-worse-than-wrong-theyre?lli=1&utm_source=profile&utm_medium=reader2
(Constrained in scope to what he calls "AI Doomers" rather than EA writ large, though he references EA throughout)
See the section on AI from this list - I don't think it sounds like they're very incompetent!
I also think Johnson (and most other people) don't understand the OpenAI situation, might write a post on this later.
Was Sam Altman a "doomer" in 2015?
I dunno, ask Jeremiah Johnson
"Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat."
I hope that includes all Yum! brands, not just Pepsi. Otherwise, I'm thinking you probably don't have much to crow about if Pepsi agrees to use cruelty free meat in their...I dunno...meat drinks, I guess, but meanwhile KFC is still skinning and flaying chickens alive by the millions.
Getting Kellogg's to go cruelty free with their Frosted Mini Meats is undoubtedly a big win, though.
Many Thanks! I enjoyed that!
I stopped criticizing EA a while back because I realized the criticism wasn't doing anything worthwhile. I was not being listened to by EAs and the people who were listening to me were mostly interested in beating up EA as a movement. Which was not a cause I thought I ought to contribute to. Insofar as I thought that, though, it was this kind of stuff and not the more esoteric forms of intervention about AI or trillions of people in the future. The calculation was something like: how many bednets is some rather silly ideas about AI worth? And the answer is not zero bed nets! Such ideas do some damage. But it's also less than the sum total of bed nets EA has sent over in my estimation.
Separately from that, though, I am now convinced that EA will decline as a movement absent some significant change. And I don't think it's going to make significant changes or even has the mechanisms to survive and adapt. Which is a shame. But it's what I see.
Wasn't your criticism that EA should be trying to build malaria net factories in the most dysfunctional countries in the world instead of giving people nets that need nets, because this would allow people with an average IQ of 70 to build the next China? Yeah, I can't imagine why people weren't interested in your great ideas...
No, it was not. It doesn't surprise me you missed my point though. After all, you missed the point of my comment here too.
Totally fair that EA succeeds at its stated goals. I'm sure negative opinions run the gamut, but for my personal validation I'll throw in another: I think it's evil because it's misaligned with my own goals. I cannot deny the truth of Newtonian moral order and would save the drowning child and let those I've never heard of die because I think internal preference alignment matters, actually.
Furthermore, it's a "conspiracy" because "tradeoff for greater utils (as calculated by [subset of] us)" is well accepted logic in EA (right?). This makes the behavior of its members highly unpredictable and prone to keeping secrets for the greater good. This is the basic failure mode that led to SBF running unchecked -- his stated logic usually did check out by [a reasonable subset of] EA standards.
Do you consider all everything else that is misaligned with your goals evil, or just EA?
Using the word "evil" here might be straining my poetic license, but yes, "evil" in this context reduces to exactly "misaligned with my goals"
Isn't that like, almost everyone to some degree?
Yes, usually including myself! However EA seems like a powerful force for making my life worse rather than something that offers enough win-win to keep me ambivalent about it.
If EA continues to grow, I think its likely that I'll trade off a great amount of QALYs for an experiment that I suspect is unlikely to even succeed at its own goals (in a failure mode similar to centralized planning of markets).
Congratulations, you now understand human morality.
I don't identify as an EA "person" but I think the movement substantially affected both my giving amounts and priorities. I'm not into the longtermism stuff (partly because I'm coming from a Christian perspective and Jesus said "what you do to the least of them you do to me," and not "consider the 7th generation") but it doesn't offend me. I'm sure I'm not alone in having been positively influenced by EA without being or feeling fully "in."
Thank you for the data point. And for the giving, of course!
I think you do not have to agree with all points that were ever made in EA to be an EA. I think there are many people who identify as effective altruists, but do not care about animals, or longtermism, etc. We can agree that helping others is good, that being effective is better than not being effective... and still disagree on the exact measurement of the "good". The traditional answer is QALYs, but that alone doesn't tell us how to feel about animals, or humans in distant future.
Not saying that you should identify as an EA. I don't really care; it is more important what you do than what you call it. Just saying that the difference may be smaller than it seems.
Good point!
In the present epistemic environment, being hated by the people who hate EA is a good thing. Like, you don't need to write this article, just tell me Covfefe Anon hates EA, that's all I need. It doesn't prove EA is right or good, or anything, but it does get EA out of the default "not worth the time to read" bucket.
This is not good logic, how can anyone know who's opinions are right and who's are wrong without examining them each for himself?
https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence
That only applies to stupidity which is at least partly random. If some troll has established a pattern of consistently and intelligently striving to be maximally malicious, taking the reverse of their positions on binary issues may actually be a decent approximation of benevolence.
You literally look like something that Covfefe Anon would draw as a crude caricature of a left wing dude.
It's hard to argue against EA's short-termist accomplishments (longtermist remain uncertain), as well as against the core underlying logic (10% for top charities, cost-effectiveness, etc). That being said, how would you account for:
- the number of people who would be supportive of (high-impact) charities, but for whom EA and its public coverage ruined the entire concept/made it suspicious;
- the number of EAs and EA-adjacent people who lost substantial sums of money on/because of FTX, lured by the EA credentials (or the absence of loud EA criticisms) of SBF;
- the partisan and ideological bias of EA;
- the number of talented former EAs and EA-adjacent people whose bad experiences with the movement (office power plays, being mistreated) resulted in their burnout, other mental health issues, and aversion towards charitable work/engagement with EA circles?
If you take these and a longer time horizon into the account, perhaps it could even mean a "great logic, mixed implementation, some really bad failure modes that make EA's net counterfactual impact uncertain"?
Could you clarify what you mean by "partisan and ideological bias" ?
Control F turns up no hits for either Chesterton or Orthodoxy, so I'll just quote this here.
"As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon my mind— the impression that Christianity must be a most extraordinary thing. For not only (as I understood) had Christianity the most flaming vices, but it had apparently a mystical talent for combining vices which seemed inconsistent with each other. It was attacked on all sides and for all contradictory reasons. No sooner had one rationalist demonstrated that it was too far to the east than another demonstrated with equal clearness that it was much too far to the west. No sooner had my indignation died down at its angular and aggressive squareness than I was called up again to notice and condemn its enervating and sensual roundness. […] It must be understood that I did not conclude hastily that the accusations were false or the accusers fools. I simply deduced that Christianity must be something even weirder and wickeder than they made out. A thing might have these two opposite vices; but it must be a rather queer thing if it did. A man might be too fat in one place and too thin in another; but he would be an odd shape. […] And then in a quiet hour a strange thought struck me like a still thunderbolt. There had suddenly come into my mind another explanation. Suppose we heard an unknown man spoken of by many men. Suppose we were puzzled to hear that some men said he was too tall and some too short; some objected to his fatness, some lamented his leanness; some thought him too dark, and some too fair. One explanation (as has been already admitted) would be that he might be an odd shape. But there is another explanation. He might be the right shape. Outrageously tall men might feel him to be short. Very short men might feel him to be tall. Old bucks who are growing stout might consider him insufficiently filled out; old beaux who were growing thin might feel that he expanded beyond the narrow lines of elegance. Perhaps Swedes (who have pale hair like tow) called him a dark man, while negroes considered him distinctly blonde. Perhaps (in short) this extraordinary thing is really the ordinary thing; at least the normal thing, the centre. Perhaps, after all, it is Christianity that is sane and all its critics that are mad— in various ways."
Christians, famously in firm agreement about Christianity. Definitely have had epistemology and moral philosophy figured out amongst themselves this whole time.
Someone like Chesterton can try to defend against criticisms of Christianity from secular critics and pretend he isn't standing on a whole damn mountain range of the skulls of Christians of one sect or another killed by a fellow follower of Christ of a slightly different sect.
The UK exists as it does first by splitting off from Catholicism and then various protestants killing each other over a new prayer book. Episcopalian vs. Presbyterian really used to mean something worth dying over! RETVRN.
https://en.wikipedia.org/wiki/Bishops%27_Wars
THE JOKE <---------------------------------------
-------------------------------------------------> YOU
Yeah the point is that everything Chesterton said in those quotes about Christianity is now true of EA, hence the political compass meme Scott shared. Also Scott (and this commentariat) like Chesterton for this kind of paradoxical style.
Please try a little harder before starting a religious slapfight and linking to wikipedia like I don't know basic history.
It's the internet bucko. I'll link to Wikipedia and start religious slapfights whenever, wherever.
The reason I'm having a "whoosh" moment is because EA, whatever faults it has, can in no way measure up to what Christianity did to deserve actually valid criticism.
So you're trying to be clever but it's lost on poor souls like me who think Chesterton was wrong then and Scott is right now.
Bruh. You're not even on the right topic.
People say EA is too far right, too far left, too authoritarian, too libertarian. With me so far?
In the 20s people were saying Christianity was too warlike but also too pacifistic, too pessimistic but also too optimistic. With me still?
The -structure- of the incoherence is the same in both cases, regardless of the facts underneath. I give zero fucks about Christianity. It's an analogy. Capiche, bud?
Yes, I did recognize with your help that you were pointing out a structural similarity between two not-very-similar cases.
In general, you're by default gonna confuse EA-aligned people with sympathetic comparisons to Christianity.
It is possible to have errors in two normally-conflicting directions at once. For instance, a lousy test for e.g. an illness might have _both_ more false negatives _and_ more false positives than a better test for the same illness, even though the rates of these failure modes are usually traded off against each other.
I'm not claiming that either or both of Christianity or EA is in fact in this position, but it can happen.
Interestingly, the Orient has another possible explanation. (Not arguing or anything, but this is what sprang to mind.)
https://en.wikipedia.org/wiki/Blind_men_and_an_elephant
Does Bill Gates count as an EA?
He certainly gives away a lot of money, and from what I know about the Gates Foundation they put a lot of effort into trying to ensure that most of it is optimally spent in some kind of DALYs-per-dollar sense. He's been doing it since 1994, he's given away more money than anyone else in history, and by their own estimates (which seem fair to compare with Scott's estimates) has saved 32 million lives so far.
This page sets out how the Gates Foundation decides how to spend their money. What's the difference between this and EA? https://www.gatesfoundation.org/ideas/articles/how-do-you-decide-what-to-invest-in
Is it just branding? Is EA a bunch of people who decided to come along later and do basically the same thing as Bill Gates except on a much smaller scale and then pat themselves on the back extra hard?
I agree Bill Gates qualifies as a lowercase effective altruist.
I don't think "do the same thing as Bill Gates" is anything to scoff at! I think if you're not a billionaire, it's hard to equal Gates' record on your own, and you need institutions to help you do it. For example, Bill can hire a team of experts to figure out which is the best charity to donate to, but I (who can't afford this) rely on GiveWell.
I agree that a fair description of EA would be "try to create the infrastructure to allow a large group of normal people working together to replicate the kinds of amazing things Bill Gates accomplished"
(Bill Gates also signed the statement on AI existential risk, so we're even plagiarizing him there too!)
Well if Bill Gates is an effective altruist then I feel like one of the big problems with the Effective Altruism movement is a failure to acknowledge the huge amount of prior art. Bill Gates has done one to two orders of magnitude more for effective altruism than Effective Altruism ever has, but EA almost never acknowledges this; instead they're more likely to do the opposite with their messaging of "all other charity stupid, we smart".
C'mon guys, at least give a humble shout-out to the fact that the largest philanthropist of all time has been doing the same basic thing as you for about a decade longer. You (EA) are not a voice crying in the wilderness, you're a faint echo.
Not that I'm even a big fan of Bill Gates, but credit where credit is due.
Eh, where did you get the impression that EAs almost never acknowledge the value of the work done by Gates or that they are likely to dismiss it as stupid? Just to mention the first counterexample that comes to mind, Peter Singer has said that Gates has a reasonable claim to have done more good than any other person in human history.
On this topic, I believe Scott also wrote a post trying to quantify how much good the Gates Foundation has done. Or possibly it was more generally trying to make the case for billionaire philanthropy. Either way, I agree EA isn't denying the impact Gates has had.
So I'm pretty much a sceptic of EA as a movement despite believing in being altruistic effectively as a core guiding principle of my life. My career is devoted to public health in developing countries, which I think the movement generally agrees is a laudable goal. I do it more within the framework of the traditional aid complex, but with a sceptical eye to the many truly useless projects within it. I think that, in ethical principle, the broad strokes of my life are in line with a consequentialist view of improving human life in an effective and efficient way.
My question is: what does EA as a movement add to this philosophy? We already have a whole area of practice called Monitoring and Evaluation. Economics has quantification of human lives. There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that. From my perspective - and I share this hoping to be proved wrong - EA is largely a way of gaining prestige in Silicon Valley subcultures, and a way of justifying devoting one's life to the pursuit of money based on the assumption, presented without proof, that when you get that money you'll do good with it. It seems like EA exists to justify behaviour like that at FTX by saying 'look it's part of a larger movement therefore it's OK to steal the money, net lives saved is still good!' It's like a doctor who thinks he's allowed to be a serial killer as long as he kills fewer people than he saves.
The various equations, the discount rates, the jargon, the obsession with the distant future, are all off-putting to me. Every time I've engaged with EA literature it's either been fairly banal (but often correct!) consequentialist stuff or wild subculture-y speculation that I can't use. I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.
Right now there are two groups of people who work middle-class white-collar jobs and donate >10% of their income to charity. The first group are religiously observant and are practicing tithing, with most of their money going to churches, a small fraction of which goes to the global poor. The second group is EA, and most of their money goes to the global poor.
You're right that the elements of the ideology have been kicking around in philosophy, economics, business, etc for the last 50 years, at least. But they haven't been widely combined and implemented at large until EA did it. Has EA had some PR failures a la FTX? Yes, but EA existed years before FTX even existed.
EA is mostly in favor of more funding for "the many people working in various forms of aid measuring their work better". The things you support and the things EA supports don't seem to be at odds to me.
Reasonable question, I'll probably try to write a post on this soon.
I would be interested to read that.
>There are improvements to be made in all of this, especially as it is done in practice, but we don't need EA for that.
>I just don't see what EA as a movement and community accomplishes that couldn't be accomplished by the many people working in various forms of aid measuring their work better.
Huh? So, you're saying that "we" the "many people" could in principle get their act together, but for some reason haven't gotten around to doing that yet, meanwhile EAs, in their bungling naivety, attempt to pick up the slack, yet this is somehow worse than doing nothing?
Many people are getting ther act together. Many donors are getting better at measuring actual outcomes instead of just trainings or other random nonsense. It's slow because the whole sector is a lumbering machine, but I don't see EAs picking up the slack. All I see are arcane arguments about AI and inside-baseball jargon. If they are 'picking up the slack', you're also doing a whole bunch of other things that drown that out.
I use GiveWell to direct my donations, GiveWell is pretty much the central example of EA in my experience, and I'm not aware of another "small donor"-facing group which provides good information on what charities are most efficacious in saving lives (or other thing you care about). Do you have any recommendations?
I can fully believe that, e.g., AMF has spent money, or contracts out, or some similar such thing to help make sure that their interventions are the best, but I'm not aware of anybody besides GiveWell who aggregates it with an eye towards guiding donors to the best options, which is the thing I like EA for and most of what EA does is this sort of aggregation of data into a simple direction of action. (I also never see people criticizing EAs for GiveWell giving inaccurate numbers, so I assume the numbers are basically correct.)
I use GiveWell as well, small donor donations are extremely murky for larger organisations and it's true that I have not seen anyone else make a better guide for small donors. There are definitely positive elements to the movement as well, I'm sceptical but not totally dismissive.
> Economics has quantification of human lives.
Calculating where the money should be sent is one part. Actually sending the money is the other part. The improvement of EA is in actually sending the money to the right places, as a popular movement.
This is an interesting question. Do you believe the subculture-y parts of the movement motivate people to actually send the money (instead of just saying they will)? If so, is the movement specifically tied to a time and place, such as current Silicon Valley, because different things might motivate different people to act?
Definitely; most people are motivated by what their friends *do*.
When Christians go to the church, they hear a story about Jesus saying that you should sell all your property and donate the money to the poor. Then they look around and see that none of their neighbors has actually sold their property. So they also don't feel like they should. They realize that "selling your property and donating to the poor" is something that you are supposed to verbally approve, but you are not supposed to actually do it.
And this is not meant as an attack on Christians; more or less *everyone* is like this, I just used a really obvious example. Among the people who say they care about the environment, only a few recycle... unless it becomes a law. Generally, millions of people comment on every cause that "someone should do something about it", but only a few actually do something. If you pay attention, you may notice that those people are often the same ones (that people who do something about X are also statistically more likely to do something about Y).
I suspect that an important force are... people on the autistic spectrum, to put it bluntly. They have a difficulty to realize (to instinctively do this, without consciously being aware of it) that they are supposed to *talk* about how doing X is desirable, but never actually *do* X. They hear that X should be done, and they go ahead and try to do X. Everyone else says "wow, that was really nice of you" but also thinks "this is some weirdo I need to avoid". Unless there is a community that reaches a critical amount of autism, so that when someone goes and does X, some of their friends say "cool" and also do X. If a chain reaction starts and too many people do X, even the less autistic people succumb to the peer pressure, because they are good people at heart, they just have a strong instinct against doing good unless someone else already does it first.
The rationalist community in Bay Area is an example of a supercritical autistic community. (This is more or less what other people have in mind when they accuse rationalists of being a "cult".) Not everyone has the same opinions, of course; they are actually *less* likely to agree on things than the normies. But once a sufficient subset of them agrees that X should be done, they go ahead and actually start doing X as a subculture, whether X is polyamory or donating to the poor. This is my explanation how the Effective Altruism started, why nerds are over-represented there, why so many of them also care about the artificial intelligence, why normies are instinctively horrified but cannot precisely explain why (because they agree verbally with the idea of giving to the poor, they just feel that it is weird that someone actually does that, you are only supposed to talk about how "we should"; normies are instinctively horrified by weirdness, because it lowers one's social status).
> is the movement specifically tied to a time and place, such as current Silicon Valley
Is there another place with such concentration of autists, especially one that treats them with relative respect? (Genuine question; if there is, I want to know.) There are virtual communities, but those usually encourage people to do things in the virtual space, such as develop open-source software.
Isn't this just an admission of failure then? If it doesn't scale past your subculture then it won't really accomplish much in the world. You help some people on a small-donor personal scale, which is nice, but the main outcome then is that you act extremely smug with a tiny real-world impact while there's not much reason for the rest of the world to pay attention to your movement because it only applies to a small number of people in very specific circumstances.
Also, and I guess this is kind of a stereotype, I think you have a pretty out-of-touch idea of how 'normies' work. Lots of people follow through on what they say they'll do, including a variety of kinds of charitable giving. Like...there's an entire aid industry of people who think you should help others and have devoted their lives to it. I could make double what I do in the private sector if not more, but I don't! Effectiveness is a separate question but _lots_ of people follow through on their (non-religious) moral commitments.
> If it doesn't scale past your subculture then it won't really accomplish much in the world.
Not if the subculture is big enough, and some of its members make decent money. Also, the longer it exists, the more normies will feel like this is a normal thing to do, so they may join, too.
And yes, there was a lot of simplification and stereotyping. You asked how the subculture motivates people to actually send money; I explained what I believe to be the main mechanism.
Decent comment, but some mistakes, and I'd like to write a few counter-arguments. I don't have time right now, but I will type a reply in a few days.
IMO EA should invest in getting regulatory clarity in prediction markets. The damage done to the world by the absence of collective sense-making apparatus is enormous.
We're trying! I know we fund at least Solomon Sia to lobby for that, and possibly also Pratik Chougule, I don't know the full story of where his money comes from. It turns out this is hard!
As an enthusiastic short-termist EA, my attitude to long-termist EA has gone in the past year from "silly but harmless waste of money" to "intellectually arrogant bollocks that has seriously tarnished a really admirable and important brand".
Working out what the most efficient ways to improve the world here and now is hard, but not super-hard. I very much doubt that malaria nets are actually the single most efficient place that I could donate my money, but I bet they're pretty close, and identifying them and encouraging people to donate to them is a really valuable service.
Working out what the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably. I think that the two recent EA-adjacent scandals were specifically long-termist-EA-adjacent, and while neither of them was directly related to the principles of EA, I think both are very much symptomatic of the arrogance and insufficient learned epistemic helplessness that attract people to long-termist EA.
I think that Scott's list of "things EA has accomplished, and ways in which it has made the world a better place" is incredibly impressive, and it makes me proud to call myself an effective altruist. But look down that list and remove all the short-termist things, most of what's left seems either tendentious (can the EA movement really claim credit for the key breakthrough behind ChatGPT?), nothingburgers (funding groups in DC trying to reduce risks of nuclear war, prediction markets, AI doomerism). I'm probably exaggerating slightly, because I'm annoyed, but I think the basic gist of this argument is pretty unarguable.
All the value comes from the short-termists. Most of the bad PR comes from the longtermists, and they also divert funds from effective to ineffective causes.
My hope is that the short-termists are to some extent able to cut ties with the AI doomers and to reclaim the label "Effective Altruists" for people who are doing things that are actually effectively altruistic, but I fear it may be too late for that. Perhaps we should start calling ourselves something like the "Efficiently Charitable" movement, while going on doing the same things?
"Working out what the most efficient ways to improve the world 100 years from now is so hard that only people who massively overestimate their own understanding of the world claim to be able to do it even slightly reliably."
Agreed. I don't think that anyone trying to anticipate the consequences that an action today will produce in 100 years is even going to get the _sign_ right significantly better than chance.
This.
Completely agree with this. I've donated a few tens of thousands to the Schistosomiasis Control Initiative, but stopped earlier this year in disgust with what the overall movement was focussing on. That led me to alarm that the goals which I'd previously presumed were laudable were coming from a philosophy that could so easily be diverted into nonsense. I may start donating again, but EA has to do a lot to win me back. At the moment it's looking most likely I divert my donation to a community or church based group (I've fully embraced normie morality)
This seems like a bad reaction - just because people adjacent to the people who originally recommended the SCI to you are doing silly or immoral things does not mean that the SCI will not do more good per dollar donated than a community or church group.
I think "short-termist EA good" is a far, far more important message than "long-termist EA bad".
I guess to explain in more detail -
EA depends on a certain set of assumptions which hold if you are a disembodied mind concerned only in the abstract with what is good for humanity.
But none of us are actually that disembodied mind, and it’s disingenuous to pretend and act as if we are.
The common sense morality position that you should look after your friends, family and community before caring for others, even if you could do more good for others with the same resources in some abstract sense, is in my opinion correct.
Specifically it’s correct because of the principles of justice and reciprocity. Take reciprocity first. I owe an approximately infinite amount to my parents. I owe a very large amount to my wider family, a lot to my friends, and quite a bit to my larger community and to my nation. All that I am, including my moral character, is because of these people.
As a concrete example, if my mothers life depended on my giving her hundreds of thousands of dollars, perhaps for an experimental cancer treatment, I would do this without hesitation, even though it could save hundreds of lives by abstract calculation.
I would argue it’s superogatory to donate to charity in the developing world. It’s a good thing to do and if you’re going to do it you may as well ask where your dollars will be well spent. But EA doesn’t address the argument from reciprocity that you owe far more to those close to you.
Next, the argument from justice. This is the other issue with basing donations on cold mathematical calculations. For example, right now if I were to donate to Doctors Without Borders, there’s a fair chance that my money would go to fund their operations in Gaza. Now, before the comments section blows up, I do believe that this particular charity in this particular instance is doing net good - but they’re famously apolitical and they use resources to treat terrorists as well as civilians. How much does that impact the lives saved per dollar of my donation, if there are some lives I’d rather not save? Who knows? EA don’t consider it their position to calculate. Considerations like this apply to every dollar spent in regions where the donor doesn’t understand, or even consider it their position to understand, the politics and the underlying reasons that all these preventable deaths are occurring.
I consider it, in retrospect, a logical and unfortunately inevitable outgrowth of EA’s philosophy that so much effort has now been hijacked by causes that arouse little or no sympathy in me. It was always a byline of the movement that you should purchase utilons and not warm fuzzies with your charitable donations. It’s fundamentally not how people work. The much derided warm fuzzies are a sure sign that you’re actually accomplishing something meaningful.
I think this is a good list, even though it counts PR wins such as convincing Gates. 200k lives saved is good, full stop.
However, something I find hard to wrap my head around is that the most effective private charities, say Bill & Melinda Gates foudnation (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373372/), have spent their money and have had incredible impact that's orders of magnitude more than EA has. They define their purpose narrowly and cleave to evidence based giving.
And yet, they're not EAs. Nobody would confuse them either. So the question is less whether "EAs have done any good in the world", the answer is of course yes. Question is whether the fights like boardroom drama and SBF and others actively negate the benefits conferred, on a net basis. The latter isn't a trivial question, and if the movement is an actual movement instead of a lot of people kind of sort of holding a philosophy they sometimes live by, it requires a stronger answer than "yes, but we also did some good here".
I think I would call them EAs in spirit, although they don't identify with the movement.
As I said above, I think "help create the infrastructure for a large group of normal people to do what Gates has done" is a decent description of EA.
I think Gates has more achievements than us because he has 10x as much money, even counting Moskowitz's fortune on EA's side (and he's trying to spend quickly, whereas Moskowitz is trying to delay - I think in terms of actual spending so far it's more like 50-1)
I respect the delta in money though its not just that which causes Gates' success. He focuses on achievements a lot and has built extraordinary execution capabilities. The movement that tries to "create a decentralised Gates Foudnation" would have to do very different things to what EA does. To achieve that goal requires a certain amount of winning. Not just in the realpolitik sense either.
And so when the movement then flounders in high profile ways repeatedly, and demonstrates it does not possess that capacity, the goals and vision are insufficient to pull it back out enough to claim it's net positive. If you recall the criticisms being made of EAs in the pre SBF era, they're eerily prescient about today's world where the problems present themselves.
I think one of the keys to Gates' success is that he sets himself clear and measurable goals. He is not trying to "maximize QALYs" or "Prevent X-risk" in some ineffable way; he's trying to e.g. eradicate malaria. Not all diseases, and not even all infectious diseases, just malaria. One step toward achieving this is reducing the prevalence of malaria per capita. Whenever he spends money or anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.
Yes
“Whenever he spends money or anything, be it a new technology or a bulk purchase order of mosquito netting or whatever, he can readily observe the impact this expenditure had toward the goal of eradicating malaria. EAs don't have that.”
I think GiveWell, a major recipient of funding and support from EA, is actually extremely analytical about whether its money is achieving its goals. I just don’t think the difference between Gates and EA is as profound as you think. After all, Gates had to determine that malaria was a good cause to take up, and if malaria is eradicated he’ll have to figure out what cause takes its place. I don’t think he’s drawing prospective causes out of a hat, do you? He’s figuring out where the money could do the most good. That’s something everyone can do, whether or not they have Gates’s money, and that’s the purpose of EA.
At the risk of being uncharitable, I think that the difference between Gates and EA is that Gates saw that malaria was killing people; decided to stop it (or at least reduce it) ASAP; then used analytics to distribute his money most efficiently in service of that goal. EA saw that malaria was killing people; decided to stop it (or at least reduce it); used analytics to distribute his money most efficiently in service of that goal; then, as soon as they enjoyed some success, expanded the mission scope to prevent not merely malaria, but all potential causes of death now or in the long-term far-flung future.
Say what you want about the vagaries of longtermism, you accurately assessed the risk that you were being uncharitable! I don't think it's fair to say that EA invests in fighting all causes of death—you can see that fighting widespread and efficient-to-combat deadly diseases still receives by far the largest share of GiveWell funds—and as far as the future, while we might disagree about AI risk can we agree that future deaths from pandemics, for instance, are not an outlandish possibility and therefore might be worth investing in?
I mean Gates, a brilliant tech founder, is really, really close to EA/rationality by default. If all charity was done by Bill, then EA would not have been necessary.
See also: Buffet
Not quite. Carter center, for instance, and many others also exist. Still plenty of ways to do good ofc
You can point to organizations that are, by EA standards, highly effective, and not make a dent in the issue of average effectiveness of charities/donations overall. If the effectiveness waterline were higher, the founders of EA would presumably not been driven to do as they did, is my point.
And, EA is specifically focused on "important, tractable, and neglected" issues, so it's explicitly not trying to compete with orgs doing good work already.
For what it’s worth, the “EA in spirit” struck me rather sourly. It feels like EA as a movement trying to take credit for lots of stuff it contributed to, but was not solely responsible for. For what it’s worth, I am sympathetic to the charitable giving and think EAs want to do well, but the movement is utterly consumed with extreme scenarios where the expected values are as dramatic as you want them to be, ironically because of a lack of evidence.
It doesn’t seem clear which way the boardroom drama goes in being good or bad. SBF is unfortunate, but maybe unfair to pin this mainly on EA (at least they are trying to learn from it as far as it concerns them).
Its unfair to pin SBF entirely on EA, though having him be a poster child for the movement all the while stealing customer money is incredibly on the nose. Especially since he used EAs as his recruiting pool and part of his mythos.
I consider Bill Gates an EA, since he's trying to give effectively. Most people don't try to give effectively (see the Harvard endowment)!
EA needs to split into “normal EA” and “exotic EA.” There's really not much to criticize Givewell about.
I would say considering Bill Gates an EA makes "what is EA" impossible to answer. Which is ok if it's meant to be like "science", but completely useless if it's about the movement. Then there should not be a movement at all, but splinter into specific things like Open Phil and Givewell and whatnot.
I don't understand why you put Anthropic and RLHF on this list. These are both negatives by the lights of most EAs, at least by current accounting.
Maybe Anthropic's impact will pay off in the future, but gathering power for yourself, and making money off of building dangerous technologies are not signs that EA has had a positive impact on the world. They are evidence against some form of incompetence, but I doubt that by now most people's concerns about the EA community are that the community is incompetent. Committing fraud at the scale of FTX clearly requires a pretty high level of a certain kind of competence, as did getting into a position where EAs would end up on the OpenAI board.
"but I doubt that by now most people's concerns about the EA community are that the community is incompetent."
I think you're a week out of date here!
I go back and forth on this, but the recent OpenAI drama has made me very grateful that there are people other than them working on superintelligence, and recent alignment results have made me think that maybe having really high-skilled corporate alignment teams is actually just really good even with the implied capabilities progress risk.
This gets at exactly the problem I have with associating myself with EA. How did we go from "save a drowning child" to "pay someone to work on superintelligence alignment". The whole movement has been captured by the exact navel gazing it was created to prevent!
Imagine if you joined an early abolitionist movement, but insisted that we shouldn't work on rescuing slaves, or passing laws to free slaves, but instead focused on "future slave alignment to prevent conflict in a post slavery world" or some nonsense. The whole movement has moved very far from Singer's original message, which had some moral salience to people who didn't necessarily work intellectual problems all day. It's no surprise that EA is not trusted...imagine yourself in a <=110 IQ brain, it would seem obvious these people are scamming you, and seeing things like SBF just fits the narrative.
Imagine EAs doing both though. Current and future problems. Different timelines and levels of certainty.
Like, obviously it's impossible to have more than one priority or to focus on both present and future, certain and potential risks, but wouldn't it be so cool if it were possible?
(Some of the exact same people who founded GiveWell are also at the forefront of longtermist thought and describe how they got there using the same basic moral framework, for the record.)
Certainly it's possible, but don't you think one arm of this (the one that is more speculative and for which it is harder to evaluate ROI) is more likely to attract scammers and grifters?
I think the longtermism crowd is intellectualizing the problem to escape the moral duty inherent in the provocation provided by Singer, namely that we have a horrible moral crisis in front of us that can be addressed with urgency, which is the suffering of so many while we engage in frivolous luxury.
Well I'm the kind of EA-adjacent person who prefers X-risk over Singerism, so that's my bias. For instance, I mostly reject Singer's moral duty framing.
A lot of X-risk/longtermism aligns pretty neatly with existing national security concerns, e.g. nuclear and bio risks. AI risk is new, but the national security types are highly interested.
OG EA generally has less variance than longtermism (LT) EA, for sure. Of course, OG EA can lead you to caring about shrimp welfare and wild animal suffering, which is also very weird by normie standards.
SBF was donating a lot to both OG EA and LT EA causes (I'm not sure of the exact breakdown). I certainly think EA leaders could have been a lot more skeptical of someone making their fortune on crypto, but I'm way more anti-crypto than most people in EA/rationalist circles.
Also, like literally the founders of GiveWell also became longtermists. You really can care about both.
The funny thing about frivolous luxury is that as long as its contributing to economic growth it's going to outperform a large amount of all the nominally charitable work done that ended up either lighting money on fire or making things worse. (Economic growth remains the best way to help humans and the fact that EAs recognize this is a very good thing.)
> Economic growth remains the best way to help humans and the fact that EAs recognize this is a very good thing.
Probably agree in the short term, but even on this one I wouldn't claim to guess whether it's a net positive or negative 100+ years from now.
No, I think people's concern is that the EA community is at the intersection of being very competent at seeking power, and not very competent at using that power for good. That is what at least makes me afraid of the EA community.
What happened in the OpenAI situation was a bunch of people who seem like they got into an enormous position of power, and then leveraged that power in an enormously incompetent way (though of course, we still don't know yet what happened and maybe we will hear an explanation that makes sense of the actions). The same is true of FTX.
I disagree with you on the promise of "recent alignment results". I think the Anthropic interpretability paper is extremely overstated, and I would be happy to make bets with you on how much it will generalize (I would also encourage you to talk to Buck or Ryan Greenblatt here, who I think have good takes). Other than that, it's mostly been continued commercial applications with more reinforcement-learning, which I continue to think increases and not decreases the risk.
RLHF is not only part of a commercial product but also part of a safety research paradigm, which other EA's further improve upon. Such as with Reinforcement Learning from Collective Human Feedback (RLCHF): https://forum.effectivealtruism.org/posts/5Y7bPv259mA3NtHt2/bob-jacobs-s-shortform?commentId=J7goKQnpMFf97GZQF
It is funny how "talking past each other" are today's posts of Freddie and Scott. One is so focused on disparaging utilitarianism that even anti-utilitarians might think it was too harsh, while the other points to many good things EA did without ever getting to the point about why we need EA as presently constituted in the form of this movement. And part of that is conflating the definition of the movement as both 1) a rather specific group of people sharing some ideological and cultural backgrounds, and 2) the core tenets of evidence-based effectiveness evaluation that are clearly not exclusive to the movement.
I mean, you could simply argue that organizing people around a non-innovative but still sound common sensical idea that is not followed everywhere has its merits because it helps in making some things that were obscure become explicit. Fine. But it still doesn't necessarily mean that EA is the correct framing if it causes so much confusion.
"Oh but that confusion is not fair!..." Welcome to politics of attention. It is inevitable to focus on what is unique about a movement or approach. People choose to focus not on malaria (there were already charities doing that way before EA) but on the dudes seemingly saying "there's a 0.000001% chance GPT will kill the world, therefore give me a billion dollars and it will still be a bargain", because only EA as a movement considered this type of claim to be worthy of consideration under the guise of altruism.
I actually support EA, even though I don't do nearly enough to consider myself charitable. I just think one needs to go deeper into the reasons for criticism.
Zizek often makes the point that the history of Christianity is a reaction to the central provocation of Christ, namely that his descent to earth and death represents the changing of God the Father into the Holy Spirit, kept alive by the community of believers. In the same way the AI doomerists are a predictable reaction to the central provocation of the Effective Altruists. The message early on was so simple: would you save a drowning child? THEY REALLY ARE DROWNING AND YOU CAN MAKE A DIFFERENCE NOW.
The fact that so many EAs are drawn to Bostrom and MacCaskill and whoever else is a sign that so many EA were really into it to prove how smart they are. That doesn't make me reject EA as an idea, but it does make me hesitant to associate myself with the name.
I don't understand why being drawn to Bostrom or SBF suggests what you want is to prove how smart you are.
EA as presented by Singer, like Christianity, was definitely not an intellectually difficult idea. The movement became quickly more intellectualized, going from (1) given in obviously good ways when you can to (2) study to find the best ways to give to (3) the best ways can only be determined by extensive analysis of existential risk to (4) the main existential risk is AI so my math/computer skills are extremely relevant.
The status game there seems transparent to me, but I'd be open to arguments to the contrary.
The AI risk people were there before EA was a movement, and in fact there were some talks of separating them so global poverty can look less weird in comparison. Vox journalist, EA and kidney haver Dylan Matthews wrote a pretty scathing article about the inclusion of X risk wt one of the earlier EA Global conferences. Talking about X risk with Global Poverty EAs, last time I checked, was like pulling teeth.
Maybe it is true that there's an intellectual signalling spiral going on, but you need positive evidence that it's true, and not just "I thought about it a bit and it seemed plausible".
I don't know what could constitute evidence of intellectual spiraling, but I know that for me personally, I was drawn to Singer's argument that I could save a drowning child. Reading MacCaskill or Bostrum feels not simply unrelated to that message, it seems like an EA anti-message to me.
Look, I know someone is going to think deeply about X-risk and Global Poverty (capitalized!), and get paid for it. But paying people to think about X-risk seems like the least EA thing possible, given there is no shortage of suffering children.
It's unwise to go "this is not true" and then immediately jump to a very specific theory of status dynamics when it's not supported by any evidence. Why not just say "AI risk investment seems unlikely to turn out as well as malaria nets, I do not understand why AI riskers think what they do".
I have no way of evaluating whether my investment in AI risking analysis will ever pay off, nor how much the person I am paying to do it has even contributed to avoiding AI risk. I don't even know what would constitute evidence that this is mere navel gazing, other than observing that it may be similar to other human behavior in that people want to paid to do things they enjoy, and thinking about AI-risk is fun and/or status enhancing.
Interesting! My reaction to Singer was: He is making such an unreasonably big ask that I was inspired to reject not only his ethical stance but the entire enterprise of ethics. Yetch!
I also am unsure about how to provide quant evidence on this, but I'd just say that while the people working on AI safety or being interviewed for it at 80k hours likely are mathy/comp sci nerds, many people are concerned about this as they are for other existential risks because they are convinced by those arguments, while lacking those skills.
Like I say, it's hard to provide more than anecdotes, but from observation (people I hang out with and read) and introspection: I'm a biologist, but while that gives me some familiarity with the tech and the jargon, I don't think my concern with bioterrorism comes from that, and my real job is in any case very unrelated.
I guess I could ask you if you feel the same way about the people worried nuclear war risk, bio risk etc. Do you feel like they are in a status game, or drawn to it because improving on it is something related to their rare skills?
Thinking about this personally: I would much rather "think about AI-risk" than do my job training neural nets for an adtech company; indeed I do spend my free time thinking about X-risk. I think this probably true for most biologists, nuclear engineers, computer scientists and so on.
The problem is that preventing existential catastrophe is inherently not measurable, so it attracts more status seekers, grifters, and scammers, just as priestly professions have always done. This is unrelated to whether the source material is biology or computer science. I was probably wrong to focus on status particularly, rather than a broader spectrum of poor behavior.
That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.
I could see how it could attract people that like being 'above it', because they get the theoretical risk even if the empirical outcomes are not observable (because we are either safe or dead) but again, while this is hard to quantify or truly verify (eyeronic) I'm not sure at all it is the main motivation. Not sure how to proceed from here, except to note that when someone wants to increase biosecurity (say, Kevin Esvelt) you don't get that sort of reaction as much as you get from AI, and I'm still not sure why.
I don't know that it's much different reaction when biosecurity means "take this drug/vaccine to protect yourself" instead of "make sure this lab is secure". IOW the extent of the difference is probably explained by the implied actions for the general public.
So, um, do I understand correctly that you unironically quote Zizek and yet accuse *someone else* of being drawn to certain thinkers to prove how smart they are?
Haha, I deserve that one :)
I think activity which is difficult to measure attracts all forms of grifters, scammers, and status seekers.
That is why I mentioned Zizek's point in the original comment: EA has become all about what the fundamental provocation of EA was meant to prevent, namely investing in completely unmeasurable charity at the expense of doing verifiable good.
I see your point, but if you look closely at the core concept of EA, it's not exactly "doing measurable charities", it's "doing the most good". Of course to optimize something you need to be able measure it in some way, but all such measurements are estimates (with varying degrees of uncertainty), and you can, in principle, estimate the impact of AI risk mitigation efforts (with high degree of uncertainty). Viewed from this angle, the story becomes quite less dramatic than "EAs have turned into the very thing they were supposed to fight", and becomes more along the lines of arguing about estimation methods and at which point high risk/high reward strategy turns into a Pascal Wager.
Also you're kind of assuming the conclusion when saying that people worried about AGI are scammers and grifters and want to show they're smart. That would be true if AGI concerns were completely wrong, but another alternative is that they are correct and those people (at least many of them) support this cause because they've correctly evaluated the evidence.
What you are saying would be true if the pool of people stayed static, but it doesn't. Scammers will join the movement because promises of large payouts far into the future with small probability is a scammer's (and or lazy status seeker's) paradise.
Thinking about X-risk is fun. In fact getting rich is good too because it will increase my ability to do good. Looks like EA is perfect for me after all! I don't even have to save that drowning child, as the opportunity cost in reduced time thinking about AI risk is higher than the benefits of saving it because my time thinking about AI will save trillions of future AI entities with some probability that I estimated. How lucky I am that EA tells me to do exactly what I wanted to do anyway!
So your point is that AGI safety is bad because some hypothetical person can use it as an excuse to not donate money and not save a drowning child? What a terrifying thought, yeah. We can't allow that to happen.
Yes, my point is that it's intellectual sophistry that is used to insulate oneself from the moral duty implied by the fundamental EA insight. That is, still feel good about doing "EA" while completely ignoring the duties implied by the messge.
Sorry to defend "their side" but I'm a not hypothetical person who actually made this calculation. Most of my donations still go to global poverty
I'm not going to describe in detail what I thought, but the absolute first thing on my mind was the opportunity cost, and that I hated being in the epistemic position where I thought the best use of money was AI risk, and not the much more convenient and socially acceptable global poverty.
Thank you for writing this. It's easy to notice things the controversial failures and harder to notice the steady march of small (or not-so-small-wins). This is much needed.
A couple notes about the animal welfare section. They might be too nitty-gritty for what was clearly intended to just be a quick guess, so feel free to ignore:
- I think the 400 million number for cage-free is an underestimate. I'm not sure where the linked RP study mentions 800 million — my read of it is that total commitments at the time in 2019 (1473 total commitments) would (upon implementation) impact a mean of 310 million hens per year. The study estimated a mean 64% implementation rate, but also there are now over 3,000 total cage-free commitments. So I think it's reasonable to say that EA has convinced farms to switch many billions of chickens to cage-free housing in total (across all previous years and, given the phrasing, including counterfactual impact on future years). But it's hard to estimate.
- Speaking of the 3,000 commitments, that's actually the number for cage-free, which applies to egg-laying hens only. Currently, only about 600 companies globally have committed to stop selling low-welfare chicken meat (from chickenwatch.org).
- Also, the photo in this section depicts a broiler shed, but it's probably closer to what things look like now (post-commitments) for egg-laying hens in a cage-free barn rather than what they used to look like. Stocking density is still very high in cage-free housing :( But just being out of cages cuts total hours of pain in half, so it's nothing to scoff at! (https://welfarefootprint.org/research-projects/laying-hens/)
- Finally, if I may suggest a number of my own: if you take the estimates from the welfare footprint project link above and apply it to your estimate for hens switched to cage-free (400 million), you land at a mind-boggling three trillion hours, or 342 million years, of annoying, hurtful, and disabling pain prevented. I think EA has made some missteps, but preventing 342 million years of animal suffering is not one of them!
If you are interested in global poverty at all, GiveDirectly has a true 1 to 1 match that has finished.
You can donate here if you choose: https://www.givedirectly.org/givingtuesday2023/
This was the only time GiveDirectly has messaged me, and I at least am glad that I can double my impact.
Edit: updated comment to reflect all the matching has been done, also to erase my shameful mistake about timing.
If you disagree that this is an effective use of money, that's fine! Just wanted to make sure the people who wanted to see it do.
EA makes much sense given mistake theory but less given conflict theory.
If you think that donors give to wasteful nonprofits because they’ve failed to calculate the ROI in their donation, then EA is a good way to provide more evidence based charity to the world.
But what if most donors know that most charities have high overhead and/or don’t need additional funds, but donate anyway? What if the nonprofit sector is primarily not what it says it is? What if most rich people don’t really care deeply about the poor? What if most donors do consider the ROI — the return they get in social capital for taking part in the nonprofit sector?
From this arguably realist perspective on philanthropy, EA may be seen to suffer the same fate as other philanthropic projects: a mix of legitimate charitable giving and a way to hobnob with the elite.
It’s still unknown whether the longtermist projects represent real contributions to humanity or just a way to distribute money to fellow elites under the guise of altruism. And maybe it will always be unknown. I imagine historians in 2223 debating whether 21st century x-risk research was instrumental or epiphenomenal.
I think that early EA were unaware of the "conflict theory" part of the equation; there's mentions from time to time that they expected the "direct donations best" to be the easy part and "donate more" to be the hard part, and found it to turn out to be the opposite. I think this has changed a good bit since.
But, tbh, I don't care about the conflict theory part. In the end, there are people who want to direct donations best, GiveWell appears to be the best way to do so transparently, and it (GiveWell) is the primary accomplishment of the EA movement IMO. If some people don't care about trying to do the right thing as much as possible, that's fine, they can go fuck in the mud for all I care.
Or they could enter the movement and take it over from the inside….
Correction to footnote 13: Anthropic's board is not mostly EAs. Last I heard, it's Dario, Daniela, Luke Muehlhauser (EA), and Yasmin Razavi. They have a "long-term benefit trust" of EAs, which by default will elect a majority of the board within 4 years (electing a fifth board member soon—or it already happened and I haven't heard—plus eventually replacing Daniela and Luke), but Anthropic's investors can abrogate the Trust.
(Some sources: https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2, https://www.lesswrong.com/posts/6tjHf5ykvFqaNCErH/anthropic-s-responsible-scaling-policy-and-long-term-benefit?commentId=SoTkntdECKZAi4W5c.)
Are at least Daniela and Luke not EAs?
I knew all of this except "abrogate the trust", do you know the details there?
Oh, sorry, Daniela and Dario are at-least-EA-ish. (But them being on the board doesn't provide a check on Anthropic, since they are Anthropic.)
The details have not been published, and I do not know them. I wish Anthropic would publish them.
What's your response to Robin Hanson's critique that it's smarter to invest your money so that you can do even more charity in 10 years? AFAIK the only time you addressed this was ~10 years ago in a post where you concluded that Hanson was right. Have you updated your thinking here?
I invest most of my money anyway; I'll probably donate some of it eventually (or most of it when I'm dead). That having been said, I think there are some strong counterarguments:
- From a purely selfish point of view, I think I get better tax deductions if I donate now (for a series of complicated reasons, some of which have to do with my own individual situation). If you're donating a significant amount of your income, the tax deductions can change your total amount of money by a few percent, probably enough to cancel out many of the patient philanthropy benefits.
- Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.
- There's a philanthropic interest rate that competes with the financial interest rate. If you fund a political cause today, it has time to grow and lobby and do its good work. If you treat malaria today, the people you saved might go do other good things and improve their local economy.
- Doing good becomes more expensive as the world gets better and philanthropic institutions become better. You used to be able to save lives for very cheap with iodine supplementation, but most of those places have now gotten the iodine situation under control. So saving lives costs more over time, which is another form of interest rate increase.
- If you're trying to prevent AI risk, you should prefer to act early (when there's still a lot of time) rather than late (when the battle lines have already been drawn, or the world has already been destroyed, or something).
I do super respect the patient philanthropy perspective; see https://forum.effectivealtruism.org/topics/timing-of-philanthropy for more discussion.
> If you fund a political cause today
I have a hard time viewing "starting a political cause to further your own worldview" as altruistic, or even good. Doesn't normal self-interest already provide an oversupply of political causes? And does convincing smart people to become lobbyists really result in a net benefit to the world? I think a world where the marginal engineer/doctor/scientist instead becomes a lobbyist or politician is a worse world.
>If you treat malaria today, the people you saved might go do other good things and improve their local economy.
That's an interesting claim, but I think it's unlikely to be true. Is economic growth in, say, the Congo limited by the availability of living humans? A rational expectation for the good a hypothetical person will do is the per capita income of their country minus the average cost of living for that country, which for most malaria-type countries that surplus is going to be effectively zero. In almost all circumstances I think you get a higher ROI investing in a first-world economy.
>Doing good becomes more expensive as the world gets better
First world economies will also deliver more value over time as the world gets better. Investing in world-changing internet startups used to be easier but good luck finding the next Amazon now that the internet is mature. You should invest your money now so that the economic engine can maximize the growth of the next great idea. I'm very skeptical that the ROI of saving a third world life will grow faster than a first world economy will.
The strong from of this argument is basically just that economic growth is the most efficient way to help the world (as Tyler Cowen argues). I've never seen it adequately addressed by the EA crowd, but thanks for those links. Exponential growth is so powerful that it inevitably swaps any near-term linear intervention. If you really care about the future state of the world, then it seems insane to me to focus on anything but increasing the growth rate (modulo risks like global warming). IMO any EA analysis that doesn't end with "and this is why this intervention should be expected to boost the productivity of this country" is, at best, chasing self-satisfaction. At worst it's actively making the world worse by diverting resources from a functional culture to a non-functional one.
Boy imagine thinking about what exponential growth could do if it applies to AI. Crazy.
Lots of EAs like Cowen and EAs in general are way more econ-pilled than normal charity/NGOs are. One of the strong reasons for AI development is achieving post-scarcity utopia. GMU is practically rationality/EA-adjacent. Hanson, being the obvious case.
Also, Cowen himself is a huge proponent of supporting potential in places like Africa and India!
If you're a Cowen-style "economic growth plus human rights" kind of person then I think the only major area of disagreement with EA is re: AI risk. But Cowen and OG EA are highly aligned.
Not sure about your situation in that you run a couple of businesses, but in general isn't the most tax-effective way to donate by donating stock, since the donor gets the write off and the receiver gets the increased value without the capital gains being taxed?
(You can, of course, pursue this donation mechanism both now and later.)
https://www.fidelitycharitable.org/articles/4-reasons-to-donate-stock-to-charity.html
> - Again from a purely personal point of view, I seem to be an "influencer" and I think it's important for me to be publicly seen donating to things.
Not gonna argue with this, but: Are your donations really visible? I mean, I don't even *know* that you donated a kidney.
If you amended it to "important for people to hear that I am donating to things" it would not have nagged at me. On the other hand, I haven't come up with a phrasing (even that one) that doesn't have a faint echo of "important that I look like I'm donating" so maybe your version is as good as it can get.
> I think the AI and x-risk people have just as much to be proud of as the global health and animal welfare people.
I disagree. The global health people have actual accomplishments they can point to. It's not just speculative.
I am a bit uneasy about claiming some good is equivalent to, say, curing AIDS or ending gun violence: these are things with significant second-order effects. For example, pending better information, my prior has it that the greatest impact of gun violence isn't even the QALYs lost directly in shootings, but vastly greater number of people being afraid (possibly of even e.g. going outside at night), greater number of people injured, decreased trust in institutions and your fellow man, young people falling into a life of crime rather than becoming productive members of the society, etc, etc. Or, curing AIDS would not just save some people from death or expensive treatment, but would erase one barrier to condom-free sex that most people would profess a preference to (that's a lot of preference-satisfaction when considering the total number of people who would benefit), but here there's also an obvious third-order effect of increased number of unwanted pregnancies (which, as a matter of fact, doesn't even come close to justifying not curing AIDS, but it's there).
Now, I'm entirely on board with the idea of shutting up and calculating, trying your best to estimate the impact (or "something like that": I've been drawn to virtue ethics lately, but a wise, prudent, just and brave - taking up this fight when it goes so far away from social conventions requires bravery, too - person could not simply wave away consequentialist reasoning as though it was nothing), and to do that you have to have some measure of impact, like QALYs. Right. But I think the strictly correct way of expressing that is in abstract QALYs that by construction don't have higher order effects of note. Comparing some good thing to some other thing, naively, without considering second-order effects when those are significant or greater than the first-order effects, seems naive.
And by my reckoning that's also a part of the pushback that EA faces in general: humans notoriously suffer from scope neglect and when thinking about the impact of gun violence, they don't think of gun fatalities times n (most of the dead were gangsters who had it coming anyway), but the second and greater order impacts they themselves experience vividly, and focusing on the exact number of dead seems wrongheaded. And in this case they might be right, too. (Of course, EA calculations can and should factor in nth order effects if they do seem like they would matter, and I would hazard a guess that's what EAs often do, but when people see the aforementioned kinds of comparisons, in my opinion they would be right to conclude them as naive).
Which reminds me of another argument in favor of virtue ethics: practical reasoning is often "newcomblike" (https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm), that is to say the method of your reasoning matters, just like it does in the original paradox. "Ends don't justify the means" isn't a necessary truth: it's a culturally evolved heuristic that is right more often than not, making some of us averse to nontrivial consequentialist reasoning. "I have spotted this injustice, realized it's something I can actually do something about [effectiveness of EA implicitly comes in here], and devoted myself to the task of righting the wrong" is an easier sell than "you can save a life for n dollars".
Wow, it's gotta be tough out there in the social media wilderness. Anyway, just dropped by to express my support to the EA, hope the current shitstorm passes and the [morally] insane people of twitter will move to the next cause du jour.
I think it's worth asking why EA seems to provoke such a negative reaction -- a reaction we don't see with charitable giving in general or just generic altruism. I mean claiming to be altruistic while self-dealing is the oldest game in town.
My theory is that people see EA as conveying an implied criticism of anyone who doesn't have a coherent moral framework of theory of what's the most effective way to do good.
That's unfortunate, since while I obviously think it's better to have such a theory that doesn't mean we should treat not having one as blameworthy (anymore than we treat not giving a kidney or living like a monk and giving everything you earn away). I'd like to figure out a way to avoid this implication but I don't really have any ideas here.
It's funny how you mention giving a kidney, since Scott's post on donating his kidney got the exact the same reaction.
I've certainly seen criticism that seems to boil down to either: a) they are weird and therefore full of themselves b) they influence Bay Area billionaires and are therefore bad.
One can do some "market research" by reading r/buttcoin comments about SBF, which take occasional pot-shots at EA. Some it is just cynicism about the idea of doing good (r/buttcoin self-selects for cynics). But you can also see the distaste that "normal people" have for the abstract philosophizing behind longtermist EA, especially when it leads to actions that are outwardly indistinguishable from pure greed.
E.g. https://www.reddit.com/r/Buttcoin/comments/16mxkji : "I'm sure it's only a matter of time before we discover why this was actually not only an ethical use of funds, but the only ethical use once you consider the lives of 10^35 future simulated versions of Sam."
The folks who dislike charity EA still confuse me. But they do crop up in the Marginal Revolution comments whenever EA is mentioned.
E.g. https://marginalrevolution.com/marginalrevolution/2023/01/my-st-andrews-talk-on-effective-altruism.html?commentID=160553474 : "The idea that someone should disregard their family, friends, neighbors, cultural group, religious affiliation, region, state, and/or nation in order to do the most 'good' is absurd on its face and contrary to nature."
My sense is that ultimately that comes from the sense that they are being condescended to/tricked by people who are essentially saying: I'm soo much smarter than you and that means I get to break all the rules.
It's hard because I do think it's important to be able to say that: hey intuitions are really often wrong here. But the problem is that there is a strong tendency for people to replace intuitions with whatever people with a certain sort of status are saying which then is problematic.
Sorry, should have started with: that's a good idea and I'll try to do that!
> "The idea that someone should disregard their family, friends, neighbors, cultural group, religious affiliation, region, state, and/or nation in order to do the most 'good' is absurd on its face and contrary to nature."
Ah, yes, religious affiliation, state, and nation: things that totally exist in nature. Does this guy think dogs are arguing about Protestantism? Does he believe that owls have organized a republic in Cascadia?
To be fair, the average Marg Rev comment is not as... let us say "intellectual"... as the average ACX comment. And the commenters can be quite grumpy.
Oh man, I hadn't seen that one before about the parents urging him to milk the cow for their benefit. I was aware he used FTX funds to buy them a holiday home, and that he was donating to his mother/his brother and their good causes, but blatant "are you sending us 7 million or 10 million in cash, please clarify" - his parents were way more involved in the entire mess than I suspected.
"Despite knowing or blatantly ignoring that the FTX Group was insolvent or on the brink of insolvency, Bankman and Fried discussed with Bankman-Fried the transfer to them of a $10 million cash gift and a $16.4 million luxury property in The Bahamas. Bankman and Fried also pushed for tens of millions of dollars in political and charitable contributions, including to Stanford University, which were seemingly designed to boost Bankman’s and Fried’s professional and social status at the expense of the FTX Group, and by extension, its customers and other creditors. Additionally, Fried, concerned with the optics of her son and his companies donating money to the organization she co-founded and other causes she supported, encouraged Bankman-Fried and others within the FTX Group to avoid (if not violate) federal campaign finance disclosure rules by engaging in straw donations or otherwise concealing the FTX Group as the source of the contributions."
Possible big happy family reunion in jail? ☹
Also what the heck with 7 million in cash, were they walking around with suitcases full of dollar bills or what? Every time I read something about FTX that makes me go "Well *that* was no way to run a business, how could they do that?", something new pops up to make me go "Wow, they dug the hole even *deeper*".
'But the journalists think we’re a sinister conspiracy that has “taken over Washington” and have the whole Democratic Party in our pocket.'
What a very, very different world it would be if that were actually the case...
A post like this, and comments, are bizarre to someone whose world was the 20th century, not the 21st. All who come at the topic seem unaware (must be pretending?) there was a big and novel movement once upon a time, that begat several large non-profits and scores of smaller grassroots ones - and none of the issues and concerns of that once-influential cause even clear the narrow bar of the EAs.
That's an interesting comment. Could you elaborate on which movement(s) you have in mind? There were so _many_ movements in the 20th century, both benign and lethal, that I would like to know the specific one(s) you mean.
The conservation movement.
It was especially attractive to people who might, perhaps, be viewed as analogous to the sort of folks currently drawn to EA. But the value systems being so profoundly incompatible, I suppose they must not be the *same* people after all.
Come to any conservation-related meeting or workday. Nothing but Boomers, and even older than Boomers. It will die with them, although they didn't originate it. Of course, it's not too late to talk to Boomers about this subject - but almost too late - and that would require a deal of humility, and it is more fun to hate on Boomers en masse.