Yeah, this is where I end up on it as well. To the extent that it helps people give more effectively, it's been a great thing.
It does go a bit beyond merely annoying, though. I think something that Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much like roles as priests etc. have done in the past. The average person should be wary of people smarter than them telling them what to do with their money.
The only durable protection from scammers is a measurable outcome. That's part of why I think EA is only effective when it focuses on things that can be measured. The meat of the improvement in EA is moving money from frivolous luxury to measurable charity, not moving measurable charity to low probability moonshots.
I mean, GiveDirectly is a top charity on GiveWell; are you claiming that showering poor people in money to the tune of $0.92 per dollar still produces a lot of transaction cost?
Is your thought here that transaction costs are implicit and thus not properly priced into the work done? I think at the development economics level that is not terribly true. The transaction costs of poverty relief in urban USA vs. poverty relief in San Salvador are not terribly different once the infrastructure in question is set up.
"Compared to what" is my question.
Everything has transaction costs. Other opportunities have similar transaction costs. I would be surprised if they didn't. However, I agree I would like to see this argued explicitly somewhere.
- Instead of spending an hour studying, you should spend a few minutes figuring out how best to study, then spend the rest of the time studying
- But how long should you spend figuring out the best way to study? Maybe you should start by spending some time figuring out the best balance between figuring out the right way to study, and studying
- But how long should you spend on THAT? Maybe you should start by spending some time figuring out the best amount of time to spend figuring out the best amount of time to spend figuring out . . .
- ...and so on until you've wasted the whole hour in philosophical loops, and therefore you've proven it's impossible to ever study, and even trying is a net negative.
In practice people just do a normal amount of cost-benefit analysis which costs a very small portion of the total amount of money donated.
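To make that rebuttal concrete, here's a toy model with numbers I'm assuming for illustration (not anything from the thread): if each successive level of meta-analysis takes only a fraction of the time of the level below it, the total overhead is a convergent geometric series rather than the whole hour.

```python
# Toy model with assumed numbers: 6 minutes deciding how to study, and each
# further meta-level ("how long should I spend deciding how long to spend...")
# taking a tenth of the previous level's time.
def total_overhead(first_level_minutes=6.0, ratio=0.1, levels=10):
    """Total time spent across successive levels of meta-analysis."""
    return sum(first_level_minutes * ratio**k for k in range(levels))

hour = 60.0
overhead = total_overhead()  # ~6.7 minutes; bounded above by 6/(1-0.1)
print(f"Time left to actually study: {hour - overhead:.1f} of {hour} minutes")
```

The regress only eats the whole hour if each meta-level costs as much as the one below it, which is not how anyone actually behaves.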
Centralizing and standardizing research into which charities do exactly what (so the results can then be easily checked against any given definition of "effectiveness") reduces transaction costs by eliminating a lot of what would otherwise be needlessly duplicated effort.
A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)
I'm probably below the average intelligence of people who read Scott, but that's essentially my position. AI doomerism is kinda cringe and I don't see evidence of anything even starting to be like their predictions. EA is cool because instead of donating to some charity that spends most of their money on fundraising or whatever, we can directly save/improve lives.
Which "anything even starting to be like their predictions" are you talking about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually takes weeks/months after release before they stop saying things that embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live example of a rogue AI, that's choosing to not prevent the first few rogue AIs (it will take some time to notice the first rogue AI and react, during which time more may be made). In turn, that's some chance of human extinction, because it is not obvious that those first few won't be able to kill us all. It is notably easier to kill all humans (as a rogue AI would probably want) than it is to kill most humans but spare some (as genocidal humans generally want); the classic example is putting together a synthetic alga that isn't digestible, doesn't need phosphate and has a more-efficient carbon-fixing enzyme than RuBisCO, which would promptly bloom over all the oceans, pull down all the world's CO2 into useless goo on the seafloor, and cause total crop failure alongside a cold snap, and which takes all of one laboratory and some computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and I'd rather not take it.
> Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
I concur on beating Go, but CAPTCHAs were never thought to be unbeatable by AI - it's more that they make robo-filing forms rather expensive. Writing text also never seemed that doubtful, and driving cars, at least as far as they can do it at the moment, never seemed unlikely.
This would have been very convincing if anyone like Patrick had given timelines on the earliest point at which they expected the advance to have happened, at which point we can examine if their intuitions in this are calibrated. Because the fact is if you asked most people, they definitely would not have expected art or writing to fall before programming. Basically only gwern is sinless.
On the other hand, EY has consistently refused to make measurable predictions about anything, so he can't claim credit in that respect either. To the extent you can infer his expectations from earlier writing, he seems to have been just as surprised as anyone, despite notionally being an expert on AI.
1. No one mentioned Eliezer. If Eliezer is wrong about timelines, that doesn't mean we suddenly exist in a slow takeoff world. And it's basically a bad faith argument to imply that Eliezer getting surprised *in the direction of capabilities getting better than expected* is apparently evidence of non-doom.
2. Patrick is explicitly saying that he sees no evidence. Insofar as we can use Patrick's incredulity as evidence, it would be worth far more if it was calibrated and informed rather than uncalibrated. AI risk arguments depend on more things than just incredulity, so the """lack of predictions""" matters relatively less. My experience has been that people who use their incredulity in this manner in fact do worse at predicting capabilities, hence why getting disproven would be encouraging.
3. I personally think that by default we cannot predict what the rate of change is, but I can lie lazily on my hammock and predict "there will be increases in capability barring extreme calamity" and essentially get completely free prediction points. If you do believe that we're close to a slowdown, or we're past the inflection point of a sigmoid and that my priors about progress are wrong, you can feel free to bet against my entirely ignorant opinion. I offer up to 100 dollars at ratios you feel are representative of slowdown, conditions and operationalizations tbd.
4. If you cared about predictive accuracy, gwern did the best and he definitely believes in AI risk.
"write text that can't be easily distinguished from human"? Really?
*None* of the examples I've seen measure up to this, unless you're comparing it to a young human that doesn't know the topic but has some measure of b*sh*tting capability - or rather, thinks he does.
Yeah there are a bunch of studies now where they give people AI text and human text and ask them to rate them in various ways and to say whether they think it is a human or AI, and generally people rate the AI text as more human.
The examples I've seen are pretty obviously talking around the subject, when they don't devolve into nonsense. They do not show knowledge of the subject matter.
Perhaps that's seen as more "human".
I think that if they are able to mask as human, this is still useful, but not for the ways that EA (mostly) seems to think are dangerous. We won't get advances in science, or better technology. We might get more people falling for scammers - although that depends on the aim of the scammer.
Scammers that are looking for money don't want to be too convincing because they are filtering for gullibility. Scammers that are looking for access on the other hand, do often have to be convincing in impersonating someone who should have the ability to get them to do something.
But Moore's law is dead. We're reaching physical limits, and under these limits, it already costs millions to train and run a model that, while impressive, is still multiple orders of magnitude away from genuinely dangerous superintelligence. Any further progress will require infeasible amounts of resources.
Moore's Law is only dead by *some* measures, as has been true for 15-20 years. The limiting factors for big ML are mostly inter-chip communications, and those are still growing aggressively.
This is one of the reasons I'm not a doomer, which is that most doomers' mechanism of action for human extinction is biological in nature, and most doomers are biologically illiterate.
RuBisCO is known to be pretty awful as carboxylases go. PNA + protein-based ribosomes avoids the phosphate problem.
I'm not saying it's easy to design Life 2.0; it's not. I'm saying that with enough computational power it's possible; there clearly are inefficiencies in the way natural life does things because evolution likes local maxima.
You're correct on the theory; my point was that some people assume that computation is the bottleneck rather than actually getting things to work in a lab within a reasonable timeframe. Not only is wet lab challenging, I also have doubts as to whether biological systems are computable at all.
I think the reason that some people (e.g. me) assume that computation* is the bottleneck is that IIRC someone actually did assemble a bacterium (of a naturally-existing species) from artificially-synthesised biomolecules in a lab. The only missing component to assemble Life 2.0 would then seem to be the blueprint.
If I'm wrong about that experiment having been done, please tell me, because yeah, that's a load-bearing datum.
*Not necessarily meaning "raw flops", here, but rather problem-solving ability
Much like I hope for more people to donate to charity based on the good it does rather than based on the publicity it generates, I hope (but do not expect) that people decide to judge existential risks based on how serious they are rather than based on how cringe they are.
Yeah this is where I am. A large part of it for me is that after AI got cool, AI doomerism started attracting lots of naked status seekers and I can't stand a lot of it. When it was Gwern posting about slowing down Moore's law, I was interested, but now it's all about getting a sweet fellowship.
Is your issue with the various alignment programs people keep coming up with? Beyond that, it seems like the main hope is still to slow down Moore's law.
Interesting, I did not get this impression, but I also do worry about AI risk; maybe that causes me to focus on the reasonable voices and filter out the nonsense. I'd be genuinely curious for an example of what you mean, although I understand if you wouldn't want to single out anyone in particular.
I don’t mind naked status seeking as long as people do it by a means that is effective at achieving good ends for the world. One can debate whether AI safety is actually effective, but if it is, EAs should probably be fine with it (just like the naked cash seekers who are earning to give).
I agree. But there seem to be a lot of people in EA with some serious scrupulosity going on. Like that person who said they would like to donate a kidney, but could not bear the idea that it might go to a meat-eater, and so the donor would be responsible for all the animal suffering caused by the recipient. It's as though EA is, for some people, a refuge from ever feeling they've done wrong -- as though that's possible!
What’s wrong with naked status seekers (besides their tendency to sometimes be counterproductive if advancing the cause works against their personal interests)?
It's bad when the status seeking becomes more important than the larger purpose. And at the point when it gets called "naked status seeking", it's already over that line.
They will only do something correct if it advances their status and/or cash? To the point of not researching or approving research into something if it looks like it won't advance them?
Definitely degree of confidence plays into it a lot. Speculative claims where it's unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely different ball game from "I notice that we claim to care about saving lives, and there's a proverbial $20 on the ground if we make our giving more efficient."
I think it also helps that those shorter-term impacts can be more visible. A malaria net is a physical thing that has a clear impact. There's a degree of intuitiveness there that people can really value
And yet, what exactly is the argument that the risk is actually low?
I understand and appreciate the stance that the doomers are the ones making the extraordinary claim, at least based on the entirety of human history to date. But when I hear people pooh-poohing the existential risk of AI, they are almost always pointing to what they see as flaws in some doomer's argument -- and usually missing the point that the narrative they are criticizing is usually just a plausible example of how it might go wrong, intended to clarify and support the actual argument, rather than the entire argument.
Suppose, for the sake of argument, that we switch it around and say that the null hypothesis is that AI *does* pose an existential risk. What is the argument that it does not? Such an argument, if sound, would be a good start toward an alignment strategy; contrariwise, if no such argument can be made, does it not suggest that at least the *risk* is nonzero?
It's weird that you bring up Robin Hanson, considering that he expects humanity to be eventually destroyed and replaced with something else, and sees that as a good thing. I personally wouldn't use that as an argument against AI doomerism, since people generally don't want humanity to go extinct.
What specific part of Robin Hanson's argument on how growth curves are a known thing do you find convincing?
That's the central intuition underpinning his anti-foom worldview, and I just don't understand how someone can generalize that to something which doesn't automatically have all the foibles of humans. Do you think that a population of people who have to sleep, eat and play would be fundamentally identical to an intelligence that is differently constrained?
I'm not seeing any strong arguments there, in that he's not making arguments like "here is why that can't happen", but instead is making arguments of the form "if AI is like <some class of thing that's been around a while>, then we shouldn't expect it to rapidly self-improve/kill everything, because that other thing didn't".
E.g. if superintelligence is like a corporation, it won't rapidly self-improve.
Okay, sure, but there are all sorts of reasons to worry superintelligent AGI won't be like corporations. And this argument technique can work against any not-fully-understood future existential threat. Super-virus, climate change, whatever. By the anthropic principle, if we're around to argue about this stuff, then nothing in our history has wiped us out. If we compare a new threat to threats we've encountered before and argue that based on history, the new threat probably isn't more dangerous than the past ones, then 1) you'll probably be right *most* of the time and 2) you'll dismiss the threat that finally gets you.
I’ve been a big fan of Robin Hanson since there was a Web; like Hanania, I have a strong prior to Trust Robin Hanson. And I don’t have any real argument with anything he says there. I just don’t find it reassuring. My gut feeling is that in the long run it will end very very badly for us to share the world with a race that is even ten times smarter than us, which is why I posed the question as “suppose the null hypothesis is that this will happen unless we figure out how to avoid it”.
Hanson does not do that, as far as I can tell. He quite reasonably looks at the sum of human history and finds that he is just not convinced by doomers’ arguments, and all his analysis concerns strategies and tradeoffs in the space that remains. If I accept the postulate that this doom can’t happen, that recursive intelligence amplification is really as nonlumpy as Hanson suspects, then I have no argument with what he says.
But he has not convinced me that what we are discussing is just one more incremental improvement in productivity, rather than an unprecedented change in humans’ place in the world.
I admit that I don’t have any clear idea whether that change is imminent or not. I don’t really find plausible the various claims I have read that we’re talking about five or ten years. And I don’t want to stop AI work: I suspect AGI is a prerequisite for my revival from cryosuspension. But that just makes it all the more pressing to me that it be done right.
Setting aside the substance of the argument, I find its form to be something like a Pascal's wager bait-and-switch: if there is even a small chance you will burn in hell for eternity, why wouldn't you become Catholic? Such an argument fails for a variety of reasons, one being that it doesn't account for alternative religions and their probabilities, with alternative outcomes.
So I find I should probably update my reasoning toward there being some probability of x-risk here, but the probability space is pretty large.
One of the good arguments for doomerism is that the intelligences will be in some real sense alien. That there is a wider distribution of possible ways to think than human intelligence, including how we consider motivation, and this could lead to paper-clip maximizers, or similar AI-Cthulhus of unrecognizable intellect. I fully agree that these might very likely be able to easily wipe us out. But there are many degrees of capability and motivation, and I don't see the reason to assume that, either through a side-effect of ulterior motivation or through direct malice, this leads to the certainty of extinction expressed by someone like Eliezer. There are many possibilities, and many are fraught. We should invest in safety and alignment. But that doesn't mean we should consider x-risk a certainty, and certainly not at double-digit likelihoods within short timeframes.
Yes, the space of possibilities (I think you meant this?) is pretty large. But x-risk is most of it. Most possible outcomes of optimisation processes over Earth and the Solar System have no flourishing humanity in them.
It is perhaps a lot like other forms of investment. You can't just ask "What's the optimal way to invest money to make more money?" because it depends on your risk tolerance. A savings account will give you 5%. Investing in a random seed-stage startup might make you super-rich but usually leaves you with nothing. If you invest in doing good then you need to similarly figure out your risk profile.
The good thing about high-risk financial investments is they give you a lot of satisfaction of sitting around dreaming about how you're going to be rich. But eventually that ends when the startup goes broke and you lose your money.
But with high-risk long-term altruism, the satisfaction never has to end! You can spend the rest of your life dreaming about how your donations are actually going to save the world and you'll never be proven wrong. This might, perhaps, cause a bias towards glamourous high-risk long-term projects at the expense of dull low-risk short-term projects.
Much like other forms of investment, if someone shows up and tells you they have a magic box that gives you 5% a month, you should be highly skeptical. Except replace %/month with QALYs/$.
I see your point, but simple self-interest is sufficient to pick up the proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may have a little bit of an analogous filter, but I doubt that it is remotely as strong.
The advantage of making these types of predictions is that even if someone says that the unflattering thing is not even close to what drives them, you can go on thinking "they're just saying that because my complete and perfect fantasy makes them jealous of my immaculate good looks".
Yeah I kinda get off the train at the longtermism / existential risk part of EA. I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
I like the malaria bed nets stuff because its easy to confirm that my money is being spent doing good. That's almost exactly the opposite when it comes to AI-risk. For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing? Has discovering RLHF which in turn powered ChatGPT and launched the AI revolution made AI-risk more or less likely? It almost feels like one of those Greek tragedies where the hero struggles so hard to escape their fate they end up fulfilling the prophecy.
I think he was pointing out that EAs have been a big part of the current AI wave. So whether you are a doomer or an accelerationist, you should agree that EA's impact has been large, even if you disagree about the sign.
Problem is, the OpenAI scuffle shows that right now, as AI is here or nearly here, the ones making the decisions are the ones holding the purse strings, and not the ones with the beautiful theories. Money trumps principle and we just saw that blowing up in real time in glorious Technicolor and Surround-sound.
So whether you're a doomer or an accelerationist, the EAs' impact is "yeah, you can re-arrange the deckchairs, we're the ones running the engine room" as things are going ahead *now*.
Not that I have anything against EAs, but as someone who wants to _see_ AGI, who doesn't want to see the field stopped in its tracks by impossible regulations, as happened to civilian nuclear power in the USA, I hope that you are right!
I mean, if I really believed we'd get conscious, agentic AI that could have its own goals and be deceitful to humans and plot deep-laid plans to take over and wipe out humanity, sure I'd be very, very concerned and unhappy about this result.
I don't believe that, nor that we'll have Fairy Godmother AI. I do believe we'll have AI, an increasing adoption of it in everyday life, and it'll be one more hurdle to deal with. Effects on employment and jobs may be catastrophic (or not). Sure, the buggy whip manufacturers could shift to making wing mirrors for the horseless carriages when that new tech happened, but what do you switch to when the new tech can do anything you can do, and better?
I think the rich will get richer, as per usual, out of AI - that's why Microsoft etc. are so eager to pave the way for the likes of Sam Altman to be in charge of such 'safety alignment' because he won't get in the way of turning on the money-fountain with foolish concerns about going slow or moratoria.
AGI may be coming, but it's not going to be as bad or as wonderful as everyone dreads/hopes.
That's mostly my take too. But to be fair to the doomer crowd, even if we don't buy the discourse on existential risks, what this concern is prompting them to do is lots of research on AI alignment, which in practice means trying to figure out how AI works inside and how it can be controlled and made fit for human purposes. Which sounds rather useful even if AI ends up being on the boring side.
> but what do you switch to when the new tech can do anything you can do, and better?
Nothing -- you retire to your robot ranch and get anything you want for free. Sadly, I think the post-scarcity AGI future is still very far off (as in, astronomically so), and likely impossible...
I think that the impact of AGI is going to be large (even if superintelligence either never happens or the effect of additional smarts just saturates, diminishing returns and all that), provided that it can _really_ do what a median person can do. I just want to have a nice quiet chat with the 21st century version of a HAL-9000 while I still can.
> if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
Surely these are different skills? Someone who could predict and warn against the dangers of nuclear weapon proliferation and the balance of terror, might still have been blindsided by their spouse cheating on them.
Suppose Trump gets elected next year. Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
Also, nobody else seems to have noticed that Adam D'Angelo is still on the board of OpenAI, but Sam Altman and Greg Brockman aren't.
I hardly think that's a fair comparison. Climatologists are not in a position to control the outcome of a presidential election, but effective altruists controlled 4 out of 6 seats on the board of the company.
Of course, if you think that they played their cards well (given that D'Angelo is still on the board) then I guess there's nothing to argue about. I—and I think most other people—believe they performed exceptionally poorly.
The people in the driver's seat on global-warming activism are more often than not fascist psychopaths like Greta Thunberg, who actively fight against the very things that would best fight global warming, like nuclear energy and natural gas pipelines, so they can instead promote things that would make it worse, like socialism and degrowth.
We will never be able to rely on these people to do anything but cause problems. They should be shunned like lepers.
I think that if leaders are elected that oppose climate mitigation, that is indeed a knock on the climate-action political movement. They have clearly failed in their goals.
Allowing climate change to become a partisan issue was a disaster for the climate movement.
I agree completely. Nonetheless, the claim that spending money on AI safety is a good investment rests on two premises: That AI risk is real, and that EA can effectively mitigate that risk.
If I were pouring money into activists groups advocating for climate action, it would be cold comfort to me that climate change is real when they failed.
The EA movement is like the Sunrise Movement/Climate Left. You can have good motivations and the correct ambitions but if you have incompetent leadership your organization can be a net negative for your cause.
> Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
It is a fair criticism of those who believe in the x-risk, or at least the extreme downsides, of climate change that they have not figured out ways to better accomplish their goals beyond political agitation. Building coalitions with potentially non-progressive causes, being more accepting of partial, incremental solutions. Playing "normie" politics along the lines of Matt Yglesias, and maybe holding your nose for some negotiated deals where the right gets its way, probably mitigates and prevents situations where the climate people won't even have a seat at the table. For example, is making more progress on preventing climate extinction worth stalling out another decade on trans rights? I don't think that is exactly the tradeoff on the table, but there is a stark unwillingness to confront such things by a lot of people who publicly push for climate maximalism.
"Playing normie politics" IS what you do when you believe something is an existential risk.
IMHO the test, if you seriously believe all these claims of existential threat, is your willingness to work with your ideological enemies. A real existential threat was, eg, Nazi Germany, and both the West and USSR were willing to work together on that.
When the only move you're willing to make regarding climate is to offer a "Green New Deal" it's clear you are deeply unserious, regardless of how often you say "existential". I don't recall the part of WW2 where FDR refused to send Russia equipment until they held democratic elections...
If you're not willing to compromise on some other issue then, BY FSCKING DEFINITION, you don't really believe your supposed pet cause is existential! You're just playing signaling games (and playing them badly, believe me, no-one is fooled). cf. Greta Thunberg suddenly becoming an expert on Palestine.
FDR giving the USSR essentially unlimited resources for their war machine was a geostrategic disaster that led directly to the murder and enslavement of hundreds of millions under tyrannies every bit as gruesome as Hitler's. Including the PRC, which menaces the world to this day.
The issue isn't that compromises on existential threats are inherently bad. The issue is that, many times, compromises either make things worse than they would've been otherwise, or create new problems that are as bad as or worse than what they subsumed.
I can think of a few groups, for example world Jewry, that might disagree with this characterization...
We have no idea how things might have played out.
I can tell you that the Hard Left, in the US, has an unbroken record of snatching defeat from the jaws of victory, largely because of their unwillingness to compromise, and I fully expect this trend to continue unabated.
Effect on climate? I expect we will muddle through, but in a way that draws almost nothing of value from the Hard Left.
The reason we gave the USSR unlimited resources was because they were directly absorbing something like 2/3 of the Nazis' bandwidth and military power in a terribly colossal years-long meatgrinder that killed something like 13% of the entire USSR population.
Both the UK and USA are extremely blessed that the USSR was willing to send wave after wave of literally tens of millions of their own people into fighting the Nazis and absorbing so much of their might, and it was arguably the deal of the century to trade mere manufactured objects for the breathing room and Nazi distraction / might-dissipation that this represented.
The alternative would have been NOT giving the USSR unlimited resources, the Nazis quickly steamrolling the USSR, and then turning 100% of their attention and military might towards the UK, a fight they would almost certainly have won. Or even better: not getting enough materiel to conduct a war and realizing he would lose, Stalin makes a deal with Germany and they BOTH focus on fighting the UK and USA - how long do you think the UK would have survived that?
Would the USA have been able to successfully fight a dual-front war with basically all of Europe aligned under Nazi power PLUS Japan with China's resources? We don't know, but it's probably a good thing in terms of overall deaths and destruction on all sides that we didn't need to find out.
Sure, communism sucked for lots of people. But a Nazi-dominated Europe / world would probably have sucked more.
Ah come on, Scott: that the board got the boot and was revamped to the better liking of Sam who was brought back in a Caesarian triumph isn't very convincing about "so this guy is still on the board, that totes means the good guys are in control and keeping a cautious hand on the tiller of no rushing out unsafe AI".
Convince me that a former Treasury Secretary is on the ball about the latest theoretical results in AI, go ahead. Maybe you can send him the post about AI Monosemanticity, which I genuinely think would be the most helpful thing to do? At least then he'd have an idea about "so what are the eggheads up to, huh?"
While I agree with the general thrust, I think the short-term vs. long-term tension is neglected. For instance, you yourself recommended switching from chicken to beef to help animals, but this neglects the fact that over time, beef is less healthy than chicken, thus harming humans in a not-quickly-visible way. I hope this wasn't explicitly included and allowed in your computation (you did the switch yourself, according to your post), but this just illuminates the problem: EA wants there to be clear beneficiaries, but "clear" often means "short-term" (for people who think AI doomerism is an exception, remember that for historical reasons, people in EA have, on median, timelines that are extremely short compared to most people's).
> I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
They got outplayed by Sam Altman, the consummate Silicon Valley insider. According to that anonymous rumour-collecting site, they're hardly the only ones, though it suggests they wouldn't have had much luck defending us against an actual superintelligence.
> For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing?
No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek tragedy was entirely deliberate. But as Scott said, it is a thing that someone has said.
I actually pretty completely endorse the longtermism and existential risk stuff - but disagree about the claims about the best ways to achieve them.
Ordinary global health and poverty initiatives seem to me to be much more hugely influential in the long term than the short term, thanks to the magic of exponential growth. An asteroid or gamma ray or whatever program that has a .01% chance of saving 10^15 lives a thousand years from now looks good compared to saving a few thousand lives this year at first - but when you think about how much good those few thousand people will do for their next 40 generations of descendants, as well as all the people those 40 generations of descendants will help, either through normal market processes or through effective altruist processes of their own, this starts to look really good at the thousand year mark.
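As a rough back-of-the-envelope sketch of that comparison (the growth rate and horizon below are my own illustrative assumptions, not the commenter's):

```python
# Compare the expected value of the 0.01%-chance moonshot with a few thousand
# lives saved today whose downstream benefits are assumed to compound modestly
# for a thousand years (rate and horizon are illustrative assumptions).
moonshot_expected = 1e-4 * 1e15          # 0.01% chance of 10^15 lives ~ 1e11

lives_now = 2_000
annual_growth = 0.02                     # assumed compounding of downstream benefit
horizon_years = 1_000
compounded_benefit = lives_now * (1 + annual_growth) ** horizon_years

print(f"moonshot: {moonshot_expected:.1e}, compounded charity: {compounded_benefit:.1e}")
# With these assumed numbers the ordinary charity lands in the same ballpark
# (~8e11 vs ~1e11), which is the point about the thousand-year mark.
```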
AI safety is one of the few existential risk causes that doesn’t depend on long term thinking, and thus is likely to be a very valuable one. But only if you have any good reason to think that your efforts will improve things rather than make them worse.
I remember seeing this for the "climate apocalypse" thing many years ago: some conservationist (specifically about birds, I think) was annoyed that the movement had become entirely about global warming.
Global warming is simply a livelier cause for the Watermelons to get behind. Not because they genuinely care about global warming, as they oppose the solutions that would actually help alleviate the crisis, but because they're psychopathic revolutionary socialists who see it as the best means available today of accomplishing their actual goal: the abolition of capitalism and the institution of socialism.
EA as a movement to better use philanthropic resources to do real good is awesome.
AI doomerism is a cult. It's a small group of people who have accrued incredible influence in a short period of time on the basis of what can only be described as speculation. The evidence base is extremely weak and it relies far too much on "belief". There are conflicts of interest all over the place that the movement is making no effort to resolve.
At this point a huge number of experts in the field consider AI risk to be a real thing. Even if you ignore the “AGI could dominate humanity” part, there’s a large amount of risk from humans purposely (mis)using AI as it grows in capability.
Predictions about the future are hard and so neither side of the debate can do anything more than informed speculation about where things will go. You can find the opposing argument persuading, but dismissing AI risk as mere speculation without evidence is not even wrong.
The conflicts of interest tend to be in the direction of ignoring AI risk by those who stand to profit from AI progress, so you have this exactly backwards.
You can't ignore the whole "AGI could dominate humanity" part, because that is core to the arguments that this is an urgent existential threat that needs immediate and extraordinary action. Otherwise AI is just a new disruptive technology that we can deal with like any other new, disruptive technology. We could just let it develop and write the rules as the risks and dangers become apparent. The only way you justify the need for global action right now is based on the belief that everybody is going to die in a few years time. The evidence for existential AI risk is astonishingly weak given the amount of traction it has with policymakers. It's closer to Pascal's Wager rewritten for the 21st century than anything based on data.
On the conflict of interest, the owners of some of the largest and best funded AI companies on the planet are attempting to capture the regulatory environment before the technology even exists. These are people who are already making huge amounts of money from machine learning and AI. They are taking it upon themselves to write the rules for who is allowed to do AI research and what they are allowed to do. You don't see a conflict of interest in this?
Let's distinguish "AGI" from "ASI", the latter being a superintelligence equal to something like a demigod.
Even AGI strictly kept to ~human level in terms of reasoning will be superhuman in the ways that computers are already superhuman: e.g., data processing at scale, perfect memory, replication, etc., etc.
Even "just" that scenario of countless AGI agents is likely dangerous in a way that no other technology has ever been before if you think about it for 30 seconds. The OG AI risk people are/were futurists, technophiles, transhumanists, and many have a strong libertarian bent. "This one is different' is something they do not wish to be true.
Your "conflict of interest" reasoning remains backwards. Regulatory capture is indeed a thing that matters in many arenas, but there are already quite a few contenders in the AI space from "big tech." Meaningfully reducing competition by squishing the future little guys is already mostly irrelevant in the same way that trying to prevent via regulation the creation of a new major social network from scratch would be pointless. "In the short run AI regulation may slow down our profits but in the long run it will possibly lock out hypothetical small fish contenders" is almost certainly what no one is thinking.
"No one on this successful tech company's board of directors is making decisions based on what will eventually get them the most monopoly profits" sounds like an extraordinary claim to me.
This is the board of directors that explicitly tried to burn the company down, essentially for being too successful. They failed, but can you ask for a more credible signal of seriousness?
1. Holy shit, is that an ironic thing to say after the OpenAI board meltdown. Also check out Anthropic's board and equity structure. Also profit-driven places like Meta are seemingly taking a very different approach. Why?
2. You’re doing the thing where decreasing hypothetical future competition from new, small entrants to a field equals monopoly. Even if there was a conspiracy by eg Anthropic to use regulatory barriers against new entrants, that would not impact the already highly competitive field between the several major labs. (And there are already huge barriers to entry for newcomers in terms of both expertise and compute. Even a potential mega contender like Apple is apparently struggling and a place like Microsoft found a partner.)
It's just that at this point a significant number of experts in AI have come around to believing AI risk is a real concern. So have a lot of prominent people in other fields, like national security. So have a lot of normies who simply intuit that developing super smart synthetic intelligence might go bad for us mere meat machines.
You can no longer just hand wave AI risk away as a concern of strange nerds worried about fictional dangers from reading too much sci-fi. Right or wrong, it's gone mainstream!
Who are some people who have accrued incredible influence and what is the period of time in which they gained this influence?
From my standpoint it seems like most of the people with increased influence are either a) established ML researchers who recently began speaking out in favor of deceleration and b) people who have been very consistent in their beliefs about AI risk for 12+ years, who are suddenly getting wider attention in the wake of LLM releases.
Acceptance of catastrophic risk from artificial superintelligence is the dominant position among the experts (including independent academics), the tech CEOs, the major governments, and the general public. Calling it a "small group of people who have accrued incredible influence" or "a cult" is silly. It's like complaining about organizations fighting Covid-19 by shouting "conspiracy!" and suggesting that the idea is being pushed by a select group.
The denialists/skeptics are an incredibly fractured group who don't agree with each other at all about how the risk isn't there; the "extinction from AI is actually good", "superintelligence is impossible", "omnipotent superintelligence will inevitably be absolutely moral", and "the danger is real but I can solve it" factions and subfactions do not share ideologies, they're just tiny groups allying out of convenience. I don't see how one could reasonably suggest that one or more of those is the "normal" group, to contrast with the "cult".
I think there’s an important contrast between people who think that AI is a significant catastrophic risk, and people who think there is a good project available for reducing that risk without running a risk of making it much worse.
For those of you that shared the "I like global health but not longtermism/AI Safety", how involved were you in EA before longtermism / AI Safety became a big part of it?
I think it is a good question to raise with the EA-adjacent. Before AI doomerism and the tar-and-feathering of EA, EA-like ideas were starting to get more mainstream traction and adoption. Articles supportive of, say, givewell.org in local papers, not mentioning EA by name but discussing some of the basic philosophical ideas, were starting to percolate out more into the common culture. Right or wrong, there has been a backlash that is disrupting some of that influence, even though those _in_ the EA movement are still mostly doing the same good stuff Scott outlined.
Minor point: I'd prefer to treat longtermism and AI Safety quite separately. (FWIW, I am not in EA myself.)
Personally, I want to _see_ AGI, so my _personal_ preference is that AI Safety measures at least don't cripple AI development like regulatory burdens made civilian nuclear power grind to a 50 year halt in the USA. That said, the time scale for plausible risks from AGI (at least the economic displacement ones) is probably less than 10 years and may be as short as 1 or 2. Discussing well-what-if-every-job-that-can-be-done-online-gets-automated does not require a thousand-year crystal ball.
Longtermism, on the other hand, seems like it hinges on the ability to predict consequences of actions on *VASTLY* longer time scales than anyone has ever managed. I consider it wholly unreasonable.
None of this is to disparage Givewell or similar institutions, which seem perfectly reasonable to me.
I actually think that longtermism advocates for ordinary health and development charity - that sort of work grows exponentially in impact over the long term and thus comes out looking even better than things like climate or animal welfare, whose impacts grow closer to linearly with time.
The problem with longtermism is that you can use it to justify pretty much anything, regardless of whether you're even right, as long as your ends are sufficiently far away from the now that you never actually have to be held accountable for getting things wrong.
It's not a very good philosophy. People should be saved from malaria for its own sake. Not because of "longtermism".
Given a choice between several acts which seem worth doing for their own sake, the rate at which secondary benefits potentially compound over the long term could be a useful tiebreaker.
"that sort of work grows exponentially in impact over the long term" Some of the longtermist arguments talk about things like effects over a time scale where they expect us to colonize the galaxy. The time scale over which economies have been growing more-or-less steadily is more like 200-300 years. I think that it is sane to make a default assumption of exponential impact, as you describe, for that reason over that time scale (though many things, AI amongst them, could invalidate that). _Beyond_ 200-300 years, I don't think smoothish-growth-as-usual is a reasonable expectation. I think all we can say longer term than that is _don't_ _know_.
I heard about EA and got into the global health aspects of it from a talk on AI safety I went to given by... EY. I went to the talk on AI safety because I'd read HPMOR and just wanted to meet the author.
I wasn't at all convinced about AI safety, but I became interested in the global health aspects of EA. This year my donations went to PSI. I'm still an AI sceptic.
I gave money to GiveDirectly, which is EA-adjacent, and some years would get GiveWell endorsements. It never gets to the top of the recommendation list, but has the big advantage of having a low variance (especially the original formulation, where everyone living in a poor village got a one-time unconditional payout). "I can see you're not wasting the funds" is a good property if you have generally low trust in people running charitable orgs (the recent turn into generating research papers to push UBI in the US is unfortunate).
AI-doom-people have a decent shot at causing more deaths than all other human causes put together, if they follow the EY "nuke countries with datacenters" approach. Of course they'll justify it by appealing to the risk of total human extinction, but it shouldn't be surprising that people who estimate a substantially lower probability of the latter see the whole endeavor as probably net-negative. You'd be better off burning the money.
My only prior exposure was Doing Good Better, before seeing a *lot* of longtermism/x-risk messaging at EA Cambridge in 2018 (80k hours workshop, AI safety reading group, workshops at EA Cambridge retreat).
I considered AI safety (I'm a CS researcher already), enough to attend the reading group. But it seemed like pure math-level mental gymnastics to argue that the papers had any application to aligning future AGIs, and I dislike ML/AI research anyway.
Well there's also the part where people may have been involved in charity/NGO stuff before the cool kids relabeled it as EA.
Not to blame anyone for the relabeling though - if it got lots of fresh young people involved in humanitarian activity, and some renewed interest into its actual efficacy, they're more than entitled to take pride and give it a new name.
Freddie de Boer was talking about something like this today, about retiring the EA label. The effective EA orgs will still be there even if there is no EA. But I'm not really involved in the community, even if I took the Giving What We Can pledge, so it doesn't really matter much to me if AI X-risk is currently sucking up all the air in the movement.
I agree with the first part, but the problems with EA stem beyond AI doomerism. People in the movement seriously consider absurd conclusions like it being morally desirable to kill all wild animals, it has perverse moral failings as an institution, its language has evolved to become similar to postmodern nonsense, it has a strong left wing bias, and it has been plagued by scandals.
Surely none of that is necessary to get more funding to go towards effective causes. I’d like to invite someone competent to a large corporate so that we can improve the effectiveness of our rather large donations, but the above means I have no confidence to do so.
Well, at some point people also considered conclusions as absurd as giving voting rights to women, and look where we are. Someone has to consider things to understand if they're worth anything.
The problem is that utilitarianism is likely a fatally flawed approach when taken to its fullest, most extreme form. There is some element of deontology that probably needs to be accounted for in a more robust ethical framework.
Or, hah, maybe AGI is a Utility Monster we should accelerate: if our destruction would provide more global utility for such an optimizing agent than our continued existence, it should be the wished-for outcome. But such ideas are absurd.
To point out, Bentham in fact advocated for women's rights "before his time", and this led to many proto-feminist works getting published by John Stuart Mill. In fact, arguments against his stance at the time cited that women only mattered in the context of what they could do for men, so it was ridiculous to speak of suffrage.
I don't get it. Which one is the more plausible claim? Because for most of history, it would have been "killing whole classes of animals and people". The only reason that isn't true today is precisely because some people were willing to ponder absurd trains of thought.
Deliberate attempts to exterminate whole classes of people go back to at least King Mithridates VI in 88 BCE. For most of human history, giving women (or anyone) the vote was a weird and absurd idea, while mass slaughter was normal.
It's because people were willing to entertain "absurd" ideas that mass slaughter is now abhorrent and votes for all are normal.
Morally deranged cults don’t “seriously consider” ideas that go diametrically against what other members of the cult endorse. Morally deranged cults outright endorse these crazy ideas. EA does not endorse the elimination of wild animals, though it does consider it seriously.
Any idea should be considered based on its merit, not an emotional reaction. I am not sure if you think I am in a cult, or the people in EA are.
All I can say is that negative utilitarianism exists. There is even a book, Suffering-Focused Ethics, exploring roughly the idea that suffering is much worse than positive experience.
As a person who is seriously suffering, I consider this topic at least worth discussing. The thought that I could be in a situation where I cannot kill myself and won't get pain meds gives me serious anxiety. Yet this is pretty common. In most countries euthanasia is illegal and pain medicines are strictly controlled. A situation where you suffer terribly and cannot die is common. Normal people don't think about it often, until they do.
Based on my thoughts above, I feel like the suffering of wild and domesticated animals is something real. I am not sure why you think that by default we cannot even fathom the idea that we could end their suffering. I myself am not pro or contra, but I am happy that there are people who think about these topics.
As someone who doesn't identify with EA (but likes parts of it), I don't expect my opinion to be particularly persuasive to people who do identify more strongly with the movement, but I do think such a split would result in broader appeal and better branding. For example, I donate to GiveWell because I like its approach to global health & development, but I would not personally choose to donate to animal welfare or existential risk causes, and I would worry that supporting EA more generically would support causes that I don't want to support.
To some extent, I think EA-affiliated groups like GiveWell already get a lot of the benefit of this by having a separate-from-EA identity that is more specific and focused. Applying this kind of focus on the movement level could help attract people who are on board with some parts of EA but find other parts weird or off-putting. But of course deciding to split or not depends most of all on the feelings and beliefs of the people actually doing the work, not on how the movement plays to people like me.
I agree that there should be a movement split. I think the existential-risk/AI-doomerism subset of EA is definitely less appealing to the general public and attracts a niche audience, compared to the effective charity subset, which is more likely to be generally accepted by pretty much anybody of all backgrounds. If we agree that we should try to maximize the number of people who are involved in at least one of the causes, then when the movement is associated with both causes, many people who would've been interested in effective charitable giving will be driven away by the existential risk stuff.
My first thought was "Yes, I think such a split would be an excellent thing."
My second thought is similar, but with one slight concern: I think that the EA movement probably benefits from attracting and being dominated by blueish-grey thinkers; I have a vague suspicion that such a split would result in the two halves becoming pure blue and reddish-grey respectively, and I think a pure blue Effective Charity movement might be less effective than a more ruthlessly data-centric bluish-grey one.
Yes, a pure blue Effective Charity movement would give you more projects like the hundreds of millions OpenPhil spent on criminal justice, which they deemed ineffective but then spun off into its own thing.
I personally know four people who were so annoyed by AI doomers that they set out to prove beyond a reasonable doubt that there wasn't a real risk. In the process of trying to make that case, they all changed their mind and started working on AI alignment. (One of them was Eliezer, as he detailed in a LW post long ago.) Holden Karnofsky similarly famously put so much effort into explaining why he wasn't worried about AI that he realized he ought to be.
The EA culture encourages members to do at least some research into a cause in order to justify ruling it out (rather than mocking it based on vibes, like normal people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs pivoting to work on AI x-risk is one of the strongest meta-arguments for why you, dear reader, should give it a second thought.
This was also my trajectory ... essentially I believed that there were a number of not too complicated technical solutions, and it took a lot of study to realize that the problem was genuinely extremely difficult to solve in an airtight way.
I might add that I don't think most people are in a position to evaluate in depth and so it's unfortunately down to which experts they believe or I suppose what they're temperamentally inclined to believe in general. This is not a situation where you can educate the public in detail to convince them.
I'd argue in the opposite direction: one of the best things about the EA (as with the Rationalist) community is that it's a rare example of an in-group defined by adherence to an epistemic toolbox rather than by affiliation with specific positions on specific issues.
It is fine for there to be different clusters of people within EA who reach very different conclusions. I don't need to agree with everyone else about where my money should go. But it sure is nice when everyone can speak the same language and agree on how to approach super complex problems in principle.
I think this understates the problem. EA had one good idea (effective charity in developing countries), one mediocre idea (that you should earn to give), and then everything else is mixed; being an EA doesn't provide good intuitions any more than being a textualist does in US jurisprudence. I'm glad Open Phil donated to the early YIMBY movement, but if I want to support good US politics I'd prefer to directly donate to YIMBY orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and OpenAI events should be treated as broadly discrediting both the idea that EA is a well-run organization and the reliability of its current leadership. I think GiveWell remains a good organization for what it is (and I will continue donating to GiveDirectly), but while I might trust individuals that Scott is calling EA, I think the EA label is negative, in the way that I might like libertarians but not people using the Libertarian label.
I think EA is great and this is a great post highlighting all the positives.
However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EAers were wooed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples, as they highlight blind spots in the community that the community doesn't seem to be confronting.
And maybe that's unfair. But EA signed up to be held to a higher standard.
I didn't sign up to be held to a higher standard. Count me in for team "I have never claimed to be better at figuring out whether companies are frauds than Gary Gensler and the SEC". I would be perfectly happy to be held to the same ordinary standard as anyone else.
I'm willing to give you SBF but I don't see how the castle thing holds up. There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car while actually living in a mansion is an (unfair) microcosm of the castle thinking.
I don’t really get the issue with the castle thing. An organization dedicated to marketing EA spent a (comparatively) tiny amount of money on something that will be useful for marketing. What exactly is hypocritical about that?
It's the optics. It looks ostentatious, like you're not really optimizing for efficiency. Sure, they justified this on grounds of efficiency (though I have heard it questioned whether being on the hook for the maintenance of a castle really is cheaper than just renting venues when you need them), but surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
1. Poor optics isn’t hypocrisy. That is still just a deeply unfair criticism.
2. Taking effectiveness seriously involves putting effectiveness above optics in some cases. The problem with many non-effective charities is that they are too focused on optics.
3. Some of the other EA “scandals” make it very clear that it doesn’t matter what you do: some people will hate you regardless. Why would you sacrifice effectiveness for maybe (but probably not) improving your PR, given the number of constraints?
You can't separate optics from effectiveness, since effectiveness is dependent on optics. Influence is power, and power lets you be effective. The people in EA should know this better than anyone else.
See, I think EA shows a lack of common sense, and this comment is an example. It's true that no matter what you do some people will hate you, but if you buy a fucking castle *everybody's* going to roll their eyes. It's not hard to avoid castles and other things that are going to alienate 95% of the public. And you have to think *some* about optics, because it interferes with the effectiveness of the organization if 95% of the public distrusts it.
EA's disdain for "optics" is part of what drew me to it in the first place. I was fed up with charities and policymakers who cared far more about being perceived to be doing something than about actually doing good things.
Where do you draw the line? If EAs were pursuing smooth interactions with normies, they would also be working on the stuff normies like.
Also, idk, maybe the castle was more expensive than previously thought. Good on paper, bad in practice. So, no one can ever make bad investments? Average it in with other donations and the portfolio performance still looks great. It was a foray into cost-saving real estate. To the extent it was a bad purchase, maybe they won't buy real estate anymore, or will hire people who are better at it, or what have you. The foundation that bought it will keep donating for, most likely, decades into the future. Why can't they try a novel donor strategy and see if it works? For information value. Explore what a good choice might be asap, then exploit/repeat/hone that choice in the coming years. Christ, *everyone* makes mistakes and tries things given decent reasoning. The castle had decent reasoning. So why are EAs so rarely allowed to try things, without getting a fingerwag in response?
Look at default culture not EA. To the extent EAs need to play politics, they aren't the worst at it (look at DC). But donors should be allowed to try things.
I don't know, I feel like if there had been a single pragmatic person in the room when they proposed to buy that castle, the proposal would have been shot down. But yes, I do agree that ultimately, you have to fuck around and find out to find what works, so I don't see the castle as invalidating of EA, it's just a screw up.
Didn’t the castle achieve good optics with its target demographic though? The bad optics are just with the people who aren’t contributing, which seems like an acceptable trade-off
> surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
If the normies you're trying to pursue smooth interactions with include members of the British political and economic Establishment, "come to our conference venue in a repurposed country house" is absolutely the way to go.
I think you're overestimating how much the castle thing affects interactions with normies. It was a small news story and I bet even the people who read it at the time have mostly forgotten it by now. I estimate that if a random person were to see a donation drive organized by EAs today the chance that their donation would be affected by the castle story is <0.01%
It's hard to believe that a castle was the optimum (all things considered; no one is saying EA should hold meetings in the cheapest warehouse). The whole pitch of the group is looking at things rationally, so if they fail at one of the most basic things like choosing a meeting location, and there's so little pushback from the community, then what other things is the EA community rationalizing invalidly?
And if we were to suppose that the castle really was carefully analyzed and evaluated validly as at- or near-optimal, then there appears to be a huge blindspot in the community about discounting how things are perceived, and this will greatly impact all kinds of future projects and fund-raising opportunities, i.e. the meta-effectiveness of EA.
Have you been to the venue? You keep calling it "a castle", which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space, similar to renting a nice hotel. It is far from the most luxurious of hotels; it is more like a homey version of the level of hotel you would run an event in. They considered different venues (as others said, this is explained in other articles) and settled on this one due to price/quality/location and other considerations.
Quick test: if the venue appreciated in value and can now be sold for twice the money, making this a net-positive investment which they can in a pinch use to fund a response to a really important crisis, and they do that - does that make the purchase better? If renting it out per year makes full financial sense, and other venues would have been worse - are you now convinced?
If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
No, and it doesn't matter. EA'ers such as Scott have referred and continue to refer to it as a castle, so it must be sufficiently castle-like and that's all that matters as it impacts the perception of EA.
> They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Those other considerations could have included a survey of how buying a castle would affect perceptions of EA and potential donors. This is a blindspot.
> If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
Also indirectly answering your other questions -- I don't care about the castle. I'm rational enough to not care. What I care about is the perception of EA and the fact that EA'ers can't realize how bad the castle looks and how this might impact their future donations and public persona. They could have evaluated this rationally with a survey.
Why wouldn't a castle be the optimal building to purchase? It is big, with many rooms, and due to the lack of modern amenities it is probably cheaper than buying a more recently built conference-center-type building. Plus, more recently built buildings tend to be in more desirable locations where land itself is more expensive. I think you're anchoring your opinion way too much on "castle = royalty".
So far it's been entirely negative for marketing EA, isn't in use (yet), isn't a particularly convenient location, and the defenders of the purchase even said they bought the castle because they wanted a fancy old building to think in.
So the problem with the castle is not the castle itself it's that it makes you believe the whole group is hypocritical and ineffective? But isn't that disproved by all the effective actions they take?
Not me. I don't care about the castle. I'm worried about public perceptions of EA and how it impacts their future including donations. Perceptions of profligacy can certainly overwhelm the effective actions. Certain behaviors have a stench to lots of humans.
I think the only rational way to settle this argument would be for EA to run surveys of the impact on perceptions of the use of castles and how that could impact potential donors.
Imagine an Ivy League university buys a new building, then pays a hundred thousand dollars extra to buy a lot of ivy and drape it over the exterior walls of the building. The news media covers the draping expenditure critically. In the long term, would the ivy gambit be positive or negative for achieving that university's goals of cultivating research and getting donations?
I don't know. Maybe we need to do one of those surveys that you're proposing. But I would guess that it's the same answer for the university's ivy and CEA's purchase of the miniature castle.
The general proposal I'm making: if we're going to talk about silly ways of gaining prestige for an institution, let's compare like with like.
All I can write at this point is that it would be worth a grant to an EA intern to perform a statistically valid survey of how EA using a castle impacts the perception of EA and potential future grants. Perhaps have one survey of potential donors, another of average people, and include questions for the donors about how the opinions of average people might impact their donations.
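For what it's worth, the statistical design for such a survey is cheap to sketch. Here's a minimal back-of-envelope sample-size calculation, assuming a two-group design; the baseline willingness-to-donate rate and the size of the hypothetical "castle effect" are made-up assumptions, not data:

```python
import math

def two_proportion_sample_size(p1, p2):
    """Approximate respondents per group needed to detect a gap between
    proportions p1 and p2 at 95% confidence with 80% power."""
    z_alpha = 1.96   # two-sided 95% confidence
    z_beta = 0.84    # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical effect: 30% of respondents say they'd consider donating when EA is
# described without mentioning the venue purchase, vs. 25% when it is mentioned.
print(two_proportion_sample_size(0.30, 0.25))  # roughly 1,250 respondents per group
```

So detecting even a five-point swing cleanly would take a couple of thousand respondents, which is well within the budget of a small grant.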
Yes, I read your points and understand them. I find them wholly unconvincing as far as the potential impacts on how EA is perceived (personally, I don't care about the castle).
EAs have done surveys of regular people about perceptions of EA - almost no one knows what EA is.
Donors are wealthy people, many of whom understand the long-term value of real estate.
I like frugality a lot. But I think people who are against a conference host investing in the purchase of their own conference venue are not thinking from the perspective of most organizations or donors.
I.e., it's an average sort of thing that lots of other organisations would do. But EA is supposed to be better. (I don't have anything against EA particularly, but this is a pattern I keep noticing -- something or someone is initially sold as being better, then defended as being not-worse.)
We should learn to ignore the smell of hypocrisy. There are people who like to mock the COP conferences because they involve flying people to the Middle East to talk about climate change. But those people haven’t seriously considered how to make international negotiations on hard topics effective. Similarly, some people might mock buying a conference venue. But those people haven’t seriously thought about how to hold effective meetings over a long period of time.
On that front, EA sometimes has a (faux?) humble front to it, and that's part of where the hypocrisy comes from. I think that came in the early days, people so paralyzed by optics and effectiveness that they wouldn't spend on any creature comforts at all. Now, perhaps they've overcorrected, and spend too much on comforts to think bigger thoughts.
But if they want to stop caring about hypocrisy, they should go full arrogant, yes we're better and smarter than everyone else and we're not going to be ashamed of it. Take the mask off and don't care about optics *at all*. Let's see how that goes, yeah?
People don't mock buying a venue, they mock buying a *400 year old castle* for a bunch of nerds that quite famously don't care about aesthetics.
Re: "should I care about perception?", I think "yes" and "no" are just different strategies. Cf. the stock market. Whereas speculators metagame the Keynesian Beauty Contest, buy-&-hold-(forever) investors mostly just want the earnings to increase.
This type of metagaming has upsides, in that it can improve your effectiveness, ceteris paribus. This type of metagaming also has downsides, in that it occasionally leads to an equilibrium where everyone compliments the emperor's new clothes.
My impression is that EA is by definition supposed to be held to a higher standard. It's not just plain Altruism like the boring old Red Cross or Doctors Without Borders, it's Effective Altruism, in that it uses money effectively and more effectively than other charities do.
I don't see how that branding/stance doesn't come with an onus for every use of funds to stand up to scrutiny. I don't think it's fair to say that EA sometimes makes irresponsible purchases but should be excused because on net EA is good. That's not a deal with the devil; it's mostly very good charitable work with the occasional small, castle-sized deal with the devil. That seems to me like any old charitable movement, and not in line with the 'most effective lives per dollar' thesis of EA.
I can barely comprehend the arrogance of a movement that has in its literal name a claim that they are better than everyone else (or ALL other charities at least), that routinely denigrates non-adherents as "normies" as if they're inferior people, that has members who constantly say without shame or irony that they're smarter than most people, that they're more successful than most people (and that that's why you should trust them), that is especially shameless in its courting of the rich and well-connected compared to other charities and groups...having the nerve to say after a huge scandal that they never claimed a higher standard than anyone else.
Here's an idea. Maybe, if you didn't want to be held to a higher standard than other people, you shouldn't have *spent years talking about how much better you are than other people*.
I think you're misunderstanding EA. It did not create a bunch of charities and then shout "my charities are the effectivest!" EA started when some people said "which jobs/charities help the world the most?" and nobody had seriously tried to find the answers. Then they seriously tried to find the answers. Then they built a movement for getting people and money sent where they were needed the most. The bulk of these charities and research orgs *already existed*. EA is saying "these are the best", not "we are the best".
And - I read you as talking about SBF here? That is not what people failed at. SBF was not a charity that people failed to evaluate well. SBF was a donor who gave a bunch of money to the charities and hid his fraud from EAs and customers and regulators and his own employees.
I have yet to meet an EA who frequently talks about how they're smarter, more successful, or generally better than most people. I think you might be looking at how some community leaders think they need to sound really polished, and overinterpreting?
Now I have seen "normies" used resentfully, but before you resent people outside your subculture you have to feel alienated from them. The alienation here comes from how it seems really likely that our civilization will crash in a few decades. How if farm animals can really feel then holy cow have we caused so much pain. How there's 207 people dying every minute- listen to Believer by Imagine Dragons, and imagine every thump is another kid, another grandparent. It's an goddamn emergency, it's been an emergency since the dawn of humanity. And we can't fix all of it, but if a bunch of us put our heads together and trusted each other and tried really hard, we could fix so much... So when someone raised a banner and said "Over here! We're doing triage! These are the worst parts we know how to fix!", you joined because *duh*. Then you pointed it out to others, and. Turns out most people don't actually give a shit.
That's the alienation. There's lots of EAs who aren't very smart or successful at all. There's lots of people who get it, and have been triaging the world without us and don't want to join us. This isn't alienating. Alienation comes from normies - many of them smarter and more successful - who don't care. Or who are furious your post implied an art supply bake sale isn't just as important as the kids with malaria. It doesn't make people evil that they don't experience that moment of *duh*, but goddamn do I sometimes feel like we're from different planets.
That was a good comment, and mine above was too angry I think. I'm starting to think everyone's talking about very different things with the same words. This happens a lot.
First, I'm a bit sceptical of the claim that, before EA, nobody was evaluating charity effectiveness. This *feels* like EA propaganda, and I'm *inclined* to suspect that EA's contribution was at least as much "more utilitarian and Bayesian evaluation" as "more evaluation". BUT I have no knowledge of this whatsoever and it has nothing to do with my objection to EA, so I'm happy to concede that point.
Second, regarding SBF my main issue is with the morality of "earning to give" and its very slippery slope either straight to "stealing to give" or to "earning to give, but then being corrupted by the environment and lifestyle associated with earning millions, and eventually earning and stealing to get filthy rich". Protestations that EAs never endorsed stealing, while I accept they're sincere, read a bit too much like "will no one rid me of this troublesome priest?" It's important for powerful people to avoid endorsing principles that their followers might logically take to bad ends, not just avoid endorsing the bad ends themselves. (Or at least, there's an argument that they should avoid that, and it's one that's frequently used to lay blame on other figures and groups.)
Third, regarding "normies", I don't feel like I've seen it used to disparage "people who don't think kids with malaria are more important than the opera", or if I have not nearly as many times as it's used to disparage "people who think kids with malaria are important than space colonies and the singularity". I completely see the "different planets" thing, and this goes both ways. Lots of people don't care about starving children, and that's horrific. EAs of course are only a small minority of those who *do* care, effectiveness notwithstanding. On the other hand, this whole "actual people suffering right now need to be weighed against future digital people" is so horrific, so terrifying, so monstrous that I'm hoping it's a hoax or something. But I haven't seen anyone deny that many EAs really do think like that. In a way, using the rescources and infrastructure (if not the actual donations) set up for global poverty relief, to instead make digital people happen faster, is much worse than doing nothing at all for poverty relief to begin with (since you're actively diverting resources from it). So we could say "global health EAs" are on one planet, "normies" are on a second planet, and "longtermist EAs" are on a third planet, and the third looks as evil to the second as the second does to the first.
Fwiw, charity evaluation existed before EA, but it was almost entirely infected by Goodhart's law: charity evaluators measured *overhead*, not impact. A charity which claimed to help minorities learn STEM skills by having them make shoes out of cardboard and glue as an afterschool program (because everyone knows minorities like basketball shoes, and designing things that require measurements is kind of like STEM) would have been rated very, very highly if it kept overhead low and actually spent all of the money on its ridiculous program, but the actual impact of the program wouldn't factor into it at all. I use this example because it's something I actually saw in real life.
These evaluators served an important purpose in sniffing out fraud and the kind of criminal incompetence that destroys most charities, but clearly something was missing, and EA filled in what was missing.
TBC, you're replying to a comment about whether individual EAs should be accountable for many EA orgs taking money from SBF. I do not think that "we try to do the most good, come join us" is branding that carries an onus for you, as an individual, to run deep financial investigations on your movement's donors.
But about the "castle", in terms of onuses on the movement as a whole- That money was donated to Effective Ventures for movement building. Most donations given *under EA* go to charities and research groups. Money given *directly to EV* is used for things like marketing and conferences to get more people involved in poverty, animal, and x-risk areas. EV used part of their budget to buy a conference building near Oxford to save money in the long run.
If the abbey was not the most effective way to get a conference building near Oxford, or if a conference building near Oxford was not the most effective way to build the movement, or if building the movement is not an effective way to get more good to happen, then this is a way that EA fell short of its goal. Pointing out failures is not a bad thing. (Not that anyone promised zero mistakes ever. The movement promised thinking really hard and doing lots of research, not never being wrong.) If it turns out that the story we heard is false and Rob Wiblin secretly wanted to live in a "castle", EA fell short of its goal due to gross corruption by one of its members, which is worth much harsher criticism.
In terms of the Red Cross, actually yes. Even if we found out 50% of all donor money was being embezzled for "castles", EA would still be meeting its goal of being more effective than just about any major charity organization. EA donation targets are more than twice as cost effective as Red Cross or DWB.
Hold EA to the higher standard, but if you're going to criticize the castle purchase, you'd better be prepared to explain how to host a series of meetings and conferences on various topics without spending a lot more money.
I think your assumption is wrong, though: "any old charitable movement" is not about as effective as one that spends the vast majority of its funds on carefully chosen interventions, buys a castle once, and falls for one snake-oil salesman. My impression is that most charitable movements accomplish very little, so it is quite easy to be more effective than them. And until another movement comes along that is more effective than EA at saving lives, I'll continue thinking that.
A lot of people ignore it, but I continue to find the "Will MacAskill mentored SBF into earn to give" connection the problem there. No one can always be a perfect judge of character, but it was a thought experiment come to life. It says... *something* about the guardrails and the culture. It's easy to take it as saying too much, to be sure many people do, but it's also easy to ignore what it says entirely.
I recognize broader-EA has (somewhat) moved away from earning to give and that the crypto boom that enabled SBF to be a fraud of that scale was (probably) a once in a lifetime right-place right-time opportunity for both success and failure. Even so.
In point of fact, you all are being held to the ordinary standard. Public corruption leads to public excoriation, and "but look at the good we do" is generally seen as a poor defense until a few years later when the house is clearly clean. That is the ordinary standard.
I think EA signed up to be held to the standard "are you doing the most good you can with the resources you have". I do not think it signed up to be held to the standard "are you perceived positively by as many people as possible". Personally I care a lot more about the first standard, and I think EA comes extremely impressively close to meeting it.
Sure, but go Meta-Effectiveness and consider that poor rhetoric and poor perception could mean fewer resources for the actions that really matter. A few more castle debacles and the cost for billionaires being associated with EA may cross a threshold.
Castle != cost-effective. And perceptions of using castles, and blindness to how bad this looks, could have massive long-term impacts on fund-raising.
I don't understand why this is so complicated. It doesn't matter how tiny the cost of the castle has been relative to all resources spent. It's like a guy who cheated on a woman once. Word gets around. And when the guy says, "Who _cares_ about the cheating! Look at all the wonderful other things I do," then it looks even worse. Just say, "Look, we're sorry, and we're selling the castle, looking for a better arrangement, and starting a conversation about how to avoid such decisions in the future."
Yeah, I was just now trying to run figures about increased persuasiveness toward government officials and rich people, to see what the break-even would have to be.
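For what it's worth, here is a minimal sketch of that kind of break-even arithmetic in Python, where every figure (purchase price, resale fraction, upkeep, rental savings, horizon) is a placeholder assumption rather than a real number:

```python
# Rough break-even sketch for a conference-venue purchase.
# Every number below is a placeholder assumption, not a real figure.

purchase_price = 15_000_000   # assumed purchase cost
resale_fraction = 0.8         # assumed share of the price recoverable on resale
annual_upkeep = 300_000       # assumed maintenance/staffing per year
rental_savings = 250_000      # assumed yearly savings vs. renting comparable venues
years = 10                    # assumed evaluation horizon

sunk_capital = purchase_price * (1 - resale_fraction)  # capital you can't get back
net_annual_cost = annual_upkeep - rental_savings       # running cost net of savings
total_cost = sunk_capital + net_annual_cost * years

# Break-even question: how much extra effective giving per year would the venue
# have to generate (via donors and recruits persuaded at events held there)?
breakeven_per_year = total_cost / years
print(f"Break-even: about ${breakeven_per_year:,.0f} of extra impact per year")
```

Under those made-up inputs the venue would need to produce roughly $350,000 per year of additional impact to wash out; the real question is how the actual numbers compare.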
Given the obvious difference in intuitions on how to discount the perceptions of profligacy, as proposed in another response to Scott, I think the only way to actually resolve this is to conduct a survey.
I understand your model is that the abbey was a horrid investment and a group that holds itself out as a cost-effectiveness charity, but also makes horrid investments, should lose credibility and donors.
No one disagrees with that premise.
I disagree that it was a horrid investment, based on the info they had at the time.
So, I don’t see a loss of credibility there.
Others will disagree that CEA/EV is primarily a cost-effectiveness charity.
It looks pretty good to people who think castles are cool, and don't really care much about austerity or poor people or math. There are staggering numbers of such people, some of whom are extremely rich, and EA might reasonably have difficulty extracting money from them without first owning a castle.
Unless people set out with a vendetta to destroy EA, the castle will be forgotten as a reputational cost, but will still be effective at hosting meetings. And if people do set out with a vendetta to destroy EA, it’s unlikely the castle thing is the only thing they could use this way.
The community by its nature has those blind spots. Their whole rallying cry is "Use data and logic to figure out what to support, instead of what's popular". This attracts people who don't care for, or aren't good at, playing games of perception. This mindset is great at saving the most lives with the least amount of money; it's not as good for PR or boardroom politics.
Right, but they could logically evaluate perceptions using surveys. That raises the question: what other poor assumptions are they making that they're not applying rationalism to?
I do wonder if the "castle" thing (it's not a castle!) is just "people who live in Oxford forget that they're in a bubble, and people who've never been to Oxford don't realise how weird it is". If you live in Oxford, which has an *actual* castle plus a whole bunch of buildings approaching a thousand years old, or if you're at all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say "Yep, looks like a solid choice. Wait, you want a *modern* building? Near *Oxford*? Do you think we have infinite money, and infinite time for planning applications?"
The word "castle" can be a bit misleading. They (or the ones in the UK) aren't all huge drafty stone fortresses. Many, perhaps most, currently habitable and occupied ones differ little from normal houses, but maybe have a somewhat more squat and solid appearance and a few crenellated walls here and there. I don't know what Castle EA looks like though! :-)
Edit: I did a quick web search, and the castle in question is called Chateau Hostacov and is in Bohemia, which is roughly the western half of the Czech Republic. (I don't do silly little foreign accents, but technically there is an inverted tin hat over the "c" in "Hostacov").
It cost all of $3.5M, which would just about buy a one-bedroom apartment in Manhattan or London. So not a bad deal, especially considering it can be (and, going by its website, is being) used as a venue for other events such as conferences and weddings and vacations etc.
Yeah, this is where I end up on it as well. To the extent that it helps people give more effectively, it's been a great thing.
It does go a bit beyond merely annoying though. I think something that Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much like roles as priests etc. have done in the past. The average person should be wary of people smarter than them telling what to do with their money.
The only durable protection from scammers is a measurable outcome. That's part of why I think EA is only effective when it focuses on things that can be measured. The meat of the improvement in EA is moving money from frivolous luxury to measurable charity, not moving measurable charity to low probability moonshots.
Are you saying that the specific impact calculations that orgs like GiveWell do are incorrect, or are you just claiming epistemic learned helplessness (https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/)?
I mean, GiveDirectly is a top charity on Givewell, are you claiming that showering poor people in money to the tune of .92 per dollar still produces a lot of transaction cost?
This, I think, is an interesting take.
Is your thought here that transaction costs are implicit and thus not properly priced in to the work done? I think at the development economics level that is not terribly true. The transaction costs of poverty relief in urban USA vs the poverty relief in San Salvador are not terrible different once the infrastructure in question is set up.
"Compared to what" is my question.
Everything has transaction costs. Other opportunities have similar transaction costs. I would be surprised if they didn't. However, I agree I would like to see this argued explicitly somewhere.
Isn't this just the old paradox where you go:
- Instead of spending an hour studying, you should spend a few minutes figuring out how best to study, then spend the rest of the time studying
- But how long should you spend figuring out the best way to study? Maybe you should start by spending some time figuring out the best balance between figuring out the right way to study, and studying
- But how long should you spend on THAT? Maybe you should start by spending some time figuring out the best amount of time to spend figuring out the best amount of time to spend figuring out . . .
- ...and so on until you've wasted the whole hour in philosophical loops, and therefore you've proven it's impossible to ever study, and even trying is a net negative.
In practice people just do a normal amount of cost-benefit analysis which costs a very small portion of the total amount of money donated.
Centralizing and standardizing research into which charities do exactly what (so the results can then be easily checked against any given definition of "effectiveness") reduces transaction costs by eliminating a lot of what would otherwise be needlessly duplicated effort.
Good list.
A common sentiment right now is “I liked EA when it was about effective charity and saving more lives per dollar [or: I still like that part]; but the whole turn towards AI doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the philosophy/movement/community? Should the movement split into an Effective Charity movement and an Existential Risk movement? (I mean more formally than has sort of happened already)
I'm probably below the average intelligence of people who read scott but that's essentially my position. AI doomerism is kinda cringe and I don't see evidence of anything even starting to be like their predictions. EA is cool because instead of donating to some charity that spends most their money on fundraising or whatever we can directly save/improve lives.
Which "anything even starting to be like their predictions" are you talking about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually takes weeks/months after release before they stop saying things that embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live example of a rogue AI, that's choosing not to prevent the first few rogue AIs (it will take some time to notice the first rogue AI and react, during which time more may be made). In turn, that's some chance of human extinction, because it is not obvious that those first few won't be able to kill us all. It is notably easier to kill all humans (as a rogue AI would probably want) than it is to kill most humans but spare some (as genocidal humans generally want). The classic example is putting together a synthetic alga that isn't digestible, doesn't need phosphate, and has a more efficient carbon-fixing enzyme than RuBisCO. It would promptly bloom over all the oceans, pull down all the world's CO2 into useless goo on the seafloor, and cause total crop failure alongside a cold snap - and it takes all of one laboratory and some computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and I'd rather not take it.
> Most "AIs will never do this" benchmarks have fallen (beat humans at Go, beat CAPTCHAs, write text that can't be easily distinguished from human, drive cars)
I concur on beating Go, but CAPTCHAs were never thought to be unbeatable by AI - it's more that they make robo-filling forms rather expensive. Writing text also never seemed that doubtful, and driving cars, at least to the extent they can at the moment, never seemed unlikely.
This would have been very convincing if anyone like Patrick had given timelines for the earliest point at which they expected each advance to happen, so we could examine whether their intuitions here are calibrated. Because the fact is, if you asked most people, they definitely would not have expected art or writing to fall before programming. Basically only gwern is sinless.
On the other hand, EY has consistently refused to make measurable predictions about anything, so he can't claim credit in that respect either. To the extent you can infer his expectations from earlier writing, he seems to have been just as surprised as anyone, despite notionally being an expert on AI.
1. No one mentioned Eliezer. If Eliezer is wrong about timelines, that doesn't mean we suddenly exist in a slow-takeoff world. And it's basically a bad-faith argument to imply that Eliezer getting surprised *in the direction of capabilities getting better than expected* is apparently evidence of non-doom.
2. Patrick is explicitly saying that he sees no evidence. Insofar as we can use Patrick's incredulity as evidence, it would be worth far more if it was calibrated and informed rather than uncalibrated. AI risk arguments depend on more things than just incredulity, so the """lack of predictions""" matters relatively less. My experience has been that people who use their incredulity in this manner in fact do worse at predicting capabilities, hence why getting disproven would be encouraging.
3. I personally think that by default we cannot predict what the rate of change is, but I can lie lazily on my hammock and predict "there will be increases in capability barring extreme calamity" and essentially get completely free prediction points. If you do believe that we're close to a slowdown, or we're past the inflection point of a sigmoid and that my priors about progress are wrong, you can feel free to bet against my entirely ignorant opinion. I offer up to 100 dollars at ratios you feel are representative of slowdown, conditions and operationalizations tbd.
4. If you cared about predictive accuracy, gwern did the best and he definitely believes in AI risk.
"write text that can't be easily distinguished from human"? Really?
*None* of the examples I've seen measure up to this, unless you're comparing it to a young human that doesn't know the topic but has some measure of b*sh*tting capability - or rather, thinks he does.
Maybe I need to see more examples.
Yeah there are a bunch of studies now where they give people AI text and human text and ask them to rate them in various ways and to say whether they think it is a human or AI, and generally people rate the AI text as more human.
The examples I've seen are pretty obviously talking around the subject, when they don't devolve into nonsense. They do not show knowledge of the subject matter.
Perhaps that's seen as more "human".
I think that if they are able to mask as human, this is still useful, but not for the ways that EA (mostly) seems to think are dangerous. We won't get advances in science, or better technology. We might get more people falling for scammers - although that depends on the aim of the scammer.
Scammers that are looking for money don't want to be too convincing because they are filtering for gullibility. Scammers that are looking for access on the other hand, do often have to be convincing in impersonating someone who should have the ability to get them to do something.
But Moore's law is dead. We're reaching physical limits, and under these limits, it already costs millions to train and execute a model that, while impressive, is still multiple orders of magnitude away from genuinely dangerous superintelligence. Any further progress will require infeasible amounts of resources.
Moore's Law is only dead by *some* measures, as has been true for 15-20 years. The limiting factors for big ML are mostly inter-chip communications, and those are still growing aggressively.
Also, algorithms are getting more efficient.
This is one of the reasons I'm not a doomer, which is that most doomers' mechanism of action for human extinction is biological in nature, and most doomers are biologically illiterate.
RuBisCO is known to be pretty awful as carboxylases go. PNA + protein-based ribosomes avoids the phosphate problem.
I'm not saying it's easy to design Life 2.0; it's not. I'm saying that with enough computational power it's possible; there clearly are inefficiencies in the way natural life does things because evolution likes local maxima.
You're correct on the theory; my point was that some people assume that computation is the bottleneck rather than actually getting things to work in a lab within a reasonable timeframe. Not only is wet lab challenging, I also have doubts as to whether biological systems are computable at all.
I think the reason that some people (e.g. me) assume that computation* is the bottleneck is that IIRC someone actually did assemble a bacterium (of a naturally-existing species) from artificially-synthesised biomolecules in a lab. The only missing component to assemble Life 2.0 would then seem to be the blueprint.
If I'm wrong about that experiment having been done, please tell me, because yeah, that's a load-bearing datum.
*Not necessarily meaning "raw flops", here, but rather problem-solving ability
Much like I hope for more people to donate to charity based on the good it does rather than based on the publicity it generates, I hope (but do not expect) that people decide to judge existential risks based on how serious they are rather than based on how cringe they are.
Yeah this is where I am. A large part of it for me is that after AI got cool, AI doomerism started attracting lots of naked status seekers and I can't stand a lot of it. When it was Gwern posting about slowing down Moore's law, I was interested, but now it's all about getting a sweet fellowship.
Is your issue with the various alignment programs people keep coming up with? Beyond that, it seems like the main hope is still to slow down Moore's law.
My issue is that the movement is filled with naked status seekers.
FWIW, I never agreed with the AI doomers, but at least older EAs like Gwern I believe to be arguing in good faith.
Interesting, I did not get this impression but also I do worry about AI risk - maybe that causes me to focus on the reasonable voices and filter out the non-sense. I'd be genuinely curious for an example of what you mean, although I understand if you wouldn't want to single out anyone in particular.
I don’t mind naked status seeking as long as people do it by a means that is effective at achieving good ends for the world. One can debate whether AI safety is actually effective, but if it is, EAs should probably be fine with it (just like the naked cash seekers who are earning to give).
I agree. But there seem to be a lot of people in EA with some serious scrupulosity going on. Like that person who said they would like to donate a kidney, but could not bear the idea that it might go to a meat-eater, and so the donor would be responsible for all the animal suffering caused by the recipient. It's as though EA is, for some people, a refuge from ever feeling they've done wrong -- as though that's possible!
What’s wrong with naked status seekers (besides their tendency to sometimes be counterproductive if advancing the cause works against their personal interests)?
It's bad when the status seeking becomes more important than the larger purpose. And at the point when it gets called "naked status seeking", it's already over that line.
They will only do something correct if it advances their status and/or cash? To the point of not researching or approving research into something if it looks like it won't advance them?
They have to be bribed to do the right thing?
How do you identify naked status seekers?
Hey now I am usually clothed when I seek status
It usually works better, but I guess that depends on how much status-seeking is done at these EA sex parties I keep hearing about...
Sounds like an isolated demand for rigor
Definitely degree of confidence plays into it a lot. Speculative claims where it's unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely different ball game from "I notice that we claim to care about saving lives, and there's a proverbial $20 on the ground if we make our giving more efficient."
I think it also helps that those shorter-term impacts can be more visible. A malaria net is a physical thing that has a clear impact. There's a degree of intuitiveness there that people can really value.
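To make the point about degree of confidence concrete, here is a toy expected-value comparison between a measurable intervention and a speculative one under the two probability guesses mentioned above; the per-life cost, the budget, and the assumed risk reduction are all illustrative assumptions, not real estimates:

```python
# Toy expected-value comparison under the "0.00001% or 1%" uncertainty above.
# Every number is an illustrative assumption, not a real estimate.

budget = 10_000_000                  # assumed grant budget, in dollars
cost_per_life = 5_000                # assumed cost per life saved by a measurable charity
measurable_lives = budget / cost_per_life            # 2,000 lives

world_population = 8_000_000_000
relative_risk_reduction = 0.01       # assume the budget shaves 1% off the catastrophe risk

for p in (1e-7, 1e-2):               # 0.00001% vs. 1%
    speculative_lives = p * relative_risk_reduction * world_population
    print(f"p={p:g}: measurable ~{measurable_lives:,.0f} lives, "
          f"speculative ~{speculative_lives:,.0f} expected lives")
# At p=1e-07 the speculative bet looks ~250x worse; at p=0.01 it looks ~400x better.
```

Which is exactly why the two cases feel like different ball games: the ranking flips entirely depending on which end of that uncertainty range you believe.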
Most AI-risk–focused EAs think the likelihood of the bad outcome is greater than 10%, not less than 1%, fwiw.
And that's the reason many outsiders think they lack good judgment.
And yet, what exactly is the argument that the risk is actually low?
I understand and appreciate the stance that the doomers are the ones making the extraordinary claim, at least based on the entirety of human history to date. But when I hear people pooh-poohing the existential risk of AI, they are almost always pointing to what they see as flaws in some doomer's argument -- and usually missing the point that the narrative they are criticizing is just a plausible example of how it might go wrong, intended to clarify and support the actual argument, rather than the entire argument itself.
Suppose, for the sake of argument, that we switch it around and say that the null hypothesis is that AI *does* pose an existential risk. What is the argument that it does not? Such an argument, if sound, would be a good start toward an alignment strategy; contrariwise, if no such argument can be made, does it not suggest that at least the *risk* is nonzero?
I find Robin Hanson's arguments here very compelling: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
It's weird that you bring up Robin Hanson, considering that he expects humanity to be eventually destroyed and replaced with something else, and sees that as a good thing. I personally wouldn't use that as an argument against AI doomerism, since people generally don't want humanity to go extinct.
What specific part of Robin Hanson's argument on how growth curves are a known thing do you find convincing?
That's the central intuition underpinning his anti-foom worldview, and I just don't understand how someone can generalize that to something which doesn't automatically have all the foibles of humans. Do you think that a population of people who have to sleep, eat, and play would be fundamentally identical to an intelligence that is differently constrained?
I'm not seeing any strong arguments there, in that he's not making arguments of the form "here is why that can't happen", but instead arguments of the form "if AI is like <some class of thing that's been around a while>, then we shouldn't expect it to rapidly self-improve/kill everything, because that other thing didn't".
E.g. if superintelligence is like a corporation, it won't rapidly self-improve.
Okay, sure, but there are all sorts of reasons to worry superintelligent AGI won't be like corporations. And this argument technique can work against any not-fully-understood future existential threat. Super-virus, climate change, whatever. By the anthropic principle, if we're around to argue about this stuff, then nothing in our history has wiped us out. If we compare a new threat to threats we've encountered before and argue that based on history, the new threat probably isn't more dangerous than the past ones, then 1) you'll probably be right *most* of the time and 2) you'll dismiss the threat that finally gets you.
I’ve been a big fan of Robin Hanson since there was a Web; like Hanania, I have a strong prior to Trust Robin Hanson. And I don’t have any real argument with anything he says there. I just don’t find it reassuring. My gut feeling is that in the long run it will end very very badly for us to share the world with a race that is even ten times smarter than us, which is why I posed the question as “suppose the null hypothesis is that this will happen unless we figure out how to avoid it”.
Hanson does not do that, as far as I can tell. He quite reasonably looks at the sum of human history and finds that he is just not convinced by doomers’ arguments, and all his analysis concerns strategies and tradeoffs in the space that remains. If I accept the postulate that this doom can’t happen, that recursive intelligence amplification is really as nonlumpy as Hanson suspects, then I have no argument with what he says.
But he has not convinced me that what we are discussing is just one more incremental improvement in productivity, rather than an unprecedented change in humans’ place in the world.
I admit that I don’t have any clear idea whether that change is imminent or not. I don’t really find plausible the various claims I have read that we’re talking about five or ten years. And I don’t want to stop AI work: I suspect AGI is a prerequisite for my revival from cryosuspension. But that just makes it all the more pressing to me that it be done right.
When ignoring the substance of the argument, I find its form to be something like a Pascal's wager bait-and-switch. If there is even a small chance you will burn in hell for eternity, why wouldn't you become Catholic? Such an argument fails for a variety of reasons, one being that it doesn't account for alternative religions and their probabilities, with alternative outcomes.
So I find I should probably update my reasoning toward there being some probability of x-risk here, but the probability space is pretty large.
One of the good arguments for doomerism is that the intelligences will be in some real sense alien: there is a wider distribution of possible ways to think than human intelligence covers, including how we consider motivation, and this could lead to paper-clip maximizers, or similar AI-Cthulhus of unrecognizable intellect. I fully agree that these might very likely be able to easily wipe us out. But there are many degrees of capability and motivation, and I don't see the reason to assume that, whether through a side effect of ulterior motivation or through direct malice, they lead to the certainty of extinction expressed by someone like Eliezer. There are many possibilities, and many are fraught. We should invest in safety and alignment. But that doesn't mean we should consider x-risk a certainty, and certainly not at double-digit likelihoods within short timeframes.
Comparative advantage and gains from trade says the more different from us they are, the more potential profit they'll see in keeping us around.
Yes, the space of possibilities (I think you meant this?) is pretty large. But x-risk is most of it. Most possible outcomes of optimisation processes over Earth and the Solar System have no flourishing humanity in them.
It is perhaps a lot like other forms of investment. You can't just ask "What's the optimal way to invest money to make more money?" because it depends on your risk tolerance. A savings account will give you 5%. Investing in a random seed-stage startup might make you super-rich but usually leaves you with nothing. If you invest in doing good then you need to similarly figure out your risk profile.
The good thing about high-risk financial investments is they give you a lot of satisfaction of sitting around dreaming about how you're going to be rich. But eventually that ends when the startup goes broke and you lose your money.
But with high-risk long-term altruism, the satisfaction never has to end! You can spend the rest of your life dreaming about how your donations are actually going to save the world and you'll never be proven wrong. This might, perhaps, cause a bias towards glamorous high-risk long-term projects at the expense of dull low-risk short-term projects.
Much like other forms of investment, if someone shows up and tells you they have a magic box that gives you 5% a month, you should be highly skeptical. Except replace %/month with QALYs/$.
I see your point, but simple self-interest is sufficient to pick up the proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may have a little bit of an analogous filter, but I doubt that it is remotely as strong.
The advantage of making these types of predictions is that even if someone says that the unflattering thing is not even close to what drives them, you can go on thinking "they're just saying that because my complete and perfect fantasy makes them jealous of my immaculate good looks".
Yeah I kinda get off the train at the longtermism / existential risk part of EA. I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
I like the malaria bed nets stuff because its easy to confirm that my money is being spent doing good. That's almost exactly the opposite when it comes to AI-risk. For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing? Has discovering RLHF which in turn powered ChatGPT and launched the AI revolution made AI-risk more or less likely? It almost feels like one of those Greek tragedies where the hero struggles so hard to escape their fate they end up fulfilling the prophecy.
I think he was pointing out that EAs have been a big part of the current AI wave. So whether you are a doomer or an accelerationist, you should agree that EA's impact has been large, even if you disagree with the sign.
Problem is, the OpenAI scuffle shows that right now, as AI is here or nearly here, the ones making the decisions are the ones holding the purse strings, and not the ones with the beautiful theories. Money trumps principle and we just saw that blowing up in real time in glorious Technicolor and Surround-sound.
So whether you're a doomer or an accelerationist, the EAs' impact is "yeah, you can re-arrange the deckchairs, we're the ones running the engine room" as things are going ahead *now*.
Not that I have anything against EAs, but, as someone who wants to _see_ AGI, who doesn't want to see the field stopped in its tracks by impossible regulations, as happened to civilian nuclear power in the USA, I hope that you are right!
I mean, if I really believed we'd get conscious, agentic AI that could have its own goals and be deceitful to humans and plot deep-laid plans to take over and wipe out humanity, sure I'd be very, very concerned and unhappy about this result.
I don't believe that, nor that we'll have Fairy Godmother AI. I do believe we'll have AI, an increasing adoption of it in everyday life, and it'll be one more hurdle to deal with. Effects on employment and jobs may be catastrophic (or not). Sure, the buggy whip manufacturers could shift to making wing mirrors for the horseless carriages when that new tech happened, but what do you switch to when the new tech can do anything you can do, and better?
I think the rich will get richer, as per usual, out of AI - that's why Microsoft etc. are so eager to pave the way for the likes of Sam Altman to be in charge of such 'safety alignment' because he won't get in the way of turning on the money-fountain with foolish concerns about going slow or moratoria.
AGI may be coming, but it's not going to be as bad or as wonderful as everyone dreads/hopes.
That's mostly my take too. But to be fair to the doomer crowd, even if we don't buy the discourse on existential risks, what this concern is prompting them to do is lots of research on AI alignment, which in practice means trying to figure out how AI works inside and how it can be controlled and made fit for human purposes. Which sounds rather useful even if AI ends up being on the boring side.
> but what do you switch to when the new tech can do anything you can do, and better?
Nothing -- you retire to your robot ranch and get anything you want for free. Sadly, I think the post-scarcity AGI future is still very far off (as in, astronomically so), and likely impossible...
I think that the impact of AGI is going to be large (even if superintelligence either never happens or the effect of additional smarts just saturates, diminishing returns and all that), provided that it can _really_ do what a median person can do. I just want to have a nice quiet chat with the 21st century version of a HAL-9000 while I still can.
> if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
Surely these are different skills? Someone who could predict and warn against the dangers of nuclear weapon proliferation and the balance of terror, might still have been blindsided by their spouse cheating on them.
Suppose Trump gets elected next year. Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
Also, nobody else seems to have noticed that Adam D'Angelo is still on the board of OpenAI, but Sam Altman and Greg Brockman aren't.
I hardly think that's a fair comparison. Climatologists are not in a position to control the outcome of a presidential election, but effective altruists controlled 4 out of 6 seats on the board of the company.
Of course, if you think that they played their cards well (given that D'Angelo is still on the board) then I guess there's nothing to argue about. I—and I think most other people—believe they performed exceptionally poorly.
The people in the driver's seat of global-warming activism are more often than not fascist psychopaths like Greta Thunberg, who actively fight against the very things that would best fight global warming, like nuclear energy and natural gas pipelines, so they can instead promote things that would make it worse, like socialism and degrowth.
We will never be able to rely on these people to do anything but cause problems. They should be shunned like lepers.
I think that if leaders are elected that oppose climate mitigation, that is indeed a knock on the climate-action political movement. They have clearly failed in their goals.
Allowing climate change to become a partisan issue was a disaster for the climate movement.
I think it's a (slight) update against the competence of the political operatives, but not against the claim that global warming exists.
I agree completely. Nonetheless, the claim that spending money on AI safety is a good investment rests on two premises: That AI risk is real, and that EA can effectively mitigate that risk.
If I were pouring money into activists groups advocating for climate action, it would be cold comfort to me that climate change is real when they failed.
The EA movement is like the Sunrise Movement/Climate Left. You can have good motivations and the correct ambitions but if you have incompetent leadership your organization can be a net negative for your cause.
> Is it a fair attack on climatologists to ask "If these people really think they're so smart that they can predict and avert crises far in the future, shouldn't they have been better able to handle a presidential election?"
It is a fair criticism of those who believe in the x-risk, or at least the extreme downsides, of climate change that they haven't figured out ways to accomplish their goals beyond political agitation. Building coalitions with potentially non-progressive causes, being more accepting of partial, incremental solutions, playing "normie" politics along the lines of Matt Yglesias, and maybe holding your nose through some negotiated deals where the right gets its way would probably mitigate and prevent situations where the climate people don't even have a seat at the table. For example, is making more progress on preventing climate extinction worth stalling out another decade on trans rights? I don't think that is exactly the tradeoff on the table, but there is a stark unwillingness to confront such questions by a lot of people who publicly push for climate maximalism.
"Playing normie politics" IS what you do when you believe something is an existential risk.
IMHO the test, if you seriously believe all these claims of existential threat, is your willingness to work with your ideological enemies. A real existential threat was, eg, Nazi Germany, and both the West and USSR were willing to work together on that.
When the only move you're willing to make regarding climate is to offer a "Green New Deal" it's clear you are deeply unserious, regardless of how often you say "existential". I don't recall the part of WW2 where FDR refused to send Russia equipment until they held democratic elections...
If you're not willing to compromise on some other issue then, BY FSCKING DEFINITION, you don't really believe your supposed pet cause is existential! You're just playing signaling games (and playing them badly, believe me, no one is fooled). Cf. Greta Thunberg suddenly becoming an expert on Palestine:
https://www.spiegel.de/international/world/a-potential-rift-in-the-climate-movement-what-s-next-for-greta-thunberg-a-2491673f-2d42-4e2c-bbd7-bab53432b687
FDR giving the USSR essentially unlimited resources for their war machine was a geostrategic disaster that led directly to the murder and enslavement of hundreds of millions under tyrannies every bit as gruesome as Hitler's. Including the PRC, which menaces the World to this day.
The issue isn't that compromise on existential threats is inherently bad. The issue is that, many times, compromises either make things worse than they would've been otherwise, or create new problems as bad as or worse than the ones they replaced.
I can think of a few groups, for example world Jewry, that might disagree with this characterization...
We have no idea how things might have played out.
I can tell you that the Hard Left, in the US, has an unbroken record of snatching defeat from the jaws of victory, largely because of their unwillingness to compromise, and I fully expect this trend to continue unabated.
Effect on climate? I expect we will muddle through, but in a way that draws almost nothing of value from the Hard Left.
The reason we gave the USSR unlimited resources was that they were directly absorbing something like 2/3 of the Nazis' bandwidth and military power in a terribly colossal years-long meatgrinder that killed something like 13% of the entire USSR population.
Both the UK and USA are extremely blessed that the USSR was willing to send wave after wave of literally tens of millions of their own people into fighting the Nazis and absorbing so much of their might, and it was arguably the deal of the century to trade mere manufactured objects for the breathing room and the Nazi distraction / might-dissipation that this represented.
The alternative would have been NOT giving the USSR unlimited resources, the Nazis quickly steamrolling the USSR and then turning 100% of their attention and military might towards the UK, a fight they would almost certainly have won. Or, even "better": not getting enough materiel to conduct a war and realizing he would lose, Stalin makes a deal with Germany and they BOTH focus on fighting the UK and USA - how long do you think the UK would have survived that?
Would the USA have been able to successfully fight a dual-front war with basically all of Europe aligned under Nazi power PLUS Japan with China's resources? We don't know, but it's probably a good thing in terms of overall deaths and destruction on all sides that we didn't need to find out.
Sure, communism sucked for lots of people. But a Nazi-dominated Europe / world would probably have sucked more.
Ah, come on, Scott: the board got the boot and was revamped to the better liking of Sam, who was brought back in a Caesarian triumph. That isn't very convincing as "so this guy is still on the board, which totes means the good guys are in control and keeping a cautious hand on the tiller, not rushing out unsafe AI".
https://www.reuters.com/technology/openais-new-look-board-altman-returns-2023-11-22/
Convince me that a former Treasury Secretary is on the ball about the latest theoretical results in AI, go ahead. Maybe you can send him the post about AI Monosemanticity, which I genuinely think would be the most helpful thing to do? At least then he'd have an idea about "so what are the eggheads up to, huh?"
While I agree with the general thrust, I think the short-term vs. long-term tension is neglected. For instance, you yourself recommended switching from chicken to beef to help animals, but this neglects the fact that over time, beef is less healthy than chicken, thus harming humans in a not-quickly-visible way. I hope this wasn't explicitly included and allowed for in your computation (you did the switch yourself, according to your post), but it illuminates the problem: EA wants clear beneficiaries, but "clear" often means "short-term" (for people who think AI doomerism is an exception, remember that for historical reasons, people in EA have, on median, timelines that are extremely short compared to most people's).
Damn, was supposed to be top-level. Not reposting.
> I guess my take is that if these folks really think they're so smart that they can prevent and avert crises far in the future, shouldn't they have been better able to handle the boardroom coup?
They got outplayed by Sam Altman, the consummate Silicon Valley insider. According to that anonymous rumour-collecting site, they're hardly the only ones, though it suggests they wouldn't have had much luck defending us against an actual superintelligence.
> For example, the tweet Scott included about how no one has done more to bring us to AGI than Eliezer—is that supposed to be a good thing?
No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek tragedy was entirely deliberate. But as Scott said, it is a thing that someone has said.
I actually pretty completely endorse the longtermism and existential risk stuff - but disagree about the claims about the best ways to achieve them.
Ordinary global health and poverty initiatives seem to me to be much more hugely influential in the long term than the short term, thanks to the magic of exponential growth. An asteroid or gamma-ray or whatever program that has a .01% chance of saving 10^15 lives a thousand years from now looks good at first compared to saving a few thousand lives this year - but when you think about how much good those few thousand people will do for their next 40 generations of descendants, as well as all the people those 40 generations of descendants will help, either through normal market processes or through effective altruist processes of their own, the latter starts to look really good at the thousand-year mark.
AI safety is one of the few existential risk causes that doesn’t depend on long term thinking, and thus is likely to be a very valuable one. But only if you have any good reason to think that your efforts will improve things rather than make them worse.
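To make the comparison above concrete, here is a minimal back-of-the-envelope sketch in Python. The moonshot figures are the ones from the paragraph above; the 2% compounding rate for the downstream benefits of near-term saving is purely an illustrative assumption, not a claim about actual growth.

```python
# Back-of-the-envelope comparison: low-probability moonshot vs. near-term
# life-saving whose downstream benefits compound over time.
# All figures are illustrative; the 2% growth rate is an assumption.

moonshot_probability = 1e-4      # 0.01% chance of success
moonshot_lives_saved = 1e15      # lives saved a thousand years from now
moonshot_expected_value = moonshot_probability * moonshot_lives_saved

lives_saved_now = 2_000          # "a few thousand lives this year"
assumed_growth_rate = 0.02       # assumed compounding of downstream benefits
years = 1_000

near_term_compounded = lives_saved_now * (1 + assumed_growth_rate) ** years

print(f"Moonshot expected value:       {moonshot_expected_value:.2e}")
print(f"Near-term saving, compounded:  {near_term_compounded:.2e}")
```

At these (entirely debatable) numbers the compounded near-term benefit comes out slightly ahead of the moonshot's expected value, and the result is exquisitely sensitive to the assumed growth rate, which is the real point of the comparison.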
I remember seeing this for the "climate apocalypse" thing many years ago: some conservationist (specifically about birds, I think) was annoyed that the movement had become entirely about global warming.
EDIT: it was https://grist.org/climate-energy/everybody-needs-a-climate-thing/
Global warming is simply a livelier cause for the Watermelons to get behind. Not because they genuinely care about global warming, as they oppose the solutions that would actually help alleviate the crisis, but because they're psychopathic revolutionary socialists who see it as the best means available today of accomplishing their actual goal: the abolition of capitalism and the institution of socialism.
Yup, pretty much this!
EA as a movement to better use philanthropic resources to do real good is awesome.
AI doomerism is a cult. It's a small group of people who have accrued incredible influence in a short period of time on the basis of what can only be described as speculation. The evidence base is extremely weak and it relies far too much on "belief". There are conflicts of interest all over the place that the movement is making no effort to resolve.
Sadly, the latter will likely sink the former.
At this point a huge number of experts in the field consider AI risk to be a real thing. Even if you ignore the “AGI could dominate humanity” part, there’s a large amount of risk from humans purposely (mis)using AI as it grows in capability.
Predictions about the future are hard and so neither side of the debate can do anything more than informed speculation about where things will go. You can find the opposing argument persuasive, but dismissing AI risk as mere speculation without evidence is not even wrong.
The conflicts of interest tend to be in the direction of ignoring AI risk by those who stand to profit from AI progress, so you have this exactly backwards.
You can't ignore the whole "AGI could dominate humanity" part, because that is core to the arguments that this is an urgent existential threat that needs immediate and extraordinary action. Otherwise AI is just a new disruptive technology that we can deal with like any other new, disruptive technology. We could just let it develop and write the rules as the risks and dangers become apparent. The only way you can justify the need for global action right now is based on the belief that everybody is going to die in a few years' time. The evidence for existential AI risk is astonishingly weak given the amount of traction it has with policymakers. It's closer to Pascal's Wager rewritten for the 21st century than anything based on data.
On the conflict of interest, the owners of some of the largest and best funded AI companies on the planet are attempting to capture the regulatory environment before the technology even exists. These are people who are already making huge amounts of money from machine learning and AI. They are taking it upon themselves to write the rules for who is allowed to do AI research and what they are allowed to do. You don't see a conflict of interest in this?
Let's distinguish "AGI" from "ASI", the latter being a superintelligence equal to something like a demigod.
Even AGI strictly kept to ~human level in terms of reasoning will be superhuman in the ways that computers are already superhuman: e.g., data processing at scale, perfect memory, replication, etc., etc.
Even "just" that scenario of countless AGI agents is likely dangerous in a way that no other technology has ever been before if you think about it for 30 seconds. The OG AI risk people are/were futurists, technophiles, transhumanists, and many have a strong libertarian bent. "This one is different' is something they do not wish to be true.
Your "conflict of interest" reasoning remains backwards. Regulatory capture is indeed a thing that matters in many arenas, but there are already quite a few contenders in the AI space from "big tech." Meaningfully reducing competition by squishing the future little guys is already mostly irrelevant in the same way that trying to prevent via regulation the creation of a new major social network from scratch would be pointless. "In the short run AI regulation may slow down our profits but in the long run it will possibly lock out hypothetical small fish contenders" is almost certainly what no one is thinking.
"No one on this successful tech company's board of directors is making decisions based on what will eventually get them the most monopoly profits" sounds like an extraordinary claim to me.
This is the board of directors that explicitly tried to burn the company down, essentially for being too successful. They failed, but can you ask for a more credible signal of seriousness?
1. Holy shit, is that an ironic thing to say after the OpenAI board meltdown. Also check out Anthropic's board and equity structure. Also profit-driven places like Meta are seemingly taking a very different approach. Why?
2. You’re doing the thing where decreasing hypothetical future competition from new, small entrants to a field equals monopoly. Even if there was a conspiracy by eg Anthropic to use regulatory barriers against new entrants, that would not impact the already highly competitive field between the several major labs. (And there are already huge barriers to entry for newcomers in terms of both expertise and compute. Even a potential mega contender like Apple is apparently struggling and a place like Microsoft found a partner.)
Expert at coming up with clever neural net architectures == expert at AI existential risk?
No?
It's just that, at this point, a significant number of experts in AI have come around to believing AI risk is a real concern. So have a lot of prominent people in other fields, like national security. So have a lot of normies who simply intuit that developing super smart synthetic intelligence might go bad for us mere meat machines.
You can no longer just hand wave AI risk away as a concern of strange nerds worried about fictional dangers from reading too much sci-fi. Right or wrong, it's gone mainstream!
All predictions about the future are speculation. The question is whether it's correct or incorrect speculation.
Who are some people who have accrued incredible influence and what is the period of time in which they gained this influence?
From my standpoint it seems like most of the people with increased influence are either a) established ML researchers who recently began speaking out in favor of deceleration and b) people who have been very consistent in their beliefs about AI risk for 12+ years, who are suddenly getting wider attention in the wake of LLM releases.
Acceptance of catastrophic risk from artificial superintelligence is the dominant position among the experts (including independent academics), the tech CEOs, the major governments, and the general public. Calling it a "small group of people who have accrued incredible influence" or "a cult" is silly. It's like complaining about organizations fighting Covid-19 by shouting "conspiracy!" and suggesting that the idea is being pushed by a select group.
The denialists/skeptics are an incredibly fractured group who don't agree with each other at all about how the risk isn't there; the "extinction from AI is actually good", "superintelligence is impossible", "omnipotent superintelligence will inevitably be absolutely moral", and "the danger is real but I can solve it" factions and subfactions do not share ideologies, they're just tiny groups allying out of convenience. I don't see how one could reasonably suggest that one or more of those is the "normal" group, to contrast with the "cult".
I think there’s an important contrast between people who think that AI is a significant catastrophic risk, and people who think there is a good project available for reducing that risk without running a risk of making it much worse.
For those of you that shared the "I like global health but not longtermism/AI Safety", how involved were you in EA before longtermism / AI Safety became a big part of it?
I read some EA stuff, donated to AMF, and went to rationalist EA-adjacent events. But never drank the kool aid.
I think it is a good question to raise with the EA-adjacent. Before AI Doomerism and the tar-and-feathering of EA, EA-like ideas were starting to get more mainstream traction and adoption. Articles supportive of, say, givewell.org in local papers, not mentioning EA by name but discussing some of the basic philosophical ideas, were starting to percolate out more into the common culture. Right or wrong, there has been a backlash that is disrupting some of that influence, even though those _in_ the EA movement are still mostly doing the same good stuff Scott outlined.
Minor point: I'd prefer to treat longtermism and AI Safety quite separately. (FWIW, I am not in EA myself.)
Personally, I want to _see_ AGI, so my _personal_ preference is that AI Safety measures at least don't cripple AI development the way regulatory burdens made civilian nuclear power grind to a 50-year halt in the USA. That said, the time scale for plausible risks from AGI (at least the economic displacement ones) is probably less than 10 years and may be as short as 1 or 2. Discussing well-what-if-every-job-that-can-be-done-online-gets-automated does not require a thousand-year crystal ball.
Longtermism, on the other hand, seems like it hinges on the ability to predict consequences of actions on *VASTLY* longer time scales than anyone has ever managed. I consider it wholly unreasonable.
None of this is to disparage Givewell or similar institutions, which seem perfectly reasonable to me.
I actually think that longtermism advocates for ordinary health and development charity - that sort of work grows exponentially in impact over the long term and thus comes out looking even better than things like climate or animal welfare, whose impacts grow closer to linearly with time.
The problem with longtermism is that you can use it to justify pretty much anything, regardless of whether you're even right, as long as your ends are far enough away from the now that you never actually have to be held accountable for getting things wrong.
It's not a very good philosophy. People should be saved from malaria for its own sake. Not because of "longtermism".
Given a choice between several acts which seem worth doing for their own sake, the rate at which secondary benefits potentially compound over the long term could be a useful tiebreaker.
"that sort of work grows exponentially in impact over the long term" Some of the longtermist arguments talk about things like effects over a time scale where they expect us to colonize the galaxy. The time scale over which economies have been growing more-or-less steadily is more like 200-300 years. I think that it is sane to make a default assumption of exponential impact, as you describe, for that reason over that time scale (though many things, AI amongst them, could invalidate that). _Beyond_ 200-300 years, I don't think smoothish-growth-as-usual is a reasonable expectation. I think all we can say longer term than that is _don't_ _know_.
Longtermism / AI safety were there from the beginning, so the question embeds a false premise.
I heard about EA and got into the global health aspects of it from a talk on AI safety I went to given by... EY. I went to the talk on AI safety because I'd read HPMOR and just wanted to meet the author.
I wasn't at all convinced about AI safety, but I became interested in the global health aspects of EA. This year my donations went to PSI. I'm still an AI sceptic.
I gave money to GiveDirectly, which is EA-adjacent, and some years would get GiveWell endorsements. It never gets to the top of the recommendation list, but has the big advantage of having a low variance (especially the original formulation, where everyone living in a poor village got a one-time unconditional payout). "I can see you're not wasting the funds" is a good property if you have generally low trust in people running charitable orgs (the recent turn into generating research papers to push UBI in the US is unfortunate).
AI-doom-people have a decent shot at causing more deaths than all other human causes put together, if they follow the EY "nuke countries with datacenters" approach. Of course they'll justify it by appealing to the risk of total human extinction, but it shouldn't be surprising that people who estimate a substantially lower probability of the latter see the whole endeavor as probably net-negative. You'd be better off burning the money.
My only prior exposure was Doing Good Better, before seeing a *lot* of longtermism/x-risk messaging at EA Cambridge in 2018 (80k hours workshop, AI safety reading group, workshops at EA Cambridge retreat).
I considered AI safety (I'm a CS researcher already), enough to attend the reading group. But it seemed like pure math-level mental gymnastics to argue that the papers had any application to aligning future AGIs, and I dislike ML/AI research anyway.
Well there's also the part where people may have been involved in charity/NGO stuff before the cool kids relabeled it as EA.
Not to blame anyone for the relabeling though - if it got lots of fresh young people involved in humanitarian activity, and some renewed interest into its actual efficacy, they're more than entitled to take pride and give it a new name.
Guilty as charged; I posted my own top-level comment voicing exactly this position.
Freddie de Boer was talking about something like this today, about retiring the EA label. The effective EA orgs will still be there even if there is no EA. But I'm not really involved in the community, even if I took the Giving What We Can pledge, so it doesn't really matter much to me if AI X-risk is currently sucking up all the air in the movement.
I agree with the first part, but the problems with EA go beyond AI doomerism. People in the movement seriously consider absurd conclusions like it being morally desirable to kill all wild animals, it has perverse moral failings as an institution, its language has evolved to become similar to postmodern nonsense, it has a strong left wing bias, and it has been plagued by scandals.
Surely none of that is necessary to get more funding to go towards effective causes. I’d like to invite someone competent to a large corporate so that we can improve the effectiveness of our rather large donations, but the above means I have no confidence to do so.
https://iai.tv/articles/how-effective-altruism-lost-the-plot-auid-2284
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
Well, at some point some people also considered conclusions as absurd as giving voting rights to women, and look where we are. Someone has to consider things to understand whether they're worth anything.
The problem is that utilitarianism is likely a fatally flawed approach when taken to its fullest, most extreme form. There is some element of deontology that probably needs to be accounted for in a more robust ethical framework.
Or, hah, maybe AGI is a Utility Monster we should accelerate: if our destruction would provide more global utility for such an optimizing agent than our continued existence, it should be the wished-for outcome. But such ideas are absurd.
To point out, Bentham in fact advocated for women's rights "before his time", which led to many proto-feminist works getting published by John Stuart Mill. In fact, contemporaneous arguments against his stance would cite that women only mattered in the context of what they could do for men, so it was ridiculous to speak of suffrage.
https://blogs.ucl.ac.uk/museums/2016/03/03/bentham-the-feminist/
Literally comparing "maybe we should kill whole classes of animals and people" to "maybe we should give rights to more classes of people". Wow.
The clearest evidence I can imagine that you're in a morally deranged cult.
I don't get it. Which one is the more plausible claim? Because for most of history, it would have been "killing whole classes of animals and people". The only reason that isn't true today is precisely because some people were willing to ponder absurd trains of thought.
Deliberate attempts to exterminate whole classes of people go back to at least King Mithridates VI in 88 BCE. For most of human history giving women (or anyone) the vote was a weird and absurd idea while mass slaughter was normal.
It's because people were willing to entertain "absurd" ideas that mass slaughter is now abhorrent and votes for all are normal.
Morally deranged cults don’t “seriously consider” ideas that go diametrically against what other members of the cult endorse. Morally deranged cults outright endorse these crazy ideas. EA does not endorse the elimination of wild animals, though it does consider it seriously.
The only thing worse around here than bad EA critics is bad EA defenders.
Any idea should be considered based on its merit, not on emotional reaction. I am not sure if you think I am in a cult, or the people in EA are.
All I can say is that negative utilitarianism exists. There is even a book, Suffering-Focused Ethics, exploring roughly the idea that suffering is much worse than positive experience.
As a person who is seriously suffering, I consider this topic at least worth discussing. The thought that I could be in a situation where I cannot kill myself and won't get pain meds gives me serious anxiety. Yet this is pretty common. In most countries euthanasia is illegal and pain medicines are strictly controlled. A situation where you suffer terribly and cannot die is common. Normal people don't think about it often, until they do.
Based on my thoughts above, I feel like the suffering of wild and domesticated animals is something real. I am not sure why you think that by default we cannot even entertain the idea that we could end their suffering. I myself am neither pro nor contra, but I am happy that there are people who think about these topics.
As someone who doesn't identify with EA (but likes parts of it), I don't expect my opinion to be particularly persuasive to people who do identify more strongly with the movement, but I do think such a split would result in broader appeal and better branding. For example, I donate to GiveWell because I like its approach to global health & development, but I would not personally choose to donate to animal welfare or existential risk causes, and I would worry that supporting EA more generically would support causes that I don't want to support.
To some extent, I think EA-affiliated groups like GiveWell already get a lot of the benefit of this by having a separate-from-EA identity that is more specific and focused. Applying this kind of focus on the movement level could help attract people who are on board with some parts of EA but find other parts weird or off-putting. But of course deciding to split or not depends most of all on the feelings and beliefs of the people actually doing the work, not on how the movement plays to people like me.
I agree that there should be a movement split. I think the existential risk / AI doomerism subset of EA is definitely less appealing to the general public and attracts a niche audience, compared to the effective charity subset, which is more likely to be generally accepted by pretty much anybody of any background. If we agree that the goal is to maximize the number of people involved in at least one of the causes, then keeping the movement associated with both means many people who would've been interested in effective charitable giving will be driven away by the existential risk stuff.
My first thought was "Yes, I think such a split would be an excellent thing."
My second thought is similar, but with one slight concern: I think that the EA movement probably benefits from attracting and being dominated by blueish-grey thinkers; I have a vague suspicion that such a split would result in the two halves becoming pure blue and reddish-grey respectively, and I think a pure blue Effective Charity movement might be less effective than a more ruthlessly data-centric bluish-grey one.
Fully agree.
Yes, a pure blue Effective Charity movement would give you more projects like the hundreds of millions OpenPhil spent on criminal justice, which they deemed ineffective but then spun off into its own thing.
Can you explain the color coding? I must have missed the reference.
It's an SSC/ACX shibboleth dating back to https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/
I personally know four people who were so annoyed by AI doomers that they set out to prove beyond a reasonable doubt that there wasn't a real risk. In the process of trying to make that case, they all changed their mind and started working on AI alignment. (One of them was Eliezer, as he detailed in a LW post long ago.) Holden Karnofsky similarly famously put so much effort into explaining why he wasn't worried about AI that he realized he ought to be.
The EA culture encourages members to do at least some research into a cause in order to justify ruling it out (rather than mocking it based on vibes, like normal people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs pivoting to work on AI x-risk is one of the strongest meta-arguments for why you, dear reader, should give it a second thought.
This was also my trajectory ... essentially I believed that there were a number of not too complicated technical solutions, and it took a lot of study to realize that the problem was genuinely extremely difficult to solve in an airtight way.
I might add that I don't think most people are in a position to evaluate in depth and so it's unfortunately down to which experts they believe or I suppose what they're temperamentally inclined to believe in general. This is not a situation where you can educate the public in detail to convince them.
I'd argue in the opposite direction: that one of the best things about the EA (and the Rationalist) community is that it's a rare example of an in-group defined by adherence to an epistemic toolbox rather than affiliation with specific positions on specific issues.
It is fine for there to be different clusters of people within EA who reach very different conclusions. I don't need to agree with everyone else about where my money should go. But it sure is nice when everyone can speak the same language and agree on how to approach super complex problems in principle.
I think this understates the problem. EA had one good idea (effective charity in developing countries), one mediocre idea (that you should earn to give), and everything else is mixed; being an EA doesn't provide good intuitions any more than being a textualist does in US jurisprudence. I'm glad Open Phil donated to the early YIMBY movement, but if I want to support good US politics I'd prefer to donate directly to YIMBY orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and OpenAI events should be treated as broadly discrediting to the idea that EA is a well-run organization and to the reliability of the current leadership. I think GiveWell remains a good organization for what it is (and I will continue donating to GiveDirectly), but while I might trust individuals whom Scott is calling EA, I think the EA label is a negative, in the way that I might like libertarians but not people using the Libertarian label.
OK, this EA article persuaded me to resubscribe. I love it when someone causes me to rethink my opinion.
Nothing like a good fight in the comments section to get the blood flowing and the wallet opened!
I think EA is great and this is a great post highlighting all the positives.
However, my personal issue with EA is not its net impact but how it's perceived. SBF made EA look terrible because many EAers were wooed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples, since they highlight blind spots in the community that the community doesn't seem to be confronting.
And maybe that's unfair. But EA signed up to be held to a higher standard.
I didn't sign up to be held to a higher standard. Count me in for team "I have never claimed to be better at figuring out whether companies are frauds than Gary Gensler and the SEC". I would be perfectly happy to be held to the same ordinary standard as anyone else.
I'm willing to give you SBF but I don't see how the castle thing holds up. There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car while actually living in a mansion is an (unfair) microcosm of the castle thinking.
I don’t really get the issue with the castle thing. An organization dedicated to marketing EA spent a (comparatively) tiny amount of money on something that will be useful for marketing. What exactly is hypocritical about that?
It's the optics. It looks ostentatious, like you're not really optimizing for efficiency. Sure, they justified this on grounds of efficiency (though I have heard questioning of whether being on the hook for the maintenance of a castle really is cheaper than just renting venues when you need them), but surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
1. Poor optics isn’t hypocrisy. That is still just a deeply unfair criticism.
2. Taking effectiveness seriously involves putting effectiveness above optics in some cases. The problem with many non-effective charities is that they are too focused on optics.
3. Some of the other EA “scandals” make it very clear that it doesn’t matter what you do, some people will hate you regardless. Why would you sacrifice effectiveness for maybe (but probably not) improving your PR given the number of constraints.
EA ~= effectively using funds.
Castle != effectively using funds.
Therefore, hypocrisy.
You can't separate optics from effectiveness, since effectiveness is dependent on optics. Influence is power, and power lets you be effective. The people in EA should know this better than anyone else.
See, I think EA shows a lack of common sense, and this comment is an example. It's true that no matter what you do some people will hate you, but if you buy a fucking castle *everybody's* going to roll their eyes. It's not hard to avoid castles and other things that are going to alienate 95% the public. And you have to think *some* about optics, because it interferes with the effectiveness of the organization if 95% of the public distrusts it.
EA's disdain for "optics" is part of what drew me to it in the first place. I was fed up with charities and policymakers who cared far more about being perceived to be doing something than about actually doing good things.
Where do you draw the line? If EAs were pursuing smooth interactions with normies, they would also be working on the stuff normies like.
Also, idk, maybe the castle was more expensive than previously thought. Good on paper, bad in practice. So, no one can ever make bad investments? Average it in with other donations and the portfolio performance still looks great. It was a foray into cost-saving real estate. To the extent it was a bad purchase, maybe they won't buy real estate anymore, or will hire people who are better at it, or what have you. The foundation that bought it will keep donating for, most likely, decades into the future. Why can't they try a novel donor strategy and see if it works? For information value. Explore what a good choice might be asap, then exploit/repeat/hone that choice in the coming years. Christ, *everyone* makes mistakes and tries things given decent reasoning. The castle had decent reasoning. So why are EAs so rarely allowed to try things, without getting a fingerwag in response?
Look at default culture not EA. To the extent EAs need to play politics, they aren't the worst at it (look at DC). But donors should be allowed to try things.
> The castle had decent reasoning
I don't know, I feel like if there had been a single pragmatic person in the room when they proposed to buy that castle, the proposal would have been shot down. But yes, I do agree that ultimately, you have to fuck around and find out to find what works, so I don't see the castle as invalidating of EA, it's just a screw up.
Didn’t the castle achieve good optics with its target demographic though? The bad optics are just with the people who aren’t contributing, which seems like an acceptable trade-off
> surely taking effectiveness seriously involves pursuing smooth interactions with the normies?
If the normies you're trying to pursue smooth interactions with include members of the British political and economic Establishment, "come to our conference venue in a repurposed country house" is absolutely the way to go.
I think you're overestimating how much the castle thing affects interactions with normies. It was a small news story and I bet even the people who read it at the time have mostly forgotten it by now. I estimate that if a random person were to see a donation drive organized by EAs today the chance that their donation would be affected by the castle story is <0.01%
It's hard to believe that a castle was the optimum (all things considered; no one is saying EA should hold meetings in the cheapest warehouse). The whole pitch of the group is looking at things rationally, so if they fail at one of the most basic things like choosing a meeting location, and there's so little pushback from the community, then what other things is the EA community rationalizing invalidly?
And if we were to suppose that the castle really was carefully analyzed and evaluated validly as at- or near-optimal, then there appears to be a huge blindspot in the community about discounting how things are perceived, and this will greatly impact all kinds of future projects and fund-raising opportunities, i.e. the meta-effectiveness of EA.
Have you been to the venue? You keep calling it "a castle", which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space, similar to renting a nice hotel. It is far from the most luxurious hotels; it is more like a homey version of the kind of hotel you'd run events in. They considered different venues (as others have said, this is explained in other articles) and settled on this one due to price/quality/location and other considerations.
Quick test: if the venue appreciated in value and can now be sold for twice the money, making this a net-positive investment which they could in a pinch liquidate to fund a response to a really important crisis, and they do that - does that make the purchase better? If renting it out year after year makes full financial sense, and other venues would have been worse - are you now convinced?
If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
> Have you been to the venue?
No, and it doesn't matter. EA'ers such as Scott have referred and continue to refer to it as a castle, so it must be sufficiently castle-like and that's all that matters as it impacts the perception of EA.
> They have considered different venues (as other said, explained in other articles), and settled on this one due to price/quality/position and other considerations.
Those other considerations could have included a survey of how buying a castle would affect perceptions of EA and potential donors. This is a blindspot.
> If not, you may just be angry at the word "castle" and aren't doing a rational argument anymore.
Also indirectly answering your other questions -- I don't care about the castle. I'm rational enough to not care. What I care about is the perception of EA and the fact that EA'ers can't realize how bad the castle looks and how this might impact their future donations and public persona. They could have evaluated this rationally with a survey.
Why wouldn't a castle be the optimal building to purchase? It is big, with many rooms, and due to the lack of modern amenities it is probably cheaper than buying a more recently built conference center type building. Plus, more recently built buildings tend to be in more desirable locations where land itself is more expensive. I think you're anchoring your opinion way too much on "castle = royalty".
So far it's been entirely negative for marketing EA, isn't in use (yet), isn't a particularly convenient location, and the defenders of the purchase even said they bought the castle because they wanted a fancy old building to think in.
So the problem with the castle is not the castle itself it's that it makes you believe the whole group is hypocritical and ineffective? But isn't that disproved by all the effective actions they take?
Not me. I don't care about the castle. I'm worried about public perceptions of EA and how it impacts their future including donations. Perceptions of profligacy can certainly overwhelm the effective actions. Certain behaviors have a stench to lots of humans.
I think the only rational way to settle this argument would be for EA to run surveys of the impact on perceptions of the use of castles and how that could impact potential donors.
Imagine an Ivy League university buys a new building, then pays a hundred thousand dollars extra to buy a lot of ivy and drape it over the exterior walls of the building. The news media covers the draping expenditure critically. In the long term, would the ivy gambit be positive or negative for achieving that university's goals of cultivating research and getting donations?
I don't know. Maybe we need to do one of those surveys that you're proposing. But I would guess that it's the same answer for the university's ivy and CEA's purchase of the miniature castle.
The general proposal I'm making: if we're going to talk about silly ways of gaining prestige for an institution, let's compare like with like.
See my discussion of castle situation in https://www.astralcodexten.com/p/my-left-kidney . I think it was a totally reasonable purchase of a venue to hold their conferences in, and I think those conferences are high impact. I discuss the optics in part 7 of https://www.astralcodexten.com/p/highlights-from-the-comments-on-kidney, and in https://www.astralcodexten.com/p/the-prophet-and-caesars-wife
All I can write at this point is that it would be worth a grant to an EA intern to perform a statistically valid survey of how EA using a castle impacts the perception of EA and potential future grants. Perhaps have one survey of potential donors, another of average people, and include questions for the donors about how the opinions of average people might impact their donations.
Yes, I read your points and understand them. I find them wholly unconvincing as far as the potential impacts on how EA is perceived (personally, I don't care about the castle).
EAs have done surveys of regular people about perceptions of EA - almost no one knows what EA is.
Donors are wealthy people, many of whom understand the long-term value of real estate.
I like frugality a lot. But I think people who are against a conference host investing in the purchase of their own conference venue are not thinking from the perspective of most organizations or donors.
I.e., it's an average sort of thing that lots of other organisations would do. But EA is supposed to be better. (I don't have anything against EA particularly, but this is a pattern I keep noticing -- something or someone is initially sold as being better, then defended as being not-worse.)
We should learn to ignore the smell of hypocrisy. There are people who like to mock the COP conferences because they involve flying people to the Middle East to talk about climate change. But those people haven’t seriously considered how to make international negotiations on hard topics effective. Similarly, some people might mock buying a conference venue. But those people haven’t seriously thought about how to hold effective meetings over a long period of time.
On that front, EA sometimes has a (faux?) humble front to it, and that's part of where the hypocrisy comes from. I think that came in the early days, people so paralyzed by optics and effectiveness that they wouldn't spend on any creature comforts at all. Now, perhaps they've overcorrected, and spend too much on comforts to think bigger thoughts.
But if they want to stop caring about hypocrisy, they should go full arrogant, yes we're better and smarter than everyone else and we're not going to be ashamed of it. Take the mask off and don't care about optics *at all*. Let's see how that goes, yeah?
People don't mock buying a venue, they mock buying a *400 year old castle* for a bunch of nerds that quite famously don't care about aesthetics.
Re: "should I care about perception?", I think "yes" and "no" are just different strategies. Cf. the stock market. Whereas speculators metagame the Keynesian Beauty Contest, buy-&-hold-(forever) investors mostly just want the earnings to increase.
This type of metagaming has upsides, in that it can improve your effectiveness, ceteris paribus. This type of metagaming also has downsides, in that it occasionally leads to an equilibrium where everyone compliments the emperor's new clothes.
My impression is that EA is by definition supposed to be held to a higher standard. It's not just plain Altruism like the boring old Red Cross or Doctors Without Borders, it's Effective Altruism, in that it uses money effectively and more effectively than other charities do.
I don't see how that branding/stance doesn't come with an onus for every use of funds to stand up to scrutiny. I don't think it's fair to say that EA sometimes makes irresponsible purchases but should be excused because on net EA is good. That's not a deal with the devil; it's mostly very good charitable work with the occasional small, castle-sized deal with the devil. That seems to me like any old charitable movement, and not in line with the 'most effective lives per dollar' thesis of EA.
Exactly! 1000 "yes"s!
I can barely comprehend the arrogance of a movement that has in its literal name a claim that they are better than everyone else (or ALL other charities at least), that routinely denigrates non-adherents as "normies" as if they're inferior people, that has members who constantly say without shame or irony that they're smarter than most people, that they're more successful than most people (and that that's why you should trust them), that is especially shameless in its courting of the rich and well-connected compared to other charities and groups...having the nerve to say after a huge scandal that they never claimed a higher standard than anyone else.
Here's an idea. Maybe, if you didn't want to be held to a higher standard than other people, you shouldn't have *spent years talking about how much better you are than other people*.
I think you're misunderstanding EA. It did not create a bunch of charities and then shout "my charities are the effectivest!" EA started when some people said "which jobs/charities help the world the most?" and nobody had seriously tried to find the answers. Then they seriously tried to find the answers. Then they built a movement for getting people and money sent where they were needed the most. The bulk of these charities and research orgs *already existed*. EA is saying "these are the best", not "we are the best".
And- I read you as talking about SBF here? That is not what people failed at. SBF was not a charity that people failed to evaluate well. SBF was a donor who gave a bunch of money to the charities and hid his fraud from EA's and customers and regulators and his own employees.
I have yet to meet an EA who frequently talks about how they're smarter, more successful, or generally better than most people. I think you might be looking at how some community leaders think they need to sound really polished, and overinterpreting?
Now I have seen "normies" used resentfully, but before you resent people outside your subculture you have to feel alienated from them. The alienation here comes from how it seems really likely that our civilization will crash in a few decades. How, if farm animals can really feel, then holy cow have we caused so much pain. How there are 207 people dying every minute - listen to Believer by Imagine Dragons, and imagine every thump is another kid, another grandparent. It's a goddamn emergency, it's been an emergency since the dawn of humanity. And we can't fix all of it, but if a bunch of us put our heads together and trusted each other and tried really hard, we could fix so much... So when someone raised a banner and said "Over here! We're doing triage! These are the worst parts we know how to fix!", you joined because *duh*. Then you pointed it out to others, and. Turns out most people don't actually give a shit.
That's the alienation. There's lots of EA's who aren't very smart or successful at all. There's lots of people who get it, and have been triaging the world without us and don't want to join us. This isn't alienating. Alienation comes from normies- many of them smarter and more successful- who don't care. Or who are furious your post implied an art supply bake sale isn't just as important as the kids with malaria. It doesn't make people evil that they don't experience that moment of *duh*, but goddamn do I sometimes feel like we're from different planets.
"The world is terrible and in need of fixing" is a philosophical position that is not shared by everyone, not a fact
Right, that's why I said people who don't feel that way sometimes feel like aliens, not that they're mistaken.
That was a good comment, and mine above was too angry I think. I'm starting to think everyone's talking about very different things with the same words. This happens a lot.
First, I'm a bit sceptical of the claim that, before EA, nobody was evaluating charity effectiveness. This *feels* like EA propaganda, and I'm *inclined* to suspect that EA's contribution was at least as much "more utilitarian and Bayesian evaluation" as "more evaluation". BUT I have no knowledge of this whatsoever and it has nothing to do with my objection to EA, so I'm happy to concede that point.
Second, regarding SBF my main issue is with the morality of "earning to give" and its very slippery slope either straight to "stealing to give" or to "earning to give, but then being corrupted by the environment and lifestyle associated with earning millions, and eventually earning and stealing to get filthy rich". Protestations that EAs never endorsed stealing, while I accept they're sincere, read a bit too much like "will no one rid me of this troublesome priest?" It's important for powerful people to avoid endorsing principles that their followers might logically take to bad ends, not just avoid endorsing the bad ends themselves. (Or at least, there's an argument that they should avoid that, and it's one that's frequently used to lay blame on other figures and groups.)
Third, regarding "normies", I don't feel like I've seen it used to disparage "people who don't think kids with malaria are more important than the opera", or if I have, not nearly as many times as it's used to disparage "people who think kids with malaria are more important than space colonies and the singularity". I completely see the "different planets" thing, and this goes both ways. Lots of people don't care about starving children, and that's horrific. EAs of course are only a small minority of those who *do* care, effectiveness notwithstanding. On the other hand, this whole "actual people suffering right now need to be weighed against future digital people" is so horrific, so terrifying, so monstrous that I'm hoping it's a hoax or something. But I haven't seen anyone deny that many EAs really do think like that. In a way, using the resources and infrastructure (if not the actual donations) set up for global poverty relief to instead make digital people happen faster is much worse than doing nothing at all for poverty relief to begin with (since you're actively diverting resources from it). So we could say "global health EAs" are on one planet, "normies" are on a second planet, and "longtermist EAs" are on a third planet, and the third looks as evil to the second as the second does to the first.
FWIW, charity evaluation existed before EA, but it was almost entirely infected by Goodhart's law: charity evaluators measured *overhead*, not impact. A charity which claimed to help minorities learn STEM skills by having them make shoes out of cardboard and glue as an afterschool program (because everyone knows minorities like basketball shoes, and designing things that require measurements is kind of like STEM) would have been rated very, very highly if they were keeping overhead low and actually spending all of the money on their ridiculous program, but the actual impact of the program wouldn't factor into it at all. I use this example because it's something I actually saw in real life.
These evaluators served an important purpose in sniffing out fraud and the kind of criminal incompetence that destroys most charities, but clearly there was something missing, and EA filled in what was missing
TBC, you're replying to a comment about whether individual EA's should be accountable for many EA orgs taking money from SBF. I do not think that "we try to do the most good, come join us" is branding with an onus for you, as an individual, to run deep financial investigations on your movement's donors.
But about the "castle", in terms of onuses on the movement as a whole- That money was donated to Effective Ventures for movement building. Most donations given *under EA* go to charities and research groups. Money given *directly to EV* is used for things like marketing and conferences to get more people involved in poverty, animal, and x-risk areas. EV used part of their budget to buy a conference building near Oxford to save money in the long run.
If the abbey was not the most effective way to get a conference building near Oxford, or if a conference building near Oxford was not the most effective way to build the movement, or if building the movement is not an effective way to get more good to happen, then this is a way that EA fell short of its goal. Pointing out failures is not a bad thing. (Not that anyone promised zero mistakes ever. The movement promised thinking really hard and doing lots of research, not never being wrong.) If it turns out that the story we heard is false and Rob Wiblin secretly wanted to live in a "castle", EA fell short of its goal due to gross corruption by one of its members, which is worth much harsher criticism.
In terms of the Red Cross, actually yes. Even if we found out 50% of all donor money was being embezzled for "castles", EA would still be meeting its goal of being more effective than just about any major charity organization. EA donation targets are more than twice as cost effective as Red Cross or DWB.
Hold to the higher standard, but if you’re going to criticize about the castle, you better be prepared to explain how better to host a series of meetings and conferences on various topics without spending a lot more money.
I think your assumption that "any old charitable movement" is about as effective as spending the vast majority of funds on carefully chosen interventions, plus buying a castle once and falling for a snake oil salesman, is wrong though. My impression is most charitable movements accomplish very little, so it is quite easy to be more effective than them. And until another movement comes along that is more effective than EA at saving lives, I'll continue thinking that.
A lot of people ignore it, but I continue to find the "Will MacAskill mentored SBF into earn to give" connection the problem there. No one can always be a perfect judge of character, but it was a thought experiment come to life. It says... *something* about the guardrails and the culture. It's easy to take it as saying too much, to be sure many people do, but it's also easy to ignore what it says entirely.
I recognize broader-EA has (somewhat) moved away from earning to give and that the crypto boom that enabled SBF to be a fraud of that scale was (probably) a once in a lifetime right-place right-time opportunity for both success and failure. Even so.
In point of fact, you all are being held to the ordinary standard. Public corruption leads to public excoriation, and "but look at the good we do" is generally seen as a poor defense until a few years later when the house is clearly clean. That is the ordinary standard.
I think EA signed up to be held to the standard "are you doing the most good you can with the resources you have". I do not think it signed up to be held to the standard "are you perceived positively by as many people as possible". Personally I care a lot more about the first standard, and I think EA comes extremely impressively close to meeting it.
Sure, but go Meta-Effectiveness and consider that poor rhetoric and poor perception could mean fewer resources for the actions that really matter. A few more castle debacles and the cost for billionaires being associated with EA may cross a threshold.
Seems a bit perverse to say EA is failing their commitment to cost-effectiveness by over-emphasising hard numbers in preference to vibes.
Castle != cost-effective. And perceptions of using castles, and blindness to how bad this looks, could have massive long-term impacts on fund-raising.
I don't understand why this is so complicated. It doesn't matter how tiny the cost of the castle has been relative to all resources spent. It's like a guy who cheated on a woman once. Word gets around. And when the guys says, "Who _cares_ about the cheating! Look at all the wonderful other things I do" then it looks even worse. Just say, "Look, we're sorry and we're selling the castle, looking for a better arrangement, and starting a conversation about how to avoid such decisions in the future."
Why is the castle not cost effective?
Yeah, I was just now trying to run figures about increased persuasiveness toward government officials and rich people, to see what the break-even would have to be.
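If it helps make that concrete, here's a minimal back-of-envelope sketch of the break-even logic; every number in it is a hypothetical placeholder, not an actual figure for the purchase:

```python
# Break-even sketch: buying a venue outright vs. renting one as needed.
# All numbers are hypothetical placeholders, not actual figures.
purchase_cost = 15_000_000           # hypothetical up-front cost of the building
resale_fraction = 0.8                # assumed fraction recoverable if it were later resold
annual_rental_saved = 300_000        # hypothetical yearly cost of renting comparable space
annual_fundraising_uplift = 200_000  # hypothetical extra donations from hosting donors on-site

sunk_cost = purchase_cost * (1 - resale_fraction)   # capital genuinely "spent"
annual_benefit = annual_rental_saved + annual_fundraising_uplift
print(f"Break-even after roughly {sunk_cost / annual_benefit:.1f} years under these assumptions")
```

The point isn't these particular numbers; it's that the answer turns on the resale value and on how much counterfactual venue spending and extra persuasion the building actually buys.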
Given the obvious difference in intuitions on how to discount the perceptions of profligacy, as proposed in another response to Scott, I think the only way to actually resolve this is to conduct a survey.
Maybe they should have bought a motte instead. That clearly wouldn't be assailable, and thus beyond reproach.
I just do not get the mindset of someone who gets this hung up on "castles". Is that why I don't relate to the anti-EA mindset?
Should they have bought a building not made out of grey stone bricks? Would that make you happy?
I understand your model: the abbey was a horrid investment, and a group that holds itself out as a cost-effectiveness charity while making horrid investments should lose credibility and donors.
No one disagrees with that premise.
I disagree that it was a horrid investment, based on the info they had at the time.
So, I don’t see a loss of credibility there.
Others will disagree that CEA/EV is primarily a cost-effectiveness charity.
It looks pretty good to people who think castles are cool, and don't really care much about austerity or poor people or math. There are staggering numbers of such people, some of whom are extremely rich, and EA might reasonably have difficulty extracting money from them without first owning a castle.
Yeah, but billionaires, by definition, have lots of money, so I think on net were probably better off continuing to be associated with them.
Unless people set out with a vendetta to destroy EA, the castle will be forgotten as a reputational cost, but will still be effective at hosting meetings. And if people do set out with a vendetta to destroy EA, it’s unlikely the castle thing is the only thing they could use this way.
Scott's kidney post and this one seem to suggest the threshold is already crossed for some.
The community by its nature has those blind spots. Its whole rallying cry is "Use data and logic to figure out what to support, instead of what's popular". That attracts people who don't care for, or aren't good at, playing games of perception. This mindset is great at saving the most lives with the least money; it's not as good for PR or boardroom politics.
Right, but they could logically evaluate perceptions using surveys. That raises the question: what other poor assumptions are they making that they're not applying rationalism to?
I do wonder if the "castle" thing (it's not a castle!) is just "people who live in Oxford forget that they're in a bubble, and people who've never been to Oxford don't realise how weird it is". If you live in Oxford, which has an *actual* castle plus a whole bunch of buildings approaching a thousand years old, or if you're at all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say "Yep, looks like a solid choice. Wait, you want a *modern* building? Near *Oxford*? Do you think we have infinite money, and infinite time for planning applications?"
The word "castle" can be a bit misleading. They (or the ones in the UK) aren't all huge drafty stone fortresses. Many, perhaps most, currently habitable and occupied ones differ little from normal houses, but maybe have a somewhat more squat and solid appearance and a few crenellated walls here and there. I don't know what Castle EA looks like though! :-)
Edit: I did a quick web search, and the castle in question is called Chateau Hostacov and is in Bohemia, which is roughly the western half of the Czech Republic. (I don't do silly little foreign accents, but technically there is an inverted tin hat over the "c" in "Hostacov").
It cost all of $3.5M, which would just about buy a one-bedroom apartment in Manhattan or London. So not a bad deal, especially considering it can be (and, going by its website, is being) used as a venue for other events such as conferences, weddings, vacations, etc.:
https://www.chateau-hostacov.cz/en