Regarding terrorism not working, that can't really be true, since if the blowback effect is so large as to negate any benefit, then it's big enough to harness by engaging in terrorism in the name of a group it would be beneficial to create blowback against -- e.g., maybe join Greenpeace and try to get caught trying to blow up a factory making golden rice.
I think the more accurate response is that terrorism is extremely psychologically difficult to carry out in a way that achieves your ends, because people usually need to be hyped up to get themselves to do it, and that process tends to link them to their true cause. Also, if you did have both the kind of epistemic care to ensure you were doing good (without asking for advice that would leave a trail) and the self-control to avoid leaving a trail, you may well have skills that could be more effectively leveraged elsewhere.
EDIT: In case it's not clear, I'm not suggesting terrorism is a useful way to achieve your goals. Rather, I'm pedantically insisting that the right analysis is that -- while in theory under the free agent paradigm we use to understand these questions you could harness that blowback -- in practice (thankfully!!) humans are subject to psychological constraints that make them unable to spend decades living as the enemy before sacrificing themselves in a horrific act apparently aimed at their friends. Or even to simply be able to pull the trigger on the kind of targets that might help rather than those they hate.
I think false flags would work better than regular terrorism, but that they're hard to pull off - Osama can't convincingly pretend to be an anti-Islam extremist. I also think most people smart enough to do them are smart enough not to become terrorists in the first place.
I agree with the first part of that response... but surely the second part is circular. Unless you presume ineffectiveness (or personal selfishness), you can't argue that smart people wouldn't do it. It may be true, but it's not an argument that should convince someone.
After 9/11 I (young at the time) thought that if Al Qaeda were smart they should get some sympathetic person who's totally straitlaced and has no record to commit a terrorist act, because they would totally avoid suspicion. Since Al Qaeda is presumably smart enough to think of this but it hasn't happened in all this time, I conclude that it's nearly impossible to find someone who fits the bill and is willing to actually do it. Call it smarts or whatever you like.
Right, that's the part I said above about it being psychologically very difficult. People aren't robots, and the kind of stuff needed to work someone up to such an act requires being immersed in a community that supports said decision.
But that gets to an issue with how we construct our decision-making models and what we mean by "you should do X". Ultimately, you do whatever you do, but we idealize the situation by considering a model where you choose between various actions, pretending psychological constraints don't exist.
In that model, it is true that in many situations some kind of terrorism or violence will be an effective tactic.
But this ends up confusing people, with frequently bad results, because they don't appreciate just how binding those constraints really are -- though more often it's national intelligence agencies, not EAs, making the mistake.
I agree. I've had a similar thought that if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Koch brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like. You probably don't even need to be a billionaire and could be a more moderate millionaire for smaller but still important elections at the state or municipal level. As far as I know, this never happens, despite being possible in every First Past the Post election. Since it doesn't, it must actually be rather difficult, psychologically or otherwise, to infiltrate and become a major member of a movement you hate.
"if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Kock brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like"
The problem with that approach would seem to be "Yeah, but then you have to try and get Libertarians to all pull together". Seemingly there is/was a split within the Libertarian Party; one faction won and, after tireless scheming and hard work, got their guy into power, and then at the big Libertarian Party conference, he... couldn't keep it together long enough to abide by "Don't take candy from strangers":
https://sg.news.yahoo.com/libertarian-candidate-reveals-took-edible-161423546.html
"He went on to complain about “all the egregious offenses against our freedom” perpetrated by the Trump administration between 2017 and 2021, at which point a member of the audience shouted out: “How high are you?”
Dr Rectenwald grinned and called back: “Not high enough! Hey, I’m living liberty today!”
When he was later asked about that answer by The Washington Post journalist Meryl Kornfield, Dr Rectenwald confirmed that wasn’t joking: he had taken an edible before going on stage.
“This was not some sort of a major political scandal, okay. I wasn’t found in bed with Stormy Daniels. I’m at a Libertarian Party convention. Somebody offered me something,” he said."
'Somebody gave me something'? And if that was poisoned or laced with something dangerous? 🤦♀️ I have to say, Trump had a point when he fired back at them:
"The presumptive Republican presidential nominee had given a speech at the convention in Washington DC on Saturday evening and was booed and jeered by the audience when he urged them to vote for him.
“Maybe you don’t want to win,” the candidate hit back. “Only do that if you want to win. If you want to lose, don’t do that. Keep getting three per cent every four years.”
I think actually personally running as a spoiler candidate is inefficient and offers poor cover, but wealthy actors who want to influence elections funding spoiler candidates does appear to be a thing that happens.
Most people probably do not want to commit violent acts against innocent people. But attacking data centers isn't that. There were plenty of normal textile workers who attacked machinery that threatened their way of life in England; they weren't a bunch of extreme kooks, and so I expect we'll see the same here. And it did almost work for those textile folks; they won some sympathy from prominent people.
Ultimately there will need to be some threat of force to stop AI, and it may come to actually using it, and a lot of people will decide that rationally they are left with nothing but direct action. Gazans etc. can appeal to people and politics, but AI will be taking all those decisions out of human control and normal political processes will be useless, so there may not be an alternative strategy in the end.
I don't think there's any way scattered attacks on machines could have prevented the industrial revolution.
If there were only three plants in all the world that could machine pistons precisely enough to work in steam engines, it maybe could have. The key elements of the industrial revolution were a lot more decentralized than the key elements of any present AI revolution.
Another answer is that even if people agree to do false flag stuff, efficient uses of it would, by definition, not be known to you unless you're a part of the conspiracy.
Also, I think that you kinda have to believe that certain acts like this would be super helpful if you take the Yudkowsky story about AGI risk seriously. Now plausibly humans can't figure out which ones they are, but if the AI can't exert control over society by engineering the right kind of events then it's much less dangerous.
So I think it's worth distinguishing the question of whether we can know that a certain instance of terrorism would have a certain outcome and whether an omniscient being would ever find a good way to put it to use. I think lots of the barriers here are epistemic in nature.
If Omega gives a friendly superintelligence the choice to commit various forms of terrorism, and no other options, yes the friendly superintelligence can probably use this capability to do some good.
But for a friendly ASI with limited resources in the real world, it would be surprising if terrorism was the most effective strategy.
An unfriendly ASI would have less reason to avoid terrorism, but still likely wouldn't do it.
I think it depends a lot on how you imagine the limitations working. For instance, how hard is it to trick people into engaging in attacks which are credited to groups you want to discredit?
Could a superintelligence find some plan, consisting solely of blowing stuff up, that was better than nothing? Sure (by choosing the specifics of where, when...). Could a superintelligence find some plan consisting solely of mailing fish to people that was better than doing nothing? Also yes.
If a superintelligence has a fixed budget (say in money, bandwidth, attention, whatever), is blowing stuff up likely to be the best use of that? Are bombs a better bargain than all the other things it could buy? I don't think so.
I think it depends on what your goals are. If you think the goal of Hamas is to help the Palestinian people, it has been remarkably anti-effective in the past year. But if you think the goal is to isolate and destroy the Israeli state, it has done a lot more toward that in the past year than anyone has accomplished in decades.
Well, the default scenario seems to be that Israel slowly strangles Palestine and Palestinians. If Hamas's actions cause Israel to collapse instead, after which Palestinians could flourish (as much as third-worlders can, anyway), then there's an argument for it being effective.
I don't think *instead* or *flourish* is obviously what they particularly care about. If they do, then it's less plausible that they've been effective, and if they don't, then it's more plausible.
Israel used to have multiple states warring against it at the same time (1948, 1967, 1973). Now it's just at war with Hamas, and further away from destruction than it was warring with actual states.
I was either not yet born or not yet paying attention during the period from 1973 to 2000. The main thing I have seen is that Israel is currently more isolated and unsupported than it has been at any other time since 2000 (though it's obviously not completely isolated and unsupported). In that sense, Oct. 7 may have been successful, compared to various other activities they and others tried over the past two decades.
Everything Hamas has tried has failed, so perhaps by that standard it doesn't seem that much worse. The US is still actually supporting Israel in its fight against Hamas, but it's not like Hamas was ever willing to do anything that could get the US to favor them instead.
I do not claim that Hamas has great odds at success. Luckily, their insane dream of driving the Jews back into the sea seems unlikely to come true.
But in mid-2023, the Israel-Palestine conflict had mostly disappeared from the news. Israel was normalizing its relationship with its Arab neighbors.
Today, the conflict is in the news all the time. Iran shot a few token rockets towards Israel. Several European countries have decided that they will recognize Palestine as a state. On US campuses, where the future elite of Israel's most important ally is raised, wokes are happily chanting Hamas slogans.
Hamas knows the Palestinians will never defeat Israel on their own. They do not have a military objective; they have a social objective. The product Hamas is producing is dead Palestinian kids, killed by the IDF.
From an instrumental rationality point of view, I can't really fault their strategy. Oct-7 was meant to goad the IDF into killing Gazans. If one's utility function only had a single term, which is "the destruction of Israel", this seems like a pretty optimal strategy, which raises the odds of Israel being destroyed this century (except for AI) by a factor of perhaps 1.5, say from 2% to 3% or so.
Of course anyone with a less monstrous utility function -- which contains terms for dead Israelis or dead Gazans, perhaps -- would conclude that Oct-7 is tremendously net negative, but there is no accounting for utility functions.
My view is that the governments of these neighboring Arab countries are just keeping their heads down, but haven't changed their minds at all about the desirability of siding with Israel against Iran. At the end of this current war, it will fade into the past just like everything else in the past of the Middle East that one might think would prevent such an alliance.
I think the presumption here was for EA goals. Trivially, some goals are achieved by terrorism -- for instance increasing the amount of terrorism.
Regarding Hamas, I don't think that's correct. Isolate, certainly, but states, especially religious ones, often pull together when they feel isolated. Indeed, I think they succeeded in heading off the possibility that Israel simply devolves into a multi-ethnic, multi-religion democratic country like the US -- indeed, maybe even one where Palestinians are the majority. It wasn't the most likely outcome by far, but it was much more possible before these events.
The reaction (and flagrant disregard of its own foundational document regarding jurisdiction) of the ICC and the ICJ has convinced Israel and many Jews in the diaspora that they can never trust in the international system to give them a fair shake and that there is a truly pressing need to have a Jewish homeland.
It's hard to engage in terrorism in favor of big AI companies though because they're already approximately getting everything they want. The situation is inherently asymmetric. "Terrorism to create blowback" would have to look like "join OpenAI and advocate for really creepy project ideas." Stuff like the ScarJo voice drama.
I guess you could bomb Rationalist hubs and AI Risk orgs in the name of Effective Accelerationism. (Apologies to the e/acc people for even suggesting this, they're just the only ones to really put a name on it.)
It depends on what you're trying to accomplish. Did the US do itself serious damage by overreacting to 9/11? Maybe, but how much is hard to estimate, and my one sure prediction is that if the US goes down, people will be arguing about the causes.
Right, I think it's frequently very difficult to predict the effects of actions in the long term. And that's absolutely a practical concern here. But, if we are being consistent, we need to apply that to all such interventions and be equally worried about whether the real effect of advocating for X will actually bring about X.
Indeed, I do think that epistemic limitations are a strong argument that political interventions tend to have relatively low expected benefit.
Scott speculates that 99% of terrorists get caught at the “your collaborator is an undercover fed” stage. If that’s accurate to within an order of magnitude, the headlines are much more likely to read “oil executive caught plotting to commit a terrorist attack and blame it on Greenpeace” than “Greenpeace commits terrorist attack.” So the blowback effect would likely help Greenpeace rather than hurt it.
The Reichstag fire worked well for Hitler because he was the head of government at that point and controlled the investigation into the fire.
Terrorism is also negative-sum. It's totally possible for both the terrorists' cause and the cause they're opposing to be worse off for it, e.g. maybe the backlash against Greenpeace damages the environmental movement, making climate change worse, and also the damage to the golden rice factory makes vitamin A deficiency in poor countries worse. Or consider how both Al Qaeda and the US would be better off (at least by many values, I'm not 100% sure what Al Qaeda's value function is here) if 9/11 hadn't happened.
The most successful terrorist attacks I can think of in history were ones by an extremist subgroup trying to provoke a war against the war-reluctant main group they were part of, such as Franz Ferdinand's assassination by the Black Hand or the series of acts by the Black Dragon Society and the other expansionist ultranationalists that pushed Japan into Asian expansion. All of these relied on harsh reactions from the targets as a source of additional motivation and a whip to goad their unwilling allies into the conflict.
Causing war, chaos and mayhem seems to be something terrorism is good at, yes. Another example would be the assassination of Rabin, which likely did not help the Oslo accords.
The assassination of Rabin did help the Oslo Accords - it put in power Peres who was more left-wing than Rabin, and discredited the right wing as political murderers.
In environmentalism, eco-terrorists are stunningly ineffective. I work in fossil fuels. Someone gassed the office building, causing a building-wide evacuation, getting themselves arrested - but the impact on the company's actions and bottom line was.... Negligible.
I mean, I got to enjoy a nice walk outside rather than slave away on my computer. But a day later I was back at it.
The fossil fuels problem is a demand-side problem, not a supply side problem. In 2024, the evil fossil fuel companies aren't actually trying all that hard to sell oil because there will always be buyers as long as people need oil. In fact, they're aggressively trying to sell hydrogen, one of the no carbon things that I think is kind of dumb logistically and has zero customers.
If you wanted to move the dial, you'd protest importing cars that run on petrol, and counter-protest the other environmentalists stopping the construction of new solar farms. Terrorism aimed at supply side does nothing.
Yes, as I said it is very hard to implement effective terrorism because the kind of actions that would make for effective terrorism aren't the kind of things that inspire people to terrorism. In practice, terrorism tends to attack symbolic targets and cause backlash because people need to be emotionally hyped up to commit the act.
I'm just being pedantic that the correct account isn't that it's not in theory workable but that you can't really behave like the idealized actor who could live a full life pretending to be their enemy and then commit horrific acts in their name.
For instance, the ideal kind of terrorism that would achieve environmental ends is probably working your way up the hierarchy in an oil company, making sure you had heavy financial interests in that company's stock, and then assassinating Greta Thunberg in a way that tries -- but deliberately fails -- to look like an accident.
How on earth would a failed assassination false flag thing help??? It wouldn't cause any sanctions on a company, just on that specific executive/board which you can just replace with new people entirely. Also, some Boeing whistleblowers have died under suspicious circumstances and guess what, we're still flying on planes, aren't we?
For this specific industry, the problem is that you can't try to boycott fossil fuels at the current state. It's like trying to boycott Amazon Web Services. Too much vital infrastructure relies on it right now.
Any attempts to "boycott" fossil fuels tend to be geopolitics, e.g China refusing to buy Australian coal for a bit back in 2020 - 2021, and all that did was cause blackouts over there. The attempted sanctions on Russian oil and gas went poorly because the timeframes are way too short.
But I suppose the only way you could force massive investment into switching over is.... If you're using approximately all of it in a war and you can't get any for geopolitical reasons? Which would be a pretty crappy outcome that guarantees civilian suffering and unnecessary cost during the transition (energy rationing generally does not have good outcomes).
"Works" in the sense of "changing people's attitudes towards animals" doesn't happen. In that sense "terrorism doesn't work" is correct.
"Works" in the sense of "makes people think the authorities are ineffective" can work, but mostly through random violence.
Which is why, say, Lenin succeeded, while PETA hasn't.
If your goal is "burn it all down", then attempting to burn it all down does work towards your goal. It is much easier to push Humpty-Dumpty off a wall than to put him back together again afterwards.
Any conception of the Good which asks us to reach some difficult standard will be attacked by critics who find the standard hard to reach and respond out of jealousy or anger.
In my opinion, EA belongs in the pantheon of Great faiths of the world, which find themselves continually subjected to scorn and derision by persons who claim to be motivated by truth but in reality are animated more by petty jealousy than by that magnanimous spirit which animates all men who aspire to greatness.
Welcome to the club.
Scott, your commitment to truth and generosity have been in part responsible for my own dedication to God, which I would see simply as a name for that which is ultimately true and real. Please see these people for what they are: animated by the spirit of Cain.
"...and this is why we need to end Shrimp Suffering and stop Skynet!!!"
EA is a Religion - you said it yourself. When people reject it, they typically don't reject the concept of "The Good" but *your conception* of "The Good". To claim that anyone who rejects it is "animated by the spirit of Cain and petty jealousy" doesn't help swing them to your side.
The statement was only intended for Scott. I think anyone who believes in the concept of The Good would be wiser to focus on their own capacity for foolishness rather than that of other people, which is both easy to do and useless unless it's to help better understand the human capacity for self-deception.
I'm not going to say the argument that if EA was good, it would improve the charitable giving in Boston and SF is the worst argument I've ever seen, because there's so much competition. It's up there, though.
Is there a term for straw man argument fantasia? This isn't just perfectionism, it's a gish gallop of invented impossible standards.
I actually think there’s something good about the argument. It’s probably looking at too short a timescale, but Effective Altruists really do hope that the movement will eventually have some effect on its goals above pre-existing trend, and noting that this hasn’t happened (either at the level of objective malaria case rates falling faster for more time, or at the level of more people giving more money) shows that the movement isn’t (yet) having the impact it wants.
Based on their own metrics, they couldn't just look at donations but would instead have to look at the effects of those donations. A billion dollars that doesn't change conditions in reality is no better than a million that makes no change.
It sounds like Stone's criticism might work either way - either there are no measurable downstream effects, in which case EAs are not very effective, or their numbers are too small to be effective, in which case they are not very effective.
In other words, at that point you've got a small number of people making an individually meaningful (maybe?) change that amounts to a rounding error on the problems they are trying to solve.
GiveWell, as I understand it, disburses money to multiple charities based on effectiveness principles, so it's kind of an EA clearinghouse. Is it true that there are no measurable downstream effects of the money they receive, which I believe has grown recently? In this post Scott suggests that they've saved thousands of lives: https://www.astralcodexten.com/p/in-continued-defense-of-effective
I'm aware of Scott's previous post, and don't have a particular argument against his numbers. On the other hand, 61 million people died last year. No number Scott shared comes close to a fraction of that.
Stone's argument that you can't see the effects of EA even looking at the two places where they are most influential says that EA is too small to be meaningful, even if everything worked as they hope. 200,000 people saved is nothing.
I'm not saying EAs do nothing useful, or that giving to charity is pointless. I'm saying that EAs do too little to justify their own propaganda and the way they speak about it, which seems to be more Stone's point than anything else.
I like that EA exists and want to see them succeed. I think they've done some positive good, and promoting GiveWell is great. But it's all small time stuff. They can't replace the world's charities, and my speculation is that if they tried they would have the same issues with overly large bureaucracies and disjointed priorities that all large organizations have. That's a big part of the criticism about shrimp charities and AI - even being a small and relatively nimble group, they are already veering into what most people would consider pet projects of those giving or working in the organizations, rather than objectively important life-saving approaches.
Well, I definitely agree that the Charity/Death matchup was another blowout last year. We'll get 'em this year!
More seriously, I'm inclined to reject in the strongest possible terms the idea that 200,000 people saved is nothing, and while I didn't read Stone's piece I have my doubts that he used that particular argument. But maybe I could be persuaded in that direction if I knew what alternative Stone was pointing to that's superior.
It feels like "Oh, they do mathy stuff? Well, I can do mathy stuff too, look!" which then fails because they don't have enough practice doing mathy stuff, or enough practice noticing the problems in their own arguments, so they end up making a very stupid argument.
I think Lyman is working on conflict theory and you're working on mistake theory. One simple question for Lyman: what would change his mind about EAs? I don't think anything would.
His social circle changing their minds about it. He wouldn't generate that answer himself, but would probably agree that it's true in principle, except impossible in reality.
Munecat (you know, that YouTube chick) has set her sights on EA. That can't be good for you, can it? Not the usual kind of opposition that you can afford to be excited to face. To what degree are you scared?
I’m terminally online and I had no idea who this was until I looked her up.
From a quick glance, her channel seems targeted towards heavily left-leaning 20-somethings, the same demographic as Hasan. I can tell you from experience that most of that demographic already hates EA because of its association with Bay Area techbros.
If I was Scott, I’d be about as worried about her coming out against EA as I would be if Andrew Tate did; the overlap between people who are interested in EA and watch either of them is ~0.
Also, I see that you're a different person, but you just admitted that you don't care, although you claim to want to make the world better. That makes you either evil or extremely ineffective.
You attributed to Nalthis the belief that EA is a cult, based on a tendentious reading of his words. (If two groups are disjoint, this doesn't tell you *which* group is the cult, therefore it's not reasonable to claim his words entail a belief that EA is a cult. But I think you know that.)
Now TGGP just "admitted" EA's "don't care", according to you. He "admitted" that EA's don't care about, in his words, "niche internet microcelebrities", which is true of almost everyone, and is a totally different kind of "not caring" than would hinder your effectiveness at large scale goals. I'm sure you know that too, but still you generalize from one to the other.
Accuse me of trolling and I might admit it. I'm not even gonna look up what you're accusing me of
A LOT of people watch Andrew Tate. You can call his followers a lot of things, but the usual definition of "cult" does not apply to such a large group. Who endorses EA these days? Post-SBF, even Elon and his followers have moved on from you. So yeah, picking one group and calling it the "cult" is appropriate
Is it a totally different kind of "not caring"? I don't think so. In my opinion (you're welcome to try to disagree), the only kind of "large scale goal" that is *not* unambiguously bad is spreading a message or your own version of the "good news" or your own version of "the truth". I think any individual who has any other "large scale goals" is either delusional or just plain evil. Hitler had large scale goals, for example
So if you don't care about convincing munecat or her followers, you're evil in one of 2 ways: you either think she (and/or her welfare) is worth sacrificing, or you're trying to be "altruistic" toward her without her consent (like force-feeding her). So yeah, both your meanings of not caring are pretty much synonymous to me
It is not especially good. I'm generally a pretty big fan of PhilosophyTube but felt like that video really missed the mark. PT often takes a sort of soft marxist / anticapitalist conception of the good as a given, and instead of concluding that EA is people trying to do good by a different metric than hers, tries really hard to suggest that EA is more about status and prestige and signaling and personal power, and is intellectually dishonest because it isn't supporting the things she thinks it should.
I don't think EA is about status, prestige, and signaling, but even if it is, I don't even care as long as they actually drive money to important charities, and actually help the world.
How naive! Let's list some people who have endeavored to help the world: Sam Bankman-Fried, Adolf Hitler, Joseph Stalin, Mao Zedong, every single leader of the Church of Scientology, Elon Musk, the list goes on... Now that I've said something that I believe, let me ask you something: list some charities that you think are important
This is why you don't just look at the expressed intentions, nor at the actual intentions, but at what they are doing and why.
If you think they are making important mistakes in their analysis of what to do to help, you will have to debate those; you can't just say "it is just about status and prestige" and expect to make an important argument against it, for two reasons:
- Just stating things like that, without argument, will not convince anybody not already convinced.
- The reason something is done doesn't tell us whether it is a good thing to do or not (a lot of people honestly trying to help did a lot of wrong, and the opposite is also true).
It is quite easy to explain how each person on your list did/does make the world worse (even if some people are still delusional about Musk); just do the same thing with EA (but first inform yourself about what they are really doing -- I could be wrong, but I think it is possible you aren't really well-informed on it).
…what is the kind of opposition you *can* be excited to face?
Look, I’m glad that I’ve never had a YouTuber try to run me out of town. Or even go after my job, or an adjacent philosophical movement. But I don’t feel like “excited” or “scared” are really the right terms here.
> …what is the kind of opposition you *can* be excited to face?
Intellectually honest, creative, constructive, so on. Like, if you believe in debate / collaborative truthseeking / etc., opposition / disagreement is an engine to produce more knowledge.
I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kind of norms about when not to share their moral theory that we've developed for religion, vegetarianism etc...
I mean, yes EA is full of a lot of people with half-assed philosophical views. Heck, I'm sure many would put me into that bucket and my PhD involved a substantial philosophy component. But that's much more true of the donors to regular charities, especially religious ones. The number of people who actually know what's in their own religious texts is shockingly low.
But thanks to centuries of conflict we have strong norms about not sharing those views in certain ways.
I think this is right. Across many arguments on the topic, something I've seen many EA critics say is, "to YOU donating to the local art museum or your alma mater may be less 'effective' than donating bed nets, but that's just your judgment. There's no objectively true measure of effectiveness." To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds.
If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me but it seems like it's based on some pretty durable norms.
Also, this is another case where the most important part may be the part everyone agrees on but lots of people don't do, namely donate to charity at all. It's not fun to argue about whether one should donate if one can, since almost everyone agrees they should. It's more fun to argue about what donations are "effective" or whether that's even measurable.
"To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds."
Compare that with all the objections about "imposing your religion" when it comes to the public square and topics such as abortion. Yes, if I could convert everyone to accepting the Catholic theology around sex and reproduction, then we could all agree on the moral value of embryos as human persons. But that ain't gonna happen. Ditto with "if everyone just accepts the EA measure of effectiveness".
Well, if the standard is "everyone," I agree that it ain't gonna happen. But is that an objection to trying to convince people on the margin? Because that does sometimes work!
> If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me
I feel that's perfectly consistent - in the former case you are essentially appealing "hey, according to your moral norms (as far as you claim), you should donate", and then the person reflects on that and agrees (or disagrees); but in the latter case you'd be saying "according to your moral norms you think that you should donate to X, but according to my moral norms Y is better", which is.... different. It is generally accepted to point out hypocrisy and align words with deeds, but it's generally not accepted to demand that someone change their reasonable-but-different moral priorities unless they are violating some taboos.
I think at this point it depends on how one does it. However, I don't think this necessarily entails pressuring someone to change their moral norms. I think there are very few people whose moral norms don't take saving lives as one of the highest causes one can contribute to. Suggesting that they can achieve that goal better is often taken as helpful rather than preachy; at any rate that's how I took it.
I think the issue is more that it's not really practical to do this well.
The problem is that we can either exercise approval or disapproval, and in an ideal situation we would approve of all charitable donations but just approve less of the less effective charity. Unfortunately, in practice, people don't really know how much you would have approved had they made the other donation, so often the only way to convey the message sounds like passive-aggressive criticism: "great, but you could have..."
The "ideology and movement" distinction and trying to be a big tent probably contributes to this issue IMO. EA has a distinct culture that is incredibly elitist and quite off-putting to "normies," but tries to maintain this whole thing about just meaning "doing good better".
So is EA simply "doing good better" by any means at all, or is it trying to claim that blowing hundreds of millions on criminal justice reform and X amount on shrimp suffering are among the most effective possible causes, and also maybe you should go vegan and donate a kidney? Scott showed the most self-awareness on this in his review of WWOTF (https://www.astralcodexten.com/p/book-review-what-we-owe-the-future), ctrl+f "seagulls", and has not returned to such clarity in any of his EA posts since. Clearly, EA isn't *just* an idea; there's a whole lot of cultural assumptions smuggled in.
It's not the elitism that bothers people. It's the lack of social skills that results in the impression of being looked down on.
People fucking love elites, big celebrities are always sitting on boards for charities or whatever and people love it. But they are careful not to be seen as critical of people without that status.
I actually think the problem is not sufficiently distinguishing the movement and the multiple different related ideas in many of these later posts.
I agree the idea isn't merely "it's important to do good effectively"; I think that misses some key elements. I think the minimal EA thesis can be summarized as something like:
within the range of charitable interventions widely seen as good [1], the desirability of those interventions can be accurately compared by summing over the individual benefits, and when you actually do the math it often reveals huge benefits that would otherwise be overlooked.
That view is more controversial than one might think, but the real controversy comes from the other part of the standard EA view: the belief that we should therefore allocate social credit according to the efficacy of someone's charitable giving.
Unfortunately, EA types tend not to be great with social skills, so instead of actually conveying more approval for more effective giving, what they actually often manage to do is convey disapproval of ineffective giving, which upsets people. Not to mention many people just dislike the aesthetics of the movement, the same way many greens really dislike the aesthetics of nuclear (and vice versa), prior to any discussion of the policy.
Anyway, long story short, it's better to disentangle all these different aspects.
--
1: i.e., consensual interventions which help currently living participants without direct/salient harm to anyone or other weird defeaters.
That's true, but at some point vegetarians do have to advocate for their ideas and I'm sure they find (as I used to) that just that by itself can be perceived as preachy by people who don't want to be confronted by the ideas no matter how they're packaged, and I think some of that is going on with EA too.
Sure, advocacy is admirable and useful. Vegans get in trouble when they badly misread the room and try a stunt like pledging to not ever to sit at the same table as non vegans. They tried something like that not long ago. It didn’t play well, as anyone outside their orbit could have easily predicted. You have to meet people where they are, not where your circle of close friends happen to be.
That's certainly true, but let's be honest, most EAs are in it for the discussion/feeling of being more consistent, not the altruism.
And it's wonderful we can convert the former into helping people, but I don't think it's possible for the social movement EA to ever act like vegetarians, because the motivation isn't deep emotional concern with suffering (for some it is, and the rest of us do care) but feeling good about ourselves for being consistent. And this does create extra irritation, because people feel they are being looked down on by individuals who are doing the same thing they are -- the few EA saints don't bother people.
Hence the need for a separate term like "efficient giving" or whatever to solicit donations from people who find the social movement unappealing.
--
And that's just the way of altruistic groups. I care about global warming but I find the aesthetics of most environmental groups repellent. Best you can often do is create a range of aesthetics that work for the same objectives.
The quiet altruist is admirable in many ways, but there do need to be people that evangelize in some fashion. I don't have leadership skills and am prone to arrogance, so perhaps quiet altruism makes most sense for me to aspire to. But there are people in EA with the right qualities for modest evangelizing.
Ohh absolutely, but most EAs don't have them and lots of us are into EA because we like the aesthetic and debating this kind of stuff and not everyone is going to realize when they need to turn that off.
True. I've been thinking about a related issue regarding comparative advantage. For example, going vegan probably isn't worth it for people with high opportunity cost of time and energy, but may be for those with low OC. But that sort of reasoning is conspicuously vulnerable to abuse (and resentment) because it's basically saying the powerful people get to do fun stuff and the masses should eat bad-tasting food.
"Wokeness" / "social justice" gained a lot of ground through preachiness but also produced a backlash. I'd guess the milder forms of their preachiness were quite effective on net, but the extreme forms were very counterproductive.
Good point, though sometimes very mild preachiness/disapproval can be helpful ("ohh, you don't use cruelty free eggs?") but it's hard.
The bigger issues EA faces is that even when it's not trying to be preachy it gets perceived as such. A vegetarian can just explain their POV when asked and won't be perceived as judgemental if they restrict themselves to I statements.
Now imagine the same convo with an EA. Something like the third question will be why they think their charities are more effective, and they have to give a statement that pretty explicitly compares what they do with what others are doing. Also, its internal deliberations get perceived as such.
I think this is definitely part of the puzzle. I think another part of the puzzle is that the EA culture is quite weird in a way that seems to drive a lot people to distraction. As Scott notes, EA is a piece of social technology among other things. It has an extremely distinct vibe. Some people are all-in on the vibe, some (like me) have what is mostly an affectionate tolerance for it, and some people seem to really, really loathe it.
Unfortunately, I think the notion that EA should avoid tainting itself by broadening its appeal is wrong on the merits, and that to be maximally effective EA absolutely should moderate and mainstream itself. The resistance to this idea feels mostly like a cope by people who couldn't mainstream themselves if they wanted to -- it's hard to choose not to be weird if you are in fact weird. Every time I read the EA forums (which I basically stopped doing because they are exhausting), I find myself wondering if people are just using the phrase "epistemic status" at this point as a sort of normie-repellent.
If this sounds like an attack on EA, it's not meant to be. I find the vituperation in arguments like Stone's to be odd and unfortunate, but also worth understanding.
*Can* EA go mainstream without being philosophically and compositionally compromised? Organizations that alter their philosophy to appeal to more people tend to end up endorsing the same things as all other mainstream organizations. And gaining members faster than the culture can propagate is going to lead to problems of takeover.
I think so, absolutely, yes. Scott consistently presents a mainstreamed version of it here: "donate 10% of your income" and "give to charities that have a proven impact on people's lives" are both somewhat radical and also reasonably unweird concepts at the core of EA.
Note also that I don't think EA has to jettison all of the weird bits, such as the esoteric cause evaluation. I just think they need to be willing to tailor their message to audience and -- this is probably the important bit -- tolerate the tailoring.
EA cannot go mainstream, but not for the reasons you listed.
It's already difficult to disburse the amount of money one Dustin Moskovitz has in ways that are consistent with how to analyze the evidence. I think, with the existing charities, there can probably be around 10x or at most 100x the current amount of donations before we are wildly out of distribution (and it's only that high because I'm treating GiveDirectly as a money hole).
It would certainly be nice if everyone donated to global poverty charities, but at that scale, the type of intervention you'd be thinking about has to start including things like "funding fundamental research" or "scale up developmental economics as a field".
This is something I've been worried about for years, that there aren't big enough money holes for more charity!
Maybe this is where the watering down part of mainstreaming EA comes in. Sure, we might tap out early on Dustin Moskovitz-grade charities. But here are two somewhat different questions: would it be a net benefit to the world if annual charitable giving was 10% higher than it is today? And is current giving inefficient enough that we could raise the good done by at least 10% if resources were better allocated? I pulled the 10% figures out of nowhere, but the point is just that if you believe the world would be a better place with more and better charity, then that is an argument for mainstreaming EA. Diehard EAists might call this a very weak form of EA, and they'd be right. But scale matters.
I think at scale, considerations I'd consider facile now, like "is charity really the most efficient thing", would become real live players. For example, I'm not sure throwing 100x the amount of money into, say, SF's homelessness problem would help; it might literally be better to burn the money or spend it on video games! If you believe Bryan Caplan's Myth of the Rational Voter thesis that self-interested voting would be better than what we have now, because at least one person benefits for sure in that circumstance, as opposed to programs now that benefit no one, you can imagine other similar signaling dynamics starting to dominate. Not even going to start on the sheer amount of adversarial selection that would start happening, where honest charities would start losing out to outright fraudulent ones and so on.
I don't think I know when this would start happening, but I'd lower bound it at at least 3rd world NGO scale. I'd be surprised if the upper bound were above 10% of 1st world disposable income.
Neither of these are radical! It's called a tithe! Giving 10% of your income to your Church (i.e. THE charity that had a proven impact on people's lives) has been the standard for ~1500 years!
I think they lose the vast majority of people at the "altruism" part (esp. with the way it's usually operationalized), and criticisms around "effectiveness" are post hoc.
Regarding terrorism not working, that can't really be true since if the blowback effect is so large as to negate any benefit than it's big enough to harness by engaging in terrorism in the name of a group which is beneficial to create blowback against -- eg maybe join Greenpeace and try to get caught trying to blow up a factory making golden rice.
I think the more accurate response is that terrorism is extremely psychologically difficult to effectuate in a way that achieves your ends because people usually need to be hyped up to get themselves to do it and that process tends to link them to their true cause. Also, if you did have both the kind of epistemic care to ensure you were doing good (w/o asking for advice that would leave a trail) and the self-control to avoid leaving a trail you may have skills that could be more effectively leveraged elsewhere.
EDIT: In case it's not clear, I'm not suggesting terrorism is a useful way to achieve your goals. Rather, I'm pedantically insisting that the right analysis is that -- while in theory under the free agent paradigm we use to understand these questions you could harness that blowback -- in practice (thankfully!!) humans are subject to psychological constraints that make them unable to spend decades living as the enemy before sacrificing themselves in a horrific act apparently aimed at their friends. Or even to simply be able to pull the trigger on the kind of targets that might help rather than those they hate.
I think false flags would work better than regular terrorism, but that they're hard to pull off - Osama can't convincingly pretend to be an anti-Islam extremist. I also think most people smart enough to do them are smart enough not to become terrorists in the first place.
I agree with the first part of that response ...but surely the second part is circular. Unless you presume ineffectiveness (or personal selfishness) you can't argue that smart people wouldn't do it. May be true but it's not an argument that should convince someone.
After 9/11 I (young at the time) thought that if Al Qaeda were smart they should get some sympathetic person who's totally straitlaced and has no record to commit a terrorist act, because they would totally avoid suspicion. Since Al Qaeda is presumably smart enough to think of this but it hasn't happened in all this time, I conclude that it's nearly impossible to find someone who fits the bill and is willing to actually do it. Call it smarts or whatever you like.
Right, that's the part I said above about it being psychologically very difficult. People aren't robots and the kind of stuff needed to work someone up to such an act requires being immersed in a community who supports said deciscion.
But that gets to an issue with how we construct our deciscion making models and what we mean by "you should do X". Ultimately, you do whatever you do, but we idealize the situation by considering a model where you choose between various actions where we pretend psychological constraints don't exist.
In that model, it is true that in many situations some kind of terrorism our violence will be an effective tactic.
But this ends up confusing people with frequently bad results because they don't appreciate just how binding those constraints really are -- though more often it's national intelligence agencies not EAs making the mistake
I agree. I've had the similar thought that if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Kock brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like. You probably don't even need to be a billionaire and could be a more moderate millionaire for smaller but still important elections on the state or municipal level. As far as I know, this never happens, despite being possible in every First Past the Post election. Since it doesn't, it must actually be rather difficult to infiltrate, psychologically or otherwise, to infiltrate and become a major member of a movement you hate.
"if a politically minded billionaire like George Soros really wanted the Democrats to win, or the Kock brothers wanted the Republicans to win, what they should do is infiltrate the libertarian/green party and run as a 3rd party candidate, and being a billionaire have enough influence to cause a proper spoiler effect against the party they don't like"
The problem with that approach would seem to be "Yeah, but then you have to try and get Libertarians to all pull together". Seemingly there is/was a split within the Libertarian Party, one faction won and after tireless scheming and hard work got their guy into power, and then at the big Libertarian Party conference, he... couldn't keep it together long enough not to abide by "Don't take candy from strangers":
https://sg.news.yahoo.com/libertarian-candidate-reveals-took-edible-161423546.html
"He went on to complain about “all the egregious offenses against our freedom” perpetrated by the Trump administration between 2017 and 2021, at which point a member of the audience shouted out: “How high are you?”
Dr Rectenwald grinned and called back: “Not high enough! Hey, I’m living liberty today!”
When he was later asked about that answer by The Washington Post journalist Meryl Kornfield, Dr Rectenwald confirmed that wasn’t joking: he had taken an edible before going on stage.
“This was not some sort of a major political scandal, okay. I wasn’t found in bed with Stormy Daniels. I’m at a Libertarian Party convention. Somebody offered me something,” he said."
'Somebody gave me something'? And if that was poisoned or laced with something dangerous? 🤦♀️ I have to say, Trump had a point when he fired back at them:
"The presumptive Republican presidential nominee had given a speech at the convention in Washington DC on Saturday evening and was booed and jeered by the audience when he urged them to vote for him.
“Maybe you don’t want to win,” the candidate hit back. “Only do that if you want to win. If you want to lose, don’t do that. Keep getting three per cent every four years.”
I think actually personally running as a spoiler candidate is inefficient and offers poor cover, but wealthy actors who want to influence elections funding spoiler candidates does appear to be a thing that happens.
Most people probably do not want to commit violent acts against innocent people. But attacking data centers isnt that. There were plenty of normal textile workers who attacked machinery that threatened their way of life in England, they weren’t a bunch of extreme kooks, and so I expect we’ll see the same here. And it did almost work for those textile folks, they won some sympathy from prominent people.
Ultimately there will need to be some threat of force to stop AI, and it may come to actually using it, and a lot of people will decide that rationally they are left with nothing but direct action. Gazans etc can appeal to people and politics, but AI will be taking all those decisions out of human control and normal political processes will be useless, so there may not be an alternative strategy in the end.
I don't think there's any way scattered attacks on machines could have prevented the industrial revolution.
If there were only three plants in all the world that could machine pistons precisely enough to work in steam engines, it maybe could have The key elements of the industrial revolution were a lot more decentralized than the key elements of any present AI revolution.
Another answer is that even if people agree to do false flag stuff, efficient uses of it would, by definition, be not known to you, unless you're a part of the conspiracy.
Also, I think that you kinda have to believe that certain acts like this would be super helpful if you take the Yudkowsky story about AGI risk seriously. Now plausibly humans can't figure out which ones they are, but if the AI can't exert control over society by engineering the right kind of events then it's much less dangerous.
So I think it's worth distinguishing the question of whether we can know that a certain instance of terrorism would have a certain outcome and whether an omniscient being would ever find a good way to put it to use. I think lots of the barriers here are epistemic in nature.
If Omega gives a friendly superintelligence the choice to commit various forms of terrorism, and no other options, yes the friendly superintelligence can probably use this capability to do some good.
But for a friendly ASI with limited resources in the real world, it would be surprising if terrorism was the most effective strategy.
An unfriendly ASI would have less reason to avoid terrorism, but still likely wouldn't do it.
I think it depends alot on how you imagine the limitations working. For instance, how hard is it to trick people into engaging in attacks which are credited to groups you want to discredit?
Could a superintelligence find some plan, consisting solely of blowing stuff up, that was better than nothing. Sure. (by choosing the specifics of where, when ...) Could a super intelligence find some plan consisting solely of mailing fish to people that was better than doing nothing. Also yes.
If a superintelligence has a fixed budget (say in money, bandwith, attention, whatever), is blowing stuff up likely to be the best use of that? Are bombs a better bargin than all the other things it could buy? I don't think so.
I think it depends on what your goals are. If you think the goal of Hamas is to help the Palestinian people, it has been remarkably anti-effective in the past year. But if you think the goal is to isolate and destroy the Israeli state, it has done a lot more toward that in the past year than anyone has accomplished in decades.
Well, the default scenario seems to be that Israel slowly strangles Palestine and Palestinians. If Hamas's actions cause Israel to collapse instead, after which Palestinians could flourish (as much as third-worlders can, anyway), then there's an argument for it being effective.
I don't think *instead* or *flourish* is obviously what they particularly care about. If they do, then it's less plausible that they've been effective, and if they don't, then it's more plausible.
Israel used to have multiple states warring against it at the same time (1948, 1967, 1973). Now it's just at war with Hamas, and further away from destruction than it was warring with actual states.
I was either not yet born or not yet paying attention during the period from 1973 to 2000. The main thing I have seen is that Israel is currently more isolated and unsupported than it has been at any other time since 2000 (though it's obviously not completely isolated and unsupported). In that sense, Oct. 7 may have been successful, compared to various other activities they and others tried over the past two decades.
Everything Hamas has tried has failed, so perhaps by that standard it doesn't seem that much worse. The US is still actually supporting Israel in its fight against Hamas, but it's not like Hamas was ever willing to do anything that could get the US to favor them instead.
I do not claim that Hamas has great odds at success. Luckily, their insane dream of driving the Jews back into the sea seems unlikely to come true.
But in mid-2023, the Israel-Palestine conflict had mostly disappeared from the news. Israel was normalizing its relationship with its Arab neighbors.
Today, the conflict is in the news all the time. Iran shot a few token rockets towards Israel. Several European countries have decided that they will recognize Palestine as a state. On US campuses, where the future elite of Israel's most important ally is raised, wokes are happily chanting Hamas slogans.
Hamas knows the Palestinians will never defeat Israel on their own. They do not have a military objective, they have a social objective. The product Hamas is producing is dead Palestinian kids, killed by the IDF.
From an instrumental rationality point of view, I can't really fault their strategy. Oct-7 was meant to goad the IDF into killing Gazans. If one's utility function only had a single term, which is "the destruction of Israel", this seems like a pretty optimal strategy, which raises the odds of Israel being destroyed this century (setting AI aside) by a factor of perhaps 1.5, say from 2% to 3% or so.
Of course anyone with a less monstrous utility function -- which contains terms for dead Israelis or dead Gazans, perhaps -- would conclude that Oct-7 is tremendously net negative, but there is no accounting for utility functions.
My view is that the governments of these neighboring Arab countries are just keeping their heads down, but haven't changed their minds at all about the desirability of siding with Israel against Iran. At the end of this current war, it will fade into the past just like everything else in the past of the Middle East that one might think would prevent such an alliance.
I think the presumption here was for EA goals. Trivially, some goals are achieved by terrorism -- for instance increasing the amount of terrorism.
Regarding Hamas, I don't think that's correct. Isolate certainly, but states, especially religious ones, often pull together when they feel isolated. Indeed, I think they succeeded in heading off the possibility where Israel simply devolves into a multi-ethnic, multi-religion democratic country like the US -- indeed, maybe even one where Palestinians are the majority. It wasn't the most likely outcome by far but it was much more possible before these events.
The reaction (and flagrant disregard of its own foundational document regarding jurisdiction) of the ICC and the ICJ has convinced Israel and many Jews in the diaspora that they can never trust in the international system to give them a fair shake and that there is a truly pressing need to have a Jewish homeland.
How do you know he wasn't a closeted anti-Islam extremist all along?
It's hard to engage in terrorism in favor of big AI companies though because they're already approximately getting everything they want. The situation is inherently asymmetric. "Terrorism to create blowback" would have to look like "join OpenAI and advocate for really creepy project ideas." Stuff like the ScarJo voice drama.
Good thing I support them then.
I guess you could bomb Rationalist hubs and AI Risk orgs in the name of Effective Accelerationism. (Apologies to the e/acc people for even suggesting this, they're just the only ones to really put a name on it.)
It depends on what you're trying to accomplish. Did the US do itself serious damage by overreacting to 9/11? Maybe, but how much is hard to estimate, and my one sure prediction is that if the US goes down, people will be arguing about the causes.
Right, I think it's frequently very difficult to predict the effects of actions in the long term. And that's absolutely a practical concern here. But, if we are being consistent, we need to apply that to all such interventions and be equally worried about whether the real effect of advocating for X will actually bring about X.
Indeed, I do think that epistemic limitations are a strong argument that political interventions tend to have relatively low expected benefit.
Scott speculates that 99% of terrorists get caught at the “your collaborator is an undercover fed” stage. If that’s accurate to within an order of magnitude, the headlines are much more likely to read “oil executive caught plotting to commit a terrorist attack and blame it on Greenpeace” than “Greenpeace commits terrorist attack.” So the blowback effect would likely help Greenpeace rather than hurt it.
The Reichstag fire worked well for Hitler because he was the head of government at that point and controlled the investigation into the fire.
Terrorism is also negative-sum. It's totally possible for both the terrorist's cause and the cause they're opposing to be worse off for it; e.g. maybe the backlash against Greenpeace damages the environmental movement, making climate change worse, and also the damage to the golden rice factory makes vitamin A deficiency in poor countries worse. Or consider how both Al Qaeda and the US would be better off (at least by many values; I'm not 100% sure what Al Qaeda's value function is here) if 9/11 hadn't happened.
The most successful terrorist attacks I can think of in history were ones by an extremist subgroup trying to provoke a war against the war-reluctant main group they were part of, such as Franz Ferdinand's assassination by the Black Hand or the series of acts by the Black Dragon Society and the other expansionist ultranationalists that pushed Japan into Asian expansion. All of these relied on harsh reactions from the targets as a source of additional motivation and a whip to goad their unwilling allies into the conflict.
To that I can add IRA-style terrorism as guerilla warfare, trying to make staying in your country so miserable the occupier leaves. That can work.
Causing war, chaos and mayhem seems to be something terrorism is good at, yes. Another example would be the assassination of Rabin, which likely did not help the Oslo accords.
The assassination of Rabin did help the Oslo Accords - it put in power Peres who was more left-wing than Rabin, and discredited the right wing as political murderers.
In environmentalism, eco-terrorists are stunningly ineffective. I work in fossil fuels. Someone gassed the office building, causing a building-wide evacuation, getting themselves arrested - but the impact on the company's actions and bottom line was.... Negligible.
I mean, I got to enjoy a nice walk outside rather than slave away on my computer. But a day later I was back at it.
The fossil fuels problem is a demand-side problem, not a supply-side problem. In 2024, the evil fossil fuel companies aren't actually trying all that hard to sell oil, because there will always be buyers as long as people need oil. In fact, they're aggressively trying to sell hydrogen, one of the no-carbon things that I think is kind of dumb logistically and has zero customers.
If you wanted to move the dial, you'd protest importing cars that run on petrol, and counter-protest the other environmentalists stopping the construction of new solar farms. Terrorism aimed at the supply side does nothing.
Yes, as I said it is very hard to implement effective terrorism because the kind of actions that would make for effective terrorism aren't the kind of things that inspire people to terrorism. In practice, terrorism tends to attack symbolic targets and cause backlash because people need to be emotionally hyped up to commit the act.
I'm just being pedantic that the correct account isn't that it's not in theory workable but that you can't really behave like the idealized actor who could live a full life pretending to be their enemy and then commit horrific acts in their name.
For instance, the ideal kind of terrorism that would achieve environmental ends is probably working your way up the hierarchy in an oil company, making sure you had heavy financial interests in that company's stock, and then assassinating Greta Thunberg in a way that tries -- but deliberately fails -- to look like an accident.
How on earth would a failed assassination false flag thing help??? It wouldn't cause any sanctions on a company, just on that specific executive/board which you can just replace with new people entirely. Also, some Boeing whistleblowers have died under suspicious circumstances and guess what, we're still flying on planes, aren't we?
For this specific industry, the problem is that you can't try to boycott fossil fuels at the current state. It's like trying to boycott Amazon Web Services. Too much vital infrastructure relies on it right now.
Any attempts to "boycott" fossil fuels tend to be geopolitics, e.g. China refusing to buy Australian coal for a bit back in 2020-2021, and all that did was cause blackouts over there. The attempted sanctions on Russian oil and gas went poorly because the timeframes are way too short.
But I suppose the only way you could force massive investment into switching over is.... If you're using approximately all of it in a war and you can't get any for geopolitical reasons? Which would be a pretty crappy outcome that guarantees civilian suffering and unnecessary cost during the transition (energy rationing generally does not have good outcomes).
"Works" in the sense of "changing people's attitudes towards animals" doesn't happen. In that sense "terrorism doesn't work" is correct.
"Works" in the sense of "makes people think the authorities are ineffective" can work, but mostly through random violence.
Which is why say, Lenin succeeded, while PETA hasn't.
If your goal is "burn it all down", then attempting to burn it all down does work towards your goal. It is much easier to push Humpty-Dumpty off a wall than to put him back together again afterwards.
note: I am not in favor of terrorism at all.
Like neonazis bombing the Oktoberfest, of all places?
https://en.wikipedia.org/wiki/Oktoberfest_bombing
Any conception of the Good which asks us to reach some difficult standard will be attacked by critics who find the standard hard to reach and respond out of jealousy or anger.
In my opinion, EA belongs in the pantheon of Great faiths of the world, which find themselves continually subjected to scorn and derision by persons who claim to be motivated by truth but in reality are animated more by petty jealousy than by that magnanimous spirit which animates all men who aspire to greatness.
Welcome to the club.
Scott, your commitment to truth and generosity have been in part responsible for my own dedication to God, which I would see simply as a name for that which is ultimately true and real. Please see these people for what they are: animated by the spirit of Cain.
I like the phrase, "animated by the spirit of Cain," for the phenomenon.
"...and this is why we need to end Shrimp Suffering and stop Skynet!!!"
EA is a Religion - you said it yourself. When people reject it, they typically don't reject the concept of "The Good" but *your conception* of "The Good". To claim that anyone that rejects it is "animated by spirit of Cain and petty jealousy" doesn't help swing them to your side.
The statement was only intended for Scott. I think anyone who believes in the concept of The Good would be wiser to focus on their own capacity for foolishness rather than that of other people, which is both easy to do and useless unless it's to help better understand the human capacity for self-deception.
I'm not going to say the argument that if EA was good, it would improve the charitable giving in Boston and SF is the worst argument I've ever seen, because there's so much competition. It's up there, though.
Is there a term for straw man argument fantasia? This isn't just perfectionism, it's a gish gallop of invented impossible standards.
I actually think there’s something good about the argument. It’s probably looking at too short a timescale, but Effective Altruists really do hope that the movement will eventually have some effect on its goals above pre-existing trend, and noting that this hasn’t happened (either at the level of objective malaria case rates falling faster for more time, or at the level of more people giving more money) shows that the movement isn’t (yet) having the impact it wants.
But does his data even demonstrate it? Wouldn't a better measure be, like, donations to GiveWell or things like that?
Based on their own metrics, they couldn't look at donations but instead the effects of those donations. A billion dollars that doesn't change conditions in reality is no better than a million that makes no change.
It sounds like Stone's criticism might work either way - either there are no measurable downstream effects, in which case EAs are not very effective, or their numbers are too small to be effective, in which case they are not very effective.
In other words, at that point you've got a small number of people making an individually meaningful (maybe?) change that amounts to a rounding error on the problems they are trying to solve.
GiveWell, as I understand it, disburses money to multiple charities based on effectiveness principles, so it's kind of an EA clearinghouse. Is it true that there are no measurable downstream effects of the money they receive, which I believe has grown recently? In this post Scott suggests that they've saved thousands of lives: https://www.astralcodexten.com/p/in-continued-defense-of-effective
I'm aware of Scott's previous post, and don't have a particular argument against his numbers. On the other hand, 61 million people died last year. No number Scott shared comes close to a fraction of that.
Stone's argument that you can't see the effects of EA even looking at the two places where they are most influential says that EA is too small to be meaningful, even if everything worked as they hope. 200,000 people saved is nothing.
I'm not saying EAs do nothing useful, or that giving to charity is pointless. I'm saying that EAs do too little to justify their own propaganda and the way they speak about it, which seems to be more Stone's point than anything else.
I like that EA exists and want to see them succeed. I think they've done some positive good, and promoting GiveWell is great. But it's all small time stuff. They can't replace the world's charities, and my speculation is that if they tried they would have the same issues with overly large bureaucracies and disjointed priorities that all large organizations have. That's a big part of the criticism about shrimp charities and AI - even being a small and relatively nimble group, they are already veering into what most people would consider pet projects of those giving or working in the organizations, rather than objectively important life-saving approaches.
Well, I definitely agree that the Charity/Death matchup was another blowout last year. We'll get 'em this year!
More seriously, I'm inclined to reject in the strongest possible terms the idea that 200,000 people saved is nothing, and while I didn't read Stone's piece I have my doubts that he used that particular argument. But maybe I could be persuaded in that direction if I knew what alternative Stone was pointing to that's superior.
It feels like "Oh, they do mathy stuff? Well, I can do mathy stuff too, look!" which then fails because they don't have enough practice doing mathy stuff, or enough practice noticing the problems in their own arguments, so they end up making a very stupid argument.
I think Lyman is working on conflict theory and you're working on mistake theory. One simple question for Lyman: what would change his mind about EAs? I don't think anything would.
His social circle changing their minds about it. He wouldn't generate that answer himself, but would probably agree that it's true in principle, except impossible in reality.
Sounds about right
Where is the conflict theory in his argument?
Munecat (you know, that YouTube chick) has set her sights on EA. That can't be good for you, can it? Not the usual kind of opposition that you can afford to be excited to face. To what degree are you scared?
"Munecat (you know, that YouTube chick)"
I don't know. Who is this famous in her own backyard person?
I guess it doesn't really matter. She'll never be as famous or as EFFECTIVE (lol) as SBF (Sam Bankman-Fried)
>you know, that YouTube chick
I’m terminally online and I had no idea who this was until I looked her up.
From a quick glance, her channel seems targeted towards heavily left-leaning 20 something’s, the same demographic as Hasan. I can tell you from experience that most of that demographic already hates EA because of its association with Bay Area techbros.
If I was Scott, I’d be about as worried about her coming out against EA as I would be if Andrew Tate did; the overlap between people who are interested in EA and watch either of them is ~0.
So you agree EA is a cult
No, niche internet microcelebrities are more cultlike than people who don't care about them.
You're famous only for SBF
Also, I see that you're a different person, but you just admitted that you don't care, although you claim to want to make the world better. That makes you either evil, or extremely ineffective
Who do you think you're talking to? I'm not famous at all. When did I claim anything about wanting to make the world better?
I meant the EA community when I said "you". Assumed you were part of it given the context of the thread
You attributed to Nalthis the belief that EA is a cult, based on a tendentious reading of his words. (If two groups are disjoint, this doesn't tell you *which* group is the cult, therefore it's not reasonable to claim his words entail a belief that EA is a cult. But I think you know that.)
Now TGGP just "admitted" EA's "don't care", according to you. He "admitted" that EA's don't care about, in his words, "niche internet microcelebrities", which is true of almost everyone, and is a totally different kind of "not caring" than would hinder your effectiveness at large scale goals. I'm sure you know that too, but still you generalize from one to the other.
A bit less sophistry please.
Accuse me of trolling and I might admit it. I'm not even gonna look up what you're accusing me of
A LOT of people watch Andrew Tate. You can call his followers a lot of things, but the usual definition of "cult" does not apply to such a large group. Who endorses EA these days? Post-SBF, even Elon and his followers have moved on from you. So yeah, picking one group and calling it the "cult" is appropriate
Is it a totally different kind of "not caring"? I don't think so. In my opinion (you're welcome to try to disagree), the only kind of "large scale goal" that is *not* unambiguously bad is spreading a message or your own version of the "good news" or your own version of "the truth". I think any individual who has any other "large scale goals" is either delusional or just plain evil. Hitler had large scale goals, for example
So if you don't care about convincing munecat or her followers, you're evil in one of 2 ways: you either think she (and/or her welfare) is worth sacrificing, or you're trying to be "altruistic" toward her without her consent (like force-feeding her). So yeah, both your meanings of not caring are pretty much synonymous to me
Philosophy Tube seems to make the same genre of content as Munecat and already did a video on EA. PT is a larger channel, and nothing came of that.
Speaking of which, has anyone seen that video? Is it any good?
It is not especially good. I'm generally a pretty big fan of PhilosophyTube but felt like that video really missed the mark. PT often takes a sort of soft Marxist / anticapitalist conception of the good as a given, and instead of concluding that EA is people trying to do good by a different metric than she does, tries really hard to suggest that EA is more about status and prestige and signaling and personal power, and is intellectually dishonest because it isn't supporting the things she thinks it should.
Philosophy Tube is rather weak compared to munecat. But yeah, EA IS about status and prestige and signaling
For all of us ignorant of munecat, why is PT weak in comparison?
I don't think EA is about status, prestige, and signaling, but even if it is, I don't even care as long as they actually drive money to important charities, and actually help the world.
How naive! Let's list some people who have endeavored to help the world: Sam Bankman-Fried, Adolf Hitler, Joseph Stalin, Mao Zedong, every single leader of the Church of Scientology, Elon Musk, the list goes on... Now that I've said something that I believe, let me ask you something: list some charities that you think are important
This is why you don't just look at the expressed intentions, nor at the actual intentions, but at what they are doing and why.
If you think they are making important mistakes in their analysis of what to do to help, you will have to debate those; you can't just say "it is just about status and prestige" and expect that to be an important argument against it, for two reasons:
- Just stating stuff like that, without argument, will not convince anybody not already convinced.
- The reason something is done doesn't tell us whether it is a good thing to do (a lot of people honestly trying to help did a lot of harm, and the opposite is also true).
It is quite easy to explain how each person on your list did/does make the world worse (even if some people are still delusional about Musk); just do the same thing with EA (but first inform yourself about what they are really doing -- I could be wrong, but I think it is possible you aren't really well-informed on it).
…what is the kind of opposition you *can* be excited to face?
Look, I’m glad that I’ve never had a YouTuber try to run me out of town. Or even go after my job, or an adjacent philosophical movement. But I don’t feel like “excited” or “scared” are really the right terms here.
> …what is the kind of opposition you *can* be excited to face?
Intellectually honest, creative, constructive, so on. Like, if you believe in debate / collaborative truthseeking / etc., opposition / disagreement is an engine to produce more knowledge.
I wonder how much of the dislike for EA culture is a reaction to the fact that EA enthusiasts haven't adopted the same kind of norms about when not to share their moral theory that we've developed for religion, vegetarianism etc...
I mean, yes EA is full of a lot of people with half-assed philosophical views. Heck, I'm sure many would put me into that bucket and my PhD involved a substantial philosophy component. But that's much more true of the donors to regular charities, especially religious ones. The number of people who actually know what's in their own religious texts is shockingly low.
But thanks to centuries of conflict we have strong norms about not sharing those views in certain ways.
I think this is right. Across many arguments on the topic, something I've seen many EA critics say is, "to YOU donating to the local art museum or your alma mater may be less 'effective' than donating bed nets, but that's just your judgment. There's no objectively true measure of effectiveness." To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds.
If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me but it seems like it's based on some pretty durable norms.
Also, this is another case where the most important part may be the part everyone agrees on but lots of people don't do, namely donate to charity at all. It's not fun to argue about whether one should donate if one can, since almost everyone agrees they should. It's more fun to argue about what donations are "effective" or whether that's even measurable.
"To which the obvious answer is, you're right, so that's why we're out here trying to convince people to use our measure of effectiveness. But one gets the sense that's out of bounds."
Compare that with all the objections about "imposing your religion" when it comes to the public square and topics such as abortion. Yes, if I could convert everyone to accepting the Catholic theology around sex and reproduction, then we could all agree on the moral value of embryos as human persons. But that ain't gonna happen. Ditto with "if everyone just accepts the EA measure of effectiveness".
Well, if the standard is "everyone," I agree that it ain't gonna happen. But is that an objection to trying to convince people on the margin? Because that does sometimes work!
Go forth and save souls, how can I object to that?
> If a rich person is donating no money to charity, it's socially acceptable to try to convince them to donate some. But once they've decided to donate some, it seems like it's *not* socially acceptable to try to convince them to donate it elsewhere. That seems inconsistent to me
I feel that's perfectly consistent - in the former case you are essentially appealing "hey, according to your moral norms (as far as you claim), you should donate", and then the person reflects on that and agrees (or disagrees); but in the latter case you'd be saying "according to your moral norms you think that you should donate to X, but according to my moral norms Y is better", which is... different. It is generally accepted to point out hypocrisy and to ask people to align words with deeds, but it's generally not accepted to demand that someone change their reasonable-but-different moral priorities unless they are violating some taboo.
I think at this point it depends on how one does it. However, I don't think this necessarily entails pressuring someone to change their moral norms. I think there are very few people whose moral norms don't take saving lives as one of the highest causes one can contribute to. Suggesting that they can achieve that goal better is often taken as helpful rather than preachy; at any rate that's how I took it.
I think the issue is more that it's not really practical to do this well.
The problem is that we can either exercise approval or disapproval, and in an ideal situation we would approve of all charitable donations but just approve less of the less effective charity. Unfortunately, in practice, people don't really know how much you would have approved had they made the other donation, so often the only way to convey the message sounds like passive-aggressive criticism: "great, but you could have..."
The "ideology and movement" distinction and trying to be a big tent probably contributes to this issue IMO. EA has a distinct culture that is incredibly elitist and quite off-putting to "normies," but tries to maintain this whole thing about just meaning "doing good better".
So is EA simply "doing good better" by any means at all, or is it trying to claim that blowing hundreds of millions on criminal justice reform and X amount on shrimp suffering are among the most effective possible causes, and also maybe you should go vegan and donate a kidney? Scott showed the most self-awareness on this in his review of WWOTF (https://www.astralcodexten.com/p/book-review-what-we-owe-the-future), ctrl+f "seagulls", and has not returned to such clarity in any of his EA posts since. Clearly, EA isn't *just* an idea; there's a whole lot of cultural assumptions smuggled in.
It's not the elitism that bothers people. It's the lack of social skills that results in the impression of being looked down on.
People fucking love elites, big celebrities are always sitting on boards for charities or whatever and people love it. But they are careful not to be seen as critical of people without that status.
I actually think the problem is not sufficiently distinguishing the movement from the multiple different related ideas in many of these later posts.
I agree the idea isn't merely "it's important to do good effectively"; I think that misses some key elements. I think the minimal EA thesis can be summarized as something like:
within the range of charitable interventions widely seen as good [1], the desirability of those interventions can be accurately compared by summing over the individual benefits, and when you actually do the math it often reveals huge benefits that would otherwise be overlooked (a toy illustration of that math is sketched after the footnote below).
That view is more controversial than one might think, but the real controversy comes from the other part of the standard EA view: the belief that we should therefore allocate social credit according to the efficacy of someone's charitable giving.
Unfortunately, EA types tend not to be great with social skills so instead of actually conveying more approval for more effective giving what they actually often manage to do is convey disapproval of ineffective giving which upsets people. Not to mention many people just dislike the aesthetics of the movement the same way many greens really dislike the aesthetics of nuclear (and vice versa) prior to any discussion of the policy.
Anyway, long story short, it's better to disentangle all these different aspects.
--
1: so consensual interventions which help currently living participants w/o direct/salient harm to anyone or other weird defeaters.
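To make the "do the math" step concrete, here is a minimal back-of-the-envelope sketch in Python, with made-up numbers purely for illustration -- the charity names and per-person figures below are invented, not real data:

interventions = {
    # entirely hypothetical cost and benefit figures
    "hypothetical bed-net charity": {"cost_per_person": 5.0, "lives_saved_per_person": 0.002},
    "hypothetical gala-dinner charity": {"cost_per_person": 500.0, "lives_saved_per_person": 0.0001},
}

budget = 10_000  # dollars, arbitrary

for name, v in interventions.items():
    people_reached = budget / v["cost_per_person"]                   # how many people the budget covers
    expected_lives = people_reached * v["lives_saved_per_person"]    # summing the individual benefits
    print(f"{name}: ~{expected_lives:.3f} expected lives saved for ${budget:,}")

The point isn't the specific numbers, just that once you sum the per-person benefits, the gap between interventions is often orders of magnitude, which is exactly the kind of thing that gets overlooked without doing the arithmetic.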
I do think those norms have developed because they are important for effectiveness. Vegetarians have learned that preachiness doesn’t actually work.
That's true, but at some point vegetarians do have to advocate for their ideas and I'm sure they find (as I used to) that just that by itself can be perceived as preachy by people who don't want to be confronted by the ideas no matter how they're packaged, and I think some of that is going on with EA too.
Sure, advocacy is admirable and useful. Vegans get in trouble when they badly misread the room and try a stunt like pledging to not ever to sit at the same table as non vegans. They tried something like that not long ago. It didn’t play well, as anyone outside their orbit could have easily predicted. You have to meet people where they are, not where your circle of close friends happen to be.
That's certainly true, but let's be honest, most EAs are in it for the discussion/feeling of being more consistent not the altruism.
And it's wonderful we can convert the former into helping people, but I don't think it's possible for the social movement EA to ever act like vegetarians, because the motivation isn't deep emotional concern with suffering (for some it is, and the rest of us do care) but feeling good about ourselves for being consistent. And this does create extra irritation because people feel they are being looked down on by individuals who are doing the same thing they are -- the few EA saints don't bother people.
Hence the need for a separate term like "efficient giving" or whatever to solicit donations from people who find the social movement unappealing.
--
And that's just the way of altruistic groups. I care about global warming but I find the aesthetics of most environmental groups repellent. Best you can often do is create a range of aesthetics that work for the same objectives.
The quiet altruist is admirable in many ways, but there do need to be people who evangelize in some fashion. I don't have leadership skills and am prone to arrogance, so perhaps quiet altruism makes the most sense for me to aspire to. But there are people in EA with the right qualities for modest evangelizing.
Ohh absolutely, but most EAs don't have them and lots of us are into EA because we like the aesthetic and debating this kind of stuff and not everyone is going to realize when they need to turn that off.
True. I've been thinking about a related issue regarding comparative advantage. For example, going vegan probably isn't worth it for people with high opportunity cost of time and energy, but may be for those with low OC. But that sort of reasoning is conspicuously vulnerable to abuse (and resentment) because it's basically saying the powerful people get to do fun stuff and the masses should eat bad-tasting food.
I think that's where it's important to separate the question of what is good to do from what is good to criticize/praise people for doing.
Also the powerful people likely have more impact, even per dollar, so likely have more obligations.
"Wokeness" / "social justice" gained a lot of ground through preachiness but also produced a backlash. I'd guess the milder forms of their preachiness were quite effective on net, but the extreme forms were very counterproductive.
Good point, though sometimes very mild preachiness/disapproval can be helpful ("ohh, you don't use cruelty free eggs?") but it's hard.
The bigger issue EA faces is that even when it's not trying to be preachy it gets perceived as such. A vegetarian can just explain their POV when asked and won't be perceived as judgemental if they restrict themselves to I-statements.
Now imagine the same convo w/ an EA. Something like the third question will be why they think their charities are more effective, and they have to give an answer that pretty explicitly compares what they do with what others are doing. Also, EA's internal deliberations get perceived as such.
I think this is definitely part of the puzzle. I think another part of the puzzle is that the EA culture is quite weird in a way that seems to drive a lot people to distraction. As Scott notes, EA is a piece of social technology among other things. It has an extremely distinct vibe. Some people are all-in on the vibe, some (like me) have what is mostly an affectionate tolerance for it, and some people seem to really, really loathe it.
Unfortunately, I think the notion that EA should avoid tainting itself by broadening its appeal is wrong on the merits, and that to be maximally effective EA absolutely should moderate and mainstream itself. The resistance to this idea feels mostly like a cope by people who couldn't mainstream themselves if they wanted to -- it's hard to choose not to be weird if you are in fact weird. Every time I read the EA forums (which I basically stopped doing because they are exhausting), I find myself wondering if people are just using the phrase "epistemic status" at this point as a sort of normie-repellent.
If this sounds like an attack on EA, it's not meant to be. I find the vituperation in arguments like Stone's to be odd and unfortunate, but also worth understanding.
*Can* EA go mainstream without being philosophically and compositionally compromised? Organizations that alter their philosophy to appeal to more people tend to end up endorsing the same things as all other mainstream organizations. And gaining members faster than the culture can propagate is going to lead to problems of takeover.
I think so, absolutely, yes. Scott consistently presents a mainstreamed version of it here: donate 10% of your income and give to charities that have a proven impact on people's lives are both somewhat radical and also reasonably unweird concepts at the core of EA.
Note also that I don't think EA has to jettison all of the weird bits, such as the esoteric cause evaluation. I just think they need to be willing to tailor their message to audience and -- this is probably the important bit -- tolerate the tailoring.
EA cannot go mainstream, but not for the reasons you listed.
It's already difficult to disburse the amount of money one Dustin Moskovitz has in ways that are consistent with how to analyze the evidence. Of the existing charities, I think there can probably be around 10x or at most 100x the current amount of donations before we are wildly out of distribution (and it's only that high because I'm treating GiveDirectly as a money hole).
It would certainly be nice if everyone donated to global poverty charities, but at that scale, the type of intervention you'd be thinking about has to start including things like "funding fundamental research" or "scale up developmental economics as a field".
This is something I've been worried about for years, that there aren't big enough money holes for more charity!
Maybe this is where the watering down part of mainstreaming EA comes in. Sure, we might tap out early on Dustin Moskovitz-grade charities. But here are two somewhat different questions: would it be a net benefit to the world if annual charitable giving was 10% higher than it is today? And is current giving inefficient enough that we could raise the good done by at least 10% if resources were better allocated? I pulled the 10% figures out of nowhere, but the point is just that if you believe the world would be a better place with more and better charity, then that is an argument for mainstreaming EA. Diehard EAists might call this a very weak form of EA, and they'd be right. But scale matters.
I think at scale, considerations I'd consider facile now, like "is charity really the most efficient thing", would become real live players. For example, I'm not sure throwing 100x the amount of money into, say, SF's homelessness problem would help; it might literally be better to burn the money or spend it on video games! If you believe Bryan Caplan's Myth of the Rational Voter thesis that self-interested voting would be better than what we have now, because at least one person benefits for sure in that circumstance, as opposed to programs now that benefit no one, you can imagine other similar signaling dynamics starting to dominate. Not even going to start on the sheer amount of adversarial selection that would start happening, where honest charities would start losing out to outright fraudulent ones and so on.
I don't think I know when this would start happening, but I'd lower bound it at at least 3rd world NGO scale. I'd be surprised if the upper bound were above 10% of 1st world disposable income.
Neither of these are radical! It's called a tithe! Giving 10% of your income to your Church (i.e. THE charity that had a proven impact on people's lives) has been the standard for ~1500 years!
I think they lose the vast majority of people at the "altruism" part (esp. with the way it's usually operationalized), and criticisms around "effectiveness" are post hoc.