192 Comments
Simon's avatar

Looking at this from the outside, with its recursions and metaphors-about-metaphors, its digressions and aphorisms, its semantic arguments, it’s hard not to get the feeling of a psychiatric patient (a mad, multiple-personality patient, with too many pseudonyms) desperately trying to avoid acknowledging something obvious.

(Criticism of criticism of criticism... could easily appear in that cult classic _Knots_ book by RD Laing.)

Scientific debates don’t look like this. Nor do political debates, or philosophical ones. The patterns look like some strange combination of the three. And it’s the endless aporias that strike me the most. Nobody seems to feel satisfied at the end. The fact that people can’t help doing it over and over, despite the frustration, suggests a form of repetition compulsion. Scott’s earlier analogy to criticism as a kink is perhaps the right category, if not the right specific diagnosis.

I want to suggest to the community that maybe, just maybe, they’ve been participating in a folie à N, a group madness. One from which a part of the whole occasionally tries to escape. Some parody of a therapy session emerges, with people taking on different roles, but no change in the madness happens.

Dan L's avatar

> Scientific debates don’t look like this. Nor do political debates, or philosophical ones.

Disagree. This pattern feels pretty normal for the interface between abstract paradigmatic disagreements and functional operationalized criticisms. "Some strange combination of the three" isn't *wrong* though; it's just that when you're talking about 'group dynamics around the abstract principles underlying X', that's pretty much a given.

[insert here] delenda est's avatar

Agree. The core point is that EA is a religious-type movement, one that imposes significant costs (you have to be willing to disavow human nature) but grants significant benefits: you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.

The absurdity of that proposition should be enough on its own!

Desertopa's avatar

That seems like a real stretch to characterize as religious even entirely granting the basis of the argument. Religions and religious reasoning don't usually center around making tangible and observable benefits for other people at one's own expense, especially not in ways where you're encouraged to make an active effort to check if it's working or not. It feels like a particularly far point down the path of characterizing everything as a religion.

https://slatestarcodex.com/2015/03/25/is-everything-a-religion/

[insert here] delenda est's avatar

I was simplifying. In many ways it is obviously not a religion, hence I said religious-type.

My main point was that you are required to believe something absurd on its face, even if believing that allows/helps many EAs to do a lot of good.

Simon's avatar

I’m not sure it’s a religion, and really want to emphasize the psychological (almost psychiatric!) aspects of the discussion.

Religious debates don’t look like what we’re seeing here, but people making analogies to religion and accusing each other of being religious feels like something that might happen in a therapeutic context.

JamesLeng's avatar

Present-day mainstream religions might not be big on producing empirically-verifiable results, but the ancient Romans gave their own religion about as much scientific scrutiny, proportionately, as the modern US military does to aircraft - for roughly the same reasons.

FLWAB's avatar

By the same token, the west spent a good 1000+ years where the most intelligent people that could be found were set to the task of figuring out everything they could about God (theology).

JamesLeng's avatar

They made some interesting progress in that time, but then it turned out good enough ironworking and trigonometry let you win wars even if you've got the religious side of things completely wrong. Without that incentive to produce ongoing strategic results, corruption spread, service quality plummeted, and now what have we got? https://dresdencodak.com/2022/01/17/dark-science-113-the-church-of-the-empty-inbox/ The point I'm getting at is that I think we might have an unfairly bad impression of religion's potential, because there have been so few examples in living memory of anything coming close to fully achieving it, or even anyone making a proper attempt.

Jorgen Harris's avatar

Most people in the effective altruism movement are trying to do good with their own money, unless you've decided that donors don't count.

I think the idea that you _can't_ do good in other people's lives with money is absurd. Do you think that money can't be used to buy antimalarials and water treatment, or that antimalarials and water treatment aren't helpful?

Paul Goodman's avatar

You think that believing it's possible to help people is absurd?

Simon's avatar

This is a good example of why there’s such a psychological puzzle here. I mean the following in a very respectful way.

The OP wrote “you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.”

But the interpretation you make is that OP thinks the idea of helping people is absurd.

This is such a misreading, in a context (Scott blog) that prizes careful attention, that I can’t make sense of it.

To make sense of it, it really does seem we need a story about what’s not being said, and (even more strangely) what people don’t know they’re not saying (but is influencing and distorting the conversation regardless).

Paul Goodman's avatar

For clarity and rhetorical purposes, I was taking a broad starting point to try to narrow down what OP's apparently nonsensical argument was actually trying to say. If you'd rather I skip past the discussion and lay out my thoughts in more detail from the get-go:

A) It seems pretty clear to me that it is possible to help people.

B) Given A, it seems at least not absurd to suppose that I myself could help people.

C) Given A and the fact that money can be used to accomplish many goals people have, it seems reasonable to suppose that it's possible to help people with money.

D) Given B and C, it seems plausible that I could help people with money.

As someone pointed out, many if not most EAs give their own money, so the "other people's" part seems irrelevant and strawmannish to me, but for the sake of argument:

E) Money is fungible, so one person's money is as good as anyone else's.

F) Given C and E it seems possible to use other people's money to help people.

And then:

G) It seems clear to me that it's possible to do better or worse at pretty much any task with a goal.

H) Given D and G it seems likely that there are choices I could make that would result in me being more or less efficient at helping people with money.

Finally:

I) I'm aware that doing things at scale is more difficult, but it's clearly possible in some cases, and it's not clear why adding "at scale" takes any of the above propositions from "reasonable" to "absurd."

So can either you or [insert here] explain to me which of these statements is absurd, and why? In general I think this highlights the pitfall of argument from absurdity, since many people have very different intuitions of what's "absurd" and what's reasonable.

Simon's avatar

As you can tell from my first post, I’m trying to figure out why the debate is so (to my view) pathological, psychologically. What you wrote here is more normal, and could certainly appear in a moral philosophy debate.

Paul Goodman's avatar

I might suggest that if lack of rigor strikes you as "pathological" you might be wise to avoid making diagnoses based on internet comments.

In this case my second comment reflects the same thoughts as my first one- it just expresses them more completely and rigorously.

Simon's avatar

It’s been a long day but after seeing all the exchanges I think you’ve captured something here. In particular, the emphasis on the _you_; there’s something about the particularity of the call that EA makes that seems to help explain the unusual nature of the discussion.

It also maybe helps me understand a remark referenced by Scott, above -- that EA causes people psychological harm by making people feel bad for not doing more. It’s odd at first glance (charities and political movements guilt-trip all the time, why should EA be extra powerful?).

One might say: a certain kind of person is interpellated by EA (in the Althusser sense). Maybe there’s a Lacanian take that might be better attuned to the psychology of the phenomenon.

Viliam's avatar

> you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.

I am confused about the "with other people's money" part.

I assumed that a typical EA is someone who sends *their own* money to the effective charities.

Simon's avatar

I think the claim here is that the EA does more than simply choose a charity to donate to; they have a method that gives the best answer as to which charities everyone (with similar goals) ought to donate to.

(This is the GiveWell pitch: fund us to figure out what to do with everyone’s money.)

Henry Solospiritus's avatar

Well said!

Edward Scizorhands's avatar

When you're working on a project that has a great capacity to produce piles of skulls [1], you should regularly be trying to figure out if you're headed towards a pile of skulls.

[1] because by definition it's effective, so getting the sign right matters

Simon's avatar

So this is, I think, another good example of how odd the discussion is. The rhetoric is so oddly multi-register I just can’t place it: extremely vivid phrases (“piles of skulls”) coupled with in-group technical language (“getting the sign right”).

It’s not how people talk when they’re evaluating charities (for example, in normal contexts). Or how they talk when they’re reasoning morally (in normal contexts, say an op-ed, or a seminar). It’s not clarifying, either.

So what *is* going on? Why the idiolects, and why these particular idiolects?

Edward Scizorhands's avatar

It's short-hand. I can write it out long form.

EA premises itself on being both effective and altruistic.

Effectiveness, in terms of having an effect, is something that is pretty quickly determined. If I have a campaign to write get-well-soon cards to kids in hospitals [1], I assert it's not going to have much effect, positive or negative. If I have the kids treated by no people but an army of robots responding to them 24/7, there's definitely going to be an effect. But is the effect good or bad? That is the altruism part. If I'm hurting the kids it's not altruistic.

I could also reference consequentialism but that'd just be more in-group technical language?

> It’s not how people talk when they’re evaluating charities (for example, in normal contexts)

Most people don't evaluate charities at all beyond "this makes me feel good to donate to."

[1] I think this is a good approximation of the average charity organization

Simon's avatar

“Most people don't evaluate charities at all beyond "this makes me feel good to donate to."”

Just as a point of order, this is false.

There’s an entire discipline associated with non-profit evaluation and management. A few years ago we sent them some of the GiveWell material. It’s, essentially, a book report, produced at enormous cost. (GiveWell early on stated that they’d no longer solicit outside evaluations, because [roughly] people weren’t up to their standards.)

The larger question remains unanswered, though. What’s the origin of all this weird language (in your updated post, robot nurses, etc.)? There’s something odd about it, and it seems to be a clue to why these debates are so weird, aporetic, etc.

Noah's Titanium Spine's avatar

>> Most people don't evaluate charities at all beyond "this makes me feel good to donate to."

> Just as a point of order, this is false.

No, it's true.

It's certainly why I give to the Internet Archive, Wikipedia, The Institute for Justice, and my local Cat Care Society.

"Feeling good by donating" is the *purpose* of charity, for most people. We don't SAY that, but it's what's actually happening.

This is why EA is (arguably) revolutionary: it's nominally concerned with *outcomes* in the realm of charitable giving, rather than self-satisfaction on the part of donors. It may or may not be successful at that, but we should acknowledge the weight of the idea behind it.

Simon's avatar

This is a common elision I see in a lot of EA discussion:

1. Normies don’t evaluate charities.

2. Therefore, nobody does.

3. Therefore, EA is revolutionary.

In reality, however, this is a huge area of study. As was pointed out a while ago re: GiveWell, after subtracting out the early weird AI risk stuff, it basically imitated the outcomes from those people (Gates Foundation work, later some Social Justice, it seems).

dionysus's avatar

It's called lingo. Every community of people has its own lingo, from scientists to programmers to janitors. New Yorkers, Floridians, and Brits also have their own lingo; we just tend to call them dialects instead. What would be weird is if any distinctive community *didn't* have its own lingo.

Simon's avatar

I see what you’re getting at. But it’s more structured than (the informal notion of) a dialect, which is usually just a set of lexical substitutes, and perhaps swapping out one set of dead metaphors for another.

Viliam's avatar

There are three groups that overlap a lot: (1) effective altruists, (2) readers of Astral Codex Ten, and (3) readers of Less Wrong a.k.a. the rationalist community.

As a result, you can find people debating EA on ACX using the LW lingo. Thus the technical language, etc.

I am taking the "more structured than... a set of... substitutes" as a *compliment*, because indeed the rationalist community is trying to discuss things that are typically not discussed elsewhere, in a way different from the usual online debates.

Now of course one should tone down the lingo when talking outside of LW, but an expression or two will slip through if you feel like most people will understand what you mean.

But specifically the "noticing the skulls" idiom originates at SSC (the previous incarnation of ACX). https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/

Simon's avatar

One thing I’m noticing is that the discourse is strange, and extreme--but at the same time extraordinarily abstract.

Cosimo Giusti's avatar

It's the birth of a new political party.

mimi's avatar

I think that is an extraordinary claim, one which to me requires more evidence than you've provided here. In general I feel any claim in the form of a diagnosis (which is on the meta level, as it discusses the people holding the debate) requires more thought, care, and evidence than the original question, not less, especially as claims like that which do not have extraordinary evidence are an easy way to forgo the entire original discussion.

Moving on to a more direct reply, why do you feel meta-level criticisms, or even ascending levels of meta criticism, are unlike scientific debates? I feel like I've seen that pattern over and over again. Take for example Kuhn's theory of scientific revolutions, or critical theory. The criticisms of Kuhn or of critical theory can be pretty meta, while discussing something that is itself already meta. (Your comment, itself, is already on a meta level of Scott's post, and my first paragraph is meta to yours. This paragraph isn't, although this parenthetical is.)

Could you clarify more what it is you're pointing to? I feel like I'm not seeing it clearly.

Cosimo Giusti's avatar

I'm PND, Ma'am, Party Not Designated. I try to avoid partisan squabbles.

Simon's avatar

Science isn’t particularly meta: it doesn’t spend a lot of time talking about how its participants are talking at the meta level, and definitely doesn’t go three or four layers deep.

References to Kuhn (at the meta level, rather than a substantive hypothesis about an object level subject of study) are rare, and usually a sign that something has gone wrong in a scientific discussion.

Science has a pretty clear aboutness, in other words, that makes progress by centering the participants on a common object.

Lars Doucet's avatar

The point about evangelism is well taken. The critic mentioned they didn’t believe in Baha’i themselves - which is even more relevant than whether Baha’i itself is true or not. I also find it refreshing when philosophies I already don’t agree with expend no effort in changing that fact - but I don’t think that choice by itself is particularly noble for the reasons Scott gives.

Matthew Carlin's avatar

I also believe "recycling is good" is true, but I'm *not* for evangelizing it beyond making that statement and having some discussions with willing participants.

Lars Doucet's avatar

I mean depends how big a deal the thing is right? If you’re sitting on something really important it seems downright irresponsible not to “evangelize” it.

Matthew Carlin's avatar

(Shoutout from fellow Texas game programmer land. By George, you do good work.)

The inside view is yes.

The outside view is to see the massive, massive imbalance of people who did damage sharing something they (probably erroneously) thought was really important, and set a good prior. The inside view is rational when you're inside. The outside view makes the inside view seem crazy crazy irrational.

Aneesh Mulye's avatar

FWIW, I saw the sense in prediction markets after Eliezer and Linta's current fic portrayed how they'd work in detail (even if fictionalised).

Made it clear how, for instance, they close the incentive feedback loop in scientific funding. The current bureaucratic setup seems even more incredibly and much more *obviously* broken now.

Conor's avatar

What fiction is that?

Level 50 Lapras's avatar

Reading Matt Levine's Money Stuff is a good way to lose your faith in prediction markets.

Retsam's avatar

Yeah, to add to the defense of evangelism, Penn Jillette (atheist magician of the Penn and Teller duo) once said something to the effect of "If I were going to be hit by a bus and didn't see it, would you stand there and say 'I wouldn't want to make them uncomfortable'? At some point you're going to tackle me."

I have to imagine the Baha'i faith doesn't really have a concept of hell... But even beyond that (and I do think Christian evangelism can often overemphasize hell), it just makes me think your faith isn't that compelling if you don't have a desire to share it. (The fact that many Baha'i ignore this precept to not evangelize is a good sign though)

Stygian Nutclap's avatar

One would think that a religion as small as Baha'i would place more emphasis on evangelism, though in fact I think it does in some capacity. I've seen advertisements in weird places. Enough to plant a seed.

Many larger religions don't seem to carry the imperative to evangelize either (at least, not to the degree of Christianity/Islam), but they must have at some point in history.

Jorgen Harris's avatar

My understanding is that the Baha'i faith doesn't aggressively evangelize because it doesn't believe that people need to be Baha'i in order to be saved, or even in order to be good people. Instead, Baha'i adherents have a special responsibility to heal the world, serve as examples, and promote the enlightenment and moral uplift of everyone else.

Given this, I think the restriction on directly encouraging people to become Baha'i makes sense as a way of privileging the unity and commitment of the religion over its size. Baha'i do missionary work all over the world, but that work, in my understanding, usually focuses on stuff like farming techniques and community-building.

Retsam's avatar

Yeah, that's about what I expected, it can make sense for Baha'i to not evangelize, given those beliefs... but that certainly doesn't generalize to other beliefs. Certainly not to religions with a concept of non-universal salvation, and probably not to most beliefs (religious and non-religious) generally.

And, even then, I still think a largely non-evangelizing religion is not a good sign for the religion.

Imagine seeing a movie that was so amazing it "changed your life": of course you're going to "evangelize" that movie and tell your friends they should go see it. On the other hand, if someone doesn't tell me I should go see a movie that they watched, and in fact, when I suggest that I'm going to see it anyway, they ask me if I'm really sure I want to go see it... well, that doesn't suggest to me that it's a very good movie.

Jorgen Harris's avatar

Oh, totally agree. I don't even think it's fair to call Baha'i "non-evangelizing"--they just aggressively evangelize the stuff that they think is universal, like peace, harmony, and love, while giving a more careful soft-sell on the stuff that they think is only for certain people. So, kind of like reading a book that was very challenging and changed your life, seeing the movie and thinking it's pretty good, and telling everyone to go see the movie, while only recommending the book to certain people :).

I guess to me, though, this raises the question of what you _can_ be mad about with people evangelizing beliefs that they think everyone should adopt. It makes total sense for someone who is certain that I'll go to hell if I'm not baptized into the Catholic church to say whatever they think will get me to do it, and in fact not evangelizing at me with those beliefs suggests that they don't care about me very much.

However, as someone who doesn't hold that particular belief, I think it's reasonable to be annoyed at aggressive evangelism. It's frustrating to have a nice stranger engage in a conversation with you, only to realize a few minutes in that they're trying to get you to join the Jehovah's witnesses, and that they won't be content with a nice, interesting conversation about what their religion means to _them_ because they are convinced that their religion is crucial to _you_. It's annoying to have people knock on your door or hand you pamphlets, especially when the pamphlets tell you that you're going to hell.

Which makes me feel like perhaps people have a special responsibility to be cautious and skeptical before accepting beliefs that justify (or even obligate) being annoying to strangers.

After all, beliefs that justify intense evangelism often also justify, when possible, shunning me for not being baptized, or taxing me, or beating me, or otherwise engaging in pretty unpleasant coercive actions to save me from eternal damnation, and historically that's how most people have handled this sort of belief. I'm very happy that we've shifted to a social equilibrium where that sort of behavior is not acceptable but pushy evangelism is, but that doesn't change the more fundamental ways in which beliefs like this are dangerous.

User's avatar

Comment deleted (Jul 29, 2022)
Jorgen Harris's avatar

I don't think I agree that being annoying means that your tactics don't work. You convince people of stuff by getting their attention and then winning them over with argument, connection, etc. Stuff that gets attention when you're not trying to give it attention is, almost definitionally, annoying, because attention is scarce.

So, while a less annoying strategy might have a higher success rate among people who ever engage with it, it's likely to engage fewer people to begin with. There's some point on this engaging/not-annoying possibilities frontier that maximizes converts, and it's unlikely to be where the strategy is not at all annoying.

As evidence, I would present the fact that advertising is a competitive, thriving, profitable, long-standing industry, and that virtually all advertising is at least a little bit annoying. Furthermore, advertising aimed at the broadest possible customer base (e.g. beer, soda, toilet paper, etc.) is generally more annoying than ultra-niche advertising (e.g. airplane parts, ultra-complicated board games, etc.)

User's avatar

Comment deleted (Jul 29, 2022)
Matthew Carlin's avatar

It's also possible to accept such beliefs while purposefully holding back the evangelical implications, out of humility and an abundance of politeness. If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong and (2) your life is yours to live once you've heard the story, just as mine is mine to live with respect to some other thing I don't believe in. Yes, even while fervently believing those things and experiencing nervous discomfort on your behalf.

This gets back to the core dispute with EA: there are reasonable moral frameworks where it's *okay* to let bad things happen because the alternative is too hubristic and presumptuous.

A literal bus coming my way? Sure, warn me or push me. An invisible bus that you've found by some indirect use of thoughts, not senses? No, I get the urge, but consider the hubris, consider how many people have believed in how many such buses and how few have been definitively right. The prior is strong with this one.

Jorgen Harris's avatar

Yes, that's certainly true, and most people who believe such things do exactly what you're saying. But, at some level, I don't feel like the politeness and humility you describe are really consistent with fully believing in hell. If you're sure that an invisible bus is about to run me over, I hope you'd push me, even if my reaction to being saved from a threat I didn't know about might be anger instead of gratitude. But, as you're saying, it's perhaps wrong to be sure of something that you see many other people with equal capacity to you choose not to believe.

Also, while I at some level agree with and respect this sort of humble believing you're describing, it also feels untenable when it comes to sufficiently extreme beliefs. I might be willing to let someone make a choice that I think has (say) a 10% chance of killing them, because I can weigh other normal human things against death. Who am I to say that spiritual autonomy isn't worth the risk of death? Or that heroin isn't worth the risk of death? And since I'm willing to do that, I'm willing to adopt a certain level of humility and say that even when I feel sure of something, I need to be open to being wrong, especially when others aren't sure. And so, while I might sometimes want to act coercively to protect someone, I don't need to do so.

But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that? What could possibly justify not protecting someone from something that's so inconceivably bad that its badness dwarfs the goodness of everything on earth?

Obviously, most people who believe in hell manage to do so. I've had lots of friends who believe in hell (and presumably believe that I'm going to hell) who were lovely to me in all ways and who didn't aggressively proselytize, let alone threaten to burn me at the stake or shun me. But I can't help but feel like there's something particularly dangerous or coercive about that sort of belief, and that the fact that our society is functioning reasonably well while accommodating that belief is the result of some hard-won but hard to justify social norms.

Orson Smelles's avatar

> But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that?

For all of my emotive waffle up there, I think this is the nut: if you take it seriously, I think it's pretty easy to convince yourself that moving someone even infinitesimally away from hell is of finite (perhaps large, if you're motivated) goodness and suddenly you have a license to neglect a lot of considerations (like being humble or polite, at the shallow end) if souls might be on the line. It is often hard to feel confident that true believers are robustly aligned when it comes to the temporal needs and preferences of, well, everyone else.

Matthew Carlin's avatar

Because of the history of over-certain well-meaning bad actors, my most certain beliefs should almost always stop at your doorstep.

Many people don't (can't?) really follow this; they *really* take on the whole belief, implications and all. I just think the world would be better if we did follow it. I can't and won't get you to follow it. But I will follow it myself.

Jiro's avatar

>But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that?

That's Pascal's Mugging.

Orson Smelles's avatar

> If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong

But isn't this kind of epistemic humility doctrinally impious/heretical in a lot of religious groups? It seems sort of like saying, "it's not really a conflict because they can always believe their religion a little less hard". While I'd certainly prefer that, I don't think it actually resolves the tension for everyone. There are a lot of situations that would be better if people were less neurotic or rigid about their beliefs, but that's not really the direction even secular society is heading right now, let alone faith communities with grisly eschatologies.

I don't disagree with you overall, and that's basically how I handle topics I have strong convictions about in personal interactions, but I think it entails pretty radical changes if you have some not-uncommon worldviews (certainly beyond just the religious or one particular ideology).

------

Also, as a personal aside, I'm actually not sure the brimstone believer becomes less bad without evangelism - I think even a quiet/humble belief that specific people/groups are destined for eternal torment is corrosive and antisocial. This is certainly influenced by loved ones with gay parents who grew up surrounded by peers (and tragically, sometimes teachers) who were taught that "love the sinner, hate the sin" counts as tolerance - they were trapped between the obvious agony of knowing by implication that "friends" expected their entire family to burn in hell and, in a cruel inversion, the feeling that it might actually be intolerant to criticize or avoid people with those beliefs, since they were sincerely-held and not evangelizing (and seemingly a local majority). In this case, I think the low-evangelism mode (combined with a facile idea of pluralism where no belief can ever be harmful just to express) was actually more insidious because it sapped the ability to process genuine hurts and learn how to draw boundaries in relationships.

Is this sort of saying, "I would rather people with bad opinions evangelize so they're easier to avoid"? Probably, but I think it does speak to a slightly deeper (but still fairly selfish) point, which is that I would actually really prefer that the intensity of (other) people's behavior reflect the intensity of their beliefs, because it reduces model uncertainties and therefore my anxiety (as well as, as above, time and emotional energy invested in people who turn out to have beliefs that they have no interest in reconciling with your human dignity).

Expand full comment
Matthew Carlin's avatar

>But isn't this kind of epistemic humility doctrinally impious/heretical in a lot of religious groups?

Yep, it's why I respect the Baha'i. I don't think it's cheap to do. I think it's expensive and ultimately worthwhile.

Expand full comment
Matthew Carlin's avatar

Contra Penn, the sometimes intense and painful family drama of deathbed conversion attempts performed against the wishes of the dying.

Expand full comment
Anon's avatar

I just want to register my frustration that this post didn't abandon format and call itself "Highlights From The Criticism of Criticism of Criticism of Criticism". Too far is not far enough.

EDIT: Oh, and the quote of Alex's post is broken, the first paragraph is outside of the blockquote-marking bar. And I recognize that this is Carlin's gaffe and not yours, but the religion is called Baha'i.

Expand full comment
Matthew Carlin's avatar

Ah, heck, thanks.

Expand full comment
Anon's avatar

Happens to the Jainest of us.

Expand full comment
DxS's avatar

This reminds me of a bravery debate. Everyone is fighting against a real but different opponent, and gets confused because all the opponents share the same name.

One opponent is "confusing a good cause with the need for expertise and evidence." Nobody wants that.

Another opponent is "scorning innovative work just because attempting something new makes you seem arrogant." Nobody wants that either.

And another opponent is "mistaking superficial work for effective work." Look, another thing everybody doesn't want!

I don't have a solution, alas, except to agree with Scott that specific illustrations help.

Expand full comment
Stygian Nutclap's avatar

As I initially understood EA through osmosis by reading in these communities, it has often been reduced to short-hand heuristics, e.g. send money to charities targeting the developing world. That makes it easy to criticize both specifically and paradigmatically. This is at odds with what its definition should imply, which is to just try to find the most effective solutions to human problems, or alternatively the most pragmatic means through which virtually anyone could help - these don't mean the same things, and a chunk of criticism could be chalked up to their conflation (like yeah, you could conceive of something better than Joe Schmo's money in specific instances, but Joe Schmo has money, little time and few options - should he do nothing?).

That doesn't even require a framework.

Expand full comment
magic9mushroom's avatar

"The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems"

Two obvious counters to this:

1) I can guarantee you that if you prove to a given EA that a given problem is not real (and won't become real), he/she will stop worrying about it. EAs care about AI risk because they believe it *is* a real problem i.e. may come to pass in reality if not stopped. It's not an *existing* problem in the sense that a genocidal AI does not yet exist, but that proves way too much; "terrorists have never used nukes, therefore we shouldn't invest effort into preventing terrorists from getting nukes" is risible logic.

2) I happen to agree that animal suffering is not important, due to some fairly-involved ethical reasoning. But... it's not "imaginary or hypothetical", any more than "Negro suffering" was imaginary or hypothetical in the antebellum US South. You do actually have to do that ethical reasoning to distinguish the cases; "common sense" is evidently inadequate to the task.

Expand full comment
Seth Schoen's avatar

As someone who thinks that wild animal suffering is extremely important (though not that we have any idea how to solve it, and it might end up being singularity-complete), I think that there are lots of psychological mechanisms that discourage us from taking such ideas very seriously, and those mechanisms are, if not good, at least things that exist for reasons and that maybe are kind of necessary. I'm thinking partly of Scott's "Epistemic Learned Helplessness" post.

Our natural intuitions tell us that the most real or important problems are found among those that we have some chance of solving on our own or in small groups. That was kind of approximately true for almost all of human history. Maybe human societies would actually break if we didn't have these intuitions; maybe we wouldn't be able to cooperate well or trust each other or whatever. Maybe we would all get distracted by speculating about the big questions and get eaten by wolves.

Nonetheless, it's possible that for some notion of value/reality/correctness, there really are super-duper-big-bads that go way, way beyond the ordinary villains of ordinary life (who are already extremely difficult to defeat). And maybe if we think well enough and deeply enough, we notice them. Maybe in turn this still makes us worse at everyday life, because we're still not exactly wealthy and safe enough to necessarily afford to focus on the biggest and deepest and furthest problems.

Another way of putting that is that in a given notion of value, there's no guarantee that the world or universe as a whole, or just the parts you can see, will turn out not to be inconceivably horrible, much more than you bargained for when you started speculating about it. :-)

In this connection, Scott mentioned the famous line from H. P. Lovecraft:

> The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

Expand full comment
magic9mushroom's avatar

The ethical reasoning I refer to is social-contractarianism - most particularly, the notion that intrinsic value in morality is limited to those capable of morality (at the very least, reciprocity).

The prime justification here for SC is non-exploitability; creatures like lions cannot be bargained with, so giving concessions to them is futile and giving them full citizenship with all its associated freedoms would immediately result in people getting eaten by lions. Wolves can be our slaves, our exhibits or our foes, but never our equals. If you want to treat your animal slaves well, I've no objection, but I don't think giving them rights is warranted since they can't understand (let alone uphold) responsibilities.

(There's also the "suffering as a criterion for moral value leads to existential despair because bacteria have responses describable as pain" issue, but fair point on the "things can be unfixably evil".)

As I said, though, it's something definitely worth thinking about even if I've come to the conclusion that the status quo isn't far off sanity.

(I should clarify that I do think rights should probably be extended to the mirror-test-passers, as there's precedent for their participation in the social contract (most obviously, orcas being employed by fishermen in return for food). But the further down you go the less plausible this is, and certainly it's nonsense for stuff like cows and chickens.)

Expand full comment
Seth Schoen's avatar

I've heard of this contractarian view before but it doesn't seem right to me in that I don't think of rights as a matter of bargaining (so I don't think non-exploitability is a goal). But I can imagine the plausibility of thinking of rights as a matter of bargaining, like finding a Schelling point that rational creatures would consciously assent to and recognize the stability of.

One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.

But maybe we need a stronger conceptual distinction between "being a moral end"/"having moral worth" and "being a participant in human society". Wolves and lions could do the former but not the latter, and people could have responsibilities to them on account of that, but not literal equality in terms of, like, being free to roam about in a city.

I have to admit to some confusion (more about my own view than about yours) because I'm also very uneasy with social contract theories in general, inclining toward viewing people as having presocial or precontractual rights and responsibilities toward others. So while it seems clear that lions can't participate in our hypothetical social contracts, I would also say "but our own rights as humans already don't derive only from our hypothetical social contracts, so lions' inability to participate in them shouldn't mean that the lions don't have rights against humans!". But I've been noticing how hard it can be to say exactly what those rights are or exactly how they're grounded.

Edit: Also, maybe we can find some agreement by thinking in terms of whether you have a responsibility to save others in distress, and whether it's good to do so. Like the child drowning in the pond thing, beloved of some of the inventors of the EA concept. Maybe you would say (I don't know for sure) that it's theoretically supererogatory to save random drowning children in the state of nature, but maybe also clearly required within (one's own?) human society, because societies can form social contracts that expand our responsibilities this way, so that we know what we can expect from each other, at least approximately. By contrast, it might be good to save random drowning elephants (or lions), in the sense that it makes the world better, but it's permanently supererogatory to do this because there's no way that the elephants can reach an agreement or understanding with us whereby we would promise to do this. It could be clearly a good thing to do, that is also clearly not anyone's responsibility to do, because there's no one representing elephants with whom one could agree on this responsibility. Is that kind of similar to your view?

Expand full comment
magic9mushroom's avatar

>One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.

You're right - I am familiar with it, and even considered it while writing the earlier post. It is a pickle.

In the case of infants I partially bite the bullet i.e. I think prompt infanticide isn't morally murder. There are a lot of caveats, though, like "mutilating babies predictably harms the people they'll become", "humans become at least vaguely capable of functioning within society within a year", "killing John's baby without John's permission is grievously harming John", and so on.

With a lot of the others there's the inherent issue of "how would one implement this IRL without leaving an obvious avenue for murdering people and then claiming they were X", so the point's kind of moot. "Barack Obama is a hippopotamus", though, is literally the example Scott used of something more ridiculous than Lizardman Conspiracy, so the spectre doesn't arise, and "was literally born yesterday" is not going to fly either for anyone who's been around long enough for people to want them dead.

Elephants are highly intelligent (they're on the "passes mirror test" list I mentioned) and also fairly social, so not a great example of "you cannot contract with this", but that doesn't address the point.

My real answer to that point is along the lines of: I have no moral intuition on this matter and can't construct a clear rationale. Also, total-order morality is scary as hell. So I just don't care; I would not think less of someone for saving or not saving the lion (absent other considerations e.g. if saving the lion is likely to result in the lion eating people then I'd say it's at least supererogatory to not save it).

Expand full comment
Martin Blank's avatar

Rights don't exist period. They are human fictions. No one "does or does not deserve them". So whether they are "created by bargaining" (or not) is a stipulation of us.

"Wait I have a right to my life and property!" is not going to magically deflect a Viking axe headed towards your face. Nor stop the Vikings from taking your valuables.

I would say essentially rights are marketing slogans for laws/norms a particular society wants people to take very seriously. But to be vulnerable to those slogans, someone needs to actually be a member of your society (broadly defined).

Almost all ethical theorizing goes much better if you just throw all the "rights" language in the garbage. You can bring it back out once you are done to dress up/market your results, but it is a hopelessly confused way of attempting to get to results.

Ethics definitely in large part is about bargaining/allowing groups of agents to cooperate/coordinate together in non-destructive ways. But what things they will agree to and what behaviors are "ethical" is going to be hugely situational.

The ethics/rights you settle on for a small colony of scientists with little hope of resupply are going to be vastly different than a large group of bronze age humans in a vacant-ish area, or a modern interconnected society on a world with 8 billion people. Think about how different norms were in armies, or sailing ships, or communities living closer to the edge of subsistence (infanticide).

If we suddenly ran across aliens with wildly different social structures, and intuitions, what ethical framework we could establish with them is going to depend on the facts of their situation and biology/psychology etc. If their culture revolves around the "females" hunting and killing 80% of the least fit female offspring, telling them about utilitarianism or "rights" is likely a total non-starter.

And more broadly "ethics" theory is not some unified thing. It is a rope, made up of many related but separate strands, none of which go through the whole length. Sort of like finding a definition of "game".

Utilitarianism works for some things, SCT for others, deontology for others, and caveman brain for others.

But there is no reason to think the hodgepodge of intuitions, social structures, cultural norms, and psychological structures we evolved/developed in any way cohere into something that can be simplified down to some unified set of "ethics" or "rights".

Expand full comment
FLWAB's avatar

"Rights don't exist period. They are human fictions."

A reasonable proposition. I believe that rights are not human fictions, but I also am a Christian who believes all men and women are images of the Divine and as such are sacred. If you don't believe that, the fiction theory is more sensible.

Which is why, in Jefferson's first draft of the Declaration of Independence the famous line read as follows:

"We hold these truths to be sacred & undeniable; that all men are created equal & independant, that from that equal creation they derive rights inherent & inalienable,"

The story goes that Franklin convinced Jefferson to change it from "sacred" to "self-evident" in order to please the more strict deists. It is a more universal line to say that these truths are "self-evident", but they're really only self-evident in a culture that is completely steeped in Christianity. The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. People are clearly not equal: some are smarter, some are stronger, etc. It makes sense from a Christian perspective to say that an illiterate Chinese peasant rice farmer and the King of Prussia are both equal in value in the eyes of God: in everyone else's eyes they are clearly extremely unequal in value.

By the same token, if animals have rights they come from their relationship to God and to man: if there is no God, then man can choose whatever relationship he pleases with them.

Expand full comment
dionysus's avatar

"The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. "

This is Christianity taking credit for the accomplishments of its enemies. The ideology of equality, just like the US Declaration of Independence, arose out of Enlightenment philosophy, 1700 years after the advent of Christianity. Enlightenment philosophers were (as you mentioned) often deists, or even atheists, who tended to be skeptical of organized religion. I won't deny that some early liberal philosophers were Christians, but even more anti-freedom, anti-equality, anti-democracy reactionaries were Christian.

Expand full comment
Matthew Carlin's avatar

I'm saving this comment for reference. Very good, thanks.

Expand full comment
Martin Blank's avatar

I might as well get something out of several graduate level ethical theory classes and a lifelong friendship with a couple ethicists.

;)

Expand full comment
Stygian Nutclap's avatar

I would be more explicit and suggest that social-contractarianism needn't even imply intrinsic value.

Expand full comment
[insert here] delenda est's avatar

Maybe kind of necessary...?

Can you posit any version of history that produces wild mammals and does not involve what you consider to be suffering?

Expand full comment
George H.'s avatar

Yeah, 'Nature red in tooth and claw', suffering is what drives evolution. Though I also understand the human desire to help some injured wild animal and bring it back to health. It's hard to reason that... For the betterment of the genetic future of your species I am going to let you suffer and die. Though that might be the best thing for the future.

Expand full comment
The Ancient Geek's avatar

Altruists are obliged to do something about X if X is real and X is morally relevant to them. I have found that a lot of EA/rationalist types aren't very open to the idea that the second, normative claim needs to be established separately, and also aren't very open to the idea that there can be metaethical doubt about utilitarianism.

Expand full comment
Level 50 Lapras's avatar

And in the hypothetical alternate world EA movement which distributed bednets and spent a lot of time trying to save people from eternal damnation through Jesus Christ, the idea that people would abandon the latter if you could "prove" that hell isn't real would be cold comfort to anyone looking on from the outside.

Expand full comment
bucket's avatar

The evangelism/having a discussion boundary does seem to have more to do with “do the people talking to each other have mutually reconcilable value systems?” rather than any intrinsic properties of the words coming out of either of their mouths.

If yes, then the listener can slot the information being spoken into their ears into a corrigible framework, frictionlessly, inception-style—almost as if they had already always believed it to be true.

If no, then some flag will eventually get thrown in a listener’s mind that *this* person is one of *those* people that believes that *wrong* thing, and they’re trying to convince me that *wrong* thing is *true* when it’s *not*.

In this way, literally describing something within the framework of a value system that is incompatible with another can be interpreted as an attack by that other (or, more weakly, evangelism). The crux is foundational.

Having wasted a youth arguing with folks on the internet, I’m fairly pessimistic about truly having conversations with folks that I know to have these “mutual incompatibility” triggers. You basically have to encode your message in a way that completely dodges the memetic immune system they’ve erected/has been erected around their beliefs. Worse, knowing that you’re trying to explicitly package information in a way that dodges their memetic immune systems makes them even more likely to interpret your information as an attack (which, honestly, can you blame them? You’re trying to overturn one of their core values! Flipping some cherished bit! People actually walk around with these bits!! Any wedge issue you can possibly imagine cleaves a memeplex in half around it!)

This will be foundationally problematic for any organization that’s explicitly trying to manufacture controversial change. People don’t want to flip their bits.

Expand full comment
Seth Schoen's avatar

Theoretically, maybe not all seemingly-intractable disagreements are about values, so maybe they're not all theoretically intractable for that reason.

Like religious evangelism, which is a famously intractable kind of Internet argument, sometimes appears to hinge on disagreements of matters of fact. Like people will argue about whether or not a specific miracle really happened, and so then we should or should not be persuaded to believe a religious tradition founded on, or confirmed by, that miracle. Supposedly, each claimed miracle really happened, or not, or has some kind of evidence for or against it which is independent of people's values.

But your point might still work pretty much the same if you can generalize "value systems" further. Maybe to "stances" à la David Chapman or "pretty deep commitments" or "axioms" or something?

Expand full comment
bucket's avatar

Indeed, I may be overloading the word “value” when I really mean something closer to a deeply rooted axiom (ie, disagreement not being about the fact of the matter of individual details of possible miracles, but more about whether the ontology of the universe contains something even remotely like a miracle)

Expand full comment
FeepingCreature's avatar

edit: I read too fast and mistook this list for something from a serious blog post rather than just an example for a comment. As a result, this comment is probably overly harsh in its phrasing relative to the target. Feel free to skip it.

I think I was thinking something like "I'd just ignored that list but if Scott is citing it I guess I'd better respond in detail."

Original comment: Yeah let me elaborate why the "paradigmatic criticism" made me scoff:

- "giving in the developing world is bad if it leads to bad outcomes and you can't measure the bad outcomes" so ... don't give in the developing world? Ever? or measure better? measure better how? At least point me at the book to read and give a one-line summary of recommendations to address this, because this is clearly not a recommendation followed by the non-EA charity space anyways.

- "this type of giving reflects the giver's priorities" :very sarcastic voice: really??? charitable giving is decided on the basis of the giver's interests? yeah no shit, it's my money. The whole point of EA is "Do *you want* to do the most good?" This is inherently anchored to the giver's value system.

- "this type of giving strangles local attempts to do the same work" see I know the examples this is referring to but this is one case where it would have been worlds better to give at least one example because as written this is beat for beat equivalent to actually used political arguments to abolish literally every social safety net. Stop sucking the government's teat! Starving African ... welfare queen!

- "The EA movement is obsessed with imaginary or hypothetical problems" ... "Stop wanting wrong things for no reason" has literally not convinced any human ever in the history of the planet. I now disagree about the noncentral fallacy - this argument is the worst argument in the world.

- "The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized" okay, um, how do I say this, have you read the Sequences? if you can't optimize an outcome, you cannot do anything whatsoever. so like, sure, but absent that there's also no basis for your criticism? How are you saying that the EA charitable giving is *bad*? Did you perhaps model an outcome and are trying to avoid it because it's bad? Yeah that's optimizing, optimizing is the thing you are doing there, as the quote goes, now we're just haggling about utility.

- "The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space" Yes.

- "The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad" I believe this is called "being a moral opinion", yes. edit: Am I endorsing this? No, I just think it cannot be fully avoided. Moral claims cause moral strife.

And like. Maybe this is uncharitable and the book-length opinions really have genuine worth and value and should be read by everyone in EA. But if they do, none of the value made it into this list! Clearly whatever the minimum length for convincing literature is has to be somewhere in this half-open range.

Maybe submit a book review?

Expand full comment
Notmy Realname's avatar

>So maybe my thoughts on the actual EA criticism contest are something like “I haven’t checked exactly what things they do or don’t want criticism of, but I’m prepared to be basically fine if they want criticism of some stuff but not others”.

This feels like a Motte and Bailey issue. When I read the rules, the preamble states pretty clearly that a wide range of topics are welcome, but the rule minutiae make it hard to address broader paradigmatic issues. The organizers can call the winner The Best Criticism of EA even though they have implicitly limited the potential criticisms they receive to exclude ones they may be uncomfortable with.

To build off your example, imagine on the next Sunday the pastor comes back and says "We judged 'my voice is too quiet' as the winner of the Criticism of Christianity contest, paid the winner $5,000, and bought me a new microphone". Sure he got some criticism and received it, but the overall framing of a contest implies that this was the most important criticism to address, and conveniently an easy to address one won. He can say he addressed the biggest criticism, while at the same time not addressing the "God isn't real" criticism.

Expand full comment
FeepingCreature's avatar

That just sounds like a disagreement about the word "best"? It sounds to me like the organizers were looking for the most useful criticism to improve their performance within the paradigm.

> conveniently an easy to address one won.

This is in fact what you want out of criticisms though. Easy to address means you don't need to put in much effort to get a potentially large improvement. Now maybe 30% more churchgoers can actually hear the sermon! That's genuinely a great improvement! What's wrong with being easy?

Expand full comment
Notmy Realname's avatar

It's a contest for criticism of effective altruism, where entries will be scored and the winners, runners-up, and honorable mentions will receive prize money. In my opinion there is an implicit statement that the winners are better than the non-winners.

>This is in fact what you want out of criticisms though.

My point is that by giving the win to "turn up the mic" rather than "God isn't real" in a contest broadly framed as "criticism of Christianity" rather than "criticism of my specific performance in our single church", the pastor gets to declare victory over a much broader field of concerns than he actually faced

Expand full comment
FeepingCreature's avatar

Right, so you're looking at it as a social battle? But I don't think the pastor was looking to win a social battle at all, the pastor just wanted to improve the sermon in an effective fashion. I don't know how the pastor could signal this; maybe don't publicize the winner at all? But then he can just give the money to his son or whatever. Publicize the winner in a small closed circle?

It's like if you're driving from New York to Sacramento and you make a blogpost titled "Soliciting criticisms of my planned route for driving a car to Sacramento" and then people get very upset that you didn't select the paradigmatic criticism of "abolish fossil-burning cars and build out rail lines." What are you, an oil industry shill? No, you just did not want to embark on a multi-decade reevaluation of your entire way of living, you just wanted to get to Sacramento faster, and paradigmatic advice like "what's so great about Sacramento anyways" is in fact of less than no use to you.

I think EA wanted to get to Sacramento, and the only reason this has blown up so much is that they accidentally labeled their post "Soliciting criticism of our driving plan" and now people think they want to win a performative victory for the combustion engine or w/e.

Expand full comment
Notmy Realname's avatar

To be clear, I have no problem with the pastor wanting to improve his sermon, or, to escape the analogy, with the EA folks wanting to improve specific actions, policies, programs, or what have you, which is what I think the rules of the contest target. My complaint is that their tl;dr reads "We're running a writing contest for critically engaging with theory or work in effective altruism (EA)", so if they only give prizes to the best critiques within their specific framework there is an implicit statement that the critiques within their framework are the best "criti[ques] engaging with theory or work in effective altruism".

To take your example, for "Soliciting criticisms of my planned route for driving a car to Sacramento" it's perfectly reasonable to not want a response on Peak Oil. However, if you title your contest "Critically engaging with theory or work in Driving" and then only address and award criticisms of your planned drive to Sacramento, you're again declaring victory against a much broader field of critiques than you actually faced.

My point is exactly your last edit, the contest itself is reasonable but the framing of it is way out of proportion to the actual eligibility criteria in a way that to me feels intellectually dishonest as it allows them to give the pretense of addressing broad criticism while actually not needing to do so.

I am those people who think that.

Expand full comment
FeepingCreature's avatar

Fair enough! I agree with this criticism, the labeling is clearly poor (for the goal I presume). I think they underestimated the breadth of available disagreement.

edit: I think to some extent I'm discarding the idea that EA is trying to win a performative battle because that just sounds... useless? I can't imagine anyone going "Oh, EA asked for reviews but there was a really good review saying that everyone should fully participate in cutthroat capitalism to maximize value fulfillment but EA didn't accept it, that must mean that this argument was fully demolished by them if it didn't win"... and it seems like that's what would need to be believed for the performative victory over paradigmatic criticism to actually affect anyone. For a contest like this, I would expect everyone reacting to the outcome to have already priced in that "EA does EA things", such that the victory of an EA-aligned entry offers no new evidence.

Kenny Easwaran's avatar

If the winners yield a concrete improvement that makes things substantively better, then yes, their criticism is in an important way better than the ones that make the deeper and more trenchant criticism, in a way that doesn't help yield any big improvement for people!

Matthew Carlin's avatar

This is what Zvi was getting at with his list of assumptions and opaque line by line critique of their wording: they rather seem to be trying to have their cake and eat it too, to get to tell themselves and others that they took in all the criticism from the grandest to the smallest, did the rational thing, but also, to be able to implicitly dismiss the really hard and vague critiques for structural reasons.

Douglas Knight's avatar

Yeah, Scott hasn't read the rules, but he complains that Zvi finds them hard to read!

Zvi wasn't (just) complaining that the contest was too narrow, but that this narrowness was opaque and that people kept telling him to enter it.

Michael Watts's avatar

> The universe was thought to be infinitely large and infinitely old and that matter is approximately uniformly distributed at the largest scales (Copernican Principle). Any line of sight should eventually hit a star. Work out the math and the entire sky should be as bright as a sun all the time. This contradicts our observation that the sky is dark at night. This paradox was eventually resolved by accepting that the age of the universe is finite

People still bring this up as an unresolved paradox, which I've never found particularly convincing. But I don't see how a finite age of the universe is supposed to be a resolution. According to this line of argument... why are some stars brighter than other stars? Why is the age of the universe relevant? Are all the stars we can see constantly getting brighter, because the age of the universe is increasing?

Ancient Oak's avatar

> But I don't see how a finite age of the universe is supposed to be a resolution

In a stationary infinite universe, light may be (for now) blocked by gas and dust between us and distant stars.

Wasserschweinchen's avatar

As I understand it, it's basically because if the universe is x years old, you can't see stars that are more than x light years away, and most directions in the sky won't have a star within x light years.

Michael Watts's avatar

This is what I don't understand. The lesson of high-powered space photography is that, within the finite observable universe, every line of sight already terminates in a star. So it's not relevant that there are more stars out beyond the horizon of observability - we wouldn't see them anyway, because there are observable stars blocking them.

But we can't see the observable stars either (without going to heroic lengths to capture their light), because they're not bright enough. And that fact makes me question why it's supposed to be a paradox that the night sky is dark. I think "not bright" and "dark" mean the same thing.

https://en.wikipedia.org/wiki/Hubble_Ultra-Deep_Field

dionysus's avatar

No, it's definitely not true that every line of sight terminates in a star. No picture of the night sky you've ever seen, unless it's one taken by an interferometer like CHARA, resolves the surface of a star. Stars appear to have finite size because of diffraction across the aperture of the telescope or lens, resulting in a point spread function: https://en.wikipedia.org/wiki/Point_spread_function

We can do some order-of-magnitude calculations. The average density of the universe is actually very well measured, and it's 9.47e-30 g/cm^3 (http://hyperphysics.phy-astr.gsu.edu/hbase/Astro/denpar.html). Only 16% of that is normal matter (the rest is dark matter), and only 10% of the normal matter is in stars (the rest is in gas). So that's a stellar density of n ≈ 10^-64 stars/cm^3. The Sun is about R = 7e10 cm in radius, so its cross-section is σ = πR^2 ≈ 1.5e22 cm^2, and you'd need to travel in a straight line for about L = 1/(nσ) ≈ 7e41 cm ≈ 2e14 Gpc before you hit a star. For comparison, the observable universe is 13 Gpc in radius.
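The mean-free-path estimate can be sanity-checked numerically. A quick sketch (my own, not from the thread), using the comment's density figures and the geometric cross-section σ = πR² for the hit probability:

```python
# Mean free path of a sight line before it hits a star, using the
# comment's assumed figures (CGS units) and the geometric cross-section
# sigma = pi * R^2.
import math

rho = 9.47e-30          # mean density of the universe, g/cm^3
frac_normal = 0.16      # fraction that is normal (baryonic) matter
frac_stars = 0.10       # fraction of normal matter in stars
m_sun = 1.99e33         # solar mass, g
r_sun = 6.96e10         # solar radius, cm

n = rho * frac_normal * frac_stars / m_sun   # stars per cm^3, ~1e-64
sigma = math.pi * r_sun ** 2                 # cm^2
mfp_cm = 1.0 / (n * sigma)                   # mean free path, cm

CM_PER_GPC = 3.086e27
mfp_gpc = mfp_cm / CM_PER_GPC
observable_radius_gpc = 13.0

print(f"stellar density ~ {n:.1e} /cm^3")
print(f"mean free path ~ {mfp_cm:.1e} cm ~ {mfp_gpc:.1e} Gpc")
print(f"vs. observable universe: {mfp_gpc / observable_radius_gpc:.1e}x larger")
```

Even with the full cross-section, a typical sight line runs on the order of 10^13 times the radius of the observable universe before meeting a star, which is the quantitative heart of the finite-age resolution.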

George H.'s avatar

OK just off the top of my head, but I think it's the expansion of the universe that is more important. As time goes by there is a smaller and smaller fraction of the universe that is observable. (At some time in the past there was light everywhere... this is the Cosmic Background radiation that we can now observe.)

Kenny Easwaran's avatar

Let's temporarily assume that all stars are the same size and the same brightness. From observing the area around us (out to a few dozen light years or so) it appears that there's something like a 10^-30 chance that any star-sized region of space has a star in it. (I have no idea if this is the right number, but there's some specific tiny number.) Now consider some ray into space. Along any star-diameter-sized distance of that ray, there's a 10^-30 chance of running into a star. So with probability 1, you'll eventually run into a star. Thus, this ray will get as many photons coming along it as any other ray that hits a star, so the ray should look just as bright as any other ray. (We have to be a bit careful if we don't want to think of 0-width rays, and instead think of a small cone. If a star is twice as far away, then its light will be 1/4 the brightness - but the probability of one star within distance X is equal to the probability of 4 stars within distance 2X, which is equal to the probability of 9 stars within distance 3X, and any of these results are approximately equally bright, and with probability 1, one of them will occur.)

The reason the age of the universe is relevant is that the average distance to the nearest star on one of these paths is looooong. If there's a 10^-30 chance of a star within the distance of a sun's diameter, then since the sun's diameter is about 10^-8 of a light year, there's about a 10^-22 chance of a star within the distance of a light year. Thus, on average, the nearest star on any of these rays is about 10^22 light years (with many of them being substantially farther than this). If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.

As you suggest, we also will have to deal with the assumption that stars are the same size and brightness. As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.

Michael Watts's avatar

> As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.

But this is an argument that most of the sky will be as bright as the rest of the sky. That's true; most of the sky is dark. The paradox is supposed to be that most of the sky is dark when, according to... someone... it should be bright.

> If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.

But we don't have problems looking along any given sightline and finding a star there. We would be shocked to look along *any* sightline and not find a star. Starlight is already reaching us along every sightline.

Kenny Easwaran's avatar

> this is an argument that most of the sky will be as bright as the rest of the sky

It's not just that - it's an argument that most of the sky will be as bright as the average brightness of opaque matter in the universe. Within our stellar neighborhood, the average opaque matter is close to as bright as a star, so we should expect the sky to be that bright.

> we don't have problems looking along any given sightline and finding a star there.

We actually do - even in the Webb deep field, the majority of the sight lines have no visible star.

Edward Scizorhands's avatar

My problem is that you can have an infinite number of stars yet still have a sky that isn't filled with them.

Imagine the stars are at every integer location on a 2-d graph. And I'm sitting at (0,0) and staring at the point (1, √2). I am not going to be looking directly at any star.

Wasserschweinchen's avatar

If the stars have radius .01, you'll be looking at the one at (29, 41).
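This kind of claim can be checked by brute force. A sketch (my own check; the hit test is "perpendicular distance from the star's center to the line y = √2·x is less than the star's radius"): the first lattice star the ray actually enters turns out to be (29, 41), with (99, 140) another hit further along the same ray.

```python
# Which integer lattice points ("stars" of radius 0.01) does the ray from
# (0, 0) in direction (1, sqrt(2)) pass through?  A star is hit when the
# perpendicular distance from its center to the line sqrt(2)*x - y = 0
# is less than the star's radius.
import math

RADIUS = 0.01

def dist_to_ray(x, y):
    # distance from (x, y) to the line sqrt(2)*x - y = 0
    return abs(math.sqrt(2) * x - y) / math.sqrt(3)

hits = [(x, y)
        for x in range(1, 150)
        for y in range(1, 220)
        if dist_to_ray(x, y) < RADIUS]

print(hits[0])            # first star the ray enters: (29, 41)
print((99, 140) in hits)  # the more distant star at (99, 140) is hit too: True
```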

Robert Jones's avatar

Right, you also need the assumption that stars are uniformly distributed (and time invariant). You can think of the universe as composed of an infinite number of shells centred on Earth. The light from each star goes as the inverse square of the radius, but each shell's surface area goes as the square of the radius, so the total light received at Earth from each shell is the same. But there are infinitely many shells, so summing over all of them shows the Earth receives infinite light.
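The shell argument can be made concrete with a toy calculation (a sketch with made-up unit values for density, luminosity, and shell thickness): the star count in a shell grows as r², each star's flux falls as 1/r², so every shell contributes the same flux and the total grows without bound as shells are added.

```python
# Toy version of the shell argument: a shell at radius r with thickness dr
# holds n * 4*pi*r^2 * dr stars, each delivering L / (4*pi*r^2) flux at
# Earth, so each shell contributes n * L * dr no matter how far away it is.
import math

n, L, dr = 1.0, 1.0, 1.0   # toy star density, luminosity, shell thickness

def shell_flux(r):
    stars_in_shell = n * 4 * math.pi * r ** 2 * dr
    flux_per_star = L / (4 * math.pi * r ** 2)
    return stars_in_shell * flux_per_star   # = n * L * dr, independent of r

total = sum(shell_flux(r) for r in range(1, 1001))
print(total)   # ~1000 for 1000 shells; grows linearly, so infinite shells -> infinite flux
```

Cutting the sum off at a finite radius, which is what a finite light-travel time does, is what keeps the received flux finite.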

TM's avatar

I read the 'evangelizing' comment as being related to some EA practices and found it difficult to imagine it could have been meant to apply to Scott's blog. (though still possible of course)

I agree 'there’s no clear line between expressing an opinion and evangelizing' and I also agree that telling everybody about that 'thing' that is so important to you can be misunderstood even if you don't want to convert them ... but I still think it's something like a continuum, with some things (more) clearly being on the evangelizing side and some on the 'not evangelizing' side. 'Writing something on your blog that people visit when they want' and 'sending free printed copies of HPMOR to folks uninvited' sure seem different. I'm also not at college, but I lately read a bit of specific criticism of EA's recruitment work, and I can understand why some folks (apparently) could find it cult-like.

Which doesn't take away from the question of whether you find it necessary or whether it's more or less effective in spreading your ideas.

And I really liked the last sentence.

Matthew Carlin's avatar

[I wrote it and] it was clearly not meant to apply to Scott's blog. I've been a reader since he was on livejournal and I've done a stint as a paying customer on Substack.

I pull Scott. EA pushes me. Baptists push me.

Scott, it's as simple as that: who pushes, who pulls. Sure, maybe the oncoming bus push or the abolitionist push are worth it, but it's a really, really high bar to clear.

TM's avatar

> but I’m prepared to be basically fine if they want criticism of some stuff but not others.

Fine with me, but it would be useful if the contest were explicit and clear about this. This would have several advantages: People would know what to submit and what they could win a prize for. Contest organizers would have a higher chance of getting what they want. And EA couldn't use one kind of criticism to fend off another, or pride themselves on taking all kinds of criticism when they are looking for something very specific.

A priest who respectfully engages in an open debate with folks who find religion appealing but also can't believe in God gets a different kind of acknowledgement from me than a priest who asks how to deliver his sermon in a way to be best heard.

bruce's avatar

There was no first person to consider abolitionism. There were lots of people who did not want to be slaves. But the grinding poverty of the past meant every political economy on Earth was based on slavery through 1600, though it was a lot less basic to places not invaded recently, like England in 1600. Around 1600 in England, when North African Muslims were raiding Europeans in general and sometimes English sailors for slaves, lots of Brits said this was bad and English should not be raided for slaves. It was taught in schools and preached from pulpits. No Englishman should be a slave. Blah.

And when everyone is preaching blah, some preachers strut their stuff and say blah blah blah. If it sounds studly, other preachers will go along and preach blah blah blah. Not just no Englishman should be a slave, but no Englishwoman. Nobody should be a slave! Over the next couple hundred years it slowly caught on in England. Slowly, because enslaving outgroup was so profitable.

1600's through 1800's British Isles were a perfectly placed pirate base against everyone else in Western Europe. Piracy had a moral hazard, but was so profitable they ended up with a British Empire. Maynard Keynes thought the British Treasury was founded by Drake's piracy. They had to keep it. How to justify the moral hazard? The Black Legend of the Bad Spanish Empire of slavers worked okay. Meanwhile the pirates were taking and trading slaves like crazed pirates, and making bank, enough to shift from raiding to trading, enough to be governed from London. The empire they were deniably building together could point to outgroup's nasty slaving ways, and Brit slavers were ingroup enslaving outgroup.

The 13 colonies of piratical slavers and a lot of British poors wanting a better life prospered and became widely known across Britain as 'the best place in the world for a poor man' (per Bernard Bailyn). When they were poor, London let them handle their own affairs. By the 1750's they were building (I think) a third of the British merchant marine and were worth governing by their betters. George Washington exceeded London's orders (while following Virginia's orders, and supported by Whigs in London's government) and attacked Fort Duquesne, fortified by the French against British (mostly Virginia British) expansion. He lost and was taken prisoner, but was released and not punished by Virginia, and supported by Whigs. The Brits came back and took the fort, Fort Pitt. Washington had triggered the Seven Years' War between France and England. England won. Now to handle the poors' affairs. The poors liked handling their own affairs.

The colonies revolted and all thirteen fought for eight years of fairly nasty war. Long nasty wars have a high moral hazard and need justifications. The Tory Samuel Johnson, already toasting the success of the next slave revolt in the West Indies, wrote a good polemic against the revolting colonials - 'Why are the loudest YELPS for liberty from the floggers of Negroes?' - and John Wesley stole it. Johnson was happy 'to have gained such a mind as yours confirms me', and the Methodists preached Wesley's patriotic Brit sermons against the colonials and against slavery. For the next hundred years the British Empire preached abolition as a justification for bagging any profitable area that looked easy and, like everywhere, was based on slavery. 'Castlereagh, the name is like a knell' bribed the Spanish Foreign Minister with 100,000 pounds to abolish slavery in Spanish America, triggering a revolt in Spanish America that opened Spanish America to British trade. And ended slavery in Spanish America.

Even the revolting colonials gave up slavery, not least because the moral hazard of slavery made it less profitable as the Industrial Revolution got going. Also the Black Legend of the Evil Spanish Empire helped justify grabbing Florida, and then also the northern wilderness loosely held by New Spain. Everyone has been an abolitionist since.

Not from one pushy evangelist, but from a mix of self-interest and genuine moral choice and a lot of preachers and teachers. Like EA.

Seth Schoen's avatar

It feels upsetting to be reminded of self-interest in abolitionism. Maybe this is partly because of the uncomfortable thought that, if certain contingent economic developments hadn't happened as they did, we would still have slavery today!

But we often hear that in the U.S. civil war the north had economic interests opposed to slavery and the south had economic interests in favor of it. And also that there were changes over time that were making slavery more unprofitable. Even that more limited account suggests that some people had the moral luck to happen to benefit less from slavery, so they were less likely to end up being responsible for engaging in it and perpetuating it.

https://en.wikipedia.org/wiki/Moral_luck

https://plato.stanford.edu/entries/moral-luck/

TGGP's avatar

Slavery was not the basis for the British economy even prior to 1600. Slavery died out in western europe after the collapse of the Roman empire, largely replaced by serfdom.

bruce's avatar

Yes, I'd conflate serfdom (not slavery, strictly, because selling serfs had technical problems) with slavery. Also the Japanese making the lowest wages in human history. Slavers, pirates, and overseers of radically low-wage laborers aren't fussy about the letter of the law, as Americans are rediscovering given the bipartisan consensus for lower wages through higher immigration or by any means necessary. And in England serfdom was dying out. Not invaded recently, give or take slave raids from pirates and the Spanish losing at sea.

TGGP's avatar

That reminds me of pseudoerasmus on those low Japanese wages:

https://pseudoerasmus.com/2017/10/02/ijd/

Caba's avatar

As a European, what you've just said is so obvious, and so much the way everyone over here understands history, that nobody would think it needs to be stated.

That it is not obvious to Americans has been one of several culture shocks for me about how differently other cultures view the past, as I found out when I started to discover the anglophone internet, a long time ago.

I'm not saying that Americans have the facts of history wrong, but they tend to frame them differently from the way we do, in a way that seems to imply slavery has always been a central feature of Western civilization until Victorian era abolitionism. Whereas if you asked me about the "end of slavery" it is the transition from antiquity to middle ages that would come to my mind.

And yet there is something to be said for that framing; for example I hadn't realized, until English speakers (including Shakespeare) made me see it, that places such as Venice traded in slaves throughout the Christian era. I don't remember my high school history books mentioning this fact.

I've had the same culture shock about the way English speakers see several other parts of history. For example Vikings, who loom so large that they swallow up the whole early Middle Ages in the Anglo-American mind (instead of the Carolingians). It's a whole other way of viewing the European past.

Ghillie Dhu's avatar

You seem to be using the term "moral hazard" in an idiosyncratic way, which made it more difficult (at least for me) to understand this comment.

bruce's avatar

I believe insurance companies use the phrase to mean situations where people are tempted to take risks they are not responsible for. As slaves are in practice reduced to the status of minors or below, their owners are liable to take risks with them irresponsibly.

It's hard to talk about this stuff without spouting outrage in place of facts or being so cold you don't notice the skulls.

Ghillie Dhu's avatar

I think that's just an externality, and moral hazard is specific to insurance.

bruce's avatar

Every business is risk, capital, labor, and risk covers the other two. Piracy and slavery were all kinds of risky. Insurance is about risk. 'Just' an externality?

Ghillie Dhu's avatar

"Moral hazard" is only about how an insured party will make riskier choices than they would if uninsured; it's not an externality because the insurer has consented to the policy.

Piracy, slavery, and conquest impose externalities on their victims, but there would only be moral hazard if someone were, e.g., insuring the pirate ship.

"Just an externality" as in: I don't think there's a more specific term for the sort of behavior you're describing.

bruce's avatar

Most victims of piracy, slavery and conquest were predators instead of prey every chance they got, so I could possibly argue it's not an externality. They were all doing the dance.

But no, I'm using 'moral hazard' in a wider sense from half-remembered Michael Gilbert mystery stories read in the 1970's.

Nancy Lebovitz's avatar

Hypothesis: The demand for criticism of EA is larger than the supply of good criticism of EA.

George H.'s avatar

Oh dear, I am not sure where I heard it first. (E. Weinstein?) But perhaps the problem of 'everyone' being a racist these days is that we don't have enough real racists to point to and say, "see look how bad that is."

Nancy Lebovitz's avatar

It's complicated, but I think there's both a lot of covert racism (because overt racism gets punished) and rewards for accusing people of racism. It's Goodhart on top of Goodhart.

Mo Nastri's avatar

> the trick is evangelizing without making people hate you. I’ve worked on this skill for many years, and the best solution I’ve come up with is talking about a bunch of things so nobody feels too lectured to about any particular issue.

I think this is also related to some of the writing advice you gave in https://slatestarcodex.com/2016/02/20/writing-advice/ especially regarding how to talk about potentially incendiary topics.

TM's avatar

I really like the example of the Priest. But I think the potential criticisms of 'God doesn't exist' or 'buy a better mic so we can hear you better' reflect extremes and, without additional examples in between, miss a point.

Again, suppose there is a Priest who wants to see more people attend Sunday service and is also worried that people are leaving. Ultimately he is interested in more people believing more strongly in God. He is asking for criticism and what he can do better.

He's maybe hoping for 'make your sermon a bit shorter' or 'hold service an hour later on Sunday morning and I'd be there'. But instead he may get criticism like 'you're driving this big car while preaching poverty; I don't want to listen to you preaching X while doing Z' or 'I believe in God, but I'm appalled by the cases of abuse that took place in your ranks. Write an open letter to your Bishop to fully investigate those cases, and then I'll be happy to attend your service.'

I think the Priest is fine to reject the criticism of 'there is no God' - this is the one thing he cannot give up and still be a Priest. And anyway, those guys will never end up in his church.

The example of 'voice is too low, buy a new mic' found in one of the comments is in some ways extreme in the other direction: the Priest can easily solve this with limited resources; it doesn't require any behavioural change from him, no change whatsoever in 'paradigms', and also no loss of status or comfort. It's probably not even a criticism he'd feel uneasy about - compare 'you need a new mic' to 'your voice is unpleasant' or 'your sermons are chaotic and can't be understood'. Simple solution and win-win.

But what about the third category? Preaching poverty and love-thy-neighbour while living in prosperity and making use of luxury goods not available to many others? Or not reacting to the cases of abuse in his own ranks? I think those examples are closer to the 'paradigmatic' criticism that we're talking about in EA. It requires real changes in thinking (all the priests I know do it, but is it really okay to drive this big car while preaching poverty? Am I allowed to criticize a bishop?) and behaviour, and it risks losing the support of other important members of the organization. While not giving up on what is the (most narrowly defined!) core of the issue.

I would argue that those are the criticisms the Priest should hear. Or more precisely: I think it's the Priest's decision to ask for improvements to his sermon only and implement them. Arguably that's already more than what most priests are doing. But I think it's a missed opportunity not to listen to the complaints about his affluent lifestyle and the apparent 'sins' in his own ranks. Especially when you care about people coming to your services and believing in God.

As mentioned, I think those examples are closer to 'paradigmatic' criticism of EA. And I think they are worth being heard. Especially if they come from folks being close to the (again, most narrowly defined!) core value of EA.

Erwin's avatar

This is a very good point and example.

I'm not much into EA, but from the discussions here what comes to my mind is:

EA always wants to help the poor, and puts a lot of effort into using money effectively to do so. But they turn a blind eye to where the money comes from and the roles the wealthy and US foreign policy play in making or keeping people poor. Of course, if they did look, it would be much harder to raise funds from the wealthy, and they would have to question their own lifestyle and cultural beliefs. Doing good feels better, and being the best at doing good feels the best, so they strive and look for critics to become better at doing good. But if it ever touches their own lifestyle, it does so only minimally, e.g. driving an electric car instead of an SUV, or living like middle class even when they could afford much more.

Ravi D'Elia's avatar

I'm intrigued by the idea that wild animal suffering is hypothetical or imaginary. Do wild animals not exist? Are they all living idyllic heavenly lives without suffering? You don't have to care about animal suffering I guess, I mean I still eat meat so clearly I don't care that much, but imaginary just isn't correct.

gph's avatar

Depends on what you mean by suffering. Does a dog have the Buddha-nature?

Artischoke's avatar

I don't think one can cite Buddhists to support the idea that "human suffering is real in a way animal suffering isn't". See e.g. the Mahayana Bodhisattva vow, which is about ending the suffering of all *sentient beings* - where the grey area is not "are animals included?" but rather "what about bacteria or plants?"

Martin Blank's avatar

I think it is more about whether the suffering is morally relevant. Is it "suffering" in the ethical sense? Utility monsters and all that.

Say we discovered that all worker ants are in extreme constant pain. The insufficiently nuanced utility maximizer might end up requiring that the entire world, human and otherwise, be changed to focus on this massive moral emergency.

FLWAB's avatar

I think it's more a matter of helplessness.

A few weeks ago I was looking out a window into my backyard and I saw something strange: an osprey was standing in the yard, with a robin pinned under its claws. It was pecking at the robin's neck. Instantly, I had the urge to run outside and save the robin. Yet in that same moment I stopped myself: what good would I be accomplishing? The osprey needs to eat too. If I save the robin then the osprey either starves or kills some other prey. I am not opposed to the existence of ospreys, and would be a little sad if they all went extinct. So, given that I think ospreys should exist, what good would I be accomplishing by trying to rob this one of its dinner?

I read a short story by an EA-affiliated (or adjacent, I don't know precisely) writer that imagined a future where all animal life besides Homo sapiens has been eradicated. Why? Because it was the only way to stop animal suffering. I was, and remain, appalled at the idea of an Earth sterilized of animal life. Yet that does seem to be the only solution to animal suffering, particularly wild animal suffering.

Still, if someone had a specific proposal to reduce wild animal suffering, I would hear it. If someone told me "We can spend X dollars and prevent 10,000 wild deer from being slowly eaten from the inside by parasites" then I would be interested and might even donate, for a certain value of X.

Martin Blank's avatar

And then the deer overpopulate and get hit by cars and the DNR expands the hunting permits for the next year.

Paul Logan's avatar

Tbh I think the term “evangelize” is the combat meme du jour. It’s exacerbated by cultural propriety around online safe spaces that varies between platforms. So you're accused of evangelism either because:

A. Someone feels their community bounds have been violated.

B. Someone feels personally attacked by your views and rationalizes that by externalizing it into “my community bounds are being violated.”

Reddit and twitter accuse people of evangelizing a lot because of a need to guard very porous cultural borders with big dramatic virtue signaling. It serves the dual purpose of keeping you “out” of their safe space and warning others like you away from it. People who come onto *your* platform and accuse you of evangelizing are just doing the very human thing of believing they are the only ones entitled to ideological borders.

walruss's avatar

Y'know what's odd? When I criticize pop culture or our news coverage or whatever, I'm definitely not lying. I genuinely think that the way we process narrative in modern society is unhealthy to the point of doing massive social damage. But also I enjoy doing it. It makes me feel like I noticed something others missed (even when my point is trite and cliche). Criticism always elevates you - at the least, it makes you seem smarter than the thing you criticize.

Zvi's a careful thinker, and I don't think he just wrote a criticism he doesn't believe to write one. But he's also clearly reveling in the recursive nature of this discussion, the fact that there are deeper levels to explore, and also he's hoping to win a contest. To some degree criticism is a form of entertainment - it can show mastery of a subject in a way that a straightforward statement of principle can't. And the more abstracted that criticism is, the broader the mastery feels.

When I read your discussion of psych's "woke" criticism I thought to myself, "That's different. It's just virtue signalling to people who already agree with it all." But I'm not sure it's different. It's possible, I think, that broad criticism signals in-group prestige a lot more easily than the difficult work of building out specific ideas. Especially when I can then just say "oh your specific idea is just a new version of this general, unfalsifiable trend I already discussed. Checkmate, people who try to do things." (See, e.g. TheLastPsych).

Hilarius Bookbinder's avatar

>I think there’s something where whenever a philosophy makes unusual moral demands on people (eg vegetarianism), then talking about it at all gets people accused of “evangelizing”, in a way unlike just talking about abortion or taxes or communism or which kinds of music are better

The difference is this. If I’m going on and on about being vegan, my non-vegan listeners see me as setting myself up as their moral superior, someone who rejects their lifestyle in favor of my own more virtuous one. So I’m viewed as arrogant, judgmental, and condescending. But if I’m going on and on about how great John Coltrane is, then worst case I’ll be seen as a slightly boorish fanboy, and best case as a friend who just wants to share some cool music. It’s hard not to see someone regularly touting the virtues of EA as similarly putting themselves on a moral pillar, in a way that an economist defending some wonkish tax policy is not.

Don P.'s avatar

Also...did Scott put "abortion" on that list to see if anyone was noticing? The anti-abortion movement is literally a co-product of evangelical Christianity (along with the Catholic Church), and if yelling outside abortion providers with signs and pictures isn't "evangelism", then it sounds like the only allowed use of the word is going to be in conversations that begin "Let me evangelize you now."

Matthew Carlin's avatar

[Re-using a nested reply because it works as my main reply]

As a very long time reader of the blog, I pull Scott. EA pushes me. Baptists push me. US Government actually doesn't push me, except maybe on tax day or when I'm driving 90mph in a 70mph zone.

Sure, maybe there's slick pushing and there's blunt pushing and the former manages not to feel like pushing. Sure, maybe the oncoming bus push or the abolitionist push are worth it, but it's a really, really high bar to clear.

Scott, it's ultimately as simple as that: who pushes, who pulls.

Since a few EA topics are probably abolition level to you, they probably clear your bar. Since they're not to me, they don't clear my bar.

Edit: Comments are all evangelism. My comment was evangelism. Conversation is evangelism. I should have added this exception to my original post: I grant that a certain amount of positional conversation is necessary, tolerated, or even welcome. Having a comments section on a post is asking for a certain amount of positional discussion, and that's cool up to a point. But note: even in your comments section, there's a general consensus that some people push certain specific points too much, too many times.

Martin Blank's avatar

IME Baha'i people really are very good about not evangelizing. They do also tend to be hippies who make a lot of anomalous and possibly detrimental life choices, but seem fairly happy.

John Slow's avatar

I see Scott's point, that EA, unlike the Baha'i faith, is doing a lot of urgent and important work, and its message must be spread ASAP. I also see Matthew's point, that I almost always feel repulsed by people trying to push an ideology down my throat, however important they claim it is. I pay Scott to read his views. Hence, I don't feel the same kind of repulsion when I read his posts.

Hence, perhaps the only way to evangelize about important ideas without being repulsive is to first become so widely respected that people ask/pay you to know your views.

Erwin's avatar

It is not evangelism if I answer your questions. And paying Scott to write his opinion is essentially asking him for it.

I'm very much missing the context of Matthew's comment in Scott's reaction, and I think it would have been fair to give it: Matthew first only expressed his personal disbelief in evangelism (https://astralcodexten.substack.com/p/criticism-of-criticism-of-criticism/comment/7858704 ) and was then explicitly asked to elaborate further, which resulted in the cited comment.

Another big point in whether something counts as evangelism is respect: you can ask politely, but you should accept it if people aren't interested or don't want to talk right now. Also, never push emotions, and specifically fear, to grab attention and convince people.

D Moleyk's avatar

>This is actually seriously my point: there’s no clear line between expressing an opinion and evangelizing.

I think the correct answer is to bite the bullet and admit that, yes, everyone who expresses opinions publicly with the intention of changing other people's opinions is evangelizing, either implicitly or explicitly. BUT one can draw a distinction between different qualities of evangelizing: Am I evangelizing politely or forcefully? Am I evangelizing manipulatively by misrepresenting my arguments or intentions? Am I evangelizing my idea of "all publicly expressed opinions are evangelism" too forcefully, to the detriment of other ideas I also implicitly want to evangelize by writing my reply, such as, "maintain norms of charitable argumentative discussion"?

It is not like it is impossible to have opinions and not evangelize: one can keep one's opinions private, revealing them only to chosen people on explicit request.

One can step up to minimal evangelism by imparting publicly only the minimal amount of information necessary for the improvement of mankind. An EAian whose message is "it is a good idea to weigh the effectiveness of a charity by experimental method" is imposing their opinion less than another who tells everyone their opinions on animal suffering, human suffering, optimal sexual behavior, AI risk, and particular best charities. And a third EAian who restricts themselves to saying only "the experimental method is useful in many domains" is imposing their opinions even less than the first (because the listener may then think for themselves what the application should be).

RandomSourceAnimal's avatar

"'The Anti-Politics Machine' is standard reading in grad-level development economics (I’ve now had it assigned twice for courses) -- not because we all believe “development economics is bad” or “to figure out how to respond to the critiques” but because fighting poverty / disease is really hard and understanding ways people have failed in the past is necessary to avoid the same mistakes in the future. So we’re aware of the skulls, but it still takes active effort to avoid them and regularly people don’t. "

No. I'm not going to accept this argument. Consider its elements:

- Experts are familiar with this criticism.

- Experts use this criticism to avoid future failures.

- Experts' failures should be excused because their job is hard.

- Experts cannot be replaced with regular people.

I don't care. This argument can fit any number of fields, from social science to scapulimancy. It is not evidence that your field has any intrinsic value.

Show me the money. What has your discipline achieved? Does your discipline show evidence of a cumulative growth of understanding? Or is it merely chasing one intellectual fad after another? Do basic foundational questions in your discipline remain unresolved over decades, without one faction mustering sufficient empirical evidence to conclusively put the question to rest? Does your discipline act as a priesthood for a political faction, manufacturing justifications for political issues as needed?

Bugmaster's avatar

Does the EA movement have some FAQ where they respond to Alex's comments (#7) ? Because he pretty much listed all the reasons why I don't donate to them...

Dan Pandori's avatar

https://www.effectivealtruism.org/faqs-criticism-objections

This was the first hit when Googling 'effective altruism faq'. I haven't read it in depth, but it seems to address the majority of the listed points, and I expect it would be easy to find other examples of this on 80000 Hours or the EA forum.

Bugmaster's avatar

Sorry, but I've read that FAQ earlier, and have not found any specific responses to these objections. The FAQ contains some generalities, but most of them IMO amount to saying "it's all good, trust us". For example, the entry on "How are the people you’re trying to help involved in the decision-making?" starts off by stating that people who have resources are better off than people who do not (it's hard to argue with such a statement, but also hard to see how it answers the question); makes an off-hand mention of surveys that may or may not exist and are probably unreliable; and then starts rambling about "non-human animals" (which could be an interesting topic, but out of scope for the current question).

To put it another way, the FAQ reads like feel-good PR, not like a list of meaningful answers to specific questions.

TTAR's avatar

EA is a counterproductive force that wastes rationalist talent and causes active psychological harm to the compulsively scrupulous. Altruism is an inherently self-contradictory concept, you can't cross the is-ought divide, the only rational outcome of an analysis of altruism is well summarized already... https://en.m.wikipedia.org/wiki/Moral_nihilism#The_scope_question

Matthew Carlin's avatar

(Note that I'm largely unconvinced by EA because I agree with the stuff in your first sentence but even still) just because there's an argument and it seems convincing doesn't mean the conclusion is "the only rational outcome", basically ever. The value of altruism is actively debated among rational people, who take all kinds of positions.

TTAR's avatar

Fair enough. My friends, though not internet comment readers, know that my use of the words "only" or "never" is strictly hyperbolic humor. Error theory isn't the only rational moral theory, it's just really, really tough to beat due to is<>ought, if you approach it from a rational/humean/popperian/yudkowskian/etc. lens, even if rational"ists" themselves are often deeply attracted to utilitarianism because it is a mind parasite that falsely promises a rationalization (in the positive sense) of their various instincts that philosophers have erroneously attempted to categorize under "ethical" or "moral."

Sorry, that only came off even stronger. Gotta stop drinking while commenting.

Matthew Carlin's avatar

is/ought oughtn't and maybe isn't, but if it is, it isn't definitive in favor of error theory because it isn't necessary to agree with the supremacy of is over ought. One is perfectly free to have categorical, deontological morality or an axiomatic preference for virtue ethics with no particular violation of rationality.

I agree about the utilitarianism thing though.

Myself, I like to comment while hyped up on chocolate.

TTAR's avatar

I feel like ought is a pattern running on my meat hardware, which is an is. So if my ought is, then is has supremacy. My moral instincts are just preferences to perceive that the world is a certain way/to perceive a certain pattern of signals. Is seems primary because it describes 1) which signals I'm receiving and 2) which signals, the receipt of which, generate the largest felt reward. This dissolves most moral frameworks in recognition that there need be no framework, model, consistency, etc. - beyond a clear description that can explain why certain signals generate a reward experience, and mechanically how that process functions and came to be. That said, Virtue Ethics is vague enough that it seems like it could encompass "I want to perceive things that gratify my instincts; my instincts want to perceive that I am high status; the things called virtues by virtue ethicists are valorized by my peers; I should demonstrate those virtues to make others behave in a way that will result in a signal reaching my brain that makes me think I have high social status."

Matthew Carlin's avatar

Why does base system have supremacy over interpreted system?

Your moral instincts are, to use the parlance of this blog, mesa optimizers for effective group function. Why stop trusting that? (Yeah, yeah, tribalism, ancient violence, xenophobia, but on the other hand, moral repugnance, moral crusades, literal crusades, etc)

Scott Alexander's avatar

If you're a nihilist, why do you care about causing active psychological harm to people? And why do you say it wastes talent? Wastes in comparison to what?

TTAR's avatar

It just violates my preferences. I don't like watching people confuse themselves around and around trying to square a circle or optimize their scruples, but I do like understanding myself and the world around me (or, I like having the perception that I do). It's annoying to have a bunch of confused people running around, spouting their confusion into the space that generally does a good job with the things I like - that is the "waste" and it's also the pain of perceiving psychological harm in others that I disenjoy. I'd rather not perceive the psychological harm of others, not for any utilitarian or moral reason but simply because it's not a pleasurable perception. I suppose you could call me more of an egoist than a nihilist for that reason, or maybe even a hedonist, but it's a Randless, Nietzscheless egoism, or a very highly generalized hedonism.

Phil H's avatar

Was the thumbnail image: "DALL-E, mindfuck me with a kaleidoscope please"?

Scott Alexander's avatar

I tried various versions of "fractal of people criticizing each other" which didn't work, but "kaleidoscope of people criticizing each other" seemed close enough.

Josh's avatar

I think the central reason Baha'i do not evangelize is that they believe there are a fixed number of Baha'is over time. This was explained to me by a friend when I was living in Israel and visiting a Baha'i village; I am not Baha'i myself, though, so take this with a grain of salt.

When one Baha'i dies (the story goes), another is born in a Baha'i family or is converted. If you're born Baha'i, you're always Baha'i, there's no way out as far as the counting goes.

This presents a paradox: populations are growing with time, and if there's a way to become Baha'i but no way for anyone to leave, then any convert will seem to violate conservation of Baha'i.

How do they get around this? I kid you not, the story is that there is some large, undiscovered island with many, many Baha'i, whose population is decreasing with time. So there's a sense in which converting someone to Baha'i is like killing one of the Baha'islanders (I'm not sure if it's phrased quite so starkly by them).

Wild stuff!

Erwin's avatar

For me this just sounds like a story basically saying: 'Don't worry that our religion will die out; god will take care of this without any of us evangelizing.' It doesn't say the number of Baha'i can't increase, only that it can't decrease.

Erwin's avatar

#8 about evangelizing got me:

1. Yes there are grey zones, but there are some things that are (in my opinion) not evangelizing:

- answering direct questions about my opinion

- participating in a conversation/discussion/debate everyone involved is enjoying.

- just living/acting your beliefs without disturbing others more than necessary.

2. So according to 1, Matthew was not evangelizing; he was even explicitly asked to explain why he was skeptical not only of the morality but also of the effectiveness of evangelizing. It just seems like it triggered something in you, Scott.

3. I don't know if or how EA evangelizes, as I know it only from this blog. But I fully agree with Matthew about evangelizing in general; it fully reflects my life experience. It even seems a universal principle that any push or force provokes defense (fight or flight), and that any fleeing or hiding makes others curious and invites them to follow. This applies not only to humans but can also be observed in many animals. There is even a whole school of taming horses by slowly following them in a closed area so they flee and move back, and then moving back yourself, causing the horse to follow you and come to you.

4. When talking about 'philosophy makes [...] moral demands on people', the difference is whether you make these demands of your audience. To stay with your example: I have been vegetarian for 25+ years now and was never accused of evangelizing. I don't make a big deal out of it, just stating the fact when I'm asked or offered some meat. When people ask me why, I explain my case without implying that everyone should be vegetarian, or I even explicitly state that this is everyone's private decision, which I respect. This way fewer people hear my arguments, but those who do are much more open and let them sink in. Additionally, this does not build up biases and negative stereotypes against vegetarians, or against people making moral decisions in general. The few people that still get triggered have either had bad experiences with evangelizing vegetarians or have a bad conscience about eating meat, which is not healthy, and I would like to help them live happily however they continue their lives.

5. Even if it is something really relevant, like hell or the super-plague, it doesn't help to push hard. People will just be annoyed, call you a conspiracy theorist or a freak, and won't listen but avoid you. It could be much more effective to just give hints and make people curious, so they ask you and really listen. (Some say this is one of the reasons why QAnon is so widespread: they only give hints and make people research and put the puzzle together on their own. The resulting picture is different for everybody, but this doesn't matter much, as these things are hard to verify anyway.)

In the case of hell, you have to respect everyone's decision, as one of our main gifts from god is our free will. In the case of the super-plague, you can make a plan of action with the people listening and willing to join you. If all fails, nobody will judge you if you tried your best, so you also shouldn't judge yourself.

6. You won't convince me to see prediction markets as a solution to relevant problems. I suppose reading about them would be more interesting if you didn't try to convince me. But as I'm here by free choice and don't have to read every article, everything is fine. Thanks for your great work ;-)

Kronopath's avatar

Bit late to this whole party, but:

> I don’t think a pastor who asked for criticism/suggestions/advice about basic church-running stuff, but rolled his eyes when someone tried to convince him that there was no God, is doing anything wrong, as long as he specifies beforehand that this is what he wants, and has thought through the question of God’s existence at some point before in some responsible way.

I wholeheartedly agree. I also think that if the contest was intended to be taken this way, it should have said so explicitly, and not through the subtext of its fine print.

(I also loosely endorse the comments that TM made before me on this subject.)

Kronopath's avatar

I think the biggest difference between evangelism and persuasion is how "load-bearing" your ideology is.

Going back to religion for a second, before you start believing that it's important to save people from Hell, you first have to believe that:

1. God exists (with all the implications that brings)

2. Life after death exists (ditto)

3. Hell exists (ditto^2)

4. People end up in hell after they die if they don't believe in God or follow his principles

5. You have the right interpretation of what God and his principles are

6. You're sufficiently persuasive or charismatic that you'll be able to persuade people to believe in God and thus avoid hell instead of just annoying them (or you feel you can bear the social cost)

That's a lot to swallow. Try to force all that on someone at once, and they'll probably choke.

Contrast that with GiveWell, an EA organization that I think actually does a good job of being persuasive. Their argument (at least for their top charities) is something more like this:

1. There are a bunch of awful diseases in the developing world that cause people immense suffering, often death.

2. A lot of these diseases have pretty cheap preventative treatments (e.g. vitamin supplementation) or tools (e.g. bed nets).

3. Problem is, people out there are so poor that they can't afford these treatments.

4. But you can. You can buy one of these treatments for (usually) double-digit dollars or less.

5. We estimate that on (very rough) average, if you buy X thousand dollars of this and spread it out over Y hundred people, you'll likely have prevented at least one person from dying (or at least several people from suffering badly).

6. Don't believe us? Here's some in-depth profiles of exactly how we did our math to prove it, as well as information about the rigorous amounts of data we ask from our charities.

Notice something? Even though GiveWell is founded on utilitarian ethics, there's barely any utilitarianism anywhere in that argument, aside from maybe the (optional) last step. You don't need to swallow utilitarian theory whole in order to be convinced.

The only time utilitarian theory might be important to your decision-making is if you're considering donating to GiveWell *in lieu of* another charity. That's when you have to start making the arguments about which charity does the most good.

But that situation is often irrelevant. The average person doesn't give to charity *at all*, even people who are well-off and could afford it. They don't need to trade off one charity vs. another. They just need an argument that's good enough to get them to justify donating any of their spare money, and a reasonable assurance that they're not being scammed or misled.

By grounding their work in the results, rather than the ideology, GiveWell opens itself up to people of all sorts of moral codes, so long as that moral code allows for something like "It's good to prevent people from suffering or dying of horrible, easily preventable diseases."

So what does this mean for other more speculative EA cause areas, like AI safety? Probably that people need more convincing, concrete evidence that it's actually a near-ish-term problem. Less "Think of all the trillions of people that may never live" and more "Look, GPT-whatever has already been proven to have X or Y failure mode, here's a convincing paper to show it. That's not a problem now because this thing is small, but if you scale it up and put it in charge of some important part of business or government it's going to end up causing this kind of problem or worse on a much larger scale."

(This also may explain why so many people are concerned about things like AI racial bias as opposed to bigger problems like superintelligence. It's a lot more tangible, and is already causing us some small-to-moderate problems.)

Most people do not "shut up and multiply". In fact, they're resistant to it: the whole "torture vs. dust specks" thought experiment where that was coined is notoriously controversial even among the Less Wrong crowd. Making that the load-bearing part of your argument is arguably evangelism, and is also probably setting yourself up to fail.

Linch's avatar

This is my response to your post: https://twitter.com/LinchZhang/status/1555007124949704704
