Looking at this from the outside, with its recursions and metaphors-about-metaphors, its digressions and aphorisms, its semantic arguments, it’s hard not to get the feeling of a psychiatric patient (a mad, multiple-personality patient, with too many pseudonyms) desperately trying to avoid acknowledging something obvious.
(Criticism of criticism of criticism... could easily appear in that cult classic _Knots_ book by RD Laing.)
Scientific debates don’t look like this. Nor do political debates, or philosophical ones. The patterns look like some strange combination of the three. And it’s the endless aporias that strike me the most. Nobody seems to feel satisfied at the end. The fact that people can’t help doing it over and over, despite the frustration, suggests a form of repetition compulsion. Scott’s earlier analogy to criticism as a kink is perhaps the right category, if not the right specific diagnosis.
I want to suggest to the community that maybe, just maybe, they’ve been participating in a folie à N, a group madness. One from which a part of the whole occasionally tries to escape. Some parody of a therapy session emerges, with people taking on different roles, but no change in the madness happens.
> Scientific debates don’t look like this. Nor do political debates, or philosophical ones.
Disagree. This pattern feels pretty normal for the interface between abstract paradigmatic disagreements and functional operationalized criticisms. "Some strange combination of the three" isn't *wrong*, though; it's just that when you're talking about 'group dynamics around the abstract principles underlying X', that's pretty much a given.
Agree. The core point is that EA is a religious-type movement, one that exacts significant costs (you have to be willing to disavow human nature) but grants significant benefits (you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives).
The absurdity of that proposition should be enough on its own!
That seems like a real stretch to characterize as religious, even entirely granting the basis of the argument. Religions and religious reasoning don't usually center on making tangible and observable benefits for other people at one's own expense, especially not in ways where you're encouraged to make an active effort to check whether it's working. It feels like a particularly far point down the path of characterizing everything as a religion.
I’m not sure it’s a religion, and really want to emphasize the psychological (almost psychiatric!) aspects of the discussion.
Religious debates don’t look like what we’re seeing here, but people making analogies to religion and accusing each other of being religious feels like something that might happen in a therapeutic context.
Present-day mainstream religions might not be big on producing empirically-verifiable results, but the ancient Romans gave their own religion about as much scientific scrutiny, proportionately, as the modern US military does to aircraft - for roughly the same reasons.
By the same token, the west spent a good 1000+ years where the most intelligent people that could be found were set to the task of figuring out everything they could about God (theology).
They made some interesting progress in that time, but then it turned out good enough ironworking and trigonometry let you win wars even if you've got the religious side of things completely wrong. Without that incentive to produce ongoing strategic results, corruption spread, service quality plummeted, and now what have we got? https://dresdencodak.com/2022/01/17/dark-science-113-the-church-of-the-empty-inbox/ The point I'm getting at is that I think we might have an unfairly bad impression of religion's potential, because there have been so few examples in living memory of anything coming close to fully achieving it, or even anyone making a proper attempt.
Most people in the effective altruism movement are trying to do good with their own money, unless you've decided that donors don't count.
I think the idea that you _can't_ do good in other people's lives with money is absurd. Do you think that money can't be used to buy antimalarials and water treatment, or that antimalarials and water treatment aren't helpful?
This is a good example of why there’s such a psychological puzzle here. I mean the following in a very respectful way.
The OP wrote “you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.”
But the interpretation you make is that OP thinks the idea of helping people is absurd.
This is such a misreading, in a context (Scott’s blog) that prizes careful attention, that I can’t make sense of it.
To make sense of it, it really does seem we need a story about what’s not being said, and (even more strangely) what people don’t know they’re not saying (but is influencing and distorting the conversation regardless).
For clarity and rhetorical purposes, I was taking a broad starting point to try to narrow down what OP's apparently nonsensical argument was actually trying to say. If you'd rather I skip past the discussion and lay out my thoughts in more detail from the get-go:
A) It seems pretty clear to me that it is possible to help people.
B) Given A, it seems at least not absurd to suppose that I myself could help people.
C) Given A and the fact that money can be used to accomplish many goals people have, it seems reasonable to suppose that it's possible to help people with money.
D) Given B and C, it seems plausible that I could help people with money.
As someone pointed out, many if not most EAs give their own money, so the "other people's" part seems irrelevant and strawmannish to me, but for the sake of argument:
E) Money is fungible, so one person's money is as good as anyone else's.
F) Given C and E it seems possible to use other people's money to help people.
And then:
G) It seems clear to me that it's possible to do better or worse at pretty much any task with a goal.
H) Given D and G it seems likely that there are choices I could make that would result in me being more or less efficient at helping people with money.
Finally:
I) I'm aware that doing things at scale is more difficult, but it's clearly possible in some cases, and it's not clear why adding "at scale" takes any of the above propositions from "reasonable" to "absurd."
So can either you or [insert here] explain to me which of these statements is absurd, and why? In general I think this highlights the pitfall of argument from absurdity, since many people have very different intuitions of what's "absurd" and what's reasonable.
As you can tell from my first post, I’m trying to figure out why the debate is so (to my view) pathological, psychologically. What you wrote here is more normal, and could certainly appear in a moral philosophy debate.
It’s been a long day, but after seeing all the exchanges I think you’ve captured something here. In particular, the emphasis on the _you_; there’s something about the particularity of the call that EA makes that seems to help explain the unusual nature of the discussion.
It also maybe helps me understand a remark referenced by Scott, above -- that EA causes people psychological harm by making people feel bad for not doing more. It’s odd at first glance (charities and political movements guilt-trip all the time, so why should EA be extra powerful?).
One might say: a certain kind of person is interpellated by EA (in the Althusser sense). Maybe there’s a Lacanian take that might be better attuned to the psychology of the phenomenon.
I think the claim here is that the EA does more than simply choose a charity to donate to; they have a method that gives the best answer as to which charities everyone (with similar goals) ought to donate to.
(This is the GiveWell pitch: fund us to figure out what to do with everyone’s money.)
When you're working on a project that has a great capacity to produce piles of skulls [1], you should regularly be trying to figure out if you're headed towards a pile of skulls.
[1] because by definition it's effective, so getting the sign right matters
So this is, I think, another good example of how odd the discussion is. The rhetoric is so oddly multi-register I just can’t place it: extremely vivid phrases (“piles of skulls”) coupled with in-group technical language (“getting the sign right”).
It’s not how people talk when they’re evaluating charities (for example, in normal contexts). Or how they talk when they’re reasoning morally (in normal contexts, say an op-ed, or a seminar). It’s not clarifying, either.
So what *is* going on? Why the idiolects, and why these particular idiolects?
EA premises itself on being both effective and altruistic.
Effectiveness, in terms of having an effect, is something that is pretty quickly determined. If I have a campaign to write get-well-soon cards to kids in hospitals [1], I assert it's not going to have much effect, positive or negative. If I have the kids treated by no people but an army of robots responding to them 24/7, there's definitely going to be an effect. But is the effect good or bad? That is the altruism part. If I'm hurting the kids it's not altruistic.
I could also reference consequentialism but that'd just be more in-group technical language?
> It’s not how people talk when they’re evaluating charities (for example, in normal contexts)
Most people don't evaluate charities at all beyond "this makes me feel good to donate to."
[1] I think this is a good approximation of the average charity organization
“Most people don't evaluate charities at all beyond "this makes me feel good to donate to."”
Just as a point of order, this is false.
There’s an entire discipline associated with non-profit evaluation and management. A few years ago we sent them some of the GiveWell material. It’s, essentially, a book report, produced at enormous cost. (GiveWell early on stated that they’d no longer solicit outside evaluations, because [roughly] people weren’t up to their standards.)
The larger question remains unanswered, though. What’s the origin of all this weird language (in your updated post, robot nurses, etc.)? There’s something odd about it, and it seems to be a clue to why these debates are so weird, aporetic, etc.
>> Most people don't evaluate charities at all beyond "this makes me feel good to donate to."
> Just as a point of order, this is false.
No, it's true.
It's certainly why I give to the Internet Archive, Wikipedia, the Institute for Justice, and my local Cat Care Society.
"Feeling good by donating" is the *purpose* of charity, for most people. We don't SAY that, but it's what's actually happening.
This is why EA is (arguably) revolutionary: it's nominally concerned with *outcomes* in the realm of charitable giving, rather than self-satisfaction on the part of donors. It may or may not be successful at that, but we should acknowledge the weight of the idea behind it.
This is a common elision I see in a lot of EA discussion:
1. Normies don’t evaluate charities.
2. Therefore, nobody does.
3. Therefore, EA is revolutionary.
In reality, however, this is a huge area of study. As was pointed out a while ago re: GiveWell, after subtracting out the early weird AI risk stuff, it basically imitated the conclusions of those people (Gates Foundation work, later some social justice, it seems).
It's called lingo. Every community of people has its own lingo, from scientists to programmers to janitors. New Yorkers, Floridians, and Brits also have their own lingo; we just tend to call them dialects instead. What would be weird is if any distinctive community *didn't* have its own lingo.
I see what you’re getting at. But it’s more structured than (the informal notion of) a dialect, which is usually just a set of lexical substitutes, and perhaps swapping out one set of dead metaphors for another.
There are three groups that overlap a lot: (1) effective altruists, (2) readers of Astral Codex Ten, and (3) readers of Less Wrong a.k.a. the rationalist community.
As a result, you can find people debating EA on ACX using the LW lingo. Thus the technical language, etc.
I am taking the "more structured than... a set of... substitutes" as a *compliment*, because indeed the rationalist community is trying to discuss things that are typically not discussed elsewhere, in a way different from the usual online debates.
Now of course one should tone down the lingo when talking outside of LW, but an expression or two will slip through if you feel like most people will understand what you mean.
I think that is an extraordinary claim, one which to me requires more evidence than you've provided here. In general I feel any claim in the form of a diagnosis (which is on the meta level, as it discusses the people holding the debate) requires more thought, care, and evidence than the original question, not less, especially as claims like that which lack extraordinary evidence are an easy way to forgo the entire original discussion.
Moving on to a more direct reply, why do you feel meta-level criticisms, or even ascending levels of meta criticism, are unlike scientific debates? I feel like I've seen that pattern over and over again. Take for example Kuhn's theory of scientific revolutions, or critical theory. The criticisms of Kuhn or of critical theory can be pretty meta, while discussing something that is itself already meta. (Your comment is itself already on a meta level relative to Scott's post, and my first paragraph is meta to yours. This paragraph isn't, although this parenthetical is.)
Could you clarify more what it is you're pointing to? I feel like I'm not seeing it clearly.
Science isn’t particularly meta: it doesn’t spend a lot of time talking about how its participants are talking at the meta level, and definitely doesn’t go three or four layers deep.
References to Kuhn (at the meta level, rather than a substantive hypothesis about an object level subject of study) are rare, and usually a sign that something has gone wrong in a scientific discussion.
Science has a pretty clear aboutness, in other words, that makes progress by centering the participants on a common object.
The point about evangelism is well taken. The critic mentioned they didn’t believe in Baha’i themselves - which is even more relevant than whether Baha’i itself is true or not. I also find it refreshing when philosophies I already don’t agree with expend no effort in changing that fact - but I don’t think that choice by itself is particularly noble for the reasons Scott gives.
I also believe "recycling is good" is true, but I'm *not* for evangelizing it beyond making that statement and having some discussions with willing participants.
I mean, it depends how big a deal the thing is, right? If you’re sitting on something really important, it seems downright irresponsible not to “evangelize” it.
(Shoutout from fellow Texas game programmer land. By George, you do good work.)
The inside view is yes.
The outside view is to see the massive, massive imbalance of people who did damage by sharing something they (probably erroneously) thought was really important, and to set your prior accordingly. The inside view is rational when you're inside. The outside view makes the inside view seem crazy, crazy irrational.
FWIW, I saw the sense in prediction markets after Eliezer and Linta's current fic portrayed how they'd work in detail (even if fictionalised).
Made it clear how, for instance, they close the incentive feedback loop in scientific funding. The current bureaucratic setup now seems even more incredibly broken, and much more *obviously* broken.
Yeah, to add to the defense of evangelism, Penn Jillette (atheist magician of the Penn and Teller duo) once said something to the effect of "If I were going to be hit by a bus and didn't see it, would you stand there and say 'I wouldn't want to make them uncomfortable'? At some point you're going to tackle me."
I have to imagine the Baha'i faith doesn't really have a concept of hell... But even beyond that (and I do think Christian evangelism can often overemphasize hell), it just makes me think your faith isn't that compelling if you don't have a desire to share it. (The fact that many Baha'i ignore this precept not to evangelize is a good sign, though.)
One would think that a religion as small as Baha'i would place more emphasis on evangelism, though in fact I think it does in some capacity. I've seen advertisements in weird places. Enough to plant a seed.
Many larger religions don't seem to carry the imperative to evangelize either (at least, not to the degree of Christianity/Islam), but they must have at some point in history.
My understanding is that the Baha'i faith doesn't aggressively evangelize because it doesn't believe that people need to be Baha'i in order to be saved, or even in order to be good people. Instead, Baha'i adherents have a special responsibility to heal the world, serve as examples, and promote the enlightenment and moral uplift of everyone else.
Given this, I think the restriction on directly encouraging people to become Baha'i makes sense as a way of privileging the unity and commitment of the religion over its size. Baha'i do missionary work all over the world, but that work, in my understanding, usually focuses on stuff like farming techniques and community-building.
Yeah, that's about what I expected, it can make sense for Baha'i to not evangelize, given those beliefs... but that certainly doesn't generalize to other beliefs. Certainly not to religions with a concept of non-universal salvation, and probably not to most beliefs (religious and non-religious) generally.
And, even then, I still think a largely non-evangelizing religion is not a good sign for the religion.
Imagine seeing a movie that was so amazing it "changed your life": of course you're going to "evangelize" that movie and tell your friends they should go see it. On the other hand, if someone doesn't tell me I should go see a movie that they watched, and in fact, when I suggest that I'm going to see it anyway, they ask me if I'm really sure I want to go see it... well, that doesn't suggest to me that it's a very good movie.
Oh, totally agree. I don't even think it's fair to call Baha'i "non-evangelizing"--they just aggressively evangelize the stuff that they think is universal, like peace, harmony, and love, while giving a more careful soft-sell on the stuff that they think is only for certain people. So, kind of like reading a book that was very challenging and changed your life, seeing the movie and thinking it's pretty good, and telling everyone to go see the movie, while only recommending the book to certain people :).
I guess to me, though, this raises the question of what you _can_ be mad about with people evangelizing beliefs that they think everyone should adopt. It makes total sense for someone who is certain that I'll go to hell if I'm not baptized into the Catholic church to say whatever they think will get me to do it, and in fact not evangelizing at me with those beliefs suggests that they don't care about me very much.
However, as someone who doesn't hold that particular belief, I think it's reasonable to be annoyed at aggressive evangelism. It's frustrating to have a nice stranger engage in a conversation with you, only to realize a few minutes in that they're trying to get you to join the Jehovah's witnesses, and that they won't be content with a nice, interesting conversation about what their religion means to _them_ because they are convinced that their religion is crucial to _you_. It's annoying to have people knock on your door or hand you pamphlets, especially when the pamphlets tell you that you're going to hell.
Which makes me feel like perhaps people have a special responsibility to be cautious and skeptical before accepting beliefs that justify (or even obligate) being annoying to strangers.
After all, beliefs that justify intense evangelism often also justify, when possible, shunning me for not being baptized, or taxing me, or beating me, or otherwise engaging in pretty unpleasant coercive actions to save me from eternal damnation, and historically that's how most people have handled this sort of belief. I'm very happy that we've shifted to a social equilibrium where that sort of behavior is not acceptable but pushy evangelism is, but that doesn't change the more fundamental ways in which beliefs like this are dangerous.
I don't think I agree that being annoying means that your tactics don't work. You convince people of stuff by getting their attention and then winning them over with argument, connection, etc. Stuff that gets attention when you're not trying to give it attention is, almost definitionally, annoying, because attention is scarce.
So, while a less annoying strategy might have a higher success rate among people who ever engage with it, it's likely to engage fewer people to begin with. There's some point on this engaging/not-annoying possibilities frontier that maximizes converts, and it's unlikely to be where the strategy is not at all annoying.
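The tradeoff being described can be put as a toy optimization (my own illustrative sketch, not anything from the comment above): let $a$ be a strategy's annoyance level, $R(a)$ the number of people it reaches (increasing in $a$, since grabbing scarce attention is annoying), and $p(a)$ the per-contact conversion rate (decreasing in $a$). Converts are then

$$C(a) = R(a)\,p(a), \qquad C'(a^*) = 0 \;\Longleftrightarrow\; \frac{R'(a^*)}{R(a^*)} = -\frac{p'(a^*)}{p(a^*)},$$

and so long as reach rises steeply enough near $a = 0$, the convert-maximizing $a^*$ is interior, i.e. strictly annoying.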
As evidence, I would present the fact that advertising is a competitive, thriving, profitable, long-standing industry, and that virtually all advertising is at least a little bit annoying. Furthermore, advertising aimed at the broadest possible customer base (e.g. beer, soda, toilet paper, etc.) is generally more annoying than ultra-niche advertising (e.g. airplane parts, ultra-complicated board games, etc.)
It's also possible to accept such beliefs while purposefully holding back the evangelical implications, out of humility and an abundance of politeness. If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong and (2) your life is yours to live once you've heard the story, just as mine is mine to live with respect to some other thing I don't believe in. Yes, even while fervently believing those things and experiencing nervous discomfort on your behalf.
This gets back to the core dispute with EA: there are reasonable moral frameworks where it's *okay* to let bad things happen because the alternative is too hubristic and presumptuous.
A literal bus coming my way? Sure, warn me or push me. An invisible bus that you've found by some indirect use of thoughts, not senses? No, I get the urge, but consider the hubris, consider how many people have believed in how many such buses and how few have been definitively right. The prior is strong with this one.
Yes, that's certainly true, and most people who believe such things do exactly what you're saying. But, at some level, I don't feel like the politeness and humility you describe are really consistent with fully believing in hell. If you're sure that an invisible bus is about to run me over, I hope you'd push me, even if my reaction to being saved from a threat I didn't know about might be anger instead of gratitude. But, as you're saying, it's perhaps wrong to be sure of something that you see many other people with equal capacity to you choose not to believe.
Also, while I at some level agree with and respect this sort of humble believing you're describing, it also feels untenable when it comes to sufficiently extreme beliefs. I might be willing to let someone make a choice that I think has (say) a 10% chance of killing them, because I can weigh other normal human things against death. Who am I to say that spiritual autonomy isn't worth the risk of death? Or that heroin isn't worth the risk of death? And since I'm willing to do that, I'm willing to adopt a certain level of humility and say that even when I feel sure of something, I need to be open to being wrong, especially when others aren't sure. And so, while I might sometimes want to act coercively to protect someone, I don't need to do so.
But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that? What could possibly justify not protecting someone from something that's so inconceivably bad that its badness dwarfs the goodness of everything on earth?
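(To make the decision-theoretic structure explicit, a hedged sketch: if $p$ is the believer's credence that I'm hellbound and hell's disvalue is taken as literally infinite, then for any $p > 0$

$$\Delta EV = p \cdot \infty - (\text{finite costs of rudeness, coercion, etc.}) = +\infty,$$

so every finite consideration is swamped; it's the familiar Pascal's-wager structure.)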
Obviously, most people who believe in hell manage to do so. I've had lots of friends who believe in hell (and presumably believe that I'm going to hell) who were lovely to me in all ways and who didn't aggressively proselytize, let alone threaten to burn me at the stake or shun me. But I can't help but feel like there's something particularly dangerous or coercive about that sort of belief, and that the fact that our society is functioning reasonably well while accommodating that belief is the result of some hard-won but hard to justify social norms.
> But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that?
For all of my emotive waffle up there, I think this is the nut: if you take it seriously, I think it's pretty easy to convince yourself that moving someone even infinitesimally away from hell is of finite (perhaps large, if you're motivated) goodness and suddenly you have a license to neglect a lot of considerations (like being humble or polite, at the shallow end) if souls might be on the line. It is often hard to feel confident that true believers are robustly aligned when it comes to the temporal needs and preferences of, well, everyone else.
Because of the history of over-certain well-meaning bad actors, my most certain beliefs should almost always stop at your doorstep.
Many people don't (can't?) really follow this; they *really* take on the whole belief, implications and all. I just think the world would be better if we did follow it. I can't and won't get you to follow it. But I will follow it myself.
> If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong
But isn't this kind of epistemic humility doctrinally impious/heretical in a lot of religious groups? It seems sort of like saying, "it's not really a conflict because they can always believe their religion a little less hard". While I'd certainly prefer that, I don't think it actually resolves the tension for everyone. There are a lot of situations that would be better if people were less neurotic or rigid about their beliefs, but that's not really the direction even secular society is heading right now, let alone faith communities with grisly eschatologies.
I don't disagree with you overall, and that's basically how I handle topics I have strong convictions about in personal interactions, but I think it entails pretty radical changes if you have some not-uncommon worldviews (certainly beyond just the religious or one particular ideology).
------
Also, as a personal aside, I'm actually not sure the brimstone believer becomes less bad without evangelism - I think even a quiet/humble belief that specific people/groups are destined for eternal torment is corrosive and antisocial. This is certainly influenced by loved ones with gay parents who grew up surrounded by peers (and tragically, sometimes teachers) who were taught that "love the sinner, hate the sin" counts as tolerance - they were trapped between the obvious agony of knowing by implication that "friends" expected their entire family to burn in hell and, in a cruel inversion, the feeling that it might actually be intolerant to criticize or avoid people with those beliefs, since they were sincerely-held and not evangelizing (and seemingly a local majority). In this case, I think the low-evangelism mode (combined with a facile idea of pluralism where no belief can ever be harmful just to express) was actually more insidious because it sapped the ability to process genuine hurts and learn how to draw boundaries in relationships.
Is this sort of saying, "I would rather people with bad opinions evangelize so they're easier to avoid"? Probably, but I think it does speak to a slightly deeper (but still fairly selfish) point, which is that I would actually really prefer that the intensity of (other) people's behavior reflect the intensity of their beliefs, because it reduces model uncertainties and therefore my anxiety (as well as, as above, time and emotional energy invested in people who turn out to have beliefs that they have no interest in reconciling with your human dignity).
I just want to register my frustration that this post didn't abandon format and call itself "Highlights From The Criticism of Criticism of Criticism of Criticism". Too far is not far enough.
EDIT: Oh, and the quote of Alex's post is broken, the first paragraph is outside of the blockquote-marking bar. And I recognize that this is Carlin's gaffe and not yours, but the religion is called Baha'i.
This reminds me of a bravery debate. Everyone is fighting against a real but different opponent, and gets confused because all the opponents share the same name.
One opponent is "confusing a good cause with the need for expertise and evidence." Nobody wants that.
Another opponent is "scorning innovative work just because attempting something new makes you seem arrogant." Nobody wants that either.
And another opponent is "mistaking superficial work for effective work." Look, another thing everybody doesn't want!
I don't have a solution, alas, except to agree with Scott that specific illustrations help.
As I initially understood EA through osmosis by reading in these communities, it has often been reduced to shorthand heuristics, e.g. send money to charities targeting the developing world. That makes it easy to criticize both specifically and paradigmatically. This is at odds with what its definition should imply, which is to just try to find the most effective solutions to human problems, or alternatively the most pragmatic means through which virtually anyone could help - these don't mean the same things, and a chunk of criticism could be chalked up to their conflation (like yeah, you could conceive of something better than Joe Schmo's money in specific instances, but Joe Schmo has money, little time and few options - should he do nothing?).
"The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems"
Two obvious counters to this:
1) I can guarantee you that if you prove to a given EA that a given problem is not real (and won't become real), he/she will stop worrying about it. EAs care about AI risk because they believe it *is* a real problem i.e. may come to pass in reality if not stopped. It's not an *existing* problem in the sense that a genocidal AI does not yet exist, but that proves way too much; "terrorists have never used nukes, therefore we shouldn't invest effort into preventing terrorists from getting nukes" is risible logic.
2) I happen to agree that animal suffering is not important, due to some fairly-involved ethical reasoning. But... it's not "imaginary or hypothetical", any more than "Negro suffering" was imaginary or hypothetical in the antebellum US South. You do actually have to do that ethical reasoning to distinguish the cases; "common sense" is evidently inadequate to the task.
As someone who thinks that wild animal suffering is extremely important (though not that we have any idea how to solve it, and it might end up being singularity-complete), I think that there are lots of psychological mechanisms that discourage us from taking such ideas very seriously, and those mechanisms are, if not good, at least things that exist for reasons and that maybe are kind of necessary. I'm thinking partly of Scott's "Epistemic Learned Helplessness" post.
Our natural intuitions tell us that the most real or important problems are found among those that we have some chance of solving on our own or in small groups. That was kind of approximately true for almost all of human history. Maybe human societies would actually break if we didn't have these intuitions; maybe we wouldn't be able to cooperate well or trust each other or whatever. Maybe we would all get distracted by speculating about the big questions and get eaten by wolves.
Nonetheless, it's possible that for some notion of value/reality/correctness, there really are super-duper-big-bads that go way, way beyond the ordinary villains of ordinary life (who are already extremely difficult to defeat). And maybe if we think well enough and deeply enough, we notice them. Maybe in turn this still makes us worse at everyday life, because we're still not exactly wealthy and safe enough to necessarily afford to focus on the biggest and deepest and furthest problems.
Another way of putting that is that in a given notion of value, there's no guarantee that the world or universe as a whole, or just the parts you can see, will turn out not to be inconceivably horrible, much more than you bargained for when you started speculating about it. :-)
In this connection, Scott mentioned the famous line from H. P. Lovecraft:
> The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
The ethical reasoning I refer to is social-contractarianism - most particularly, the notion that intrinsic value in morality is limited to those capable of morality (at the very least, reciprocity).
The prime justification here for SC is non-exploitability; creatures like lions cannot be bargained with, so giving concessions to them is futile and giving them full citizenship with all its associated freedoms would immediately result in people getting eaten by lions. Wolves can be our slaves, our exhibits or our foes, but never our equals. If you want to treat your animal slaves well, I've no objection, but I don't think giving them rights is warranted since they can't understand (let alone uphold) responsibilities.
(There's also the "suffering as a criterion for moral value leads to existential despair because bacteria have responses describable as pain" issue, but fair point on the "things can be unfixably evil".)
As I said, though, it's something definitely worth thinking about even if I've come to the conclusion that the status quo isn't far off sanity.
(I should clarify that I do think rights should probably be extended to the mirror-test-passers, as there's precedent for their participation in the social contract (most obviously, orcas being employed by fishermen in return for food). But the further down you go the less plausible this is, and certainly it's nonsense for stuff like cows and chickens.)
I've heard of this contractarian view before but it doesn't seem right to me in that I don't think of rights as a matter of bargaining (so I don't think non-exploitability is a goal). But I can imagine the plausibility of thinking of rights as a matter of bargaining, like finding a Schelling point that rational creatures would consciously assent to and recognize the stability of.
One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.
But maybe we need a stronger conceptual distinction between "being a moral end"/"having moral worth" and "being a participant in human society". Wolves and lions could do the former but not the latter, and people could have responsibilities to them on account of that, but not literal equality in terms of, like, being free to roam about in a city.
I have to admit to some confusion (more about my own view than about yours) because I'm also very uneasy with social contract theories in general, inclining toward viewing people as having presocial or precontractual rights and responsibilities toward others. So while it seems clear that lions can't participate in our hypothetical social contracts, I would also say "but our own rights as humans already don't derive only from our hypothetical social contracts, so lions' inability to participate in them shouldn't mean that the lions don't have rights against humans!". But I've been noticing how hard it can be to say exactly what those rights are or exactly how they're grounded.
Edit: Also, maybe we can find some agreement by thinking in terms of whether you have a responsibility to save others in distress, and whether it's good to do so. Like the child drowning in the pond thing, beloved of some of the inventors of the EA concept. Maybe you would say (I don't know for sure) that it's theoretically supererogatory to save random drowning children in the state of nature, but maybe also clearly required within (one's own?) human society, because societies can form social contracts that expand our responsibilities this way, so that we know what we can expect from each other, at least approximately. By contrast, it might be good to save random drowning elephants (or lions), in the sense that it makes the world better, but it's permanently supererogatory to do this because there's no way that the elephants can reach an agreement or understanding with us whereby we would promise to do this. It could be clearly a good thing to do, that is also clearly not anyone's responsibility to do, because there's no one representing elephants with whom one could agree on this responsibility. Is that kind of similar to your view?
>One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.
You're right - I am familiar with it, and even considered it while writing the earlier post. It is a pickle.
In the case of infants I partially bite the bullet i.e. I think prompt infanticide isn't morally murder. There are a lot of caveats, though, like "mutilating babies predictably harms the people they'll become", "humans become at least vaguely capable of functioning within society within a year", "killing John's baby without John's permission is grievously harming John", and so on.
With a lot of the others there's the inherent issue of "how would one implement this IRL without leaving an obvious avenue for murdering people and then claiming they were X", so the point's kind of moot. "Barack Obama is a hippopotamus", though, is literally the example Scott used of something more ridiculous than Lizardman Conspiracy, so the spectre doesn't arise, and "was literally born yesterday" is not going to fly either for anyone who's been around long enough for people to want them dead.
Elephants are highly intelligent (they're on the "passes mirror test" list I mentioned) and also fairly social, so not a great example of "you cannot contract with this", but that doesn't address the point.
My real answer to that point is along the lines of: I have no moral intuition on this matter and can't construct a clear rationale. Also, total-order morality is scary as hell. So I just don't care; I would not think less of someone for saving or not saving the lion (absent other considerations e.g. if saving the lion is likely to result in the lion eating people then I'd say it's at least supererogatory to not save it).
Rights don't exist, period. They are human fictions. No one "does or does not deserve them". So whether they are "created by bargaining" (or not) is something we stipulate.
"Wait I have a right to my life and property!" is not going to magically deflect a Viking axe headed towards your face. Nor stop the Vikings from taking your valuables.
I would say essentially rights are marketing slogans for laws/norms a particular society wants people to take very seriously. But to be vulnerable to those slogans, someone needs to actually be a member of your society (broadly defined).
Almost all ethical theorizing goes much better if you just throw all the "rights" language in the garbage. You can bring it back out once you are done to dress up/market your results, but it is a hopelessly confused way of attempting to get to results.
Ethics definitely in large part is about bargaining/allowing groups of agents to cooperate/coordinate together in non-destructive ways. But what things they will agree to and what behaviors are "ethical" is going to be hugely situational.
The ethics/rights you settle on for a small colony of scientists with little hope of resupply are going to be vastly different than a large group of bronze age humans in a vacant-ish area, or a modern interconnected society on a world with 8 billion people. Think about how different norms were in armies, or sailing ships, or communities living closer to the edge of subsistence (infanticide).
If we suddenly ran across aliens with wildly different social structures, and intuitions, what ethical framework we could establish with them is going to depend on the facts of their situation and biology/psychology etc. If their culture revolves around the "females" hunting and killing 80% of the least fit female offspring, telling them about utilitarianism or "rights" is likely a total non-starter.
And more broadly "ethics" theory is not some unified thing. It is a rope, made up of many related but separate strands, none of which go through the whole length. Sort of like finding a definition of "game".
Utilitarianism works for some things, SCT for others, deontology for others, and caveman brain for others.
But there is no reason to think the hodgepodge of intuitions, social structures, cultural norms, and psychological structures we evolved/developed in any way cohere into something that can be simplified down to some unified set of "ethics" or "rights".
"Rights don't exist period. They are human fictions."
A reasonable proposition. I believe that rights are not human fictions, but I also am a Christian who believes all men and women are images of the Divine and as such are sacred. If you don't believe that, the fiction theory is more sensible.
Which is why, in Jefferson's first draft of the Declaration of Independence, the famous line read as follows:
"We hold these truths to be sacred & undeniable; that all men are created equal & independant, that from that equal creation they derive rights inherent & inalienable,"
The story goes that Franklin convinced Jefferson to change it from "sacred" to "self-evident" in order to please the more strict deists. It is a more universal line to say that these truths are "self-evident", but they're really only self-evident in a culture that is completely steeped in Christianity. The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. People are clearly not equal: some are smarter, some are stronger, etc. It makes sense from a Christian perspective to say that an illiterate Chinese peasant rice farmer and the Emperor of Prussia are both equal in value in the eyes of God: in everyone else's eyes they are clearly extremely unequal in value.
By the same token, if animals have rights they come from their relationship to God and to man: if there is no God, then man can choose whatever relationship he pleases with them.
"The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. "
This is Christianity taking credit for the accomplishments of its enemies. The ideology of equality, just like the US Declaration of Independence, arose out of Enlightenment philosophy, 1700 years after the advent of Christianity. Enlightenment philosophers were (as you mentioned) often deists, or even atheists, who tended to be skeptical of organized religion. I won't deny that some early liberal philosophers were Christians, but even more anti-freedom, anti-equality, anti-democracy reactionaries were Christian.
Yeah, 'Nature red in tooth and claw'; suffering is what drives evolution. Though I also understand the human desire to help some injured wild animal and bring it back to health. It's hard to reason: "For the betterment of the genetic future of your species, I am going to let you suffer and die." Though that might be the best thing for the future.
Altruists are obliged to do something about X if X is real and X is morally relevant to them. I have found that a lot of EA/rationalist types aren't very open to the idea that the second, normative claim needs to be established separately, and also aren't very open to the idea that there can be metaethical doubt about utilitarianism.
And in the hypothetical alternate-world EA movement which distributed bednets and spent a lot of time trying to save people from eternal damnation through Jesus Christ, the idea that people would abandon the latter if you could "prove" that hell isn't real would be cold comfort to anyone looking on from the outside.
The evangelism/having a discussion boundary does seem to have more to do with “do the people talking to each other have mutually reconcilable value systems?” rather than any intrinsic properties of the words coming out of either of their mouths.
If yes, then the listener can slot the information being spoken into their ears into a corrigible framework, frictionlessly, inception-style, almost as if they had already always believed it to be true.
If no, then some flag will eventually get thrown in a listener’s mind that *this* person is one of *those* people that believes that *wrong* thing, and they’re trying to convince me that *wrong* thing is *true* when it’s *not*.
In this way, literally describing something within the framework of a value system that is incompatible with another can be interpreted as an attack by that other (or, more weakly, evangelism). The crux is foundational.
Having wasted a youth arguing with folks on the internet, I’m fairly pessimistic about truly having conversations with folks that I know to have these “mutual incompatibility” triggers. You basically have to encode your message in a way that completely dodges the memetic immune system they’ve erected/has been erected around their beliefs. Worse, knowing that you’re trying to explicitly package information in a way that dodges their memetic immune systems makes them even more likely to interpret your information as an attack (which, honestly, can you blame them? You’re trying to overturn one of their core values! Flipping some cherished bit! People actually walk around with these bits!! Any wedge issue you can possibly imagine cleaves a memeplex in half around it!)
This will be foundationally problematic for any organization that’s explicitly trying to manufacture controversial change. People don’t want to flip their bits.
Theoretically, maybe not all seemingly-intractable disagreements are about values, so maybe they're not all theoretically intractable for that reason.
Like religious evangelism, which is a famously intractable kind of Internet argument, sometimes appears to hinge on disagreements of matters of fact. Like people will argue about whether or not a specific miracle really happened, and so then we should or should not be persuaded to believe a religious tradition founded on, or confirmed by, that miracle. Supposedly, each claimed miracle really happened, or not, or has some kind of evidence for or against it which is independent of people's values.
But your point might still work pretty much the same if you can generalize "value systems" further. Maybe to "stances" à la David Chapman or "pretty deep commitments" or "axioms" or something?
Indeed, I may be overloading the word “value” when I really mean something closer to a deeply rooted axiom (ie, disagreement not being about the fact of the matter of individual details of possible miracles, but more about whether the ontology of the universe contains something even remotely like a miracle)
edit: I read too fast and mistook this list for something from a serious blog post rather than just an example for a comment. As a result, this comment is probably overly harsh in its phrasing relative to the target. Feel free to skip it.
I think I was thinking something like "I'd just ignored that list but if Scott is citing it I guess I'd better respond in detail."
Original comment: Yeah let me elaborate why the "paradigmatic criticism" made me scoff:
- "giving in the developing world is bad if it leads to bad outcomes and you can't measure the bad outcomes" so ... don't give in the developing world? Ever? or measure better? measure better how? At least point me at the book to read and give a one-line summary of recommendations to address this, because this is clearly not a recommendation followed by the non-EA charity space anyways.
- "this type of giving reflects the giver's priorities" :very sarcastic voice: really??? charitable giving is decided on the basis of the giver's interests? yeah no shit, it's my money. The whole point of EA is "Do *you want* to do the most good?" This is inherently anchored to the giver's value system.
- "this type of giving strangles local attempts to do the same work" see I know the examples this is referring to but this is one case where it would have been worlds better to give at least one example because as written this is beat for beat equivalent to actually used political arguments to abolish literally every social safety net. Stop sucking the government's teat! Starving African ... welfare queen!
- "The EA movement is obsessed with imaginary or hypothetical problems" ... "Stop wanting wrong things for no reason" has literally not convinced any human ever in the history of the planet. I now disagree about the noncentral fallacy - this argument is the worst argument in the world.
- "The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized" okay, um, how do I say this, have you read the Sequences? if you can't optimize an outcome, you cannot do anything whatsoever. so like, sure, but absent that there's also no basis for your criticism? How are you saying that the EA charitable giving is *bad*? Did you perhaps model an outcome and are trying to avoid it because it's bad? Yeah that's optimizing, optimizing is the thing you are doing there, as the quote goes, now we're just haggling about utility.
- "The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space" Yes.
- "The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad" I believe this is called "being a moral opinion", yes. edit: Am I endorsing this? No, I just think it cannot be fully avoided. Moral claims cause moral strife.
And like. Maybe this is uncharitable and the book-length opinions really have genuine worth and value and should be read by everyone in EA. But if they do, none of the value made it into this list! Clearly whatever the minimum length for convincing literature is has to be somewhere in this half-open range.
>So maybe my thoughts on the actual EA criticism contest are something like “I haven’t checked exactly what things they do or don’t want criticism of, but I’m prepared to be basically fine if they want criticism of some stuff but not others”.
This feels like a motte-and-bailey issue. When I read the rules, the preamble states pretty clearly that a wide range of topics is welcome, but the rule minutiae make it hard to address broader paradigmatic issues. The organizers can call the winner The Best Criticism of EA even though they have implicitly limited the potential criticisms they receive, screening out ones they may be uncomfortable with.
To build off your example, imagine on the next Sunday the pastor comes back and says "We judged 'my voice is too quiet' as the winner of the Criticism of Christianity contest, paid the winner $5,000, and bought me a new microphone". Sure he got some criticism and received it, but the overall framing of a contest implies that this was the most important criticism to address, and conveniently an easy to address one won. He can say he addressed the biggest criticism, while at the same time not addressing the "God isn't real" criticism.
That just sounds like a disagreement about the word "best"? It sounds to me like the organizers were looking for the most useful criticism to improve their performance within the paradigm.
> conveniently an easy to address one won.
This is in fact what you want out of criticisms though. Easy to address means you don't need to put in much effort to get a potentially large improvement. Now maybe 30% more churchgoers can actually hear the sermon! That's genuinely a great improvement! What's wrong with being easy?
It's a contest for criticism of effective altruism, where entries will be scored and the winners, runners-up, and honorable mentions will receive prize money. In my opinion there is an implicit statement that the winners are better than the non-winners.
>This is in fact what you want out of criticisms though.
My point is that by giving the win to "turn up the mic" rather than "God isn't real" in a contest broadly framed as "criticism of Christianity" rather than "criticism of my specific performance in our single church", the pastor gets to declare victory over a much broader field of concerns than he actually faced.
Right, so you're looking at it as a social battle? But I don't think the pastor was looking to win a social battle at all, the pastor just wanted to improve the sermon in an effective fashion. I don't know how the pastor could signal this; maybe don't publicize the winner at all? But then he can just give the money to his son or whatever. Publicize the winner in a small closed circle?
It's like if you're driving from New York to Sacramento and you make a blogpost titled "Soliciting criticisms of my planned route for driving a car to Sacramento" and then people get very upset that you didn't select the paradigmatic criticism of "abolish fossil-burning cars and build out rail lines." What are you, an oil industry shill? No, you just did not want to embark on a multi-decade reevaluation of your entire way of living, you just wanted to get to Sacramento faster, and paradigmatic advice like "what's so great about Sacramento anyways" is in fact of less than no use to you.
I think EA wanted to get to Sacramento, and the only reason this has blown up so much is that they accidentally labeled their post "Soliciting criticism of our driving plan" and now people think they want to win a performative victory for the combustion engine or w/e.
To be clear, I have no problem with the pastor wanting to improve his sermon, or, to escape the analogy, with the EA folks wanting to improve specific actions, policies, programs, or what have you, which is what I think the rules of the contest target. My complaint is that per their tl;dr, "We're running a writing contest for critically engaging with theory or work in effective altruism (EA)", so if they only give prizes to the best critiques within their specific framework, there is an implicit statement that the critiques within their framework are the best "criti[ques] engaging with theory or work in effective altruism".
To take your example, for "Soliciting criticisms of my planned route for driving a car to Sacramento" it's perfectly reasonable to not want a response on Peak Oil. However, if you title your contest "Critically engaging with theory or work in Driving" and then only address and award criticisms of your planned drive to Sacramento, you're again declaring victory against a much broader field of critiques than you actually faced.
My point is exactly your last edit: the contest itself is reasonable, but the framing is way out of proportion to the actual eligibility criteria, in a way that to me feels intellectually dishonest, as it allows them to give the pretense of addressing broad criticism while actually not needing to do so.
Fair enough! I agree with this criticism, the labeling is clearly poor (for the goal I presume). I think they underestimated the breadth of available disagreement.
edit: I think to some extent I'm discarding the idea that EA is trying to win a performative battle, because that just sounds ..... useless? I can't imagine anyone going "Oh, EA asked for reviews, and there was a really good review saying that everyone should fully participate in cutthroat capitalism to maximize value fulfillment, but EA didn't accept it; that must mean this argument was fully demolished by them if it didn't win" ... and it seems like that's what would need to be believed for the performative victory over paradigmatic criticism to actually affect anyone. For a contest like this, I would expect everyone reacting to the outcome to have already priced in that "EA does EA things", such that the victory of an EA-aligned piece of feedback offers no new evidence.
If the winners yield a concrete improvement that makes things substantively better, then yes, their criticism is in an important way better than the deeper and more trenchant criticisms that don't help yield any big improvement for people!
This is what Zvi was getting at with his list of assumptions and his opaque, line-by-line critique of their wording: they rather seem to be trying to have their cake and eat it too, to get to tell themselves and others that they took in all the criticism from the grandest to the smallest and did the rational thing, but also to be able to implicitly dismiss the really hard and vague critiques for structural reasons.
> The universe was thought to be infinitely large and infinitely old and that matter is approximately uniformly distributed at the largest scales (Copernican Principle). Any line of sight should eventually hit a star. Work out the math and the entire sky should be as bright as a sun all the time. This contradicts our observation that the sky is dark at night. This paradox was eventually resolved by accepting that the age of the universe is finite
People still bring this up as an unresolved paradox, which I've never found particularly convincing. But I don't see how a finite age of the universe is supposed to be a resolution. According to this line of argument... why are some stars brighter than other stars? Why is the age of the universe relevant? Are all the stars we can see constantly getting brighter, because the age of the universe is increasing?
As I understand it, it's basically because if the universe is x years old, you can't see stars that are more than x light years away, and most directions in the sky won't have a star within x light years.
This is what I don't understand. The lesson of high-powered space photography is that, within the finite observable universe, every line of sight already terminates in a star. So it's not relevant that there are more stars out beyond the horizon of observability - we wouldn't see them anyway, because there are observable stars blocking them.
But we can't see the observable stars either (without going to heroic lengths to capture their light), because they're not bright enough. And that fact makes me question why it's supposed to be a paradox that the night sky is dark. I think "not bright" and "dark" mean the same thing.
No, it's definitely not true that every line of sight terminates in a star. No picture of the night sky you've ever seen, unless it's one taken by an interferometer like CHARA, resolves the surface of a star. Stars appear to have finite size because of diffraction across the aperture of the telescope or lens, resulting in a point spread function: https://en.wikipedia.org/wiki/Point_spread_function
We can do some order-of-magnitude calculations. The average density of the universe is actually very well measured: it's 9.47e-30 g/cm^3 (http://hyperphysics.phy-astr.gsu.edu/hbase/Astro/denpar.html). Only 16% of that is normal matter (the rest is dark matter), and only 10% of the normal matter is in stars (the rest is in gas). Dividing by a solar mass, that's a stellar density of n ≈ 10^-64 stars/cm^3. The Sun is about R = 7e10 cm in radius, giving a cross-section of πR^2 ≈ 1.5e22 cm^2, so you'd need to travel in a straight line for about L = 1/(n·πR^2) ≈ 1e42 cm ≈ 3e14 Gpc before you hit a star. For comparison, the observable universe is 13 Gpc in radius.
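For anyone who wants to check that arithmetic, here's a minimal sketch of the same order-of-magnitude calculation (the density, matter fractions, and solar figures are the ones quoted above; the cm-per-Gpc conversion is standard):

```python
import math

# Order-of-magnitude estimate of how far a sight line travels
# before hitting a star, using the figures quoted above.
RHO_UNIVERSE = 9.47e-30   # mean density of the universe, g/cm^3
BARYON_FRACTION = 0.16    # share of that density in normal matter
STELLAR_FRACTION = 0.10   # share of normal matter locked up in stars
M_SUN = 2e33              # solar mass, g
R_SUN = 7e10              # solar radius, cm
CM_PER_GPC = 3.09e27      # centimeters per gigaparsec

n_stars = RHO_UNIVERSE * BARYON_FRACTION * STELLAR_FRACTION / M_SUN  # stars/cm^3
cross_section = math.pi * R_SUN ** 2                                 # cm^2
mean_free_path = 1.0 / (n_stars * cross_section)                     # cm

print(f"stellar density: {n_stars:.1e} stars/cm^3")
print(f"mean free path:  {mean_free_path:.1e} cm = {mean_free_path / CM_PER_GPC:.1e} Gpc")
# Roughly 1e42 cm, i.e. ~3e14 Gpc, versus ~13 Gpc to the edge of the
# observable universe: almost no sight line ends on a star.
```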
OK, just off the top of my head, but I think it's the expansion of the universe that is more important. As time goes by, a smaller and smaller fraction of the universe is observable. (At some time in the past there was light everywhere... this is the Cosmic Background radiation that we can now observe.)
Let's temporarily assume that all stars are the same size and the same brightness. From observing the area around us (out to a few dozen light years or so) it appears that there's something like a 10^-30 chance that any star-sized region of space has a star in it. (I have no idea if this is the right number, but there's some specific tiny number.) Now consider some ray into space. Along any star-diameter-sized distance of that ray, there's a 10^-30 chance of running into a star. So with probability 1, you'll eventually run into a star. Thus, this ray will get as many photons coming along it as any other ray that hits a star, so the ray should look just as bright as any other ray. (We have to be a bit careful if we don't want to think of 0-width rays, and instead think of a small cone. If a star is twice as far away, then its light will be 1/4 the brightness - but the probability of one star within distance X is equal to the probability of 4 stars within distance 2X, which is equal to the probability of 9 stars within distance 3X, and any of these results are approximately equally bright, and with probability 1, one of them will occur.)
The reason the age of the universe is relevant is that the average distance to the nearest star on one of these paths is looooong. If there's a 10^-30 chance of a star within the distance of a sun's diameter, then since the sun's diameter is about 10^-7 of a light year, there's about a 10^-23 chance of a star within the distance of a light year. Thus, on average, the nearest star on any of these rays is about 10^23 light years away (with many of them being substantially farther than this). If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.
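Here's a toy simulation of that ray argument, just to make the "probability 1, but looooong" point concrete. The per-segment hit probability is scaled up enormously (1e-3 instead of ~1e-30) so it actually finishes; the only thing that matters is the 1/p scaling:

```python
import random

# March along a ray in star-diameter-sized segments; each segment has a
# fixed tiny probability p of containing a star. The distance to the first
# hit is geometrically distributed, so its mean is 1/p segments.
p = 1e-3
trials = 5_000

def segments_until_hit(p: float) -> int:
    n = 0
    while True:
        n += 1
        if random.random() < p:
            return n

mean = sum(segments_until_hit(p) for _ in range(trials)) / trials
print(f"mean distance to first star: {mean:.0f} segments (theory: {1 / p:.0f})")
# Every ray hits a star eventually, with probability 1, but the expected
# distance 1/p is so long for realistic p that light along most rays hasn't
# had time to reach us in a finitely old universe.
```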
As you suggest, we also will have to deal with the assumption that stars are the same size and brightness. As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.
> As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.
But this is an argument that most of the sky will be as bright as the rest of the sky. That's true; most of the sky is dark. The paradox is supposed to be that most of the sky is dark when, according to... someone... it should be bright.
> If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.
But we don't have problems looking along any given sightline and finding a star there. We would be shocked to look along *any* sightline and not find a star. Starlight is already reaching us along every sightline.
> this is an argument that most of the sky will be as bright as the rest of the sky
It's not just that - it's an argument that most of the sky will be as bright as the average brightness of opaque matter in the universe. Within our stellar neighborhood, the average opaque matter is close to as bright as a star, so we should expect the sky to be that bright.
> we don't have problems looking along any given sightline and finding a star there.
We actually do - even in the Webb deep field, the majority of the sight lines have no visible star.
My problem is that you can have an infinite number of stars yet still have a sky that isn't filled with them.
Imagine the stars are at every integer location on a 2-d graph. And I'm sitting at (0,0) and staring at the point (1, √2). I am not going to be looking directly at any star.
Right, you also need the assumption that stars are uniformly distributed (and time-invariant). You can think of the universe as composed of an infinite number of shells centred on Earth. The light from each star goes as the inverse square of the radius, but the surface area of each shell goes as the square of the radius, so the total light received at Earth from each shell is the same. But there are infinitely many shells, so summing over all of them shows the Earth receives infinite light.
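In symbols, that shell argument is just the textbook form of Olbers' paradox (here n is the number density of stars and L the luminosity of a typical star):

```latex
% A thin shell at radius r with thickness dr contains n \cdot 4\pi r^2 \, dr
% stars, each delivering a flux L / (4\pi r^2) at Earth:
dF \;=\; \frac{L}{4\pi r^2} \cdot n \, 4\pi r^2 \, dr \;=\; n L \, dr
% The r-dependence cancels, so an infinite, eternal, uniform universe gives
F \;=\; \int_0^\infty n L \, dr \;=\; \infty
```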
I read the 'evangelizing' comment as being related to some EA practices and found it difficult to imagine it could have been meant to apply to Scott's blog. (though still possible of course)
I agree 'there's no clear line between expressing an opinion and evangelizing', and I also agree that telling everybody about that 'thing' that is so important to you can be misunderstood even if you don't want to convert them ... but I still think it's something like a continuum, with some things (more) clearly being on the evangelizing side and some on the 'not evangelizing' side. 'Writing something on your blog that people visit when they want to' and 'sending unsolicited free printed copies of HPMOR to folks' sure seem different. I'm also not at college, but I lately read a bit of specific criticism of EA's recruitment work, and I can understand why some folks (apparently) could find it cult-like.
Which doesn't take away from the question of whether you find it necessary, or whether it's more or less effective in spreading your ideas.
[I wrote it and] it was clearly not meant to apply to Scott's blog. I've been a reader since he was on livejournal and I've done a stint as a paying customer on Substack.
I pull Scott. EA pushes me. Baptists push me.
Scott, it's as simple as that: who pushes, who pulls. Sure, maybe the oncoming bus push or the abolitionist push are worth it, but it's a really, really high bar to clear.
> but I’m prepared to be basically fine if they want criticism of some stuff but not others.
Fine with me, but it would be useful if the contest were explicit and clear about this. This would have several advantages: people would know what to submit and what they could win a prize for; contest organizers would have a higher chance of getting what they want; and EA couldn't use one kind of criticism to fend off another, or pride themselves on taking all kinds of criticism when they are looking for something very specific.
A priest who respectfully engages in an open debate with folks who find religion appealing but can't believe in God gets a different kind of acknowledgement from me than a priest who asks how to deliver his sermon so as to be best heard.
There was no first person to consider abolitionism. There were lots of people who did not want to be slaves. But the grinding poverty of the past meant every political economy on Earth was based on slavery through 1600, though it was a lot less basic to places not invaded recently, like England in 1600. Around 1600 in England, when North African Muslims were raiding Europeans in general and sometimes English sailors for slaves, lots of Brits said this was bad and English should not be raided for slaves. It was taught in schools and preached from pulpits. No Englishman should be a slave. Blah.
And when everyone is preaching blah, some preachers strut their stuff and say blah blah blah. If it sounds studly, other preachers will go along and preach blah blah blah. Not just no Englishman should be a slave, but no Englishwoman. Nobody should be a slave! Over the next couple hundred years it slowly caught on in England. Slowly, because enslaving outgroup was so profitable.
1600's through 1800's British Isles were a perfectly placed pirate base against everyone else in Western Europe. Piracy had a moral hazard, but was so profitable they ended up with a British Empire. Maynard Keynes thought the British Treasury was founded by Drake's piracy. They had to keep it. How to justify the moral hazard? The Black Legend of the Bad Spanish Empire of slavers worked okay. Meanwhile the pirates were taking and trading slaves like crazed pirates, and making bank, enough to shift from raiding to trading, enough to be governed from London. The empire they were deniably building together could point to outgroup's nasty slaving ways, and Brit slavers were ingroup enslaving outgroup.
The 13 colonies of piratical slavers and a lot of British poors wanting a better life prospered and became widely known across Britain as 'the best place in the world for a poor man' (per Bernard Bailyn). When they were poor, London let them handle their own affairs. By the 1750s they were building (I think) a third of the British merchant marine and were worth governing by their betters. George Washington exceeded London's orders (while following Virginia's orders, and supported by Whigs in London's government) and attacked Fort Duquesne, fortified by the French against British (mostly Virginia British) expansion. He lost and was taken prisoner, but was released and not punished by Virginia and supported by Whigs. The Brits came back and took the fort, Fort Pitt. Washington had triggered the Seven Years' War between France and England. England won. Now to handle the poors' affairs. The poors liked handling their own affairs.
The colonies revolted and all thirteen fought for eight years of fairly nasty war. Long nasty wars have a high moral hazard and need justifications. The Tory Samuel Johnson, already toasting the success of the next slave revolt in the West Indies, wrote a good polemic against the revolting colonials: 'Why are the loudest YELPS for liberty from the floggers of Negroes?' And John Wesley stole it. Johnson was happy ('to have gained such a mind as yours confirms me') and the Methodists preached Wesley's patriotic Brit sermons against the colonials and against slavery. For the next hundred years the British Empire preached abolition as a justification for bagging any profitable area that looked easy and, like everywhere, was based on slavery. 'Castlereagh, the name is like a knell' bribed the Spanish Foreign Minister with 100,000 pounds to abolish slavery in Spanish America, triggering a revolt in Spanish America that opened Spanish America to British trade. And ended slavery in Spanish America.
Even the revolting colonials gave up slavery, not least because the moral hazard of slavery made it less profitable as the Industrial Revolution got going. Also the Black Legend of the Evil Spanish Empire helped justify grabbing Florida, and then also the northern wilderness loosely held by New Spain. Everyone has been an abolitionist since.
Not from one pushy evangelist, but from a mix of self-interest and genuine moral choice and a lot of preachers and teachers. Like EA.
It feels upsetting to be reminded of self-interest in abolitionism. Maybe this is partly because of the uncomfortable thought that, if certain contingent economic developments hadn't happened as they did, we would still have slavery today!
But we often hear that in the U.S. civil war the north had economic interests opposed to slavery and the south had economic interests in favor of it. And also that there were changes over time that were making slavery more unprofitable. Even that more limited account suggests that some people had the moral luck to happen to benefit less from slavery, so they were less likely to end up being responsible for engaging in it and perpetuating it.
Slavery was not the basis for the British economy even prior to 1600. Slavery died out in Western Europe after the collapse of the Roman Empire, largely replaced by serfdom.
Yes, I'd conflate serfdom (not quite slavery, because selling serfs had technical problems) with slavery. Also the Japanese making the lowest wages in human history. Slavers, pirates, and overseers of radically low-wage laborers aren't fussy about the letter of the law, as Americans are rediscovering given the bipartisan consensus for lower wages through higher immigration or by any means necessary. And in England serfdom was dying out. Not invaded recently, give or take slave raids from pirates and the Spanish losing at sea.
As a European, what you've just said is so obvious and so much the way everyone over here understands history that nobody would think it needs to be stated.
That it is not obvious to Americans has been one of several culture shocks for me about how differently other cultures view the past, as I found out when I started to discover the anglophone internet, a long time ago.
I'm not saying that Americans have the facts of history wrong, but they tend to frame them differently from the way we do, in a way that seems to imply slavery has always been a central feature of Western civilization until Victorian era abolitionism. Whereas if you asked me about the "end of slavery" it is the transition from antiquity to middle ages that would come to my mind.
And yet there is something to be said for that framing; for example I hadn't realized, until English speakers made me (including Shakespeare), that places such as Venice traded in slaves throughout the Christian era. I don't remember my high school history books mentioning this fact.
I've had the same culture shock about the way English speakers see several other parts of history. For example Vikings, who loom so large that they swallow up the whole early Middle Ages in the Anglo-American mind (instead of the Carolingians). It's a whole other way of viewing the European past.
I believe insurance companies use the phrase to mean situations where people are tempted to take risks they are not responsible for. As slaves are in practice reduced to the status of minors or below, their owners are liable to take risks with them irresponsibly.
It's hard to talk about this stuff without spouting outrage in place of facts or being so cold you don't notice the skulls.
Every business is risk, capital, and labor, and risk covers the other two. Piracy and slavery were all kinds of risky. Insurance is about risk. 'Just' an externality?
"Moral hazard" is only about how an insured party will make riskier choices than they would if uninsured; it's not an externality because the insurer has consented to the policy.
Piracy, slavery, and conquest impose externalities on their victims, but there would only be moral hazard if someone were, e.g., insuring the pirate ship.
"Just an externality" as in: I don't think there's a more specific term for the sort of behavior you're describing.
Most victims of piracy, slavery and conquest were predators instead of prey every chance they got, so I could possibly argue it's not an externality. They were all doing the dance.
But no, I'm using 'moral hazard' in a wider sense from half-remembered Michael Gilbert mystery stories read in the 1970's.
Oh dear, I am not sure where I heard it first. (E. Weinstein?) But perhaps the problem with 'everyone' being a racist these days is that we don't have enough real racists to point to and say, "see, look how bad that is."
It's complicated, but I think there's both a lot of covert racism (because overt racism gets punished) and rewards for accusing people of racism. It's Goodhart on top of Goodhart.
> the trick is evangelizing without making people hate you. I’ve worked on this skill for many years, and the best solution I’ve come up with is talking about a bunch of things so nobody feels too lectured to about any particular issue.
I really like the example of the Priest. But I think the potential criticisms of 'God doesn't exist' and 'buy a better mic so we can hear you better' reflect two extremes, and without additional examples in between they miss a point.
Again, suppose there is a Priest who wants to see more people attend Sunday service and is also worried that people are leaving. Ultimately he is interested in more people believing more strongly in God. He is asking for criticism and what he can do better.
He's maybe hoping for 'make your sermon a bit shorter' or 'hold service an hour later on Sunday morning and I'd be there.' But instead he may get criticism like 'you're driving this big car while preaching poverty; I don't want to listen to you preaching X while doing Z' or 'I believe in God, but I'm appalled by the cases of abuse that took place in your ranks. Write an open letter to your Bishop to fully investigate those cases, and then I'll be happy to attend your service.'
I think the Priest is fine to reject the criticism of 'there is no God' - this is the one thing he cannot give up and still be a Priest. And anyway those guys will never end up in his church.
The example of 'voice is too low, buy a new mic' found in one of the comments is in some ways extreme in the other direction: the Priest can easily solve this with limited resources; it doesn't require any behavioural change from him, no change whatsoever in 'paradigms', and also no loss of status or comfort. It's probably not even a criticism he'd feel uneasy about - compare 'you need a new mic' to 'your voice is unpleasant' or 'your sermons are chaotic and can't be understood'. Simple solution and win-win.
But what about the third category? Preaching poverty and love-thy-neighbour while living in prosperity and making use of luxury goods not available to many others? Or not reacting to the cases of abuse in his own ranks? I think those examples are closer to the 'paradigmatic' criticism that we're talking about in EA. It requires real changes in thinking (all priests I know do it, but is it really okay to drive this big car while preaching poverty? Am I allowed to criticize a bishop?) and behaviour, and it risks losing the support of other important members of the organization. While not giving up on what is the (most narrowly defined!) core of the issue.
I would argue that those are the criticisms the Priest should hear. Or more precisely: I think it's the Priest's decision to ask only for improvements to his sermon and implement them. Arguably that's already more than what most priests are doing. But I think it's a missed opportunity not to listen to the complaints about his affluent lifestyle and the apparent 'sins' in his own ranks. Especially when you care about people coming to your services and believing in God.
As mentioned, I think those examples are closer to 'paradigmatic' criticism of EA. And I think they are worth being heard. Especially if they come from folks being close to the (again, most narrowly defined!) core value of EA.
I'm not much into EA, but from the discussions here what comes to my mind is:
EA always wants to help the poor, and puts a lot of effort into using money effectively to do so. But they turn a blind eye to where the money comes from, and to the roles the wealthy and US foreign policy play in making or keeping them poor. Of course, if they did look, it would be much harder to raise funds from the wealthy, and they would have to question their own lifestyle and cultural beliefs. Doing good feels better, and being best at doing good feels best, so they strive and look for critics to become better at doing good. But if it ever touches their own lifestyle, it does so only minimally, e.g. driving an electric car instead of an SUV, or living like middle class even if they could afford much more.
Looking at this from the outside, with its recursions and metaphors-about-metaphors, its digressions and aphorisms, its semantic arguments, it's hard not to get the feeling of a psychiatric patient (a mad, multiple-personality patient, with too many pseudonyms) desperately trying to avoid acknowledging something obvious.
(Criticism of criticism of criticism... could easily appear in that cult classic _Knots_ book by RD Laing.)
Scientific debates don't look like this. Nor do political debates, or philosophical ones. The patterns look like some strange combination of the three. And it's the endless aporias that strike me the most. Nobody seems to feel satisfied at the end. The fact that people can't help doing it over and over, despite the frustration, suggests a form of repetition compulsion. Scott's earlier analogy to criticism as a kink is perhaps the right category, if not the right specific diagnosis.
I want to suggest to the community that maybe, just maybe, they've been participating in a folie à N, a group madness. One from which a part of the whole occasionally tries to escape. Some parody of a therapy session emerges, with people taking on different roles, but no change in the madness happens.
> Scientific debates don’t look like this. Nor do political debates, or philosophical ones.
Disagree. This pattern feels pretty normal for the interface between abstract paradigmatic disagreements and functional operationalized criticisms. "Some strange combination of the three" isn't *wrong* though; it's just that once you're talking about 'group dynamics around the abstract principles underlying X', that's pretty much a given.
Agree. The core point is that EA is a religious-type movement, one that requires significant costs (you have to be willing to disavow human nature) but grants significant benefits: you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.
The absurdity of that proposition should be enough on its own!
That seems like a real stretch to characterize as religious even entirely granting the basis of the argument. Religions and religious reasoning don't usually center around making tangible and observable benefits for other people at one's own expense, especially not in ways where you're encouraged to make an active effort to check if it's working or not. It feels like a particularly far point down the path of characterizing everything as a religion.
https://slatestarcodex.com/2015/03/25/is-everything-a-religion/
I was simplifying. In many ways it is obviously not a religion, hence I said religious type.
My main point was that you are required to believe something absurd on its face, even if believing that allows/helps many EAs to do a lot of good.
I’m not sure it’s a religion, and really want to emphasize the psychological (almost psychiatric!) aspects of the discussion.
Religious debates don’t look like what we’re seeing here, but people making analogies to religion and accusing each other of being religious feels like something that might happen in a therapeutic context.
Present-day mainstream religions might not be big on producing empirically-verifiable results, but the ancient Romans gave their own religion about as much scientific scrutiny, proportionately, as the modern US military does to aircraft - for roughly the same reasons.
By the same token, the west spent a good 1000+ years where the most intelligent people that could be found were set to the task of figuring out everything they could about God (theology).
They made some interesting progress in that time, but then it turned out good enough ironworking and trigonometry let you win wars even if you've got the religious side of things completely wrong. Without that incentive to produce ongoing strategic results, corruption spread, service quality plummeted, and now what have we got? https://dresdencodak.com/2022/01/17/dark-science-113-the-church-of-the-empty-inbox/ The point I'm getting at is that I think we might have an unfairly bad impression of religion's potential, because there have been so few examples in living memory of anything coming close to fully achieving it, or even anyone making a proper attempt.
Most people in the effective altruism movement are trying to do good with their own money, unless you've decided that donors don't count.
I think the idea that you _can't_ do good in other people's lives with money is absurd. Do you think that money can't be used to buy antimalarials and water treatment, or that antimalarials and water treatment aren't helpful?
You think that believing it's possible to help people is absurd?
This is a good example of why there’s such a psychological puzzle here. I mean the following in a very respectful way.
The OP wrote “you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.”
But the interpretation you make is that OP thinks the idea of helping people is absurd.
This is such a misreading, in a context (Scott's blog) that prizes careful attention, that I can't make sense of it.
To make sense of it, it really does seem we need a story about what’s not being said, and (even more strangely) what people don’t know they’re not saying (but is influencing and distorting the conversation regardless).
For clarity and rhetorical purposes, I was taking a broad starting point to try to narrow down what OP's apparently nonsensical argument was actually trying to say. If you'd rather I skip past the discussion and lay out my thoughts in more detail from the get-go:
A) It seems pretty clear to me that it is possible to help people.
B) Given A, it seems at least not absurd to suppose that I myself could help people.
C) Given A and the fact that money can be used to accomplish many goals people have, it seems reasonable to suppose that it's possible to help people with money.
D) Given B and C, it seems plausible that I could help people with money.
As someone pointed out many if not most EAs give their own money, so the "other people's" part seems irrelevant and strawmannish to me, but for the sake of argument:
E) Money is fungible, so one person's money is as good as anyone else's.
F) Given C and E it seems possible to use other people's money to help people.
And then:
G) It seems clear to me that it's possible to do better or worse at pretty much any task with a goal.
H) Given D and G it seems likely that there are choices I could make that would result in me being more or less efficient at helping people with money.
Finally:
I) I'm aware that doing things at scale is more difficult, but it's clearly possible in some cases, and it's not clear why adding "at scale" takes any of the above propositions from "reasonable" to "absurd."
So can either you or [insert here] explain to me which of these statements is absurd, and why? In general I think this highlights the pitfall of argument from absurdity, since many people have very different intuitions of what's "absurd" and what's reasonable.
As you can tell from my first post, I’m trying to figure out why the debate is so (to my view) pathological, psychologically. What you wrote here is more normal, and could certainly appear in a moral philosophy debate.
I might suggest that if lack of rigor strikes you as "pathological" you might be wise to avoid making diagnoses based on internet comments.
In this case my second comment reflects the same thoughts as my first one; it just expresses them more completely and rigorously.
It’s been a long day but after seeing all the exchanges I think you’ve captured something here. In particular, the emphasis on the _you_; there’s something about the particularity of the call that EA makes, that seems to help explain the unusual nature of the discussion.
It also maybe helps me understand a remark referenced by Scott, above -- that EA causes people psychological harm by making people feel bad for not doing more. It’s odd at first glance (charities and political movements guilt trip all the time, why should EA be extra powerful?)
One might say: a certain kind of person is interpellated by EA (in the Althusser sense). Maybe there’s a Lacanian take, that might be better attuned to the psychology of the phenomenon.
> you get to believe that _you_ will do good, better even, at scale, with other people's money in other people's lives.
I am confused about the "with other people's money" part.
I assumed that a typical EA is someone who sends *their own* money to effective charities.
I think the claim here is that the EA does more than simply choose a charity to donate to; they have a method that gives the best answer to which ones all (with similar goals) ought to donate to.
(This is the GiveWell pitch: fund us to figure out what to do with everyone’s money.)
Well said!
When you're working on a project that has a great capacity to produce piles of skulls [1], you should regularly be trying to figure out if you're headed towards a pile of skulls.
[1] because by definition it's effective, so getting the sign right matters
So this is, I think, another good example of how odd the discussion is. The rhetoric is so oddly multi-register I just can't place it: extremely vivid phrases ("piles of skulls") coupled with in-group technical language ("getting the sign right").
It’s not how people talk when they’re evaluating charities (for example, in normal contexts). Or how they talk when they’re reasoning morally (in normal contexts, say an op-ed, or a seminar). It’s not clarifying, either.
So what *is* going on? Why the idiolects, and why these particular idiolects?
It's short-hand. I can write it out long form.
EA premises itself on being both effective and altruistic.
Effectiveness, in terms of having an effect, is something that is pretty quickly determined. If I have a campaign to write get-well-soon cards to kids in hospitals [1], I assert it's not going to have much effect, positive or negative. If I have the kids treated not by people but by an army of robots responding to them 24/7, there's definitely going to be an effect. But is the effect good or bad? That is the altruism part. If I'm hurting the kids it's not altruistic.
I could also reference consequentialism but that'd just be more in-group technical language?
> It's not how people talk when they're evaluating charities (for example, in normal contexts)
Most people don't evaluate charities at all beyond "this makes me feel good to donate to."
[1] I think this is a good approximation of the average charity organization
“Most people don't evaluate charities at all beyond "this makes me feel good to donate to."”
Just as a point of order, this is false.
There’s an entire discipline associated with non-profit evaluation and management. A few years ago we sent them some of the GiveWell material. It’s, essentially, a book report, produced at enormous cost. (GiveWell early on stated that they’d no longer solicit outside evaluations, because [roughly] people weren’t up to their standards.)
The larger question remains unanswered, though. What’s the origin of all this weird language (in your updated post, robot nurses, etc). There’s something odd about it, and it seems to be a clue to why these debates are so weird, aporetic, etc.
>> Most people don't evaluate charities at all beyond "this makes me feel good to donate to."
> Just as a point of order, this is false.
No, it's true.
It's certainly why I give to the Internet Archive, Wikipedia, the Institute for Justice, and my local Cat Care Society.
"Feeling good by donating" is the *purpose* of charity, for most people. We don't SAY that, but it's what's actually happening.
This is why EA is (arguably) revolutionary: it's nominally concerned with *outcomes* in the realm of charitable giving, rather than self-satisfaction on the part of donors. It may or may not be successful at that, but we should acknowledge the weight of the idea behind it.
This is a common elision I see in a lot of EA discussion:
1. Normies don’t evaluate charities.
2. Therefore, nobody does.
3. Therefore, EA is revolutionary.
In reality, however, this is a huge area of study. As was pointed out a while ago re: GiveWell, after subtracting out the early weird AI risk stuff, it basically imitated the outcomes from those people (Gates Foundation work, later some Social Justice, it seems).
It's called lingo. Every community of people has its own lingo, from scientists to programmers to janitors. New Yorkers, Floridians, and Brits also have their own lingo; we just tend to call them dialects instead. What would be weird is if any distinctive community *didn't* have its own lingo.
I see what you’re getting at. But it’s more structured than (the informal notion of) a dialect, which is usually just a set of lexical substitutes, and perhaps swapping out one set of dead metaphors for another.
There are three groups that overlap a lot: (1) effective altruists, (2) readers of Astral Codex Ten, and (3) readers of Less Wrong a.k.a. the rationalist community.
As a result, you can find people debating EA on ACX using the LW lingo. Thus the technical language, etc.
I am taking the "more structured than... a set of... substitutes" as a *compliment*, because indeed the rationalist community is trying to discuss things that are typically not discussed elsewhere, in a way different from the usual online debates.
Now of course one should tone down the lingo when talking outside of LW, but an expression or two will slip through if you feel like most people will understand what you mean.
But specifically the "noticing the skulls" idiom originates at SSC (the previous incarnation of ACX). https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/
One thing I'm noticing is that the discourse is strange and extreme, but at the same time extraordinarily abstract.
It's the birth of a new political party.
I think that is an extraordinary claim, one which to me requires more evidence than you've provided here. In general I feel any claim in the form of a diagnosis (which is on the meta level, as it discusses the people holding the debate) requires more thought, care, and evidence than the original question, not less, especially as claims like that, absent extraordinary evidence, are an easy way to forgo the entire original discussion.
Moving on to a more direct reply, why do you feel meta-level criticisms, or even ascending levels of meta criticism, are unlike scientific debates? I feel like I've seen that pattern over and over again. Take for example Kuhn's theory of scientific revolutions, or critical theory. The criticisms of Kuhn or of critical theory can be pretty meta, while discussing something that is itself already meta. (Your comment, itself, is already on a meta level of Scott's post, and my first paragraph is meta to yours. This paragraph isn't, although this parenthetical is.)
Could you clarify more what it is you're pointing to? I feel like I'm not seeing it clearly.
I'm PND, Ma'am, Party Not Designated. I try to avoid partisan squabbles.
Science isn't particularly meta: it doesn't spend a lot of time talking about how its participants are talking at the meta level, and it definitely doesn't go three or four layers deep.
References to Kuhn (at the meta level, rather than a substantive hypothesis about an object level subject of study) are rare, and usually a sign that something has gone wrong in a scientific discussion.
Science has a pretty clear aboutness, in other words, that makes progress by centering the participants on a common object.
The point about evangelism is well taken. The critic mentioned they didn’t believe in Baha’i themselves - which is even more relevant than whether Baha’i itself is true or not. I also find it refreshing when philosophies I already don’t agree with expend no effort in changing that fact - but I don’t think that choice by itself is particularly noble for the reasons Scott gives.
I also believe "recycling is good" is true, but I'm *not* for evangelizing it beyond making that statement and having some discussions with willing participants.
I mean depends how big a deal the thing is right? If you’re sitting on something really important it seems downright irresponsible not to “evangelize” it.
(Shoutout from fellow Texas game programmer land. By George, you do good work.)
The inside view is yes.
The outside view is to see the massive, massive imbalance of people who did damage sharing something they (probably erroneously) thought was really important, and set a good prior. The inside view is rational when you're inside. The outside view makes the inside view seem crazy crazy irrational.
FWIW, I saw the sense in prediction markets after Eliezer and Linta's current fic portrayed how they'd work in detail (even if fictionalised).
Made it clear how, for instance, they close the incentive feedback loop in scientific funding. The current bureaucratic setup seems even more incredibly and much more *obviously* broken now.
What fiction is that?
This one: https://www.projectlawful.com .
Reading Matt Levine's Money Stuff is a good way to lose your faith in prediction markets.
Yeah, to add to the defense of evangelism, Penn Jillette (atheist magician of the Penn and Teller duo) once said something to the effect of "If I were going to be hit by a bus and didn't see it, would you stand there and say 'I wouldn't want to make them uncomfortable'? At some point you're going to tackle me."
I have to imagine the Baha'i faith doesn't really have a concept of hell... But even beyond that (and I do think Christian evangelism can often overemphasize hell), it just makes me think your faith isn't that compelling if you don't have a desire to share it. (The fact that many Baha'i ignore this precept to not evangelize is a good sign though)
One would think that a religion as small as Baha'i would place more emphasis on evangelism, though in fact I think it does in some capacity. I've seen advertisements in weird places. Enough to plant a seed.
Many larger religions don't seem to carry the imperative to evangelize either (at least, not to the degree of Christianity/Islam), but they must have at some point in history.
My understanding is that the Baha'i faith doesn't aggressively evangelize because it doesn't believe that people need to be Baha'i in order to be saved, or even in order to be good people. Instead, Baha'i adherents have a special responsibility to heal the world, serve as examples, and promote the enlightenment and moral uplift of everyone else.
Given this, I think the restriction on directly encouraging people to become Baha'i makes sense as a way of privileging the unity and commitment of the religion over its size. Baha'i do missionary work all over the world, but that work, in my understanding, usually focuses on stuff like farming techniques and community-building.
Yeah, that's about what I expected, it can make sense for Baha'i to not evangelize, given those beliefs... but that certainly doesn't generalize to other beliefs. Certainly not to religions with a concept of non-universal salvation, and probably not to most beliefs (religious and non-religious) generally.
And, even then, I still think a largely non-evangelizing religion is not a good sign for the religion.
Imagine seeing a movie that was so amazing it "changed your life": of course you're going to "evangelize" that movie and tell your friends they should go see it. On the other hand, if someone doesn't tell me they should go see a movie that they watched, and in fact, when I suggest that I'm going to see it anyways, they ask me if I'm really sure I want to go see it... well that doesn't suggest to me that it's a very good movie.
Oh, totally agree. I don't even think it's fair to call Baha'i "non-evangelizing"--they just aggressively evangelize the stuff that they think is universal, like peace, harmony, and love, while giving a more careful soft-sell on the stuff that they think is only for certain people. So, kind of like reading a book that was very challenging and changed your life, seeing the movie and thinking it's pretty good, and telling everyone to go see the movie, while only recommending the book to certain people :).
I guess to me, though, this raises the question of what you _can_ be mad about with people evangelizing beliefs that they think everyone should adopt. It makes total sense for someone who is certain that I'll go to hell if I'm not baptized into the Catholic church to say whatever they think will get me to do it, and in fact not evangelizing at me with those beliefs suggests that they don't care about me very much.
However, as someone who doesn't hold that particular belief, I think it's reasonable to be annoyed at aggressive evangelism. It's frustrating to have a nice stranger engage in a conversation with you, only to realize a few minutes in that they're trying to get you to join the Jehovah's witnesses, and that they won't be content with a nice, interesting conversation about what their religion means to _them_ because they are convinced that their religion is crucial to _you_. It's annoying to have people knock on your door or hand you pamphlets, especially when the pamphlets tell you that you're going to hell.
Which makes me feel like perhaps people have a special responsibility to be cautious and skeptical before accepting beliefs that justify (or even obligate) being annoying to strangers.
After all, beliefs that justify intense evangelism often also justify, when possible, shunning me for not being baptized, or taxing me, or beating me, or otherwise engaging in pretty unpleasant coercive actions to save me from eternal damnation, and historically that's how most people have handled this sort of belief. I'm very happy that we've shifted to a social equilibrium where that sort of behavior is not acceptable but pushy evangelism is, but that doesn't change the more fundamental ways in which beliefs like this are dangerous.
I don't think I agree that being annoying means that your tactics don't work. You convince people of stuff by getting their attention and then winning them over with argument, connection, etc. Stuff that gets attention when you're not trying to give it attention is, almost definitionally, annoying, because attention is scarce.
So, while a less annoying strategy might have a higher success rate among people who ever engage with it, it's likely to engage fewer people to begin with. There's some point on this engaging/not-annoying possibilities frontier that maximizes converts, and it's unlikely to be where the strategy is not at all annoying.
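As a toy illustration of that frontier (the functional forms here are entirely made up; the point is only that the optimum sits at a nonzero level of annoyance):

```python
import numpy as np

# Toy model: reach grows with annoyance (attention is scarce), while the
# per-contact conversion rate falls with it (people dislike being pestered).
annoyance = np.linspace(0.0, 1.0, 101)   # 0 = invisible, 1 = maximally pushy
reach = np.sqrt(annoyance)               # diminishing returns on grabbing attention
conversion = (1.0 - annoyance) ** 2      # politeness helps once you have attention
converts = reach * conversion

best = annoyance[np.argmax(converts)]
print(f"converts peak at annoyance ~ {best:.2f}, not at zero")  # ~0.20 here
```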
As evidence, I would present the fact that advertising is a competitive, thriving, profitable, long-standing industry, and that virtually all advertising is at least a little bit annoying. Furthermore, advertising aimed at the broadest possible customer base (e.g. beer, soda, toilet paper, etc.) is generally more annoying than ultra-niche advertising (e.g. airplane parts, ultra-complicated board games, etc.)
It's also possible to accept such beliefs while purposefully holding back the evangelical implications, out of humility and an abundance of politeness. If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong and (2) your life is yours to live once you've heard the story, just as mine is mine to live with respect to some other thing I don't believe in. Yes, even while fervently believing those things and experiencing nervous discomfort on your behalf.
This gets back to the core dispute with EA: there are reasonable moral frameworks where it's *okay* to let bad things happen because the alternative is too hubristic and presumptuous.
A literal bus coming my way? Sure, warn me or push me. An invisible bus that you've found by some indirect use of thoughts, not senses? No, I get the urge, but consider the hubris, consider how many people have believed in how many such buses and how few have been definitively right. The prior is strong with this one.
Yes, that's certainly true, and most people who believe such things do exactly what you're saying. But, at some level, I don't feel like the politeness and humility you describe are really consistent with fully believing in hell. If you're sure that an invisible bus is about to run me over, I hope you'd push me, even if my reaction to being saved from a threat I didn't know about might be anger instead of gratitude. But, as you're saying, it's perhaps wrong to be sure of something that you see many other people with equal capacity to you choose not to believe.
Also, while I at some level agree with and respect this sort of humble believing you're describing, it also feels untenable when it comes to sufficiently extreme beliefs. I might be willing to let someone make a choice that I think has (say) a 10% chance of killing them, because I can weigh other normal human things against death. Who am I to say that spiritual autonomy isn't worth the risk of death? Or that heroin isn't worth the risk of death? And since I'm willing to do that, I'm willing to adopt a certain level of humility and say that even when I feel sure of something, I need to be open to being wrong, especially when others aren't sure. And so, while I might sometimes want to act coercively to protect someone, I don't need to do so.
But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that? What could possibly justify not protecting someone from something that's so inconceivably bad that its badness dwarfs the goodness of everything on earth?
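Spelled out in expected-value terms (this is just one way to formalize the worry, not anything from the thread):

```latex
% With any nonzero credence p in an infinitely bad outcome, intervening to
% avert it dominates every finite cost c of intervention (rudeness, coercion,
% lost autonomy):
\mathbb{E}[\text{harm averted}] \;=\; p \cdot \infty \;=\; \infty
\qquad \text{for any } p > 0,
% while every countervailing consideration is finite, so nothing outweighs it.
```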
Obviously, most people who believe in hell manage to do so. I've had lots of friends who believe in hell (and presumably believe that I'm going to hell) who were lovely to me in all ways and who didn't aggressively proselytize, let alone threaten to burn me at the stake or shun me. But I can't help but feel like there's something particularly dangerous or coercive about that sort of belief, and that the fact that our society is functioning reasonably well while accommodating that belief is the result of some hard-won but hard to justify social norms.
> But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that?
For all of my emotive waffle up there, I think this is the nut: if you take it seriously, I think it's pretty easy to convince yourself that moving someone even infinitesimally away from hell is of finite (perhaps large, if you're motivated) goodness and suddenly you have a license to neglect a lot of considerations (like being humble or polite, at the shallow end) if souls might be on the line. It is often hard to feel confident that true believers are robustly aligned when it comes to the temporal needs and preferences of, well, everyone else.
Because of the history of over-certain well-meaning bad actors, my most certain beliefs should almost always stop at your doorstep.
Many people don't (can't?) really follow this; they *really* take on the whole belief, implications and all. I just think the world would be better if we did follow it. I can't and won't get you to follow it. But I will follow it myself.
>But if I believe that your actions have a 10% chance of leading to an infinity of infinite torture, what can I possibly line up against that?
That's Pascal's Mugging.
> If I think you're going to hell, it should still be possible for me to say (1) wait, that's what I *think* but I could be wrong
But isn't this kind of epistemic humility doctrinally impious/heretical in a lot of religious groups? It seems sort of like saying, "it's not really a conflict because they can always believe their religion a little less hard". While I'd certainly prefer that, I don't think it actually resolves the tension for everyone. There are a lot of situations that would be better if people were less neurotic or rigid about their beliefs, but that's not really the direction even secular society is heading right now, let alone faith communities with grisly eschatologies.
I don't disagree with you overall, and that's basically how I handle topics I have strong convictions about in personal interactions, but I think it entails pretty radical changes if you have some not-uncommon worldviews (certainly beyond just the religious or one particular ideology).
------
Also, as a personal aside, I'm actually not sure the brimstone believer becomes less bad without evangelism - I think even a quiet/humble belief that specific people/groups are destined for eternal torment is corrosive and antisocial. This is certainly influenced by loved ones with gay parents who grew up surrounded by peers (and tragically, sometimes teachers) who were taught that "love the sinner, hate the sin" counts as tolerance - they were trapped between the obvious agony of knowing by implication that "friends" expected their entire family to burn in hell and, in a cruel inversion, the feeling that it might actually be intolerant to criticize or avoid people with those beliefs, since they were sincerely-held and not evangelizing (and seemingly a local majority). In this case, I think the low-evangelism mode (combined with a facile idea of pluralism where no belief can ever be harmful just to express) was actually more insidious because it sapped the ability to process genuine hurts and learn how to draw boundaries in relationships.
Is this sort of saying, "I would rather people with bad opinions evangelize so they're easier to avoid"? Probably, but I think it does speak to a slightly deeper (but still fairly selfish) point, which is that I would actually really prefer that the intensity of (other) people's behavior reflect the intensity of their beliefs, because it reduces model uncertainties and therefore my anxiety (as well as, as above, time and emotional energy invested in people who turn out to have beliefs that they have no interest in reconciling with your human dignity).
> But isn't this kind of epistemic humility doctrinally impious/heretical in a lot of religious groups?
Yep, it's why I respect the Baha'i. I don't think it's cheap to do. I think it's expensive and ultimately worthwhile.
Contra Penn, the sometimes intense and painful family drama of deathbed conversion attempts performed against the wishes of the dying.
I just want to register my frustration that this post didn't abandon format and call itself "Highlights From The Criticism of Criticism of Criticism of Criticism". Too far is not far enough.
EDIT: Oh, and the quote of Alex's post is broken, the first paragraph is outside of the blockquote-marking bar. And I recognize that this is Carlin's gaffe and not yours, but the religion is called Baha'i.
Ah, heck, thanks.
Happens to the Jainest of us.
This reminds me of a bravery debate. Everyone is fighting against a real but different opponent, and gets confused because all the opponents share the same name.
One opponent is "confusing a good cause with the need for expertise and evidence." Nobody wants that.
Another opponent is "scorning innovative work just because attempting something new makes you seem arrogant." Nobody wants that either.
And another opponent is "mistaking superficial work for effective work." Look, another thing everybody doesn't want!
I don't have a solution, alas, except to agree with Scott that specific illustrations help.
As I initially understood EA through osmosis by reading in these communities, it has often been reduced to short-hand heuristics, e.g. send money to charities targeting the developing world. That makes it easy to criticize both specifically and paradigmatically. This is at odds with what its definition should imply, which is to just try to find the most effective solutions to human problems, or alternatively the most pragmatic means through which virtually anyone could help. These don't mean the same things, and a chunk of criticism could be chalked up to their conflation (like yeah, you could conceive of something better than Joe Schmo's money in specific instances, but Joe Schmo has money, little time, and few options; should he do nothing?).
That doesn't even require a framework.
"The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems"
Two obvious counters to this:
1) I can guarantee you that if you prove to a given EA that a given problem is not real (and won't become real), he/she will stop worrying about it. EAs care about AI risk because they believe it *is* a real problem i.e. may come to pass in reality if not stopped. It's not an *existing* problem in the sense that a genocidal AI does not yet exist, but that proves way too much; "terrorists have never used nukes, therefore we shouldn't invest effort into preventing terrorists from getting nukes" is risible logic.
2) I happen to agree that animal suffering is not important, due to some fairly-involved ethical reasoning. But... it's not "imaginary or hypothetical", any more than "Negro suffering" was imaginary or hypothetical in the antebellum US South. You do actually have to do that ethical reasoning to distinguish the cases; "common sense" is evidently inadequate to the task.
As someone who thinks that wild animal suffering is extremely important (though not that we have any idea how to solve it, and it might end up being singularity-complete), I think that there are lots of psychological mechanisms that discourage us from taking such ideas very seriously, and those mechanisms are, if not good, at least things that exist for reasons and that maybe are kind of necessary. I'm thinking partly of Scott's "Epistemic Learned Helplessness" post.
Our natural intuitions tell us that the most real or important problems are found among those that we have some chance of solving on our own or in small groups. That was kind of approximately true for almost all of human history. Maybe human societies would actually break if we didn't have these intuitions; maybe we wouldn't be able to cooperate well or trust each other or whatever. Maybe we would all get distracted by speculating about the big questions and get eaten by wolves.
Nonetheless, it's possible that for some notion of value/reality/correctness, there really are super-duper-big-bads that go way, way beyond the ordinary villains of ordinary life (who are already extremely difficult to defeat). And maybe if we think well enough and deeply enough, we notice them. Maybe in turn this still makes us worse at everyday life, because we're still not exactly wealthy and safe enough to necessarily afford to focus on the biggest and deepest and furthest problems.
Another way of putting that is that in a given notion of value, there's no guarantee that the world or universe as a whole, or just the parts you can see, will turn out not to be inconceivably horrible, much more than you bargained for when you started speculating about it. :-)
In this connection, Scott mentioned the famous line from H. P. Lovecraft:
> The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
The ethical reasoning I refer to is social-contractarianism - most particularly, the notion that intrinsic value in morality is limited to those capable of morality (at the very least, reciprocity).
The prime justification here for SC is non-exploitability; creatures like lions cannot be bargained with, so giving concessions to them is futile and giving them full citizenship with all its associated freedoms would immediately result in people getting eaten by lions. Wolves can be our slaves, our exhibits or our foes, but never our equals. If you want to treat your animal slaves well, I've no objection, but I don't think giving them rights is warranted since they can't understand (let alone uphold) responsibilities.
(There's also the "suffering as a criterion for moral value leads to existential despair because bacteria have responses describable as pain" issue, but fair point on the "things can be unfixably evil".)
As I said, though, it's something definitely worth thinking about even if I've come to the conclusion that the status quo isn't far off sanity.
(I should clarify that I do think rights should probably be extended to the mirror-test-passers, as there's precedent for their participation in the social contract (most obviously, orcas being employed by fishermen in return for food). But the further down you go the less plausible this is, and certainly it's nonsense for stuff like cows and chickens.)
I've heard of this contractarian view before but it doesn't seem right to me in that I don't think of rights as a matter of bargaining (so I don't think non-exploitability is a goal). But I can imagine the plausibility of thinking of rights as a matter of bargaining, like finding a Schelling point that rational creatures would consciously assent to and recognize the stability of.
One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.
But maybe we need a stronger conceptual distinction between "being a moral end"/"having moral worth" and "being a participant in human society". Wolves and lions could do the former but not the latter, and people could have responsibilities to them on account of that, but not literal equality in terms of, like, being free to roam about in a city.
I have to admit to some confusion (more about my own view than about yours) because I'm also very uneasy with social contract theories in general, inclining toward viewing people as having presocial or precontractual rights and responsibilities toward others. So while it seems clear that lions can't participate in our hypothetical social contracts, I would also say "but our own rights as humans already don't derive only from our hypothetical social contracts, so lions' inability to participate in them shouldn't mean that the lions don't have rights against humans!". But I've been noticing how hard it can be to say exactly what those rights are or exactly how they're grounded.
Edit: Also, maybe we can find some agreement by thinking in terms of whether you have a responsibility to save others in distress, and whether it's good to do so. Like the child drowning in the pond thing, beloved of some of the inventors of the EA concept. Maybe you would say (I don't know for sure) that it's theoretically supererogatory to save random drowning children in the state of nature, but maybe also clearly required within (one's own?) human society, because societies can form social contracts that expand our responsibilities this way, so that we know what we can expect from each other, at least approximately. By contrast, it might be good to save random drowning elephants (or lions), in the sense that it makes the world better, but it's permanently supererogatory to do this because there's no way that the elephants can reach an agreement or understanding with us whereby we would promise to do this. It could be clearly a good thing to do, that is also clearly not anyone's responsibility to do, because there's no one representing elephants with whom one could agree on this responsibility. Is that kind of similar to your view?
>One argument against this, which I'm sure you're familiar with, is that possibly human beings who can't contract or bargain (yet, or ever, or anymore) are supposed to have rights, like infants, people with severe disabilities, or senile elders. Or severely mentally ill people, including psychopaths who might not have certain capacities to want to respect others' rights. All of these people's rights might be significantly circumscribed by their limited capacities (like maybe some of them should be imprisoned or institutionalized!), but they still conventionally appear to have and to deserve rights.
You're right - I am familiar with it, and even considered it while writing the earlier post. It is a pickle.
In the case of infants I partially bite the bullet i.e. I think prompt infanticide isn't morally murder. There are a lot of caveats, though, like "mutilating babies predictably harms the people they'll become", "humans become at least vaguely capable of functioning within society within a year", "killing John's baby without John's permission is grievously harming John", and so on.
With a lot of the others there's the inherent issue of "how would one implement this IRL without leaving an obvious avenue for murdering people and then claiming they were X", so the point's kind of moot. "Barack Obama is a hippopotamus", though, is literally the example Scott used of something more ridiculous than Lizardman Conspiracy, so the spectre doesn't arise, and "was literally born yesterday" is not going to fly either for anyone who's been around long enough for people to want them dead.
Elephants are highly intelligent (they're on the "passes mirror test" list I mentioned) and also fairly social, so not a great example of "you cannot contract with this", but that doesn't address the point.
My real answer to that point is along the lines of: I have no moral intuition on this matter and can't construct a clear rationale. Also, total-order morality is scary as hell. So I just don't care; I would not think less of someone for saving or not saving the lion (absent other considerations e.g. if saving the lion is likely to result in the lion eating people then I'd say it's at least supererogatory to not save it).
Rights don't exist, period. They are human fictions. No one "does or does not deserve them". So whether they are "created by bargaining" (or not) is something we stipulate.
"Wait I have a right to my life and property!" is not going to magically deflect a Viking axe headed towards your face. Nor stop the Vikings from taking your valuables.
I would say essentially rights are marketing slogans for laws/norms a particular society wants people to take very seriously. But to be vulnerable to those slogans, someone needs to actually be a member of your society (broadly defined).
Almost all ethical theorizing goes much better if you just throw all the "rights" language in the garbage. You can bring it back out once you are done to dress up/market your results, but it is a hopelessly confused way of attempting to get to results.
Ethics definitely in large part is about bargaining/allowing groups of agents to cooperate/coordinate together in non-destructive ways. But what things they will agree to and what behaviors are "ethical" is going to be hugely situational.
The ethics/rights you settle on for a small colony of scientists with little hope of resupply are going to be vastly different than a large group of bronze age humans in a vacant-ish area, or a modern interconnected society on a world with 8 billion people. Think about how different norms were in armies, or sailing ships, or communities living closer to the edge of subsistence (infanticide).
If we suddenly ran across aliens with wildly different social structures, and intuitions, what ethical framework we could establish with them is going to depend on the facts of their situation and biology/psychology etc. If their culture revolves around the "females" hunting and killing 80% of the least fit female offspring, telling them about utilitarianism or "rights" is likely a total non-starter.
And more broadly, "ethics" theory is not some unified thing. It is a rope, made up of many related but separate strands, none of which runs the whole length. Sort of like finding a definition of "game".
Utilitarianism works for some things, SCT for others, deontology for others, and caveman brain for others.
But there is no reason to think the hodgepodge of intuitions, social structures, cultural norms, and psychological structures we evolved/developed in any way cohere into something that can be simplified down to some unified set of "ethics" or "rights".
"Rights don't exist period. They are human fictions."
A reasonable proposition. I believe that rights are not human fictions, but I also am a Christian who believes all men and women are images of the Divine and as such are sacred. If you don't believe that, the fiction theory is more sensible.
Which is why, in Jefferson's first draft of the Declaration of Independence the famous line read as follows:
"We hold these truths to be sacred & undeniable; that all men are created equal & independant, that from that equal creation they derive rights inherent & inalienable,"
The story goes that Franklin convinced Jefferson to change it from "sacred" to "self-evident" in order to please the more strict deists. It is a more universal line to say that these truths are "self-evident", but they're really only self-evident in a culture that is completely steeped in Christianity. The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. People are clearly not equal: some are smarter, some are stronger, etc. It makes sense from a Christian perspective to say that an illiterate Chinese peasant rice farmer and the King of Prussia are both equal in value in the eyes of God: in everyone else's eyes they are clearly extremely unequal in value.
By the same token, if animals have rights they come from their relationship to God and to man: if there is no God, then man can choose whatever relationship he pleases with them.
"The idea that all men are equal is a Christian idea, based on the idea that all men are made in God's image: without that idea, it's nonsense. "
This is Christianity taking credit for the accomplishments of its enemies. The ideology of equality, just like the US Declaration of Independence, arose out of Enlightenment philosophy, 1700 years after the advent of Christianity. Enlightenment philosophers were (as you mentioned) often deists, or even atheists, who tended to be skeptical of organized religion. I won't deny that some early liberal philosophers were Christians, but even more of the anti-freedom, anti-equality, anti-democracy reactionaries were Christian.
I'm saving this comment for reference. Very good, thanks.
I might as well get something out of several graduate level ethical theory classes and a lifelong friendship with a couple ethicists.
;)
I would be more explicit and suggest that social-contractarianism needn't even imply intrinsic value.
Maybe kind of necessary...?
Can you posit any version of history that produces wild mammals and does not involve what you consider to be suffering?
Yeah, 'Nature red in tooth and claw', suffering is what drives evolution. Though I also understand the human desire to help some injured wild animal and bring it back to health. It's hard to reason that... For the betterment of the genetic future of your species I am going to let you suffer and die. Though that might be the best thing for the future.
Altruists are obliged to do something about X if X is real and X is morally relevant to them. I have found that a lot of EA/rationalist types aren't very open to the idea that the second, normative claim needs to be established separately, and also aren't very open to the idea that there can be metaethical doubt about utilitarianism.
And in the hypothetical alternate world where the EA movement distributed bednets and spent a lot of time trying to save people from eternal damnation through Jesus Christ, the idea that people would abandon the latter if you could "prove" that hell isn't real would be cold comfort to anyone looking on from the outside.
The evangelism/having a discussion boundary does seem to have more to do with “do the people talking to each other have mutually reconcilable value systems?” rather than any intrinsic properties of the words coming out of either of their mouths.
If yes, then the listener can slot the information being spoken into their ears into a corrigible framework, frictionlessly, inception-style—almost as if they had already always believed it to be true.
If no, then some flag will eventually get thrown in a listener’s mind that *this* person is one of *those* people that believes that *wrong* thing, and they’re trying to convince me that *wrong* thing is *true* when it’s *not*.
In this way, literally describing something within the framework of a value system that is incompatible with another can be interpreted as an attack by that other (or, more weakly, evangelism). The crux is foundational.
Having wasted a youth arguing with folks on the internet, I’m fairly pessimistic about truly having conversations with folks that I know to have these “mutual incompatibility” triggers. You basically have to encode your message in a way that completely dodges the memetic immune system they’ve erected/has been erected around their beliefs. Worse, knowing that you’re trying to explicitly package information in a way that dodges their memetic immune systems makes them even more likely to interpret your information as an attack (which, honestly, can you blame them? You’re trying to overturn one of their core values! Flipping some cherished bit! People actually walk around with these bits!! Any wedge issue you can possibly imagine cleaves a memeplex in half around it!)
This will be foundationally problematic for any organization that’s explicitly trying to manufacture controversial change. People don’t want to flip their bits.
Theoretically, maybe not all seemingly-intractable disagreements are about values, so maybe they're not all theoretically intractable for that reason.
Like religious evangelism, which is a famously intractable kind of Internet argument, sometimes appears to hinge on disagreements of matters of fact. Like people will argue about whether or not a specific miracle really happened, and so then we should or should not be persuaded to believe a religious tradition founded on, or confirmed by, that miracle. Supposedly, each claimed miracle really happened, or not, or has some kind of evidence for or against it which is independent of people's values.
But your point might still work pretty much the same if you can generalize "value systems" further. Maybe to "stances" à la David Chapman or "pretty deep commitments" or "axioms" or something?
Indeed, I may be overloading the word “value” when I really mean something closer to a deeply rooted axiom (ie, disagreement not being about the fact of the matter of individual details of possible miracles, but more about whether the ontology of the universe contains something even remotely like a miracle)
edit: I read too fast and mistook this list for something from a serious blog post rather than just an example for a comment. As a result, this comment is probably overly harsh in its phrasing relative to the target. Feel free to skip it.
I think I was thinking something like "I'd just ignored that list but if Scott is citing it I guess I'd better respond in detail."
Original comment: Yeah let me elaborate why the "paradigmatic criticism" made me scoff:
- "giving in the developing world is bad if it leads to bad outcomes and you can't measure the bad outcomes" so ... don't give in the developing world? Ever? or measure better? measure better how? At least point me at the book to read and give a one-line summary of recommendations to address this, because this is clearly not a recommendation followed by the non-EA charity space anyways.
- "this type of giving reflects the giver's priorities" :very sarcastic voice: really??? charitable giving is decided on the basis of the giver's interests? yeah no shit, it's my money. The whole point of EA is "Do *you want* to do the most good?" This is inherently anchored to the giver's value system.
- "this type of giving strangles local attempts to do the same work" see I know the examples this is referring to but this is one case where it would have been worlds better to give at least one example because as written this is beat for beat equivalent to actually used political arguments to abolish literally every social safety net. Stop sucking the government's teat! Starving African ... welfare queen!
- "The EA movement is obsessed with imaginary or hypothetical problems" ... "Stop wanting wrong things for no reason" has literally not convinced any human ever in the history of the planet. I now disagree about the noncentral fallacy - this argument is the worst argument in the world.
- "The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized" okay, um, how do I say this, have you read the Sequences? if you can't optimize an outcome, you cannot do anything whatsoever. so like, sure, but absent that there's also no basis for your criticism? How are you saying that the EA charitable giving is *bad*? Did you perhaps model an outcome and are trying to avoid it because it's bad? Yeah that's optimizing, optimizing is the thing you are doing there, as the quote goes, now we're just haggling about utility.
- "The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space" Yes.
- "The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad" I believe this is called "being a moral opinion", yes. edit: Am I endorsing this? No, I just think it cannot be fully avoided. Moral claims cause moral strife.
And like. Maybe this is uncharitable and the book-length opinions really have genuine worth and value and should be read by everyone in EA. But if they do, none of the value made it into this list! Clearly whatever the minimum length for convincing literature is has to be somewhere in this half-open range.
Maybe submit a book review?
>So maybe my thoughts on the actual EA criticism contest are something like “I haven’t checked exactly what things they do or don’t want criticism of, but I’m prepared to be basically fine if they want criticism of some stuff but not others”.
This feels like a motte-and-bailey issue. When I read the rules, the preamble states pretty clearly that a wide range of topics is welcome, but the rule minutiae make it hard to address broader paradigmatic issues. The organizers can call the winner The Best Criticism of EA even though they have implicitly limited the potential criticisms they receive to exclude ones they may be uncomfortable with.
To build off your example, imagine on the next Sunday the pastor comes back and says "We judged 'my voice is too quiet' as the winner of the Criticism of Christianity contest, paid the winner $5,000, and bought me a new microphone". Sure, he asked for criticism and received it, but the overall framing of a contest implies that this was the most important criticism to address, and conveniently an easy-to-address one won. He can say he addressed the biggest criticism, while at the same time not addressing the "God isn't real" criticism.
That just sounds like a disagreement about the word "best"? It sounds to me like the organizers were looking for the most useful criticism to improve their performance within the paradigm.
> conveniently an easy to address one won.
This is in fact what you want out of criticisms though. Easy to address means you don't need to put in much effort to get a potentially large improvement. Now maybe 30% more churchgoers can actually hear the sermon! That's genuinely a great improvement! What's wrong with being easy?
It's a contest for criticism of effective altruism, where entries will be scored and the winners, runners-up, and honorable mentions will receive prize money. In my opinion there is an implicit statement that the winners are better than the non-winners.
>This is in fact what you want out of criticisms though.
My point is that by giving the win to "turn up the mic" rather than "God isn't real" in a contest broadly framed as "criticism of Christianity" rather than "criticism of my specific performance in our single church", the pastor gets to declare victory over a much broader field of concerns than he actually faced.
Right, so you're looking at it as a social battle? But I don't think the pastor was looking to win a social battle at all, the pastor just wanted to improve the sermon in an effective fashion. I don't know how the pastor could signal this; maybe don't publicize the winner at all? But then he can just give the money to his son or whatever. Publicize the winner in a small closed circle?
It's like if you're driving from New York to Sacramento and you make a blogpost titled "Soliciting criticisms of my planned route for driving a car to Sacramento" and then people get very upset that you didn't select the paradigmatic criticism of "abolish fossil-burning cars and build out rail lines." What are you, an oil industry shill? No, you just did not want to embark on a multi-decade reevaluation of your entire way of living, you just wanted to get to Sacramento faster, and paradigmatic advice like "what's so great about Sacramento anyways" is in fact of less than no use to you.
I think EA wanted to get to Sacramento, and the only reason this has blown up so much is that they accidentally labeled their post "Soliciting criticism of our driving plan" and now people think they want to win a performative victory for the combustion engine or w/e.
To be clear, I have no problem with the pastor wanting to improve his sermon, or, to escape the analogy, with the EA folks wanting to improve specific actions, policies, programs, or what have you, which is what I think the rules of the contest target. My complaint is that their tl;dr says "We're running a writing contest for critically engaging with theory or work in effective altruism (EA)", so if they only give prizes to the best critiques within their specific framework, there is an implicit statement that the critiques within their framework are the best "criti[ques] engaging with theory or work in effective altruism".
To take your example, for "Soliciting criticisms of my planned route for driving a car to Sacramento" it's perfectly reasonable not to want a response on Peak Oil. However, if you title your contest "Critically engaging with theory or work in Driving" and then only address and award criticisms of your planned drive to Sacramento, you're again declaring victory against a much broader field of critiques than you actually faced.
My point is exactly your last edit: the contest itself is reasonable, but the framing of it is way out of proportion to the actual eligibility criteria, in a way that feels intellectually dishonest to me, as it allows them to give the pretense of addressing broad criticism while actually not needing to do so.
I am those people who think that.
Fair enough! I agree with this criticism, the labeling is clearly poor (for the goal I presume). I think they underestimated the breadth of available disagreement.
edit: I think to some extent I'm discarding the idea that EA is trying to win a performative battle because that just sounds ..... useless? I can't imagine anyone going "Oh, EA asked for reviews but there was a really good review saying that everyone should fully participate in cutthroat capitalism to maximize value fulfillment, and EA didn't accept it; that must mean that this argument was fully demolished by them if it didn't win" ... and it seems like that's what would need to be believed for the performative victory over paradigmatic criticism to actually affect anyone. For a contest like this, I would expect everyone reacting to the outcome to have already priced in that "EA does EA things", such that the victory of EA-aligned feedback offers no new evidence.
If the winners yield a concrete improvement that makes things substantively better, then yes, their criticism is in an important way better than the ones that make the deeper and more trenchant criticism, in a way that doesn't help yield any big improvement for people!
This is what Zvi was getting at with his list of assumptions and line-by-line critique of their opaque wording: they rather seem to be trying to have their cake and eat it too, to get to tell themselves and others that they took in all the criticism from the grandest to the smallest and did the rational thing, but also to be able to implicitly dismiss the really hard and vague critiques for structural reasons.
Yeah, Scott hasn't read the rules, but he complains that Zvi finds them hard to read!
Zvi wasn't (just) complaining that the contest was too narrow, but that this narrowness was opaque and that people kept telling him to enter it.
> The universe was thought to be infinitely large and infinitely old and that matter is approximately uniformly distributed at the largest scales (Copernican Principle). Any line of sight should eventually hit a star. Work out the math and the entire sky should be as bright as a sun all the time. This contradicts our observation that the sky is dark at night. This paradox was eventually resolved by accepting that the age of the universe is finite
People still bring this up as an unresolved paradox, which I've never found particularly convincing. But I don't see how a finite age of the universe is supposed to be a resolution. According to this line of argument... why are some stars brighter than other stars? Why is the age of the universe relevant? Are all the stars we can see constantly getting brighter, because the age of the universe is increasing?
> But I don't see how a finite age of the universe is supposed to be a resolution
In a stationary infinite universe, light may be (for now) blocked by gas and dust between us and distant stars.
As I understand it, it's basically because if the universe is x years old, you can't see stars that are more than x light years away, and most directions in the sky won't have a star within x light years.
This is what I don't understand. The lesson of high-powered space photography is that, within the finite observable universe, every line of sight already terminates in a star. So it's not relevant that there are more stars out beyond the horizon of observability - we wouldn't see them anyway, because there are observable stars blocking them.
But we can't see the observable stars either (without going to heroic lengths to capture their light), because they're not bright enough. And that fact makes me question why it's supposed to be a paradox that the night sky is dark. I think "not bright" and "dark" mean the same thing.
https://en.wikipedia.org/wiki/Hubble_Ultra-Deep_Field
No, it's definitely not true that every line of sight terminates in a star. No picture of the night sky you've ever seen, unless it's one taken by an interferometer like CHARA, resolves the surface of a star. Stars appear to have finite size because of diffraction across the aperture of the telescope or lens, resulting in a point spread function: https://en.wikipedia.org/wiki/Point_spread_function
We can do some order-of-magnitude calculations. The average density of the universe is actually very well measured, and it's 9.47e-30 g/cm^3 (http://hyperphysics.phy-astr.gsu.edu/hbase/Astro/denpar.html). Only 16% of that is normal matter (the rest is dark matter), and only 10% of the normal matter is in stars (the rest is in gas). So that's a stellar density of about 10^-64 stars/cm^3. The Sun is about 7e10 cm in radius, giving a geometric cross-section of sigma = pi*R^2 = 1.5e22 cm^2, so you'd need to travel in a straight line for about L = 1/(n*sigma) = 9e41 cm = 3e14 Gpc before you hit a star. For comparison, the observable universe is 13 Gpc in radius.
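If you want to check that arithmetic yourself, here's a minimal Python sketch using the figures above (the solar mass and radius are plugged in from standard values, and every star is treated as sun-like, so this is order-of-magnitude only):

```python
import math

rho = 9.47e-30      # mean density of the universe, g/cm^3 (figure quoted above)
f_baryon = 0.16     # fraction of that in normal matter (quoted above)
f_stars = 0.10      # fraction of normal matter in stars (quoted above)
m_sun = 1.989e33    # solar mass, g
r_sun = 6.96e10     # solar radius, cm

n = rho * f_baryon * f_stars / m_sun   # stellar number density, ~8e-65 stars/cm^3
sigma = math.pi * r_sun**2             # geometric cross-section of one star, ~1.5e22 cm^2
mfp = 1.0 / (n * sigma)                # mean free path of a sightline before hitting a star, cm

gpc = 3.086e27                         # cm per gigaparsec
print(f"mean free path ~ {mfp:.1e} cm ~ {mfp / gpc:.1e} Gpc")   # ~9e41 cm ~ 3e14 Gpc
```

So a typical sightline runs for roughly 3e14 Gpc before hitting a stellar surface, versus an observable universe only ~13 Gpc in radius - which is the quantitative version of "most sight lines have no visible star".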
OK just off the top of my head, but I think it's the expansion of the universe that is more important. As time goes by there is a smaller and smaller fraction of the universe that is observable. (At some time in the past there was light everywhere... this is the Cosmic Background radiation that we can now observe.)
Let's temporarily assume that all stars are the same size and the same brightness. From observing the area around us (out to a few dozen light years or so) it appears that there's something like a 10^-30 chance that any star-sized region of space has a star in it. (I have no idea if this is the right number, but there's some specific tiny number.) Now consider some ray into space. Along any star-diameter-sized distance of that ray, there's a 10^-30 chance of running into a star. So with probability 1, you'll eventually run into a star. Thus, this ray will get as many photons coming along it as any other ray that hits a star, so the ray should look just as bright as any other ray. (We have to be a bit careful if we don't want to think of 0-width rays, and instead think of a small cone. If a star is twice as far away, then its light will be 1/4 the brightness - but the probability of one star within distance X is equal to the probability of 4 stars within distance 2X, which is equal to the probability of 9 stars within distance 3X, and any of these results are approximately equally bright, and with probability 1, one of them will occur.)
The reason the age of the universe is relevant is that the average distance to the nearest star on one of these paths is looooong. If there's a 10^-30 chance of a star within the distance of a sun's diameter, then since the sun's diameter is about 10^-7 of a light year, there's about a 10^-23 chance of a star within the distance of a light year. Thus, on average, the nearest star on any of these rays is about 10^23 light years away (with many of them being substantially farther than this). If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.
As you suggest, we also will have to deal with the assumption that stars are the same size and brightness. As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.
> As long as there is some standard *average* size and brightness of stars, then the above calculation will tell us that the *average* patch of the sky is as bright as this average brightness. But if most 1 degree by 1 degree conical patches of the sky achieve this by having 10^20 stars that are on average 10^20 average star-diameters away from us, then the law of large numbers will mean that most of these patches of the sky will be extremely similar in brightness, and only the few that have a small number of stars relatively close to us will be as much brighter or dimmer than the average as individual stars sometimes are.
But this is an argument that most of the sky will be as bright as the rest of the sky. That's true; most of the sky is dark. The paradox is supposed to be that most of the sky is dark when, according to... someone... it should be bright.
> If the universe is only about 10^10 years old, then even if it's infinite, there just hasn't yet been time for light to reach us along most of the rays.
But we don't have problems looking along any given sightline and finding a star there. We would be shocked to look along *any* sightline and not find a star. Starlight is already reaching us along every sightline.
> this is an argument that most of the sky will be as bright as the rest of the sky
It's not just that - it's an argument that most of the sky will be as bright as the average brightness of opaque matter in the universe. Within our stellar neighborhood, the average opaque matter is close to as bright as a star, so we should expect the sky to be that bright.
> we don't have problems looking along any given sightline and finding a star there.
We actually do - even in the Webb deep field, the majority of the sight lines have no visible star.
My problem is that you can have an infinite number of stars yet still have a sky that isn't filled with them.
Imagine the stars are at every integer location on a 2-d graph. And I'm sitting at (0,0) and staring at the point (1, √2). I am not going to be looking directly at any star.
If the stars have radius .01, you'll be looking at the one at (99, 140).
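That checks out: the perpendicular distance from (99, 140) to the ray is about 0.004, inside a star of radius 0.01 (the radius is the hypothetical one from the comment above). A quick check:

```python
import math

# Perpendicular distance from the lattice point (99, 140) to the line sqrt(2)*x - y = 0,
# using |a*x0 + b*y0| / sqrt(a^2 + b^2) with a = sqrt(2), b = -1
d = abs(math.sqrt(2) * 99 - 140) / math.sqrt(3)
print(d)  # ~0.0041, well inside a star of radius 0.01 centered at (99, 140)
```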
Right, you also need the assumption that stars are uniformly distributed (and time-invariant). You can think of the universe as composed of an infinite number of shells centred on Earth. The light from each star falls off as the inverse square of the radius, but the number of stars in each shell grows as the square of the radius (its surface area), so the total light received at Earth from each shell is the same. But there are infinitely many shells, so summing over all of them shows the Earth receives infinite light.
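In symbols, a minimal sketch of that shell argument (assuming a uniform number density $n$ of stars, each of luminosity $L$, and ignoring absorption and expansion):

\[
F_{\text{total}} = \int_0^\infty \frac{L}{4\pi r^2} \cdot \left(4\pi r^2 n\right) dr = nL \int_0^\infty dr \to \infty
\]

Each shell of thickness $dr$ contributes the same $nL\,dr$, so the integral diverges; cutting it off at the light-travel horizon $r \approx ct$ is exactly the finite-age resolution discussed above.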
I read the 'evangelizing' comment as being related to some EA practices and found it difficult to imagine it could have been meant to apply to Scott's blog. (though still possible of course)
I agree 'there’s no clear line between expressing an opinion and evangelizing' and I also agree that telling everybody about that 'thing' that is so important to you can be misunderstood even if you don't want to convert them ... but I still think it's something like a continuum, with some things (more) clearly being on the evangelizing side and some on the 'not evangelizing' side. 'Writing something on your blog that people visit when they want' and 'sending free printed copies of HPMOR to folks uninvited' sure seem different. I'm also not at college, but I lately read a bit of specific criticism of EA's recruitment work, and I can understand why some folks (apparently) could find it cult-like.
Which doesn't take away from the question of whether you find it necessary, or whether it's more or less effective in spreading your ideas.
And I really liked the last sentence.
[I wrote it and] it was clearly not meant to apply to Scott's blog. I've been a reader since he was on livejournal and I've done a stint as a paying customer on Substack.
I pull Scott. EA pushes me. Baptists push me.
Scott, it's as simple as that: who pushes, who pulls. Sure, maybe the oncoming bus push or the abolitionist push are worth it, but it's a really, really high bar to clear.
> but I’m prepared to be basically fine if they want criticism of some stuff but not others.
Fine with me, but it would be useful if the contest were explicit and clear about this. This would have several advantages: People would know what to submit and what they could win a prize for. Contest organizers would have a higher chance of getting what they want. And EA couldn't use one kind of criticism to fend off another one, or pride themselves on taking all kinds of criticism when they are looking for something very specific.
A priest who respectfully engages in an open debate with folks who find religion appealing but also can't believe in God gets a different kind of acknowledgement from me than a priest who asks how to deliver his sermon in a way to be best heard.
There was no first person to consider abolitionism. There were lots of people who did not want to be slaves. But the grinding poverty of the past meant every political economy on Earth was based on slavery through 1600, though it was a lot less basic to places not invaded recently, like England in 1600. Around 1600 in England, when North African Muslims were raiding Europeans in general and sometimes English sailors for slaves, lots of Brits said this was bad and English should not be raided for slaves. It was taught in schools and preached from pulpits. No Englishman should be a slave. Blah.
And when everyone is preaching blah, some preachers strut their stuff and say blah blah blah. If it sounds studly, other preachers will go along and preach blah blah blah. Not just no Englishman should be a slave, but no Englishwoman. Nobody should be a slave! Over the next couple hundred years it slowly caught on in England. Slowly, because enslaving outgroup was so profitable.
1600's through 1800's British Isles were a perfectly placed pirate base against everyone else in Western Europe. Piracy had a moral hazard, but was so profitable they ended up with a British Empire. Maynard Keynes thought the British Treasury was founded by Drake's piracy. They had to keep it. How to justify the moral hazard? The Black Legend of the Bad Spanish Empire of slavers worked okay. Meanwhile the pirates were taking and trading slaves like crazed pirates, and making bank, enough to shift from raiding to trading, enough to be governed from London. The empire they were deniably building together could point to outgroup's nasty slaving ways, and Brit slavers were ingroup enslaving outgroup.
The 13 colonies of piratical slavers and a lot of British poors wanting a better life prospered and became widely known across Britain as 'the best place in the world for a poor man' (per Bernard Bailyn). When they were poor, London let them handle their own affairs. By the 1750's they were building (I think) a third of the British merchant marine and worth governing by their betters. George Washington exceeded London's orders (while following Virginia's orders, and supported by Whigs in London's government) and attacked Fort Duquesne, fortified by the French against British (mostly Virginia British) expansion. He lost and was taken prisoner, but was released and not punished by Virginia and supported by Whigs. The Brits came back and took the fort, Fort Pitt. Washington had triggered the Seven Years' War between France and England. England won. Now to handle the poors' affairs. The poors liked handling their own affairs.
The colonies revolted and all thirteen fought for eight years of fairly nasty war. Long nasty wars have a high moral hazard and need justifications. The Tory Samuel Johnson, already toasting the success of the next slave revolt in the West Indies, wrote a good polemic against the revolting colonials - 'Why are the loudest YELPS for liberty from the floggers of Negroes?' - and John Wesley stole it. Johnson was happy 'to have gained such a mind as yours confirms me' and the Methodists preached Wesley's patriotic Brit sermons against the colonials and against slavery. For the next hundred years the British Empire preached abolition as a justification for bagging any profitable area that looked easy and, like everywhere, was based on slavery. 'Castlereagh, the name is like a knell' bribed the Spanish Foreign Minister with 100,000 pounds to abolish slavery in Spanish America, triggering a revolt in Spanish America that opened Spanish America to British trade. And ended slavery in Spanish America.
Even the revolting colonials gave up slavery, not least because the moral hazard of slavery made it less profitable as the Industrial Revolution got going. Also the Black Legend of the Evil Spanish Empire helped justify grabbing Florida, and then also the northern wilderness loosely held by New Spain. Everyone has been an abolitionist since.
Not from one pushy evangelist, but from a mix of self-interest and genuine moral choice and a lot of preachers and teachers. Like EA.
It feels upsetting to be reminded of self-interest in abolitionism. Maybe this is partly because of the uncomfortable thought that, if certain contingent economic developments hadn't happened as they did, we would still have slavery today!
But we often hear that in the U.S. civil war the north had economic interests opposed to slavery and the south had economic interests in favor of it. And also that there were changes over time that were making slavery more unprofitable. Even that more limited account suggests that some people had the moral luck to happen to benefit less from slavery, so they were less likely to end up being responsible for engaging in it and perpetuating it.
https://en.wikipedia.org/wiki/Moral_luck
https://plato.stanford.edu/entries/moral-luck/
Slavery was not the basis for the British economy even prior to 1600. Slavery died out in Western Europe after the collapse of the Roman Empire, largely replaced by serfdom.
Yes, I'd conflate serfdom (not slavery, because selling serfs had technical problems) with slavery. Also the Japanese making the lowest wages in human history. Slavers, pirates, and overseers of radically low-wage laborers aren't fussy about the letter of the law, as Americans are rediscovering given the bipartisan consensus for lower wages through higher immigration or by any means necessary. And in England serfdom was dying out. Not invaded recently, give or take slave raids from pirates and the Spanish losing at sea.
That reminds me of pseudoerasmus on those low Japanese wages:
https://pseudoerasmus.com/2017/10/02/ijd/
As a European, what you've just said is so obvious and so much the way everyone over here understands history that nobody would think it needs to be stated.
That it is not obvious to Americans has been one of several culture shocks for me about how differently other cultures view the past, as I found out when I started to discover the anglophone internet, a long time ago.
I'm not saying that Americans have the facts of history wrong, but they tend to frame them differently from the way we do, in a way that seems to imply slavery has always been a central feature of Western civilization until Victorian era abolitionism. Whereas if you asked me about the "end of slavery" it is the transition from antiquity to middle ages that would come to my mind.
And yet there is something to be said for that framing; for example I hadn't realized, until English speakers (including Shakespeare) made me, that places such as Venice traded in slaves throughout the Christian era. I don't remember my high school history books mentioning this fact.
I've had the same culture shock about the way English speakers see several other parts of history. For example Vikings, who loom so large that they swallow up the whole early Middle Ages in the Anglo-American mind (instead of the Carolingians). It's a whole other way of viewing the European past.
You seem to be using the term "moral hazard" in an idiosyncratic way, which made it more difficult (at least for me) to understand this comment.
I believe insurance companies use the phrase to mean situations where people are tempted to take risks they are not responsible for. As slaves are in practice reduced to the status of minors or below, their owners are liable to take risks with them irresponsibly.
It's hard to talk about this stuff without spouting outrage in place of facts or being so cold you don't notice the skulls.
I think that's just an externality, and moral hazard is specific to insurance.
Every business is risk, capital, labor, and risk covers the other two. Piracy and slavery were all kinds of risky. Insurance is about risk. 'Just' an externality?
"Moral hazard" is only about how an insured party will make riskier choices than they would if uninsured; it's not an externality because the insurer has consented to the policy.
Piracy, slavery, and conquest impose externalities on their victims, but there would only be moral hazard if someone were, e.g., insuring the pirate ship.
"Just an externality" as in: I don't think there's a more specific term for the sort of behavior you're describing.
Most victims of piracy, slavery and conquest were predators instead of prey every chance they got, so I could possibly argue it's not an externality. They were all doing the dance.
But no, I'm using 'moral hazard' in a wider sense from half-remembered Michael Gilbert mystery stories read in the 1970's.
Hypothesis: The demand for criticism of EA is larger than the supply of good criticism of EA.
Oh dear, I am not sure where I heard it first. (E. Weinstein?) But perhaps the problem of 'everyone' being a racist these days is that we don't have enough real racists to point to and say, "see look how bad that is."
It's complicated, but I think there's both a lot of covert racism (because overt racism gets punished) and rewards for accusing people of racism. It's Goodhart on top of Goodhart.
> the trick is evangelizing without making people hate you. I’ve worked on this skill for many years, and the best solution I’ve come up with is talking about a bunch of things so nobody feels too lectured to about any particular issue.
I think this is also related to some of the writing advice you gave in https://slatestarcodex.com/2016/02/20/writing-advice/ especially regarding how to talk about potentially incendiary topics.
I really like the example of the Priest. But I think the potential criticisms of 'God doesn't exist' or 'buy a better mic so we can hear you better' reflect two extremes and, without additional examples, miss a point.
Again, suppose there is a Priest who wants to see more people attend Sunday service and is also worried that people are leaving. Ultimately he is interested in more people believing more strongly in God. He is asking for criticism and what he can do better.
He's maybe hoping for 'make your sermon a bit shorter' or 'hold service an hour later on Sunday morning and I'd be there'. But instead he may get criticism like 'you're driving this big car while preaching poverty; I don't want to listen to you preaching X while doing Z', or: 'I believe in God, but I'm appalled by the cases of abuse that took place in your ranks. Write an open letter to your Bishop to fully investigate those cases, then I'll be happy to attend your service.'
I think the Priest is fine to reject the criticism of 'there is no God' - this is the one thing he cannot give up and still be a Priest. And anyway those guys will never end up in his church.
The example of 'voice is too low, buy a new mic' found in one of the comments is in some ways extreme in the other direction: the Priest can easily solve this with limited resources, it doesn't require any behavioural change from him, no change whatsoever in 'paradigms', and also no loss of status or comfort. It's probably not even a criticism he'd feel uneasy about - compare 'you need a new mic' to 'your voice is unpleasant' or 'your sermons are chaotic and can't be understood'. Simple solution and win-win.
But what about the third category? Preaching poverty and love-thy-neighbor while living in prosperity and making use of luxury goods not available to many others? Or not reacting to the cases of abuse in his own ranks? I think those examples are closer to the 'paradigmatic' criticism that we're talking about in EA. It requires real changes in thinking (all priests I know do it, but is it really okay to drive this big car while preaching poverty? Am I allowed to criticize a bishop?) and behaviour, and it risks losing the support of other important members in the organization. While not giving up on what is the (most narrowly defined!) core of the issue.
I would argue that those are the criticisms the Priest should hear. Or more precisely: I think it's the Priest's decision to ask for improvements of his sermon only and implement them. Arguably that's already more than what most priests are doing. But I think it's a missed opportunity not to listen to the complaints about his affluent lifestyle and the apparent 'sins' in his own ranks. Especially when you care about people coming to your services and believing in God.
As mentioned, I think those examples are closer to 'paradigmatic' criticism of EA. And I think they are worth being heard. Especially if they come from folks close to the (again, most narrowly defined!) core value of EA.
This is a very good point and example.
I'm not much into EA, but from the discussions here what comes to my mind is:
EA always wants to help the poor, and puts a lot of effort into using money effectively to do so. But they turn a blind eye to where the money comes from and the roles the wealthy and US foreign policy play in making or keeping people poor. Of course, if they didn't, it would be much harder to raise funds from the wealthy, and they would have to question their own lifestyle and cultural beliefs. Doing good feels better, and being best at doing good feels best, so they strive and look for critics to become better at doing good. But if it ever touches their own lifestyle, it does so only minimally, e.g. driving an electric car instead of an SUV, or living like middle class even if they could afford much more.