> A moral rule of say spend 1% of your time and money on altruism and try to make that as effective as possible would be better...
Maybe I'm missing something obvious, but isn't that almost word-for-word the goal of effective altruism, only with a 0 after the 1 (and they'll help figure out how best to spend it)?
Yeah, right-wing commentators often make the argument that aid to Sub-Saharan Africa is just facilitating the Negroid Population Bomb, and I don't think that's a crazy thing to be concerned about.
However, (A) they are vastly overestimating the extent to which the average african currently gets their calories from western aid, (B) ignoring that mortality reductions and economic gain reduces TFR, (C) ignoring that aid can be used directly to encourage smaller family sizes, and (D) look pretty fucking barbaric when suggesting that mass starvation is just the natural antidote to this problem, in preference to spending 0.5% of western GDP.
I mean... I'm a HBD-pilled pro-eugenics neo-Darwinist, I know that the high-IQ populations of the planet need to prioritise their genetic continuation, SSA's TFR needs to come down, and I don't assign equal value to all human life any more than I equally value all animal life. But unless you value black lives about as much as bacteria I don't see how slashing these aid programs can be morally justified.
I think this is why so much of Christianity is about forgiveness and change and acceptance. The people writing the manuals desperately wanted to be good, and that's the dynamic you need. Of course if being good isn't a priority, it sort of becomes pointless self justification, which is why the average American atheist is cynical about the project -- seeing what is there at the local church is grim vision indeed -- but there's a roadmap there. The requirement isn't to be perfect. The goal is to be perfect. The aesthetic is to be perfect.
"The person who saves the 37th child is more moral than the person who doesn't"
I don't think anyone disagrees that saving the nth child will give you some morality points. The disagreement is whether refusing the save the nth child will lose you morality points.
Not if it's the Internet Atheist version of "I don't believe any of this sky-fairy crap but I will quote it to force you to do something I want you to do".
Not all they that cry "Lord, Lord" will be saved, remember?
I think "obligated" is a difficult word here and can be avoided, as descriptively we don't require this of anyone.
It would be more accurate to say, "The more you do, the more value your life has" or something similar. You need strong phrasing to communicate the vital importance of doing this but not blame-based to avoid basically saying "it doesn't matter what you did if you didn't do everything."
But once we start discussing morality, we're wading into an entire morass. Morality is good, okay, but what counts as moral? If I think homosexuality is immoral, am I good or bad? How do we determine if it is or is not immoral? If not saving a drowning child is immoral, is not saving a pregnancy from being aborted immoral? How do we distinguish between the two lives there?
Because the people on here telling me to "go back to the hypothetical, engage with the hypothetical" don't want any nuance or grey areas or contemplation of real world, we are supposed to just go "child drowning, must save". Okay then, child in womb about to be killed, must save. Engage with that and then talk to me about morality.
Oh and you can't argue "it's not a child", "it's only a potential person", "it depends on the stage of development" and the rest of such arguments because nuh-uh, that's dodging the hypothetical. After all, we don't list off what age the drowning child is, whether they're a genius or a Downs Syndrome child, who their parents are, or any of the rest of it. So now define morality for me based on actions deliberately chosen or inaction deliberately chosen with no refinements other than "this is a life, you are obliged to save life".
As a worshipper of Tlaloc, I feel my moral duty is to drown as many children as possible and so if I'm not pushing a kid into a pond 24/7, can I really say that my life has value? 😁
Bringing up the actual in real life effects of actual charities doesn't seem to motivate anyone, because they fall back onto abstract arguments about why it's not good to do charity that on average saves a life per 6000 dollars. And obviously as you see, it's pointless to discuss hypotheticals when you can have real life details to talk about things.
So yeah, I agree that EA refusing to obey social mores is cultish. Normal people drop it when they see you aren't interested in conversation.
I do think you can persuade people, but it's much closer to discovering existing EAs than it is making them. Doesn't invalidate your point though, especially since this essay is targeted at someone who probably thinks they think a lot about morality.
Pardon me if I'm missing something obvious, but don't “split-brain” patients still potentially have a ton of mutual feedback via the rest of the nervous system and body?
Oh yeah completely separately I'd like to apologize for embodying the failure mode you're talking about here. I'm not good and I use this place as a cathartic dumping ground for my frustrations, whoops.
Sometimes the brain worms get me but I'll try to keep in mind that sometimes third parties have to scroll past my garbage. Need to imagine a stern looking Scott telling me to think if it's a good comment before posting.
Arbitrariness is a matter of degree. The fewer convoluted assumptions are required before logical implication can take over, the less arbitrary some idea is. Saying "still ultimately arbitrary" and then justifying "ultimately" on the grounds of the is-ought problem being a thing at all... by that standard, the phrase "arbitrary ethical rules" is about as redundant as "wet lakes" or "spherical planets" - unclear what it would even mean for the descriptor not to apply, so using it anyway is more likely a matter of smuggling in misleading connotations.
If someone told me their own hamburger had ketchup on it, just after having taken a bite, I'd be inclined to believe them even if I couldn't see any ketchup there myself - it's not an intrinsically implausible claim, and they'd know as well as anyone would.
Similarly, having observed it directly I consider my own life to have value, and I'm willing to extend the benefit of the doubt to pretty much everyone else's.
It was originally, long before Substack was founded, at a different URL that's no longer online. Possibly people don't know that there's now a Substack.
Oh, thank goodness - I'd have been sad if a "foundational reference" essay that I reread periodically was gone for good. Link rot comes for everything in the end...
This kind of thing is getting far beyond the actual utility of moral thought experiments. Once you're bringing in blatantly nonsensical constructs like the river where all the drowning children from a magical megacity go, you've passed the point where you can get any useful insight from thinking about this hypothetical.
If you want to actually make a moral point around this, it's better to find real-life situations that illustrate your preferred point, even if they're messier or have inconvenient details. The fact that reality has inconvenient details in it is actually germane to moral decision-making.
So much this. My moral intuition just completely checks out somewhere between the examples 2 and 3 and goes "blah, whatever, this is all mega-contrived nonsense, I might just as well imagine me a spaceship while at it". Even though I'm already convinced of the argument Scott makes.
True that it's hard to learn from these--but they're not for *learning* morality. Thought experiments are the edge cases by which you *test* what you've learned or concluded. In that analogy, it's like looking at what architecture *can't* do by studying an Escher lithograph.
Practically speaking, no one has been persuaded into actually looking into details when they say things like "why would I donate to malaria nets". They fall back onto their preconceptions about how charities are corrupt and oh no nothing ever happens productively when it comes to charities, despite those points being laid out in exhausting detail on givewell's website.
So when people say that hypotheticals are useless and that it takes too much time to find out germane details, it sure does seem like people have a gigantic preference for not having anything damage their self image as a fundamentally morally good person, and this preference happens before any rules about the correct level of meta or object level details arise.
I mean, that's obvious, right?What's your point? That most people don't seem especially saintly when scrutinized by Singer or similarly scrupulous utilitarians?
If it was obvious, there'd be way more pushback re: discussion norms against bad faith. Coming into a discussion with your bottom line written down and being unwilling to update on germane facts that someone has to find for you is rude and should be banned via most ethical systems and not just utilitarianism (or is being stubborn a virtue?)
I'm not saying that they're at fault for being less virtuous, but for *not even attempting to be virtuous by most definitions of virtue*. Neither deontology nor virtue ethics says that's it okay to ignore rules or virtues because it feels uncomfortable. And this isn't a deep seated discomfort that's hard to hide, it's an obvious-by-your-accounting one!
Plenty of people think of things like maintaining faith and hope in conditions where they are challenged as virtuous, rather than as opportunities to reconsider your beliefs. Usually this is couched in terms of being ultimately right, contra the immediate evidence - seems like a pretty good definition of stubbornness to me.
You're wrong. I was persuaded precisely by the details, specifically by Scott back on SSC - the post which finally pushed me over was *Beware Systemic Change*, oddly enough, but the fuel was all of his writing about poverty and the effectiveness and so on in a specific detailed fashion.
What I think you're saying is "people want to be selfish and will engage in whatever tortured pseudo-logic that lets them indulge in this urge with minimal guilt". And on a purely descriptive level, I agree. I also think that's bad, and we should not in any way encourage that behavior.
Thank you so much for proving me wrong. I should not have been hyperbolic.
And I also agree this shouldn't be encouraged, but I have no idea what a productive way of going about this would be. The unproductive way I've been doing is to post snark and dunks, which I agree is bad and also should not be encouraged but what if it makes me feel a tiny bit better for one moment? have you considered that.
But no seriously, you can't see the exact degree to which someone is bad faith in this way until you've engaged with them substantially, at which point they usually get bored and call you names instead of responding. Any ideas would be welcome
Politics is the mind-killer. It is the little death that precedes total obliteration. I will face the hot takes and I will permit them to pass over me and through me. And when the thinkpieces and quips have gone past, I will turn the inner eye to see its path. Where the dunks have gone there will be nothing. Only I will remain.
But to your point, yes, broadly speaking I agree. Claims that you have an obligation to be Perfectly Rational or Perfectly Moral-Maximising or whatever at all times, and to fall short by a hairs breadth is equivalent to having never tried at all or tried as hard as possible to do the opposite, are utterly Not Helpful and also patently stupid. If I came across as saying that, I strongly apologise. And implied within that position is that it is less than maximally damning to fall short from time to time - not *good* maybe, but you do get credit for the Good Things.
And yes, I agree that there is a lot of bad faith on this topic, because people want to justify their urges to have another six-pack of dubiously-enjoyable beer rather than helping someone else, an urge which only gets greater with greater psychological distance. Construal level theory is applicable here, I think. Frankly, I'm getting pretty hacked off with people arguing in what is obviously bad faith trying to justify both being selfish and viewing themselves as not-selfish.
The basic way I ground things out is "do you accept that, barring incurring some greater Bad Thing, to a first approximation we have some degree of moral obligation to help others in bad situations?" If yes, then we can discuss specifics and frameworks and suchforth. If not, we're from such totally different moral universes then our differences are far more fundamental.
> If I came across as saying that, I strongly apologise.
You did not come across this way.
I actually do think I'm not being helpful, and like, surely there exist norms that we can push for where people don't post such bad faith takes.
> If not, we're from such totally different moral universes
To a certain extent, this is not what Scott believes and it's to his great credit that he doesn't, because it's what motivated him to be persuasive and argue cogently for his point.
Agreed. The day I first encountered Peter Singer's original drowning child essay, I went home and donated to malaria nets. I've been donating 10% of my income to global health charities ever since. Hypothetical situations aren't inherently unpersuasive, even if you can't persuade all the people all the time.
I truly think that most people just don't have money to donate to charity after all of the taxes they pay. People may believe that if spending isn't taken care of immediately that the government will go bankrupt within 1-5 years and if that happens the entire western world will. Ollapse overnight and a whole lot of people, the entire planet, will be suffering a whole lot. People may also believe DOGE actually will make things more efficient and if that ends up being the case its completely fine to continue to help the rest of the world in a streamlined and technologically up to date way.
I honestly haven't kept up with DOGE and whats going on but it seems like theyre going full Shiva on everything and then reinstating things they make mistakes on. Its not the way I think anyone would prefer but if it really is true that the US could bankrupt within 1-5 years then this absolutely had to happen and one can be a moral person that supports this.
I think the mega-death river is actually a pretty reasonable analogy for many real-life situations. Scott has mentioned the rich Zimbabweans who ignore the suffering of their countrymen. These are analogies for simply turning a blind eye to suffering, and the point being illustrated is that morality does not reasonably have any *actual* relationship with distance or entanglement or whatever, it's just more convenient to request that people close to a situation respond to it.
Of course there are plenty of ordinary Angolan businessmen, but I think the assumption must be that the rich Angolan is probably not a legitimate businessman but someone who skims or completely appropriates Western aid or the oil revenues that themselves owe to Western businessmen.
I would mostly agree. It's the distillation of some moral hypothetical into a specific (albeit wholly artificial and nonsensical) scenario that makes it a PARABLE.
I think people are apt to ignore problems if they think they can't do anything useful. They might or might not be right about whether they can do anything useful.
Sometimes the locals are the only ones who can help. Oskar Schindler was in the right place and at the right time to save a good number of Jews. Henry Ford wasn't in a place where he could do much. What he could do, make weapons for the allies, was entirely different from what Oskar could do (making defective shells for the Nazis as a cover for saving Jews).
Even assuming Ford was a moral person who was genuinely interested in helping, he didn't have an avenue to do so in a direct way. I don't consider that a moral failing. That he instead chose to help the war effort (which maybe not coincidentally also gave him a lot of money) is not a moral failing either.
And sometimes we just make mistakes, which we cannot determine at the time. The US returned several boatloads of Jews to Europe at a time when it didn't seem like that was likely a big deal. Hindsight wants us to call the action evil, but that's a kind of bias. It was 1939. Very little of Europe was under the control of the Nazis and there wasn't much reason to think that would change. Even less reason to think that the Nazis planned to exterminate Jews in lands they conquered. The solution of "always accept boatloads of foreigners" is not a reasonable policy and comes with its own negatives and evils, which again would be noticed in hindsight.
Which means that "sometimes accept boatloads of foreigners" is a reasonable policy. That does not imply that "always accept boatloads of foreigners" is as well.
Yes, I think even more than physical closeness (which, to me, include all the examples with remote bots, portals and any techno-magical way to be able to experience things and jump in as easily and quickly as if you were physically close - so the thought experiments are not ruling out closeness, because it's very clear those alternatives have the same effect as physical closeness for many things, not only altruism. It just precise what closeness is, when (existing or hypothetical) things make it more complex than physical distance), altruism is boosted by:
- innate empathy (higher for children, higher for people more like you, higher for women, lower for enemies)
- the impression you can help (your efforts are not likely to be in vain)
- the impression you will not loose too much by helping
- this include the fear of establishing a precedent for such help, which indeed can cost a lot if such issue is ultra-common. For me a better interpretation to lack of empathy for common misery than habituation...
- the impression you can gain social status as the "good guy" (direct or indirect bystanders).
On the other hand, it is decreased (decreased a lot, I think) by:
- the impression you are being taken advantage of, scammed in a way.... (i.e. your saving will super-benefit the victim that would become more well off than just fix the issue (like drowning), or, more commonly, it benefit a third party, especially if this third party caused the problem in the first place). This is linked to the "loose too much", but not only, also a little bit to the social status (hero v.s. trop bon trop con (too good=too dumb), but I feel it really is an altruism killer in it's own instinctual way. Maybe THE killer.
I use "instinctual" a lot, because I am fully in the camp of morality being an instinct first, an axiom-based construction (distant) second. So, like other instincts/innate things (like sensory perception), it is easy to construct moral illusions, especially in situations impossible (or unlikely) to happen during human evolution.
You're a doctor working at a hospital, putting in superhuman effort and working round the clock to save as many people as you possible can. Once you finish your residency, do you have the moral obligation to keep doing this?
You have a moral obligation to be a good person. There are many ways to do that, of which backbreaking labor at a hospital is both not the only option and perhaps not the best option.
You don't have a moral obligation to be a good person - to be a good person is to go above and beyond your obligations. Meeting your obligations doesn't make you good, it makes you normal.
This attitude is toxic and feeds into tribalism and "no cookies" arguments where treating the other tribe well gives credit even for a little while treating your tribe with anything but the most delicate kid gloves invites excoriation.
I'm not sure it works as descriptivist either--there are plenty of people who divide the world into "good people" and "bad people", not "the good, the bad, and the average".
I didn't respond at first because in some sense you're right - or we could quibble over what "good" or "Good" mean, which probably isn't productive.
I will say that I don't consider moral to be neutral. Just being a normal person who does normal stuff doesn't make you moral. It doesn't make you immoral, either.
For me to consider someone moral, I believe that they have to do things that are morally positive that are not natural or easy. There has to be at least some effort at doing other than go-with-the-flow.
Again, not doing that doesn't make you evil (usually), but I don't want to dilute the idea of morality to make it natural and easy. It lets everybody get off too easily and with no benefit to society. We should expect more, in the sense of "leave the place better than you found it."
Does it matter? Does the fact that someone else's lack of moral obligation left you in this situation mean you don't need to help?
Maybe you see a drowning child because someone didn't fulfill their moral obligation to add fences. Or because someone pushed the child into the river. Does that change your moral obligation to save them?
Strongly disagree. The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Suppose you're Newton trying to explain how orbits work with the cannon thought experiment, but the person you're talking with keeps bringing up ways in which the example is unrealistic. "What sort of gunpowder could propel a cannonball out of the atmosphere?" they ask, and "What about air resistance slowing the cannonball down?" and so on.
It's not unreasonable to say in that situation "No, ignore all of that and focus on the idea the thought experiment is trying to communicate. If it helps, imagine that the cannon is in a vacuum and the gunpowder is magic."
And sure, if Newton thought hard enough, maybe he could come up with the concept of rockets and provided an entirely realistic example of the principle- but if someone had demanded that of them, they'd still have been missing the point.
>The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Even the most simple, original Drowning Child thought experiment is drawn from messy reality. It asks us to avoid many questions that any person in that situation might ask themselves, intuitively or not: What is the risk to myself, other than the financial risk of ruining my suit? Am I a good enough swimmer to get to the child and pull it to shore? Is the child, once I reach it, going to drown *me* by thrashing around in panic? Is the child 10 meters from shore, or 100? Are there any tools around that could help, like a rope?
Plenty of complications there already, and no need to introduce even more. Or, if you do need to introduce more, start asking yourself if it's really a good thought experiment to begin with.
But Newton never claimed that his cannonball experiment was, by itself, proof of his theories, only that it helped to illustrate an idea that he'd separately demonstrated from real examples. Scott doesn't have the real-world demonstration.
I'd have thought the opposite is true, or can be, in that well-chosen idealized scenarios can help clarify and emphasize moral points. It's analagous to a cartoon or diagram, in which a few lines can vividly convey all the relevant information in a photo without any extraneous detail.
I actually have found a lot of utility in it because I seem to disagree with basically everyone in this thread, and it has given me context on why I find EA so uncompelling.
By relocating to the drowning child cabin you are given an incredibly rare chance to save many lives, and you should really be taking advantage of it.
On the other hand, you only get the opportunity because the megacity is so careless about the lives of its children. Obviously, saving the drowning children is a good thing, but what would be even better is if the megacity does something to prevent the kids falling into lakes and streams in the first place.
And if they don't bother because "well that sucker downstream will save the kids for us, and we can then spend the money that should go to fencing off dangerous waterways and having lifeguards stationed around pools on trips to Dubai for the rulers of the city", then are we really saving lives in the long run?
You are not really engaging with the thought experiment. Maybe think of this experiment instead: you suddenly develop the superpower of being able to be aware of any time somebody is drowning within say, 100 miles, and being able to teleport to them and teleport back afterward. If you think 100 miles is so little that a significant number of people drowning within that area is the government being lazy or corrupt, then imagine it was 150, or 200, or 1000, or the whole planet if you must. Would you have an obligation to ever use the active part of your powers to save drowning victims, and how much if so?
"You are not really engaging with the thought experiment."
Because it's rigged. It's not honest. It's trying to force me along the path to the pre-determined conclusion: "we think X is the right thing to do and we want you to agree".
I don't try to convert people to Catholicism on here, even though I do think in that case X is right, because I have too much respect for their own minds and souls. I'll be jiggered if I let some thought experiment that is as rigged as a Las Vegas roulette wheel manhandle me into "well of course I agree with everything your cult says".
EDIT: You want me to engage with the thought experiment? Fine. Let's forget the megacity.
Outside my door is a river, and every hour down this river comes a drowning child. Am I obligated to save one of them?
I answer no.
Am I obligated to save every single one of them, morning noon and night, twenty-four drowning children a day every day for the foreseeable future?
Again I answer, no.
But that's not what I'm supposed to answer? Well then make the terms clearer: you're not asking me "do you think you are morally obligated?", you're telling me I'm morally obligated. And you have NOT made that case at all.
There's a group of people who think that if you live in a regular old cabin in the woods in the real world, see a single child drowning in the river outside, and can save them with only a mild inconvenience, you are morally obligated to do so.
The child-drowning-in-a-river-every-hour thought experiment is a way to further explore that belief and discuss where that moral obligation comes from. Of course it's going to sound absurd to you, because you don't agree with the original premise. It's convoluted because it's a distortion of a previous thought experiment.
I'm not a huge fan of the every hour version because it implies an excessive burden on the person who would have to save a child every hour, completely disrupting their life and removing moral obligation to some degree. I think the comparison of the moralities of the two people earning $200k is a much more interesting example.
Save sixteen kids the first day, then formally adopt those. Have them stand watch in shifts, with long bamboo poles for rescuing their future siblings from safely back on shore. If their original parents show up, agree to an out-of-court settlement, conditional on a full-time lifeguard being hired to solve the problem properly.
I mean, if you seriously think you're not morally obligated to save any drowning children (and elsewhere in the thread you said it applies to the original hypothetical with just one child too), then fine, you've finally engaged instead of talking around the thing.
This, and your attitude to moral questions in general, does affect my opinion of the effectiveness of Catholicism, and religion in general, in instilling morals in people though, and I can't be the only one. You're not just a non-missionary, you're an anti-missionary.
Oh dearie, dearie me. You now have a poor opinion of Catholicism, huh? As distinct from up to ten minutes ago when you were on the point of converting?
Yeah, I'm afraid my only reaction here is 😁😁😁😁😁😁😁😁😁😁
Now, who's the one not engaging with the hypothetical? "Just because you think it's bad doesn't mean it's wrong", remember that when it comes to imposing one's own morals or religious beliefs on others in such instances as trans athletes in women's sports, polyamory, no-fault divorce, capitalism, communism, abortion, child-free movement and a lot more.
You don't like the conclusion I come to when faced with the hypothetical original Drowning Child or the variants with the Megacity Drowning Children River? Tough for you, that does not make my view wrong *unless* you can demonstrate from whence comes the moral obligation.
"If you agree to X you are morally obliged to agree to Y". Fine. Demonstrate to me where you get the moral obligation about X in the first instance. You haven't done that, you (and the thought experiment) are assuming we all share Western, Christianity-derived, social values about the importance of life, the duty towards one's neighbour, and what is moral and ethical to do.
That's a presumption, not a proof. Indeed, we are arguing about universal values and objective moral standards in the first place!
I can be just as disappointed about "the effectiveness of Effective Altruism, and rationalism in general, in instilling morals in people" if you refuse to agree with me that "if you agree to save the Drowning Child, you are morally obligated to agree to ban abortion".
Malaria killed 608,000 children globally in 2022. Abortion killed 609,360 children in the USA alone in 2022. Now who cares more about the sacred value of life and the duty to save children?
That's the fun with hypotheticals - someone elsewhere said "the choice is snake hands or snake feet and you're going 'I want to pick snake tail'" but why not? It's a hypothetical, nobody in reality is going to get snake hands or snake feet! So why not "Oh I think I'd rather be Rahu instead!" with the snakey tail?
These things never seem to bother with considering that the value of a human life is not a universal constant, any more than is the value of other life on this planet.
Oh, sure. It's an arm-twisting argument about "you should give to charity", not anything more. Same way the thought experiment about "suppose a famous violinst was connected up to your circulatory system" or "suppose people got pregnant from dandelion seeds floating in the window" about abortion.
It's set up to force you along a path to the conclusion the experimenter wants you to arrive at. You have three choices:
(1) Agree with the conclusion - good, moral person, here's a pat on the head for you
(2) Agree with X but not with Y - tsk, tsk, you are being inconsistent! You don't want to be inconsistent, do you? Only bad and stupid people are inconsistent!
(3) Recognise the trap lying in wait and refuse to agree with X in the first place - and we get what FeaturelessPoint above pulls with me - oh you bad and wicked and evil monster, how could you?
Many people go along with (1) because nobody (or very very few) are willing to be called a monster by people they have been habituated to regard as Authorities (hence why it's always Famous Philosopher or Big Name University coming out with the dumb experiments; we'll all laugh and ignore if it was Joe Schmoe on the Innertubes) and most people want to get along with others so they'll cave on (2). We all want to think of ourselves as moral and good people, after all, and if the Authority says "only viewpoint 1 is acceptable for good and moral people to hold", we'll most of us go along meekly enough.
You have to be hardened enough to go "okay, I'm a monster? fine, I'm a monster!" but it becomes a lot easier if your views have had you called a monster for decades (same way every Republican candidate was "Hitler for real this time", eventually people stop paying attention).
I'm willing to bite that bullet in a hypothetical, because I know it's a hypothetical and what I might or might not do in a spherical cow world of runaway trolleys and nobody in sight for miles around a pond except me and a drowning child, is completely different from what I'd do in real life.
In real life, maybe I don't jump into the pond because I can't swim. Maybe this is my only good suit and if I ruin it, I can't easily afford to replace it, and then I can't go to that interview to get the job that means now I can pay rent and feed my own kids. Maybe I'm scared of water. Maybe I think the kid is just messing around and isn't really drowning. Real life is fucking complicated*, so I have no problem being a contrarian in a simplified thought experiment that I can tell is trying to steer me down path A and not path B.
In real life, I acknowledge the duty to give to charity, because my religion tells me to do so. That's a world away from some smug thought experiment.
*Which is why there is a field called moral theology in Catholicism, and why for instance orthodox Jews get around Sabbath prohibitions by using automated switches to turn on lights etc. The bare rule says X. Real life makes it hard to do X, is Y acceptable? How about Z? "You're a bad Jew and make me think badly of Judaism as instilling moral values if you use automation on the Sabbath" is easy to say when it's not you trying to live your values.
I'm loving your ability to enunciate what the rest of us can only mutely feel.
"Suppose people got pregnant from dandelion seeds floating in the window" - hadn't heard that one but it's funny to me because it puts the thought experimenters about at the level of some adolescent girls circa 1984 - when my fellow schoolgirls earnestly discussed whether one could get pregnant "sitting in the ocean" lol.
"Again, suppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don't want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective, and a seed drifts in and takes root. Does the person-plant who now develops have a right to the use of your house? Surely not--despite the fact that you voluntarily opened your windows, you knowingly kept carpets and upholstered furniture, and you knew that screens were sometimes defective. Someone may argue that you are responsible for its rooting, that it does have a right to your house, because after all you could have lived out your life with bare floors and furniture, or with sealed windows and doors. But this won't do--for by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army."
Interestingly, she seems to argue *against* the Drowning Child scenario, though not by mentioning it:
"For we should now, at long last, ask what it comes to, to have a right to life. In some views having a right to life includes having a right to be given at least the bare minimum one needs for continued life. But suppose that what in fact IS the bare minimum a man needs for continued life is something he has no right at all to be given? If I am sick unto death, and the only thing that will save my life is the touch of Henry Fonda's cool hand on my fevered brow. then all the same, I have no right to be given the touch of Henry Fonda's cool hand on my fevered brow. It would be frightfully nice of him to fly in from the West Coast to provide it. It would be less nice, though no doubt well meant, if my friends flew out to the West coast and brought Henry Fonda back with them. But I have no right at all against anybody that he should do this for me."
So by her logic, if you live by the river of drowning children, nobody in the world can force or expect you to rush out and save them every hour, or indeed at all. Just because your cabin is located beside the river, where there is a megacity upstream where the children all tumble into lakes and get washed downstream, puts no obligation whatsoever on you. You didn't do anything to create the river or the city, or the careless parents and negligent city government.
I appreciated this perspective and was surprised is wasn't brought up earlier or given greater weight.
Deontological details are important, but a core part of all of this revolves around who is accountable for stopping an atrocity. I loved Scott's article, but we focused on pushing the extreme boundaries on how to evaluate a hapless individual's response to the megacity drowning machine while literally ignoring the rest of the society.
I've waved this part off as avoiding the pitfalls of the bystander effect; plus the point of the article seems to be answering the question "what should I as an individual do?" as well . But sometimes a problem requires a mobilized, community response.
I also appreciated Deiseach pointing out that when you altruistically remove pain from a dysfunctional system that you can remove the incentives for the system to change which can have a worse outcome.
If it needs to be in the form of a thought experiment:
A high profile, reckless child belonging to a powerful lawmaker who is constantly gallivanting falls in the river. If you save them you know the child will stay mum about it to avoid backlash from their parents, but if they drown the emotionally vexed lawmaker will attempt to re-prioritize riparian safety laws. What do you do?
The megacity is a vibrant democracy. Every child who drowns traumatizes the entire family and their immediate relations and galvanizes them to vote against the status quo and demand policy change, which is the only thing that will ultimately stop the jeopardy to the children long term. Do you save an arbitrary child that afternoon? How about at night after saving every child during your standard waking hours?
No one wants to see an atrocity occur. But sometimes letting things burn allows enough smoke to get in the air that meaningful action can finally happen. We should at least consider this if we're doing an elaborate moral calculus.
I went to that page and entered my information, but it didn't tell me whether I was on the global rich list or not, and it didn't say how rich someone would have to be in order to be on the global rich list (which I assume is not a real list, but a metaphor meaning in the top 0.01% or something). Do you know?
Scott has to keep making up fantastical situations because it’s the only way to pump up the drowning child intuition. I don’t regularly encounter strangers who I can see right in front of me having an emergency where they have seconds before dying but are also thousands of miles away.
Hmm, I don't have an ethic where I judge hypotheticals in terms of their realism. In fact, isn't the beauty of the hypothetical the fact that it is so malleable?
It really is a different mode of thinking. For some people, abstract situations are clarifying because they eliminate the ancillary details that obscure the general principle. For others, it's necessary to have all the ancillary details to make the impact of the general principle evident.
I've always favored the former, but I regularly encounter folks who only process things via the latter. Communicating effectively and convincingly across different types requires the ability to switch modes. Sorta like talking to a physicist vs. an engineer.
I redd about this on r/askphilosophy (not sure why this scenario is resurfacing so much lately) and was struck by this comment:
"Singer isn't writing for people walking by ponds that children have fallen into, though. It's a thought experiment ... Singer's point isn't "Intuition, yay!" it's that our intuition privileges the people close to us but we should consider distant folks the same. It's that our intuition is wrong."
That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable.
It seems entirely disingenuous to me. He (Singer) should state his point, assert that he knows the truth and you know a lie, and let the chips fall where they may.
"That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable."
It's a gotcha, and why my withers remain resolutely unwrung by those telling me I'm immoral if I don't fall into line about "if X, then by necessity and compulsion Y".
I kind of know what you mean, but I kind of also feel like thought experiments lay bare uncomfortable truths about ourselves that we can typically hide from behind "germane details"
Is it a good thing to aspire to be the moral equivalent of the Siberian peasant who can’t do math word problems because he rejects hypotheticals? The thought experiments are useful for crystallizing what principles are relevant and how. Most people don’t intuitively think in terms of symbolic abstractions, that’s why hypothetical scenarios. Their practical absurdity is beside the point.
Given Russian history, I sorta suspect the Siberian peasant is capable of doing math word problems in private, but has developed an exceptionally vigilant spam filter. Smooth-talking outsider comes along, saying things without evidence? Don't try to figure out the scam, just play dumb, avoid giving offense, and wait for him to leave.
I agree. Maybe the Siberian peasant is too stupid to do maths problems, or maybe he remembers the last time some government guy from the Big City turned up and asked the villagers to agree to a harmless imaginary statement.
They're still scrubbing the bloodstains out of the floor in that hut.
Perhaps more precisely, people discount help according to their social circle's accounting of that help. Distance is part of it, relatedness is another (especially in collectivist cultures like mine)
That's a good point. Intuitively, the kid you can see drowning is probably your neighbor's kid or your second cousin twice removed or something. The kid you can't see drowning has nothing to do with you, and you have nothing to do with any of the people that would be grateful for them being saved
Honestly that's not intuitive to me? I've never thought of the drowning child thought experiment and thought "wow, that's probably related to someone I know!" and if we imagine that the drowning child is not at all related, e.g they're a Nigerian tourist or something, it still seems like I'm just as obligated to save them.
So people intuitively recognize that you should save drowning children bcs that intuition evolved to help people related to you pass their genes on. They don't have that intuition for people far away bcs that had no reason to evolve since helping people hundreds of miles away doesn't help your genes.
In older times people just said that other tribes didn't really matter and only they did, so that's why they only helped their own tribe. Nowadays people are more egalitarian and recognize that everyone has moral worth, so they have twist themselves into knots to justify their intuition that you don't have to help far-off strangers.
If there are two people choking, one 10cm away inside a bank vault you don't know how to unlock (and might be charged with a felony for trying), the other a hundred meters away across clear open ground, who do you have the greater responsibility to?
Available bandwidth and ping times are more important than literal spatial distance.
I would agree with this idea. It also seems like a near vs far mode thing. The suffering of children in Africa is very conceptually distant, and we perceive it with a very low resolution. A child drowning right next to you just feels a lot more real.
In Garrett Cullity's The Moral Demands of Affluence his argument is that the "Extreme Argument" (Singer's drowning child) would require us to compromise our own impartially acceptable goods. And we don't even ask that of the people we are saving, so they can't ask it of us. (Kind of hard to do a tl;dr on it because his entire 300 page book is solely on this topic.)
"My strategy is to begin by describing certain personal goods—friendships and commitments to personal projects will be my leading examples—that are in an important sense constituted by attitudes of personal partiality. Focusing on these goods involves no bias towards the well-off: they are goods that have fundamental importance to people’s lives, irrespective of the material standard of living of those who possess them. Next, I shall point out the way in which your pursuit of these goods would be fundamentally compromised if you were attempting to follow the Extreme Demand. Your life would have to be altruistically focused, in a distinctive way that I shall describe. The rest of the chapter then demonstrates that this is not just a tough consequence of the Extreme Demand: it is a reason for rejecting it. If other people’s interests in life are to ground a requirement on us to save them—as surely they do—then, I shall argue, it must be impartially acceptable to pursue the kinds of good that give people such interests. An ethical outlook will be impartially rejectable if it does not properly accommodate the pursuit of these goods—on any plausible conception of appropriate impartiality."
I don't know if you were the one to recommend it in a prior ACX post, but I saw a comment about that book and devoured it. I am shocked that it doesn't seem to have any influence on modern EA, because it deals a strong counterargument to the standard rejection of the infinite demand problem.
I don't think it was me. I live in an Asian timezone so rarely post here (or on any other moderately popular American thing since they always have 500 comments by the time I wake up).
But maybe we both saw the same original recommendation? I read it probably 5-6 years ago, so maybe I saw it back on SSC in the day??
I'm not well versed on the matter--isn't the entire point of Singer's drowning child that it is not an extreme demand? It doesn't require you to sacrifice important personal goods like friendships or personal projects, just that you accept a minor inconvenience.
Edit--I read more about it, and Singer's drowning child is not the extreme demand, but the child-per-hour argument could be. The iterative vs aggregative question to moral duty seems particularly relevant to Scott's post.
With the disclaimer that it has been many years since I read the book and Cullitty's "life-saving analogy" is slightly different from Singer's for reasons he explains in the book. But part of the Singer's argument is he isn't actually asking us "just" to save one single child with his life-saving analogy. That's just the wedge entry point and you then are required by his logic to iterate on it.
"I do not claim to have invented what I am calling the ‘life-saving analogy’. Peter Singer did that, in 1972, when he compared the failure to donate money towards relief of the then-recent Bengal famine with the failure to stop to pull a drowning child from a shallow pond."
Singer certainly wouldn't say "you donated to the Bengal famine in 1972 so you are relieved from all further charity for the rest of your life." He would ask you to iterate for the next thing. After all his other books advocate for recurring monthly donations, not one offs.
"No matter how many lives I may have saved already, the wrongness of not saving the next one is to be determined by iterating the same comparison."
Singer doesn't let you off the hook if you see a school bus plunge into the river and have saved a single child from it. You can't just say "eh, I did my part, time to get to work, I'm already running late for morning standup".
And then once you iterate it seems to lead inexorably to the Extreme Demand.
"An iterative approach to the life-saving analogy leads to the conclusion that
you are required to get as close as you productively can to meeting the
Extreme Demand"
So I think Cullitty, at least, believes that (some variation of) Singer's argument requires the Extreme Demand.
I think it also abolishes the "why help one near person when the same resources would help one hundred far persons?" arguments I see bruited about.
If you don't get to say "I donated once/I pulled one kid out of a river once" and that's it, no more obligations, then neither do you get to argue that people far away should be prioritised in giving over people near to you (and I've seen plenty of arguments about 'this is why you shouldn't give to the soup kitchen on your street when that same money would help many more people ten thousand miles away').
If I'm obliged to help those far away, I am *also* obliged to help those near to me. I'm obliged to donate to malaria net charities to save 100 children, but I'm *also* obliged to give that one homeless guy money when he begs from me.
Distance is not an excuse in either case; if I'm obliged to help one, I am obliged to help all, and I don't get off the hook by saying "but I just transferred 10% of my wages to GiveWell" when confronted by the beggar at my bus stop.
If you're going to get moralistic about (I assume) HIV you should also bear in mind it gets transmitted from mothers to newborns, who obviously have no moral responsibility for their plight.
"An entire society focuses on a sexual practice that spreads an incurable disease. The disease is also passed on to the children."
->
"An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society?"
This is standard motte and bailey. You cannot consider one without the other in this thought experiment and turn around and apply it to real life.
This is kind of an absurd argument given that the sexual practice in question is just "having sex" and infecting children is in no way a necessary consequence for people to fulfill their desire.
This is again a question of moral luck: people in the United States can, relatively trivially, get the medicine necessary to have sex without passing on HIV and people in some nations cannot.
Okay, fine - we should definitely discourage this. I don't think that gives us the license to ignore newborns getting HIV or that this is tantamount to deliberately drowning children.
Do you really think the world's assembled anti-HIV efforts would have ignored this out of embarrassment or stupidity? It's largely a sexually transmitted disease - they are not squeamish when it comes to studying which sexual activities are associated with increased risk. It is easy to find tables with estimated infection rates for anal sex, sex while carrying open STI sores, and so on.
I suggest you come up with some other way of blaming HIV incidence on the backwards culture of Africans.
>An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society
Those are two separate questions. You should save the child and do what you can to encourage the societal change that you think would be beneficial.
I get that this is some kind of tortured analogy to HIV, so I guess the real question is do you actually think non-profits aren’t also spending money on safer sex education in addition to HAART?
That doesn't mean long-term utilitarian arguments about the consequences of policy go away. It is conceivable that refusing to pay for HIV medication would ultimately produce a society where the risks of HIV are so terrifying that no-one engages in unprotected sex, and thus the number of infected newborns drops to zero. Even if, in the short term, more newborns die.
I don't know if this is actually true, but 'moralism' isn't the correct frame to analyse this problem with. "Fewer people should die of HIV" is a 'moralising' position.
Yeah, some countries do have wildly high rates, but were these countries with zero access to HIV meds? How do you know this didn't exacerbate the problem?
I don't know what the correct solution to the calculus here would look like, I'm just pointing out that calling your critic a 'moraliser' in response is nonsensical. There's no utilitarian calculus without a moral definition of utility.
Countries in the first world with ready access to these medications are way better off than much poorer countries who rely on foreign aid to get them.
And my point was not that it's absurd to invoke morality, my point was that invoking it to assign responsibility to victims was incorrect for the population of victims who have no agency.
EDIT: Also antiretrovirals can virtually eliminate transmission so it's hard to see how the "moral hazard" argument would work here, at least in a scenario where you adequately supply everyone.
> Countries in the first world with ready access to these medications...
Are, among other things, overwhelmingly white/asian, and I'm HBD-pilled, so I don't assume that identical policies are going to yield identical outcomes in both areas.
> EDIT: Also antiretrovirals can virtually eliminate transmission
If they remember to take them, sure. IQ has an impact on how reliably that happens, though.
So there is a question here of judging people as a society. If the newborns in a society all get HIV because the society is bad, but then the newborns grow up to be part of the same society, how do we judge them?
There's a reasonable counterargument that any specific newborn isn't blameworthy because he's a blank slate that might not grow up like that society, so we shouldn't judge him as a social average. But then this is already effectively a newborn we know nothing about except that he's from that society, so maybe judging him by the priors of his society (instead of global priors) makes more sense.
(There's also, in this particular case, a second objection that AIDS aid might gradually help push that society away from the disease as a group; this is a practical question I have no particular insight about)
Umm, if you're talking about HIV, don't all sexual practices that involve the exchange of semen, saliva, or vaginal secretions increase the spread of the incurable disease? Doesn't this include all societies that allow or encourage physical sexual connection?
Does it make any difference whether or not you're a member of the society?
If you’re going to say this and repeat it. then get it right. The O.08% figure applies to the situation where the infected male is asymptomatic. If he is sick the chance of transmission is 8x higher, or 0.64, enuf bigger to be harder to shrug at. That works out to a 12% chance of infection if thewoman has sex with the symptomatic man 20 times.
The blaming was down to opposition to condoms. Condoms are the highest good, you see, and being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people (see the comments above about people wanting to believe they're good and moral while not doing something to help) so they use cases like "condoms reduce spread of AIDS, the church wants people to die" to make themselves feel justified.
It's not so much that the church is against condoms in committed relationships, it's the position that even using condoms in extramarital or promiscuous sex makes those things a worse sin instead of not as bad, that really riles people up wrt AIDS and other diseases.
"being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people"
All this is an accurate description of the Catholic church's perspective. Judge and ye shall be called judgey.
Can you please post a link to the stat you quote of 37x increased risk of HIV transmission when dry sex is practiced vs intercourse with no interference with the vagina’s
secretions? I have looked quickly on Google Scholar and asked the “deep research” version of GPT and cannot find any figures remotely like what you are quoting.
Yeah I found that too. That 37x-as-likely sounded like bullshit to me from the start, because it's too big and precise. This just isn't the kind of data from which you can extract such a precise number, or find such a large difference between groups. To get such a big number, and to trust it, you'd have to have a control group of couples and a dry sex group, and assign couples randomly to the groups and then followed them for a year doing HIV tests. Um, obviously such a study was not done, and it's not possible to get data you're confident of just by interviewing people about practices and number of partners, etc. In fact it's probably not possible even group the people studied into dry sex and regular sex groups . You'd have to find people who have *only* done dry sex or only done plain vanilla intercourse. In one of the studies I looked at the best they could do was look at women who reported they had had dry sex at least once in the period they asked about.
I really don't doubt that dry sex creates abrasions on the genitals of both partners and that this ups the chance of HIV transmission. Really irritates me when people spout bullshit that supports it though.
Isn't it only non-monogomous sex that can really spread any of these diseases? (Okay, you could be born with it, and give it to your one sexual partner, but none of these diseases can long survive on such a patry pathway, all in practice require promiscuity). Pre-1960 there have been a lot of societies (damn near all of them in theory if not in practice?) that encourage only monogomous sexual connections.
We don't have time series data, but we have extensive literary evidence. That's usually the case with history. Demanding time series data smells of an isolated demand for rigour. After all, you seemed comfortable ruling out any society being entirely monogamous in your previous comment.
it wasn't "all societies", not even close. Read some anthropology.
It really depends on the kind of sex you're having much more than the number of partners- anal sex is *massively* more dangerous than vaginal. In Australia, for example, gay men are 100x more likely to have HIV than female sex workers (and most of the female sex workers most likely got it through drug use rather than sex. Most sex workers in Australia don't use IV drugs, but there's a somewhat significant number who do).
The point in war is to kill or injure enemy soldiers, so no, I don't see any intrinsic value in doing the opposite of that (treating the wounds of the enemy,) when you are at war. (Nor would I expect the enemy to treat our wounded, although that would be a nice bonus.) War is brutal in and of itself and so if you want to avoid brutality and cruelty, my suggestion is to avoid war.
Out shallow societal norms about war crimes are a joke because 1. As demonstrated countless times, we throw the norm out of the window when it is convenient to do so. And 2. It's perpetrating a gigantic fraud to imagine criminal war vs. just War when War itself is a crime
Is your argument that no child actually dies which could have been reasonably prevented without incurring greater moral wrong? Because that's patently false.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is useless? Because that could work, but would need a greater explanation.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is rhetorically not effective? Because that's false - I myself was persuaded by precisely that kind of argument and to this day find them more persuasive than other forms. We can talk about if other forms would be more effective, but that'd need more explanation.
Is your argument that worrying about malaria is inefficient compared to worrying about HIV? Because any even somewhat reasonable numbers say that's not true.
Is your argument that worrying about HIV is a waste of time because people with HIV are too ignorant and engage in risky behaviour? Because then the solution is education.
Is your argument that worrying about HIV is a waste of time because people with HIV are too stupid to avoid engaging in risky behaviour? Because then you'd need more evidence to support this claim.
Or do you just want to beat your own hobby horse about how African People Bad? I assume I don't need to bother saying why that position is oderous.
Hi Alan, I would like to see a single piece of writing from Scott on the importance of education against practices like dry sex or having sex with infants to "cure" HIV. I would also like to see where in this post, his analogies are anything like societal practices encouraging dry sex or having sex with infants to "cure" HIV. Please make it so that a five year old can understand, thank you!
I'm South African. South Africa has for decades had major public awareness campaigns about AIDS that (among other things) explain and warn against those specific practices. Everyone who went to a South African high school heard the spiel repeatedly, and wrote exams testing their understanding of it. It was on TV, the radio, newspapers, the internet.
Those awareness campaigns are funded by the international aid that Scott has repeatedly endorsed in writing. I would have thought it fairly obvious to everyone that said funding is allocated to local education programs, in addition to condom and ARV distribution, etc.
Here is an additional hypothetical. I won't call it an analogy, because it describes a strictly worse situation than reality.
A child is drowning in a river, because their parents pushed them in. Should you save the child? Or should you let the child die because clearly those parents are terrible people who don't deserve to have children?
According to a source I find online, South Africa funds just over 70% of it's anti-AIDS budget directly, 18-24% comes from PEPFAR depending on the year, and the rest from something called the Global Fund.
So your argument is that current educational programs don't exist (not true, as Synchrotron describes at least in the case of SA, and a cursory search finds similar programs in at least a dozen African countries) or that they're not effective? Because again, an even cursory glance at the literature suggests that while they're obviously far from perfect, rates of safer sex practices does improve with education, albeit very unevenly depending on country and specific program.
Actually, I'll make it easier for you. What, precisely, is your actual argument?
I think you are on the right track here. The common issue with extrapolating all these drowning child experiments is that the child presumably has no agency in the matter. The intuitions change very quickly if they do.
"You save the child drowning in the pond and point out the "Danger: No Swimming" sign. The child thanks you, then immediately jumps back into the pond and swims out towards the middle, and proceeds towards drowning. Do you save them again, and how often do you let them repeat that?"
"You see some adult men swimming in a pond, and one of them starts to drown. You save him, and then the next day you see him swimming in the pond again, apparently not having learned his lesson. Do you hang around in case he starts to drown? If he does start to drown, do you save him again? How often do you repeat that?"
All that before you get to the questions of "Can you actually save the person?" "Will going out to help only drown both of you?" "How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse.
Another question is: How much do you know the situation? Is the child actually drowning? Is he swimming? Filming a movie?
Another is: How much responsibility have the Megacity for all the drowning?
I think what happens is Alexander took a case that is rare and unpredictable and said it happens all the time. This of course inverts our intuitions.
In this case, in real life, it would be like:
"We have no responsibility to save YOUR children, but we don't like to hear them crying so we added a net at the border so they can drown at your end".
Indeed. In fairness it is Singer’s base example, and people just use it because it seems to be difficult for most to grapple with. Singer is not someone I feel good about based on his writing that I have read, but maybe he is a decent person.
"How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse."
"The “Law of Unintended Consequences” reared its ugly head despite the best of intentions. For example, when the US flew in free food for the starving people of Port Au Prince, it put the farmers out of business. They just couldn’t compete against free food. Many were forced to abandon their farms and move to tent cities so that they could get fed and obtain the services they needed."
I was reading the obituary of a neighbor’s father, a doctor, and I learned that he had always had a special passion for Haiti, from way back in the 80s, and “had made over 150 trips there”.
How admirable, I thought. And there was really nothing else to think of in connection with that, beyond its evidence of his compassion.
Among the many things I don’t understand is why people look so hard for (and frequently find) unintended consequences when talking about ostensibly altruistic acts, but rarely when talking about “selfish” ones. The example taken from the blurb of Scott’s father’s book is a single paragraph among others, most of which extol the virtue of voluntarism (although I haven’t read the book, so it may include a lot of similar examples of do-gooding gone wrong.) But even in the case of the farmers who lost their market, we don’t know for sure that that itself wasn’t a blessing in disguise – maybe some of them went on to find different, better paying and less arduous work. Maybe some of the people who were prevented from starving went on to do good works far in excess of saving a drowning child.
But as soon as it comes to “selfish” acts – starting a business with the aim of becoming rich, a business that fills a societal need or want – we don’t try to look for unintended consequences (we call them externalities); instead we point to the good they are doing. Even if we admit the negative externalities (the classic case is pollution, but another more modern one is social media platforms’ responsibility for increased political polarization), we still say “but look at all the good they’re doing,” or at least the potential good, if the benefits are still in the future.
One reason for saving a drowning child might be so that you don’t hate yourself for not doing it, which is only tangentially related to desiring others to see you as virtuous. Should that count as an argument against altruism? Why does the argument against the possibility of true altruism not also get applied to selfishness? Even the most selfish, sociopathic and least self-aware person will bring on themself *some* negative consequences of their actions – the loss of opportunities for even more selfishness; the loss of the possibility of truly mutually beneficial relationships; a victim who seeks revenge. Even if they die before realizing these negative consequences, their legacy and the reputation of their descendants will be tarnished.
Unintended consequences are not synonymous with externalities. The reason people focus on them with regards to altruistic motives is that the general default mode towards apparently altruistic acts is “do it” when in fact it might make things worse, whereas there is a default of assuming selfish acts are harmful to others, often in excess of what is really there.
Yes, I agree, unintended consequences are not synonymous with externalities -- externalities can be unintended, which is the rationale for environmental review of projects, but some of them are planned for, and some of them are intended to be mitigated whereas others are ignored or covered up. I don't agree that the default mode toward selfish acts is "don't do it," however. Selfishness is in many cases held up as a virtue (e.g. the selfish gene; the profit motive; the adversarial process in legal proceedings; the notion of survival of the fittest and competition for an ecological niche).
The point is the Drowning Child argument tries to hit us over the head with "do this self-evidently good thing or else what kind of monster are you?" without consideration of unintended effects. Donating to anti-malaria charities is a good thing.
So is feeding the hungry. And yet the intervention in Haiti ended up causing *more* hunger and undermining local food production. So was the self-evidently good thing an unalloyed good, or should we maybe look before we leap into the pond?
I think this is obviously the wrong analysis of pepfar but even if it were right this wouldn’t be a good argument against the against malaria foundation
" 'An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease.'
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?"
Even if we granted the premise that it's not our moral responsibility to save people who recklessly endangered themself or others, many of the people who are getting HIV were not reckless. Some of them are literal babies or women and children who were raped. Many others didn't have the education to know how HIV is spread and how to avoid being infected; if someone mistakenly believes that sex with a virgin renders them immune to HIV, can you blame them for getting HIV when they thought they couldn't?
But I would definitely contest that premise. If someone is drowning in front of you, you're obligated to save them. It doesn't matter if they got there by recklessly playing near the lake or through no fault of their own. If someone will die unless you intervene, you have to help regardless of how they got into that position.
> ...behave as if the coalition is still intact...
I think you may have snuck Kant in through the back door. Isn't this kind of what his ethics is? Behave according to those principles that you could reasonably wish were inflexible laws of nature (or, in this case, were agreed to by the angelic coalition).
No, Kant relies on the idea of immoral actions being illogical, because they contradict the rules that also provide the environment where the action even makes sense to do.
Lies only make sense if people trust you to tell the truth.
Theft only makes sense if you think you get to keep what you take,
>My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
No, it's trivially obviously false that we would agree to that (or anything else) in this scenario. If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists, so any agreement premised on the assumption that we want to minimize suffering for either ourselves or others is dead on arrival. All other conceivable agreements are also trivially DOA in this scenario, since we also don't have any information about whether we're going to want or care about any possible outcomes that might result. Consistently applied Rawlsianism is just roundabout moral nihilism.
In order for it to be possible that the intelligences behind the veil of ignorance might have any reason to agree to anything, you have to add as a kludge that they know masochism, suicidality, and other such preferences will be highly unusual among humans in the society they're born into, and that it's therefore highly unlikely they'll end up with such traits. But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless.
Masochists tend to be very picky about the kind of pain they want. I have no idea whether this is as true about what kind of pain sadists want to impose.
I think that's a misstating of what the veil makes you ignorant of.. The point isn't that you don't know anything about the society into which you will be incarnated; the point is that you don't know what role in that society you will have.
Firstly, as a masochist myself, you are heavily misrepresenting masochism. Secondly, as someone who's met a weirdly large number of people who have committed rape, I'm pretty sure the net utility *for rapists* is at least slightly negative - some of them get something out of it, but some of them are deeply traumatized by it and very seriously regret it (and that's ignoring the ones who actually get reported and charged and go to prison, because I haven't met any of those).
I've wondered whether there were people who committed rape once and found they didn't like it and never did it again, or maybe once more to be certain and then never again.
It makes no difference to the victims, but it might make a difference to rape prevention strategy.
Yeah, that sounds about right. I definitely meant that the majority of people who commit rape only do so once, not that the majority of rapes are committed by one-time offenders. Probably should have clarified, though, so thanks for that.
You can try to invent an alternate version of the VOI where you arbitrarily know what your values will be without knowing anything else, but I'm not sure how such a blatantly arbitrary thought experiment is supposed to be a compelling argument for anything.
The point isn't that you know what values will be, but that you know the distribution of values/preferences and circumstances, from which yours will be randomly chosen.
I already explained in my original post why this doesn't work. If you grant the souls this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
It does work. If you don’t know whether you will be born a sexual predator or a victim, you should assume you’ll be a victim and therefore advocate for a society that prevents sexual assault.
The whole point of the veil is to be arbitrary. You only know *this* which is what the constructor of the thought experiment has predetermined is the important thing.
> we're equally likely as not to end up being sadomasochists
I think a lot of ethical thought experiments are pointless too, but the point that you could be a masochist is complete nonsense. Sadomasochists are a small minority of people, full-time ones even more so. Rawls’ angels could assume their human avatars wouldn’t like pain. The point is to apply that frame to actual human ethical questions, and humans can assume that the drowning child doesn’t enjoy drowning and that children in Africa don’t enjoy starving or dying of malaria. Otherwise is just silly sophistry.
I already explained in my original post why this doesn't work. If you grant the "angels" this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
I don't understand. How does probabilistic information about the personality makeup of the human species mean you can't be incarnated at random? Are they supposed to be making decisions with no knowledge of the world whatsoever?
>Are they supposed to be making decisions with no knowledge of the world whatsoever?
Not exactly. Souls behind the VOI are allowed to know general rules that apply to all human interactions; there's no reason why they can't know that humans inhale oxygen and exhale carbon dioxide, or other such things. They just aren't allowed any information that might incentivize them to favour the interests of any one person or group of people over those of any other person or group of people. So they can't know that "sadomasochists are a small minority of people", because then it would be rational for them to treat the interests of non-sadomasochists as a group as more important than those of sadomasochists as a group.
So... yeah, it looks like your quote is accurate, Rawls intended for the VoI to preclude any information about group size and relative probability of who you'd incarnate as.
At a glance, Rawls does seem to be making a lot of stipulations or assumptions about the value system of the angels, though (maximin principle, conservative harm avoidance, some stipulation of 'monetary gain' as if he were doing economics), so... it looks like "maybe you all incarnate as hellraiser cenobites" would contradict his thought experiment. But maybe I'd have to read it again.
There's perhaps a more fundamental objection to "you can't know how common different groups are", which is that subgroups are in principle infinitely sub-divisable. Are the "ginger swedish lesbians born with twelve fingers" group supposed to be exactly as common as "people over five feet tall"?
I have never heard it claimed that Rawls prohibits probabilistic knowledge. Indexical ignorance is precisely the ignorance Rawls seems to be requiring.
Then you have not actually read Rawls, because not only does he state this prohibition explicitly, but he also explicitly acknowledges that removing this prohibition would make his argument completely nonsensical.
From "A Theory of Justice", pages 134-135 in the latest edition:
>Now there appear to be three chief features of situations that give plausibility to this unusual rule.20 First, since the rule takes no account of the likelihoods of the possible circumstances, there must be some reason for sharply discounting estimates of these probabilities. Offhand, the most natural rule of choice would seem to be to compute the expectation of monetary gain for each decision and then to adopt the course of action with the highest prospect. (This expectation is defined as follows: let us suppose that gij represent the numbers in the gain-and-loss table, where i is the row index and j is the column index; and let pj, j 1, 2, 3, be the likelihoods of the circumstances, with pj 1. Then the expectation for the ith decision is equal to pjgij.) Thus it must be, for example, that the situation is one in which a knowledge of likelihoods is impossible, or at best extremely insecure.
>[...]
>Let us review briefly the nature of the original position with these three special features in mind. To begin with, the veil of ignorance excludes all knowledge of likelihoods. The parties have no basis for determining the probable nature of their society, or their place in it. Thus they have no basis for probability calculations. [...] Not only are they unable to conjecture the likelihoods of the various possible circumstances, they cannot say much about what the possible circumstances are, much less
enumerate them and foresee the outcome of each alternative available. Those deciding are much more in the dark than illustrations by numerical tables suggest.
The Rawls veil of ignorance works even if the "angelic intelligences" know every single fact about what will result from the society they choose except which human they will end up being. In that case it's basically rule total utilitarianism. It also works, somewhat, if there's only one intelligence doing the choosing, although there it ends up looking like rule average utilitarianism.
I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life, which can leak information about which human you are, causing a failure to come to agreement, but part of the experiment is that behind the veil you have a completely standardized mind.
>I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life,
...What? The fact that you're *not* doing this is my whole point!
Then I fail to understand what you mean by "But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless." The only thing they're supposed to not know is which particular human they end up as. Bayesian estimates of what a generic human is likely to experience are on the table! (The original Rawls book does handle this badly, but it's because Rawls has a particular (and common) blind spot about probability rather than it being an inherent defect of the thought experiment.)
What I mean is that the whole goal of the VOI is to justify some kind of egalitarian intuition. But this only sort-of appears to work in Rawls' original version because the souls lack *any* ability to guess, even probabilistically, what sort of people they're going to be (a point which Rawls states explicitly). If they're allowed to make informed guesses as to what sorts of people they'll most likely be, then there's no reason for them not to make rules where an East Asian's interests count for 36x more than a Pacific Islander's, or where a Christian's interests count for 31000x more than a Zoroastrian's, or where an autistic person's interests count for only 1% those of an allistic, or to any number of the other sorts of discriminatory rules which the whole point of proposing the VOI is to avoid.
If you're trying to maximise your expected utility, you don't want a scenario "where an autistic person's interests count for only 1% those of an allistic".
This is because in a world with 99 allistics/1 autistic, and a dispute between an autistic in which the autistic loses 50x as much as the allistic gains, you have:
a 1% chance of being the autistic and losing 50
a 1% chance of being *the specific* allistic in the dispute and gaining 1
a 98% chance of being someone else
...which is an EV of -49/100.
You'd be in support of a measure that hurt the autistic by 50 in order to make the lives of *all* the allistics better by 1, but that's not valuing an autistic's interests at 1% of an allistic's, it's just not valuing them at twice as important..
Why would privileges only accrue to the specific allistic in the dispute in this scenario? That's never been how discrimination has worked. If you were born white in Apartheid South Africa, you wouldn't need to get into a specific, identifiable dispute with a black person to be favoured over them for the highest-paying jobs, for your vote to count more than theirs in an election, etc. you'd just get all that automatically.
"So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender."
Unless, of course, the rapist got much more pleasure than the victim felt suffering, so the total amount of happiness in the world increased:
I broadly agree that we should "do unto others as we would have them do unto us" but yeah, depends on the tastes of both ourselves and the other person.
I would draw a distinction between "observing a problem" and "touching a problem" in jai's original post. Trace is commenting on the "touching" side of things, specifically the pattern where a charity solicits money to solve a problem, spend that money to make poor progress on the problem, and defends this as "everyone's mad at us trying to help even though not trying would be worse". It is possible to fruitfully spend money in distant weird to you circumstances you don't properly understand, but if you think you're helping somewhere you're familiar with, you're more likely to be right.
I think the distance objection does not refer to literal distance, but our lack of knowledge and increase in risk of harm the further we are from the people we're trying to help.
For example, consider the classic insecticide-treated mosquito nets to prevent malaria. Straightforward lifesaving intervention that GiveWell loves, right? It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide. We didn't save as many drowning children as we hoped, and may have even pushed more of them underwater, because we were epistemically too far away to appreciate the entire socioeconomic context of the problem.
The further you are in physical and social space and time from the people you're trying to help, the greater the risk that your intervention might not only fail to help, but might actually harm. This is the main reason for discount rates. It's not that people in the far future are worth less morally, but that our interventions become more uncertain and risky. We're discounting our actions, not the goals of our actions. Yes, this is learned epistemic helplessness, but it is justified epistemic helplessness.
> It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide.
The best study we have on bed net toxicity—as opposed to one 2015 NYT article that made a guess based on one observation in one community—is from a 2021 paper that’s linked in the Vox article. It does a thorough job summarizing all known evidence regarding the issue, and concludes with a lot of uncertainty. However:
> I asked the study’s lead author, David Larsen, chair of the department of public health at Syracuse’s Falk College of Sport & Human Dynamics and an expert on malaria and mosquito-borne illnesses, for his reaction to Andreessen citing his work. He found the idea that one should stop using bednets because of the issues the paper raises ridiculous:
> “Andreessen is missing a lot of the nuance. In another study we discussed with traditional leaders the damage they thought ITNs [insecticide-treated nets] were doing to the fisheries. Although the traditional leaders attributed fishery decline to ITN fishing, they were adamant that the ITNs must continue. Malaria is a scourge, and controlling malaria should be the priority. In 2015 ITNs were estimated to have saved more than 10 million lives — likely 20-25 million at this point.
>“… ITNs are perhaps the most impactful medical intervention of this century. Is there another intervention that has saved so many lives? Maybe the COVID-19 vaccine. ITNs are hugely effective at reducing malaria transmission, and malaria is one of the most impactful pathogens on humanity. My thought is that local communities should decide for themselves through their processes. They should know the potential risk that ITN fishing poses, but they also experience the real risk of malaria transmission.”
There’s no good evidence that bed net toxicity kills a lot of people, and there’s extremely good evidence that they’re one of the best interventions out there for reducing child mortality. See also the article’s comments on nets getting used for fishing; the studies on net effectiveness account for this. Even if the nets do cause some level of harm, the downsides are enormously outweighed by the upsides, which are massive:
> A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets.
This doesn’t mean that we should stop studying possible downsides of bed nets or avoid finding ways to improve them, but it does mean that 1) they do prevent malaria, extremely well, and 2) they save pretty much as many children as we thought.
To add, the Against Malaria Foundation specifically knows about this failure mode and sends someone to randomly check up on households to see if they're using the nets correctly. The rate of observed compliance failure isn't close to zero, but it isn't close to a high number either. See: https://www.givewell.org/charities/amf#Monitoring_and_evaluation_2
Maybe I'm too cynical but I haven't seen anyone change their mind when you add context that defies their expectation, I feel like they either sputter about how that's not their real objection (which if you think about it is pretty damn rude to say "this is why I believe in X" and then immediately go "I don't believe in X why would you think I believe in X") or they just stop engaging.
But I think we agree that the general principle still stands that moral interventions further in time and space from ourselves generally have more risk. We can reduce the risk with careful study, but helping people far away is rarely as straightforward as "saving a child from drowning" where the benefit is clear and immediate. I find the "drowning child" thought experiment to be unhelpful as a metaphor for that reason.
We're not saving drowning children. We're writing policies to gather resources to hire technicians to build machines to pluck children from rivers at some point in the future. In expectation we aim to save children from drowning, but unlike the thought experiment there are many layers and linkages where things can go wrong, and that should be acknowledged and respected.
Sure—but then shouldn’t we respond by being very careful about international health interventions and trying as hard as we can to make sure that they’re evidence-based, as opposed to throwing up our hands and giving up on ever helping people in other countries? The former is basically the entire goal of the organizations that Scott is asking people to listen to (GiveWell, etc). Hell, GiveWell’s AMF review is something like 30 pages long with well over 100 citations.
There has to be some point where it’s acceptable to say “Alright, we’ve done a pretty good job trying to assess whether this intervention works and it still looks good, let’s do it.” Going back again to the organizations that Scott wants people to donate to, I think that bar has been met.
I believe that where the bar lies should be for each person to decide for themself. Also, it's not enough for an intervention to have positive effect, but it must have a more positive effect than doing what we would otherwise do anyway. That's a much harder bar to clear.
I personally do think many international interventions have positive effects in expectation. But I am skeptical that they have more positive effect than the "null hypothesis" of simply acting as the market incentivises. I'm honestly really not sure if sending bed nets to Uganda helps save more lives in the long run than just buying Ugandan exports when they make sense to buy and thereby encouraging Ugandan economic development, or just keeping my money in the bank and thereby lowering international interest rates and helping Uganda and all other countries.
The market is a superintelligent artificial intelligence that is meant to optimize exactly this. To be fair, part of the process of optimization is precisely people sometimes deciding that donating is best. Market efficiency is achieved by individuals taking advantage of inefficiencies. But I don't think I have any comparative advantage.
The market optimizes something very different from "human flourishing". Economic resources and productivity are conducive enough to human flourishing that we've been able to gain a lot by taking advantage of the market being smarter than individuals, but now it's taking us down the path of racing toward AI, so in the end we're very likely to lose more than we ever gained by listening to the market. And in the meantime, Moloch is very much an aspect of "who" the market is.
Moloch is an aspect of everything. It would be cherry-picking to say that it uniquely destroys the efficient market hypothesis vs. all other solutions. Efficiently functioning markets very much is demonstrated in the real world as leading to vastly better outcomes than any other known system of resource allocation.
This argument proves too much, though. If the maximally efficient way to save lives is sitting back and letting markets do their thing, wouldn’t that also mean that we should get rid of food stamps, welfare, and every other social program in the US? After all, these programs weren’t created by market forces—they were created by voters who wanted to help the unfortunate (or help themselves) and who probably weren’t thinking all that hard about the economic consequences of these policies. The true market-based approach would to destroy the social safety net, lower taxes by a proportional amount, ban all private charities that give things away at below-market prices, and let the chips fall where they may.
Markets are good at doing what they do, but there’s no law of economics that says markets must maximize human welfare. They maximize economic efficiency, which is somewhat correlated with human welfare but a very imperfect proxy for it. I don’t think that I can beat the market at what it does best (which is why I’m mostly invested in the S&P), but when it comes to something the market isn’t designed for and doesn’t really care about, I trust it far less.
Moreover: Is that your true objection? If someone came out with a miracle study proving that donations to the AMF save more lives than investments in the S&P (I know this is sort of impossible to quantify, it let’s say they did), would you then agree that donating to the AMF is a good idea if you want to improve human welfare?
The market does an impressive job at optimizing for the welfare of people who have money. LVT + UBI would neatly sort out most of the associated misalignment problems.
Stories about clothing donation - unsorted heaps of your old sportsball and fun-run and corporate team building tee shirts having more value than the most beautiful locally-produced textiles - are depressing in this regard, and bring to mind the African economist who - 20 years ago or so - received a tiny bit of attention for asking Western do-gooders to basically leave Africa alone.
Do you also apply this heuristic to acts that we might call selfish? Starting a clothing business to make lots of money by jumping on microtrends in fashion carries the risk of encouraging young people to overextend their credit. Discarded, no-longer-in-fashion garments may end up clogging landfills. And yet it’s the ostensibly altruistic projects that we attack for ending up “doing more harm than good." The others we praise for their entrepreneurial spirit.
> insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets
I'm curious as to how the math here works out. If they're reducing child mortality by 17%, how does that not imply 170 lives saved per 1000 children? Everyone goes through an infant stage during their lives, right?
17 percent of the total risk of child mortality. If the total risk of child mortality without bednets was 100% then Africa wouldn't have made it long enough for this even to become a charity.
That's exactly the answer. When you're helping the child in the lake across the street there are a lot of implied social contracts at play, between you and your neighbors, you and your city, you and your country. That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
There's no such contract with the far away child. You don't know if the child's drowning because their society keeps throwing children in lakes. You don't know that the money you send won't be used to throw even more children in lakes. You don't even know if that child will be saved just to grow up and come make war with your own society.
There’s something to this, but I’m not sure if it’s enough. Suppose you’re American and taking a vacation (or on a business trip, or working there temporarily) in rural China and you see a drowning child.
Would you decide not to save them because it’s not your country? What if Omega tells you that it’s a genuine accident and the locals are not routinely leaving children to drown?
If you read the article many of the Chinese bystanders were not passive, obviously the drowning child scenario assumes a shallow lake, very few people would dive into a fast moving river without some sense of training (the diplomat competed in triathlons).
Unironically yes. When you travel to a foreign country like this, you are an outsider and you aren’t really supposed to interact with the locals very much. I wouldn’t talk to them, I wouldn’t make friends with them, so I sure as hell am not going to get involved in their private affairs like this. It’s none of my business and as an outsider I wouldn’t be welcome to participate. I’m pretty sure that if I saved the child, I would be called a creep for laying hands on him. Without knowledge of the actual language, I wouldn’t have the tools to explain myself otherwise.
Honestly, I think your thought experiment kind of illuminates why we save local children and not distant ones. The local children are theoretically members of our community, and though community bonds are weaker than ever, they aren’t non-existent, they still matter. Ergo, we save the child to reinforce this community norm, and we hope someone else saves our own children from drowning some day.
That doesn’t transfer if we do it in a foreign country.
"I'm pretty sure that if I saved the child, I would be called a creep for laying hands on him." I'm not a mind reader, but this sure reads like a bad faith argument to me.
Have you read the fable of the snow child? It’s a story about a fox who saves a girl who was lost in the woods in the winter. Upon bringing the girl home the parents shoot the fox because he’s a fox and they were afraid that he was going to steal their hens. The girl of course admonished the parents for this, but it didn’t change the fact that the fox was dead.
Do not underestimate the power of xenophobia.
Communicating with foreigners can be a very high stakes situation. When people are naturally suspicious of you, it’s critical that you stick to pre-approved socially accepted scripts and to not deviate from them, otherwise the outcomes can be very unpredictable. Drowning children is a rare enough event that we don’t have universally agreed upon procedures for how to handle them.
There have, I don't think commonly but still non-zero, honor killings because a (male) foreigner interacted with a local (female) child. The interaction that the tourist probably thought was merely being polite was enough for her to be marked unclean.
If you happen to stumble into a living thought experiment where a child is drowning in a shallow pond, it's worth the risk of a cultural misunderstanding and the child's later death instead of the certainty of their current drowning. But such cultures do exist.
As someone who would like to be saved from any hypothetical future drownings, even if they were to happen in foreign countries, or in my own country in instances where the only potential saviours are foreigners, I very much dispute your last sentence as logically following from the previous.
Indeed, I would like the community of people who would feel obligated to save my children from drowning to be as large as possible, all else equal.
To deal with the objections below, switch it up so it's in your home country but a visiting tourist: a cruise ship docks at your local beach; you know at this time of year the majority of swimmers are tourists not locals. You see a kid drowning... Do you ignore them because they're from a different country?
>That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
Would you accept that our moral circle should expand as the economy becomes more globalised then? It's standard in the modern economy for kids on the other side of the world to grow up to make your clothes, grow your coffee etc.
Yes, but the economic bound is not enough. You need a cultural bound. None of these variables are binary in practice, so the amount of economic ties, cultural ties and amount of help we send tend to be proportional to each other.
"You don't know if the child's drowning because their society keeps throwing children in lakes." That doesn't seem like a good reason to not save the child.
"You don't know that the money you send won't be used to throw even more children in lakes." This would be an argument against dropping money randomly, but we have fairly robust ways of evaluating charitable giving.
"You don't even know if that child will be saved just to grow up and come make war with your own society." Saving them with a life preserver that says 'Your drowning prevented by [my society]" seems like an excellent way to prevent that, with the added benefit that they'll tell all their friends to not make war on your society, too.
This is an over-simplification. Children are routinely indoctrinated by their societies throughout early adulthood to become warriors or to create more warriors. There is absolutely a real risk that a random person saved will be your enemy in the future. Saving them from a vast distance can indeed by seen by that society as a great way of helping their future takeover while impoverishing their enemy. Moloch is everywhere.
Allowing for the sake of argument that that's a significant problem, seems to me the obvious patch would be forking some percentage of nets off the main production line before the insecticide-treatment step, adding a cheap but visible feature to them (maybe weights? A cord for retrieval?) which would make the insecticide-free nets more useful for fishing and/or less useful for beds, then distributing those through the same channel until fish-net demand is saturated.
I think uncertainty in outcome and timing explains a lot, at least for my own behavior.
If I am certain of a benefit to others while uncertain about how grumpy I will be after the good deed, the finger is on the balance to help.
The inverse is also true. Giving for the certainty of a relief is very different from giving with the non-zero chance that funds get diverted to wars, corruption or crime organisations.
Thank you. This is very much my intuition as well, and I'm glad somebody else laid it out clearly. The biggest flaw in all these thought experiments, IMO, is that you're assumed to have 100% accurate knowledge of the situation. Accurately knowing the details of the river and the megacity and the drowning children is FAR more important to moral culpability than whether you happen to have a cabin nearby, or whether you happen to live there.
Sounds like we need some kind of social arrangement where we are gently compelled to work together to solve social problems cooperatively with roughly equal burdens and benefits, determined by need and ability to contribute. What would we call this...rule of Social cooperation? Perhaps social...ism?
Nah, sounds scary. Lets just keep letting the rules be defined by the Sociopathic Jerks Convention, with voting shares determined by capitol contributions.
Right, the trick is that the altruistic people need to make rules that exclude the sociopaths jerks from accumulating power and which make cooperation the better choice from their own selfish perspective.
Perhaps a good start, just a bare minimum, would be to strictly limit the amount of capitol that any one person can control. (could be through wealth taxes, could be through expropriation, could be through enforcement of antitrust....whatever, trying to keep this on a higher level.) The extreme inequality here leads to further multiplication of the power of the wealthy, and because the Sociopathic Jerk Convention (E.g. the behavior of corporations, which are amoral) is running the show, their rules allow them to further multiply their power.
The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin.
Better yet, why not leverage the ambitions of entrepreneurs to invest their time, money and creativity to solve problems for consumers? Big investments require big bets and huge risks which need to be offset with immense potential rewards.
I think a lower bound is more important - and more feasible to enforce - than an upper bound.
When you go to the most powerful capitalist in the world and tell him "your net worth is above this arbitrary line, so a new law almost everyone else agreed on says you have to give some of it up," is he going to actually cooperate with that policy in good faith? Or is he going to hire the best lawyers and accountants in the world to find some loophole, possibly involving his idiot nephew becoming (on paper) the new second or third wealthiest capitalist in the world?
One trouble with Rawlsian veils (better yet Harsanyian veils) is that networks of billions of interacting people are complex adaptive systems with emergent characteristics and outcomes. If we want to establish morality by what actions would be lead to the best outcomes, then we need to actually play through the system and see how it develops.
May I suggest that a world where everyone gave everything beyond basic sustenance to anyone worse off than them would scale into a world where nobody invested, saved for the future and everyone felt like a slave to humanity because they would be. It would be a world of complete and total destitution devoid of any ability to help people across the world.
I think it is more realistic to take real humans with their real natures and find rules and ethics and institutions which build upon this human nature in a productive way. I would offer that this is more of a world where altruism, utilitarianism and egoism overlap. Science does this by rewarding scientists with reputation for creating knowledge beneficial to humanity. Free markets do this by rewarding producers for solving problems for consumers. Democracy does this (in theory at least) by aligning the welfare of the politician with the citizenry voting for them. Charities do this by recognizing the benefactors with praise and bronze inscriptions.
There are good reasons why pretty much nobody gives everything to charity. Effective Altruists need to take it up a level.
Socialism has a history far broader than whatever particular example you are thinking of.
Economic system and body count don't seem correlated meaningfully...mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
Authoritarianism has the body count, regardless of if it's socialist, capitalist, theocratic, mercantilism or whatever you prefer.
>Socialism has a history far broader than whatever particular example you are thinking of.
And /none/ of it has been notably successful, in any significant respect (& particularly compared to its competitor(s)—just the opposite, in fact—so... well, I remain skeptical.
Of course, it depends on what you call "socialism". Is "capitalism but with government services" deserving of the name? If so, it DOES work! (...but I, of course, would credit that to the other component at work.)
>mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
I think there are many things wrong with this attempt at "death-toll parity", but no one ever changes their minds on this topic & it's always a huge exhausting slog... so I'm just registering my objection; those who agree can nod wisely, those who don't can frown severely, and we all end up exactly where we would've anyway except without spending hours flinging studies and arguments and so forth at each other!
Well, I don't want to turn this thread into a debate on socialism, and you're very right that how we define our terms is contested and critical.
I would suggest that there are many examples, such as with Allende, where it seems like it was going to work really well and the CIA simply could not have that.
I'd also note that life for the average Cuban is far, far better under Communism that it was under Batista, for example, and possibly the results in the other countries you are thinking about are better than you think when looked at from the perspective from the average poor person rather than the very-small middle and upper classes, who typically control the narrative.
Regardless, I was just saying that authoritarianism is orthogonal to economic model. And it is authoritarianism, regardless of the economic model, which is "scary." The Nazis were not less horrific simply because they had a right-wing social and economic program.
Would Venezuela be an example of a country where non-authoritarian socialism has gone badly? (May be you can wriggle out of this by saying it's a bit authoritarian?)
I suppose Norway would be an example of a country where socialism has gone pretty well (though with a fairly large dose of capitalism, and the advantage of massive oil reserves - not that those saved Venezuela).
Norway is not Socialism, per ChatGPT at least. It is a Social Democracy, not a Socialist government, though one may quibble about the distinction. Norway has:
Private Property and Business:
Individuals can own businesses and land
Market economy drives most goods and services.
Stock Market and Investment:
Norway has a well-functioning stock exchange
Encourages entrepreneurship and foreign investment
Profit Incentives:
While taxes are high, businesses still operate for profit
Wealth creation is encouraged, though it's heavily taxed and redistributed
I would personally argue that it would function even better with higher profit motive and less government intervention, but it is a misnomer to claim it is Socialism.
Venezuela went very authoritarian, but also, I wouldn't make the claim that every socialist experiment, even less authoritarian ones, are good. Norway is a possible example of a good one as you mention. Cuba is an obvious example. One could argue China is doing really well, and you can say it's capitalist but also they haul off billionaires to labor camps if they get too out of line so I would push back on that.
But Venezuela failed. This stuff is complicated. Anyone who says HURR DURR SOCIALISIM BAD is ignorant, many of them proudly so.
At my workplace (before I quit, angrily—and, as it turns out, unwisely), we had several Cubans who had come over here to 'Merica on rafts and the like. They were bigger fans of America than most Americans—the yard manager bought a Corvette and had it emblazoned with a giant American flag, always wore "America: Land of the FREE!" or "...of Opportunity!" etc. T-shirts, and so on (& I once witnessed his eyes get wet at the anthem before a game(!)—and... uh... well, they would talk about Cuban food, women, weather, vistas, but to a man they said they'd die trying to sneak back in the U.S. rather than accept being forced to go back & remain.
Anecdote, of course. But I get the impression that this is the modal Cuban, over here; granted, they're self-selected—but one doesn't see very many Proud Cuban Forever, "I'd die before leaving my adopted Cuba!", etc., expats, going the other direction.
Speaking personally, I can feel that the appeal of both socialism and effective altruism are linked to the same set of intuitions about solving social problems.
To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them.
Socialists tend to presume they know the answer even in cases where their preferred answer does not seem like it actually achieves the goals they're supposed to be working towards.
Or, maybe a different way of saying it: I think ~120 years ago, socialism would have felt a lot like EA today: the ideology of smart, scientific, conscientious but not sentimentalist, universalists. But the actual history of socialism means that a lot of the intellectual energy of socialism has gone into retroactively justifying the USSR and Mao and whatever, so that original core has become very diluted.
TBC, I don't mean this as a complete dismissal of socialism, I think there are lots of people who consider themselves socialists who I think basically have the right moral intuitions and attitudes, and I absolutely feel the pull of socialist ideas... But I often find myself frustrated how quickly so many socialists just refuse to engage with the fact that capitalism has been absolutely necessary to generate the resources necessary for a universalist moral program, or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned.
I'd say "hollowed out" rather than "diluted." Anybody who got sufficiently sick of trying to justify the USSR, and still cared about the original virtuous goal, started calling their personal agenda something else and focusing it in different directions.
"To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them."
Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms.
I'm opposed to capitalism partially for evidence-based reasons and partly because of basic values (I think it's morally wrong to derive most of your income from non-labor sources) and you couldn't convince me out of having my values even if you changed my opinion about some facts.
"or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned."
what facts, or "facts", are you thinking of, and why would you expect they would change my mind?
I'm aware socialist countries tend to be authoritarian (not necessarily "totalitarian", whatever you think that means), but I'm not really bothered by that in principle, since I don't view political freedom as self evidently good.
"Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms."
That's totally fair, but in the context of the original comment, which implied that "socialism" was just a method to implement the strategy of gently compelling people to work together to solve social problems, the fact that socialism has other moral axioms that may be unrelated to the project of solving those problems--or at least, that the problems socialism see itself as solving might be different than the problems suggested by Scott's post.
"what facts, or "facts", are you thinking of, and why would you expect they would change my mind?"
The usual ones about gulags and the Cultural Revolution and so forth; I'm sure you already know them. And I didn't say that they should make you change your mind, I said that socialists abandon their conscientiousness in the face of those facts: they tend to defend actions and outcomes that are canonically the sort of thing our hypothetical strategy of "gently compelling people to cooperatively solve problems" is meant to be *solving*.
Again, this is fine, you're allowed to think that the occasional gulag is justified to make sure that no one derives income from non-labour sources. I'm not saying you shouldn't be a socialist, I'm saying that being a socialist is *different* from the project that Scott loosely alludes to and that the top-level commenter suggests is basically achieved by socialism.
I'm explaining to the top-level commenter why some people who are sympathetic to the goal that Scott outlines, and who have some sympathy for the intuition that this has something in common with socialism, might still not consider themselves to be socialist, or at least, might think that the two projects aren't exactly identical.
Okay, you've changed my mind. I'm now convinced that promoting a social norm of saving strangers is actively evil because of second-order effects. Thanks!
Reading "More Drowning Children", the thought that came up for me was, "Damn, he has greatest ability to write reticulated hypotheticals which primarily serve to justify his priors of any one I've ever read!"
My second thought: For me, the issue is more, "At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?" Not what you're getting ads, I know, but what you're doing seems to me to point in that direction.
I might as well take the role of the angel on your shoulder, whispering into your ear to tempt you, saying, why not give all you have to help those in extreme need just once, to see how it feels? What if your material comfort was always at the whims of strange coincidence, and goodness was the true measure of man? What if you found out you liked being a penniless saint more than a Respectable Person? You might enjoy it more than you think. Just think about it. :)
Penniless saints have done far less good in the world as a whole than wealthy countries and wealthy billionaires who then had enough time and capacity to look beyond their near term needs.
Sounds like something you'd hear from media sponsored by billionaires, or in history books written by billionaires, or in a society which overemphasizes the achievements of billionaires while ignoring the harm they are doing, etc.
I actually completely agree with this post. You shouldn't take your own feeling of "feeling good" as the entire idea behind morality. Yes, billionaires giving to charity will do more good than a penniless saint (being influential can make up for this gap -- Gandhi may have done more good than the amount than the amount of money in his pocket -- but the random penniless saint won't outweigh $100,000,000 to charity).
That being said, billionaires can save 100,000 lives, but you personally could save 1 life. If you don't save that one life you could, seems like you're saying you don't value saving lives at all.
You could say "one of my highest utility action is to become a billionaire first, AND THEN donate all my money to the causes which are the most effective" and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead, you say "well it's difficult to do the maximally efficient thing so I'm not even going to save ONE LIFE", then you're giving an excuse for not saving a life even if you wanted to.
You could say "one of my highest utility actions is to CONVINCE all the billionaires to donate their money to charity". and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead you say "well, most people who say they're moral aren't doing that, so clearly the idea of morality is bunk and I'm not a bad person for not following the natural conclusion of my morality" then that's a problem.
Someone who weaves complicated webs to not do anything different than what they wanted to IS, IN FACT a worse person than if that same person donated enough to charity to save one life.
No matter what, all morality says you need to either *try*, OR say you don't value saving any lives (an internally consistent moral position that wouldn't care if your mom was tortured with a razor blade) OR do what scott says in the post and assume that looking cool/feeling good about yourself IS morality, and therefore there's no moral difference between saving 0 lives or 1 life and 10,000 lives if they provide the same societal benefit and warm feeling of fuzzyness about being a good person in your gut.
I'm not sure what complexity there is. The invisible hand makes free societies wealthy, and wealthy societies give more to charity. No external effort, no waiting, no convincing, marketing, sales, or anything else needed. Lowest effort, highest utility.
There is more in heaven and earth than billionaires. There are also a lot more millionaires, and even more hundred-thousand-aires than there are billionaires. Grow the whole pie. This isn't zero-sum.
"At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?"
In the thought example, some of the saved children should take over the job, and the others maybe give thanks for saving their lives.
In real life, no one is ever going to reward you, because the kind of people with the capacity and desire to reward you are probably too busy saving kids themselves. Until the day comes when there's finally no more kids to save anywhere, then MAYBE society will throw you a bone, but we'll probably all be dead before that happens.
This is a the point of the first issue of Kurt Busiek's Astro City comic. Samaritan, a Superman-like hero, can never rest, literally (I think), because with his super-hearing he can always, 24/7, hear a drowning child in Africa, and he can get there in 2 seconds, so he feels compelled to do so.
Seems like the superior solution would be finding an EA / moral entrepreneur who will happily pay market value for the cabin, and then set up a net or chute or some sort of ongoing technological solution that diverts drowning children into an area with towels, snacks, and a phone where they can call their parents for pickup. Parents are charged an entrance fee to enter and retreive their saved children.
I unironically think the moral equivalent of this for Scott's favorite African use cases is something like "sweatshops."
"Parents are charged an entrance fee to enter and retrieve their saved children."
What if the parents don't turn up?
"Look, do you really think that by now *nobody* has realised all the missing children are due to them falling into lakes and streams? One child per hour every hour every day every month all year? 8,760 child fatalities due to drowning per year for this one city alone? Come on, haven't *you* figured out by now that this is happening on purpose?
Don't want to be bothered with your kids anymore? Don't worry, eventually they'll wander off and fall into one of our many unsecured lakes, streams, ponds, and waterways, and that's that problem off your hands! Your kid is too stupid to figure out that they shouldn't go near this body of water? Then they're too stupid to live, but Nature - in the guise of drowning - will take care of that.
You keep saving all these kids, our population is going to explode! And the genetically unfit will survive and reproduce! It will ruin our society!"
Bodies of water have inherent danger. Yet, it is worth the tradeoff of not posting lifeguards at every single river, every pond and stream, just to stop the potential of some children drowning. Life is life, and tragic accidents happen. Safetyism is worse.
Economic development leads to lower fertility. I definitely think population and fertility rates in Africa are huge social problems, but the best way to address them is to make Africans more prosperous so they adopt the norms about sex and family size that other countries have adopted as they get richer.
Yeah, probably. But what about voting for the Different Dam Party? Or voting for a party whose headline 5 policies you greatly support but also have a nasty line in their manifesto about building a different dam?
I think at some point Scott has to accept that people reading this blog are exactly the types of people to optimize for their own coolness and not at all for truth seeking or morality, when you see them go into contortions to avoid intuition pumps. The problem is upstream of logical argument, and in whatever thought process preventing them from thinking they could be at all immoral.
Depends. Some people are here for the careful explorations of morality. Some people are here because they heard it was where all the smart kids hang out, and they are desparate to prove they belong, which often means showing off your ability to do cognitive somersaults over things like empathy or basic moral intuition. Its essentially trangressive intellectualism as psychic fashion.
Although I am being bad for not mentioning that I'm really talking about the commenters. If you were persuaded, the most likely time you mention it (if you do *at all* which you probably don't because mentioning donations are gauche) is a random start or end of year open thread, probably with no direct direct link back to the persuasive post. If you weren't persuaded, you likely fall into the above failure mode. (Edit: and therefore immediately respond)
Yup, people who go to meetups are several tiers above the average commentator, who cannot seem to grasp the purpose of hypotheticals and post things like "well this just makes me WANT to drown people (unstated subtext) because I don't like your arguments". Even if those types of people went to meetups, they'd know better than to say things like that!
And "seeming cool" doesn't mean "fashionable" or "obviously pandering to populist sentiments" (both of which I agree would be a bad way to describe even the current commentators) in this context, but something more like "self conception preserving" or "alliance affirming". Someone replying a post about morality about how obviously they love their family, and obviously giving is local because then it'd be reciprocal are not thinking truth seeking thoughts but "yay friends" or "yay status quo".
If you think you have a simpler explanation of why over 50% of the replies are point missing or explicitly talk about how they don't want to engage with the hypotheticals, with only reference to the replier's surface level feelings rather than marshaling object level arguments on why it'd be inappropriate to use hypotheticals then I'm all ears. But saying "people just make mistakes" is not an answer when the mistakes are all correlated in this fashion.
>when you see them go into contortions to avoid intuition pumps
Funny enough it was Scott himself in his What We Owe The Future review that broached the idea you probably should just hit da bricks and stop playing the philosophy game! He wanted to avoid the intuition pumps because they're bad. When you *know* someone is rigging the game in ways that aren't beneficial to you, you are not obligated to go along with the rigging.
Ever-more-contrived thought experiments are not about truth-seeking, either.
>whatever thought process preventing them from thinking they could be at all immoral.
I’m confused by the use of ethical thought experiments designed to hone our moral intuitions, but which rely on increasingly fantastical scenarios and ethical epicycles upon epicycles. Mid-way through I was wondering if you were going to say “gotcha! this was all a way of showing that the drowning-child mode of talking about ethics is getting a bit out of hand.” Aren’t there more realistic examples we could be using? Or is the unreality part of the point?
Like with scientific experiments, you try to get down to testing just one variable in thought experiments. The realism isn't the point, just like when a scientist who is studying the effects of some chemical on mice ensures that they each get perfectly identical and unchanging diets over the course of the experiment. The scientist isn't worried about whether it is realistic that their diets would be so static because that's not what's being tested right now.
You can build back to realistic scenarios after you've gotten answers to some of the core questions. But reality is usually messy and involves lots of variables at once, so unless you have done the work to answer some of those more basic questions, you're going to get stuck in the muck, unsure what factors are really at play. Same as if the scientist just splashed unmeasured amounts of the chemical onto random field mice in a local park.
The problem is, the drowning child thought experiment, in its *original* form, is already the most free of confounders, as it is much simpler than the scenarios Scott proposed here. So the equivalent of your mouse science example would be: I give my mice a certain drug, and the mice are held und the most supremely controlled circumstances such as their diet. But the drug did not have any effect. So now instead I let my mice roam free in the garden and feed them leftovers from the employee canteen, and then I give them the drug again and see if it works now.
The original Drowning Child thought experiment is "you'd save a child if you saw it drowning, wouldn't you?" and the majority of people will go "of course I would".
*Then* it sandbags you with "okay, so now you have agreed to save *all* the drowning children forever" and people not unreasonably go "hold on, that's not what I agreed to!"
And then the proposers go "oh how greedy and selfish and immoral those people are, not like wonderful me who optimises for truth seeking and morality".
No, it asks you _why_ you feel so sure you have to save the one drowning child, but you never even think about the others. The point is to make you realize that _is_ what you (implicitly) agree with; that it's _your_ judgement that thinks you're greedy and selfish for not saving children.
Some people actually respond in the desired way to the thought experiment; they can't think of any compelling answer to the question "what's the difference?"
Other people propose answers like: "the difference is spatial proximity", and so Scott counter-proposes other thought experiments to try and isolate that variable and discovers that it actually doesn't seem very explanatory.
The point of these iterated versions is to isolate different variables that have been proposed to see if they actually work to answer the question; and if we can discover an answer, figure out what it suggests about our actual moral obligations vis a vis funding AIDS reduction in Africa or whatever.
But Scott *isn't* isolating any variables, nor is he trying to. He's just constantly changing all the variables on a whim, including "variables" that aren't actually variable to begin with (e.g. laws of physics). Continuing the analogy from before, what Scott is doing here is like if one of the scientists were to notice that the mice seem to be becoming unhealthy, and another scientist proposes that it might be because their diets don't contain enough protein. Then the first scientist says, "okay, let's test for that. We'll send the mice to an alternate universe where the speed of light is 2 m/s slower than it is in our world, genetically modify them to have pink fur with purple polka dots, give them all tiny ear piercings, and start adding protein to their diets -- if your theory is correct, this should resolve their health issues."
I guess I disagree? People claimed that the clear difference between drowning kids and malarial kids in Kenya is distance, so Scott lists some (not even all that unrealistic) examples where you're physically distant to see if the intuition holds?
After rejecting physical distance he tries to think of some other factors: Copenhagen-style "entanglement", the repeated nature of the malaria situation as opposed to the (usually) one-off nature of the drowning child. He decides that these are indeed the operative intuitions, and then challenges them, finding all versions unsatisfying of using these as a complete basis for moral action, before laying out his preferred resolution.
I agree the examples come fast and thick, and sometimes it feels like we're nested a few levels deep, but I think he's exactly looking at the variables "physical distance", "declining marginal value of moral action", "entanglement with the situation" , and trying to isolate them individually and then in various combinations/interpretations.
Actually, what's going on here is that we observed some effect X in the original experiment (the drowning child). Then someone claimed "yes, but that effect only occurs when their living space is a small cage. In more naturally-sized living spaces, the effect X would vanish. The chemical isn't sufficient on its own." And so we go on to run the test again, but now the scientist builds a (still contained and controlled in other ways) testing environment where the mice live in burrows made of dirt instead of cages.
It's trying to apply rules of logic to something that is squishy and not made of logic.
Really there are no reason to believe that our moral intuitions are coherent. They probably aren't. Thought experiments are fun and useful for trying to explore the edges and reasons of our intuitions, but they have their limits. This article may have gracefully (or not gracefully, depending on your perspective) bumped up against them.
You could have a framework where you expect yourself, and hopefully others, to donate a portion of thier time and/or money to helping others (call it the old 10 percent tithe, although admittedly everyone has thier own number). If you already expect yourself to do this, then adding on saving a drowning kid once hardly costs you more in the big picture, and is the right thing to do since you're uniquely positioned to do it. if it's really important to you, you can just take it out of your mental tithe ledger and skip one life unit of donation that month (although you probably won't because it's in the noise anyway). But if you're by the drowning river and this is happening so often it's significantly cutting into your tithe, it's perfectly reasonable to start actually taking your lifeguard duties out of your mental tithe, and start wondering if this is the most effective way for your tithe to save lives. And if not, then we all reasonably conclude you're fine (even better off) not doing it.
"... doesn't seem to be a simple one-to-one correspondence where you’re the only person who can help: [sociopathic jerk thought experiment]"
I'm not sure if this tells us tooo much about the effect of other people in real-world moral dilemmata; one might bite the bullet and say "sure, /in that case/, where you know you're the only one who can help, you should; but in any real situation, there will be 1000 other people of whom you know little—any one of which /could/ help."
That is, if we're considering whether there is some sort of dilution of moral responsibility, I don't think the S.J.C. example really captures the salient considerations/intuitions.
-------------
I disagree with the other commenters about the utility of these thought-experiments in general, though.
They're /supposed/ to be extreme, so as to isolate the effect of x or y factor upon moral judgments—the only other options are to (a) waste all your time arguing small details & becoming confused (or, perhaps, just becoming frustrated by arguing with someone who's become confused) by the interplay of the thousand messy complications in real-world scenarios, or (b) throw up your hands & say "there's no way to systematize it, man, it's just... like... ineffable!"
If there is some issue with one of the thought experiments, such that it does not apply / isn't quite analogous / *is* isomorphic in structure but *isn't* analyzed correctly / etc., it ought to be possible to point it out. (Compare: "Yo, Einstein, man, these Gedankenexperimente are too extreme to ever be useful in reality! Speed of LIGHT? Let's think about more PRACTICAL stuff!")
I can't help but feel that some reactions of the "these are too whacky, bro" sort must come from a sense of frustration at the objector's inability to articulate why the (argument from the) scenario isn't convincing.
I'm sympathetic, though, because I think that sometimes one /can correctly/ dismiss such a scenario—intuiting that there's something wrong—without necessarily being able to put it to one's interlocutor in a convincing way.
Still—no reason to throw the bathwater out with the baby. It's still warm enough to soak in for a while!
In the Christian tradition, Jesus explains precisely what decides someone's eternal fate in Matthew 25 -- suffice it to say, it really is just meeting the material and social needs of the worst off people. No requirement you're religious in any way, and Jesus does mention that it'll lead to a lot of surprise both from disappointed "devout" people and confused but delighted skeptics.
Obviously there are other traditions and legends, but presuming Heaven is a Judeo-Christian term of art for a specific kind of eternal fate, it seemed relevant.
I'm not sure what it would mean to believe the gospel, but like absolutely refuse to care for a neighbor as though they were yourself. It is a gibberish idea.
Yeah that’s what James says in James 2:18. Contrast Ephesians 2:6-10. Seems like a contradiction! But it’s not. Paul explains in some detail in Romans 3.
The "actually existing Christian tradition" would say that the morally relevant aspect of action is the act of the will, not the change in external circumstances brought about. This is why the charity of the poor woman who gave her last coin was of greater merit than those of the rich.
Obviously one cannot harden one's heart to the poor and still be righteous. What I am saying is that external impact is in some cases disconnected from moral goodness; thus, the rich man who gives 1000 dollars has not done a moral act 100 times better than the poor man who gives 10 dollars.
> What if it is primarily the cultivation of a certain purity or nobility of soul?
Interesting theory. How much does that cost to do, at a median level of success? In terms of man-hours, or purchasing power parity, or wattage for sacred hot springs, or whatever standard fits best.
Would soul-cultivation be directly incompatible with funding antimalarial bed nets, or is there room for Pareto improvements in someone hedging between the possibilities? "Tithe 10%, and also do these weekly exercises, from which you'll personally benefit" isn't an obviously harsher standard to live up to than tithing by itself.
> After all, if the soul is immortal, its quality is infinitely more valuable than any material and temporal vicissitudes.
Giving up on explaining the nobility of soul formulation. However, I will say that immortality of the soul is not shaped like the linked image; the amount of suffering in Hell or Purgatory or the amount of joy in Heaven is far greater than anything terrestrial.
I would argue that saving drowning children is actually a very-high-utility action, because you can call the child's parents to pick the child up and they'll be super grateful, and even if they don't pay money, you'll accrue social and reputational benefits. Tacking on "...oh, but your water-damaged suit!" is misleading, because even with a water-damaged suit, saving the child is still obviously net-positive-utility.
(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)
The Drowning Child story takes a situation in which saving the drowning child is obviously high-utility, and conflates it with a situation in which saving the person-with-a-disease is obviously negative-utility.
I don't have a moral about whether you should give lots of money to charity. I just think the drowning child story is misleading us, because it says "...you chose to save the drowning child, so for consistency you should make the same moral decision in other equivalent situations" but the situations are not actually equivalent.
I would argue that it's mostly false that society gives you kudos for saving drowning children. Society gives you very little. The *child's parents* are the people who are rewarding you.
In steps the entrepreneurial nonprofit Singer Fineries, the world's first designer of high-end suits, gowns, and other formalwear that are all 100% waterproof. For the go-getting drowning-child-saver on the go! Ethically sourced materials made by fair trade certified artisans vetted by GiveWell, all proceeds donated to effective charities, carbon-neutral, etc.
Even better, the SF corporation will provide training in practical needlework, tailoring, and seamstressing for every saved child and hold a position open for them to work on the waterproof clothing. Sweatshops, you say? No, not at all! Ethical pro-child independence, we say! Earn your own living, live your own life, free of the neglectful parents who let you tumble into the lake and left it up to a stranger to save you!
"(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)"
I like the cut of your jib, young lion, but I think the EA and those inspired by Singer would be appalled. You're not supposed to *benefit* from this, you are supposed to engage in it via scrupulosity-evoked guilt! You should be paring yourself down to the bone to save drowning children every spare minute! You surely should *not* be making a nice little living from being a professional lifeguard! 😁
I have to say, if you must live beside a river full of dumb brats whose inattentive parents can't be bothered to keep them from drowning themselves, you may as well make a go of it how you can. Venmo those negligent caretakers for every cent you can, and don't forget to shame them on social media if they don't cough up!
"But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable."
What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!
If this is happening I genuinely think that someone living in this cabin needs to realize their life has been turned upside down by fate and that their new main life goal has to be "saving the 24 lives that are ending per day" by whatever means possible. Calling every newspaper to make sure people are aware of the 24 daily drowning kids in the river. Begging any person you see to trade with you in saving 12 kids a day so you can sleep. Make other people "touch the problem." Whatever the solution is--if a problem this immediate, substantial, and solvable appears and no one else is doing anything about it, you have to do what you can to get every kid saved.
I took it as "personally stop whatever else you may be doing to physically save the kids, despite the effect on your own life, sleep deprivation, etc." (until you pass out and drown)
if other means are available, damn right I'm making sure there are lifeguards
"What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!"
Why not? You're one person, there's a kid in the river every hour, it's physically impossible for you to save every kid in 24 hours in perpetuity. You have to eat, sleep, catch your breath after jumping into the river and pulling the last kid out, etc., never mind working at your job.
So most people would agree that yeah, you can't save them all, not on your own. Maybe after saving 37 kids straight you collapse from fatigue and end up in hospital. That means all the rest of the kids drown unless someone takes over from you. Or you work a reasonable rate of "I save one kid every daylight hour for ten hours, then that's it".
If you're discounting the need to have some connection to the harms in order to be responsible for curing them, be it causality or proximity or association, then you're stuck back into the original problem we're trying to escape here. Other than your proximity to the river, there's nothing special about your situation unless or until you've assumed a duty. You are best positioned to intervene if it's just physically jumping in 24 times a day, but we're advanced humans with technology and an economy, so your neighbor a half mile in from the river could just as equally hire a person or pay for a contraption to save the kids as you could. If there is no need for a connection, merely awareness, then why isn't your new main life goal saving the 2800 children under the age of 5 who die every day from dysentery? Because there are other people doing something about it? Not very well, it would seem!
I was amazed that this essay wasn't about /didn't get to USAID. USAID is a global aid program Trump is essentially closing. As a result he (and America) are being blamed for killing poor foreigners who will apparently no longer survive due to not receiving the aid. Would it not be our problem at all if we'd never given any aid? Are we really the ones killing people by no longer voluntarily providing aid?
Yes, because if the actual aid is shut down all of a sudden without prior warning, you are exhibiting the vice of akrasia, and not giving people who you now have an obligation to time to adjust or plan out their response. Now the USA does have at least a little obligation towards poorer countries, so when it goes to start fulfilling those obligations again, people will not trust it.
There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
A sufficiently competent and powerful statesman. It would take a great deal of power to be able to pick and choose when dismantling these organizations.
If we make Trump an eternal dictator of the entire planet, all the drowning children will become his personal property, and then he will have an incentive to save them. Perhaps he will order Elon Musk to build a fleet of self-driving boats to rescue those kids.
> There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
Yes, this is something that frankly appalled me about Musk's handling of the situation. So far as I can tell, the percentage of USAID funding that was actually being spent on DEI or wokeness-related programs was small, and it's not like Musk couldn't have afforded to hire auditors with basic reading comprehension to go in and surgically remove the objectionable items. He chose to go in with a hatchet on day one for the sake of cheap political theatre.
I don't think $47,000 for a Transgender Opera in Colombia is a wise use of taxpayer funding, but every item on that list combined amounts to less than half a billion dollars, and USAID was spending 40 billion a year.
There are even items on that list that I'm not sure should have been axed. Is anyone going to die because Afghanistan lacks condoms? Not directly, but there might be some risky abortions avoided, not to mention that Afghan TFR is well-above replacement and could place demographic pressure on limited agricultural resources, possibly triggering war or famine. I don't have a high opinion of Palestinian society, but unless the plan is to liquidate the region's population then constructing a pier to facilitate imports of essential food items isn't an automatically terrible idea.
Here are some at least perceived serious problems with USAID and rationale for the rapid action:
1) Lots of the funding was not going directly to overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
2) In many cases USAID funding directly sponsored and supported poltical ambitions and patrons of one party, not both parties in the US. This rendered it perceived as not just neutral but actively harmful to the opposition party.
3) Because the first 100 days of a lame-duck US President's term are widely perceived to be much more effective and important than the remainder, it is/was necessary to move very quickly to shut it down, both to actually succeed (it is already tied up in courts), and to see the impact on individual recipients, and use the impulse response of the system to better-understand the fraud and patronage that might be involved.
Fixing e.g. PEPFAR after the fact is not ideal, but letting the perfect be the enemy of the good is also not ideal.
Because a presidential term where all branches are controlled by one party is incredibly rare and hard to predict, and certainly that term will not be controlled by 'you' (whoever you is), and might not lead to the same desired outcome.
For example, right now, the house of representatives is balanced on a knife edge of control where any absences render it evenly split or controlled by the opposition.
If the control of multiple branches was so important, then why try to invest the all-important 100 days in shutting down the programs by executive order without involving Congress? That could have been done in the first term just fine.
1. An elected officeholder or group continuing in office during the period between failure to win an election and the inauguration of a successor.
2. An officeholder who has chosen not to run for reelection or is ineligible for reelection.
Wow. Today I learned about definition #2. God, do I hate stupid extra definitions of terms that ruin the first, good definition of those terms (see also: literally)
This is interesting, as I have always understand lame duck-ness to be definition 2, not 1. I would have reversed their order based on my own experience.
I could maybe buy the limited-time-window argument, but people in Scott's comment section were saying it would only have taken a few interns a couple of weeks to read through all the axed NSF grant proposals, so... even under time pressure, I think Musk could likely have done better.
> Lots of the funding was not going directly to overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
If you're paying the staff who run a charity NGO, and they talk to their patrons and vote for the party who funds them, then... yes, you will be 'enriching politically-connected individuals of the opposition party', almost by definition. I don't know a solution to this problem other than the GOP being less negligent when it comes to institution-building.
At least on paper, less than 10% of US foreign aid is/was allocated to 'human rights and democracy', or anything that could plausibly be interpreted as 'NGO color revolution money'.
The sexual revolution debate aside, I don't think any and all birth control is wrong, so... gonna have to differ on the condoms.
The problem is trying to disentangle the good parts from the bad parts, since any attempt to question it is met with the "people will die!" defence and asking the civil servants "so what did you do last week?" is seemingly intolerable interference.
Nothing wrong with gutting fat or rot. Some servants however, really do say "I stopped HIV babies from dying," and it is competent statesmanship to be able to distinguishing between the two, or at least undo the problem when there is just cause.
I would think it is worse to take action to foreseeably cause death, as opposed to neglecting to take action to foreseeably prevent death. (If this weren't the case, the answer to the trolley problem would be obvious)
I do admire that you continue to advocate for some version of "EA values" in these increasingly screw-you-got-mine times, even if it's largely academic to me as a Poor with limited resources wrt the scale of drowning children. Not having any realistic path towards ameliorating that state of affairs means it's even more important to Be Excellent To Others in whatever small capacities present themselves, I think. Everyone can do the mundane things somebody has to and nobody else will, if one cares to notice them, Copenhagen be damned. (While acknowledging that yes, there's real value in blissful ignorance! Premature burnout from the daunting scale of worldwide lifeguard duties is worse than at least helping the local drownees and ignoring the ones in the next city over.)
The real problem comes with coordinating others to act similarly, so the burden is collective but light, versus an endless uphill battle for a few heroic souls. That always feels missing from such philosophical musings - the kind of people susceptible to Singerian arguments aren't the ones who most needed convincing. Classic memes like religion sort of work for instilling the charitable drive, but come with a whole host of other "entanglements" that aren't all desirable.
I think a core objection to giving lots of money to be charity might be skepticism that the people being saved actually exist.
Like... the Effective Altruism page about malaria bednets has this long string of numbers they multiply together, to figure out how many dollars it takes to save a life. And that's legit cool. Of course, when you multiply a string of numbers like that, you tend to get huge error bars because all the uncertainties multiply. But they're smart people, I assume, and they're trying really hard, so I'm sure they're not trying to be deceptive. I have to respect that they've done as much as they have.
But... I'm in an environment where people will say anything to get me to give them money, and I guess I've gotten used to expecting evidence that people's claims are real? And I know that, if I buy a bunch of bednets to prevent malaria, no evidence will ever be provided that any malaria was prevented. At best they'll have some statistics and some best-guess counterfactuals.
And -- I mean, I'm sure the bednets people are good people. I've never met any of them personally, but they're working on a charity that does really good things, so they must be really good people with the best of intentions. But it sort of feels like they don't really have an incentive structure that aligns with communicating honestly.
I dunno. The internet in general isn't a high-trust place. I guess probably the people in the charity part of the internet are especially honest and trustworthy, so rationally I'd probably have to concede that the charity really is saving lives. But I don't feel it.
>I'm in an environment where people will say anything to get me to give them money<
So... got a lot of it laying around then, eh?
Hey, unrelated but FYI, I've been meaning to tell you ever since we last saw each other at that university or high-school wherein we were real good friends: if you give me some money, I'll write you into my next book as a badass superhero. Also, I may be on the verge of solving world peace and stuff, if only I had the funds... ah, the tragedy of it all—to have the solution in my hands, yet be stymied by a mere want of filthy, filthy lucre–
"I'm in an environment where people will say anything to get me to give them money"
Begging emails from charities. Gave one donation to a specific cause one time, got hailshowers of "donate donate please give donate we need money for this special thing donate donate" begging emails until I directed them all to spam.
That sort of nagging turns me away more than anything; I don't have a zillion dollars to donate to all the good causes, and I'm going to give to what I judge most in need/most effective. I am not an ATM you can hit up every time you want a donation for anything and everything. And of course they used the tearjerking heartstrings tugging route: here's little Conchita or whomever who is the human equivalent of a one-legged blind puppy, don't you feel sorry for her? Here's Anita who is the homeless mother of twenty in a war zone who has to pick up grains from bird droppings to feed her earless quadruplets, don't you feel sorry for her?
No, in fact, because you've so desensitised me with all the begging and all the hard cases, I have no problems shrugging and going "not my circus, not my monkeys".
There's a thought experiment, where someone runs up to you and says: "Give me a hundred dollars right now or else TEN TRILLION BILLION GAZILLION people will die horribly!"
And the thought experiment says: "Okay, that's a crazy claim, but as a Bayesian you have to assign some probability to the chance that it's true. And then you have to multiply that by the disutility of ten trillion billion gazillion people dying horribly, and check if it's worse than giving a hundred dollars to an obvious scammer. And what if the number was even more than that?"
But in practice people don't do this, we just say "no thanks, I don't believe you" and walk away. I'm not sure what rule we're applying here, but it seems to work pretty well.
And when I think about buying anti-malaria bednets, I feel like that same sort of rule is getting applied.
GiveWell is mostly advertising that you donate to charities that are not them. So it really seems like your thought experiment is in the opposite direction: someone tells you to give to an unrelated third party and you're trying to come up with reasons why the third party isn't really a third party.
The easy out for this is that because the claim is physically impossible the actual expected utility is always 0. Probability doesn't innately have to be an asymptote of 0, it can just be 0.
Our knowledge of what's physically possible is probabilistic, though, so this out doesn't really work. I think a more realistic out is that even though we don't have the cognitive resources to correctly estimate a probability at 1/3^^^3 or something by reasoning explicitly, conservation of evidence implies that most general statements about what's happening to about 3^^^3 entities are going to have a probability of about ~1/3^^^3 or lower so failing a straightforward logical argument why it's much larger in this case then if you have any risk averseness at all, and probably even if you don't, you should ignore such possibilities.
I don't think this is the core objection, it's more often an excuse. If everyone trusted the EA people's figures, most people still wouldn't donate anywhere near as much as EA people say they should.
GiveDirectly has a web-page with a real-time-updated list of testimonials from people who receive money and saying what they did with it, so I don't think this is the main blocker.
After thinking it over somewhat, sadly I think I have to admit that this *was* an excuse.
I recant the above statement. I do think that statistics are easy to lie with, or easy to get confused and report overly optimistic numbers despite the best of intentions. But I don't think it was my core objection.
Proximity based moral obligations work because the further away something is, the less power you have to actually affect things, and therefore the less responsible you are for them. You may say 'give to effective charities' but how do I know that those charities are actually effective and are not lying to me, or covering up horrific side effects, or ignorant of their side effects? Therefore, it would seem that I have more of an obligation to give to charities whose effects I can easily check up on in my day to day life*.
By this principle, the person in the NYC portal has an obligation, since he can actually see and actually help. If the guy screws up following your instructions, the situation is not worse than before. If you come up with a highly implausible scenario where his screwup can cause massive damage, then it becomes more morally complicated.
Same for the robot operator, since he is in control of the robot and knows what it is doing, assuming he knows it won't risk the patient's life. If you were a non-surgeon robot operator who came across the robot in the middle of an operation (the surgeon took an urgent bathroom break?) it would be immoral for you to help, since you wouldn't know what effect messing with the surgery would have.
In the same way, if I am simply told that going into a pond and pressing a button would save a drowning child halfway across the world, well, I have no way to verify that now do I? It could blow up a dam for all I know.
For the drowning child question, you always have a moral obligation if it occurs, but you don't necessarily have an obligation to put yourself into situations where moral obligations occur. Going out of your way to avoid touching things however is the sin of denying/attempting to subvert God's providence, see Book of Jonah.
So my Copenhagen answer is as follows: if a morally tough situation happens to occur to you, it is God's providence, and He wants you to do it.
>God notices there is one extra spot in Heaven, and plans to give it to either you or your neighbor. He knows that you jump in the water and save one child per day, and your neighbor (if he were in the same situation) would save zero. He also knows that you are willing to pay 80% of the cost of a lifeguard, and your neighbor (who makes exactly the same amount of money as you) would pay 0%
The neighbor judged himself when he called you a monster for not doing enough to save people didn't he? He also was touched by the situation when he involved himself in it by commenting and refusing the counteroffer. Its also seems fairly proximate to him for him to be auto-involved, and he is cognizant of this and in denial. Problem solved.
I recognize where you are going with this, and my point is not that you are a monster for not doing enough, but that your donations can have side effects which you cannot detect and cannot evaluate to adjust for in time, or they can end up not doing anything. Sure you can export it to other EAs to verify, but how can you trust them to be honest or competent? The crypto fiasco is a good example here.
>Alice vs Bob
God is Omnipotent and Omnibenevolent, he can have infinite spots in heaven and design life by his providence so that both Alice and Bob can have the appropriate moral tests which they can legitimately succeed or fail at. Bob would likely have a nicer spot in heaven though assuming he succeeded, because he had more opportunity for virtuous merit.
*Note that this argument is not an argument against giving to charity, only against giving to 'untrusted' charities, which I classify EAs as because they seem to be focused on minmaxing a single issue as if life is designed like a video game without considering side effects they can't see, and are prone to falling for things that smell suspiciously like the St Petersburg Paradox.
My logic leads me to conclude that it is optimal to use your money to help the homeless near you since you have the most knowledge and power-capacity towards it, which I have been half-heartedly doing but should put more effort into.
I've helped homeless people sometimes but more often than not I haven't. Homeless people sometimes have simple problems that you can help with (e.g. need a coat) but often it would require an expert to actually help them out as much as a malaria net would help someone in Africa.
This is true if the distribution of problems are the same near and far, but if you live in a rich country and are thinking of donating to a far away poor country, that's probably not true: the people near you with _real_ problems are people with medical conditions that require expert knowledge to solve, or mental problems that we may not know how to help, and so forth. While the people in poor countries may have problems like vitamin A deficiency which is easily solved by giving them vitamin A, or endemic malaria which is relatively easily solved by giving them bednets.
Even with the distance, I'm pretty confident it's much easier for me to get a hundred vitamin A capsules to Kenya than to cure whatever it is that affects the homeless guy who stays in the shelter a few blocks away from me.
Indeed, the whole point of charity evaluators like GiveWell is to quantify how easily a dollar of yours will translate into meaningful effects on the lives of others.
You lost me when you brought Jonah into your argument. IIRC, God's brief to Jonah was that he specifically was to go to Ninevah and preach against the evil there. After trying to avoid the task, Jonah finally slogged his way to Ninevah and preached what God had told him to preach. But he failed God because didn't preach about His mercy, as well. But nowhere in the story do I remember God telling Jonah to preach about His mercy.
How can we know God's will? God didn't tell Jonah that there was an additional item on his SoW. The only takeaway I get from Johah, is that if I rescue a drowning child, I need to preach about God's mercy as I pull the kid out of the water. In the trolly scenario, God's will may be that the five people tied to the track die, and the bystander lives. But His providence put us in the control of a trolly car, and he left us the choice between killing five people tied to the track or a single bystander. We don't know what God's optimal solution is.
You misunderstood my point. God gave Jonah a job, he tried to evade it entirely, and that was clearly the sin, which indicates that trying to cleverly dodge moral responsibility by removing proximity is bad.
Jonah's behavior in chapter 4 is not relevant to the point.
>How can we know God's will
Study moral theology and you can guesstimate what the correct action in a given situation is.
>preach about God's mercy as I pull the kid out of the water
As others pointed out, you can recruit the kid to help you pull more kids out of the water.
So, what does Christian moral theology indicate we should do if we find ourselves in a trolley problem scenario? Bear in mind that this is also the type of ethical koan that has troubled Talmudic scholars. For instance, Rabbi Avrohom Karelitz asked whether it is ethical to deflect a projectile from a larger crowd toward a smaller one. Karelitz maintained that Jewish law does not allow actively causing harm to others, even to save a greater number of people. He argued that human beings cannot know God's intent, so they do not have the authority to make calculations about whose life is more valuable. Thus, according to Karelitz, it would be neither ethically or halachically permissible to deflect a projectile from a larger crowd toward a smaller one because doing so would constitute an act of direct harm.
As for me, I'd say the heck with the Karelitz, I'd deflect the projectile toward the fewest number of victims. I don't know what I'd do if my child was in the group receiving the deflection, though. But I'd probably make my decision by reflex without considering the downstream ramifications. Ethical problems do not lend themselves to logical analysis because human existence is greater than logical formulas. Sure, we could all be Vulcans and obey the logical constructs of our culture, but the minute we encountered a Gödelian paradox, we'd be helpless.
You are free to flip the lever, but not push the fat man, since the trolley running over a single person is a side effect, while pushing the fat man is directly evil.
The trolley running over a single person is a side effect of moving the trolley, the fat man dying is a side effect of moving the fat man. There isn't really a sharp line here.
Its not a side effect though. You are actively choosing to push the fat man, ie it is you active will that the fat man be pushed, and the trolley is stopped by the means of pushing the fat man.
I will point out that I think a more nuanced framing of Rawlsian ethics is inter-temporal Rawlsian ethics where we both don’t know **where** we will be born or **when** we will be born.
Instead of the argument of keeping taxes on the rich low so they don’t defect, those in the future will want as much growth in the past as possible to maximize the total wealth of the world and the total number of medical breakthroughs available to them.
There is now a balance of being fair and equitable at a single spacetime slice and people farther in the future wanting more growth and investment in previous time slices that better benefit them.
I think this makes the tradeoffs we often confront in redistribution vs investment more salient and makes the correct policy more difficult to easily figure out.
(Sorry if this was mentioned in another comment, I looked at about half.)
I think intertemporal Rawlsian ethics is a wonderful idea, but it’s *really* sensitive to your discount function and error bars on the probabilities of stable growth and maintenance of a functioning civilization , isn’t it?
> First, she could lobby the megacity to redirect the dam; this would cause the drowning children to go somewhere else - they would be equally dead, but it’s not your problem.
By the standards of morality in the thought experiment this is the correct solution. The prevailing standards in this hypothetical world allow for magical child flushing rivers accepted without significant protest or mitigation. Objectively, you are not doing anything wrong.
Morality is not an abstract thing to be discovered and while basic survivorship bias means that societies with a sense of morality that result in Child River O'Death are unlikely to be tremendously advanced we can say that certain moral codes can be more effective than others at human flourishing, you cannot use thought experiments to find the rules because there are none. It's all a blur of human messiness.
If I got to choose, I would rather give 1000 people a sandwich or something instead of torturing 1000 people by cutting off their fingers.
Sure, you can argue "you cannot use thought experiments to find the rules because there are none".
Yes, it's "all a blur of human messiness".
Would you rather give 1000 people a sandwich or torture 1000 people? If you prefer one, well, you might even have a reason, let's get to the bottom of it. I'll call it "morality". And if this hypothetical seems to have a preference, we can probably assume other hypotheticals do, like the ones Scott is using.
If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
> If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
Yes, I fully understand that you're gesturing at the normal human tendency towards being pro-social in the case of Sandwich V. Torture. But the issue with these "thought experiments" which consist of implausible setup followed by binary choice of options designed to be as far apart as possible is that the human tendency you're referencing does not operate according to rules of logic and any answer I can give provides no information on how morality or decision making works. Or: if I'm in a position to torture 1,000 people by cutting off their fingers, I need more information before I can tell you my choice because my actual choices - the thing we are trying to model and/or predict - depend on those variables.
Crafting a hypothetical to try to disprove that someone's objection to a previous hypothetical - or even worse, a concrete course of action which comes with all of the contingent details that do matter a great deal - wasn't their real objection is useless, because it requires inventing situations that are outside the distribution of the thing being studied.
If someone came to you with an idealized method of calculating an object's trajectory and you point out that it is unlikely to be correct because it doesn't take gravity into account, them producing a thought experiment where the object is in a perfect vacuum without the influence of gravity does not mean that gravity isn't your real objection to their method.
A new reign of terror is needed. this comment section sucks. I think I saw someone advocating the defunding of pepfar on grounds of "the kids mother shouldn't have been wrong about sexual hygeine" or something.
more productively, I disagree with the veil of ignorance approach. Just be the kind of person that the future you hope for would admire you (or at least not condemn you). much simpler and more emotionally compelling, and I think it leads to better behavior.
I think this points at something important, but the intuition is sharper if you also stipulate the future knowing your context and thoughts and being much wiser and much, much kinder. Some people believe this means it is "good to believe" in a religion, but I think that is sort of silly and arrogant. Of course there are many people who have enough empathy to know your thoughts and there are very moral people.
People utterly refusing to engage really indicates the change in audience from people who find this kind of discussion interesting on its own merits (SA is clearly doing this to probe the limits of moral thinking as an intellectual exercise) and people who view this kind of moral discussion as a personal attack on them. Discussing morality like this feels like a core part of the rationalist movement and refusing to do so is not a good sign.
To flip this a little: I think it's maybe good that Scott is spreading EA ideas outside their natural constituency. In the spirit of, "if you never miss a flight you're spending too much time in airports", I propose "if you're not getting bad faith pushback and refusal to engage, you're not doing enough to spread pro-social ideas".
While I think the commenting population has gotten worse since the Substack move, I also think the drowning child is a terrible thought experiment and more complicated versions are not so much enlightening as they are a mild form of torture, like that episode of The Good Place where the gang gets explores a hundred variations on the trolley problem.
Discussing morality is interesting. *This particular branch* is exhausted and everyone is entrenched in the degree to which they admire or despise Singer's Mugging. The juice has been squeezed.
I am rabidly opposed to the rapid abolition of USAID.
But I am, in fact, quite struck by how appalling the continuation of the AIDS crisis in Southern Africa is and how little we are willing to condemn the sexual behavior that appears to be the driving factor in this crisis.
Babies may be blameless, but it is legitimately fucked up that a very-easy-to-prevent disease has such a high prevalence. AIDS is not malaria. The prevalence does not appear to be reduced by PEPFAR over multiples decades
Failing to engage with the thought experiment is a failure to examine your own moral system, and a failure to contribute anything useful to the discussion. Any of these comments (which there are way too many) that say something like "it's too abstract/too weird what if I change the premise of the thought experiment so I don't have to choose any bad option/ignore the thought experiment because it's dumb" are missing the whole point.
If your answer to the trolley problem is "this wouldn't happen to me, why would I think about it" then you're failing to find what your moral system prioritizes. If your answer to a would-you-rather have snakes for arms or snakes for legs is "neither to be honest" you're being annoying. If your answer to "what superpower would you have" is "the superpower to create superpowers" you're not being clever, you're avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say "well um technically the rules state *any* superpower" then change the rules yourself so you can't choose the thing that's the most boring, obviously unintended, easily-avoided-if-the-question-is-just-phrased-a-different-way option. Choose! Pull the lever to kill 1 person instead of 5 or not! What are you so afraid of? Learning about yourself?
Scott says this in the article:
"Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
And I think it's worth a whole post by itself why people are so reluctant to choose. Anybody unwilling to take these steps to try to figure out what they genuinely prioritize is *actively avoiding* setting up a framework for their own priorities, moral or otherwise. It’s not just that these people about dodging an uncomfortable choice — they're also refusing to engage with the process of decision-making itself. I cannot imagine setting up any reasonable moral system if I didn't do something so simple as *imagine decisions I don't have to make right now, but could have to make*. If I don't do that I'm basically letting whatever random emotions or vibes I feel in the moment, when I really really have to choose, BE my moral system. Why would people do that to themselves? Something something defense mechanisms? Something something instinctually keeping their options open when there's no pressing need to choose?
I don't know. I would choose snakes for legs though.
This is not my experience. People in the comments are talking about how it's "far beyond the actual utility of moral thought experiments", "How is bringing up all of these absurd hypotheticals supposed to help your interests?", "never encountered a hypothetical that wasn't profoundly unrealistic". This is a post about hypotheticals. If they don't engage with it, instead dismissing the use of hypotheticals altogether, well, refer to my main post.
Many other comments dismiss the hypotheticals with "but, like, we're all atoms and stuff, man, so like, what means anything?" I have a hard time believing these people wouldn't care if their loved ones were tortured. If they say they would care, great, they've just made a decision in a hypothetical, hopefully they're willing to make decisions in other hypotheticals.
>Religion and culture already control their actions enough to make civilized society possible. Nothing more is needed.
Nothing more is needed? There are things that I think are bad that exist in the world (malaria, starvation, poverty, torture) that I would prefer that there is less of. If I can make it so there's less of this stuff, then I'd like to. To do that, it seems I first have to decide what this bad stuff is, and to quantify how bad bad stuff is compared to each other (paperclip vs cutting off fingers with a knife). That's morality, and it can help guide decisions, too!
In 2009 on LessWrong, no less! I love it, I missed this one. Yeah I guess you can’t force all readers to read this before each article dealing with hypotheticals.
Maybe a disclaimer like “if you’re about to dismiss the use of hypotheticals, visit this lesswrong post” at the top? But I imagine the comment section would then also have people arguing against this lesswrong post, which seems kinda dumb. Also, do you really want homework on every post? “Read the entire sequences before this post to understand this best”. Ehhhhhhh
Maybe I’m going about this all wrong and I should just be ignoring all the comments that don’t engage with the hypothetical, because *I’m* not discussing the hypothetical either, I’m countering people who aren’t discussing it. So I’ve made the comment section EVEN LESS about the actual post. I don’t know, ignoring a huge chunk of comments who just don’t get the post feels weird though.
I agree that people who do this are annoying. Though too many thought experiments are also annoying. In my experience the reason that the type of person who refuses to answer a hypothetical does that is because they interpret it as a "gotcha" type question that is being asked by the asker for the purpose of pinning them down and then lording it over them by explaining why they're wrong or inferior in some manner. I don't think that's always, or even often, the intention of the asker, but that is how the reluctant askee tends to view it.
Yeah, this. Scott has a good track record of not using antagonistic thought experiments, but elsewhere online, that's not the case. It makes sense some commenters would apply a general purpose cached response to not indulge it.
Agreed. I think moral reasoning of this sort is worthless at convincing others and the methods of analytic moral philosophy in general are not good. But failing to engage with hypotheticals (including explaining why they're irrelevant or undermotivated) is like a guy with a big club saying "Me no understand why engage in 'abstract' reasoning, I just hit you with club."
I have 15,000 USD in credit card debt, near-term plans for marriage and children, plus a genetic predisposition towards cancer that may be passed on to my children. To what degree is my money my own?
If I always knew that I would have obligations to my family, and I could never fully predict my capacity to meet those obligations, then how should I think about the money that I gave to effective altruism when I was younger?
I think Scott is correct, to a first approximation, and that there is virtue in buying bed nets, but no obligation. I also agree with the comment about bed nets being a rare example where we can be confident that we're doing a good thing, despite how distant the problem is from our everyday lives.
Even so, I think the rhetoric around effective altruism is sometimes a bit... I don't know, maybe tone deaf or something? Because lots of people aren't great with money, and when you ask them to tithe 10% they're going to think of all the times they couldn't afford to help a loved one, and they're going to extrapolate that into the future, and they might decide that virtue is poor compensation for an increased risk of struggling to feed their future children, or whatever.
And, yeah, it's too easy to use this as an excuse for being lazy and not demonstrating virtue. And people who aren't good with money could sometimes be better, with a bit more effort. I really do think that Scott is mostly right, here. But it also feels like he's missing something important.
If it helps, I think you're accidental collateral damage. He's mostly talking to the types of person who say they wouldn't donate to charity even if they had means, and brag about this fact. I think insofar as EA is concerned, you should put on your own parachute first. There's no point in ruining your life when someone else can do similarly without ruin.
In theory if we lived in a world where everyone was already donating a lot, yeah maybe that would be a concern (but probably not since societal effort has nonlinear effects). But we're very far from that world, and I think it's wrong to think we are.
In my tradition, when it comes to questions of the heart, even one penny from a struggling widow is more than all the billions the hyper rich donate. There are important questions about how to help people in need, but that is the landscape, and we are travelers on it. Your heart isn't defined by magnitude of impact. It's beauty is captured in that wordless prayer that others might be better off than you.
Doesn’t this produce a paradox though? If I believe that as a median American I’m expected to donate $32,000 per year to reduce myself to the global median of $8,000 why would I bother working at all?
You could of course conclude that not only is every $ I fail to donate a theft from the global poor but that every hour I fail to work is an equivalent theft. Except, even as an EA sympathetic person that feels ridiculous.
I’m not sure there’s a clean solution to this whole paradox, but I’m also not sure there’s a clean above model works well.
I already knew that. And I think you missed the point of my question, which is "To what degree does my money belong to me, as opposed to my family? And how will I justify my altruism to my family if I find myself unable to pay their medical bills in the future?"
Your philosophy would be more convincing if I could reasonably expect strangers to altruistically help me if I find myself in need, such that the selflessness isn't so unilateral. But Scott already pointed out that I can do no such thing, and at best I can pretend. But by pretending, I would be gambling with the lives of the people I love.
I know it's possible to be okay with that. I might even agree that it's noble. But they say that the strong do what they can while the weak suffer what they must, and there's as much truth in that as there is in effective altruism. The world isn't just. And I have neither the obligation nor the inclination to be a saint.
- It costs much more money to actually save a life through charity ($4000+) than to save a drowning child in these thought experiments.
- A natural morality IMO is that helping people is never obligatory; it's supererogatory: nice but not obligatory. The only obligation is not to hurt others. Saving drowning children, and saving lives through charity, are equally good, but neither is obligatory. (Going further, of course there's the nihilist view that the concept of morality makes no sense; also various other moral non-cognitivist views, like that moral intuition is much like a taste, and not something that can be objectively correct or incorrect, so there's no reason to expect it to be consistent.)
Or they can abandon the intuition that saving the drowning child is obligatory; or abandon the meta-intuition that analogous situations ought to be treated analogously, and instead rely on their intuition in each case separately. Or of course abandon the intuition that charity is not obligatory, as the EAs would like them. If we find a contradiction between people's different moral intuitions, that doesn't tell which one should be abandoned.
Why would you ever abandon that intuition? It seems I would rather take that as axiomatic, and then work backwards from it.
I don't feel a pressing need to resolve metaethics wrt charity. And ultimately all of this discussion can easily be discounted as so much sophistry, but dear god let me not get to a point where I'm ever thinking that saving a drowning child is not obligatory lest it undermine my courage to act
I've never encountered a hypothetical that wasn't profoundly unrealistic. You have put Bob in a world where there is no local government. No police force to call and no local volunteer fire department. There's no local Red Cross to lobby for a solution to the Death River. Delete these real aspects of the real world and there will be an abundance of problems that are too big for one guy in one cabin next to one river to solve.
Also. If Bob is so isolated in his cabin, where are the 35 kids floating down the river coming from, all of them still alive? You also omitted the impact of their grieving parents who would be lobbying and suing local government for its failure to take action.
This hypothetical is as unrealistic as speculation about the sex lives of worms on Pluto.
I'd describe it as farcical but directionally relevant to some elements of reality. There are indeed many people we can help, and to the one who suffers, it hardly matters if they're in our backyard or not, so long as either way we don't help them. And to the person in the cabin, it hardly matters if people suffer and die nearby or far, so long as they've resolved to ignore them. There is no governance covering both people, yes. That part is accurate to real life, for international aid at least. But they are indeed sharing a world, with capacity to save or be saved.
Realistic? Name a city or county in the United States that would not react to the fact that 35 children drowned every single day in a river within their boundaries.
Rare? Name one point in the history where such an event has taken place. These events are not rare—they are nonexistent.
A privately hired lifeguard has nothing in common with a publicly funded fire department, which exists in every city and every suburban county in the United States. In the United States, citizens are not expected to deal with large scale problems such as 35 kids floating down the river every single day.
Incorrect, at least on a few levels. Many if not most small municipalities throughout American history rely on volunteer and citizen-based fire departments.
Likewise, as a society that arguably aspires to maximal freedom in the US Constitution, Americans are very much expected to try to deal with large-scale problems through private mechanisms, and in fact our charities and charitable giving as a percent of our GDP, and in absolute dollar terms, are world-leading by a significant margin.
Also just to tie this back to a percentage figure, Americans give 1.47% of GDP to charity, roughly double second place New Zealand, at least per ChatGPT and assuming no hallucinations, and the dollar figures are two orders of magnitude larger.
Charitable Donations as a Percentage of GDP:
According to a 2016 report by CAF:
United States: Charitable giving constituted 1.44% of its GDP, totaling approximately $258.5 billion. (Wikipedia)
New Zealand: Donations amounted to 0.79% of GDP, around $1.1 billion.
Canada: Charitable contributions were 0.77% of GDP, equating to $12.4 billion.
First of all, modern volunteer fires departments almost always receive some municipal funds for equipment, building, et cetera. By happenstance, I once attended a community fundraiser for a volunteer fire department. That well attended community fundraiser brought the communal into the equation.
Second of all, charities represent a communal effort to solve communal problems. They are one gigantic step beyond a single man at a cabin expected to deal with an obviously communal problem,
The hypothetical also assumes that the parents of these hundreds of dying children will play no role in trying to achieve a communal solution to this communal problem.
While I don't disagree, you still haven't demonstrated that receiving municipal funds are a better solution. That it tends to evolve in that direction is meaningful, but might actually be an antipattern co-opted by Moloch rent seekers for example.
This is devils' advocacy - I have no experience in this area, but I do think that the volunteers throughout our history deserve due credit and may have done a great job relative to our current system.
I think volunteerism is great too. However, I distinguish individual efforts from communal volunteerism. Believe it or not, I had a dear friend who managed to reach out his unusually long arm and grab a kid just before he landed in a raging flooded river. This event happened once in his lifetime.
Communal volunteerism realizes that there is a recurring problem in society that could be helped by a self-organized group standing ready to help. The Red Cross, founded in the 19th century, served as a template for many of these organizations.
BTW, the hypothetical guy in the cabin could have built a weir 1/2 mile up the river from his cabin. A weir is a flow through dam used for catching fish. This one would be designed for catching drowning kids.
I view this from an evolutionary perspective. We are hard wired to react vigorously to events that happen in our presence, such as a lion attacking a child. We have no evolutionary wiring to respond to events outside of our personal experience. It's hard to go against evolutionary hardwiring.
Hypotheticals aren't meant to be realistic, they're meant to isolate one part of a logic chain so you can discuss it without those other factors being in consideration. People have a bad habit in debate of switching between arguments when they're losing. The hypothetical forces you to focus only on one specific part of your reasoning so you can either account for the faults the other person thinks it has, or admit that it's flawed and abandon it. It's a technique for reaching agreement.
A quick (sigh) hypothetical example:
"If you have $500 you must give it to me. I would use $500 to make rent. If I make rent I will be able to use my shower. I will look presentable for an interview. I will then get this high-paying job and I can pay you back. Also you do not need the $500. You have an emergency savings fund already and no immediate expenses."
Most of the time an argument would look like this:
"Yeah, dude I'm saving the money in case some big expense comes up, sorry."
"I need that money way more than you though!"
"It's not fair to ask me to pay your expenses."
"I'm going to get a job and then you won't have to!"
"Are you sure you'll get the job?"
"Yeah! So it's just this one time!"
"What if you don't?"
"I will but even if I don't I still need to make rent and you won't miss the money!"
"That's not the point though."
"Yes it is!"
etc.
See how the money requester is bouncing back and forth between his argument that he should get the money because he's going to get the job and pay it back, and his argument that you have an obligation to give him money because you don't immediately need it? You can isolate one of those arguments with a hypothetical:
"Let's say tomorrow the perfect candidate walks into their office and takes the job before you have a chance to have your interview. This guy's exactly who they're looking for, immaculately qualified, and they hire him on the spot. So you don't get the job. You can't pay me back. Do you *still* think I should give you the money?"
That's unlikely to happen, but now you can talk about *just* the argument that you owe this dude money because you have more, without having him constantly try to jump back to the job thing.
Of course, this assumes good faith and a willingness to actually explore the argument together. In this particular case you'd be better served by just saying "no" and leaving. But in this blog's community, there is significant interest in getting to the bottom of why we hold certain beliefs, and if those beliefs are wrong, changing them.
Scott wants to know the answer to a specific question: "There is an argument that you are only responsible for saving people you naturally encounter in day-to-day life. Is it wrong to structure your life in such a way that you don't naturally encounter people in urgent need? Do you have a duty to save people you choose not to be in proximity to?" He's well aware that someone else might save them, that the situation would likely be resolved without your influence, and that there are other considerations. He's trying to force you to set those considerations aside for the time being so you can focus on establishing your views on that one question in particular.
> But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
Again my moral intuition straightforwardly disagrees with something! It says that not rescuing the kid in the hometown afterward is very excusable. I wonder why, though?
> I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out.
That feels like it resonates with my intuition, except my intuition *also* considers the kid in the hometown to be part of the same chain. Maybe by having done so much thankless positive moral work in the past, you've accumulated a massive credit that diminishes any moral necessity for you to take active steps like that in the future.
I notice if I swap the locations, so that it's going into the woods that results in seeing one drowning child while being in the city results in seeing them every day, this feels different—and it also feels closer to real-world situations that immediately come to mind. Maybe my mental image assumes the city is more densely populated? The more people there are who could help, the less each one is obligated to. Bystander effect is bad only when it doesn't come out to having at least a sufficient number of responders for the tradeoff to work out (though the usual presentation of bystander effect implies that it doesn't, so assuming that's true, applying the counterbias is still morally good). I bet there's something in here about comparing how many of these situations one agent can *reasonably expect to encounter* with how many that agent can handle before it reaches a certain burden threshold, then also dividing by the number of agents available in some way. This seems to extend partially across counterfactuals; being by chance the only person there at the time in the city feels different from being by chance the only person there at the time in the forest.
Or maybe it's because the drowning kids in the forest part of the water come *from* the city in the first place that affects it? Aha—that seems to make a larger difference! If I present the image of the protagonist moving from the forest to a *different*, more “normal” city, and *then* failing to rescue a random drowning child, it seems much worse than the original situation, though still not as bad as if the exceptional situation were being presented to the person for the first time, probably due to the credit in the meantime in some way over-discharging their responsibility. But if I assume the second city is structurally and socially indistinguishable from the first one, only with different individuals and with its stream of drowning kids passing by a different cabin that the protagonist never goes to, then it stops being so different again. So it's not due to the entanglement as such.
Maybe if the people in the city are already in an equilibrium where they aren't stopping the outflow of drowning kids, then it's supererogatory to climb too far above the average and compromise the agent's ability to engage in normal resource allocation (including competition) with the other people in the city—if I remove the important business meeting and change it to going out for drinks with a friend, not doing the rescue feels much worse than before, and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar, then the difference disappears again. This feels independent of the intention structure behind the city not saving the stream of drowning kids to me. If the city people are using those resources for something better, the protagonist should probably join in; if the city people are squandering their resources, the protagonist is not obliged to unique levels of self-sacrifice, though it would be morally good to try to convince the city people to do things differently.
Of course, possibly my moral intuition just treats the rescues as more supererogatory than most people's treat them as to begin with, too…
> and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar,
Bring the rescued kid along with you to the bar, hand 'em to the bouncer saying something like "she's your problem now," then tell that semi-estranged friend that if they don't believe your excuse for being an hour late, and covered in kelp, they can ask said bouncer.
Make a rule that the children you rescue as they pass by the cabin have to help you rescue future children who pass by. After rescuing a few kids, you've got a whole team who can rescue later kids without your help.
Scott, and I say this with love, has lost the thread here.
Like the point of the thought experiment was to draw attention to the parallels between potential actions, their costs and their benefit. These examples seem like they are meant to precisely deconstruct those parallels to identify quantum morality and bootstrap a new utilitarianism. It's putting way too much significance on a very particular thought experiment.
But even taken on its face, the answer to wthe apparent contradiction is obvious, right? Why the cost of a ruined suit feels worth a kids life to most people, but donating the same amount of money to save a life via charity is unappealing. It's not that lifes value is morally contingent on distance, or future discounting or causality or any of that. It's that when you save a drowning kid, you get a HUGE benefit as you are now the guy who saved a drowning kid. The costs might be the same, but the benefits are not remotely equal. I guarantee I get my suit money back in free drinks at a bar, and a maybe a GoFundMe , probably before the day is over.
And even if you want to cut the obvious social benefits out of the picture, self perception matters. Personally saving a kid is tangible and rewarding. Donating money to a charity is always undercut by doubts. Such as "is this money actually making a difference?” And ”why am I morally obligated to take on a financial burden that has been empirically rejected by the majority of the population?"
Because saving a drowning child is assumed to reveal something about the rescuer's moral character, while bragging about charity is viewed as performative. The former might be dubious, but the latter is usually correct.
Alternatively: because mentioning "one can save a child through charity" is an implicit moral attack on those who not given charity, whereas saving a drowning child is not such an attack because few of us will ever encounter a drowning child (and most people probably think they would save the drowning child if they ever encountered one).
Something that gets missed is that saving a drowning child is "heroic." Why is it heroic? Because even though most people say they would do it, in practice they don't. The hero takes action to gain social status. In the case of drowning children floating by a cabin, there's no heroism, since the person rescuing them consistently is now engaged in a hobby instead of a single act of will.
Also, people do move away to places like Martha's Vineyard for exactly this reason, to avoid the plebs complaining about them.
Interesting but these are all similar to the “all Cretans are liars; I’m a Cretan” self-reference trap (paradox).
Insert one word “predict”, as in “do you predict that you…” and the trap is closed because it clarifies that this is an endless regression paradox at “heart” IMHO.
All future statements are predictions, and it is self-referential. The gives away is the reference to the “37th child…”
There is no “moral choice” in infinite moral regress as there is no truth value to the statement “this statement is false”
Language is a semantic system which is incomplete under Gödel’s Incompleteness Theorems.
Angelic superintelligences are like Chuck Schumer’s “the Baileys,” a mouthpiece for moral ventriloquism.
We are here on Earth by accident. Nothing happens when you die. We should take personal responsibility for our own moral sense. Share yours if that seems like the right thing to do, and express it in the way that seems right.
There won’t ever be an authority or conclusive argument, because we’re an assembly of talking atoms hallucinating a human experience. That is beautiful and strange. I think helping other sentient beings, and helping them at mass scale, dispassionately and in ways that seem plausibly highly effective, is lovely.
If faced with a drowning child and you are the only person who can help it, you have a 100% obligation to save it. I'll leave open the question of what exactly a 100% obligation means, but it's obviously pretty strong.
If there's there's a drowning child and you're one of five people (all equally qualified in lifesaving) standing around, you have a 20% obligation to save the child.
If there's a child who's going to die of malaria and you're one of eight billion people on the planet who could save it, then you have a one over eight billion obligation to do so.
If there's millions of children going to die of something, and you're one of billions of people on the planet who can do something about it then you have something on the order of a 0.1% obligation to do something. That's not nothing, but it's a lot weaker than the obligation where there was a 1-1 dying child to capable adult ratio.
If there are 5 people, and for some reason the other 4 are jerks who refuse to help drowning children, is your obligation now 100% because your choice is guaranteed to decide the child's fate?
If 3 of them are jerks, is your obligation 50%? And can you make it 0% by becoming a jerk yourself, so that the remaining non-jerk now has 100% responsibility? Or is obligation not additive in this way, and if not, does that suggest a more complicated calculation is necessary?
Thousands of fish have washed up on the beach. A monk is walking along the beach throwing fish into the ocean. A villager laughs at him — “You'll never save all the fish!”.
The monk answers as he throws another fish, “No, but this fish really appreciates it.”
Same reason police catch speeding drivers. When one asks 'Why me? Everyone else is speeding also!' the response is 'when I go fishing, I don't expect to catch EVERY fish!'.
The Alice and Bob thought experiment feels rather strongly off to me. Yes, certainly a person who fails to do wrong due to lack of opportunity might be a worse person than another who actually does wrong. That seems to be a fine way to summarize moral luck, and we would expect eternal judgment to control for moral luck. So far so good. You then conclude that, therefore, moral luck is fake and the worse person must have actually done worse things.
I'm confused by the absence of positives to moral obligations. If someone fulfilled a moral obligation, I love and trust them more, my estimation of something like their honor or their heroism goes up. If someone who was not obliged to does the exact same thing, I "only" like them more, I don't get the same sense that they're worthy of my trust.
It's trite, but I think "moral challenges" is closer to how I feel about these dreadful scenarios. I want to be someone who handles his challenges well, to be heroic, this seems to me more primal and real than these framings of actions in these dreadful scenarios as attempts to avoid blame, in a way that I don't think reducing everything into a single dimension of heroic versus blameworthy can quite capture.
I largely agree -- I soften it down to invitations for stuff like this, because when it comes to helping strangers, it's not quite a challenge, as people avoid the question at no cost to themselves. But there is an invitation: care for the least among you. Some people see it as a beautiful thing to go to, and some do not. I largely chalk the latter up to their unbelievably poor taste hahaha
One of the things that disturbs me is: good intentions are often counterproductive. You mention Africa, and that is a whole basket of great examples.
Feed Africa! Save starving children! Great intentions, only: among other deleterious effects, free food drives local farmers out of business, which leads to more starving children, which leads to more well-meant (but counterproductive) aid.
Medical aid! Reduce infant mortality! Only, without cultural change and reduced birth rates, the population explodes, driving warfare, resulting in millions of deaths.
Far too much aid focuses on the short-term benefits, without considering the long-term consequences. Why? Because, cynically, a lot of aid work is really about making those "helping" feel benevolent, rather than actually making long-term, sustainable improvements.
In practice, reducing infant mortality leads rather directly to decreases in overall fertility; parents who can count on their children surviving to grow up don't have to have "extra" children to make sure that enough of them survive to become adults.
So give to the aid which has good long-term effects. As you are probably aware, most of the effort of the "effective altruist" movement is directed at figuring out which interventions are in fact helpful overall and which not. Follow them.
III. Alice and Bob: If Bob saves more sick people, he'll get exploited by needy "friends" into not-being-rich.
Whatever moral theory you use, it needs to be sustainable; any donation is a door open for unscrupulous agents to cause or simulate suffering in order to extract aid from you.
Not that Bob should do less - while it certainly would be coherent, it doesn't sound very moral - but I think the optimal point between saving no one and saving everyone is heavily skewed downward from maximal efficiency of lives saved per dollar because of this, even when you're altruistic.
For Alice, that applies too, though in a different ways: while there might be less expectations by scammers for her to spend everything if she spends a little, there will still be such expectations in her own opinion; this is common enough that many posts on EAF warn against burning out.
In the classic example you are the only passerby where the child drowning so you are the only one who can save them so according to the Copenhagen Interpretation of Ethics it's your duty to do so.
If we change the situation and on the river bank you find the child's parents and their uncles, lifeguards, firefighters, the local council and the mayor himself (they all have expensive manors overlooking the river) what duty do you have to save the child? According to the Copenhagen Interpretation of Ethics it's their duty to do so because they are first by the river but it's also their legal responsibility to care for the child. In the order of duty of saving the child you are at the end of the list.
Given the blatant purpose for which the drowning child thought experiment was created in the first place I propose the White Saviour Corollary to the Copenhagen Interpretation of Ethics:
"The Copenhagen Interpretation of Ethics says that if you're a westerner when you observe or interact with a Third World problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more."
I hate this just like I hate trolley problem questions. They all have the same stupid property that you’re asked to exercise moral intuitions in a situation so impossibly contrived that the assumptions you must accept unquestioningly could only hold in a world controlled by an evil experimenter who is perfectly capable of saving all the people he wants you to make choices between. The obvious timeless meta-strategy is to refuse to cooperate with all such experiments even hypothetically.
The SHORT VERSION of ALL these controversies is “many people suffer in poverty and medical risk because they live in bad societies but it is always and only the responsibility of better off people in good societies to help them within the system by donating as much as they can bear without ever challenging the system itself”.
In this example, you are not supposed to question the assumption that no one would save 24 children’s’ lives a day at the trivial cost of $50 per life saved by hiring $50/hr lifeguards to work shifts, that somehow no collective action is possible either to raise money for such a spectacularly good cost/benefit ratio charity or to get the relevant public maintenance department a slight budget increase to fix the drowning hazard, and only isolated random unlucky individuals have any power to rescue these children and must do so without ever making the public aware of their absurd plight and trying to fix things that way.
If you want to point to some current country with a shitty government which blocks or steals efforts to ameliorate its citizens’ suffering, don’t give me a f***ing trolley problem to show me I must donate in clever ways to get past the government, save your efforts of persuasion for the bad government, or for good governments elsewhere or the UN to intervene.
Yes. As I said in my comment, a lot of "aid" is there to make the contributors feel benevolent.
There is simply no point to providing aid to cultures and countries where it cannot have positive, long-term effects, and likely just supports the existing, malicious system.
Yes, this is a totally fine argument, but has already conceded that if that were not the case, you would have an obligation to provide aid!
And now we can argue whether the aid in question actually has the deleterious properties you assert.
Or you can say, not so fast! I think the aid is useless, but even if it weren't, I'd still have no obligation to provide it!, and then we can focus on that claim by isolating it from practical considerations, in the hopes that if we can resolve this narrow claim, we can return to discussing the actual effectiveness.
That's the point of the thought experiments, to resolve whether your objection is actually 1. aid is ineffective or 2. even if aid could be effective I'd have no reason to give, or both, or some third thing.
But by isolating which claim we're debating, we can stay focused and not jump around between 1 and 2 depending on how you feel the argument is going.
If your objection is truly 1., and that's why you find these hypotheticals useless, then great! But you better be prepared to do battle on the grounds of "is this aid effective" and not retreat to, "why am I giving aid to Africa when my neighbour..." as many others do.
Again, the point of hypotheticals is not to be realistic. It's to remove factors from consideration to laser focus on a single question. Generally in argument, it's easy to hop around in your logic chain or among the multiple reasons you believe something. This means you'll never change how you think because if you start "losing" on one point you don't concede "okay that one is bad, I will no longer use it" but instead hop to other considerations. These "contrived" situations are meant to keep you from hopping from "is it right to kill a person to save others?" to "Okay I think I can get out of this choice completely by doing X,Y,Z." Whether X, Y, and Z turn out to be flawed or not, you still never had to answer that question, which means that you still don't have clarity on your beliefs and under what circumstances you would change them.
Of course it seems like most people manage it even within the hypothetical, so opposed they are to systematically thinking through their beliefs one point at a time.
Certainty must be a factor here. Both the certainty of help being needed (not from general knowledge of world-poverty, but direct perceptional evidence) and the certainty of the help reaching its intended target make the responsibility more real and vice versa.
I think you're taking too seriously people's rationalizations as reasoning. The transparently correct moral reasoning is much more likely to be rooted in relationships -- you ought to love and care for drowning kids, so you simply *would*, if you have the right relationship with them, simply save every one you can. Which means, yes, in the modern world, living with minimal expenses and donating literally everything you earn to EA charities is a solid choice, one of the things you'd just naturally choose to do.
I believe role ethics (as seen in Stoicism - the concept is even more central in Confucianism, but I am less well read on that philosophy) offers a good descriptive account(*), more so than Copenhagen and declining marginal utility. The idea is that a person has moral obligations depending on what roles they play in the society. Some of those roles we choose (like a vocation, which may present obligations like a captain of a ship in the case of emergency seeing to the rescue of all passengers and crew even at risk of going down with the ship, or parenthood with equally strong obligations to our children), some of them are thrusts upon us by circumstances (like duty to rescue in the capacity of a fellow citizen in a position to do so), and some come down to us as part of being a human being living in the cosmopolis (helping those in need even if they are far off).
Now, there are and can be situations where virtues and obligations pull us in different directions and resolving the conflict can be a nontrivial task (indeed, the Stoics deny the existence of a sage - someone with perfect moral knowledge), but as a practical matter it is not unreasonable to establish a rough hierarchy where your obligations towards fellow citizens outweigh those towards other members of the cosmopolis, which is why you save the drowning child. That however doesn't mean those other obligations disappear: Alice, after doing her duty as a daughter, a neighbor, a member in her work community, perhaps as a mother, etc, would in fact have these obligations. Stoic perspective isn't prescriptive in the sense of saying outright "10% of her net income", but chances are a virtuous person in position of such prosperity would likely (but not necessarily: personal luxury clearly wouldn't be something a virtuous person would choose, but she might e.g. feel to be in an unique position to advance democracy domestically, and use her good fortunes to that cause instead) be moved to act.
* And I dare say prescriptive, insofar as they help make sense of our moral intuitions, which I'm inclined to treat as foundational.
This seems like a sane take. It accounts for both our intuition that Alice really ought to do her duty to her daughter, neighbor, and work community before engaging in telescopic charity, but it also accounts for the intuition that we really ought to help drowning children sometimes, even when they are very far away. It also accounts for the case where, Alice, living in the cabin is called away from saving children by a more moderate need of her own child--the baby's got colic, and needs tending--and we don't find Alice to be totally reprehensible.
The question I always have though about role ethics or Confucian derived ideas of li is how to work out what, in this increasingly atomized cosmopolis in which we all live, I--as an unattached, single person--my roles or li are. It also seems like there is some tension between my intuition that I ought to be pretty free: I believe in divorce, in allowing minors to be emancipated, in--less extremely--moving away from one's hometown community, breaking up with friends, business partners, employers. Those freedoms seem a little bit in tension with an ethics derived from my roles.
The mechanism by which new social roles are constructed is being pushed far beyond its historical peak throughput rate, and facing corresponding clogs and breakdowns.
The argument for the copinhagin interpretation would be that instead of optimizing for getting into heaven you should optomize for "being an empathetic person"
The person who sees dozens of drowning children every day and dozent save them becomes desensitized to drowning people and losses their capacity for empathy.
The person who lives far away from the drowning people doesn't.
That's unfair moral luck, but that's the truth.
I will always remember when I saw a rabbi I knew giving a dollar to a bigger at a wedding. I asked the rabbi why he did that. Doesn't he already give 10percent of his money to charity?
He said yes and indeed the single dollar wouldn't help the bigger that much. On the other hand giving nothing will train him to become un-empathetic. He quotes a biblical verse saying something like "when someone needs money you should be unable to refuse" or something like that (לא תוכל להתעלם) or something...not sure the exact context.
Of course he still gave the 10 percent as well. He didn't think you could completly remove moral obligations by not touching them. Just that the slight additional obligation you have twords situations you touch versus those you don't relates to self empathy training.
Seems like you'd train empathy even more effectively the more you help. The 10% of it makes little sense in comparison to, if you see a person is in need you shouldn't be able to refuse. Isn't donating, and then having a great abundance left and deciding not to continue helping, a form of refusal?
It is a form of refusal but it psychologicly doesn't feel like a form of refusal as saliently. So in terms of training your own psychology it doesn't have the same effect.
>what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it?
Nobody has found one as of yet, and not for the lack of trying. I'm pretty sure that there isn't a universally appealing one to be found, even if outright psychopaths aren't included in "us".
As far as I can tell, the moral principle that the average person operates on is "do what's expected from you to maintain respectability", with everything beyond that being strictly supererogatory. This is of course a meta-rule, with actual standards greatly differing throughout time and space, but I doubt that you'll do better at this level of generality.
Never thought about that one before, but it feels natural instantly.
1.) it will help to more evenly distribute the "giving effort" among those that can help (in some situation, does not need to be the same situation)
2.) in real life the odds of success, the utility towards the saved one and the cost for the saving one - all have some uncertainty. Having a degressive motivation to help leads to a more stable equilibrium between "should have helped but did not" and "irrationally exhausts herself and thereby risks the tribe's success"
3.) limits the personal darwian disadvantage agaist freeriders in the own tribe (even if all drowning children are from a different tribe).
We are all in the cabin. The cabin is global. Despite the billions given in foreign aid and charity for decades, there are still people living in poverty and dying by preventable disease etc all over the world. And any given money is likely to go to corrupt state heads. And regardless, the only countries with a booming population are all in africa, despite the low quality of life, and we hardly need more Africans surviving.
"billions" is a superficially big number, but a tiny fraction of world GDP and collectively dwarfed by what countries spend on weapons to kill and maim other human beings
The global incidence of extreme poverty is in fact going steadily down, which is what we'd expect to see if those charitable interventions were working.
That is overwhelmingly due to economic growth and tech advances. For example due to Fritz haber there are billions more people alive than there would otherwise be. Individual charitable aid is so tiny it wouldnt even fix poverty within the nations of those that give it let alone fix the world.
Of course that doesnt mean we shouldn't all give more. But what is optimal? If we had that much more focus on charity we wouldn't have had such focus on the growth that allowed for the people to exist who need that charity.
Tech and charity aren't mutually exclusive causal factors. Pretty sure we didn't get modern insecticide-infused bed nets by banging rocks together, or having some enlightened saint pray for an otherworldly being to restock the warehouse. Norman Borlaug's dwarf wheat was paid for partly by the Rockefeller Foundation and is right up there with the Haber process in terms of ending food scarcity. Lots of EA charities are evaluated in terms of the economic growth they unlock.
The city is an amazing place to live in, with its high tech infrastructure and endless energy, but if you aren't living there, you might not have heard in your childhood that the city is directly powered by drowning children. Every child you rescue from the river, lost to the system, reduces life satisfaction of millions of citizens by about 1%. The children need to drown for the city to be as great as it is.
You're allowed to leave the city after your schoolteacher takes you all on a field trip to the hydro station, but it's only allowed until you turn 18, and if you walk away you might slip and fall into the river.
I have updated towards the following view on the drowning child thought experiment:
1. The current social norms are that you have to save the drowning child but don't have to save children in Africa
2. The current social norms are wrong in the sense that the idea moral ethics disagree with them, but not in the direction you would think. According to ideal moral ethics, the solution isn't that you have to save children in Africa. It's that you don't have to save the drowning child either.
2. Obviously, I would save a drowning child. But not because I have to due to the ideal moral ethics. It's because of a combination of my personal preferences together with the currently existing social norms.
It’s a big problem because society runs off people doing these nice one offs but then other people demand that you have to universalize it and then they stop doing nice things.
I basically agree with the conclusion (except for the "not letting capitalism collapse" part, of course, it will collapse anyway, as is in its nature, the point is to dismantle it in a way that does not tear society and its ethical foundations apart).
But the way you arrived at it was... wild. I was (figuratively) screaming at my screen halfway through the essay that if your hypothetical scenario for moral obligation involves an immensely potent institution sending dying people specifically your way, the obvious way to deflect moral blame, and the only natural reaction honestly, is asking why it doesn't simply use but a tiny fraction of its capabilities to save them itself instead.
Basically, there's a crucial difference between a chaotic, one-off event where you're unpredictably finding yourself as the only (or one of the only) person capable of intervening, and a systemic, predictable event where the system is either:
- alien and incomprehensible - e.g. you can save a drowning rabbit, but have no way of averting the way the nature works. Here, Copenhagen stands, you have no moral responsibility to act, but once you act, you should accept the follow up of caring after the rabbit you've taken out of the circle-of-life loop.
- local and familiar - in which case all notions of individual responsibility are just distractions, the only meaningful course of action is pushing for systemic change.
The most orthodox Marxist crisis theory, based on the tendency of the rate of profit to fall, depends on a labor of theory of value, which seems squiffy.
A revolution of the workers brought on by an intolerable intensification of their exploitation seems less likely in consumer capitalism, where the leisure of the workers, their consumption is an important part of the system.
I'm not opposed to the idea, but I guess I don't necessarily believe that the inherent structural contradictions of capital will lead to its collapse inevitably.
I personally have a different idea of why capitalism will collapse, which goes Capitalism > AI > ASI > Dominance by beings capable of central planning.
Interesting! That's something I've thought about as well, but I see that as a relatively hopeful outcome, a possibility, not something that for capitalism is "in its nature."
I suspect definitional and/or scope mismatch here. To clarify - I am (arguably) not a Marxist, more specifically - not a stereotypical Marxist operating on a grand theory of history where capitalism is a discrete stage of civilizational progress, to be replaced with a more advanced one. I am not saying that people will stop trading or [whatever you think capitalism means]. I am saying that societies based on capitalist principles are bound to experience a collapse - which alone, not saying much, all societies in history eventually collapse, due to more general social dynamics - and, more strongly, that in their specific case capitalism is the very mechanism bringing about this collapse, and as such, not worth trying to preserve.
(In a vastly simplified way, how this plays out is - wealth, and thus power, concentrates in fewer and fewer hands, eventually creating a financier class at top of the society. Because it's measured in money, the more concentrated wealth is, the more decoupled it is from [whatever we want from productive economy]. This eventually makes various destructive (war) or unproductive (financial instruments) "investments" more profitable than expanding productive capital, this makes economy stall, this makes the ruling class push everyone else into debt to maintain their profit levels, this further immiserates the population and bankrupts the state, causing a collapse, and this makes the financiers pack up their money and escape for greener pastures, while the regular citizens of once-wealthy society are left cleaning up their mess.)
(It happened several times in history, and we can observe it happening once again right now, in real time, in the so-called first world economic block, with US as the epicenter.)
Hm, interesting! So you don't have a stadial theory of history, but you believe that any society is eventually going to collapse, which in capitalism will come from too concentrated wealth becoming separated from what's really productive in an economy.
You gave one optimistic view of how AI could disrupt this, but couldn't it be possible that AI (-->ASI, as you put it) allows the financier class to keep consolidating forever? If they have something that makes more and more of the stuff they want, automates more and more of their economy: can't we just end up being cut out of the picture, with not much of a mess to clean up in the first place?
I agree with most of that, but I think it's solvable without a collapse. There are two different things financiers are doing, which the current system (and to some extent even the financiers themselves) mostly fails to distinguish: innovation, and extraction. Making the overall pie bigger vs. capturing a bigger piece of a fixed pie for yourself. Building capital vs. buying land to collect rent.
A Georgist land value tax skips straight to the end of that inevitable "extraction concentrates wealth" progression, compiles the resulting power into something publicly accountable. UBI keeps the local pastures green. Financiers who want more profit have to accept the risks of innovation, deliver products that some other successful business and/or the average UBI recipient is willing to pay for.
Georgist LVT in a modern society also needs to tax other sources of rent extraction like the network effects that keep Facebook afloat, and the theory there is not nearly as clear unfortunately.
I live in a third world country with first world money. My wife runs a small animal rescue out of our property and sponsors animal neutering around the city. The country is also very poor and most humans here are in what most westerners would consider a very bad situation. I spend most of my time and money on research to cure aging since I believe aging is the number one cause of pain and suffering in the world and I believe curing it is within reach.
My wife had to almost entirely stop her animal rescue efforts because it got to the point where it was consuming all of her, and much of my, time to the point where it was significantly interfering with our lives. She has friends in the rescue community who have completely ruined their lives over it, living in squalid conditions because their money all goes to feeding an army of dogs and cats. She also used to volunteer to help homeless children, but that similarly was consuming her life.
Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming. You can very easily ruin your own life trying to save everyone one at a time, and from an optimization standpoint there are way bigger bangs for your bucks.
Funny? Story (and what prompted me to comment):
About 10 minutes after I finished reading this article I went out to walk around my property, which I haven't done in about a month or so. I got about 20 steps out when I ran into a terrified abandoned kitten. Since I have over-active mirror neurons I was essentially forced to pick it up and rescue it before returning to look for its mother or any siblings in the area. This is my punishment for leaving my walled garden that blocks out the sound of screaming kittens and starving children.
If I owned the property next to the child river I know exactly what I would do. I would build a wall and soundproof my house so I could ignore the problem, knowing that there are better uses of my time and money than completely ruining my life saving 24 children a day. I would strive to avoid going outside, but when I had to I would almost certainly rescue another child before hurriedly returning to my walled garden.
I don't have a solution to the moral dilemma, only a solution to my mirror neurons that make me do irrational things. I suspect that most humans with functioning mirror neurons are not applying some complicated moral philosophy, they are just responding the way they were evolved to respond when they witness pain and suffering of others. Now that we can witness and impact things further away, these mirror neurons can easily be overwhelmed and cease to function as a good solution on their own.
That’s why big social problems should be left to organizations, not individuals. If a social worker for a non profit gets overwhelmed, they can quit their job and go back home. No one will think less of themselves. But if you have to live with it, it becomes more difficult.
Organized social programs often get co-opted by Moloch, doing more harm than good. I don't have a solution to this, but I am unconvinced that organizations should be assumed to inherently do better than free will and law of large numbers.
Instead their net effect may be to breed cycles of dependency that rob populations of agency and personal responsibility and prevent progress at scale.
Clearly some cultural philosophies do better than others.
You seem to have a weird idea of what Moloch is. Moloch isn't just "everything bad", Moloch is when a Nash equilibrium of independent actors is an ethical or welfare race to the bottom. It's inherently harder to avoid a bad Nash equilibrium the more players there are in the game.
The original definition of Moloch, per Wikipedia is:
The fire god of the Ammonites in Canaan, to whom human sacrifices were offered; Molech. Also applied figuratively
This is almost precisely the example I give of a centralized power structure that destroys lives at scale.
I appreciate your definition, but these two things are not the same as far as I understand it, and yes, I use the Wikipedia version of Moloch as one of my mental models.
Thank you, and yes, I read that one a while ago but lost track of the specifics.
For the sake of argument, let's call my 'Moloch' Fred. Given that I have Fred here, does it make the point more worth considering? If we have both Fred and Moloch, my thesis is that my concerns as stated are still valid.
Why do you think that there isn't a equilibrium here?
* People seeking power and money are attracted to running/operating organizations with lots of power/money.
* People wanting to do good aren't as motivated to run large organizations with lots of power/money (it is miserable work).
* Any large social endeavor that has sufficient power or money to enact meaningful change will eventually be dominated by those seeking power and money, rather than those seeking to do good?
* Eventually any large social endeavor will no longer do good.
Note: The above is just a high level hand-wavy illustration, but I am not convinced that we cannot rule out Moloch here.
Large social endeavors do sometimes (often?) end up having all or the majority of their power and money skimmed off by insiders for their own use. However, this doesn't seem to happen 100% of the time. I mean, if principal-agent problems were this bad, corporations wouldn't function at all either and the economy would be reduced to humans acting as individual agents. (And corporations do also fall into ruin by this mechanism.) So I don't think this makes the argument that the optimum amount of non-market interventions is zero and we should just accept that Moloch wins everything always.
Are there examples of powerful/rich charitable organizations not running into this problem over the long term?
I can definitely believe that this can be delayed for quite a while if you have a strong ethos aligned leader, but eventually they need to be replaced and each time replacement happens you may not get lucky with the new pick. This would suggest that while Moloch will eventually win, there is a period of time between now and that inevitability where things can be good/useful. Perhaps there is value in accepting an inevitable fate if there are positive things that come of it along the way? Or perhaps we can try to find ways to shut things down once Moloch shows up?
> Organized social programs often get co-opted by Moloch, doing more harm than good.
That's a common meme, but I don't think it's always, or even often the case. I've personally worked with a organized social program of massive size, funded by a large network of individual donors, doing amazingly good work over decades.
> Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming.
huh. It's like Siddhartha Gautama's origin story, but in reverse.
(I'm not trying to be sardonic or condescending. It's just an interesting observation about how one man's modus ponens is another man's modus tollens.)
I wasn't aware of that origin story, but you are right it is exactly the opposite of my solution! Perhaps there is some optimal amount of exposure to pain and suffering one needs in order to take appropriate action to address it while not also being debilitated by it?
I have my own strong opinions on the Drowning Child experiment, though I've withheld them, so far. Because about a month ago, I basically said I'd tackle it on my own substack, and then procrastinated since I'm such a lazy layabout. Nonetheless, I'm confident that I've got it figured out, in a way that solves several other ethical questions and adds up to normality. At the highest level, it's tied together by expectation management. Which dovetails with Friston and Bayes Theorem. But it's a lot to explain, and a bit woo.
For now, I'll just say that ethics is basically social-engineering. "Actual" engineering disciplines (e.g. Civil Engineering) recognize that reality imposes hard constraints to what you can reasonably accomplish with the resources you have. If you wanna launch yourself to the moon with nothing but a coke bottle and a bag of mentos, you're not gonna make it. Likewise, any ethical system that says "donate literally 100% of your money to charity, such that your own person dies of starvation in a matter of weeks" is not sustainable. It's not sustainable individually, and it's not sustainable en mass. You have to consider how things interact on a local level. Which is yet another reason why Utilitiarism Is Bananas (TM). I.e. part of the appeal of Utilitarianism is the abstracting/universalizing/agglomerating instinct to shove all the particularities of a scenario conveniently under the rug. I.e. spherical cow syndrome [0].
"Let's save the shrimp! And build a dyson sphere, while we're at it!"
How?
"details... details... "
Sure, if you have a utility function that values the well-being of others, then perhaps you want to give a portion of your resources to charity. But you have to balance it with keeping your own system running. Both physically, and psychologically. And the burden you take upon yourself should, at most, equal the amount of stress you can handle. Which varies from person to person. E.g. commenter Dan Megill mentions [1] that the charity-doctors who denied themselves coke/chocolate/internet didn't have the mental fortitude to stay in Africa. To me, what this indicates is that Peter Singer psy-oped them into taking on more suffering than they could personally handle, and they buckled. Materials Science 101: different materials have different stress-strain curves [2]. Materials are not all created equal.
In sum, there's no objectively optimal amount of exposure. It entirely depends on what you can handle, and what you're willing to handle. I.e. it's subjective. I.e. the weight of the cross you bear is between you and your god.
I thought this was mostly not about wanting to have your life ruined / being exploited? Which are closely related
If I see a drowning kid saving it would be inconvenient but it's not going to ruin my life. And this is partially because there is no Big Seamstress pushing kids into water to ruin people clothes and send their stock to the moon.
If a Megacity is skimping on lifeguard and creating a situation where I can save those kids (and also somehow there is no other person upstream or downstream willing to help them?) saving all the kids would ruin my life (I can't even sleep properly). And related to that is that the city is saving relatively little money (cost of 24/7 lifeguard so if you paid them SWE salaries maybe 2 M$/year) and getting a rather huge benefit. If they value life at 1M$ they get from me 24*365*1M$ = 8760M$ / year.
If a city spends millions to build a dam but they count on my free labor to extract billions per year from my unpaid labour then yeah they are kind of exploiting me.
With charities situation is trickier - if a charity is really saving lives at low cost then it would be great to donate to it (some amount, you probably don't want to ruin your life). But you're donating money so it's harder to verify that's actually happening. And people have obvious incentive (getting your money) to misrepresent the situation to you (so you should be more worried about your money actually being used in the way they claim to).
And people setting up a generic argument which, if accepted, would oblige you to potentially ruin your life (by giving away all your money) while potentially benefiting them (by directing money in their general direction) is extra suspicious.
I don't want to say that one should never give money to charity. I agree with what I think was the original premise of EA (find out what charities are effective and how effective exactly they are and what money you want to use charitably try to use effectively). But it's really hard!
I think most of the criticisms of these extreme life/death hypotheticals as teaching tools or thought experiments are valid, but I'll add another one I think is pretty important.
There never seems to be any scope for local or medium-scale collective action. It's always you, alone, with the power of life/death, or else Angels of Heaven making grand population-wide agreements. For example:
What if in the cabin by the river of children scenario, you found three "roommates" to live there with you (presumably all doing laptop work-from-home etc.) and you all did six-hour shifts as lifeguards, saving all the children? And why does it take a "lobbyist" to possibly get Omelas to do something about the drowning children problem? Ever see "Frankenstein"? You could pick up a drowned kid and walk to City Hall with her body, that might get some attention.
And in reality, that is how things usually improve in human society. Some local group takes the initiative to starting reducing harms and improving people's lives. Sometimes they grow and found Pennsylvania. Mostly they gain status and attention and can have their work helped by or taken up by governments (assuming any govt is not COMPLETELY corrupt.) Global-level co-ordination only happens in like The Silmarrilion -- here on earth see previous about TOTAL corruption.
BR
PS --- The second-most cynical take here would be to get an EPA ruling classifying the stream of children as an illegal "discharge" into a public waterway, getting an injunction against the city for polluting the river with dead kids, which honestly at the rate of one/hr would be some VERY significant contamation indeed, even if you had some horrible mutant species of carrion-eating beavers downstream building their dams out of small human bones.
A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you with them? This quickly become completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown.
"A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you with them? This quickly become completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown."
Even more cynical take: sell 'em to child sex/labour trafficking gangs. The parents obviously don't care, since they all continue to live in a city that allows children to fall into waterways and get swept downriver to drown unless a random stranger saves them. The megacity even more obviously doesn't care about what happens to its minor citizens. The problem is set up such that if you, personally, singlehandedly don't intervene the kids will drown. So clearly nobody is looking for them or dealing with them or trying to prevent the drowning. We don't even know if the bodies are collected for burial or if that too is left to whoever is downstream when the corpses wash ashore.
So who is going to miss one more (or 168 more) 'dead' kids? Profit!
That was my immediate thought, but my comment was already too long. Maybe the Clinton Foundation could send a van a couple times a day to scoop up this free resource. Or establish a Jonestown style mini-nation someplace and train your infinite steam of children who owe your their lives to be an invincible army. So many possibilities.
Exactly! You now have this never-ending (it would seem) source of free labour literally floating down the river to you. For the megacity, this is only 999th on the list of "we're in really deep doo-doo now", so it could even be argued that you are giving the children a better life (how bad must life in the megacity be, if there are 998 *worse* things than drowning kids on the hour every hour all year round?) no matter where you send them or what you do with them.
Yes, an army of infinitely-replenishing children would be great, kind of like Hector (?) and the Dragon's Teeth in Greek mythology. But for maximum Dark Lord chaos points, I think my own army of mutant carrion-eating beavers would slightly edge it out. Add in some weaponized raccoons and it's "Halo 4: The River Strikes Back"
Split the difference: hand the rescued kids over to your cadre of mad scientists (you *do* have a cadre of mad scientists, don't you?) as experimental material to help create the next generation of mutant carrion-eating beavers and weaponized raccoons! After all, the carrion for the beavers has to come from somewhere, right, and where better to ensure a steady supply than the spare parts from the lab experiments?
Train the kids to train the vulture-beavers to construct the weir to simplify the rescue process, then treat the overall situation's rank among that megacity's problems like a scoreboard for your raiders to climb.
>People love trying to find holes in the drowning child thought experiment. [..] So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.
Alternative way of framing this article:
> People love trying to find holes in the drowning child thought experiment (DCTE) counter-arguments. So allow me to present even more contrived scenarios that are not the DCTE, and apply the DCTE counter-arguments to those instead and see how they fail.
My takeaway is that you're nerd-sniping yourself by employing ever more sophisticated arguments to a minimally sophisticated "I'll know it when I see it" approach to life in general and the DCTE in particular that most people have.
My intuition goes towards : accident vs systemic issues.
In IT, we have a saying : "your lack of planning is not my emergency".
Similar vibes here. Why is there a drowning child in front of me ? Is it an unfortunate accident, or the predictable and logical consequences of a really poor system ? I feel absolutely no responsibility for the second. In this example :
> Every time a child falls into any of the megacity’s streams, lakes, or rivers, they get swept away and flow past your cabin; there’s a new drowning child every hour or so.
Not my problem. Fix. Your. Damn. System. Or don’t — at this point I don’t care.
The point of the hypothetical is that this isn't really a *source* of drowning children, it's just that all the children that would normally drown in that big city end up in one place.
Still not my problem: what if my cabin is situated such that before the children coming down river reach it, they all get swallowed up by a sinkhole? So I don't even have any drowning children to save, but they're still drowning. The city is the source of the drowning children, let them sort out why one child every hour falls into their damn rivers and lakes.
The absence in general of a Duty to Rescue stems from the principle that one shouldn't be obliged to put oneself at risk on behalf of a stranger to whom one has no allegiance or duty of care, and that might not be the same risk as the stranger's predicament (assuming one didn't cause the latter).
Even with the example of the kid drowning in a puddle, who is to say there isn't a bare mains electrical cable under the water that would electrocute a rescuer as soon as they touched the water or the child?
There's also the snow shovelling example, in which if you public-spiritedly clear the snow from the sidewalk adjoining your dwelling (a sort of anticipatory rescue) and a passer by slips on the patch you cleared then they can sue you for creating the potential hazard, which they could not had they slipped on the uncleared snow!
Or you could pull someone from a crashed car that was in imminent danger of catching fire or being rammed by another vehicle, but in the process break their dislocated neck so they ended up paralyzed for life, again risking a lawsuit.
I gotta be honest with you fam: all that such posts do is make me steel my heart and resolve to not rescue local drowning children either, in the interest of fairness. One man's modus ponens is another's modus tollens and all that.
What you're trying to do here is to erase the concept of supererogatory duty. It's inherently subjective and unquantifiable so every time you say "well, you don't have to do it, but objectively you should donate exactly 12.7% of your income to African orphans, but you don't have to do it," you're not fooling anyone, you just converted that opportunity for charity to ordinary duty.
So here's an alternative you have not even considered: I decide that I have a duty to rescue my own drowning children, I decide that I have a duty to rescue my neighbors' drowning children for reciprocal reasons (and rescuing any drowning child in sight is merely a heuristic in service of that goal), but rescuing African drowning children is entirely supererogatory, I might do it when and how I feel like it, but it's not my obligation that can be objectively quantified. This solves all your "river of death" problems without any mental gymnastics required.
What I'm getting at is, when someone proposes that from assumptions A, B, and C follows conclusion D, you can agree that it does logically follow, but disagree that D is factually true and instead reject some of the original assumptions.
So when someone proposes that I have a moral duty to save a drowning child in front of me, and that the life of a drowning child in Africa has an equivalent moral worth, I can disagree with their conclusion (that I must be miserable because I donate all my money to malaria nets and that still doesn't make a perceptible dent in African suffering) and declare that no, for my purposes the children are not fungible, and also that I don't have a duty to save the local child. What's going to happen if I don't, will Peter Singer put me into an ethical prison? Even usual police, if I were to look at them as the source of morality, would leave me alone in most jurisdictions, or all if I tell them that I don't swim very well.
Then someone might ask me, won't I feel terrible watching the child drown? Sure, *that's* why I'll try to save it, but I don't feel particularly terrible about knowing that thousands of children drown in Africa because I *don't see them*, and why would I try to rewire myself about that? Reaching https://en.wikipedia.org/wiki/Reflective_equilibrium goes both ways and nothing about the process suggests that the end result will be maximally altruistic. So I can choose to retain my normal human reactions to suffering that I can see and alleviate, but harden my heart against infinite suffering elsewhere.
Similarly, wouldn't I get ostracized by the people of my town for letting the child drown, because we have an understanding about saving each other's children? Sure, and that's another good reason to save the local child that doesn't generalize to saving African children because Africans won't help me with anything and my fellow townspeople won't be upset about me not helping Africans without expecting reciprocity.
Then Scott says, all right, but watch this, and adds a bunch of different epicycles, which he then invalidates with more convoluted thought experiments and replaces with further epicycles, but I still find the end result unsatisfactory.
The solution proposed here has a fatal flaw: Rawls' Veil of Ignorance doesn't actually exist. I understand that it would be very nice if it existed, it would let us ground utilitarian ethics pretty soundly, but unfortunately it's completely made up.
The solution in the post you linked, to donate 10% of your income to charity, is also kind of incomplete, because it still tries to make an utilitarian argument, but then suddenly forgets all its principles and says that it's OK to donate 10% because most people donate less, so you can just do that and sleep soundly. Why?
What I think is if not outright missing (upon rereading both posts) then at least not properly articulated is the distinction between ordinary duty and supererogatory duty, such as donating to charity. Ordinary duty, and I'm willing to walk back my objection and include saving a local drowning child, you are obligated to fulfill. Anything above and beyond that you can do if you want, but it's not mandatory.
And that's the crucial part that allows you to have arbitrarily whimsical justifications, such as: really I'm just satisfying my desire to make the world better, so donating exactly 10% of my income scratches my itch, poor Africans are welcome. Or you can imagine that there's a God that will reward you with a place in heaven, or that you entered an esoteric compact before your angelic soul incorporated in a body, or whatever that satisfies your desire to feel like a nice person without having too much troublesome thorny edge cases.
Good post. And I approve of thoughtfully engaging with the substantive details of all ideas. But throughout the post I couldn't help but constantly think "OK, but the main point is that the Copenhagen Intepretation of Ethics has nothing to recommend it as a prescriptive theory. That seems to be the bigger issue."
The person saving the children washing out of the mega city is obviously acting extremely immorally.
"this is only #999 on their list of causes of death"
By saving those children they are neglecting 998 higher priority interventions. For every child saved from drowning they are willfully killing a much higher number of children.
The drowning child saver is a monster by Scott's reckoning.
I am very fond of Scott, but these sort of thought experiments just feel meaningless to me. This is probably a function of different starting premises. I have been reading and reflecting a lot of moral philosophy in the last years, and the place I've (nor dogmatically) arrived at is some type of non-realist contractualism, which means questions of 'ethical' behavior are basically meaningless. There are contracts (formal and informal) one submits to when part of a society, and beyond them there are preferences where people are unconstrained to change them except if they want to. Morality is just a strategically useful evolutionary strategy (both natural and cultural) that allows individuals and the groups they belong to to prosper.
Tbh I find such discussions rather tiresome. Moral intuition evolved not to help us make better moral choices, but to improve our chances of reproduction. Thus the inherent moral framework build into every human is "be as selfish as possible as long as it does not reduce your social standing in your tribe of max. 50 people".
So either go and donate most of your money for mosquito nets for African children or admit that you are not trying to maximize morality in your decisions.
I can easily admit that I like to eat fast food even though I know it's not healthy because it triggers evolved cravings and it's easier than to make the right choices. Moral frameworks like the Copenhagen theory are the intelectual equivalent of saying "if you eat with friends, you only have to count the calories that you eat more than everyone else". It's bullshit and you know it. Stop rationalizing poor decisions and own them, if nothing else.
Actually by relocating to the drowning child cabin you are given a wondrous opportunity to be in the top .01% of life-savers historically and you should really be taking advantage of it, unless you are retiring to your study to do irreplaceable work on AI safety or malaria eradication.
Yeah I kept thinking about this. Perhaps the broader world of the hypothetical is extremely strange -- certainly our glimpse is -- but it would be absurd for anyone to be so sure of their work that the cabin isn't an amazing opportunity. Even the few (less than a hundred) people who have saved more lives could not have had this level of certainty in their impact. The real question is, how does the cabin not get bid up in price by people most willing to take the opportunity? Then it should be allocated to someone who would use it to the max and have low opportunity costs, I would think. You only need like ten sane/normal/good people in the entire world to get a fairly good outcome in that situation, assuming the context isn't saturated with even better opportunities.
I think this is the pretty obvious problem with the whole post. It's an appeal to ethical intuitions, but ethical intuitions are formed by experience and interaction with the world as it exists. In a world without gravity, my horror at seeing a child falling off a cliff would be entirely inappropriate. So the extreme hypotheticals don't "isolate the variables," they just trigger the realization that "this world is different."
I strongly suspect that there is no such thing as a compete and internally consistent moral framework. The obsession that EA types have with trying to come up with a set of moral axioms that can be mapped to all situations is pointless.
Moral frameworks are an emergent property of society which make them effectively determined by consensus, weighted by status and proximity. The problem is that the individual judgements that coalesce into a consensus are not derived from some abstract fundamental set of principles isolated from reality, they're determined by countless factors that can't be formalized or predicted.
For instance..
I could walk past a drowning child and suffer reputation damage
A priest could walk past in a deeply religious society and declare that the child is the devil and so deserves to drown.
The child could be drowning in a holy river that is not to be touched so a passerby is praised for their virtue in ignoring the child and respecting the river gods.
An exceptionally charismatic individual could start a cult in ancient Rome around not saving drowning children because Neptune demands sacrifices. This cult outcompetes Christianity and becomes the foundation of all of western civilization. That passerby is not evil, he's just religious and very orthodox.
an even more charismatic individual could convince an entire nation to adopt a set of beliefs within which saving a drowning child is dysgenic because a healthy child would know how to swim.
You can keep going on and on..
It's better to adopt the general consensus of the society within which you exist or if you insist on changing the status quo, play the status game to increase you and your group's influence on the consensus. Trying to come up with a logical framework is not it because that's not what normal people are basing their judgements on.
ISTM this model is failing to capture all the variables involved. Why on earth /wouldn't/ we be obligated to save the hourly drowning child, forever?
We have a habit of excluding physical and mental health from these calculations. The wet suit and missed lunch don't matter, but dustspecks in the eye, forever, with no prospect of an end, add up.
Consider a model where people generate a finite amount of some resource each day. Let's call it "copium" for convenience. Self-maintenance costs some variable amount of the resource over time. This amount varies randomly, in accordance with some distribution. You can approximate some upper and lower bounds on how much copium you're likely to need to get through the day, but you can't know ahead of time. All decisions and actions you perform cost copium. If you incur a copium cost when you have no copium left, you take permanent damage. If you accumulate enough damage, you die.
This brings the difference between the one-off case and the hourly case into focus: the one-off scenario is worth the copium spend, but in the ongoing scenario you predictably die unless you can make it fit your copium budget.
The rule then becomes - help however many people you sustainably can, and no more than that unless you'd also be willing to sacrifice yourself for them in more immediate ways (the answer to "would you be willing to die to save hundreds of children?", for many people, isn't "no"!)
In the moment, though, when forced to actually decide, the difference between whether you act like a Singer or a Sociopathic Jerk is down to the amount of copium you have left for the day.
Another part of the difference why the cabin in the woods (and saving lives through charity on the other side of the world) feels different from the other examples is that millions or billions of other people could act to prevent the deaths (even if they don’t).
Whilst if a child is drowning in front of you only you can stop them dying.
The other element that I would add is reciprocal moral obligations. We all have a different sets of moral obligations to our direct family, extended family, friends, neighbours, town, country, humanity etc.
Whilst it might be great if everyone in the world treated everyone else like family, it would quickly fall apart to defection.
In most nice societies, you have a moral obligation to help someone whose life is in danger if you are one of the few people who can help and it is relatively simple to do so. This is a great thing and has, along with other social ties, taken hundreds (or thousands) of years to create. To prevent moral hazard (and also defection) it doesn’t really apply if someone has repeatedly got themselves into the situation - it is about extraordinary aid when something goes accidentally wrong.
This explains why in the cabin situation I feel morally mixed - the population of the megacity know this is happening and have clearly chosen to let it happen despite it being easily preventable. However I feel bad for the children (they haven’t made that decision) and at the time of their drowning I am the only one who could save them. But it wouldn’t be simple to save all of them.
This also explains why I don’t naturally feel much of a moral obligation to give to effective charities saving lives on the other side of the world. They are not in any of the communities I have varying degrees of moral obligation to (other than humanity as a whole). Furthermore those with much stronger moral obligations to those people are clearly failing them (although this varies a bit by country). There are also many others who could save them.
The big question is whether this notion of reciprocal moral obligations to differing extents to different communities we are part of, that most of us who have been brought up in ‘nice’ circumstances feel, is logically correct? I think Scott would say they are all very well, but that we should fulfill our obligations to them and then focus on how we can do the most good for humanity as a whole, from a broadly utilitarian perspective. Clearly in a direct impact sense this is correct, but thinking through secondary impacts I’m less sure.
Most directly and specifically around charitable donations from wealthier people in western democracies, if people in a country feel like the successful aren’t giving back to them and the country, this undermines support for the capitalist policies that enable the wealth to be generated in the first place.
More broadly I don’t really think you can just ‘fulfill’ your obligations to those other communities. Part of those obligations are that the more you have, the more you give back (e.g. a rich person donates to the school they attended, if you have more free time than your siblings you are expected to help out your ageing grandparents more etc). So choosing to help humanity as a whole is a form of defection (e.g. if rich people decide to switch their philanthropy to donating to charities abroad rather than at home) from these moral obligations in some sense.
By defecting from these ties and norms you are causing damage to the social fabric (or ‘social trust’ in economic terms) that ultimately created that wealth. In most ‘not nice’ countries the only reciprocal moral obligations that are adhered to are those around the extended family. A key part of why rich countries are rich is that they created strong moral responsibilities to wider communities, particularly your town, your country and other institutions within your country. Rather than a government official being obligated to cut their cousin in, in these countries they are morally obligated not to.
Personally think this is part of the reason for Trump and the populist swing in recent years. ‘Elites’ increasingly have a morality focused on utilitarianism or helping those most sidelined/discriminated against, whilst ordinary people see it more in terms of these communities, which from their perspective the elites are defecting from. For instance in the past the rich people in a town facing issues might have worked together to sort them out, whilst now they are probably more likely to just leave. Or the kids of the rich and powerful would had have a decent chance of being in the military (the death rate of the British aristocracy in WW1 was incredibly high), so ordinary people were more likely to trust elites on foreign policy decisions.
These norms and obligations only work if everyone feels like everyone else feels them and mostly acts on them (rather than being for ‘suckers’), and messing with something that is such a key part of what makes societies stable, rich and ‘nice’ is very dangerous.
This is a huge and underrated driver of NIMBYism. People are willing to destroy housing affordability and massively reduce total prosperity if it means they are more insolated from drowning children.
It’s really about the number of children drowning. One, yes you can save. Many, you cannot.
Singer - the original author of the thought experiment - argues that to impoverish ourselves to the extent that we ourselves are close to impoverishment but not quite there, as the only moral solution.
There are multiple drowning children though, not one. I imagine myself on a boat on the sea or lake with the drowning children. I can rescue the children. However, I can myself drown by capsizing my boat if I take on too many.
I am also in danger of capsizing in future if I take less than capacity, it’s not clear how many but it becomes risky to take on anything near the limit, as the boat is rickety and storms occasionally occur. People who have not maintained their boats have drowned.
All around me, though, are much bigger boats towering over my boat, these boats either don’t help the drowning children, or take on a number of children - which while admittedly more than me is nowhere near their carrying capacity - and the large boats are at no danger of sinking in future storm damage either.
Also on the lake are military who I fund with taxes who are actively drowning the children. Just jumping in and drowning a few every so often. I can’t stop this. It’s for geopolitical reasons.
None of this means you shouldn’t help the drowning children but I wouldn’t worry about relative morality here either. Rescue some, but not to the capacity of the boat, not to put the boat in danger.
I think morality originally started, and still functions for most people, for two things:
a) To pressure friends and strangers around you into helping you and not harming you, and
b) To signal to friends and strangers around you that you're the type of person who'll help and not harm people around you, so that you're worth cultivating as a friend
This has naturally resulted in all sorts of incoherent prescriptions, because to best accomplish those goals, you'll want to say selflessness is an ultimate virtue. But the real goal of moral prescriptions isn't selfless altruism, it's to benefit yourself. And it works out that way because behaviors that aren't beneficial will die out and not spread.
But everything got confused when philosophers, priests, and other big thinkers got involved and took the incoherent moral prescriptions too literally, and tried to resolve all the contradictions in a consistent manner.
There's a reason why you help a kid you pass by drowning, and not a starving African child. It's because you'd want your neighbor to help your kid in such a situation so you tell everyone saving local drowning kids is a necessity, and it's because you want to signal you're a good person who can be trusted in a coalition. The African kid's parent is likely in no position to ever help your kid, and there's such an endless amount of African kids to help that pouring your resources into the cause will outweigh any benefits of good reputation you gain.
Our moral expectations are also based on what we can actually get away with expecting our friends to do. If my child falls into the river, I can expect my friend to save my child, because that's relatively low cost to my friend, high benefit to me. If my child falls into the river 12 times a day, it'll be harder to find a friend who thinks my loyalty is worth diving into the river 12 times a day. If I can't actually get a friend who meets my moral standards, then there's no point in having those moral standards.
Essentially ethics makes sense when centered around a community but we in the west don’t really have communities anymore. Hence the incoherent philosophy.
I've never really seen this version of ethical egoism that's like "it's Moral Mazes all the way down" espoused other than here. Although now that I think of it, Rawlsianism basically assumes that this is what would happen without deliberation behind the Veil of Ignorance, and nobody but maybe Mormons believes the deliberation actually happens. Nonetheless I don't think this is plausible on a human level, even if it probably is from a gene's-eye view, because sympathy and guilt are things. If you suffer for ignoring others' well-being, then others' well-being is at least sometimes more-than-instrumentally important to you.
I subscribe to this as an explanatory theory but not a prescriptive one. Sometimes you have to be better than the soulless, brainless and hopeless forces that made you, because you do have a soul, a brain and a hope. Sometimes you see that you're being puppeted and you think that's the best of all possible worlds.
The most important part of bravery as a virtue isn't that you have ridiculous amounts of it for situations that rarely happen, but that you have enough of it to face the parts of you that are imperfect and acknowledge that you are imperfect, so that fixes and changes can happen at all. And you can't argue someone into being brave. I don't know how else to explain why else people flinch away from being better than what they were designed for.
Yes - and even more so. "Morality" is not a rule system, it is a mishmash of loose heuristics that evolved to help us cooperate in small, local groups because cooperating groups outcompete non-cooperating groups.
With this in mind, I think most seemingly paradoxical moral intuitions make sense. It is all about what someone who saw or heard about some or all of what you did or did not do might be able to infer about your motivations (all in the context of a group of 20-30 people with only eyes, ears, and a theory of mind as evaluation tools).
Contorted moral scenarios are engineered to exploit the incoherencies of our moral system heuristics just like optical illusions show the incoherence of our visual system heuristics. These inconsistencies persisted because they were not relevant in our evolutionary past. There were neither Penrose Triangles nor robotic surgeons out on the savanna.
Right, I don't think Scott or others of an EA persuasion would dispute this, or any of the similar statements made above.
The point is that, we don't live in the savannah anymore, but we still live in networks of people that approximate the social structures we evolved with, and technology and culture put us in some kind of proximity to people who are distant from us, yet whom we also can't help but apply our moral instincts to.
Since our intuitions can't help but be incoherent, but we still want to live in a cooperating group (or to put it in the language of the comment you're responding to, we still want to signal to friends and strangers that we should be helped and not harmed), we have to build something coherent enough to achieve these aims, built out of our evolved moral intuitions.
That's necessarily gonna mean making tradeoffs between different moral intuitions, hence the convoluted thought experiments to figure out what exactly our moral intuitions are, and how we trade them off against each other.
From a prescriptivist standpoint, there won't come a time when it will *not* be more moral to save the next drowning baby-sutured-to-a-famous-violinist floating from the magical post-industrial bubble city filled with burning fertility clinics and infinite trolley switches or whatever the shit. The person who donates 11% of his wealth to mosquito nets is better than the person who donates 10%.
But I'm sorry, I can't do it. I'm flawed. I don't live for others as much as I could. I'm too attached to comfort. I (roughly) tithe but I could give more if I didn't pay for the Internet connection that I'm using to post this. I could be volunteering instead of posting.
Perhaps someday I'll grow in selflessness and I'll get to the point where I love radically, for the whole world. I think that's the call of Christianity in a fallen world I just hope that until I get there, my sins of omission aren't considered too great.
You raise a good point: what if, in order to save the drowning child, they have to be plugged in to your circulatory system for the next nine months (falling into this river automatically gives them kidney disease as well as risk of drowning)?
Are we then permitted to refuse to have attached the drowning child? Cage match between Singer and Thomson!
I have been writing about aphantasia and hyperphantasia and what it might mean, for these thought experiments, if you actually *see* the drowning child or the terrible thing that needed intervention. Our reactions are not wholly philosophical. https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-the-sixth-sense
I feel bad admonishing Scott for not being universal enough when there's this much opposition in the comments to having even slightly more rational ethics. And I realise he has to take into account that you can't expect anyone to be fully rational or alturisitc. But if you really take Rawl's veil seriously the conclusion should obviously be world communism for all sentient beings.
If Earth was populated with perfectly rational, perfectly allutristic Rawlsians they wouldn't just be donating 10% to bed nets, they'd also be spending like 25% of world gdp building social housing for wild mice etc.
>How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse.
I feel like like the 10% level of alturism Scott's proposing is way lower than could be justified by constraints on maintaning economic growth and he's really considering psychological opostion to being more alturistic than anything theoretical here. The top rate of tax used to 90% for a lot of places in the post war period, and modern gdp per capita is about 20x above subsistane level. The theorietically ideal Ralwsians could easily be spending 50%+ of gdp on charitable redistribution imo.
>I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.
Considering the posibilty of defections seems to defeat the point of the though experiment since that's no longer behind the veil.
The idea that it’s more “rational” to base your ethics on an obligation to spend all your resources saving millions of strangers around the world is preposterous.
Scott has been consistently saying that it's not an obligation to give anything away, and that you should do it if you have enough of a super ego to do it. Does that change your opinion by what the OP means by "more rational ethics"?
If you take the drowning child argument seriously, then the logically conclusion is you should never spend money on leisure goods. If I spend 10% of my money on charity but I go out to eat once a week, I’m letting a child drown. Every EA, including Scott, knows this but they also know that if you tell people that, then they’re just not going to be persuaded by the arguments. So they make up ad hoc reasons why we really only have to spend limited amounts on charity to get more people on board.
No EA has ever had a good answer to the demandingness objection. They just try to downplay it.
The obvious answer to the demandingness objection is that caving into demandingness results in less charity as people burn out or lose opportunities to leverage themselves. Since you want to ensure that there is globally, across all time more charity than there is otherwise, and that people go into spiraling dynamics as a matter of fact about moral imperatives, you set a guideline that ensures the spiral doesn't happen and then go on your way.
It's known that if you try and break this guideline down you will predictably burn out and cease being an EA, so it is strongly encouraged to follow it (I personally don't follow it but I can see why this is valuable).
And also if you go out to eat once a week., the accounting is something like 1/500 to 1/50 drowning children at most. Which, yeah it isn't great. But if it's what it takes to continue donating, it's closer to an amortization of the donation cost rather than a binary donate / not donation opportunity. So assuming you donate 5500 every year, that goes to something like 11000 per life, which is still pretty damn good considering you're spending a hundred bucks per time you eat out.
If it were true that humans were perfect executors of willpower and don't need creature comforts, then I don't see how the demandingness objection applies, because they don't deeply want the luxury anyway. But they're not, so your morality system has to work with squishy humans.
Ok let’s back up a second. Let’s say I give 10% of my income to charity. I decide to go out and eat a few times a week. Over a certain amount of time, is this not morally equivalent to letting a child drown?
Yes, but if you stop donating, that's letting *more* children drown.
The counterfactual isn't between a world where you're morally perfect and you now, it's between a world where you feel miserable and end up not saving children or a world where you spend more on yourself and end up saving more children, albeit at reduced efficiency compared to a perfect moral agent, which you are not.
I mean it's more rational in that it attempts to base ethics on reason and not the various evolved, sub-conscious biases that humans default to like egoism, tribalism, kin-preference etc. The same way rationalism attempts to rise above evolved psychological biases to have better epistemics. Which then leads to a more universalist view that entails spending your resources on saving stangers.
I'm sure you don't think that the kind of reasoning that Scott's doing here is less rational than the normal, "common sense", intuitive morality that most people have.
The idea that in coming up with normative prescriptions we should ignore human nature is patently absurd. Should ants and bees “rise above” their nature and be more individualistic?
Looks like it's going to be difficult for me to say anything that's not preposterous or absurd. But, yes, if ants and bees were instinctively committing acts that would look unethical under conscious reflection it would be better for them to be more rational beings.
Wow I did not think you were going to bite that bullet because it’s so crazy. If bees were more individualistic, they would all die. Is that more “rational” to you?
Depends on how much more, and other factors. Moderate dose of individualism and forethought might be exactly what's needed to break out of an https://en.wikipedia.org/wiki/Ant_mill
Rawls was in favor of national borders, and opposed immigration as letting countries irresponsibly increase their own populations at the expense of other countries.
I'm willing to discuss saving children's lives if you are willing to discuss the budget for childhood education. Both are socialized spending to do a first order good that should serve greater second order goods. Yet when discussing saving children's lives, most people's brains are hijacked and only focus on the first order good. But with childhood education, most people can look at the second order goods and trade-offs and consequences and see that the initial investment on the first order good is not all we should consider. I consider people that are unwilling to discuss childhood education and only talk about saving child's lives to be engaging in (perhaps unintentional) emotional blackmail
Have I got good news for you about deworming! One of the primary ways deworming helps (on top of not having <censored censored censored> painfully happening to children) is that they attend way more days of school because they are not in pain, and even though *technically* it looks like a low value intervention with large error bars, the fact that those error bars lie in the direction of big educational flow through effects was why GiveWell thought they were good. Sorry I can't seem to find the analysis with about 5 minutes of work (because they are no longer the top charity) but https://blog.givewell.org/2024/08/21/raffles-deworming-and-statistics/ covers the reasoning pretty well given a skim on my part.
With respect to people's actual behavior, there is an article in the Journal of Political Economy by Andreoni, Rao, and Trachtman, "Avoiding the Ask: A Field Experiment on Altruism, Empathy, and Charitable Giving" (June 2017, https://www.journals.uchicago.edu/doi/10.1086/691703 ) with the following abstract:
"If people enjoy giving, then why do they avoid fund-raisers? Partnering with the Salvation Army at Christmastime, we conducted a randomized field experiment placing bell ringers at one or both main entrances to a supermarket, making it easy or difficult to avoid the ask. Additionally, bell ringers either were silent or said “please give.” Making avoidance difficult increased both the rate of giving and donations. Paradoxically, the verbal ask dramatically increased giving but also led to dramatic avoidance. We argue that this illustrates sophisticated awareness of the empathy-altruism link: people avoid empathic stimulation to regulate their giving and guilt."
Personally, pushy bellringers increase my avoidance greatly, not due to guilt, but because I give to other causes and in other ways.
Their intrusive behavior both delays me, causes advertising pollution, doesn't scale, isn't well-documented with an audit trail, and from a virtue signaling perspective, tries to shame me publicly where I have no recourse to challenge the presumption.
Neither is the Chinese telesurgery. Anyone who says "I would prefer to let that guy choke rather than be five minutes late for lunch" is going to be deemed the worst kind of monster. But what if it's "there's no-one else in the room, if I pause the surgery to save his life the patient will die"? *Now* should the surgeon divert the robot?
The problem with the Drowning Child scenario is that it's easy to create thought experiments that go "tiny trivial little inconvenience this side, big huge effect on life that side" and put your thumb on the scale that way. Once you arm-twist people into agreeing "Why no I am not a horrible monster", *then* you hit them with "and now the choice is not a trivial inconvenience, but you have already agreed that refusing makes you a horrible monster, so now you've committed yourself, (evil laughter 'heh-heh-heh' optional here)".
And that's why people feel that they're being suckered into something and want to find an argument contra the Drowning Child. Me, I'm going to bite the bullet, go "why yes I *am* a horrible monster, however could you tell?" and refuse to be arm-twisted in the first place.
I'm not buying a pig in a poke, which is what most of these thought experiments are.
It also seems pertinent that in the examples above it costs a trivial amount of time/ money to save a life whereas in the actually existing world the single most effective charity (per givewell) manages to save a life for $4.5k. This would correspond to multiple days/weeks of most people's income.
Perhaps it's no coincidence that there are no low hanging fruits left (that is, opportunities to help people at scale cheaply)
I mean, sounds like a bargain tbh. There are also lots of less-than-lethal stinky situations you avoid along the way. Like $7 to protect a kid from getting malaria, which is unlikely to kill them but quite intensely unpleasant. Just because that's the amount to statistically have one fewer person die doesn't mean that's the increment of help.
The second sentence ofthe post is "This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply)."
I fail to see how the implication works here. It doesn't follow from
"I'm happy to spend 5 minutes to save a drowning child"
That i should give "lots of charity" (presumably much more than the average).
Hmmm - the megacity one makes me go "if there are children falling into lakes every hour and drowning, isn't that the problem of the megacity? haven't they noticed yet? aren't the parents complaining?" and I would begin to think maybe the megacity is using this as a method of population control or something. If it's happening this frequently all the time - and even more so if the damn place can afford to build a whopping great dam but not put some fences up around lakes - then now it definitely is *their* problem not mine to solve. I did not commit in any way to be a volunteer lifeguard.
And that makes me think of USAID - why aren't those countries solving their own problems? Why is the USA now in the position of "you saved one drowning child, now you are committed forever to save all the drowning children"? If it's understandable that I don't want to save the 37th or 337th or 3,3337th drowning child because damn it, I just inherited this cabin and isn't it up to the megacity to finally childproof their own rivers or at least hire their own lifeguards, why isn't it understandable for a new administration to go "well we just inherited this and we want to change it"?
This actually begins to sound like a justification for what Trump/DOGE is doing!
When you realize that the West didn't get to be rich via some richer region giving it aid, you then start to wonder about the same process happening elsewhere. And since I think the lack of tropical diseases is one reason why Europe (particularly northern Europe) got rich, anti-malaria charities is one thing I do donate to.
I'm not arguing against giving to anti-malaria charities, what I am arguing about is that the Drowning Child argument and the variations thereof try to do too much. They jump from "this is a once-off, exceptional, very impactful situation - you see a child drowning, won't you save them?" to "okay now you have agreed that it is better to suffer a trivial inconvenience than to permit someone else to suffer a major loss, you are now obligated to constantly do this good thing we want to impose on you".
That's a very big shift: for instance, is having malaria comparable to drowning? If you gave people the choice between "you can have malaria or you can drown", I think most would pick malaria. Malaria in children under five does seem to lead to high mortality:
"Nearly every minute, a child under 5 dies of malaria. Many of these deaths are preventable and treatable. In 2022, there were 249 million malaria cases globally that led to 608,000 deaths in total. Of these deaths, 76 per cent were children under 5 years of age. This translates into a daily toll of over 1,000 children under age 5."
So contributing to anti-malaria charities is well worth it, and you don't even have to get your shoes wet, never mind an expensive suit. *But* that's where the problem lies: the example is chosen to shame people about "for a relatively small inconvenience to you, thousands of lives can be saved, yet you the public are not doing it" and hence why the shock example of "would you let a child drown rather than get your good suit wet?".
That's perfectly fine, as far as it goes. But *how* far is that? Because I don't see that putting a moral obligation on anyone about "and now you must stack up as many inconveniences as possible to save as many lives as possible, and you can never stop, and you can never choose for yourself, you are now obliged to always pick the biggest bang for the buck and to hand over as many bucks as won't result in you starving to death".
*That's* the problem here: that the thought experiment has seemed to expand into this principle of general obligation at the maximum forever, instead of the single example of "you can save a lot of lives by donating to bed nets".
The drowning child situation does a good job of highlighting why it doesn't apply to many other real world scenarios.
A child drowning in the river needs help. If you rescue them, they are safe once they are on dry land. They'll be scared out of their wits and know not to go jump in the river again. Or their parents will.
Financial commitments to provide medicine are nothing like drowning children. A child can choose not to jump in the river, so there is generally not an epidemic of drowning children. Mosquitoes are being born all the time and buzz around. A malaria net reduces some risk of being bitten, but there are always more mosquitoes and malaria is not going anywhere. The closest analogy is not a child drowning in the river, but instead tossing individual inflatable tubes to a sinking boat in the ocean. Maybe it'll help a little bit, but it probably won't help that much.
The distinction seems to be between random unfortunate issues where you can help people versus a systematic issue.
The magical town should build netting to catch the drowning kids in a way they can save themselves. A once off cost with maybe some minor maintenance costs. Then they should try and actually fix it so the kids don't slip into the fast moving water in the first case, although that's a larger task.
With such a setup maybe you get 1 drowning child a month and it's a lot less of an issue and burden on yourself than having them hourly, including when you sleep.
Well, you might still have drowning kids go past when you sleep but no one should expect you to do anything about that. The burden is on the city and it's systems.
The group payment into shared fund is obviously the way of dealing with most systemic issues. The other is to build things with less such issues in the first place.
CORENA (Citizens Own Renewable Energy Network Australia) is a shared revolving funding pool. People like myself donate to CORENA and the donations are used to pay for funding Solar PV and other energy saving systems. The NGOs they provide the interest free funds to pay back the loan based on their electricity cost savings.
Basically the NGO buys solar power and for a couple of years they still pay the previous electricity bill amount, but you CORENA. After a couple of years they've paid it off and their bills go to zero.
This means my initial donation can go to multiple funds over a few years and so it's more effective over time.
If you are in your cabin in the woods and you notice a large bag full of money 💰 floating down the river, then rescue it, there's some expectation around that being your lucky day.
If bags of money float down the river regularly, you'll probably pickup a few for the first few days, but eventually you'll have grabbed enough.
But you should actually tell the bank or people upstream that their system is broken.
The difference is, if you instead tell your neighbour about the bags of money then they and their friends will come and help clean them from the stream very happily. Until eventually word gets to the bank who finds they've got some hole that the bags keep falling into.
There's likely some expectation that the bags of money are the banks. But I'm the original thought experiment the kids have parents and they should be distraught and trying to do something.... Like putting up poles, netting or whatever.
I think in reality if you tell the local community about the kids floating down the river then there will be a lot of people who will try to help. Creating a jetty, putting up some makeshift poles across the river and spending some amount of time on the issue. It won't just all fall onto the burden of a single person.
I have thoughts. Probably going to be too far down the comment thread for anyone else to notice them, but still.
Okay the size of the problem matters. If you are in a situation where a kid drowns every hour, you should have no obligation to any individual child. You instead have a bigger problem and a different solution is needed then rescuing each child. So in this case it would be moral to ignore a bunch of kids drowning while I go to the store and buy some kind of netting that will catch children and let them climb out on their own. If a bunch of children die while I try for a complete solution it's okay. In fact saving any individual kid is almost wrong, that will help someone but it won't solve the actual problem. This often the issue with aid programs, if we send you seeds you can farm, and feed yourself, but sending you food just keeps you starving forever.
I do think distance vs size of a problem leads to some weird places. Imagine tomorrow we discover there is a planet in the Alpha Centari system just 2 light years away that has 100 billion human like aliens, further we know a comet will hit their planet and they will all die unless we stop it from hitting them in 150 years. It turns out to mount a rescue mission to save this planet will take all of earths GDP for the next 100 years. What do you do, does the distance matter, does the fact we know about the problem matter, does the size of the population matter.
I think there is no right answer to that question, and even saying one is moral is a category error. Morality isn't a thing that exists outside of humans, it's how we create rules that allow us to cooperate and coordinate allowing more humans to exist. So if a rule doesn't help more humans cooperate better and exist in greater numbers, or increases the overall level of misery, it perhaps is a bad rule. To apply our instincts to situations that it was never meant to handle is a category error. The right question is what response to the threatened alien planet makes humans better able to cooperate with each other. Perhaps saying let them die, makes it harder for us to trust and shrinks human florishing. Perhaps giving all our resources to theoretically save them, does the same. Looking good is important because it creates trust that enables cooperation. Acting good is important if people are watching. But if you only act good when people are watching, you'll mess up, better to just want to do good.
Regarding the neighbour, I suppose the Copenhagen interpretation could hold he "touched" the problem when he was made aware of it by you, and even more so when a solution was offered which he refused to engage in. If the neighbour is ignorant of the drowning children, he is not at fault and can qualify for the spot in heaven. But if he knows and refuses to help, then he loses out.
Now where did I read something like this before, then? 😀
"14 What good is it, my brothers, if someone says he has faith but does not have works? Can that faith save him? 15 If a brother or sister is poorly clothed and lacking in daily food, 16 and one of you says to them, “Go in peace, be warmed and filled,” without giving them the things needed for the body, what good is that? 17 So also faith by itself, if it does not have works, is dead.
18 But someone will say, “You have faith and I have works.” Show me your faith apart from your works, and I will show you my faith by my works. 19 You believe that God is one; you do well. Even the demons believe — and shudder! 20 Do you want to be shown, you foolish person, that faith apart from works is useless? 21 Was not Abraham our father justified by works when he offered up his son Isaac on the altar? 22 You see that faith was active along with his works, and faith was completed by his works; 23 and the Scripture was fulfilled that says, “Abraham believed God, and it was counted to him as righteousness” — and he was called a friend of God. 24 You see that a person is justified by works and not by faith alone. 25 And in the same way was not also Rahab the prostitute justified by works when she received the messengers and sent them out by another way? 26 For as the body apart from the spirit is dead, so also faith apart from works is dead."
As for the other point:
"For another, if you’re even slightly religious, actually getting the literal spot in Heaven should be one of the top things on your mind when you’re deciding whether to be moral or not."
Ah, no? You are not supposed to act based on "will this get me good boy points or not?", but rather because it is the right thing to do. Avoiding sin and doing good because you fear Hell and desire Heaven is better than nothing, but best of all is to do what is commanded because it is right and because "you shall love the Lord your God with your whole heart and your whole soul and your whole mind, and you shall love your neighbour as yourself".
That's the part the West Wing episode got wrong: Bartlett is complaining about "I kept the rules, I was supposed to get the prize!" (I can't remember if he said it in front of a crucifix, it was set in the Protestant National Cathedral, but if there were a crucifix there complaining 'it's not fair, I was a good guy!' is even more ironic). Not how it works, there is no guarantee that "I check off the list and God makes sure nothing bad happens to me".
It looks to me like the real issue is Singer starting with saving a life at a fairly small cost, and then cranking up the acceptable cost higher and higher. (Merely an impression, just that's how it seems to work.)
I'm also not sure where demands for efficacy fit into this. What if it turns out you're not very good at saving children?
This makes the idea of tithing seem attractive. You have some obligation, but it's at a level which is noticeable but not debilitating. As I understand it, the Jewish version is that you may not wreck yourself. The Christian version is that martyrdom is admirable but not required.
What people should do themselves and what they should blame other people for might be separate questions.
The big solutions seem to be sideways from saving people at personal cost. Clean drinking water isn't the same sort of thing as taking care of sick people. It's not obvious how resources should be divided between research and saving people. By definition, you can't predict which research will pay off, though there are reasonable estimates about the odds once a field is established.
The cabin by the river of hourly drowning children is silly because it assumes that in order to save each child you have to do it by yourself, with no help, and no technological assistance or tools. The actual, practical solution is to enlist the help of the previous children you saved for saving the next ones. Since a new child arrives every hour you will have 10 helpers after just 10 hours. After a week you have 168 helpers! Obviously you don’t need that many helpers so you can let most of them go after they’ve helped rescue one or two kids.
As for technological solutions, the answer is quite simple as well: floating ropes like they use at many outdoor swimming areas. Put enough of these across the river, anchored down at both ends, and children being carried down the river past your cabin can just grab a rope and rescue themselves. This is ultimately the solution to the river problem and why it fails to be as dramatic a burden on the cabin owner as it would seem. I would not want the burden of having to personally rescue every child, every hour, 24/7 forever. But I wouldn’t mind setting up a rope system and a toweling off station nearby so the children could survive.
Think of a more realistic situation where people are dying regularly and the solution is not so easy: traffic fatalities. Sure, we can suggest “simple” solutions such as advanced public transit, better street designs, lower speed limits, etc. But ultimately the solution is very messy and political and not really amenable to quick fixes. I think this is why it fails to be useful for an ethics discussion yet remains a much more important problem in real life.
Maybe I don't get the copenhagen model but it seems like if you stumble across a drowning child you have not asserted power and thus don't have an obligation to save the child.
Have you ever read The Godfather? (Or watched the movies?)
Don Corleone was a problem-solver. When someone had a big problem, they would often ask the Don help. And the Don would often oblige. E.g. I think the first instance of this is when a mortician's daughter gets her jaw broken by two men, and the two men are "only" sentenced to prison for 3 years. And the mortician feels humiliated by the court's light sentence. So he goes to Don for a favor, to seek "justice" on behalf of his daughter.
However, asking the Don for a favor had a cost. When called upon in the future, you were implicitly expected to reciprocate. In other words, Don Corleone had built a patronage network. And this gave the mafia gang extra maneuverability. To frame this in Copenhagen terms, The Don's ostensible altruism increased his power.
It’s been a while since I read the book, but I recall Rawls being pretty insistent that his theory is *only* a theory of justice, not a full theory of morality. I think you have to take his warnings seriously.
I agree very much that slipping between descriptive and proscriptive ethics is a real problem.
As others have commented, some of these thought experiments fail to engage my intuitions. But as an attempt to build a chain of reasoning around political ethics, I like the attempt.
People have the obligation to save nearby children in typical situations. And the limitation to typical situations inherently prevents problems like having to save enough children that it impacts your life. If you create a hypothetical that is not typical, there is nothing to prevent such problems, so you'll have to manually add a lot of clauses to prevent exploits that can't exist in typical situations.
And no, you don't just get to handwave away the taxes objection. "It's an insult to some hypothetical being's intelligence" is not a logical argument, and Scott knows better than that.
(I came up with the taxes objection. I wonder if Scott is trying to reply to me.)
> My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible?
I'm also a huge fan of original position, but limiting it to humans misses a huge part of the moral landscape. Any sentient being is one we'd have a chance of waking up as.
I'm confused. Isn't the copenhagen interpretation of ethics supposed to be a reductio ad absurdum? Surely no one actually thinks of it as a good normative theory?
I've always thought that empathy is highly conditioned by proximity/immediacy. This certainly doesn't have to be physical and is, in fact, more often mental proximity/immediacy, I think. We can certainly be very moved by a documentary which vividly portrays suffering, though it's happening on the other side of the world, but feel little empathy for our neighbour suffering from cancer next door because he never leaves the house and we rarely see him.
The right to kill unborn human babies is not only accepted by the majority of western society nowadays but is even being bandied about as a "fundamental human right". Meanwhile cooking a live lobster in a pot of boiling water has been made illegal in many countries. We might be standing right beside (physical proximity) to the pregnant woman having an abortion but we can't see or hear the baby, so we feel little or no empathy for it. But we can hear the lobster "screaming" and see it writhing in the pot...
Since empathy is highly skewed by mental immediacy, it needs to be tempered with logic to create morality. For this we need a clear idea of the final objective - a stated goal as to what our morality seeks to achieve - to which logic can then be applied. Eg. "all human life is inviolable", or "all suffering should be stopped, whether animal or human", (two ideas which are often incompatible...) or whatever. I think we talk a lot about morality without having a clearly defined goal of what that morality is supposed to accomplish. The result is that it is easy to confuse our instinctive empathy with a higher idea of morality.
Here it seems like a hot/cold test. If you are directly experiencing a hot situation like a child drowning and you do nothing you are a cold person. It's probably a tell as to character.
One of the two books a have with me on vacation is Anne Dufourmantelle's In Praise of Risk. She was a philosopher and a psychiatrist who lost her life attempting to save a drowning child.
I think you're circling around questions of scale -- is a moral responsibility personal, neighborhood, town, or even higher level? And that is related to Shannon entropy -- how surprising is the event that triggers the moral responsibility? But it's also related to how catastrophic the event is. How large is child mortality in the magical city? Would you react differently if the consequence of falling in the river was a bad cold, or if only children who had never learned to swim drowned?
On distance, you keep coming up with these examples where someone is in your presence even though they are really far away to get that intuition. There are almost no real life scenarios where that would be the case, certainly not with the charities that Give Well recommends.
What about the guy working at Citadel who argues he can save more children by ignoring all of them to work but tithing 50% of his income to the Against Malaria foundation?
I feel like my moral intuitions are significantly different than yours. I do happen to believe that "touching" a situation seems to have significant moral implications. I do not believe that solving a portion of the drowning children is sufficient, but saving *all* (or as many as we could) would be required of being a moral person. (Not really the thought experiment, but I believe the most moral case would be to work something out with the local city to either fix the problem or get paid to do the work you're already doing. Surely they would value the regular saving of their children at a huge sum, but even $1,000/child would make you quite wealthy and would be enough to hire multiple shifts of lifeguards and solve the problem without ruining your life).
As I understand my moral intuitions, the issue of proximity is not about physical closeness but purely about your ability to understand and interact with the situation. A portal where you can see and interact with Dublin, or control a robot, still offers clear evidence that there is a specific need and that you can meet it.
I do not know if there exists a particular child that my 10% donation could or would help. The money could be wasted, embezzled, stolen, as could whatever was purchased with the money. There could have been a good month for donations or a good month for fewer diseases, and my particular donation was extra and unneeded. Maybe the particular charity I would donate to is well financed and doesn't need more money.
The point is not to gish gallop the reasons why not to give, but to express *uncertainty*. I would save an infinite number of drowning children in front of me, until my body gave out. I would die doing it, if that saved more children. I would rearrange my life to accommodate that need. Learning to swim, rearranging my schedule to be available on a moment's notice, keeping my swim gear close at hand. The need is real and legible.
The thought experiments you presented where someone could choose to ignore the known and legible "children regularly drown here" cabin to intentionally avoid "touching" the situation is evil. Ignoring the 37th drowning child is evil. Saving one child a day while the others drown is evil.
So how do I reconcile not learning more about specific problems in foreign areas? Bandwidth, mostly. There's only so much you can ever learn about specific issues. You would likely need to physically go to a place, learn about the local needs, customs, expectations, etc. Once there, you may learn of great specific need, or you may learn that they are generally fine and think Westerners trying to send them bed nets is silly (or worse, they've all got 100 bed nets and it's filling up their town with unwanted waste).
If a need outside of your personal knowledge and physical proximity seems to you legible enough that it becomes a moral imperative, then I would not hold someone back from donating to that cause. I also do not consider it a moral imperative to learn about potential needs. When it comes to foreign areas (implication that you can't have a detailed and legible knowledge of needs), then I don't think there's an obligation at all to learn about or give. A shorthand explanation would be Kony 2012. Westerners getting worked up about a problem doesn't mean the problem is what we think it is or that our attempts to "help" will do any good or even just not cause harm.
As far as moral behavior, this is a good example of something I value: https://www.bbc.com/news/articles/c5y4xqe60gyo (the guy with special blood who donated enough over his lifetime to save millions of children).
"As I understand my moral intuitions, the issue of proximity is not about physical closeness but purely about your ability to understand and interact with the situation"
I think there's a lot to this, but an obvious problem is that you have control over your ability to understand and interact with a situation--so you have to decide which problems to have this proximity to, and how much!
In some ways, I think the point of GiveWell and similar institutions is to increase your proximity to malaria in Africa: by summarizing the situation and quantifying the effect of the marginal donation, they increase your ability to understand!
"I also do not consider it a moral imperative to learn about potential needs"
This sort of covers the point above, but has the obvious problem that it incentivizes you to avoid learning about potential needs--to deliberately reduce your proximity to issues.
Indeed, I think an uncharitable reading of some of the anti-EA arguments is they are deliberately anti-intellectual in order to prevent oneself from learning something that they _would_ be committed otherwise.
But we almost all agree that in some cases, a certain threshold of proximity commits you to learning more about potential needs; past some point failure to investigate is just a rationalization. If you're facing away from the pond where the child is drowning and your friend (who can't swim) yells, "Turn around! That kid is drowning!", if you refuse to turn around to confirm, you're not "avoiding learning about a potential need" you're avoiding a moral duty you're already entangled with.
My reason for basically agreeing with EA even though I'm sympathetic to your basic point is: if you're here in these comments arguing about bednets you already have enough proximity! You're past that horizon! It's too late to pretend you're just avoiding learning about a new problem. You've learned about it! You are the guy holding your hands up in front of your eyes, saying "I don't see it, I have no obligation to learn about it".
On your fifth ACT comment thread about EA, you almost certainly are in position to know more about bednets than how best to help your second-cousin twice-removed to kick his cocaine addiction, or whatever "local" issue you supposedly are meant to care more about.
I agree with a lot of this, but I'll make a few responses.
The first is about bandwidth, as I mentioned. There's only so much we can process and understand. Picking which topics, if any, to research is not a moral question on its own (though I agree with you entirely that covering your eyes to avoid seeing is a different category and is a moral failing).
The second is the concept of legibility I was trying to get across. I am, obviously, well aware of GiveWell and bed nets. I have no particular reason to be against them, and think they are generally moral people trying to do good.
There are two specific problems with the general approach to foreign aid in their sense. One is the potential for waste/inefficiency or just generally failing to be as helpful as you intend or think you can be. A specific story about bed nets that I heard a while ago was that some people were using them as fishing nets instead of bed nets. Locally in Africa, they were making a different decision about what was important. Maybe they were wrong, but maybe they were entirely correct and the legibility from being local and being part of that culture allowed them to make a better decision.
The second is unintended consequences that can be downright negative. An easy one is that shipping free external production destroys local manufacturing and prohibits the local economy from growing. No one in Africa should be making bed nets, given the costs and incentives of fighting against free Western imports. But it would be much better for Africa if locals were making bed nets! They would benefit economically and long term be better off against malaria. This is also true for food donations, etc. (I happen to think that infrastructure donations can still be good, especially if local crews are hired for the work). A more difficult one can be related to international relations. Western nations have often propped up local leaders in order to try to "help" and made things worse. Autocratic leaders, warlords, corruption, sustained by outside forces in order to make the situation more legible and accessible to Western interests (including charity!). "Should the US arrest/assassinate [African Warlord]?" is a good question that may have extremely positive end results, or it could result in more chaos and instability or just bloodshed increasing before returning to pre-intervention levels - all of which we have seen from US involvement over the last 20 years (living memory for many of us). Even killing/arresting a really evil warlord may not be a good idea, morally or practically.
I'm certainly not against trying to learn more or make good decisions. I think we need to be honest about our first level results and also the unintended consequences of our actions. At the far stage of being a citizen in a Western nation, the steps necessary to make those decisions *well* is a massive ask with no necessarily positive conclusions. I don't think there's any moral failing to not do that research. I see that as very different from seeing and knowing about a particular issue and intentionally ignoring it. I also recognize that some people feel like they've done the research and have come to correct conclusions and are willing to give money or otherwise help. I do not try to talk them out of helping, even if I doubt the efficacy of their long term actions.
Hundred percent agree about bandwidth, but it often cuts in the other direction. I give ~10% of my income to GiveWell, I volunteer with local charities that service people in my immediate community, and I try be a good family member/friend/coworker.
I can assure you that the anonymous giving takes up the *least* mental bandwidth *by far*. Once a year, I skim some blog posts and log into a website and make some quarter-assed decisions about how much to give based on my blog skimming.
The virtuous obligation that I've been shirking because it's more demanding and I'm not sure what I should even be doing is a duty to help a friend through some rough personal circumstances--I think they need both some tough love and some genuine support and I'm not even sure how to provide the right mix of those.
Because of GiveWell and the like, even anonymous donation locally feels more illegible and hard to parse than giving far away: I am far more confident in the value of vitamin A supplementation charities in Africa than in the value of my local food bank or addiction hospital.
As for unintended consequences, again, it's really not clear to me that this pushes more against anonymous far donation than local donation/volunteering. Is it really more likely that I'll be creating bad incentives by buying bednets for poor Africans than by funding my local addiction hospital, and possibly subsidize and lower the cost for drug use?
Am I more confident that I know the right thing to say to my friend that won't on the one hand be too supportive and won't impress upon them the need to make different choices, and on the other won't be too harsh and alienate them from me?
Like, I get that there are abstract reasons why things close to you should be more legible than things far away, but those aren't the only considerations--as I noted elsewhere the problems in poor countries are often lower hanging fruit, instances of problems that rich countries have already solved. Maybe that makes those problems more legible.
At the end of the day, we have to actually *evaluate the legibility* of different things, not just fall back on first principles.
We can actually look at the track record of US interventions and see if we think it's a good use of money to get what we want! If we look at other interventions and don't see catastrophes, then that is itself information!
Elsewhere it's been pointed out that the fishing with bednets issue is well-known, and mostly dismissed as a serious concern by people who have looked into it. Maybe they're wrong, but at some point you have to actually evaluate the object level arguments, whereas to me, way too much of the legibility discussion ignores *what we actually know* about different interventions. And if people want to dispute whether we actually know what we think we know, it's on grounds of basically general skepticism.
I'm not totally against this; I don't like the turn towards longtermism for that reason, but I think that's because it's really hard to evaluate interventions whose payoff is by hypothesis in a world totally different from ours. But malaria, vitamin A, cash to poor people--they don't have that problem. You can study them, and people do! You should use that information in your decision-making!
Again, it's uncharitable, but it's hard not to interpret some of the responses as being isolated demands for rigour: oh sure this intervention is well studied and here's twenty GiveWell reports on it, but like, c'mon, can you ever really _know_ anything about rural Kenya?
TBC I'm not accusing you of this, I think the basic point you're making is reasonable, and from what comes through your comments I don't think you have an unreasonable attitude. But I think what I'm describing is a real failure mode, and anyway even in the more reasonable case, I still think it's better to try stick to object level here: does the evaluation that GW and others do give us sufficient basis for knowledge to drive our moral decision-making? I think the answer is clearly, "yes, at least somewhat" for basically anyone in these comments who isn't a radical skeptic, until such time as someone makes an actual object level argument to the contrary.
Thanks for your response. Again I agree with a lot of it, particularly what you're saying about a studied problem at distance compared to a novel problem close by. I agree that it can be really hard to give advice to a close friend while sometimes easier to determine the needs of people far away - immediate disaster relief comes to mind.
My major reason for pushing back on the thought experiment has to do with how legible and immediate "drowning children in a pond next to you" is, compared to just about every other possible intervention. Giving to GiveWell, or honestly just about every other charity and intervention possible, is not particularly close to that. As a thought experiment for "should I help people in need?" I guess it helps us get to a "yes." But is that really a surprising take? Did we need a world famous philosopher to answer that question? Not really - it's been baked into most moral systems for about all of human history.
The legibility implied by the thought experiment requires understanding how real the problem is, and how much we can positively affect the solution. The further away the problem is physically, the harder it is to do both. And again, not because distance matters so much, but because the further away you are the less you truly understand what's going on. Culture, customs, economic conditions, moral systems, all can be different than you expect. Offering pre-natal care in Ancient Greece would seem like an easy win. Keep those babies alive, when they would otherwise die. We have the tech, and we know the babies are dying. But it would turn out that exposing babies with the intention of them dying was a cultural habit in much of the ancient world. If you were saving these babies you might be causing communal strife and the people there might kill you or chase you away. We consider that morally abhorrent, but in their society it made sense.
That's not to say that we cannot understand the needs elsewhere. We often can. But never so clearly as a child literally dying in front of us where the solutions are obvious, the need is clear, and we're the only ones who can handle it. Trying to extrapolate from such a clear example to far less obvious examples can lead to unexpected consequences. Again, I don't have a problem with people giving to charity, or GiveWell specifically. I think they're good people doing good. But I also don't have a problem with people saying something like "I'll stick to helping people closer to myself [culturally/physically/philosophically] as I better understand what is needed."
I don't think we're actually that far apart, but I just want to say two things:
"Did we need a world famous philosopher to answer that question? Not really - it's been baked into most moral systems for about all of human history.”
As far as I can tell, Scott is on this whole kick because he keeps running into people who don't answer this question affirmatively, and who keep asserting that it's _not_ because of the practical considerations, but because they truly don't believe anyone has any moral duties to people who live in a different country/have different ancestry/whatever. Maybe these people aren't important, maybe they're just trolling, who knows? But it strikes me as at least possible that there is an audience here on this very blog who do need this lesson.
" But I also don't have a roblem with people saying something like "I'll stick to helping people closer to myself [culturally/physically/philosophically] as I better understand what is needed"
I don't either, but I would encourage people to make sure they're being honest with themselves about whether that is actually their reason, and whether it's true that they really do understand the local problem better. And most importantly, that they actually are doing something local! Not just using it rhetorically!
Peter Singer may be needed to talk about helping African adults instead of local kids, but a kid drowning in a pond where you're walking doesn't bridge that gap. Just about every moral system in the world will tell adherents to save the local drowning kid, no need for anything special.
I think that's where the breakdown exists. Some people hear about the drowning kid and decide that also means they should save African adults. The people who disagree with Scott say that there's a leap in logic involved there, and don't see the connection.
What I feel is that because of the disconnect in certainty (about both the certainty of the need and the ability to fix it), there's a lesser moral requirement to help people who are further away. The less you know about them the less moral duty (but I still agree with your earlier comment that intentionally not knowing to avoid moral complications is also immoral). And I think on some level everyone agrees.
If I told you that on one of Jupiter's moons there was a species of intelligent alien life who were living in misery and desperately needed our help, you would give that significantly less credence than humans on earth. And you should. You don't even know for sure that they exist, and it would be really really hard for humans to help them. Should we spend trillions of dollars developing a space program that can reach them? That's an extreme exaggeration, of course, but I think is in the same direction as mine and a lot of other people's hesitation about treating distant issues equally with local. There are levels of knowing that can make saving that species more and more important, for instance proving they exist, that they are miserable, and that we have the ability to help them. Each of those steps is important, though, and as a contrived example none of them has been or at this moment can be proved. How much money, time, and effort should be spent figuring out whether there is such a species and what their needs are? How sure are we that our intervention in their species would be a good thing, even if they are miserable? I can understand and agree with the argument that the number is above $0, but I believe both of us agree it should not be a significant number even if I had not just made them up for this conversation.
Not been to Zimbabwe, but rich/well-off Indians seem to be fine with ignoring the poor around them. As are Russian oligarchs. And mostly, I agree: the 1 drowning child is in a unique situation. Rescued, it is fine. While the dirt-poor of today are dirt-poor tomorrow. - Also, as we now are aware how many lives were saved by USAID each year: paying taxes might be a sufficient sacrifice. It sure is a substantial sacrifice - and all for the "common good" we are told.
First of this sounds like: “Others in other countries are not as good as us, so we don’t need to be good”. Second it doesn’t even seem to be true.
The world’s most philanthropic individual, (100B+ donated) is from India. 2 of the 10 largest philanthropic organizations are from India. It also seems like 800 million people in India are availing free or subsidized food from the Indian Government. To note, only 2.2% of Indian population pays income tax, while it accounts for ~40% of government revenue.
It seems like Indians are doing what they can to fight the poverty around them, and helping them by providing a fraction of 1% of the world’s richest countries budget, doesn’t sound that wrong.
Ok, naive take: in the nearby child case you are the only person that can save the child. In the far away child case, you are one of very many people that could act on it, so your responsibility is much smaller, due to a large denominator. If a child was drowning near you, while 100 other swimmers and life guards were in the pool, it seems fine not to jump in and assume someone else will take care of it.
That seemingly innocent child will grow up to eat a hundred thousand innocent shrimp. But it's OK because those seemingly innocent shrimp would have grown up to eat ten billion diatoms.
It was a silly Severance reference - shameless effort to reintegrate my online world and my televisual streaming world - but are shrimp charismatic megafauna compared to pill bugs? Could people eat pill bugs instead?
I had a pet trilobite as a child, that I got at the natural history museum - I guess because I could afford it. I really loved that thing, without even realizing it was a fossil of a living creature.
ETA: I don't see pill/doodle bugs as much as I did when I was younger. Maybe it's because I'm not so fixated on the ground.
Individually, I think it's a mistake to look at this as a question of obligation. Instead I'd say that moving to a global community provides way more opportunities for good, but those opportunities are failed by our traditional moral intuition. As often as I think utilitarian thought fails as a practical matter, it's really the only thing that can help in this expanded context.
When I de-converting from fundamentalist Christianity, one of my bugbears was the missionary question. Essentially, Baptists believe that a person who is "innocent" - who doesn't know Jesus existed and so cannot reject him, like a baby, cannot be sent to hell. There's no concept of purgatory in Baptist faith so these people get basically free admittance into heaven. The rate of successful proselytizing is really very low. So mostly Baptist missionaries are just condemning souls to hell.
There's no good answer to that in theology because you're not God. But if you were, you might redesign the punishment and rewards system a tad. You might say "okay well learning about a new problem doesn't obligate me to it, so I shouldn't get a negative morality score for failing to solve it. On the other hand, I can get some morality points by helping to solve it.
I don't like this viewpoint because I do feel there is some obligation to solve problems you become aware of, and that it's blameworthy not to seek out problems you can help with solving. But in my mind we can get around that by basically saying "every person has an obligation to use some portion of their resources on others but not all of them. However, going above this obligation is praiseworthy."
I notice this doesn't solve the subtextual issue in this piece - Tax dollars are a communal resource that I *have* to pay even if I don't agree with their use. We don't require that I give to charity generally. Why should we include charitable causes in our tax expenditures? And the answer is "well, because it's a tiny portion of those expenditures and the government can do this better than any private group, because a majority of people agreed to do it this way, and because giving in this way increases our soft power in the world at large and makes it more likely we'll get reciprocal benefits."
All of those are a bit shaky, though. The first doesn't answer the question, the second is...really bad, because the whole point of having the debate is to convince a majority to do it, and the last is both not a moral argument and not really falsifiable. My actual rationale is more like, "Jesus Christ, dude, we spend a trillion dollars a year on pension programs that have grown far beyond their means and you're arguing over the $0.05 we took from you to cure TB and save multiple lives? What the hell is wrong with you?" But that is also not an argument, just my own moral intuition.
> If you end up at the death cabin, you don’t have an obligation to save every single child who passes by
And if the cabin owner *does* save supererogatorily many children, then they should be widely recognized as heroic and saintly, have movies¹ made about them, etc.
I mean, *I* think they're rad people, but I'm not sure The Courageous Heart Of Irene Sendler was a big hit, and that's like the closest possible analogy. If generic wealthy-nation-dwelling people thought that straightforward acts of goodness were intrinsically valuable they'd probably be doing more of them.
I prefer the many worlds interpretation of ethics. That drowning child, like everyone else, is a quantum immortal. They're going to be just fine in their branch of the wave function no matter what you do, so you don't have any particular moral duty to help them.
In fact, for some children in distress, the many worlds interpretation of ethics says you should probably try to kill the child to reduce their chances of continuing in a branch of the wave function where they have survived but are maimed or traumatized...
> "Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
Then wouldn't it be a more effective use of time to go help with the other 998 causes of death in the city? You can alter these unmentioned details in whatever way you like to make the thought experiment less convenient, but in real life, there are typically better uses of your time than saving lives one-by-one at a linear rate, and our moral intuitions reflect that.
Maybe rather than buying infinite mosquito nets, it's better to invest in a biotech startup that could eradicate malaria entirely. Or maybe providing constant support to someone can lead to a Black Queen's Race where they lose their own capabilities. My reasoning is motivated here, but that's the beauty of capitalism: in the same way you don't need a pricing czar to decide what everything should cost, you don't need a morality czar to figure out the best charity from first principles. Let the market compute what people need, and if Our World In Data is right, everyone benefits.
To give an extreme example: if you actually think the singularity is happening in two years, then all of this is a moot point anyway, for better or worse. The dude in the cabin should pivot to AI, or better yet, put a net in the river to catch all these kids.
For a different framing of a similar argument, I have a Kantian friend whose argument for why we have to save a drowning child but not every drowning child runs like this:
We all have a perfect duty to avoid servility. If we were all servile, then nobody would be working towards their own ends, and so servility as a concept would become nonsense: everybody is just spooning soup into everyone else's mouths forever.
We have an imperfect duty to avoid indifference. That means, when we can, we need to avoid indifference: if everyone is a Sociopathic Jerk, then nobody is going to do anything moral ever, which is not in anyone's interests. But if the duty to avoid indifference were perfect, that is, if it can never be annulled, then we would violate our perfect duty to avoid servility, because we would all be waist deep in so many rivers 24/7 pulling out children, who would then grow up and wade into their own rivers, and so on.
***
David Friedman objects to Rawlsianism on the basis of its assumption that angelic beings would be so risk averse. In response to this post, I can imagine he would say that we would not agree to the demands of a just society, because that might not be in our interest; if we are willing to accept the risk that we are going to be the one kid in the hole, then we get to live in Omelas.
My intuition is that as an angel, I would have a greater risk appetite than a Rawlsian angel would--and yet I also have a strong intuition that society's should be just. I wonder what would convince my angelic self to assent to the demands of justice, or what a non-angel-powered argument for justice might look like.
I don't think you need the full Rawls conclusion that you only care about the worst off; the point is just that you'd almost certainly conclude you have some reciprocal duties to other people, that would imply the responsibility to save drowning kids in "generic" situations.
The obvious difference between all those drowning children and saving the lives of third-world people is immediacy and ability to judge effectiveness. Whenever a thing is mediated before it actually does any good, and whenever its effectiveness is in reasonable doubt, it is less morally urgent.
For all you know, sending money to save the lives results in the regional war-lords living high on the hog and deliberately keeping people dying because they make good advertising for aid grants.
I would agree, foreign aid is difficult and sometimes results in harm. If only there were some organization of experts who rigorously vetted charities to ensure that they do good responsibly and effectively!
Of course you shouldn't necessarily trust third parties, but Givewell is not a black box you have to trust. if you're skeptical you can consult their published research.
If you did have to do research to know how to save a child in imminent danger, do you think that wouldn't be morally obligated? I remember reading a Spider-man comic some time ago where he saved a civilian in need. The civilian was in a bus that was dangling off the edge of a bridge, and to save him Spider-man needed to make a bunch of quick mental calculations on how far the distance was, what the right type of webbing to use is, etc. The illustrations had visual representations of complex math in the background, like that confused math lady meme https://static.fanpage.it/wp-content/uploads/sites/6/2019/09/math-lady.jpg
Is the fact that Spider-man had to calculations enough to make it so that he's not obligated to save them? What if he wasn't able to do them mentally, and had to take a few minutes to do the calculations by hand? What if he forgot a physics formula and had to Google it? It seems like none of that matters when a person's life is at stake. The reason you need to save drowning children is not because it specifically costs you a wet suit instead of time doing research, the reason you should save them is because it saves a life at trivial cost.
I find so much value in the rationalist toolbox and community, but sometimes it's good to read things like this to remind myself why I'm still a virtue ethicist instead of utilitarian :).
Maybe I'm weird, but I'd love to live in the cabin. In real life, doing good things in person is hard, because you have to figure out how to put yourself in the position to good things, and doing good things by donating to charity is hard, because you have to ask how much to donate and where and so forth. If you live in the cabin, you just get to rescue drowning children all the time and straightforwardly do life-saving levels of good.
But if you miss a single hour, a kid dies and that weighs on your conscience. You can't go to bed without thinking of the children just outside who will die as you sleep. It's much easier psychologically for me to donate to AMF and then compartmentalize that; if tens of thousands of malaria-stricken children were right outside my doorstep I'd go mad. It's rewarding to save one or two drowning children, but a never-ending stream would be hell.
I can make it my problem and do my best to solve it. I'm not sure *how* I'd solve the problem of the children dying when I sleep, but at the very least I could hire a lifeguard or two for a night shift and find the money somehow. Maybe I'd ask the parents for donations to cover it. Maybe I'd put an ad out in the newspaper for people that want to volunteer to save drowning children for 4 hours a night pro bono.
You could address malaria. There are anti-malarial drugs. There are insecticide treated bed nets. There are people who can treat pools of water where mosquitos breed. There are insecticidal foggers that treat neighborhoods where disease is detected. There's are under-leveraged technologies for fighting mosquitos. (sterile males, etc.) And there are charities set up to implement all this if you're not able to be there in person. With a charity, you don't get the benefit of personal recognition. But if you have resources you're willing to direct to the problem there are people who can make use of those resources.
I am aware of all these things. It's a motivation problem. In practice, if we compare the amount I currently donate to address malaria to the number of children I'd be saving if someone made the drowning children cabin my problem, I think the thought experiment version of me does better.
Or you might build your little cabin and explore the remnant forest and write a great book that many people find inspirational, so much so that it furnishes one of the texts instrumental in fostering environmental awareness - which further leads to some non-human life being allowed to keep living their lives, instead of losing them to extinction, at least for a time, until people forgot they ever cared about that.
But then people would go online and complain that your friend's wife did your laundry because she was in love with you, and you did nothing to stop her; and instead of nature study your time would have been better spent on laundry.
So you'll want to make sure that you carve out time from saving children to do laundry, otherwise your efforts will be judged wanting, and possibly "illegitimate" due to your laundry privileges.
Alternative moral construction: We evaluate our relative morality by comparing it to what an average person in a given situation would do.
The average person seeing a drowning child in an isolated situation would, at least we would like to think, rescue that child; so we see a moral obligation to rescue that child.
In a society full of drowning children, the average person has become desensitized and keeps their head down. So there is no moral obligation to rescue any children.
But the trade-off there is that we see the person who rescues drowning children when nobody else is doing so as "good" in some deeper sense. In the first case, the person who rescues the isolated drowning child might be seen as heroic, but not necessarily morally "good", because we'd expect anybody, even a relatively bad person, to make such an effort; we want to recognize some kind of virtue in the act, however. They'll probably make at least the local news, and we'd upvote stories about them on social media.
In a world of constantly drowning children, the person who goes around saving children may never make the news, but we will instead regard them as morally good.
Morality is, in some significant sense, deviation from the average; an evil person does worse than the average person. A normal person just does what everybody else does. And a good person goes above and beyond the average.
Trying to impose an obligation to be good is fundamentally confusing what goodness is, in this framework: To be good is to exceed your obligations.
1. Is there some element of area knowledge or certainty of delivering effective help at play? If we rescue a drowning child, we know that we've done such a thing. If we go into an unfamiliar neighborhood, we don't know if the scared person running past our house is escaping a murderer or fleeing from a crime scene of their own making. Ted Bundy lured some of his victims by playing at being injured and asking for help. Telemedicine would be similar to presence unless we think that the choking Chinese medical student is more likely to be pulling a scam than an in-person one.
2. I agree that, in terms of social obligations at least, the demands we make on people need to be limited and sustainable, if nothing else than to protect people from altruistic paperclip maximization. There are people with their one issue and a kind of tunnel vision who want all resources diverted to their pet issue and have little concern for other people's values.
3. I personally think that the notion you describe of having to help people just because they're close to you is a heuristic that many people *do* employ and that it *does,* explicitly, lead to socioeconomic segregation for *exactly* the reasons you point at. The question then becomes: who is your community? We could justifiably say that someone who got rich by running a factory might have a stronger obligation to their workers or their workers community and play to that. Some people argue for state or national level taxes to try and expand a person's community to fight this trend, as you mention. But I have a bit of sympathy for supposed 'slum lords' who do horrible things but who also likely have many more horrible things done to them for trying to service a poor neighborhood. There is a lot of real, justifiable incentive to just get out of a problematic area and not be associated with that area at all, and a kind of 'tying people to a sinking ship' type of outcome if that's not allowed. "Touching a problem" really can drain everything you have, or leave you dead. If community standards are pathological, it may be justified for a person to cut ties with that pathological community and find some other community which better matches their notion of what a social contract looks like. If they go back to their old community, they can go back on their own terms. I feel the common expectation for unbalanced reciprocity between one group and another is harmful for creating exactly these types of conflicts.
4. There's an old standard of 'raising a hue and a cry.' If you're uniquely aware of a problem, there may be more of an impetus to rally the community for help. To use your phrasing, you may bear a greater burden of trying to get a lifeguard hired if you live near the river even if you're not supposed to save every child yourself.
5. I like Rawls and his veil of ignorance. I think that, practically, it tends to lead to disagreements because different subgroups have different values in terms of what is good which are directly tied to their existence as subgroups. The person who owns a factory is more likely, statistically, to value investment. The person who works at the factory is more likely to value immediate consumption. To an extent, this difference in culture, either individually or generationally, is how the two people found themselves in their disparate positions. We can rail against over-consumption in either scenario, of course, so there are also some potential common values in either case. To return to your analogy, there will be some practical differences about which children are worth saving. For example: Save one child today vs save two tomorrow. And going back to the standard of certainty, do we trust the person who says that they're planning on saving two children tomorrow? Our moral heuristics really should account for the fact that we're all imperfect people with imperfect understandings of what is true and what is likely and what is good.
I think the third puzzle piece that you're missing is that our moral intuitions judge people by their intentions and not by the results of their actions. Constructing elaborate scenarios where the outcomes are certain and easily measurable misses this. In the real world the homeowner protests both the construction of the dam and the society that allows children to drown and is considered morally just regardless of the efficacy of their protest.
For there to be a moral obligation, I think you have to not merely "touch" the situation, but touch it **in a way that people would implicitly rely on**.
I spot a gold bar in a pit, and yell aloud "whoa, look, a gold bar in that pit." Another guy walking nearby hears me, we both stand there and look at the gold bar, he grabs a rope and says "I'm gonna go down there and see if it's real!" I never suggested he do that. But by pointing it out and standing there with him, saying nothing when he proposed his plan, and watching him descend, he now has a pretty reasonable assumption that if the rope snaps and he needs aid I won't just walk away. I am acting consistent with somebody who is on Team Gold Bar Retrieval.
If I'm the doctor working remotely, I'm on Team Surgery. If I'm hanging around the neighborhood pool, I'm on Team Pool Party. Even if my only explicit undertaking had nothing to do with the person actually in danger, everyone else there would expect implicitly that I'm on The Team. If you're alone at the pool at night, do you feel differently than if you're surrounded by strangers, with regard to your risk, because presumably somebody will aid you? Of course. If you want to do a crazy risky dive and everyone there says "ok dude but that's stupid and it's on you if you kill yourself", they are no longer on your Team.
I don't think you have any obligation whatsoever to save any of the kids in Drowning Kid River because under the original facts nobody is relying even implicitly on the assumption that you'll aid them. But let's say your uncle, out of generosity and wholly at his own expense, operated a Kid Catching Contraption with a net and a hydraulic platform that successfully saved 99% of the children, and that Megacity was aware of that. In fact, that's why it's #999 on their list, the Kid Catching Contraption does a fair job all things considered, 1.5 dead kids per week in a town of 50M is plausibly the 999th most important thing to deal with. Your uncle never promised to run it forever for free, but when the city sent him a letter of thanks signed by the Mayor and a giant gold-plated ceremonial key to the city, your uncle didn't object. Parents in Megacity start to talk about the Kid Catcher Cabin downriver, everyone knows the directions, a lot of people have had to go pick up their sopping wet brats from the west bank, waving at your uncle as they drive away. At this point, operating the Kid Catching Contraption is impliedly part of the deal for being the guy who owns Kid Catcher Cabin. I think in this scenario, you would have an obligation to continue to save 99% of the children if you accepted the Cabin, because everyone involved is relying to their detriment on the assumption that you will continue to do so. However if you published a notice in the paper explicitly saying the Kid Catching Contraption would shut down in 60 days, cc'd to the desk of the mayor, put up a public notice at the Kid Collection Site across from your cabin, making it super clear that somebody else is going to have to address this problem, then I think you're free.
I have never once represented being on Team Starving African Kids. (The closest I have ever come is a one-time UNICEF donation, made to prove a point to somebody back during the covid hysteria about the relative global danger of dysentery vs covid.) The starving African kids have no expectation of my aid, and no reason explicit or implicit to believe I am obligated to aid them. Absolutely nothing they do is done in reliance on any expectation of my taking positive actions to aid them. Even Singer's original drowning kid may have, in the back of his mind, the idea that it's okay to swim in pools because you could yell for help-- like maybe the fact that foot traffic in the area means somebody may happen to walk by and so it's 5% safer and that tilted the scales to where the kid does it. I don't even have that extremely attenuated connection in regard to starving African kids.
I have little to comment, other than that I largely agree, and would like to point out that this line of thinking sounds a lot like role ethics (as seen in Stoic and Confucian philosophy, which I discussed in another post). E.g. "team surgery" is equivalent to adopting the social role of a doctor.
Thanks, I went back and read that comment, and it is interesting, I had never looked into that "role ethics" as it wasn't covered in my Ethics course at college (we did the usual Western philosophy survey of virtue theory, divine command theory, natural law theory, consequentialism).
I spent most of my adult life working in a job that had some public utility function, so I was always on Team Civil Rights or Team Public Safety just by virtue of showing up 40 hrs a week, and it's easy to determine your professional obligations in that context. What you describe as the situational roles are where the harder decisions enter, deciding if you have assumed a duty under these circumstances to perform in this role. I think where I'd disagree is that "being a human being in the cosmopolis" doesn't seem like a well-defined role, and if you define it to require aid to remote strangers then the definition is just swallowing the question. But it's an interesting framework, and I'm glad you brought it to my attention.
there are two reasons that I, personally, have put a pause on helping the drowning child.
I don't know if you can make this into a unitary thought experiment, but to me it seems like I was helping this drowning child, and we got out of the lake and I went on my merry way only to see him sprint back to the lake and throw himself in again. how many times am I obligated to save the suicidal child? maybe he he grows out of it, but there's every chance he becomes a suicidal adult.
The other one: I was contributing into a pot for a lifeguard to save all these drowning children, when suddenly a coalition formed of people with much more power and money than me who have committed to not using the money to hire a lifeguard, and further plan to make it as hard as possible for such a whip round to ever happen again.
So, now I only help drowning children where I know I'm not shoveling sand into a bottomless pit, and I know I'm not contributing money to what will eventually become the not hiring a lifeguard pool.
I would assume that the people like Scott who call themselves "effective altruists" have no problem with people discontinuing altruism that isn't effective and being more selective about it. If you throw away some of the weirder EA stuff like longtermism and shrimp welfare, your objection is basically the whole point of what they're doing.
I agree that people are sloppy about their assumptions, you'll hear that Narcan/naloxone saves X lives per year, but if you consider how many of those people just OD again a few weeks later it's not as effective as advertised. I don't know to what extent EAs factor such things into the equation. Theoretically, if you did factor that in, and it were still cost effective, I think they'd be fine with it, whereas maybe you'd have a principled objection to throwing good money repeatedly at bad people to save them from their own volitional acts, and if so that's where your objection would diverge from theirs.
"I went on my merry way only to see him sprint back to the lake and throw himself in again. how many times am I obligated to save the suicidal child? maybe he he grows out of it, but there's every chance he becomes a suicidal adult."
Have you considered that maybe this is just motivated reasoning to avoid the obligation to donate? I don't think it works as an analogy for the actual situation these Africans find themselves in. Not like the warlords hold elections.
It's not *motivated reasoning*. It's an attempt to *formalize* why we don't want to donate. The original proposed moral rule doesn't fit our intuitions well about when we have to donate, so we try to figure out why we don't want to and modify the rule to take it into account.
Rationalists are notorious for creating rules and trying to follow them without sanity checking them. A rule that says that we must donate under circumstances where nobody thinks they must donate has failed the sanity check.
The solution, in both cases, is in forming a coalition that can pool it's resources and effect a larger scale solution: institutionalizing the suicidal child, or mobilizing political opposition to the "Never hire a lifeguard" coalition. You aren't morally responsible in isolation anymore, but your moral obligation has shifted to becoming part of an organized effort to effect larger scale change.
When you make statements like "it seems" and use pronouns like "we," you are making implicit psychological claims about an underspecified group of people without any empirical evidence. If this is your methodology, then a priori you should be explaining why most people DON'T think like you - otherwise what are you doing if people a priori already agree with your moral values?
Also, seemings, like tastes, are agent relative. It doesn't make sense to just say "Carrots are tasty" like this is just a plain fact. It also will do nothing to persuade people who hate carrots or have pica and eat weird shit rather than carrots.
Your inquiry is propped up by a house of cards that crumbles quickly because the presuppositions don't have much rational appeal to people not already stacking cards in the way you are - which again, if was true of most people a priori, you likely wouldn't be defending things that "seem obvious."
To be fair to Scott, he's arguing prescriptively, trying to come to a conclusion about what people *should* think, not objectively explain what people *do* think. Whether or not this is a worthwhile pursuit will vary according to one's opinion, I suppose.
The people who care a lot about the suffering of farmed shrimp don’t consider it their responsibility to do anything at all about the suffering of shrimps that get eaten by whales. I suspect that many of these paradoxes are due to a similar effect. The suffering of people in Africa is perceived as “natural” and not too different from the suffering that anyone would probably face if they started living in the woods away from civilization, modern medicine, etc.
A related but more concrete effect is the “it’s not gonna change anything” effect. A few (or even a lot of) mosquito nets won’t really change the fact that those people and many of their descendants will keep living in a malaria infested country, it will just be a bit more bearable. Malaria used to be common in many European countries, but looking back, any amount of mosquito nets would have been a fairly inconsequential blip in history, compared to civilization finally progressing to the level where it can just decide to eradicate it completely.
This might be a fallacy, but it at least has a psychological effect.
Another thing more directly related to the article: the coalition of angelic intelligences thing was pretty convincing to me as a prescriptive general rule, and I think it’s a good description of what many people actually think in practice, consciously or not. The thing is that while the post took it very abstractly, people might be thinking about a more concrete coalition of real people. And in that context, they might perceive someone living a life of subsistence farming in Africa as not really part of the coalition at all.
I think this is exactly right, only I'd dispute that "natural" is a good category--I'd say "within our control".
The issue is, we have some control over what is within our control; after all it was once natural for all of us to be completely at the mercy of disease, the elements, etc. but now if a crocodile escapes from the zoo and eats me, that's not perceived as natural anymore.
The point is that whales eating shrimp is a hard problem to bring under our control--insofar as we can imagine any avenues at all, they have huge costs, huge tradeoffs (like killing all whales), and may not even affect the scale of wild shrimp suffering.
But malaria in Africa *is not like that!* Or at least, there's plausible reasons to think it's not. Malaria in Africa may be natural, but it's likely that with a not overwhelming amount of resources we could make it otherwise. The EA argument is that when something is plausibly in our control with moderate resources, we should do it--don't let the only thing stopping you from ending a disease be the ill-defined consideration that it's "natural".
The second point I'm a little more sympathetic to, but I think is still debatable: most of us presumably would like our cancer treated even before we live in the world where cancer is perfectly prevented, or police to arrest assailants even before we live in the police abolitionists's dream world where no one is compelled to commit crimes in the first place.
Moreover, is it not possible that nets are part of a strategy precisely to eliminate malaria in poor countries? It should be pretty difficult for an intervention to be cost effective, have a big impact on malaria incidence, and yet be unable to roll out as part of a strategy of complete malaria eradication.
"The people who care a lot about the suffering of farmed shrimp don’t consider it their responsibility to do anything at all about the suffering of shrimps that get eaten by whales."
Bring back commercial whaling! How many shrimp (poor, innocent, suffering, cute little shrimpies) does one big bad whale eat over its lifetime? Clearly the utilitarian calculation of the greatest good for the greatest number means the whales must die!
I mean, Rawls' original position led to a different system than what most rats believe. For what it's worth, his original position led to a different system than what most people believe, to wit. I think the ultimate point is that we as humans aren't very moral--but viz, what normatively should happen (we live in a world where positions are given to merit, ethnostates, working class dictatorships) is generally diffracted and most certainly not one to one with the real world as a whole, which isn't... which is pretty normal. I mean, it's a feature of Christianity. I'm not sure how much the whole real-world/moral-world disjunct is thought about in the literature, but at least on an empirical basis qua central limit theorem if there were some objective scale to "goodness", we'd all fall within 1-2 SDs of it.
The drowning child thought experiment leads to what some would consider a reductio ad absurdum. We should spend all of our money (or almost all of it; I suppose it would be okay to eke out a meager existence) on saving children in the most cost-effective way (let's say malaria nets). Of course, if we did that, we'd be essentially impoverished ourselves. And all of those other people who are living in precarious situations are also morally obligated to help out those people who are even worse off. Why buy food for your malnourished child when it would be more cost-effective to chip in for malaria nets for some other poor people? There's only one person in the world who's the poorest. All the other 8.2 billion people have someone they should be helping. They should cast aside every sort of luxury, every extra article of clothing, every bit of food beyond the barest minimum of calories and nutrients needed to live. Do whatever is necessary to help people who are worse off, even if you, in the grand scheme of things, are incredibly poor yourself. Eventually nobody has anything and we're living in some sort of post-scarcity utopia or something. Except I don't know how we'd have anything resembling civilization at that point.
I consider this somewhat parallel to Parfit's repugnant conclusion. And if utilitarianism keeps getting us to these absurd conclusions, maybe the problem is with utilitarianism.
The problem is that no one single moral principle, including utilitarianism, explains all human moral intuitions. We are attempting to reconcile competing principles, all held in the mind simultaneously, all more or less equally powerful in certain contexts, and which most likely evolved at different times and selected for different environments.
Morality is inherently context dependent, that is to say.
I always thought the Copenhagen Interpretation of Ethics was some sort of straw man argument that no one took seriously. I categorized it as one of those ethical koans that have no obvious best answer that people get obsessed with.
Speaking of koans, do non-Western cultures obsess over ethical thought experiments? If so, do they play headgames with cultural obsessions other than human lives?
Apologies if I sound dismissive of ethical thought experiments, but unless one is totally attached to a particular (unprovable) philosophical view, there are no right or wrong answers to these questions.
I would argue that no society obsesses over it, including ours. The people in this thread, and those who obsess over ethical dilemmas, are a pretty narrow demographic.
Unless you believe it all God's plan, the universe doesn't care who lives or who dies. And the rules of our universe ensure that everything will die.
But humans are social animals. And as a general rule, we extend the most effort to helping our family, our friends, our community, and our tribe in that order. And we generally extend extra assistance to children who may have no social connection to ourselves. Some humans may walk away from a psychological inability to connect to the fundamental behavioral codings of our species, but for the most part, if we saw a child of another tribe drowning we'd instinctively try to help the kid.
I think the Copenhagen interpretation of ethics is using the wrong metric. IMHO a better way to think about it - your moral obligation to help fix a situation should not be greater than the amount of information you have about the situation. This is a useful heuristic even under a pure utilitarian perspective - if you do not really know what is going on, your misguided attempts to help can easily make things worse! If you know for sure a child is drowning and you know enough about the situation to believe you are capable of helping (as opposed to getting in the way of mire capable rescuers) - yes, you should do it. Same for the chocking examples in this post. But if you are being Pascal mugged - well you do not actually know what is going on, so no major obligation to help. Starving children in Africa - do you actually know how to make the problem better without side effects that could make things worse overall? Will your money support food for children, or be stolen and support political corruption, which would result in more starving children in the long run? Will your money be used to buy food from wealthy countries in a way that would reduce demand for local produce, drive local farmers out of business and make things worse in the long run?
Traditional Copenhagen interpretation is then just a second-order heuristic - once you've interacted with the problem sufficiently, surely you have a lot of information about it, and should know how to properly help.
Morality is not generalizable like that, it's more specific. To put it in stark terms, if it's a child of your people, yes, if it's a child of foreigners, no.
The key point of morality, the extreme that limns it, is the case of self-sacrifice. But self-sacrifice has no "engine" outside of "us" vs. "them." You self-sacrifice for us (family, kin, clan, folk, nation), not for them.
Of course in real life, if the cost is small enough, you may sacrifice for a "them" if you're not specifically at war with "them;" if you can swim and it's no huge risk to you, you might rescue a child of ANY genetic distance, or even a puppy or a kitten ...
And this is connected to the relativity of morality: "us" and "them" are not fixed, absolute terms, they depend on various conditions (e.g. with the hypothetical alien invasion, the human race finally does actually become an "us," one race that we're all part of, but since under normal circumstances there is no greater oppositionality, we make do with the relative oppositionalities we have).
You are neglecting the effect of empathic distress. For very many people, the facial expression of any child in distress is enough to take personal risks in order to help them. Some people extend this to members of other species, as you point out. People have died attempting to save complete strangers.
This doesn't undermine your argument that morality is relative, and that the terms are not fixed. I'm just expanding the factors that affect the outcome.
Yeah point taken, although I should think that even that (and things like mirror neurons perhaps) are probably modulated somewhat by relative genetic closeness vs. distance.
Essentially, because having sex isn't the only way of raising the likelhood of your genes being passed along (for there is nothing special about the copies of those genes in your gonads as opposed to your cousin's say, or even more diffusely, someone in the same town), there will be an in-built bias or preference towards one's own kind in general (or to put it another way, gene cluster groups that DIDN'T have a fair number of people unconsciously thinking that way, would be that bit less likely to be helpful for the inclusive fitness of the members comprising them).
But it's probably true that for some people the markers of "innocent progeny needing protection" can be salient to the point of overriding the preference from genetic closeness vs. distance.
Still even those signals, strong as they are, can be overriden for the sake of racial preference (e.g. cf. the direction in some of the more hardcore Jewish religious teaching like the ultra-Zionist "Torah of Kings," that it's ESPECIALLY the children of enemy groups that should be killed, for obvious reasons).
Everything is an interaction, I find. The influence of genetic factors depends upon the environment we live in, and the effect of the environment we live in depends upon genetic factors. I would point out that while a preference for genetic relatives is one way to promote a gene line, it isn't the only way, an impulse toward cooperative norms with strangers can, in certain circumstances, also promote one's family (if we all feel a generalizable desire to help young children, everyone's children stand to benefit). It's a complex interplay of different factors, to be sure.
That's true, but for most traits the "nature" input is more impactful on variation than the "nurture." This is of course generally accepted for traits in individuals, but it's politically verboten to say for races and ethnic groups - yet I can see no reason whatsoever not to extend the same percentage weightings that are generally accepted for individuals these days (e.g. intelligence 60-70%/30-40%) to groups.
That being the case, stranger inclusion is always going to have less importance the more genetically distant the stranger. There are other good reasons for that too (e.g. avoidance of groups who have different diseases than yours, the difficulty of communication the greater the genetic distance, the more different the average thought habits and psychological traits are, etc.)
Again, the importance of "everyone's children benefiting" has limits. Kin altruism is pretty strong, but that fades gradually. The nation strictly so-called (the ethnostate) is the largest feasible grouping where "caring for strangers" (within that national grouping) could still have some relevance to one's inclusive fitness - but beyond that, as an extension to "humanity," not really, UNLESS (again) you were talking about an alien invasion or some great natural disaster - something that bodies up against humanity as a whole and firms up the outline of what "humanity" is, something that might affect everybody, like for example a catastrophic climate change or something like that.
Of course this being a trait like any other that will likely have a strong genetic component, some people will be altruistic to more distant strangers; some are even altruistic to other species. But that's not really where the average is (just like it's not at the other extreme of "kill everybody you meet" :) ).
"That's true, but for most traits the "nature" input is more impactful on variation than the "nurture."
First off, where do you think that has been established? If you are referring to the twin study methodology, then I disagree the research supports the conclusions you have drawn. In particular, we cannot conclude that if 70% of variance in IQ is predicted by genes, then only 30% is due to the environment. If you are referring to something else, could you share it?
Obviously, it makes sense that extension of benefits at some sacrifice to oneself should fall off as genetic distance increases, but the problem is that the individual has no direct way to reliably measure genetic distance. We use proxy indicators instead, including physical appearance but also more subtle things like facial expressions and similarity of espoused beliefs. This was not selected against because benefiting strangers has second order effects on one's own in-groups (esp. if there are other ties like similarity of beliefs). This can even extend to all humanity, if all humanity is linked by economic and other ties. Note that these ties were created by human behavior in the first place, which is presumably just as influenced by our genetic inheritance as any other set of behaviors. It isn't that surprising if many people have the same sort of generalized positive feelings toward humanity that they have toward strangers they pass on the street.
Whatever "secondary effects" you're talking about with strangers are going to be stronger with more familiar faces.
In some ways we don't disagree that much, I just think there's a steeper "falloff" than you do, the more strange the stranger in question. Emotionally I'm not against kumbaya, but it's basically rot: in terms of triage, min-maxing, you know more about and can better help those who are genetically and culturally closer to you, and bar emergencies those more distant can look after themselves better than you can too :)
Re. "nature" being more impactful, it's not just twin studies, it's the entire trend of science these days, and there are more and more layman's books about it (there's a particuarly good one that came out in the last 5 years or so but for the life of me I can't remember it offhand, it was a NYT bestseller type of book). I think it's generally accepted that the "weighting" for most traits is towards Nature. This was fought tooth and nail for a long time, but it's just become impossible for conscientious scientists to ignore.
The sticking point is racial and ethnic average group differences, and that's why the battle was fought at the individual level for so loing, because once that dam breaks, it's curtains for the idea of social engineering, and people are going to realize with horror just how horrific the 20th century was, dominated as it was by that idea. "Nurture" (uber alles) reigned for a long time, mainly because the job of demagogues, educators, etc., would be better if it were true, but it's just not.
Before we are anything else we're a certain "build" of body and brain, for which the blueprint is DNA. Obviously that "expects" a certain kind of environment (e.g. not a lava hellscape); obviously too, it's ULTIMATELY environment that shapes genetics, but that's over a longer term. But in individual and group terms (politically and morally) we have to come to terms with the fact that the late 19th/early 20th century view was more correct than what came after it (which stemmed from various forms of Left-wing wishful thinking, fromk Boasian anthropology etc.)
This post feels personal. I was an aid worker in DR Congo and South Sudan, and now I have a comfortable life in suburbia. And I still know so many aid workers who are, eg, doctors working long hours in terrible conditions with minimal supplies in those countries. They save lives everyday, and they could get an easier, better paying job in America tomorrow and have a more pleasant life. Many do.
$100 in Congo could buy vital supplies for a dying baby, or it could ship the American doctor some chocolate bars from home. But that framing takes a lot of expats down and sends them back to America where they don't have to think about it. The doctors that last are the ones who decide that they can have chocolate, cold cokes, and good internet in the field without being wracked by guilt.
"$100 in Congo could buy vital supplies for a dying baby, or it could ship the American doctor some chocolate bars from home. But that framing takes a lot of expats down and sends them back to America where they don't have to think about it. The doctors that last are the ones who decide that they can have chocolate, cold cokes, and good internet in the field without being wracked by guilt."
i've worked in the aid sector before (Peace Corps) and my best friend was actually an aid worker in South Sudan as well, so I can definitely identify wth this strongly.
What years were you in South sudan? Wondering if you overlapped with him (he was working for the relief and development arm of a very conservative Christian denomination, though that wasn't his religious view himself).
I was there 2014-2015, plus a short-term assignment in 2016. But I spent all my time in Juba, and mostly only saw expats from other orgs at church and frisbee.
If the point of this is to convince someone who would save a drowning child that they should also send bed nets to africa, it fails to engage the core distinction -- saving the drowning child is obviously net positive for the saver (reciprocity from the parents, hero on local news, etc), whereas sending money to africa is at best neutral, beyond the direct monetary cost (status might be negative due to "better-than-thou" social cost to friends, whose own "minimum socially acceptable charity" has now been raised a bit by you).
If the point of this is to convince someone, even someone perfectly altruistic, that, in general, their marginal dollar is best spent on the top rated GiveWell charity, it fails here as well. Historically, in civilization, humanitarian aid would have been mostly net neutral -- in a malthusian world aid simply accelerates the arrival to population collapse. Only technological and institutional progress can actually raise the long-term well-being of a population. So the question is not whether saving the life in africa is net good -- we'll assume it is -- but whether that marginal dollar is better spent on an OpenAI engineer's doordash, because it accelerates the arrival of advanced AI by a few hours -- AI that will eliminate scarcity at the human scale permanently and solve disease and hunger in africa forever -- which will save 100 african lives instead of 1. (Or vice-versa, depending on your beliefs, that marginal dollar is better spent attempting to food-poison the OpenAI engineer's doordash, so that advanced AI is delayed by a few hours, and the extermination of all humans is thus postponed equivalently.)
The meta point is that epistemic uncertainty is a 100% valid reason to not give money to africa, since it is not clear that direct aid is the best marginal use of a charity dollar at all, versus fundamental research, or indeed, just contributing to the economy in the general way of pursuing your own self interest. This is deeper and more complex and profound than we're tempted to think it is: to extend the AI example, if you do assume that AI is either some force for enormous good or bad and that almost nothing else matters for the future well-being of humanity (a perfectly rational position), then the moral coloring of _playing video games_ in 1996 -- the actual economic activity that enabled the current AI boom -- becomes quite stark. So complex and uncertain is the post-hoc measuring of moral quality, that a purely selfish, wasteful and innocuous activity like playing Age of Empires as a kid becomes the necessary accelerant for the transcendence (or extermination) of all of humanity.
I also think aid needs to account for role of Malthusian dynamics, but that doesn't mean that aid is futile. Even if each prevented death causes one more tragic death down the road due to resource constraints, you can still have a big impact by preventing, say, blindness (e.g. Helen Keller Intl).
Or, what you could do is contribute to a charity (or a government agency) which seeks to improve other nations institutional and technological progress (while still saving some lives).
First, great post—I like the idea of explaining some of our moral intuitions in terms of the practicalities of coalition-building.
Now I’m wondering this: Have other people noticed that the thought experiments intended to make utilitarianism look impossibly burdensome assume a world in which (like ours) altruistic behavior is rare, limited, and haphazardly deployed?
If we lived, instead, in a world in which (say) half the population believed in an obligation to help the neediest, that would easily be enough to ensure that only expensively cured diseases are fatal. In that world, the altruist doesn’t have much extra to do, because the marginal utility of looking out for one’s own interests (which we all do much more efficiently than addressing the problems of strangers—go capitalism!) would only occasionally be outweighed by the possibility of helping someone else. And when it is, it would almost always be to help someone nearby—far-away people will have other altruists near them who will help them more quickly and easily.
I’m not sure if the people making these anti-utilitarian arguments are aware of how much they depend the contingent circumstances that prevail on our planet. I understand that this is the only reality there is, but don’t ethicists like to think that their abstract arguments would apply to all rational beings, in whatever social configuration they find themselves in? Yet arguments around utilitarian demandingness seem to be clustered in a region of social possibility space where the behavior they want to argue against is already rare.
(The decreasing burden of altruism as more other people become altruistic suggests there could be a tipping-point dynamic. OTOH, that cultural equilibrium would *not* be *evolutionarily* stable, obviously. I digress.)
Scott’s support for Rawls’s original position hints at this. Our rational pre-selves would choose a world (not just individual behavior but a whole world) in which the rich agree to take cost-effective measures to help the needy. Scott says that it’s “virtuous, but not obligatory” to behave that way here in our world. I don’t know where to draw the virtue/obligation line, but surely one of the benefits of the more altruistic world is that our moral burdens are much lighter.
>I’m not sure if the people making these anti-utilitarian arguments are aware of how much they depend the contingent circumstances that prevail on our planet. I understand that this is the only reality there is, but don’t ethicists like to think that their abstract arguments would apply to all rational beings, in whatever social configuration they find themselves in?
I don't get what your point in bringing this up is. Even if we grant the premise that most or all ethicists are morons -- or that they "like to" be morons, whatever that means -- how does that weaken the case against utilitarianism? These seem to be two wholly unrelated questions.
I'm not even sure what to make of section 1. The point of geographic distance (and in the case of helping the future, time distance) is that those distances are also inferential gaps, ie the causal connection between you and the benefit passes through many uncertain nodes.
My take. In order to be morally obligated to save the drowning child, you need to be in a unique position to save the child. If you are an old man in an expensive suit, and you are watching a child fall in a river together with a team of professional swimmers wearing swimsuits, you should likely not be the first to jump in the river - as the swimmers are in a better position to save the child. If all the professional swimmers are psychopaths and does nothing, you are now obligated, because you are now in a unique position again.
If the problem is systemic, such as the case of the children drowning every hour by your cabin - now your obligation is to alert the local authorities (or society at large), and save all the children until you can get help, and eventually a permanent solution is put in place. If the authorities/society fails to provide such a solution, or are not interested in doing so, you are no longer in a unique position to save the children. Therefore you also have no special obligation anymore.
If you're the only one who cares about saving children in your society at large, I don't think think it makes sense to say you're under a moral obligation to do so. Of course you may still do that, and that would make you a hero, or at least a very good person. But I don't think failing to do so makes you immoral.
A good example would be a doctor in a third world country as others have mentioned. You can choose to work long hours for low pay in bad conditions saving lives of poor children. This makes you a hero. However, you are under no moral obligation to continue doing so. This is because you are in no unique position to solve this systematic issue (poverty). Other doctors, working normal jobs at home could do the same, but are not willing to - and in my view, they are not immoral to fail to do so.
However, if a doctor is on a flight and someone suffers a heart attack, the doctor is obligated to help - because in this situation you are likely to be the only one who can.
That's an interesting expansion of the typical way to frame these problems: there isn't just "good" vs. "bad", there's "heroic", "minimally acceptable," and "bad."
And of course it's just a small additional step to make the whole thing a spectrum with no clear boundaries at all.
Did you read something to change your view on Rawls from November when you wrote "veil of ignorance seems neither very original nor very good"? Genuinely curious
A policy is not just a guideline. A policy is a pre-commitment which reshapes the decision environment, to prevent bad outcomes which would predictably come from doing what seems to be the right thing at the moment.
For instance, the US used to have a policy of not negotiating with terrorists. You don't pay them money, or release their terrorist friends from prison, in exchange for their releasing hostages, because it both encourages more terrorism, and gives those particular terrorists more resources to commit more terrorism.
The problems posed in this post are designed to arrive at the conclusion that the people in the US should give their money to poorer people in other countries until those people are as well off as those in the US. The conclusion appears inevitable because all of the problems are posed in ways which ignore the future consequences, to the US and, looking even further ahead, to the world, and to the Universe, of everyone in the US embracing an ethics in which everyone acted in that manner.
Continuing to ask these questions in public is counter-productive, because the vast majority of people are incapable of looking far enough ahead to get a well-justified answer, and therefore will probably condemn as immoral anyone who does, since they're unlikely to get a satisficing answer by what is basically chance. Social pressures thus practically guarantee that only very suboptimal answers will be given publicly, leading to a consensus around harmful morals.
It's better to let people muddle along with their instinctive morality, which evolved to be both beneficial and evolutionarily stable, than to go all High Modernist on morality, and try to logically deduce how we all ought to behave, in a social environment which guarantees getting a poor answer. That's what gave us Marxist-Leninism.
There's a third option: Find a policy that effectively changes another society to become less of a burden on themselves and on us (in fact, a net positive).
Channeling my annoying teenage Objectivism: I think Ayn Rand's "The Ethics of Emergencies" is on point here. She says to help a drowning stranger in an emergency, but only if it's a *real* emergency — a rare, unanticipated thing. I think this helps resolve the "people drown in the river on the regular" issue as well as the Alice vs. Bob in heaven issue.
Part of our drowning child intuitions come from notions about "what kind of person" would let a child drown, as opposed to the narrow ethics of the situation. Someone (Sam Harris maybe?) has an analogy of learning that both of your grandfathers fought in WWII:
Grandpa Bob was a bomber pilot, and flew many missions dropping bombs on Dresden. Bob tells you that intelligence later confirmed that his bombing missions killed hundreds of civilians. "That's just war," he says.
Grandpa Jim was an infantryman and fought in ferocious urban combat during the liberation of France. Grandpa Jim tells you that one day in the heat of battle, he was fighting house-to-house, shooting and bayoneting enemies at point-blank range. One of the people he impaled with his bayonet turned out to be a teenage girl trying to escape. "That's just war," he says.
Even though Grandpa Bob clearly did a lot more harm from a utilitarian perspective, somehow your impression of Grandpa Jim changes more when you hear his war stories. I think this is because we have some general notions about what kind of person would be able to shrug off impaling a teenage girl with a bayonet, and we make some extrapolations about their expected future behavior (or we recharacterize their past behavior in light of new information).
Ditto for the drowning child analogy: we intuit that ignoring the drowning child is monstrous (partly) because of *what kind of person who would do that*. But that doesn't mean the intuition is correct!
That's an interesting example because it plays with our temporal bias the same way the drowning child argument plays with our spatial bias. Clearly you'd be at least equally horrified to live next to Josef Mengele in 1944 vs. a serial rapist.
I largely agree with John's analysis here, but the complicating factor is what the neighbors will think. You have become, after all, the kind of person who would live next to a Nazi, or a rapist. The deciding factor will probably be community norms.
This Copenhagen school is terrible and sets up really perverse incentives. It gives everyone an incentive to "touch" or become "entangled " with as little as possible to avoid responsibility or ever bring blamed for anything. This actually happens very regularly in organizations and because of the incentives of our legal system. Everyone just won't touch something, even when everyone knows it's bad and everyone wants to stop it, because the touchee is the one who will be blamed. For optimal outcomes you would implement a policy that was almost a total 180 of the Copenhagen school, such that touching something when one has a choice not to absolves you of some level of responsibility. Because at least you helped, or tried when you didn't have to, even if your efforts were a total failure. And I think that that is actually a lot more how people actually react on an intuitive level, when they're not sitting around trying to think of perfect rules that will apply to any given crazy hypothetical.
You and everyone else, are trying to make this more complicated than it is. Morality is a localized (to those people who follow your same rules or those in very close pshsical proximity) set of informal rules about what is "Good" and "Bad". Sets of moral rules are rarely totally consistent and are evolved through memetic selection to judge common issues. It makes no sense to ask about a river of the drowning damned, moral rules just don't cover that scenario.
In my moral system it is "Good" to rescue children from drowning. It is also "Bad" to ruin a $3000 suit by getting it wet. That's about where the morality of my society ends. It's up to you to consider tradeoffs and mechanisms.
The first thing that comes to mind is that by saving every single child that goes by in the river, the megacity faces no consequences for its actions and so will never bother to fix their problem. This differs from the occasional act of child-saving in that occurs in one-off scenarios because we are happy to accept that no system of child-protection is completely perfect.
The moral responsibility for such a systemic failure lies with the people having the children in the first place.
The only sustainable thing is for the megacity to eventually help themselves. Sure, you can spend a bit of time helping them help themselves but the vast majority of the work needs to be done by the megacity and the parents of the children. If this doesn't happen, all that occurs is resources are taken away from societies (and gene pools) who have figured out that caring for their children is a good idea and are given to societies what don't care enough about their children. This leads to the "don't care for children" society to relatively flourish while the "care a lot about children" society relatively declines. This is the exact opposite of the long-term morally good thing to happen.
Now obviously, this assumes relative equality between societies. If one society is composed of people who just had most of their arms and legs blown off as a result of a war they had no part in, it makes sense to help this party more with childcare since this isn't a failure of having a good society but of external circumstances.
>The first thing that comes to mind is that by saving every single child that goes by in the river, the megacity faces no consequences for its actions and so will never bother to fix their problem. This differs from the occasional act of child-saving in that occurs in one-off scenarios because we are happy to accept that no system of child-protection is completely perfect.
I think you might be missing the part where the megacity is unreasonably large, so all these kids *are* just the ones slipping through the cracks.
Then Megacity needs to be broken up, by force if necessary. In any case, moral choice is an inherently collective action. My city and Megacity need to sit down and work out how to save more children.
>And suppose that if Alice was in Bob’s situation, she would do even less, but in fact in real life she satisfies all of her (zero) moral obligations. If there’s only one spot in Heaven, should it go to Alice or Bob?
C. S. Lewis wrote on the Christian answer to this question in "Mere Christianity":
"Human beings judge one another by their external actions. God judges them by their moral choices. When a neurotic who has a pathological horror of cats forces himself to pick up a cat for some good reason, it is quite possible that in God's eyes he has shown more courage than a healthy man may have shown in winning the Victoria Cross. When a man who has been perverted from his youth and taught that cruelty is the right thing, does some tiny little kindness, or refrains from some cruelty he might have committed, and thereby, perhaps, risks being sneered at by his companions, he may, in God's eyes, be doing more than you and I would do if we gave up life itself for a friend.
"It is as well to put this the other way round. Some of us who seem quite nice people may, in fact, have made so little use of a good heredity and a good upbringing that we are really worse than those whom we regard as fiends. Can we be quite certain how we should have behaved if we had been saddled with the psychological outfit, and then with the bad upbringing, and then with the power, say, of Himmler? That is why Christians are told not to judge.
"We see only the results which a man's choices make out of his raw material. But God does not judge him on the raw material at all, but on what he has done with it. Most of the man's psychological make-up is probably due to his body: when his body dies all that will fall off him, and the real central man, the thing that chose, that made the best or the worst out of this material, will stand naked. All sorts of nice things which we thought our own, but which were really due to a good digestion, will fall off some of us: all sorts of nasty things which were due to complexes or bad health will fall off others. We shall then, for the first time, see every one as he really was. There will be surprises."
The sad thing is, this is all just a multiplayer Prisoner's Dilemma.
If each of us saved one drowning child, the problem would be solved. Instead, there is one person complaining that saving 1000 drowning children is too much work, and there are 999 bystanders yelling at him: "of course you idiot, saving 1000 drowning children is too much work, therefore we have rationally decided to save none".
Very well put. USAID was the equivalent of “everyone put a few dollars in this jar, and we’ll use it to hire some lifeguards” before Elon fed it into the wood chipper.
But also USAID was supposed to be "we'll hire some lifeguards" and then blossomed into "well and besides lifeguards, some of that money will go to my cousin Moe for hosting a party for the well-heeled in a foreign city, don't worry, it's all the same as if it went to the lifeguards!"
If Elon had just cancelled the party funds, we wouldn't be having this conversation. In fact, I'm pretty sure this specific post is in response to the people saying, "it's good Elon cancelled the lifeguard fund because the river was far away".
I always took the Copenhagen Interpretation of Ethics to be a joke, meant to help people realize their ethical inconsistencies. I don't think morality makes much sense if you take it seriously, as your various scenarios illustrate.
It seems clear that we have particular duties to certain people that we don't have towards strangers: our families, those we have made promises to, etc. But I don't think you can get around the drowning child experiment: yes, I owe him fewer duties than I do to my brother yet it is still the moral thing to do to rescue him. And the same for those overseas that I can help, but can't see.
C. S. Lewis had this to say about charitable giving:
"In the passage where the New Testament says that every one must work, it gives as a reason 'in order that he may have something to give to those in need.' Charity—giving to the poor — is an essential part of Christian morality: in the frightening parable of the sheep and the goats it seems to be the point on which everything turns. Some people nowadays say that charity ought to be unnecessary and that instead of giving to the poor we ought to be producing a society in which there were no poor to give to. They may be quite right in saying that we ought to produce that kind of society. But if anyone thinks that, as a consequence, you can stop giving in the meantime, then he has parted company with all Christian morality. I do not believe one can settle how much we ought to give. I am afraid the only safe rule is to give more than we can spare. In other words, if our expenditure on comforts, luxuries, amusements, etc, is up to the standard common among those with the same income as our own, we are probably giving away too little. If our charities do not at all pinch or hamper us, I should say they are too small There ought to be things we should like to do and cannot do because our charitable expenditure excludes them. I am speaking now of "charities" in the common way. Particular cases of distress among your own relatives, friends, neighbours or employees, which God, as it were, forces upon your notice, may demand much more: even to the crippling and endangering of your own position. For many of us the great obstacle to charity lies not in our luxurious living or desire for more money, but in our fear — fear of insecurity. This must often be recognised as a temptation."
Lewis walked the walk, as well as talking the talk. He set up a charitable trust and 2/3rds of his book royalties were paid into it. When he died his estate was only worth 38,000 pounds: pretty small potatoes considered he had sold millions of books.
He never had any biological children, though he did have two young stepsons after he married their mother in 1956. That was seven years before his own death, and four years before his wife died of cancer.
He did have an odd living situation before that. After returning from WWI service he lived with the mother of a friend of his who died in the war. He promised him that he would take care of his mother if he died, and he lived with her until her death over thirty years later. She had a teenage daughter at the time, so in a sense Lewis was helping to support her as well: though this was before he became a Christian, and by the time he converted I believe she would have been an adult.
His brother moved in with them in 1930 and stayed in that house until Lewis died. He likely wasn't much of a burden, since he was a retired army captain who I assume had a pension.
> Again sticking to a purely descriptive account of intuitions, I think this represents a sort of declining marginal utility of moral good
This is backwards. It's the increasing marginal cost of your time and/or money. Spending 1 hour to save 1 kid makes sense, spending 16 hours to save 16 kids means it's your entire life.
The Dublin example seems like a strawman to me; physical distance is less important than travel time / effort to get somewhere. If there's a magical portal connecting you to Dublin then for all intents and purposes the distance is zero. If you're on a zoom call with somebody across they planet and they start having a medical emergency you are obligated to call emergency services, even if costs money for some reason.
There's also something of a geometric issue. If your radius of moral concern is 1 km and that contains P people, doubling the radius to 2 km means 4x the area and 4P people. So there's a really high cost to expanding your circle of concern, and it *increases* the more moral you are. (This is obviously simplified but I think the general point stands).
> Well, every day, I’ll rescue one child, then not worry about it. This is better for the children, since it increases their survival rate from 0 to 1/24
Do you just save the first child you see, or when you see a kid do you roll a die and save if it comes up 6? (Assume you otherwise live your life and so simply don't notice many of the kids floating by). The average person considers random methods like this to be very unfair. Some of the kids survive for no legitimate reason. Refusing to save anyone lets more kids die but it's more fair. This is definitely bad logic in this contrived example, but in the real world caring about fairness usually makes things better.
The other issue is about setting boundaries. Saving zero children is a natural Schelling fence. There aren't any other natural Schelling fences. This applies to most of the other examples.
In reality I think people are just generally reactive and not proactive. Most people would save the drowning child (at least once) but don't really seek out opportunities to save lives. And when they donate to charity it's because somebody asked them to, or a cause affecting themselves or a loved one. This is descriptive, and any moral theory people come up with to justify these actions is just rationalization. It does make for a convenient Schelling fence too; help people when explicitly asked but not otherwise.
Copenhagen ethics is deleted?!?! It's such a good piece! Does the author prefer it that way or is it just a hosting thing? Does anyone have it backed up?
Human psyches are fragile. In theory someone 'should' save all 37 children. However, people whose behaviors stray from social consensus in extreme enough ways for long enough are often prone to having psychiatric breakdowns, which obviously ends up saving zero children. I feel like the optimal equilibrium point here is 'hard utilitarianism is true but most people are too psychologically fragile to do much about it outside of donating a little to charity and lightly advocating for political causes without running a serious risk of having a psychiatric breakdown'.
If something happens in a place that you know well, you have a much better chance of making the correct choice. If you are in a foreign land in a foreign culture, I generally look to others for how to act.
As an individual you are not likely to do much harm acting with your body locally. If you are in Zimbabwe and give a starving child a piece of fruit. Likely a positive act short term with minimal long term consequences. Help an individual overseas via some American group. Lots of variables. Is the group just a fraud, how much goes to the person, is my contact info sold, etc. Then there is giving tax money as massive food grants. You have no choice in the matter to support it even if you think it is wrong. Mass food subsidies depress local food production. So many levels ripe for corruption, etc.
A third is that if you ignore something bad that right in front of you it will corrode your soul, harden your heart. Your brain has many deep processing levels that we/you/I have little knowledge of or control over.
Consider Chesterton's Fence. It's no coincidence that a kid floats past once an hour, on the hour, regular as clockwork. This is the result of deliberate action.
Make no attempt to return the children sacrificed to Neptune. Atlantis City does not want them back; and if you sneak them in anyway you risk dooming the entire population to suffer his watery wrath.
My intuition is that once you get into scenarios where innocent children are drowning every hour, this is a systems issue and not the responsibility of any one individual. No one should be expected to take full personal responsibility for this. Reasonable responses would include lobbying the government/city/wealthy individuals to set up a collective to hire lifeguards or build rails around the river to stop kids from falling in.
This all goes back to the question of foreign aid and cutting that. This is where that all started. The fact is, nationalists like myself care very much about whether the child in question is a domestic child or a foreign child. Hypotheticals about children going down the river would lead to me wanting to save as many of them as I could. The situation vis a vis funding malaria nets or HIV medicine is more complicated.
You don't give charity to the enemy during a war. That much most people can agree with (at least until after you've defeated them, then you airdrop food). Most people would just find it absurd to say that Europe and (to a lesser degree) North America are in a biological soft-war with Africa (and the Middle East/Near Asia but that's not the main subject of aid discussions). I naturally don't want to see children suffer, but every gain to Africa is a loss to Europe, and European children, so long as massive amounts of foreigners aren't deported and immigration from these regions isn't totally shut down (we can do that exchange: Africans are never allowed to set foot in Europe again, in exchange for massive amounts of aid, but this isn't in the policy overton window, so cutting aid it is).
I think that giving aid to Africans has a large negative knock on effect on the children of my region, and the children of my family, in the long run. Our fertility rate, like most developed regions in the world is below replacement. The fertility rate of many African countries is lowering over time but still sky high (aid that only involved condoms, birth control pills, anti-abrahamic propaganda, and not food and medicine would be beneficial here).
Since this is a soft-war (or a cold war, but it's a bit different from the Soviet case) it would be immoral to actively start bombing Africa and hurting them, but it would also be immoral to your own side to help the opponent ("I won't kill you, but I don't have to save you"). If you imagine a sliding scale where full on war requires the moral response of killing the aggressors (and inevitably their children even if they aren't deliberate targets) to save your people/children, and no war at all should necessitate giving aid and help, a soft-war represents the inbetween territory. It's not even as high on the scale as to say private charity to Africa should be banned, but certainly if you are using the state to provide charity and we have input into where the state diverts resources it has gained from the people, then it's not wrong to decide you want those resources to go somewhere else instead.
It may be callous to not give people who are suffering help, but if those people will as a large group (even if individually kind) destroy your civilization (the one best equipped for dealing charity and improving the state of technology anyway), then acting callous in the near frame helps slow the ruin that is accumulating and will eventually destroy us in the far frame. That children aren't being helped is unfortunate, but significantly less unfortunate than if they were killed in a war, which could easily be justifiable (again, even if children aren't direct targets). Their parents must act responsibily to save their children, but since these parents have been turned into a horde on a geopolitical level, it would be foolish and immoral to our own people to give them fuel for survival, so long as our leaders insist on allowing them to colonize our countries (in return for our colonization of theirs - even so there's no reason to accept it).
If anything, the problem with DOGE (which started this whole debate) is that cut foreign medicine funds should be returned domestically - a nationalist gesture, rather than the libertarian one Musk prefers. Put all of that money into American cancer research instead.
I do not expect to change your opinion specifically, but if anyone reading this is on the fence, there is plenty of data documenting the economic benefits of immigration. One factor in particular: economic growth is connected to population growth, and the loss of one will negatively impact the other.
Isn't that data highly slanted by country of origin? Not all immigration is equal (European migration between countries, or the immigration of Chinese or Indians might result in net contributions, especially assuming selection effects). How does this data square with African and ME/NA immigrants not being net contributors? How does it square with Canada's massive drive in the last 10 years?
The other issue is that if you purely measure things according to near term economic gain, then immigrants coming to work will obviously always be good, because you are benefiting from far more specialization. Diversity, economically speaking, very often IS a strength. If those immigrants go on welfare, not so much, but it could be that more immigrants contribute than leech. Of course, life isn't just about near term economics and "line go up". These are nice things to have, but there are trade-offs. For the UK, this has meant heavily stressing social services and housing. You could argue "Oh, but then the real problem is zoning" but England is a small and very dense country, where the nearest road is always less than a few miles away. This will just turn our cities into even worse places to live, and increase ethnic conflict.
As for population growth: this is one reason to be very pro-AI/pro-automation, since this can take up the slack of an ageing population, and actually allow some level of population decline safely, without stressing the dependency ratio too much. If we have a good chance of, in the next 50 years, achieving full automation and a basic income guarantee (I believe we do), then it's short sighted to open the floodgates to immigration as a solution, when the trade-offs are so severe, especially given that the number of immigrants needed is so large (since their fertility rate starts to converge with ours once they live here).
The final issue is that mass immigration has demonstrably led to more support for far-right parties, and unless you want to ban democracy, at some point, it's going to be reversed, and the larger the inflows were, and the longer this goes on, the worse and more ethically fraught the reversal process is likely to be.
Well, the studies I've seen have been mostly focused on US illegal immigration from Mexico and Central America, so not exactly high skilled labor. Nevertheless, such illegal immigration appears to helpful overall to the economy, while being a small drain on government budgets (this includes services, including homeless services, which do not always include housing in the US). A large fraction of hispanic immigrants to the US end up working as migrant agricultural laborers (I believe this is the case in Europe as well). Remember that immigrants are consumers as well as employees, so you have to take into account their net positive effect on demand, even those on welfare (which creates employment opportunities for other people). Illegals, so far as I know, can't go on welfare. In any case, the employment rate of immigrants in the US is nearly the same (slightly higher in fact) than native born.
Immigration contributes to economic growth because it contributes to population growth. This is esp. the case in countries (like the US) with declining native birth rates. At some point in the future, we will fall below replacement rate, at which point we will be competing with similar nations (the EU, China) for immigrants from nations with relatively higher birth rates. That's pretty much the underdeveloped nations (due to the demographic transition). The US has a strong position here, if we don't blow it. This factor will only increase if at some point the US experiences net emigration.
AI/Automation does very little to alleviate the effect of declining birth rates, because AI doesn't buy stuff (it does little good to produce things more cost effectively if there is no one to purchase it). Even a basic guaranteed income doesn't address the problem, unless you plan to hand out more money per capital as the population declines. I have no idea how such an arrangement would work, and I doubt anybody does. Certainly there is little public support for such a policy.
As for far-right resistance, well, that's a political issue, not an economic one. It would be sad if it turned out that isolationist policies became more popular, and that ends up what crashes the economy. Certainly I agree that as a sovereign nation the US has a right to regulate who crosses it's borders, but a long term net reduction in immigrants would not be wise.
I'm very much a nationalist too (some would say an ultranationalist), and I'm extremely opposed to migration (immigration, emigration and internal migration), but I don't see that as a reason to reduce foreign aid, either public or private. Quite the contrary. European countries need to be donating tons of money to Africa precisely so that (among other reasons) Africans *don't* try to move to Europe.
But Europeans already donate tons of money to Africa, so I'm sceptical this works. It's also the case that it's not the very poorest Africans that will legally or even illegally immigrate. Even with illegal immigration, it can cost 1000s of pounds equivalent or dollar equivalent to pay the smugglers.
There also would probably be fewer Africans in Sub-Saharan Africa if it wasn't for aid over the decades. Their birthrates are effectively subsidized. Yes, that has a moral dimension (free medicine and other help allows more to survive to adulthood), but then that just feeds back into what I'm talkiing about where we are in a soft biological war, and you are actually weighting two sets of children against each other (since I disagree with the notion that African existence is mechanistically/consequentially neutral).
You’re again neglecting the fact that fertility rates go down as a country becomes more prosperous. They have gone down in Africa too, although sllower than I’d like. Everywhere the demographic transition has started- including in Africa- it seems to proceed to completion (i.e. below replacement fertility) given enough time.
"I naturally don't want to see children suffer, but every gain to Africa is a loss to Europe, and European children, so long as massive amounts of foreigners aren't deported and immigration from these regions isn't totally shut down (we can do that exchange: "
European countries are going to shut down immigration eventually, when the situation gets bad enough. And eventually, start deporting minorities. Maybe it should have happened sooner, but late is better than never. The exchange you mention is what the eventual equilibrium will be, in future.
There's a hidden assumption in the hypotheticals that I think is important to point out because, while I'm not sure *I* disagree with it, I know that a very large number of people do.
Specifically, the assumption that moral weight transfers across transactions; that if I hire a purely-selfish man to save a child for $10, that I'm the one doing good (by giving up $10 to save a child) and not him (by doing a paid job).
Empirically, most people *do not believe this* - this is why doctors are high-status, because their large salaries are *not* considered to buy off the weight of the people they save and distribute it to the people paying. And failing to accept that assumption creates a disequivalence between almost all the hypotheticals here and the actual case of donating to charity, because *you're the one rescuing the kids from the river* whereas *you're not the one building or distributing the bednets*.
I think that's the effect of applying virtue ethics (that it's at least if not more important to be a good person than it is to achieve objectively good outcomes). Humans have evolved multiple ethical standards and they are not all perfectly compatible.
"I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible."
Doesn't that mean you have to consider the game theory implications for rape and drowning-children-saving too?
This sort of thing may have worked even as recently as a year or two ago, but as many new competitors join, and many of those not interested in competing leave, you really have to put more effort in to win the edgiest comment award these days.
So (according to Google) the Gates foundation has spent around $2 billion combatting HIVAIDS, TB and malaria including funding research to develop more effective nets and to distribute these nets. I think the issue with the statement why hasn’t “X” just given “X” billion dollars to do “X” is matter of how we talk about the problem. Usually people will say something like “it costs X billion dollars so give every child in Africa a mosquito net.” Even if this accounts for the costs of distribution, now money is now longer the bottle neck, it’s time and finding competent non-corrupt people to actually do the work. The problem is that having the money to solve problems doesn’t immediately solve the problem.
So for instance the impacting of losing USAID has less to do with dollars and more to do with loss of institutional memory and personnel. The money from USAID often pays the salaries of people who have been working in foreign aid for their entire careers. They are in charge of transporting and delivering goods and training local volunteers and workers. For instance they may be training people “how to effectively use mosquito nets” because even if you send 1000 mosquito nets to a village if they aren’t used properly then the intervention is ineffective. In terms of how bad this actually is I don’t know enough about foreign aid (not sure anybody does) to give an OOM estimate of something like additional lives lost. I just know enough to know it’s bad.
I think the obvious answer to this dilemma is if society is imposing an obligation on you through persistent inaction on predictable events, your moral responsibility is greatly negated.
There doesn’t need to be anything deeper than that.
I think I get where you're going with this. Sure, when you're confronted with a problem that seems so intractable it does feel like there's nothing you can do so why do anything. But this feeling is an effect of not having exact numbers on the drowning children.
So, it's a pond but how big exactly - does "absolutely full" mean dozens, hundreds or thousands of children? What exactly is the amount of children dropped in every day? How exactly does the pond manage to stay full? If some mysterious phenomenon or nefarious villain is dumping in exactly as many children as are saved, then saving them is indeed pointless. But somebody should investigate that, shouldn't they?
One day, some intrepid nerd starts counting how many kids fall in each day and measuring how fast the average person pulls kid out. They declare that 50 people pulling kids out every day would match the amount of kids falling in and prevent any deaths. Now there's concrete numbers showing why the problem never got any better: not enough dakka. With that info in hand, every additional person joining in now knows that they're inching toward salvation instead of being stuck in an endless struggle.
I'm personally, I like the Talmud's version of the problem - The City's Spring (here including Steinsaltz's interpretation):
"a spring belonging to the residents of a city, if the water was needed for their own lives, i.e., the city’s residents required the spring for drinking water, and it was also needed for the lives of others, their own lives take precedence over the lives of others. Likewise, if the water was needed for their own animals and also for the animals of others, their own animals take precedence over the animals of others. And if the water was needed for their own laundry and also for the laundry of others, their own laundry takes precedence over the laundry of others. However, if the spring water was needed for the lives of others and their own laundry, the lives of others take precedence over their own laundry. Rabbi Yosei disagrees and says: Even their own laundry takes precedence over the lives of others, as the wearing of unlaundered clothes can eventually cause suffering and pose a danger."
Great post. I've written a reply, 'Moral Intuitions Track Virtue Signals', which explores more virtue-ethical psychological explanations for our moral intuitions in these sorts of cases (and Trolley cases too):
We're assuming here that everyone agrees that saving drowning children is the right and moral thing to do. But there have been times and cultures where that was not at all evident to people, indeed where the moral thing to do was let the children drown.
We all know our friend Moloch, but the Aztecs had both a high value on children (see this piece on children and childbirth) *and* that their deaths (with maximum suffering) were also necessary at times.
"In the New Town which the Romans called Carthage, as in the parent cities of Phoenicia, the god who got things done bore the name Moloch, who was perhaps identical with the other deity whom we know as Baal, the Lord. The Romans did not at first quite know what to call him or what to make of him; they had to go back to the grossest myth of Greek or Roman Origins and compare him to Saturn devouring his children. But the worshippers of Moloch were not gross or primitive. They were members of a mature and polished civilization abounding in refinements and luxuries; they were probably far more civilized than the Romans. And Moloch was not a myth; or at any rate his meal was not a myth. These highly civilized people really met together to invoke the blessing of heaven on their empire by throwing hundreds of their infants into a large furnace. We can only realize the combination by imagining a number of Manchester merchants with chimneypot hats and mutton-chop whiskers, going to church every Sunday at eleven o'clock to see a baby roasted alive."
Childbirth was celebrated, but if you had twins, one would be killed.
"This verse relates how blessed the newborn was to be brought into the world. Yet the midwife’s words clearly warn the child that there will be insecurity and grief throughout life. If the mother delivered twins, one of the babies was killed at birth, as twins were feared to be an earthly threat to their parents in Aztec society.
...Children played a large role in the ritual dedicated to the rain god, Tlaloc, which was performed to bring needed rain for the crops. Blood from children was obligatory and this was acquired through small incisions, such as in the tongue. Actual child sacrifice was also performed at the end of the dry season. Two children were selected to be offered up to the rain gods. The tears that they invariably shed before their sacrifice were offered to Tlaloc so that he released much needed rain. In one year of a particularly dire drought, forty-two children between the ages of two and six were sacrificed. It was believed that the earth needed more than just a small sip of water, as represented by the crying children’s tears. In such a dire circumstance, much water was needed; therefore the quantity of children sacrificed was greatly increased. This is the only time that such a large number of children were offered, it would never happen again."
The rain god requires sacrifices, the more weeping the better. Sometimes there is exceptional drought so you need even more sacrifices.
"According to Bernardino de Sahagún, the Aztecs believed that, if sacrifices were not given to Tlaloc, the rain would not come and their crops would not grow. Archaeologists have found the remains of 42 children sacrificed to Tlaloc (and a few to Ehecátl Quetzalcóatl) in the offerings of the Great Pyramid of Tenochtitlan. In every case, the 42 children, mostly males aged around six, were suffering from serious cavities, abscesses or bone infections that would have been painful enough to make them cry continually. Tlaloc required the tears of the young so their tears would wet the earth. As a result, if children did not cry, the priests would sometimes tear off the children's nails before the ritual sacrifice.
...In History of the Things of New Spain Sahagún confesses he was aghast by the fact that, during the first month of the year, the child sacrifices were approved by their own parents, who also ate their children."
The more I know about the Aztecs, the more I think that those who accused them of worshiping devil were... just making a rational conclusion based on the available evidence.
Why are you not comfortable with committing the is/ought fallacy at the level of “when am I obligated to save a child” but totally comfortable with it in the view that saving drowning children is good at all? Isn’t the base intuition just a descriptive rule itself?
Separately, I wonder in the cabin example (let’s assume it’s impossible automate, you personally have to ruin a suit and jump in the river 24x day) how many people you (or anyone else) would actually save. I think by day 3 of being wet/tired/unhappy all the time the number would approach 0 (and indeed one would leave the cabin).
I think this essay and every other such argument underestimates how easy it is for people to refrain from helping a nearby person in distress if (1) they too are in some kind of different distress (2) they tell themselves that they're not responsible for it (3) they know helping that one person won't change anything about the underlying problem that put the person in the situation (4) they tell themselves that other people are also not helping so I'm not bad (5) they can't brag about it or show off their goodheartedness. Besides, if everyone is kind then no one is. Cruelty and insensitivity are the reasons why kindness is valued in the first place.
There's a reason why families (or close-knit genetically related groups) are highly selected for when it comes to personal or group resource allocation: it is highly morally economical. Globalization was partly meant to transcend this atavistic instinct but I'm not sure it has properly succeeded at shifting this centre of gravity of primal loyalties.
It's also easy to argue yourself into helping if 1) You thereby relieve your own empathic distress 2) You gain reputation and social status in the eyes of people who hear about what you did 3) Saving this person helps you persuade others to pursue some sort of larger, institutional solution 4) The in-groups of the person you save now feel they are in debt to you and your in-groups 5) You make a new friend.
In which Scott, having failed an Ideological Turing Test, knocks down a bunch of straw men with implausible thought experiments involving ignoring the main solutions indicated by the premise, then pats himself on the back a couple times. What a sad show.
It is yet another comment of the form "you are wrong, but I am not telling you why, I just came here to make a note that I am intellectually superior to you".
Always behave as if the coalition is still intact. Why? Game theory! Imagine that the coalition will exist in the future, but does not exist now. By acting as a member of the coalition, you bring the coalition closer to existing. When you cross paths with another person who is also acting as a member of this virtuous coalition of the future, your abilities to cooperate with each other will be nearly frictionless. And you speed up the immantizing of the coalition.
If you believe, like I do, that the coalition has nonlinear effects on the world, then discovering even a few members of the coalition in the course of your life is more valuable than most of the good you will do individually. Cells of cooperation can pull large chunks of non-coalition members into coalition aligned action.
The "Coalition" are actually factions within society, and you obtain effectiveness in life by persuading a plurality of society's members to cooperate with you in pursuit of values you approve of. The problem is that there is more than one coalition out there, each competing for that plurality.
An unaddressed flaw of these thought experiments is that they presume the hypothetical drowning children are worth saving. Would they save you, if you were in their position? And does this affect how deserving they are of being saved?
Answering these questions is crucial for the thought experiments to be relevant to real life, because in reality, many—if not most—of these drowning children would not help you if the situation were reversed.
Fair point, but if someone's true objection is "kids should die, I don't care", they should state that objection clearly. As opposed to making up complicated arguments for why the kids kinda matter, but you shouldn't try saving them because... 5D-chess reasons.
The debate gets needlessly complicated because some people on one hand want to let the kids drown, but on the other hand do not want to be (correctly) perceived as the kind of person who wants to let the kids drown.
I take this as a concession some children should indeed be drowned. Now we're just haggling over how much certainty is required that a given child falls into this category.
Consider this thought experiment, somewhat more complex than "baby Jack the Ripper is drowning":
You're a barbarian on a lonely island. Scott's river flows past your hut back into enemy territory. If you do nothing, the children will be rescued by their own family. But drown them now, and you deprive your enemies of future warriors who will raid your village, rape your women, and drown your children.
Of course this isn't 100% certain. It's possible that your tribes form a truce before that happens, but you estimate this has only a 25% chance of occurring.
You could try kidnapping the children, but you're all vectors of a disease against which the enemy tribe is not genetically immune. An estimated 25% of the children you kidnap would succumb to this illness.
To make matters worse, they're lactose-intolerant, but your tribe's success depends largely on dairy farming. Your ability to exploit cattle gives you a major advantage over your enemies. Any kidnapped children who did not die from your diseases would grow up to produce hybrid Ooga-Booga offspring with your tribe's women. As a result, future generations will become progressively less fit, from a lactase and leukocyte perspective.
You estimate that this reduction in fitness will raise the chance of being exterminated from 5% to 80% within the next century.
Do you drown the children?
> the Gulf Arab states that got rich from oil recently give way more development aid than Western countries with similar GDP per capita.
This is an interesting analogy, but I'm skeptical that it works when applied to governments. Especially when one is Arab and the other is not.
*You reach for the drowning child, in that moment, you see with a flash of insight that seems to come from heaven, of this child, grown up, slaughtering thousands.
You pull your hand away in disgust and horror. The child sees this, his eyes lock with yours in confusion and sadness before he vanishes beneath the water.
You go away, convinced of your righteousness in this difficult moment.
Unbeknownst to you, a mile downstream, the child miraculously washes up on a bank, coughing up water. But his once-innocent eyes are black, burned out by the knowledge that in the moral calculus of a stranger who could have saved him, not all lives are equal. He will carry this knowledge on his crusade of slaughter.
Watching from the woods, Satan laughs. Two souls damned for the price of one vision.*
If you want a larger, better written exploration along the same themes as your original thought experiment, I would suggest "Monster" by Naoki Urasawa.
Fine. Imagine there are a million of these drowning future serial killers. Are they so valuable that you lobby the government to create a reservation just for their preservation?
I thought of another way you could have meant this that I didn't think of before. If Omega has informed me that if I save this kid from drowning he will definitely grow up to kill, say, 10 people, regardless of whether I try to stop this in another way, then this just becomes a reverse trolley problem. I don't pull the switch to divert the trolley from a track with 1 person to a track with 10 people, but this doesn't depend on lives having a different amount of value.
I have no idea who Omega is. But since you've pointed out this dilemma can be resolved even if all lives are equal, let's change the scenario to one where the drowning child will grow up to kill you. Do you drown him now, or do you admit your own life is more valuable than his?
Your 1 year old child is about to eat a Tide pod - do you intervene even though your child wouldn’t return the favor? What about a random 1 year old you see about to do that, without any parents around - would you intervene, even though no 1 year old on Earth would help you if the situation were reversed?
Expecting a one-year-old to save anyone from drowning is an unfair interpretation of my comment. When I speak of the situation being reversed, naturally I mean when the child is an adult, and you are the one drowning.
You are a neurosurgeon and have the opportunity to perform lifesaving brain surgery on a child for a small fee. Should it occur to you to decide whether to perform the surgery based on whether you think this child would perform lifesaving brain surgery on you if he were an adult OR if he were a neurosurgeon OR if he were a neurosurgeon offered the same fee or less OR if he had all of your exact characteristics, including your age, your current emotional state, etc. There is no immediately graspable version of what a “reverse” situation would be, and so thinking about this at all when making a moral decision, especially a time sensitive one, makes no sense to me.
Is there no scenario where it would actually be wrong to help the child, even if it cost you no effort or expense? In a recent comment I suggested a thought experiment where the child will grow up to be Jack the Ripper.
Your original objection was that Scott’s thought experiments did not sufficiently mirror real life. You’ve countered with a thought experiment in which the drowning child grows up to be a serial killer. Given that there’s no way to predict (without an alarmingly high false positive rate) which drowning child will grow up to be a killer or even just a drag on society, should this sort of thinking be part of your real life moral calculus?
Right, my original thought experiment was intended to question the egalitarian ideology underlying every iteration of Scott's scenarios—the presumption that all drowning children are equal. But my first interlocutor insinuated I was trying to disguise my misopedia behind "5D-chess", so I figured I'd make my point clearer at the expense of realism.
P.S. I googled "word that means hatred of children" just so I could use it in this comment.
This is empirically false. Most people make some attempt to help each other, even at some risk to themselves. It's complex, because it depends on the context (the out group bias gets in the way in certain cases), but on average most people are modest altruists.
It's not "empirically false" that sampling randomly from the billions of humans alive would result in "many" people who would not save you from drowning. Neither is it inconsistent with "most" people being altruist.
If one is highly selective, one could probably find thousands of people unwilling to rescue a given person.
You are selectively quoting yourself--you also said "if not most", which is what I am arguing against. Most people make some attempt to help.
Empirical studies, for example, have shown that humans often cooperate in anonymous one-shot prisoner’s dilemma games despite the fact that not contributing is the optimal solution.
Helping rates around the world are highly variable and context dependent, but on average it appears that rates of helping strangers in most places are somewhat above 50%.
I've participated in academic one-shot prisoner's dilemma studies. In my experience, anonymous participants always screw each other over. I wouldn't consider them relevant to real life anyway.
The field study is more interesting, but I think you would agree it's hardly representative (Africa and India total 40% of the global population but only get one experiment each), and suffers from serious nonresponse bias (i.e. Kiev).
I didn't read the whole thing, but it's unclear what race the experimenters belonged to. This seems pretty relevant.
my first thought is there is irony here in that people talk about drowning kids but no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now. The morality is always a bit ethereal: "saving lives" but its more like optimizing widgets.
there really isn't near/far in a realistic sense but more like a single issue a person is convicted of to make sacrifice for, and individual ability to make it. Someone is a part time lifeguard, someone pushes to stock narcan in libraries, someone becomes a full time emt, someone uses wealth to fund or create overseas aid programs. You can't universalize this into obligation. A sickly person can't be a lifeguard.
Charity is a heroic act but you can't obligate it. That's duty, and people will ask "who are you, O philosopher, to be the officer who commands us?" Too many variables to that heroic act to try and universalize it.
> no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now.
I think there are memes out there that point approximately in this direction. The general moral obligation to "become stronger", "clean up your room", etc. (And people are specifically taught to provide first aid in various occasions, for example when they get a driving license.)
> Someone is a part time lifeguard, someone pushes to stock narcan in libraries, someone becomes a full time emt, someone uses wealth to fund or create overseas aid programs.
Yeah, division of labor is a good idea.
It can also be a convenient excuse, when some people do X, some people do Y, some people do Z, and most people just say "I am not doing X or Y or Z because... uhm... I am doing... something else... no, don't ask me what specifically".
> my first thought is there is irony here in that people talk about drowning kids but no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now.
The most important overlooked factor in all of these scenarios is moral hazard, pascal muggings, functional decision theory, etc.
It is very unlikely in practice that a drowning child near me is being used to systematically extort all my time and money.
By contrast, far away needy people can be used that way, and them being and staying needy lines the pockets of middlemen taking a cut of charity, and it isn't clear that the charity reduces long run suffering summed over all of their future descendants.
"Is this moral obligation serving as a money pump?" Is the important distinction.
There's this weird verse in the Bible that illustrates how I think of this. Jesus spend all this time helping and healing people, he goes to the poor and downtrodden. Then this woman spends some money washing his feet and Judas complains that the money could have gone to help the poor. Jesus then said, "the poor you have with you always".
I know there are other ways of rain this, but one way I read this is that there's a difference between how you should treat systemic problems versus one-off suffering. It's not that Jesus is totally into alleviating suffering most of the time, but then he ignores it. It's that he sees people suffer and, knowing he can help, had compassion on them. But systemic problems are ... well presumably he could solve them, just like he could have solved the whole Roman conquest thing, but I guess that was outside the scope of his mission. So his obligation didn't extend to solving that problem.
What about the rest of us? I think we have an obligation to work on solving the systemic problem, but that is foolish to treat it like a one-off situation. If you live far from the death river, you should work on the source of the problem. If you live in the cabin, compassion may prevent you from ever leaving the shore.
As president, you can donate $1 billion to feeding African children, vaccinating them, and generally extending their lifespan and lowering the mortality rate. Or, you can donate $1 billion to some AI research that will make AI 1% better.
In the former case, you keep donating the money year after year, but then you hit a recession, and can't donate that year. The aid unfortunately gets cut from the budget, and the billions of Africans who were dependent on aid all die suddenly.
In the latter case, the donation to AI increases global GDP in a sustainable way that is recession-independent. Whether or not a recession happens, the AI research has a net year-over-year improvement, so the highs are higher, and the lows are higher.
The question of which policy you should support has to do with the likelihood of each scenario. If the benefit to AI is minimal or negligible, then it would be a waste. If the aid to Africa helps the African economy develop overtime and becomes self-sustaining, then it is worth it.
These are empirical questions that are worth hedging. A $0 foreign aid policy seems ridiculous, while dedicating 100% of the budget to foreign aid also seems ridiculous. But since no one is advocating 100% of the budget to foreign aid, we are really just arguing between 0% and 1%, in most cases, which seems somewhat pedantic and overly skeptical/cautious.
For most good causes you can find a better cause, and then argue that the former does not deserve the money. But I suspect that most actual government spending does not go towards research and similar things, but instead to things that are clearly worse than both research *and* effective charity.
Yeah that’s what’s driving me a bit crazy about PEPFAR: how is it possible that those countries are still reliant on USAID 20 years later? Is there an end in site? Does anyone care?
Fair point! I wish we would act more like China and focus on building lasting infrastructure and education resources rather than trying to cure AIDS, but I would rather we try to cure AIDS than do nothing at all.
Seems to me like the big thing missing from these contrived hypotheticals is any sense of uncertainty. There are very few real situations where there's a clear choice between "save a life, P=1" or "save 10 minutes of your own time, P=1". Most people dealing with moral reasoning have to account for the fact that they don't always have confidence in the predicted outcomes of their choices.
So, let's say that you're not a great swimmer, so every time you jump in the river, there's only a 40% chance that you're actually able to save the child, and also a 5% chance that you end up drowning yourself.
Also, let's say that in addition to their regular outflow of drowning children, the magical megacity also has a culture that is very fond of lifelike dolls, and their old discarded dolls also flow down the same river as the drowning children. As far as you can tell from your cabin, any given childlike shape you see in the river has a 10% chance of being a child, and a 90% chance of turning out to a doll.
Hopefully this cabin is starting to sound a lot less like a deal that any sane person would take at this point.
(It also doesn't help that this specific hypothetical has plenty of trivially superior solutions, like "just build a giant rope net across the river.")
I find it natural for thought experiments to simplify, and thus appear contrived, in order to isolate the specific dimension of morality that it wants to discuss.
As with all kinds of consequentialist thinking, you would indeed "in reality" reason in terms of expected values rather than concrete, known outcomes, since you don't already know them.
But it might not necessarily add value to a specific question you want to discuss to replace "if you do x, you save a child" with "if you do x, you in 90% of cases save the child and in 5% of cases you die, and by the way in 1% of the cases where you save the child the child grows up to be a mass murderer".
A different way to think about the problem is evolutionary psychology.
In the evolutionary environment, people might sometimes come across 1 drowning child. And if they did, it was likely the child of someone else in the tribe and the social status reward would be significant.
A constant stream of 1 drowning child per hour didn't exist, and the social status rewards wouldn't exist if the children did.
Humans also had constant memetic selection. We have an immune system to protect against viruses. What do we have to protect against bad memes?
We have a lip-service response. We say the socially accepted lines. We often even "believe" them. Because to do otherwise risks being punished. But then our actions show a limited or token effort. An extreme case being a religious person who believes in heaven and hell, but thinks that going to church every week is too much of a bother.
Accurate information about distant lands you had never been to was not a feature of the evolutionary environment. (And might or might not be a feature of the modern environment)
Thus Utilitarianism falls into the same mental defenses as so many other beliefs. The mental mechanism for saying things, and "believing" them, but not taking any action too self detrimental.
There is no such thing as a "morally correct prescriptive theory" because moral rules are negotiated by members of a community, and are context dependent. "What should one person in isolation do if X" is an artificial construct because our moral intuitions didn't evolve to work that way. Morality is a way for people who live and work together to get along. Communities are morally responsible for taking care of their members, and one primary way they do that is by setting up a common set of behavioral norms that will help everyone act in a compatible manner. This means that every conceivable set of moral rules that anyone could adopt will only make sense depending on the moral rules adopted by the people we live and work with.
Of course, individual people still need to make decisions governing their own behavior. I imagine that what happens inside someone's head is a kind of cognitive algorithm that weighs various inputs including what we were taught during our upbringing, our observations of how other people we interact with behave, immediate sensory stimuli, and emotional impulses inherited from our ancestors. The brain is dynamic--cognitive impulses compete with each other to cause behavior and when successful are remembered along with what the outcome was in a given context, the more satisfying the outcome the more likely we are to remember it later in a similar context. This isn't going to result in a logically coherent set of moral rules to follow, any more than our past satisfying experiences cause us to enact the exact same set of behaviors every day.
One such factor is psychological salience: we expect to be more strongly affected by a dying child we see before us than by hearing or reading about one second hand. This is so natural and automatic a response that there is probably no sense trying to fight it, you would just exhaust yourself psychologically. The thirty seventh child is harder to save than the first.
On the other hand, you do have agency: what we can do in this situation is reduce 37 actions per day into one big action, by collaborating with other people in an organized fashion. The real solution is to demand that the denizens of the magic city take care of their kids, and if they can't do it, we should send the police to take them away. By outsourcing moral action to a community institution, we save our own resources and simultaneously gain power and effectiveness. This explains why we evolved this way.
That's why we don't blame the person who failed to save every child in the world--we blame the parents and their communities who failed to prevent that situation from occurring in the first place. This can also be questioned: What if it's a poor community that can't afford to undertake the actions that would be required to save their kids? Humans do not have a clear answer for this, that's why we get into arguments about the utility of foreign aid, but it's clear that there is no single logical moral principle one could formulate that will answer this question unambiguously.
I would intuitively look at these edge cases from a capacity perspective - I lack the power to even think about the suffering of millions of people, or to physically rescue 10000 children drifting down the river next to my house. My intuition requires me to run into a burning house, but Im then allowed a break even if the next house also catches fire.
My first introduction to what's now referred to as the Copenhagen Interpretation of Ethics was when I was a kid, and while walking around the grounds of my school, I noticed some sort of flyer or something on the ground, and picked it up to look at it out of curiosity, then when finding its contents entirely uninteresting, dropped it back where I found it; a passing teacher yelled at me for littering, saying that because I had picked it up it was now my responsibility to properly dispose of it, while if I'd walked past it without doing anything I'd have no responsibility.
Copenhagen Ethics are wring reification of intuition that the directness of the act and the predictability if the result matters, and the way to convince people is try to understand what they actually care about, and this attempt to understand Copenhagen Ethics failed at that.
the long version:
from my point of view, all this post is born in error. Copenhagen Error is failed attempt and reification intuition, and the right reification is about the directness of the contact, and the sureness in the result. how many intermediate steps there are, and how sure you are of the result is the explanation that satisfy the intuition from the examples, and also explain [people who against donating to beggars because they may buy drugs.
at the 4th part of the post it come to morality as coordination vs morality as altruism.
the problem i have with Scott's take is that he ignore that not al people accept the deal, enter the coordination.
and here the conservative objection that those third-worlders are not part of out coordination agreement sound true. and there is a lot to write about that.
the post also wrongly reify the reason that one child rescue is different then many/ it's not utility decline, but the difference between emergency and normal life. you can ask more of people in emergency, in something that happen very rarely. you can't ask the same thing in everyday life. this is why changing the drowning child from rare occurrence to something that happen all the time change the intuition considerably.
and... here are my long-form thoughts on that. in Hebrew.
I think you're not applying the "touching" rule accurately.
My intuition as to how the touching rule goes would be that it's on a spectrum. For example, drowning child in front of you and no-one around = 100% touched.
Letter through your letterbox explaining the problem and requesting funding for a lifeguard = 10% touched.
If you then look at the problems through this lens, the touching rule lines up with our sense of what is right. You can't escape being slightly touched by the drowning children in any of the scenarios - ownership of the cabin isn't the same as the 100% of seeing the drowning child in front of you, but it's somewhere towards that.
Having "touchedness" on a spectrum obviously is a tool that can be used to fudge the equations to give the outcome that feels right, so in that sense it's a cop out. But I think there's also some truth to it in how some people conceptualise duty to charities. For example, a load of charities send round letters, and some people feel a bit bad if they don't give in response to a begging letter. It's as though all the time, we all have so many moral choices that no one choice is foregrounded, and receiving a letter forces you into a binary choice about that one charity.
I guess "touched" by a problem could be reframed as "forced into a finite moral decision". We're always able to donate our time/ money/ energy to an infinite number of moral causes, and somehow this series gets added up to a very small moral imperative to do something vaguely good with our resources, but without specifying what in particular. The moral space has an infinite number of dimensions of causes and ways in which we could do good, and the obligations associated with the vectors associated with every possible option sum to about zero at any standard point in time. When we are "touched", it forces us to engage with a few alternatives, and summing these alternatives gives a stronger moral obligation in one direction. We're touched by the drowning child, but we're also touched by being friends with a lobbyist who asks us for our opinion. In these occasions, the situation of "I have an infinite number of ways to do good or bad with my life" reduces to "I have 3 options for how to behave, and one of them clearly does a lot less harm than the others". These three options don't sum to zero in the moral vector space.
Where this perhaps breaks down is when comparing Alice and Bob, but I think this does match our natural intuition. Whether or not there's any logical basis for it, most people from nice Western countries who go on holidays to poorer places feel a bit of revulsion towards the ultra-rich living next door to the slums. You can justify this by saying that a realistic Bob would be benefitting a lot from the poor people living nearby, whereas Alice doesn't directly benefit from people living in poverty. The extreme version of this is South Africa - you look at people living fabulously and you wonder whether their wealth came from Apartheid. If we modified the thought experiment and Bob's wealth came directly from stolen land, we'd probably all agree he had a moral duty not just out of being touched by these people, but because he owed them something.
I don't know if this is how the Copenhagen interpretation is supposed to be, but I think what I've laid out about is a consistent framework for morality that doesn't get destroyed by the thought experiments in the post.
What are we to make of the fact that, undoubtedly, if 5,000 years ago all humans had dedicated themselves to the saving of drowning children, etc. with the full fervor of an ideal EAer, humanity would still be backwards bronze agers (at best)?
While it perhaps makes some sense to not aggressively carry out the selection, I think modern humanity’s attempt to completely nullify selection is a bad thing. If this magical city can’t be bothered to save their drowning kids maybe we shouldn’t help keep their genes in larger numbers.
Am I missing some explanation of why the rational take isn’t to bite the bullet and admit that common sense / intuitive moral reasoning is bullshit and leads us to ethically indefensible conclusions? This has the air of a hypothetical but comes around to what appears to be an actual prescriptive policy.
The easiest way to reject this argument is to point out that we are NOT, in fact, embodied angelic entities and so are not bound by any agreement that such creatures might have made (via a negotiation which took place under very unspecified epistemic circumstances, I might add). Neither is it clear why we should act as though we were. I fail to understand what abstract quantity is optimized by reasoning from such an axiom.
I think it IS instructive to think about what intuitions such a hypothetical appeals to. The angelic contract is an agreement between equals with expectations of reciprocity. The hypothetical creates that scenario (out of thin air) because, in my view, *those are the conditions required* to make the agreement seem intuitively reasonable. Agreements have to be equitable, otherwise people don't enter into them! Our intuitions tell us that people *shouldn't* enter into them! In actual non-hypothetical reality those conditions, unfortunately, do *not* exist: we are not (cognitively, economically, or culturally) equal to far-flung third-worlders and there is no real possibility of reciprocity: when it comes to international charity, it's clear which direction the money will always be flowing. It's like saying "imagine what energy policy would be optimal in a world in which the second law of thermodynamics didn't hold" and then using those conclusions to direct actual policy in this world. From an intellectual honesty point of view this is no better than a time-share hard-sell. It's a thought experiment that smuggles its argument in via its axioms.
The Veil of Ignorance is only supposed to veil our knowledge of personal identity, not our understanding of game theory, human nature, and economics. If *I* was an angelic entity that knew everything about the world except who I was then I would hedge against the possibility of being born in sub-Saharan Africa by agreeing, in the event that I'm born in the first world, to advocate for aggressive neo-colonialism. That is the policy that maximally improves living conditions there.
This was rather confusing to read, as my own moral intuition is that at some point your obligation to save this or that individual child shifts to creating a coalition to address the problem.
A real world analogue would be if you discovered an infectious disease that killed one child per hour, but for which you had a cure. Should you keep manufacturing this cure and saving children? That’s almost a weird question to ask, because obviously you should tell everyone about it so that society can mobilize resources to solve the problem at scale.
It’s heinous to imagine someone might save a few kids and then stop because it’s too much work. In my view their moral obligation shifts from “save the individual” to “raise awareness”, and to fail to do the latter is at least as reprehensible as failing to save the child.
What's the purpose building a tower of assumptions on the moral intuition of the drowning child scenario[1] only to come back around and try to override the same moral intuition in other cases with a set of rules derived from it?
Either:
- The rules are defined by the feeling
Or:
- The feeling should be constrained by a set of rules (which?)
You start with "it’s obvious you should save the child in the scenario" (why? because you feel like it?) but later you "think this is a gigantic error, the worst thing you could possibly do in this situation."
If this is an error, why do we consider the drowning child scenario in the first place?
I fully agree. John Rawls's 'initial position' argument is the best argument regarding this kind of dilemmas. Why is it the best? Because, by putting the moral agent outside of the world and ignorant of his future self-interests, no bias or partiality can be invoked. It's the eye-of-god position, without any god implied. This essentially aligns with the Kantian categorical imperative, but provides a kind of procedure, in the form of a thought experiment, to achieve this excessively theoretical and abstract kantian ideal.
Well yes, that's pretty much how my moral intuitions actually work, except they start from a choice, with very little prescription or coercion. It's surprising how far you can go if you just frame things as "what kind of world I want to live in?".
I want to live in a world where drowning children are saved. I know that human capacity is limited, which means that 1. if I start thinking that it's my responsibility to save every child I'll just burn out and 2. if I'm in a situation to help, then it's my turn - this is well within my capabilities and so it's something I've precommited to do. I really wish more people would take the time to read about and think through acausal decision theory stuff. It makes a lot of sense once you get through the initial wtf.
The "cabin in the woods" thing is actually pretty good at poking at the limits of this model, because it exists in reality. EMTs do live there. Which significantly strains the model: should they eat that sandwich, if it ups the chance of a child dying by 1%? I hesitate to say either yes or no. I'm going for a cope here, but a practical cope: the system should be set up in such a way that they have time to eat sandwiches. And if it's not, well, I'd hope they skip the sandwich, but won't blame them if they don't.
I think that a stronger match to most peoples intuition would be something like a “weighted original position”. It is a hard and somewhat weird mental move to un-know everything about yourself. There is no separating "yourself" from who concretely you are. So, you also give some vote to later more informed versions of yourself - form implicit coalitions with people that you are actually likely to interact with and where you actually don't know who is going to benefit from the interaction.
So you move to a city, and implicitly put yourself in an original position where you don't know if you are going to drawn or to save others. You non-causally negotiate for others to save you and for you to save them, and rewrite yourself to be a person who follow that contract. Then come the first drawn child. You are the same person, so you save her. And since this incident was uncorrelated with future drowning children, you still want to have the same contract. Then you find yourself in a long-term "bad moral luck" position. The "past you" who signed the original contract still have some weight, but there are more and more "past you" who wouldn't have signed, and they too get a vote.
Some other things that this moral theory explain:
- Discriminating against you because your first name is "Scott" is more outrageous than discriminating against you because you lost a lottery. Because your yesterday self would object much more to an anti-Scott policy than to an equally arbitrary lottery-based policy.
- Unless you are a very hard-core libertarian, you wouldn't honor even an actual contract where someone sells himself to slavery. Because you are not willing to look him in the eyes 10 years from now and tell him that he is the same person who agreed to it. Because he isn't really.
"It seems like Alice got lucky by not being Bob; she has no moral obligations, whereas he has many. "
Welcome to animal rescue. Or to steal from Jalaketu..." Somebody has to and no one else will." There comes a point where it doesn't matter whether it's your obligation, because if you're the only one who sees them and cares enough to help, you're going to do it, to the limits of your time and finances. The child floating by every hour is not an exaggeration when you change species. </3
Wait, are we still trying to construct formal, logically coherent theories about what is moral / what people think is moral? I thought we all more or less agreed that morality is a vague, socially trained intuition based on neural classifiers. Am I mistaken?
My intuition, which I gained by reading this blog, tells me that thinking of things as moral / immoral is more like a habit that people pick up from their peers, more a result of cultural evolution than rational design.
There is a lot of training data saying that saving drowning children is good. There is barely any training data saying that you should donate all your money to charity. And people act accordingly.
The theory is that you should try to refine your morality to a logically coherent theory at least as far as you can tell, so that someone would actually have to be smarter than you to make you morally obligated to act as their money pump.
That is a sensible position - basically one of diminishing returns to your free time, which seems reasonable.
We need a better term for this than Singerism, the guy himself has never, even once, sold all he has to give alms. Perhaps what you mean is sainthood?
> A moral rule of say spend 1% of your time and money on altruism and try to make that as effective as possible would be better...
Maybe I'm missing something obvious, but isn't that almost word-for-word the goal of effective altruism, only with a 0 after the 1 (and they'll help figure out how best to spend it)?
Yeah, right-wing commentators often make the argument that aid to Sub-Saharan Africa is just facilitating the Negroid Population Bomb, and I don't think that's a crazy thing to be concerned about.
However, (A) they are vastly overestimating the extent to which the average african currently gets their calories from western aid, (B) ignoring that mortality reductions and economic gain reduces TFR, (C) ignoring that aid can be used directly to encourage smaller family sizes, and (D) look pretty fucking barbaric when suggesting that mass starvation is just the natural antidote to this problem, in preference to spending 0.5% of western GDP.
I mean... I'm a HBD-pilled pro-eugenics neo-Darwinist, I know that the high-IQ populations of the planet need to prioritise their genetic continuation, SSA's TFR needs to come down, and I don't assign equal value to all human life any more than I equally value all animal life. But unless you value black lives about as much as bacteria I don't see how slashing these aid programs can be morally justified.
I'm having a hard time parsing what you're trying to say here?
What is "the hole" supposed to represent here, exactly?
<sarc>Turning this into a cry of 'racism' against your political opponents is the very definition of rationally altruistic behavior</sarc>
Seriously, it's awfully tired and nobody introduced race into this discussion but you. Stop projecting.
I think you're mistaking me for someone who cares about being called racist, but sure, the case is more general than just the situation in SSA.
I think this is why so much of Christianity is about forgiveness and change and acceptance. The people writing the manuals desperately wanted to be good, and that's the dynamic you need. Of course if being good isn't a priority, it sort of becomes pointless self justification, which is why the average American atheist is cynical about the project -- seeing what is there at the local church is grim vision indeed -- but there's a roadmap there. The requirement isn't to be perfect. The goal is to be perfect. The aesthetic is to be perfect.
"The person who saves the 37th child is more moral than the person who doesn't"
I don't think anyone disagrees that saving the nth child will give you some morality points. The disagreement is whether refusing the save the nth child will lose you morality points.
"Of course you have a moral obligation to save every single child that you can."
Whence comes this moral obligation? Personally, I'll take it from God telling me so, but in a secular world, some guy off the Internet can go whistle.
Not if it's the Internet Atheist version of "I don't believe any of this sky-fairy crap but I will quote it to force you to do something I want you to do".
Not all they that cry "Lord, Lord" will be saved, remember?
There is nothing else for me to be. I've tried to be atheist and it never stuck.
I think "obligated" is a difficult word here and can be avoided, as descriptively we don't require this of anyone.
It would be more accurate to say, "The more you do, the more value your life has" or something similar. You need strong phrasing to communicate the vital importance of doing this but not blame-based to avoid basically saying "it doesn't matter what you did if you didn't do everything."
But once we start discussing morality, we're wading into an entire morass. Morality is good, okay, but what counts as moral? If I think homosexuality is immoral, am I good or bad? How do we determine if it is or is not immoral? If not saving a drowning child is immoral, is not saving a pregnancy from being aborted immoral? How do we distinguish between the two lives there?
Because the people on here telling me to "go back to the hypothetical, engage with the hypothetical" don't want any nuance or grey areas or contemplation of real world, we are supposed to just go "child drowning, must save". Okay then, child in womb about to be killed, must save. Engage with that and then talk to me about morality.
Oh and you can't argue "it's not a child", "it's only a potential person", "it depends on the stage of development" and the rest of such arguments because nuh-uh, that's dodging the hypothetical. After all, we don't list off what age the drowning child is, whether they're a genius or a Downs Syndrome child, who their parents are, or any of the rest of it. So now define morality for me based on actions deliberately chosen or inaction deliberately chosen with no refinements other than "this is a life, you are obliged to save life".
As a worshipper of Tlaloc, I feel my moral duty is to drown as many children as possible and so if I'm not pushing a kid into a pond 24/7, can I really say that my life has value? 😁
Curious what your stance is on potential-humans vs already-existent humans.
Are we also obligated to bring as many people into existence as possible, since we're "killing" them by accepting the alternative course?
This isn't meant to be a gotcha. I'm just curious on your worldview.
I'm also curious, do you strive to be 100% moral under this code yourself?
Bringing up the actual in real life effects of actual charities doesn't seem to motivate anyone, because they fall back onto abstract arguments about why it's not good to do charity that on average saves a life per 6000 dollars. And obviously as you see, it's pointless to discuss hypotheticals when you can have real life details to talk about things.
So yeah, I agree that EA refusing to obey social mores is cultish. Normal people drop it when they see you aren't interested in conversation.
I do think you can persuade people, but it's much closer to discovering existing EAs than it is making them. Doesn't invalidate your point though, especially since this essay is targeted at someone who probably thinks they think a lot about morality.
Not relevant to the point you are making, but apparently the split brain experiments are not as well founded as previously believed: https://www.uva.nl/shared-content/uva/en/news/press-releases/2017/01/split-brain-does-not-lead-to-split-consciousness.html?cb
Pardon me if I'm missing something obvious, but don't “split-brain” patients still potentially have a ton of mutual feedback via the rest of the nervous system and body?
Oh yeah completely separately I'd like to apologize for embodying the failure mode you're talking about here. I'm not good and I use this place as a cathartic dumping ground for my frustrations, whoops.
Sometimes the brain worms get me but I'll try to keep in mind that sometimes third parties have to scroll past my garbage. Need to imagine a stern looking Scott telling me to think if it's a good comment before posting.
"This obsession with arbitrary ethical rules"
I would argue that this piece, and EA in general, is trying to make those rules less arbitrary.
Arbitrariness is a matter of degree. The fewer convoluted assumptions are required before logical implication can take over, the less arbitrary some idea is. Saying "still ultimately arbitrary" and then justifying "ultimately" on the grounds of the is-ought problem being a thing at all... by that standard, the phrase "arbitrary ethical rules" is about as redundant as "wet lakes" or "spherical planets" - unclear what it would even mean for the descriptor not to apply, so using it anyway is more likely a matter of smuggling in misleading connotations.
If someone told me their own hamburger had ketchup on it, just after having taken a bite, I'd be inclined to believe them even if I couldn't see any ketchup there myself - it's not an intrinsically implausible claim, and they'd know as well as anyone would.
Similarly, having observed it directly I consider my own life to have value, and I'm willing to extend the benefit of the doubt to pretty much everyone else's.
Doesn't seem to be deleted: https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics
It was originally, long before Substack was founded, at a different URL that's no longer online. Possibly people don't know that there's now a Substack.
I linked to it at the time https://entitledtoanopinion.wordpress.com/2015/07/17/you-said-it-better-than-my-years-of-attempts/
Oh, thank goodness - I'd have been sad if a "foundational reference" essay that I reread periodically was gone for good. Link rot comes for everything in the end...
I made sure that it was on the EA forum before the old blog went offline, and that copy I expect to be Extremely Permanent:
https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics
Sorry about the trail of dead links I've carelessly left in my wake.
"Sorry about the trail of dead links I've carelessly left in my wake." - Me, the first time I played a Zelda game
This kind of thing is getting far beyond the actual utility of moral thought experiments. Once you're bringing in blatantly nonsensical constructs like the river where all the drowning children from a magical megacity go, you've passed the point where you can get any useful insight from thinking about this hypothetical.
If you want to actually make a moral point around this, it's better to find real-life situations that illustrate your preferred point, even if they're messier or have inconvenient details. The fact that reality has inconvenient details in it is actually germane to moral decision-making.
So much this. My moral intuition just completely checks out somewhere between the examples 2 and 3 and goes "blah, whatever, this is all mega-contrived nonsense, I might just as well imagine me a spaceship while at it". Even though I'm already convinced of the argument Scott makes.
I feel the same about many trolley problems.
Having said that, doing thought experiments is a good discipline.
Theyre a great way to consider things in a stripped down way. They just hurt the brain a bit.
Learning morality from these thought experiments is like learning architecture from an Escher painting.
True that it's hard to learn from these--but they're not for *learning* morality. Thought experiments are the edge cases by which you *test* what you've learned or concluded. In that analogy, it's like looking at what architecture *can't* do by studying an Escher lithograph.
Practically speaking, no one has been persuaded into actually looking into details when they say things like "why would I donate to malaria nets". They fall back onto their preconceptions about how charities are corrupt and oh no nothing ever happens productively when it comes to charities, despite those points being laid out in exhausting detail on givewell's website.
So when people say that hypotheticals are useless and that it takes too much time to find out germane details, it sure does seem like people have a gigantic preference for not having anything damage their self image as a fundamentally morally good person, and this preference happens before any rules about the correct level of meta or object level details arise.
I mean, that's obvious, right?What's your point? That most people don't seem especially saintly when scrutinized by Singer or similarly scrupulous utilitarians?
If it was obvious, there'd be way more pushback re: discussion norms against bad faith. Coming into a discussion with your bottom line written down and being unwilling to update on germane facts that someone has to find for you is rude and should be banned via most ethical systems and not just utilitarianism (or is being stubborn a virtue?)
I'm not saying that they're at fault for being less virtuous, but for *not even attempting to be virtuous by most definitions of virtue*. Neither deontology nor virtue ethics says that's it okay to ignore rules or virtues because it feels uncomfortable. And this isn't a deep seated discomfort that's hard to hide, it's an obvious-by-your-accounting one!
You're just saying "bad people should be good people" at great length here. So yeah, I'd say it's pretty obvious.
> or is being stubborn a virtue?
Plenty of people think of things like maintaining faith and hope in conditions where they are challenged as virtuous, rather than as opportunities to reconsider your beliefs. Usually this is couched in terms of being ultimately right, contra the immediate evidence - seems like a pretty good definition of stubbornness to me.
You're wrong. I was persuaded precisely by the details, specifically by Scott back on SSC - the post which finally pushed me over was *Beware Systemic Change*, oddly enough, but the fuel was all of his writing about poverty and the effectiveness and so on in a specific detailed fashion.
What I think you're saying is "people want to be selfish and will engage in whatever tortured pseudo-logic that lets them indulge in this urge with minimal guilt". And on a purely descriptive level, I agree. I also think that's bad, and we should not in any way encourage that behavior.
Thank you so much for proving me wrong. I should not have been hyperbolic.
And I also agree this shouldn't be encouraged, but I have no idea what a productive way of going about this would be. The unproductive way I've been doing is to post snark and dunks, which I agree is bad and also should not be encouraged but what if it makes me feel a tiny bit better for one moment? have you considered that.
But no seriously, you can't see the exact degree to which someone is bad faith in this way until you've engaged with them substantially, at which point they usually get bored and call you names instead of responding. Any ideas would be welcome
Politics is the mind-killer. It is the little death that precedes total obliteration. I will face the hot takes and I will permit them to pass over me and through me. And when the thinkpieces and quips have gone past, I will turn the inner eye to see its path. Where the dunks have gone there will be nothing. Only I will remain.
But to your point, yes, broadly speaking I agree. Claims that you have an obligation to be Perfectly Rational or Perfectly Moral-Maximising or whatever at all times, and to fall short by a hairs breadth is equivalent to having never tried at all or tried as hard as possible to do the opposite, are utterly Not Helpful and also patently stupid. If I came across as saying that, I strongly apologise. And implied within that position is that it is less than maximally damning to fall short from time to time - not *good* maybe, but you do get credit for the Good Things.
And yes, I agree that there is a lot of bad faith on this topic, because people want to justify their urges to have another six-pack of dubiously-enjoyable beer rather than helping someone else, an urge which only gets greater with greater psychological distance. Construal level theory is applicable here, I think. Frankly, I'm getting pretty hacked off with people arguing in what is obviously bad faith trying to justify both being selfish and viewing themselves as not-selfish.
The basic way I ground things out is "do you accept that, barring incurring some greater Bad Thing, to a first approximation we have some degree of moral obligation to help others in bad situations?" If yes, then we can discuss specifics and frameworks and suchforth. If not, we're from such totally different moral universes then our differences are far more fundamental.
> If I came across as saying that, I strongly apologise.
You did not come across this way.
I actually do think I'm not being helpful, and like, surely there exist norms that we can push for where people don't post such bad faith takes.
> If not, we're from such totally different moral universes
To a certain extent, this is not what Scott believes and it's to his great credit that he doesn't, because it's what motivated him to be persuasive and argue cogently for his point.
Agreed. The day I first encountered Peter Singer's original drowning child essay, I went home and donated to malaria nets. I've been donating 10% of my income to global health charities ever since. Hypothetical situations aren't inherently unpersuasive, even if you can't persuade all the people all the time.
I truly think that most people just don't have money to donate to charity after all of the taxes they pay. People may believe that if spending isn't taken care of immediately that the government will go bankrupt within 1-5 years and if that happens the entire western world will. Ollapse overnight and a whole lot of people, the entire planet, will be suffering a whole lot. People may also believe DOGE actually will make things more efficient and if that ends up being the case its completely fine to continue to help the rest of the world in a streamlined and technologically up to date way.
I honestly haven't kept up with DOGE and whats going on but it seems like theyre going full Shiva on everything and then reinstating things they make mistakes on. Its not the way I think anyone would prefer but if it really is true that the US could bankrupt within 1-5 years then this absolutely had to happen and one can be a moral person that supports this.
I think the mega-death river is actually a pretty reasonable analogy for many real-life situations. Scott has mentioned the rich Zimbabweans who ignore the suffering of their countrymen. These are analogies for simply turning a blind eye to suffering, and the point being illustrated is that morality does not reasonably have any *actual* relationship with distance or entanglement or whatever, it's just more convenient to request that people close to a situation respond to it.
Of course there are plenty of ordinary Angolan businessmen, but I think the assumption must be that the rich Angolan is probably not a legitimate businessman but someone who skims or completely appropriates Western aid or the oil revenues that themselves owe to Western businessmen.
I would mostly agree. It's the distillation of some moral hypothetical into a specific (albeit wholly artificial and nonsensical) scenario that makes it a PARABLE.
I think people are apt to ignore problems if they think they can't do anything useful. They might or might not be right about whether they can do anything useful.
Sometimes the locals are the only ones who can help. Oskar Schindler was in the right place and at the right time to save a good number of Jews. Henry Ford wasn't in a place where he could do much. What he could do, make weapons for the allies, was entirely different from what Oskar could do (making defective shells for the Nazis as a cover for saving Jews).
Even assuming Ford was a moral person who was genuinely interested in helping, he didn't have an avenue to do so in a direct way. I don't consider that a moral failing. That he instead chose to help the war effort (which maybe not coincidentally also gave him a lot of money) is not a moral failing either.
And sometimes we just make mistakes, which we cannot determine at the time. The US returned several boatloads of Jews to Europe at a time when it didn't seem like that was likely a big deal. Hindsight wants us to call the action evil, but that's a kind of bias. It was 1939. Very little of Europe was under the control of the Nazis and there wasn't much reason to think that would change. Even less reason to think that the Nazis planned to exterminate Jews in lands they conquered. The solution of "always accept boatloads of foreigners" is not a reasonable policy and comes with its own negatives and evils, which again would be noticed in hindsight.
Maybe Henry Ford isn't the best example to use here.
https://www.si.edu/object/american-axis-henry-ford-charles-lindbergh-and-rise-third-reich-max-wallace%3Asiris_sil_1094433
I'm totally aware of that. Hence the "even assuming Ford was a moral person" part.
"The solution of "always accept boatloads of foreigners" is not a reasonable policy"
It was America's policy up until 1882 and, for white people, up until 1921.
Which means that "sometimes accept boatloads of foreigners" is a reasonable policy. That does not imply that "always accept boatloads of foreigners" is as well.
Yes, I think even more than physical closeness (which, to me, include all the examples with remote bots, portals and any techno-magical way to be able to experience things and jump in as easily and quickly as if you were physically close - so the thought experiments are not ruling out closeness, because it's very clear those alternatives have the same effect as physical closeness for many things, not only altruism. It just precise what closeness is, when (existing or hypothetical) things make it more complex than physical distance), altruism is boosted by:
- innate empathy (higher for children, higher for people more like you, higher for women, lower for enemies)
- the impression you can help (your efforts are not likely to be in vain)
- the impression you will not loose too much by helping
- this include the fear of establishing a precedent for such help, which indeed can cost a lot if such issue is ultra-common. For me a better interpretation to lack of empathy for common misery than habituation...
- the impression you can gain social status as the "good guy" (direct or indirect bystanders).
On the other hand, it is decreased (decreased a lot, I think) by:
- the impression you are being taken advantage of, scammed in a way.... (i.e. your saving will super-benefit the victim that would become more well off than just fix the issue (like drowning), or, more commonly, it benefit a third party, especially if this third party caused the problem in the first place). This is linked to the "loose too much", but not only, also a little bit to the social status (hero v.s. trop bon trop con (too good=too dumb), but I feel it really is an altruism killer in it's own instinctual way. Maybe THE killer.
I use "instinctual" a lot, because I am fully in the camp of morality being an instinct first, an axiom-based construction (distant) second. So, like other instincts/innate things (like sensory perception), it is easy to construct moral illusions, especially in situations impossible (or unlikely) to happen during human evolution.
I think its a good illustration of how to think about this problem
Here's a real-life situation:
You're a doctor working at a hospital, putting in superhuman effort and working round the clock to save as many people as you possible can. Once you finish your residency, do you have the moral obligation to keep doing this?
You have a moral obligation to be a good person. There are many ways to do that, of which backbreaking labor at a hospital is both not the only option and perhaps not the best option.
You don't have a moral obligation to be a good person - to be a good person is to go above and beyond your obligations. Meeting your obligations doesn't make you good, it makes you normal.
This attitude is toxic and feeds into tribalism and "no cookies" arguments where treating the other tribe well gives credit even for a little while treating your tribe with anything but the most delicate kid gloves invites excoriation.
That is not a prescriptivist statement, but a descriptivist statement, about what the words actually mean to people.
I'm not sure it works as descriptivist either--there are plenty of people who divide the world into "good people" and "bad people", not "the good, the bad, and the average".
I didn't respond at first because in some sense you're right - or we could quibble over what "good" or "Good" mean, which probably isn't productive.
I will say that I don't consider moral to be neutral. Just being a normal person who does normal stuff doesn't make you moral. It doesn't make you immoral, either.
For me to consider someone moral, I believe that they have to do things that are morally positive that are not natural or easy. There has to be at least some effort at doing other than go-with-the-flow.
Again, not doing that doesn't make you evil (usually), but I don't want to dilute the idea of morality to make it natural and easy. It lets everybody get off too easily and with no benefit to society. We should expect more, in the sense of "leave the place better than you found it."
Agreed.
Does the hospital administrator have a moral obligation to hire enough people to finish all the necessary work without sleep deprivation and burnout?
Does it matter? Does the fact that someone else's lack of moral obligation left you in this situation mean you don't need to help?
Maybe you see a drowning child because someone didn't fulfill their moral obligation to add fences. Or because someone pushed the child into the river. Does that change your moral obligation to save them?
Strongly disagree. The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Suppose you're Newton trying to explain how orbits work with the cannon thought experiment, but the person you're talking with keeps bringing up ways in which the example is unrealistic. "What sort of gunpowder could propel a cannonball out of the atmosphere?" they ask, and "What about air resistance slowing the cannonball down?" and so on.
It's not unreasonable to say in that situation "No, ignore all of that and focus on the idea the thought experiment is trying to communicate. If it helps, imagine that the cannon is in a vacuum and the gunpowder is magic."
And sure, if Newton thought hard enough, maybe he could come up with the concept of rockets and provided an entirely realistic example of the principle- but if someone had demanded that of them, they'd still have been missing the point.
You are making sound argument for using extreme examples where the details stop mattering.
Unfortunately, what we have here are highly contrived examples with lots of extra details thrown in to muddle up the analysis.
They may both be hypotheticals/thought experiments, but otherwise they are not similar
>The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Even the most simple, original Drowning Child thought experiment is drawn from messy reality. It asks us to avoid many questions that any person in that situation might ask themselves, intuitively or not: What is the risk to myself, other than the financial risk of ruining my suit? Am I a good enough swimmer to get to the child and pull it to shore? Is the child, once I reach it, going to drown *me* by thrashing around in panic? Is the child 10 meters from shore, or 100? Are there any tools around that could help, like a rope?
Plenty of complications there already, and no need to introduce even more. Or, if you do need to introduce more, start asking yourself if it's really a good thought experiment to begin with.
But Newton never claimed that his cannonball experiment was, by itself, proof of his theories, only that it helped to illustrate an idea that he'd separately demonstrated from real examples. Scott doesn't have the real-world demonstration.
How do you do a "real-world demonstration" of an ethical principle?
Complete agreement.
I'd have thought the opposite is true, or can be, in that well-chosen idealized scenarios can help clarify and emphasize moral points. It's analagous to a cartoon or diagram, in which a few lines can vividly convey all the relevant information in a photo without any extraneous detail.
I actually have found a lot of utility in it because I seem to disagree with basically everyone in this thread, and it has given me context on why I find EA so uncompelling.
By relocating to the drowning child cabin you are given an incredibly rare chance to save many lives, and you should really be taking advantage of it.
On the other hand, you only get the opportunity because the megacity is so careless about the lives of its children. Obviously, saving the drowning children is a good thing, but what would be even better is if the megacity does something to prevent the kids falling into lakes and streams in the first place.
And if they don't bother because "well that sucker downstream will save the kids for us, and we can then spend the money that should go to fencing off dangerous waterways and having lifeguards stationed around pools on trips to Dubai for the rulers of the city", then are we really saving lives in the long run?
You are not really engaging with the thought experiment. Maybe think of this experiment instead: you suddenly develop the superpower of being able to be aware of any time somebody is drowning within say, 100 miles, and being able to teleport to them and teleport back afterward. If you think 100 miles is so little that a significant number of people drowning within that area is the government being lazy or corrupt, then imagine it was 150, or 200, or 1000, or the whole planet if you must. Would you have an obligation to ever use the active part of your powers to save drowning victims, and how much if so?
"You are not really engaging with the thought experiment."
Because it's rigged. It's not honest. It's trying to force me along the path to the pre-determined conclusion: "we think X is the right thing to do and we want you to agree".
I don't try to convert people to Catholicism on here, even though I do think in that case X is right, because I have too much respect for their own minds and souls. I'll be jiggered if I let some thought experiment that is as rigged as a Las Vegas roulette wheel manhandle me into "well of course I agree with everything your cult says".
EDIT: You want me to engage with the thought experiment? Fine. Let's forget the megacity.
Outside my door is a river, and every hour down this river comes a drowning child. Am I obligated to save one of them?
I answer no.
Am I obligated to save every single one of them, morning noon and night, twenty-four drowning children a day every day for the foreseeable future?
Again I answer, no.
But that's not what I'm supposed to answer? Well then make the terms clearer: you're not asking me "do you think you are morally obligated?", you're telling me I'm morally obligated. And you have NOT made that case at all.
There's a group of people who think that if you live in a regular old cabin in the woods in the real world, see a single child drowning in the river outside, and can save them with only a mild inconvenience, you are morally obligated to do so.
The child-drowning-in-a-river-every-hour thought experiment is a way to further explore that belief and discuss where that moral obligation comes from. Of course it's going to sound absurd to you, because you don't agree with the original premise. It's convoluted because it's a distortion of a previous thought experiment.
I'm not a huge fan of the every hour version because it implies an excessive burden on the person who would have to save a child every hour, completely disrupting their life and removing moral obligation to some degree. I think the comparison of the moralities of the two people earning $200k is a much more interesting example.
Save sixteen kids the first day, then formally adopt those. Have them stand watch in shifts, with long bamboo poles for rescuing their future siblings from safely back on shore. If their original parents show up, agree to an out-of-court settlement, conditional on a full-time lifeguard being hired to solve the problem properly.
I mean, if you seriously think you're not morally obligated to save any drowning children (and elsewhere in the thread you said it applies to the original hypothetical with just one child too), then fine, you've finally engaged instead of talking around the thing.
This, and your attitude to moral questions in general, does affect my opinion of the effectiveness of Catholicism, and religion in general, in instilling morals in people though, and I can't be the only one. You're not just a non-missionary, you're an anti-missionary.
Oh dearie, dearie me. You now have a poor opinion of Catholicism, huh? As distinct from up to ten minutes ago when you were on the point of converting?
Yeah, I'm afraid my only reaction here is 😁😁😁😁😁😁😁😁😁😁
Now, who's the one not engaging with the hypothetical? "Just because you think it's bad doesn't mean it's wrong", remember that when it comes to imposing one's own morals or religious beliefs on others in such instances as trans athletes in women's sports, polyamory, no-fault divorce, capitalism, communism, abortion, child-free movement and a lot more.
You don't like the conclusion I come to when faced with the hypothetical original Drowning Child or the variants with the Megacity Drowning Children River? Tough for you, that does not make my view wrong *unless* you can demonstrate from whence comes the moral obligation.
"If you agree to X you are morally obliged to agree to Y". Fine. Demonstrate to me where you get the moral obligation about X in the first instance. You haven't done that, you (and the thought experiment) are assuming we all share Western, Christianity-derived, social values about the importance of life, the duty towards one's neighbour, and what is moral and ethical to do.
That's a presumption, not a proof. Indeed, we are arguing about universal values and objective moral standards in the first place!
I can be just as disappointed about "the effectiveness of Effective Altruism, and rationalism in general, in instilling morals in people" if you refuse to agree with me that "if you agree to save the Drowning Child, you are morally obligated to agree to ban abortion".
Malaria killed 608,000 children globally in 2022. Abortion killed 609,360 children in the USA alone in 2022. Now who cares more about the sacred value of life and the duty to save children?
I think she's engaging, but the experiment seems to be going sideways ;-).
That's the fun with hypotheticals - someone elsewhere said "the choice is snake hands or snake feet and you're going 'I want to pick snake tail'" but why not? It's a hypothetical, nobody in reality is going to get snake hands or snake feet! So why not "Oh I think I'd rather be Rahu instead!" with the snakey tail?
https://pureprayer-web.s3.ap-south-1.amazonaws.com/2024/08/xvJUkEcy-Rahu-Ketu-Dosha-Parihara-Homam-Creatives-2024-600-%C3%97600px.jpg
These things never seem to bother with considering that the value of a human life is not a universal constant, any more than is the value of other life on this planet.
Oh, sure. It's an arm-twisting argument about "you should give to charity", not anything more. Same way the thought experiment about "suppose a famous violinst was connected up to your circulatory system" or "suppose people got pregnant from dandelion seeds floating in the window" about abortion.
It's set up to force you along a path to the conclusion the experimenter wants you to arrive at. You have three choices:
(1) Agree with the conclusion - good, moral person, here's a pat on the head for you
(2) Agree with X but not with Y - tsk, tsk, you are being inconsistent! You don't want to be inconsistent, do you? Only bad and stupid people are inconsistent!
(3) Recognise the trap lying in wait and refuse to agree with X in the first place - and we get what FeaturelessPoint above pulls with me - oh you bad and wicked and evil monster, how could you?
Many people go along with (1) because nobody (or very very few) are willing to be called a monster by people they have been habituated to regard as Authorities (hence why it's always Famous Philosopher or Big Name University coming out with the dumb experiments; we'll all laugh and ignore if it was Joe Schmoe on the Innertubes) and most people want to get along with others so they'll cave on (2). We all want to think of ourselves as moral and good people, after all, and if the Authority says "only viewpoint 1 is acceptable for good and moral people to hold", we'll most of us go along meekly enough.
You have to be hardened enough to go "okay, I'm a monster? fine, I'm a monster!" but it becomes a lot easier if your views have had you called a monster for decades (same way every Republican candidate was "Hitler for real this time", eventually people stop paying attention).
I'm willing to bite that bullet in a hypothetical, because I know it's a hypothetical and what I might or might not do in a spherical cow world of runaway trolleys and nobody in sight for miles around a pond except me and a drowning child, is completely different from what I'd do in real life.
In real life, maybe I don't jump into the pond because I can't swim. Maybe this is my only good suit and if I ruin it, I can't easily afford to replace it, and then I can't go to that interview to get the job that means now I can pay rent and feed my own kids. Maybe I'm scared of water. Maybe I think the kid is just messing around and isn't really drowning. Real life is fucking complicated*, so I have no problem being a contrarian in a simplified thought experiment that I can tell is trying to steer me down path A and not path B.
In real life, I acknowledge the duty to give to charity, because my religion tells me to do so. That's a world away from some smug thought experiment.
*Which is why there is a field called moral theology in Catholicism, and why for instance orthodox Jews get around Sabbath prohibitions by using automated switches to turn on lights etc. The bare rule says X. Real life makes it hard to do X, is Y acceptable? How about Z? "You're a bad Jew and make me think badly of Judaism as instilling moral values if you use automation on the Sabbath" is easy to say when it's not you trying to live your values.
I'm loving your ability to enunciate what the rest of us can only mutely feel.
"Suppose people got pregnant from dandelion seeds floating in the window" - hadn't heard that one but it's funny to me because it puts the thought experimenters about at the level of some adolescent girls circa 1984 - when my fellow schoolgirls earnestly discussed whether one could get pregnant "sitting in the ocean" lol.
Thank you for the compliment!
Yeah, the dandelion seeds comes from Judith Thomson's "A Defense of Abortion"
https://spot.colorado.edu/~heathwoo/Phil160,Fall02/thomson.htm
"Again, suppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don't want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective, and a seed drifts in and takes root. Does the person-plant who now develops have a right to the use of your house? Surely not--despite the fact that you voluntarily opened your windows, you knowingly kept carpets and upholstered furniture, and you knew that screens were sometimes defective. Someone may argue that you are responsible for its rooting, that it does have a right to your house, because after all you could have lived out your life with bare floors and furniture, or with sealed windows and doors. But this won't do--for by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army."
Interestingly, she seems to argue *against* the Drowning Child scenario, though not by mentioning it:
"For we should now, at long last, ask what it comes to, to have a right to life. In some views having a right to life includes having a right to be given at least the bare minimum one needs for continued life. But suppose that what in fact IS the bare minimum a man needs for continued life is something he has no right at all to be given? If I am sick unto death, and the only thing that will save my life is the touch of Henry Fonda's cool hand on my fevered brow. then all the same, I have no right to be given the touch of Henry Fonda's cool hand on my fevered brow. It would be frightfully nice of him to fly in from the West Coast to provide it. It would be less nice, though no doubt well meant, if my friends flew out to the West coast and brought Henry Fonda back with them. But I have no right at all against anybody that he should do this for me."
So by her logic, if you live by the river of drowning children, nobody in the world can force or expect you to rush out and save them every hour, or indeed at all. Just because your cabin is located beside the river, where there is a megacity upstream where the children all tumble into lakes and get washed downstream, puts no obligation whatsoever on you. You didn't do anything to create the river or the city, or the careless parents and negligent city government.
I appreciated this perspective and was surprised is wasn't brought up earlier or given greater weight.
Deontological details are important, but a core part of all of this revolves around who is accountable for stopping an atrocity. I loved Scott's article, but we focused on pushing the extreme boundaries on how to evaluate a hapless individual's response to the megacity drowning machine while literally ignoring the rest of the society.
I've waved this part off as avoiding the pitfalls of the bystander effect; plus the point of the article seems to be answering the question "what should I as an individual do?" as well . But sometimes a problem requires a mobilized, community response.
I also appreciated Deiseach pointing out that when you altruistically remove pain from a dysfunctional system that you can remove the incentives for the system to change which can have a worse outcome.
If it needs to be in the form of a thought experiment:
A high profile, reckless child belonging to a powerful lawmaker who is constantly gallivanting falls in the river. If you save them you know the child will stay mum about it to avoid backlash from their parents, but if they drown the emotionally vexed lawmaker will attempt to re-prioritize riparian safety laws. What do you do?
The megacity is a vibrant democracy. Every child who drowns traumatizes the entire family and their immediate relations and galvanizes them to vote against the status quo and demand policy change, which is the only thing that will ultimately stop the jeopardy to the children long term. Do you save an arbitrary child that afternoon? How about at night after saving every child during your standard waking hours?
No one wants to see an atrocity occur. But sometimes letting things burn allows enough smoke to get in the air that meaningful action can finally happen. We should at least consider this if we're doing an elaborate moral calculus.
Isn’t this “incredibly rare chance” basically equivalent to being on the global rich list? Which most people on this forum are on?
https://www.givingwhatwecan.org/how-rich-am-i
I went to that page and entered my information, but it didn't tell me whether I was on the global rich list or not, and it didn't say how rich someone would have to be in order to be on the global rich list (which I assume is not a real list, but a metaphor meaning in the top 0.01% or something). Do you know?
You mean to tell me that people *don't* wake up sutured to violinists?
And here was me convinced my sister got pregnant from the baby-seeds floating in the window!
Scott has to keep making up fantastical situations because it’s the only way to pump up the drowning child intuition. I don’t regularly encounter strangers who I can see right in front of me having an emergency where they have seconds before dying but are also thousands of miles away.
The important part isn't physical distance, it's ping times.
Hmm, I don't have an ethic where I judge hypotheticals in terms of their realism. In fact, isn't the beauty of the hypothetical the fact that it is so malleable?
It really is a different mode of thinking. For some people, abstract situations are clarifying because they eliminate the ancillary details that obscure the general principle. For others, it's necessary to have all the ancillary details to make the impact of the general principle evident.
I've always favored the former, but I regularly encounter folks who only process things via the latter. Communicating effectively and convincingly across different types requires the ability to switch modes. Sorta like talking to a physicist vs. an engineer.
I redd about this on r/askphilosophy (not sure why this scenario is resurfacing so much lately) and was struck by this comment:
"Singer isn't writing for people walking by ponds that children have fallen into, though. It's a thought experiment ... Singer's point isn't "Intuition, yay!" it's that our intuition privileges the people close to us but we should consider distant folks the same. It's that our intuition is wrong."
That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable.
It seems entirely disingenuous to me. He (Singer) should state his point, assert that he knows the truth and you know a lie, and let the chips fall where they may.
"That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable."
It's a gotcha, and why my withers remain resolutely unwrung by those telling me I'm immoral if I don't fall into line about "if X, then by necessity and compulsion Y".
I kind of know what you mean, but I kind of also feel like thought experiments lay bare uncomfortable truths about ourselves that we can typically hide from behind "germane details"
Is it a good thing to aspire to be the moral equivalent of the Siberian peasant who can’t do math word problems because he rejects hypotheticals? The thought experiments are useful for crystallizing what principles are relevant and how. Most people don’t intuitively think in terms of symbolic abstractions, that’s why hypothetical scenarios. Their practical absurdity is beside the point.
Given Russian history, I sorta suspect the Siberian peasant is capable of doing math word problems in private, but has developed an exceptionally vigilant spam filter. Smooth-talking outsider comes along, saying things without evidence? Don't try to figure out the scam, just play dumb, avoid giving offense, and wait for him to leave.
I agree. Maybe the Siberian peasant is too stupid to do maths problems, or maybe he remembers the last time some government guy from the Big City turned up and asked the villagers to agree to a harmless imaginary statement.
They're still scrubbing the bloodstains out of the floor in that hut.
As we are hyperbolic discounters of time, perhaps we similarly discount space.
Perhaps more precisely, people discount help according to their social circle's accounting of that help. Distance is part of it, relatedness is another (especially in collectivist cultures like mine)
That's a good point. Intuitively, the kid you can see drowning is probably your neighbor's kid or your second cousin twice removed or something. The kid you can't see drowning has nothing to do with you, and you have nothing to do with any of the people that would be grateful for them being saved
Honestly that's not intuitive to me? I've never thought of the drowning child thought experiment and thought "wow, that's probably related to someone I know!" and if we imagine that the drowning child is not at all related, e.g they're a Nigerian tourist or something, it still seems like I'm just as obligated to save them.
I think they are trying to explain why our moral intuitions discount space.
https://www.lesswrong.com/w/adaptation-executors
In other words, caring about space (for space's sake) may have evolved because it was a near-enough proxy for relatedness.
ohh that makes more sense, like kin selection
So people intuitively recognize that you should save drowning children bcs that intuition evolved to help people related to you pass their genes on. They don't have that intuition for people far away bcs that had no reason to evolve since helping people hundreds of miles away doesn't help your genes.
In older times people just said that other tribes didn't really matter and only they did, so that's why they only helped their own tribe. Nowadays people are more egalitarian and recognize that everyone has moral worth, so they have twist themselves into knots to justify their intuition that you don't have to help far-off strangers.
If there are two people choking, one 10cm away inside a bank vault you don't know how to unlock (and might be charged with a felony for trying), the other a hundred meters away across clear open ground, who do you have the greater responsibility to?
Available bandwidth and ping times are more important than literal spatial distance.
I would agree with this idea. It also seems like a near vs far mode thing. The suffering of children in Africa is very conceptually distant, and we perceive it with a very low resolution. A child drowning right next to you just feels a lot more real.
In Garrett Cullity's The Moral Demands of Affluence his argument is that the "Extreme Argument" (Singer's drowning child) would require us to compromise our own impartially acceptable goods. And we don't even ask that of the people we are saving, so they can't ask it of us. (Kind of hard to do a tl;dr on it because his entire 300 page book is solely on this topic.)
"My strategy is to begin by describing certain personal goods—friendships and commitments to personal projects will be my leading examples—that are in an important sense constituted by attitudes of personal partiality. Focusing on these goods involves no bias towards the well-off: they are goods that have fundamental importance to people’s lives, irrespective of the material standard of living of those who possess them. Next, I shall point out the way in which your pursuit of these goods would be fundamentally compromised if you were attempting to follow the Extreme Demand. Your life would have to be altruistically focused, in a distinctive way that I shall describe. The rest of the chapter then demonstrates that this is not just a tough consequence of the Extreme Demand: it is a reason for rejecting it. If other people’s interests in life are to ground a requirement on us to save them—as surely they do—then, I shall argue, it must be impartially acceptable to pursue the kinds of good that give people such interests. An ethical outlook will be impartially rejectable if it does not properly accommodate the pursuit of these goods—on any plausible conception of appropriate impartiality."
Never heard of it before but I think you chose a good except, thanks.
I don't know if you were the one to recommend it in a prior ACX post, but I saw a comment about that book and devoured it. I am shocked that it doesn't seem to have any influence on modern EA, because it deals a strong counterargument to the standard rejection of the infinite demand problem.
I don't think it was me. I live in an Asian timezone so rarely post here (or on any other moderately popular American thing since they always have 500 comments by the time I wake up).
But maybe we both saw the same original recommendation? I read it probably 5-6 years ago, so maybe I saw it back on SSC in the day??
No, this was within a few months ago.
I'm not well versed on the matter--isn't the entire point of Singer's drowning child that it is not an extreme demand? It doesn't require you to sacrifice important personal goods like friendships or personal projects, just that you accept a minor inconvenience.
Edit--I read more about it, and Singer's drowning child is not the extreme demand, but the child-per-hour argument could be. The iterative vs aggregative question to moral duty seems particularly relevant to Scott's post.
With the disclaimer that it has been many years since I read the book and Cullitty's "life-saving analogy" is slightly different from Singer's for reasons he explains in the book. But part of the Singer's argument is he isn't actually asking us "just" to save one single child with his life-saving analogy. That's just the wedge entry point and you then are required by his logic to iterate on it.
"I do not claim to have invented what I am calling the ‘life-saving analogy’. Peter Singer did that, in 1972, when he compared the failure to donate money towards relief of the then-recent Bengal famine with the failure to stop to pull a drowning child from a shallow pond."
Singer certainly wouldn't say "you donated to the Bengal famine in 1972 so you are relieved from all further charity for the rest of your life." He would ask you to iterate for the next thing. After all his other books advocate for recurring monthly donations, not one offs.
"No matter how many lives I may have saved already, the wrongness of not saving the next one is to be determined by iterating the same comparison."
Singer doesn't let you off the hook if you see a school bus plunge into the river and have saved a single child from it. You can't just say "eh, I did my part, time to get to work, I'm already running late for morning standup".
And then once you iterate it seems to lead inexorably to the Extreme Demand.
"An iterative approach to the life-saving analogy leads to the conclusion that
you are required to get as close as you productively can to meeting the
Extreme Demand"
So I think Cullitty, at least, believes that (some variation of) Singer's argument requires the Extreme Demand.
I think it also abolishes the "why help one near person when the same resources would help one hundred far persons?" arguments I see bruited about.
If you don't get to say "I donated once/I pulled one kid out of a river once" and that's it, no more obligations, then neither do you get to argue that people far away should be prioritised in giving over people near to you (and I've seen plenty of arguments about 'this is why you shouldn't give to the soup kitchen on your street when that same money would help many more people ten thousand miles away').
If I'm obliged to help those far away, I am *also* obliged to help those near to me. I'm obliged to donate to malaria net charities to save 100 children, but I'm *also* obliged to give that one homeless guy money when he begs from me.
Distance is not an excuse in either case; if I'm obliged to help one, I am obliged to help all, and I don't get off the hook by saying "but I just transferred 10% of my wages to GiveWell" when confronted by the beggar at my bus stop.
There are no drowning children.
The correct scenario and question to ask is:
"An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease."
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?
If you're going to get moralistic about (I assume) HIV you should also bear in mind it gets transmitted from mothers to newborns, who obviously have no moral responsibility for their plight.
We can focus on the newborns:
"An entire society focuses on a sexual practice that spreads an incurable disease. The disease is also passed on to the children."
->
"An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society?"
This is standard motte and bailey. You cannot consider one without the other in this thought experiment and turn around and apply it to real life.
This is kind of an absurd argument given that the sexual practice in question is just "having sex" and infecting children is in no way a necessary consequence for people to fulfill their desire.
This is again a question of moral luck: people in the United States can, relatively trivially, get the medicine necessary to have sex without passing on HIV and people in some nations cannot.
This is false:
> use of a cloth to remove vaginal secretions during intercourse (dry sex) (relative risk, 37.95)
Dry sex has a risk ratio that is higher than blood transfusion (relative risk, 10.89).
The prevalence of dry sex is over 50% in places with high HIV rates.
"Just sex" has a transmission rate of 0.08%, which means someone needs to have sex with an infected person OVER 1000 times to be infected with HIV.
Okay, fine - we should definitely discourage this. I don't think that gives us the license to ignore newborns getting HIV or that this is tantamount to deliberately drowning children.
Why not colonization for the greater good, then?
The figure you cite is based on a study which observed 19 cases of HIV in total: https://pubmed.ncbi.nlm.nih.gov/2391598/
A much larger meta-analysis found much, much lower hazard ratios for dry sex practices (maybe 0-80% higher): https://pubmed.ncbi.nlm.nih.gov/21358808/
Do you really think the world's assembled anti-HIV efforts would have ignored this out of embarrassment or stupidity? It's largely a sexually transmitted disease - they are not squeamish when it comes to studying which sexual activities are associated with increased risk. It is easy to find tables with estimated infection rates for anal sex, sex while carrying open STI sores, and so on.
I suggest you come up with some other way of blaming HIV incidence on the backwards culture of Africans.
>An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society
Those are two separate questions. You should save the child and do what you can to encourage the societal change that you think would be beneficial.
I get that this is some kind of tortured analogy to HIV, so I guess the real question is do you actually think non-profits aren’t also spending money on safer sex education in addition to HAART?
I hope you mean the newborns because mothers obviously have moral responsibility for their newborns plight.
I was referring to the newborns (although it's definitely not universally true that their mothers are at fault).
And from rapists to rapees, often, they say.
That doesn't mean long-term utilitarian arguments about the consequences of policy go away. It is conceivable that refusing to pay for HIV medication would ultimately produce a society where the risks of HIV are so terrifying that no-one engages in unprotected sex, and thus the number of infected newborns drops to zero. Even if, in the short term, more newborns die.
I don't know if this is actually true, but 'moralism' isn't the correct frame to analyse this problem with. "Fewer people should die of HIV" is a 'moralising' position.
I think "maybe if we let them all have AIDS it'll work itself out" is:
1. Very ghoulish
2. Obviously wrong, given that some countries already have wildly high rates
Yeah, some countries do have wildly high rates, but were these countries with zero access to HIV meds? How do you know this didn't exacerbate the problem?
I don't know what the correct solution to the calculus here would look like, I'm just pointing out that calling your critic a 'moraliser' in response is nonsensical. There's no utilitarian calculus without a moral definition of utility.
Countries in the first world with ready access to these medications are way better off than much poorer countries who rely on foreign aid to get them.
And my point was not that it's absurd to invoke morality, my point was that invoking it to assign responsibility to victims was incorrect for the population of victims who have no agency.
EDIT: Also antiretrovirals can virtually eliminate transmission so it's hard to see how the "moral hazard" argument would work here, at least in a scenario where you adequately supply everyone.
> Countries in the first world with ready access to these medications...
Are, among other things, overwhelmingly white/asian, and I'm HBD-pilled, so I don't assume that identical policies are going to yield identical outcomes in both areas.
> EDIT: Also antiretrovirals can virtually eliminate transmission
If they remember to take them, sure. IQ has an impact on how reliably that happens, though.
So there is a question here of judging people as a society. If the newborns in a society all get HIV because the society is bad, but then the newborns grow up to be part of the same society, how do we judge them?
There's a reasonable counterargument that any specific newborn isn't blameworthy because he's a blank slate that might not grow up like that society, so we shouldn't judge him as a social average. But then this is already effectively a newborn we know nothing about except that he's from that society, so maybe judging him by the priors of his society (instead of global priors) makes more sense.
(There's also, in this particular case, a second objection that AIDS aid might gradually help push that society away from the disease as a group; this is a practical question I have no particular insight about)
Judging someone who has taken no actions is pretty incoherent.
Umm, if you're talking about HIV, don't all sexual practices that involve the exchange of semen, saliva, or vaginal secretions increase the spread of the incurable disease? Doesn't this include all societies that allow or encourage physical sexual connection?
Does it make any difference whether or not you're a member of the society?
Reposting:
This is false:
> use of a cloth to remove vaginal secretions during intercourse (dry sex) (relative risk, 37.95)
Dry sex has a risk ratio that is higher than blood transfusion (relative risk, 10.89).
The prevalence of dry sex is over 50% in places with high HIV rates.
"Just sex" has a transmission rate of 0.08%, which means someone needs to have sex with an infected person OVER 1000 times to be infected with HIV.
If you’re going to say this and repeat it. then get it right. The O.08% figure applies to the situation where the infected male is asymptomatic. If he is sick the chance of transmission is 8x higher, or 0.64, enuf bigger to be harder to shrug at. That works out to a 12% chance of infection if thewoman has sex with the symptomatic man 20 times.
The spread of HIV depends on multiple sexual partners. That’s why I thought the blaming of the Catholic Church in Africa was odd.
The blaming was down to opposition to condoms. Condoms are the highest good, you see, and being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people (see the comments above about people wanting to believe they're good and moral while not doing something to help) so they use cases like "condoms reduce spread of AIDS, the church wants people to die" to make themselves feel justified.
It's not so much that the church is against condoms in committed relationships, it's the position that even using condoms in extramarital or promiscuous sex makes those things a worse sin instead of not as bad, that really riles people up wrt AIDS and other diseases.
Sex that makes two people happy is good on that basis in and of itself. Is it in all circumstances net good? No.
"being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people"
All this is an accurate description of the Catholic church's perspective. Judge and ye shall be called judgey.
The real reason to blame the Catholic Church in Africa is that Islam is so negatively correlated with HIV rates there :)
Can you please post a link to the stat you quote of 37x increased risk of HIV transmission when dry sex is practiced vs intercourse with no interference with the vagina’s
secretions? I have looked quickly on Google Scholar and asked the “deep research” version of GPT and cannot find any figures remotely like what you are quoting.
All I found was this study saying there was no difference. https://pubmed.ncbi.nlm.nih.gov/8562002/
Yeah I found that too. That 37x-as-likely sounded like bullshit to me from the start, because it's too big and precise. This just isn't the kind of data from which you can extract such a precise number, or find such a large difference between groups. To get such a big number, and to trust it, you'd have to have a control group of couples and a dry sex group, and assign couples randomly to the groups and then followed them for a year doing HIV tests. Um, obviously such a study was not done, and it's not possible to get data you're confident of just by interviewing people about practices and number of partners, etc. In fact it's probably not possible even group the people studied into dry sex and regular sex groups . You'd have to find people who have *only* done dry sex or only done plain vanilla intercourse. In one of the studies I looked at the best they could do was look at women who reported they had had dry sex at least once in the period they asked about.
I really don't doubt that dry sex creates abrasions on the genitals of both partners and that this ups the chance of HIV transmission. Really irritates me when people spout bullshit that supports it though.
The odds of spreading HIV only through semen, saliva, or vaginal secretions are practically nil which is why HIV is not transmitted through oral sex.
Isn't it only non-monogomous sex that can really spread any of these diseases? (Okay, you could be born with it, and give it to your one sexual partner, but none of these diseases can long survive on such a patry pathway, all in practice require promiscuity). Pre-1960 there have been a lot of societies (damn near all of them in theory if not in practice?) that encourage only monogomous sexual connections.
You can also share needles. No sex necessary.
Sure, but aren't we in the context of responding to the comment "don't all sexual practices increase the spread? Doesn't this include all societies?"
Pretty hard to think of any that came close to it in practice, though, ain't it?
Absolutely, but also pretty easy to think of several that were much closer than the West today.
Do we have any real idea how many sexual partners the average 19th century Venetian shop-owner, US cowboy, or
1920’s flapper had?
We don't have time series data, but we have extensive literary evidence. That's usually the case with history. Demanding time series data smells of an isolated demand for rigour. After all, you seemed comfortable ruling out any society being entirely monogamous in your previous comment.
it wasn't "all societies", not even close. Read some anthropology.
It really depends on the kind of sex you're having much more than the number of partners- anal sex is *massively* more dangerous than vaginal. In Australia, for example, gay men are 100x more likely to have HIV than female sex workers (and most of the female sex workers most likely got it through drug use rather than sex. Most sex workers in Australia don't use IV drugs, but there's a somewhat significant number who do).
> Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?
You're a battlefield medic. As you triage the incoming casualties, you realise that some of them are members of the enemy forces.
Do you help them, or do you toss them out of the tent?
I would feel no obligation to treat the enemy soldiers. If I did treat them, I'd be going above and beyond my moral duty.
You might be interested in the knowledge that your personal ethics are at odds with the Geneva convention.
I know. It doesn't keep me up at night.
Would you not want enemy medics to treat captured friendlies?
Do you value not treating enemy soldiers more than you value society maintaining such a norm?
The point in war is to kill or injure enemy soldiers, so no, I don't see any intrinsic value in doing the opposite of that (treating the wounds of the enemy,) when you are at war. (Nor would I expect the enemy to treat our wounded, although that would be a nice bonus.) War is brutal in and of itself and so if you want to avoid brutality and cruelty, my suggestion is to avoid war.
Out shallow societal norms about war crimes are a joke because 1. As demonstrated countless times, we throw the norm out of the window when it is convenient to do so. And 2. It's perpetrating a gigantic fraud to imagine criminal war vs. just War when War itself is a crime
Is your argument that no child actually dies which could have been reasonably prevented without incurring greater moral wrong? Because that's patently false.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is useless? Because that could work, but would need a greater explanation.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is rhetorically not effective? Because that's false - I myself was persuaded by precisely that kind of argument and to this day find them more persuasive than other forms. We can talk about if other forms would be more effective, but that'd need more explanation.
Is your argument that worrying about malaria is inefficient compared to worrying about HIV? Because any even somewhat reasonable numbers say that's not true.
Is your argument that worrying about HIV is a waste of time because people with HIV are too ignorant and engage in risky behaviour? Because then the solution is education.
Is your argument that worrying about HIV is a waste of time because people with HIV are too stupid to avoid engaging in risky behaviour? Because then you'd need more evidence to support this claim.
Or do you just want to beat your own hobby horse about how African People Bad? I assume I don't need to bother saying why that position is oderous.
Hi Alan, I would like to see a single piece of writing from Scott on the importance of education against practices like dry sex or having sex with infants to "cure" HIV. I would also like to see where in this post, his analogies are anything like societal practices encouraging dry sex or having sex with infants to "cure" HIV. Please make it so that a five year old can understand, thank you!
I'm South African. South Africa has for decades had major public awareness campaigns about AIDS that (among other things) explain and warn against those specific practices. Everyone who went to a South African high school heard the spiel repeatedly, and wrote exams testing their understanding of it. It was on TV, the radio, newspapers, the internet.
Those awareness campaigns are funded by the international aid that Scott has repeatedly endorsed in writing. I would have thought it fairly obvious to everyone that said funding is allocated to local education programs, in addition to condom and ARV distribution, etc.
Here is an additional hypothetical. I won't call it an analogy, because it describes a strictly worse situation than reality.
A child is drowning in a river, because their parents pushed them in. Should you save the child? Or should you let the child die because clearly those parents are terrible people who don't deserve to have children?
South Africa can't even reliably supply its cities with electricity, my dude.
According to a source I find online, South Africa funds just over 70% of it's anti-AIDS budget directly, 18-24% comes from PEPFAR depending on the year, and the rest from something called the Global Fund.
Deserve or no, if the child is going back to those parents, it would seem to have good odds of being pushed in the river again.
The child will also inevitably die of old age if nothing else, so I guess we don't have any moral obligation to them whatsoever?
The standard unit used to measure such things is a QALY (quality-adjusted life year).
So your argument is that current educational programs don't exist (not true, as Synchrotron describes at least in the case of SA, and a cursory search finds similar programs in at least a dozen African countries) or that they're not effective? Because again, an even cursory glance at the literature suggests that while they're obviously far from perfect, rates of safer sex practices does improve with education, albeit very unevenly depending on country and specific program.
Actually, I'll make it easier for you. What, precisely, is your actual argument?
I think he just wants to bitch about Africans. He's not making an argument. His posts are kinda pathetic
I think you are on the right track here. The common issue with extrapolating all these drowning child experiments is that the child presumably has no agency in the matter. The intuitions change very quickly if they do.
"You save the child drowning in the pond and point out the "Danger: No Swimming" sign. The child thanks you, then immediately jumps back into the pond and swims out towards the middle, and proceeds towards drowning. Do you save them again, and how often do you let them repeat that?"
"You see some adult men swimming in a pond, and one of them starts to drown. You save him, and then the next day you see him swimming in the pond again, apparently not having learned his lesson. Do you hang around in case he starts to drown? If he does start to drown, do you save him again? How often do you repeat that?"
All that before you get to the questions of "Can you actually save the person?" "Will going out to help only drown both of you?" "How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse.
Another question is: How much do you know the situation? Is the child actually drowning? Is he swimming? Filming a movie?
Another is: How much responsibility have the Megacity for all the drowning?
I think what happens is Alexander took a case that is rare and unpredictable and said it happens all the time. This of course inverts our intuitions.
In this case, in real life, it would be like:
"We have no responsibility to save YOUR children, but we don't like to hear them crying so we added a net at the border so they can drown at your end".
Indeed. In fairness it is Singer’s base example, and people just use it because it seems to be difficult for most to grapple with. Singer is not someone I feel good about based on his writing that I have read, but maybe he is a decent person.
"How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse."
That's in the blurb for Scott's dad's book:
https://www.amazon.com/Edge-Everyday-Adventures-Disaster-Medicine/dp/B0F1CL61T9
"The “Law of Unintended Consequences” reared its ugly head despite the best of intentions. For example, when the US flew in free food for the starving people of Port Au Prince, it put the farmers out of business. They just couldn’t compete against free food. Many were forced to abandon their farms and move to tent cities so that they could get fed and obtain the services they needed."
I was reading the obituary of a neighbor’s father, a doctor, and I learned that he had always had a special passion for Haiti, from way back in the 80s, and “had made over 150 trips there”.
How admirable, I thought. And there was really nothing else to think of in connection with that, beyond its evidence of his compassion.
Among the many things I don’t understand is why people look so hard for (and frequently find) unintended consequences when talking about ostensibly altruistic acts, but rarely when talking about “selfish” ones. The example taken from the blurb of Scott’s father’s book is a single paragraph among others, most of which extol the virtue of voluntarism (although I haven’t read the book, so it may include a lot of similar examples of do-gooding gone wrong.) But even in the case of the farmers who lost their market, we don’t know for sure that that itself wasn’t a blessing in disguise – maybe some of them went on to find different, better paying and less arduous work. Maybe some of the people who were prevented from starving went on to do good works far in excess of saving a drowning child.
But as soon as it comes to “selfish” acts – starting a business with the aim of becoming rich, a business that fills a societal need or want – we don’t try to look for unintended consequences (we call them externalities); instead we point to the good they are doing. Even if we admit the negative externalities (the classic case is pollution, but another more modern one is social media platforms’ responsibility for increased political polarization), we still say “but look at all the good they’re doing,” or at least the potential good, if the benefits are still in the future.
One reason for saving a drowning child might be so that you don’t hate yourself for not doing it, which is only tangentially related to desiring others to see you as virtuous. Should that count as an argument against altruism? Why does the argument against the possibility of true altruism not also get applied to selfishness? Even the most selfish, sociopathic and least self-aware person will bring on themself *some* negative consequences of their actions – the loss of opportunities for even more selfishness; the loss of the possibility of truly mutually beneficial relationships; a victim who seeks revenge. Even if they die before realizing these negative consequences, their legacy and the reputation of their descendants will be tarnished.
Unintended consequences are not synonymous with externalities. The reason people focus on them with regards to altruistic motives is that the general default mode towards apparently altruistic acts is “do it” when in fact it might make things worse, whereas there is a default of assuming selfish acts are harmful to others, often in excess of what is really there.
Yes, I agree, unintended consequences are not synonymous with externalities -- externalities can be unintended, which is the rationale for environmental review of projects, but some of them are planned for, and some of them are intended to be mitigated whereas others are ignored or covered up. I don't agree that the default mode toward selfish acts is "don't do it," however. Selfishness is in many cases held up as a virtue (e.g. the selfish gene; the profit motive; the adversarial process in legal proceedings; the notion of survival of the fittest and competition for an ecological niche).
Read that again, please.
The point is the Drowning Child argument tries to hit us over the head with "do this self-evidently good thing or else what kind of monster are you?" without consideration of unintended effects. Donating to anti-malaria charities is a good thing.
So is feeding the hungry. And yet the intervention in Haiti ended up causing *more* hunger and undermining local food production. So was the self-evidently good thing an unalloyed good, or should we maybe look before we leap into the pond?
I think this is obviously the wrong analysis of pepfar but even if it were right this wouldn’t be a good argument against the against malaria foundation
Can you please describe to me why this is obviously wrong for PEPFAR? Please explain it to me like I'm five, thank you.
" 'An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease.'
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?"
Even if we granted the premise that it's not our moral responsibility to save people who recklessly endangered themself or others, many of the people who are getting HIV were not reckless. Some of them are literal babies or women and children who were raped. Many others didn't have the education to know how HIV is spread and how to avoid being infected; if someone mistakenly believes that sex with a virgin renders them immune to HIV, can you blame them for getting HIV when they thought they couldn't?
But I would definitely contest that premise. If someone is drowning in front of you, you're obligated to save them. It doesn't matter if they got there by recklessly playing near the lake or through no fault of their own. If someone will die unless you intervene, you have to help regardless of how they got into that position.
A drowning person taken out of the water is safe. An AIDS patient needs lifelong treatment.
Are you also opposed to paying for a family member's medical care if they were injured playing sports?
> ...behave as if the coalition is still intact...
I think you may have snuck Kant in through the back door. Isn't this kind of what his ethics is? Behave according to those principles that you could reasonably wish were inflexible laws of nature (or, in this case, were agreed to by the angelic coalition).
No, Kant relies on the idea of immoral actions being illogical, because they contradict the rules that also provide the environment where the action even makes sense to do.
Lies only make sense if people trust you to tell the truth.
Theft only makes sense if you think you get to keep what you take,
Etc.
>My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
No, it's trivially obviously false that we would agree to that (or anything else) in this scenario. If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists, so any agreement premised on the assumption that we want to minimize suffering for either ourselves or others is dead on arrival. All other conceivable agreements are also trivially DOA in this scenario, since we also don't have any information about whether we're going to want or care about any possible outcomes that might result. Consistently applied Rawlsianism is just roundabout moral nihilism.
In order for it to be possible that the intelligences behind the veil of ignorance might have any reason to agree to anything, you have to add as a kludge that they know masochism, suicidality, and other such preferences will be highly unusual among humans in the society they're born into, and that it's therefore highly unlikely they'll end up with such traits. But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless.
Masochists tend to be very picky about the kind of pain they want. I have no idea whether this is as true about what kind of pain sadists want to impose.
I think that's a misstating of what the veil makes you ignorant of.. The point isn't that you don't know anything about the society into which you will be incarnated; the point is that you don't know what role in that society you will have.
Firstly, as a masochist myself, you are heavily misrepresenting masochism. Secondly, as someone who's met a weirdly large number of people who have committed rape, I'm pretty sure the net utility *for rapists* is at least slightly negative - some of them get something out of it, but some of them are deeply traumatized by it and very seriously regret it (and that's ignoring the ones who actually get reported and charged and go to prison, because I haven't met any of those).
I've wondered whether there were people who committed rape once and found they didn't like it and never did it again, or maybe once more to be certain and then never again.
It makes no difference to the victims, but it might make a difference to rape prevention strategy.
I *think* the stats I read showed that most rapists only do it once? But I don't remember super clearly, and I don't have a link to that source.
From what I can tell, it is both that the majority of rapists only have one victim, and the majority of rapes are committed by serial offenders.
Yeah, that sounds about right. I definitely meant that the majority of people who commit rape only do so once, not that the majority of rapes are committed by one-time offenders. Probably should have clarified, though, so thanks for that.
The veil of ignorance is about circumstances not values. So you know what you value you just don't know what circumstances you'll end up in.
It's about both. Rawls is very clear on this.
You can try to invent an alternate version of the VOI where you arbitrarily know what your values will be without knowing anything else, but I'm not sure how such a blatantly arbitrary thought experiment is supposed to be a compelling argument for anything.
The point isn't that you know what values will be, but that you know the distribution of values/preferences and circumstances, from which yours will be randomly chosen.
I already explained in my original post why this doesn't work. If you grant the souls this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
It does work. If you don’t know whether you will be born a sexual predator or a victim, you should assume you’ll be a victim and therefore advocate for a society that prevents sexual assault.
Why?
Remember the point of this experiment is to determine the rules.
Including of course the false positives.
The whole point of the veil is to be arbitrary. You only know *this* which is what the constructor of the thought experiment has predetermined is the important thing.
I mostly agree with this, but still some versions of the veil make the arbitrariness more obvious than others.
Ah, but what is the problem? Obviousness or arbitrariness?
That depends on what the goal is.
> we're equally likely as not to end up being sadomasochists
I think a lot of ethical thought experiments are pointless too, but the point that you could be a masochist is complete nonsense. Sadomasochists are a small minority of people, full-time ones even more so. Rawls’ angels could assume their human avatars wouldn’t like pain. The point is to apply that frame to actual human ethical questions, and humans can assume that the drowning child doesn’t enjoy drowning and that children in Africa don’t enjoy starving or dying of malaria. Otherwise is just silly sophistry.
I already explained in my original post why this doesn't work. If you grant the "angels" this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
I don't understand. How does probabilistic information about the personality makeup of the human species mean you can't be incarnated at random? Are they supposed to be making decisions with no knowledge of the world whatsoever?
>Are they supposed to be making decisions with no knowledge of the world whatsoever?
Not exactly. Souls behind the VOI are allowed to know general rules that apply to all human interactions; there's no reason why they can't know that humans inhale oxygen and exhale carbon dioxide, or other such things. They just aren't allowed any information that might incentivize them to favour the interests of any one person or group of people over those of any other person or group of people. So they can't know that "sadomasochists are a small minority of people", because then it would be rational for them to treat the interests of non-sadomasochists as a group as more important than those of sadomasochists as a group.
Okay, sorry I didn't see your reply here:
https://www.astralcodexten.com/p/more-drowning-children/comment/102705348
https://archive.org/details/a-theory-of-justice-john-rawls-1971/page/133/mode/2up
So... yeah, it looks like your quote is accurate, Rawls intended for the VoI to preclude any information about group size and relative probability of who you'd incarnate as.
At a glance, Rawls does seem to be making a lot of stipulations or assumptions about the value system of the angels, though (maximin principle, conservative harm avoidance, some stipulation of 'monetary gain' as if he were doing economics), so... it looks like "maybe you all incarnate as hellraiser cenobites" would contradict his thought experiment. But maybe I'd have to read it again.
There's perhaps a more fundamental objection to "you can't know how common different groups are", which is that subgroups are in principle infinitely sub-divisable. Are the "ginger swedish lesbians born with twelve fingers" group supposed to be exactly as common as "people over five feet tall"?
I have never heard it claimed that Rawls prohibits probabilistic knowledge. Indexical ignorance is precisely the ignorance Rawls seems to be requiring.
Then you have not actually read Rawls, because not only does he state this prohibition explicitly, but he also explicitly acknowledges that removing this prohibition would make his argument completely nonsensical.
Could you quote where he says that?
From "A Theory of Justice", pages 134-135 in the latest edition:
>Now there appear to be three chief features of situations that give plausibility to this unusual rule.20 First, since the rule takes no account of the likelihoods of the possible circumstances, there must be some reason for sharply discounting estimates of these probabilities. Offhand, the most natural rule of choice would seem to be to compute the expectation of monetary gain for each decision and then to adopt the course of action with the highest prospect. (This expectation is defined as follows: let us suppose that gij represent the numbers in the gain-and-loss table, where i is the row index and j is the column index; and let pj, j 1, 2, 3, be the likelihoods of the circumstances, with pj 1. Then the expectation for the ith decision is equal to pjgij.) Thus it must be, for example, that the situation is one in which a knowledge of likelihoods is impossible, or at best extremely insecure.
>[...]
>Let us review briefly the nature of the original position with these three special features in mind. To begin with, the veil of ignorance excludes all knowledge of likelihoods. The parties have no basis for determining the probable nature of their society, or their place in it. Thus they have no basis for probability calculations. [...] Not only are they unable to conjecture the likelihoods of the various possible circumstances, they cannot say much about what the possible circumstances are, much less
enumerate them and foresee the outcome of each alternative available. Those deciding are much more in the dark than illustrations by numerical tables suggest.
The Rawls veil of ignorance works even if the "angelic intelligences" know every single fact about what will result from the society they choose except which human they will end up being. In that case it's basically rule total utilitarianism. It also works, somewhat, if there's only one intelligence doing the choosing, although there it ends up looking like rule average utilitarianism.
I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life, which can leak information about which human you are, causing a failure to come to agreement, but part of the experiment is that behind the veil you have a completely standardized mind.
>I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life,
...What? The fact that you're *not* doing this is my whole point!
Then I fail to understand what you mean by "But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless." The only thing they're supposed to not know is which particular human they end up as. Bayesian estimates of what a generic human is likely to experience are on the table! (The original Rawls book does handle this badly, but it's because Rawls has a particular (and common) blind spot about probability rather than it being an inherent defect of the thought experiment.)
What I mean is that the whole goal of the VOI is to justify some kind of egalitarian intuition. But this only sort-of appears to work in Rawls' original version because the souls lack *any* ability to guess, even probabilistically, what sort of people they're going to be (a point which Rawls states explicitly). If they're allowed to make informed guesses as to what sorts of people they'll most likely be, then there's no reason for them not to make rules where an East Asian's interests count for 36x more than a Pacific Islander's, or where a Christian's interests count for 31000x more than a Zoroastrian's, or where an autistic person's interests count for only 1% those of an allistic, or to any number of the other sorts of discriminatory rules which the whole point of proposing the VOI is to avoid.
If you're trying to maximise your expected utility, you don't want a scenario "where an autistic person's interests count for only 1% those of an allistic".
This is because in a world with 99 allistics/1 autistic, and a dispute between an autistic in which the autistic loses 50x as much as the allistic gains, you have:
a 1% chance of being the autistic and losing 50
a 1% chance of being *the specific* allistic in the dispute and gaining 1
a 98% chance of being someone else
...which is an EV of -49/100.
You'd be in support of a measure that hurt the autistic by 50 in order to make the lives of *all* the allistics better by 1, but that's not valuing an autistic's interests at 1% of an allistic's, it's just not valuing them at twice as important..
Why would privileges only accrue to the specific allistic in the dispute in this scenario? That's never been how discrimination has worked. If you were born white in Apartheid South Africa, you wouldn't need to get into a specific, identifiable dispute with a black person to be favoured over them for the highest-paying jobs, for your vote to count more than theirs in an election, etc. you'd just get all that automatically.
"So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender."
Unless, of course, the rapist got much more pleasure than the victim felt suffering, so the total amount of happiness in the world increased:
https://pbs.twimg.com/media/DOinMW5UQAA_omS.jpg:large
I broadly agree that we should "do unto others as we would have them do unto us" but yeah, depends on the tastes of both ourselves and the other person.
"There is at least one sado-masochist on earth, therefore when I'm born my chances of being a sado-masochist are around 50%" is certainly a take.
"If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists"
Huh, I think way fewer than 50% of humans are sadomasochists actually
This is among the more hilarious misunderstandings of VOI I've seen. Scott is correct.
Your assumption that you can go from knowledge of population level traits/desires to (probabilistic) knowledge of circumstance also doesn't follow.
I would draw a distinction between "observing a problem" and "touching a problem" in jai's original post. Trace is commenting on the "touching" side of things, specifically the pattern where a charity solicits money to solve a problem, spend that money to make poor progress on the problem, and defends this as "everyone's mad at us trying to help even though not trying would be worse". It is possible to fruitfully spend money in distant weird to you circumstances you don't properly understand, but if you think you're helping somewhere you're familiar with, you're more likely to be right.
I think the distance objection does not refer to literal distance, but our lack of knowledge and increase in risk of harm the further we are from the people we're trying to help.
For example, consider the classic insecticide-treated mosquito nets to prevent malaria. Straightforward lifesaving intervention that GiveWell loves, right? It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide. We didn't save as many drowning children as we hoped, and may have even pushed more of them underwater, because we were epistemically too far away to appreciate the entire socioeconomic context of the problem.
The further you are in physical and social space and time from the people you're trying to help, the greater the risk that your intervention might not only fail to help, but might actually harm. This is the main reason for discount rates. It's not that people in the far future are worth less morally, but that our interventions become more uncertain and risky. We're discounting our actions, not the goals of our actions. Yes, this is learned epistemic helplessness, but it is justified epistemic helplessness.
> It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide.
I think this Vox article does a good job deflating this claim: https://www.vox.com/future-perfect/2024/1/25/24047975/malaria-mosquito-bednets-prevention-fishing-marc-andreessen
The best study we have on bed net toxicity—as opposed to one 2015 NYT article that made a guess based on one observation in one community—is from a 2021 paper that’s linked in the Vox article. It does a thorough job summarizing all known evidence regarding the issue, and concludes with a lot of uncertainty. However:
> I asked the study’s lead author, David Larsen, chair of the department of public health at Syracuse’s Falk College of Sport & Human Dynamics and an expert on malaria and mosquito-borne illnesses, for his reaction to Andreessen citing his work. He found the idea that one should stop using bednets because of the issues the paper raises ridiculous:
> “Andreessen is missing a lot of the nuance. In another study we discussed with traditional leaders the damage they thought ITNs [insecticide-treated nets] were doing to the fisheries. Although the traditional leaders attributed fishery decline to ITN fishing, they were adamant that the ITNs must continue. Malaria is a scourge, and controlling malaria should be the priority. In 2015 ITNs were estimated to have saved more than 10 million lives — likely 20-25 million at this point.
>“… ITNs are perhaps the most impactful medical intervention of this century. Is there another intervention that has saved so many lives? Maybe the COVID-19 vaccine. ITNs are hugely effective at reducing malaria transmission, and malaria is one of the most impactful pathogens on humanity. My thought is that local communities should decide for themselves through their processes. They should know the potential risk that ITN fishing poses, but they also experience the real risk of malaria transmission.”
There’s no good evidence that bed net toxicity kills a lot of people, and there’s extremely good evidence that they’re one of the best interventions out there for reducing child mortality. See also the article’s comments on nets getting used for fishing; the studies on net effectiveness account for this. Even if the nets do cause some level of harm, the downsides are enormously outweighed by the upsides, which are massive:
> A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets.
This doesn’t mean that we should stop studying possible downsides of bed nets or avoid finding ways to improve them, but it does mean that 1) they do prevent malaria, extremely well, and 2) they save pretty much as many children as we thought.
To add, the Against Malaria Foundation specifically knows about this failure mode and sends someone to randomly check up on households to see if they're using the nets correctly. The rate of observed compliance failure isn't close to zero, but it isn't close to a high number either. See: https://www.givewell.org/charities/amf#Monitoring_and_evaluation_2
Maybe I'm too cynical but I haven't seen anyone change their mind when you add context that defies their expectation, I feel like they either sputter about how that's not their real objection (which if you think about it is pretty damn rude to say "this is why I believe in X" and then immediately go "I don't believe in X why would you think I believe in X") or they just stop engaging.
That's good news! Thanks.
But I think we agree that the general principle still stands that moral interventions further in time and space from ourselves generally have more risk. We can reduce the risk with careful study, but helping people far away is rarely as straightforward as "saving a child from drowning" where the benefit is clear and immediate. I find the "drowning child" thought experiment to be unhelpful as a metaphor for that reason.
We're not saving drowning children. We're writing policies to gather resources to hire technicians to build machines to pluck children from rivers at some point in the future. In expectation we aim to save children from drowning, but unlike the thought experiment there are many layers and linkages where things can go wrong, and that should be acknowledged and respected.
Sure—but then shouldn’t we respond by being very careful about international health interventions and trying as hard as we can to make sure that they’re evidence-based, as opposed to throwing up our hands and giving up on ever helping people in other countries? The former is basically the entire goal of the organizations that Scott is asking people to listen to (GiveWell, etc). Hell, GiveWell’s AMF review is something like 30 pages long with well over 100 citations.
There has to be some point where it’s acceptable to say “Alright, we’ve done a pretty good job trying to assess whether this intervention works and it still looks good, let’s do it.” Going back again to the organizations that Scott wants people to donate to, I think that bar has been met.
I believe that where the bar lies should be for each person to decide for themself. Also, it's not enough for an intervention to have positive effect, but it must have a more positive effect than doing what we would otherwise do anyway. That's a much harder bar to clear.
I personally do think many international interventions have positive effects in expectation. But I am skeptical that they have more positive effect than the "null hypothesis" of simply acting as the market incentivises. I'm honestly really not sure if sending bed nets to Uganda helps save more lives in the long run than just buying Ugandan exports when they make sense to buy and thereby encouraging Ugandan economic development, or just keeping my money in the bank and thereby lowering international interest rates and helping Uganda and all other countries.
The market is a superintelligent artificial intelligence that is meant to optimize exactly this. To be fair, part of the process of optimization is precisely people sometimes deciding that donating is best. Market efficiency is achieved by individuals taking advantage of inefficiencies. But I don't think I have any comparative advantage.
The market optimizes something very different from "human flourishing". Economic resources and productivity are conducive enough to human flourishing that we've been able to gain a lot by taking advantage of the market being smarter than individuals, but now it's taking us down the path of racing toward AI, so in the end we're very likely to lose more than we ever gained by listening to the market. And in the meantime, Moloch is very much an aspect of "who" the market is.
The market is not selecting for AI.
Moloch is an aspect of everything. It would be cherry-picking to say that it uniquely destroys the efficient market hypothesis vs. all other solutions. Efficiently functioning markets very much is demonstrated in the real world as leading to vastly better outcomes than any other known system of resource allocation.
This argument proves too much, though. If the maximally efficient way to save lives is sitting back and letting markets do their thing, wouldn’t that also mean that we should get rid of food stamps, welfare, and every other social program in the US? After all, these programs weren’t created by market forces—they were created by voters who wanted to help the unfortunate (or help themselves) and who probably weren’t thinking all that hard about the economic consequences of these policies. The true market-based approach would to destroy the social safety net, lower taxes by a proportional amount, ban all private charities that give things away at below-market prices, and let the chips fall where they may.
Markets are good at doing what they do, but there’s no law of economics that says markets must maximize human welfare. They maximize economic efficiency, which is somewhat correlated with human welfare but a very imperfect proxy for it. I don’t think that I can beat the market at what it does best (which is why I’m mostly invested in the S&P), but when it comes to something the market isn’t designed for and doesn’t really care about, I trust it far less.
Moreover: Is that your true objection? If someone came out with a miracle study proving that donations to the AMF save more lives than investments in the S&P (I know this is sort of impossible to quantify, it let’s say they did), would you then agree that donating to the AMF is a good idea if you want to improve human welfare?
The market does an impressive job at optimizing for the welfare of people who have money. LVT + UBI would neatly sort out most of the associated misalignment problems.
Stories about clothing donation - unsorted heaps of your old sportsball and fun-run and corporate team building tee shirts having more value than the most beautiful locally-produced textiles - are depressing in this regard, and bring to mind the African economist who - 20 years ago or so - received a tiny bit of attention for asking Western do-gooders to basically leave Africa alone.
Do you also apply this heuristic to acts that we might call selfish? Starting a clothing business to make lots of money by jumping on microtrends in fashion carries the risk of encouraging young people to overextend their credit. Discarded, no-longer-in-fashion garments may end up clogging landfills. And yet it’s the ostensibly altruistic projects that we attack for ending up “doing more harm than good." The others we praise for their entrepreneurial spirit.
> insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets
I'm curious as to how the math here works out. If they're reducing child mortality by 17%, how does that not imply 170 lives saved per 1000 children? Everyone goes through an infant stage during their lives, right?
17 percent of the total risk of child mortality. If the total risk of child mortality without bednets was 100% then Africa wouldn't have made it long enough for this even to become a charity.
Oh, I see. I thought it was 17% in absolute terms, not as a reduction of prior risk.
That's exactly the answer. When you're helping the child in the lake across the street there are a lot of implied social contracts at play, between you and your neighbors, you and your city, you and your country. That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
There's no such contract with the far away child. You don't know if the child's drowning because their society keeps throwing children in lakes. You don't know that the money you send won't be used to throw even more children in lakes. You don't even know if that child will be saved just to grow up and come make war with your own society.
There’s something to this, but I’m not sure if it’s enough. Suppose you’re American and taking a vacation (or on a business trip, or working there temporarily) in rural China and you see a drowning child.
Would you decide not to save them because it’s not your country? What if Omega tells you that it’s a genuine accident and the locals are not routinely leaving children to drown?
Something like that really happened. A british diplomat saving a chinese woman among a bunch of passive chinese bystanders, IIRC. https://www.bbc.com/news/world-asia-china-54961075
If you read the article many of the Chinese bystanders were not passive, obviously the drowning child scenario assumes a shallow lake, very few people would dive into a fast moving river without some sense of training (the diplomat competed in triathlons).
Unironically yes. When you travel to a foreign country like this, you are an outsider and you aren’t really supposed to interact with the locals very much. I wouldn’t talk to them, I wouldn’t make friends with them, so I sure as hell am not going to get involved in their private affairs like this. It’s none of my business and as an outsider I wouldn’t be welcome to participate. I’m pretty sure that if I saved the child, I would be called a creep for laying hands on him. Without knowledge of the actual language, I wouldn’t have the tools to explain myself otherwise.
Honestly, I think your thought experiment kind of illuminates why we save local children and not distant ones. The local children are theoretically members of our community, and though community bonds are weaker than ever, they aren’t non-existent, they still matter. Ergo, we save the child to reinforce this community norm, and we hope someone else saves our own children from drowning some day.
That doesn’t transfer if we do it in a foreign country.
"I'm pretty sure that if I saved the child, I would be called a creep for laying hands on him." I'm not a mind reader, but this sure reads like a bad faith argument to me.
Have you read the fable of the snow child? It’s a story about a fox who saves a girl who was lost in the woods in the winter. Upon bringing the girl home the parents shoot the fox because he’s a fox and they were afraid that he was going to steal their hens. The girl of course admonished the parents for this, but it didn’t change the fact that the fox was dead.
Do not underestimate the power of xenophobia.
Communicating with foreigners can be a very high stakes situation. When people are naturally suspicious of you, it’s critical that you stick to pre-approved socially accepted scripts and to not deviate from them, otherwise the outcomes can be very unpredictable. Drowning children is a rare enough event that we don’t have universally agreed upon procedures for how to handle them.
This is middling sophistry.
There have, I don't think commonly but still non-zero, honor killings because a (male) foreigner interacted with a local (female) child. The interaction that the tourist probably thought was merely being polite was enough for her to be marked unclean.
If you happen to stumble into a living thought experiment where a child is drowning in a shallow pond, it's worth the risk of a cultural misunderstanding and the child's later death instead of the certainty of their current drowning. But such cultures do exist.
As someone who would like to be saved from any hypothetical future drownings, even if they were to happen in foreign countries, or in my own country in instances where the only potential saviours are foreigners, I very much dispute your last sentence as logically following from the previous.
Indeed, I would like the community of people who would feel obligated to save my children from drowning to be as large as possible, all else equal.
To deal with the objections below, switch it up so it's in your home country but a visiting tourist: a cruise ship docks at your local beach; you know at this time of year the majority of swimmers are tourists not locals. You see a kid drowning... Do you ignore them because they're from a different country?
>That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
Would you accept that our moral circle should expand as the economy becomes more globalised then? It's standard in the modern economy for kids on the other side of the world to grow up to make your clothes, grow your coffee etc.
Yes, but the economic bound is not enough. You need a cultural bound. None of these variables are binary in practice, so the amount of economic ties, cultural ties and amount of help we send tend to be proportional to each other.
"You don't know if the child's drowning because their society keeps throwing children in lakes." That doesn't seem like a good reason to not save the child.
"You don't know that the money you send won't be used to throw even more children in lakes." This would be an argument against dropping money randomly, but we have fairly robust ways of evaluating charitable giving.
"You don't even know if that child will be saved just to grow up and come make war with your own society." Saving them with a life preserver that says 'Your drowning prevented by [my society]" seems like an excellent way to prevent that, with the added benefit that they'll tell all their friends to not make war on your society, too.
This is an over-simplification. Children are routinely indoctrinated by their societies throughout early adulthood to become warriors or to create more warriors. There is absolutely a real risk that a random person saved will be your enemy in the future. Saving them from a vast distance can indeed by seen by that society as a great way of helping their future takeover while impoverishing their enemy. Moloch is everywhere.
Allowing for the sake of argument that that's a significant problem, seems to me the obvious patch would be forking some percentage of nets off the main production line before the insecticide-treatment step, adding a cheap but visible feature to them (maybe weights? A cord for retrieval?) which would make the insecticide-free nets more useful for fishing and/or less useful for beds, then distributing those through the same channel until fish-net demand is saturated.
I think uncertainty in outcome and timing explains a lot, at least for my own behavior.
If I am certain of a benefit to others while uncertain about how grumpy I will be after the good deed, the finger is on the balance to help.
The inverse is also true. Giving for the certainty of a relief is very different from giving with the non-zero chance that funds get diverted to wars, corruption or crime organisations.
Thank you. This is very much my intuition as well, and I'm glad somebody else laid it out clearly. The biggest flaw in all these thought experiments, IMO, is that you're assumed to have 100% accurate knowledge of the situation. Accurately knowing the details of the river and the megacity and the drowning children is FAR more important to moral culpability than whether you happen to have a cabin nearby, or whether you happen to live there.
Sounds like we need some kind of social arrangement where we are gently compelled to work together to solve social problems cooperatively with roughly equal burdens and benefits, determined by need and ability to contribute. What would we call this...rule of Social cooperation? Perhaps social...ism?
Nah, sounds scary. Lets just keep letting the rules be defined by the Sociopathic Jerks Convention, with voting shares determined by capitol contributions.
Right, the trick is that the altruistic people need to make rules that exclude the sociopaths jerks from accumulating power and which make cooperation the better choice from their own selfish perspective.
Perhaps a good start, just a bare minimum, would be to strictly limit the amount of capitol that any one person can control. (could be through wealth taxes, could be through expropriation, could be through enforcement of antitrust....whatever, trying to keep this on a higher level.) The extreme inequality here leads to further multiplication of the power of the wealthy, and because the Sociopathic Jerk Convention (E.g. the behavior of corporations, which are amoral) is running the show, their rules allow them to further multiply their power.
The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin.
"The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin."
Where do you get this from?
Better yet, why not leverage the ambitions of entrepreneurs to invest their time, money and creativity to solve problems for consumers? Big investments require big bets and huge risks which need to be offset with immense potential rewards.
I think a lower bound is more important - and more feasible to enforce - than an upper bound.
When you go to the most powerful capitalist in the world and tell him "your net worth is above this arbitrary line, so a new law almost everyone else agreed on says you have to give some of it up," is he going to actually cooperate with that policy in good faith? Or is he going to hire the best lawyers and accountants in the world to find some loophole, possibly involving his idiot nephew becoming (on paper) the new second or third wealthiest capitalist in the world?
One trouble with Rawlsian veils (better yet Harsanyian veils) is that networks of billions of interacting people are complex adaptive systems with emergent characteristics and outcomes. If we want to establish morality by what actions would be lead to the best outcomes, then we need to actually play through the system and see how it develops.
May I suggest that a world where everyone gave everything beyond basic sustenance to anyone worse off than them would scale into a world where nobody invested, saved for the future and everyone felt like a slave to humanity because they would be. It would be a world of complete and total destitution devoid of any ability to help people across the world.
I think it is more realistic to take real humans with their real natures and find rules and ethics and institutions which build upon this human nature in a productive way. I would offer that this is more of a world where altruism, utilitarianism and egoism overlap. Science does this by rewarding scientists with reputation for creating knowledge beneficial to humanity. Free markets do this by rewarding producers for solving problems for consumers. Democracy does this (in theory at least) by aligning the welfare of the politician with the citizenry voting for them. Charities do this by recognizing the benefactors with praise and bronze inscriptions.
There are good reasons why pretty much nobody gives everything to charity. Effective Altruists need to take it up a level.
>Nah, sounds scary.
Given the history of actually existing socialism, very scary indeed.
Socialism has a history far broader than whatever particular example you are thinking of.
Economic system and body count don't seem correlated meaningfully...mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
Authoritarianism has the body count, regardless of if it's socialist, capitalist, theocratic, mercantilism or whatever you prefer.
>Socialism has a history far broader than whatever particular example you are thinking of.
And /none/ of it has been notably successful, in any significant respect (& particularly compared to its competitor(s)—just the opposite, in fact—so... well, I remain skeptical.
Of course, it depends on what you call "socialism". Is "capitalism but with government services" deserving of the name? If so, it DOES work! (...but I, of course, would credit that to the other component at work.)
>mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
I think there are many things wrong with this attempt at "death-toll parity", but no one ever changes their minds on this topic & it's always a huge exhausting slog... so I'm just registering my objection; those who agree can nod wisely, those who don't can frown severely, and we all end up exactly where we would've anyway except without spending hours flinging studies and arguments and so forth at each other!
Well, I don't want to turn this thread into a debate on socialism, and you're very right that how we define our terms is contested and critical.
I would suggest that there are many examples, such as with Allende, where it seems like it was going to work really well and the CIA simply could not have that.
I'd also note that life for the average Cuban is far, far better under Communism that it was under Batista, for example, and possibly the results in the other countries you are thinking about are better than you think when looked at from the perspective from the average poor person rather than the very-small middle and upper classes, who typically control the narrative.
Regardless, I was just saying that authoritarianism is orthogonal to economic model. And it is authoritarianism, regardless of the economic model, which is "scary." The Nazis were not less horrific simply because they had a right-wing social and economic program.
I'm not sure how your comment addressed that.
Would Venezuela be an example of a country where non-authoritarian socialism has gone badly? (May be you can wriggle out of this by saying it's a bit authoritarian?)
I suppose Norway would be an example of a country where socialism has gone pretty well (though with a fairly large dose of capitalism, and the advantage of massive oil reserves - not that those saved Venezuela).
Norway is not Socialism, per ChatGPT at least. It is a Social Democracy, not a Socialist government, though one may quibble about the distinction. Norway has:
Private Property and Business:
Individuals can own businesses and land
Market economy drives most goods and services.
Stock Market and Investment:
Norway has a well-functioning stock exchange
Encourages entrepreneurship and foreign investment
Profit Incentives:
While taxes are high, businesses still operate for profit
Wealth creation is encouraged, though it's heavily taxed and redistributed
I would personally argue that it would function even better with higher profit motive and less government intervention, but it is a misnomer to claim it is Socialism.
Venezuela went very authoritarian, but also, I wouldn't make the claim that every socialist experiment, even less authoritarian ones, are good. Norway is a possible example of a good one as you mention. Cuba is an obvious example. One could argue China is doing really well, and you can say it's capitalist but also they haul off billionaires to labor camps if they get too out of line so I would push back on that.
But Venezuela failed. This stuff is complicated. Anyone who says HURR DURR SOCIALISIM BAD is ignorant, many of them proudly so.
Chile was not working "very well" under Allende. The CIA was not able to cause a general strike and the legislature requesting his removal.
As for Batista vs Castro, https://www.bradford-delong.com/2008/02/lets-get-even-m.html
Let's look at how people vote with their feet. Do we see people moving on-net to socialist countries from capitalist ones?
Cuba has a *much* lower emigration rate than capitalist countries in the region, in fact.
Cuba restricts emigration, but do people immigrate to there? My understanding is that even Haitians don't want to.
Haitians tried migrating to Cuba at one point (I forget the exact year), but they weren't allowed in.
At my workplace (before I quit, angrily—and, as it turns out, unwisely), we had several Cubans who had come over here to 'Merica on rafts and the like. They were bigger fans of America than most Americans—the yard manager bought a Corvette and had it emblazoned with a giant American flag, always wore "America: Land of the FREE!" or "...of Opportunity!" etc. T-shirts, and so on (& I once witnessed his eyes get wet at the anthem before a game(!)—and... uh... well, they would talk about Cuban food, women, weather, vistas, but to a man they said they'd die trying to sneak back in the U.S. rather than accept being forced to go back & remain.
Anecdote, of course. But I get the impression that this is the modal Cuban, over here; granted, they're self-selected—but one doesn't see very many Proud Cuban Forever, "I'd die before leaving my adopted Cuba!", etc., expats, going the other direction.
why, and how?
i'd much prefer actually existing socialism to actually existing capitalism.
Speaking personally, I can feel that the appeal of both socialism and effective altruism are linked to the same set of intuitions about solving social problems.
To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them.
Socialists tend to presume they know the answer even in cases where their preferred answer does not seem like it actually achieves the goals they're supposed to be working towards.
Or, maybe a different way of saying it: I think ~120 years ago, socialism would have felt a lot like EA today: the ideology of smart, scientific, conscientious but not sentimentalist, universalists. But the actual history of socialism means that a lot of the intellectual energy of socialism has gone into retroactively justifying the USSR and Mao and whatever, so that original core has become very diluted.
TBC, I don't mean this as a complete dismissal of socialism, I think there are lots of people who consider themselves socialists who I think basically have the right moral intuitions and attitudes, and I absolutely feel the pull of socialist ideas... But I often find myself frustrated how quickly so many socialists just refuse to engage with the fact that capitalism has been absolutely necessary to generate the resources necessary for a universalist moral program, or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned.
I'd say "hollowed out" rather than "diluted." Anybody who got sufficiently sick of trying to justify the USSR, and still cared about the original virtuous goal, started calling their personal agenda something else and focusing it in different directions.
I'm not sure that's 100% true, especially if you consider young people whose identities on these things aren't totally formed yet.
"To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them."
Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms.
I'm opposed to capitalism partially for evidence-based reasons and partly because of basic values (I think it's morally wrong to derive most of your income from non-labor sources) and you couldn't convince me out of having my values even if you changed my opinion about some facts.
"or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned."
what facts, or "facts", are you thinking of, and why would you expect they would change my mind?
I'm aware socialist countries tend to be authoritarian (not necessarily "totalitarian", whatever you think that means), but I'm not really bothered by that in principle, since I don't view political freedom as self evidently good.
"Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms."
That's totally fair, but in the context of the original comment, which implied that "socialism" was just a method to implement the strategy of gently compelling people to work together to solve social problems, the fact that socialism has other moral axioms that may be unrelated to the project of solving those problems--or at least, that the problems socialism see itself as solving might be different than the problems suggested by Scott's post.
"what facts, or "facts", are you thinking of, and why would you expect they would change my mind?"
The usual ones about gulags and the Cultural Revolution and so forth; I'm sure you already know them. And I didn't say that they should make you change your mind, I said that socialists abandon their conscientiousness in the face of those facts: they tend to defend actions and outcomes that are canonically the sort of thing our hypothetical strategy of "gently compelling people to cooperatively solve problems" is meant to be *solving*.
Again, this is fine, you're allowed to think that the occasional gulag is justified to make sure that no one derives income from non-labour sources. I'm not saying you shouldn't be a socialist, I'm saying that being a socialist is *different* from the project that Scott loosely alludes to and that the top-level commenter suggests is basically achieved by socialism.
I'm explaining to the top-level commenter why some people who are sympathetic to the goal that Scott outlines, and who have some sympathy for the intuition that this has something in common with socialism, might still not consider themselves to be socialist, or at least, might think that the two projects aren't exactly identical.
Okay, you've changed my mind. I'm now convinced that promoting a social norm of saving strangers is actively evil because of second-order effects. Thanks!
Reading "More Drowning Children", the thought that came up for me was, "Damn, he has greatest ability to write reticulated hypotheticals which primarily serve to justify his priors of any one I've ever read!"
My second thought: For me, the issue is more, "At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?" Not what you're getting ads, I know, but what you're doing seems to me to point in that direction.
I might as well take the role of the angel on your shoulder, whispering into your ear to tempt you, saying, why not give all you have to help those in extreme need just once, to see how it feels? What if your material comfort was always at the whims of strange coincidence, and goodness was the true measure of man? What if you found out you liked being a penniless saint more than a Respectable Person? You might enjoy it more than you think. Just think about it. :)
Penniless saints have done far less good in the world as a whole than wealthy countries and wealthy billionaires who then had enough time and capacity to look beyond their near term needs.
Sounds like something you'd hear from media sponsored by billionaires, or in history books written by billionaires, or in a society which overemphasizes the achievements of billionaires while ignoring the harm they are doing, etc.
Yeah, well, that just, like, your opinion man. Maybe bring some examples of poorer or different societies that do less harm and more good?
I actually completely agree with this post. You shouldn't take your own feeling of "feeling good" as the entire idea behind morality. Yes, billionaires giving to charity will do more good than a penniless saint (being influential can make up for this gap -- Gandhi may have done more good than the amount than the amount of money in his pocket -- but the random penniless saint won't outweigh $100,000,000 to charity).
That being said, billionaires can save 100,000 lives, but you personally could save 1 life. If you don't save that one life you could, seems like you're saying you don't value saving lives at all.
You could say "one of my highest utility action is to become a billionaire first, AND THEN donate all my money to the causes which are the most effective" and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead, you say "well it's difficult to do the maximally efficient thing so I'm not even going to save ONE LIFE", then you're giving an excuse for not saving a life even if you wanted to.
You could say "one of my highest utility actions is to CONVINCE all the billionaires to donate their money to charity". and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead you say "well, most people who say they're moral aren't doing that, so clearly the idea of morality is bunk and I'm not a bad person for not following the natural conclusion of my morality" then that's a problem.
Someone who weaves complicated webs to not do anything different than what they wanted to IS, IN FACT a worse person than if that same person donated enough to charity to save one life.
No matter what, all morality says you need to either *try*, OR say you don't value saving any lives (an internally consistent moral position that wouldn't care if your mom was tortured with a razor blade) OR do what scott says in the post and assume that looking cool/feeling good about yourself IS morality, and therefore there's no moral difference between saving 0 lives or 1 life and 10,000 lives if they provide the same societal benefit and warm feeling of fuzzyness about being a good person in your gut.
You do, in fact, have to choose one of those 3.
I'm not sure what complexity there is. The invisible hand makes free societies wealthy, and wealthy societies give more to charity. No external effort, no waiting, no convincing, marketing, sales, or anything else needed. Lowest effort, highest utility.
There is more in heaven and earth than billionaires. There are also a lot more millionaires, and even more hundred-thousand-aires than there are billionaires. Grow the whole pie. This isn't zero-sum.
"At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?"
In the thought example, some of the saved children should take over the job, and the others maybe give thanks for saving their lives.
In real life, no one is ever going to reward you, because the kind of people with the capacity and desire to reward you are probably too busy saving kids themselves. Until the day comes when there's finally no more kids to save anywhere, then MAYBE society will throw you a bone, but we'll probably all be dead before that happens.
This is a the point of the first issue of Kurt Busiek's Astro City comic. Samaritan, a Superman-like hero, can never rest, literally (I think), because with his super-hearing he can always, 24/7, hear a drowning child in Africa, and he can get there in 2 seconds, so he feels compelled to do so.
Seems like the superior solution would be finding an EA / moral entrepreneur who will happily pay market value for the cabin, and then set up a net or chute or some sort of ongoing technological solution that diverts drowning children into an area with towels, snacks, and a phone where they can call their parents for pickup. Parents are charged an entrance fee to enter and retreive their saved children.
I unironically think the moral equivalent of this for Scott's favorite African use cases is something like "sweatshops."
"Parents are charged an entrance fee to enter and retrieve their saved children."
What if the parents don't turn up?
"Look, do you really think that by now *nobody* has realised all the missing children are due to them falling into lakes and streams? One child per hour every hour every day every month all year? 8,760 child fatalities due to drowning per year for this one city alone? Come on, haven't *you* figured out by now that this is happening on purpose?
Don't want to be bothered with your kids anymore? Don't worry, eventually they'll wander off and fall into one of our many unsecured lakes, streams, ponds, and waterways, and that's that problem off your hands! Your kid is too stupid to figure out that they shouldn't go near this body of water? Then they're too stupid to live, but Nature - in the guise of drowning - will take care of that.
You keep saving all these kids, our population is going to explode! And the genetically unfit will survive and reproduce! It will ruin our society!"
This would be a strong incentive for parents to teach their children how to swim, and not to jump in fast moving rivers.
And yet some would still fall, just as occurs in our world.
Bodies of water have inherent danger. Yet, it is worth the tradeoff of not posting lifeguards at every single river, every pond and stream, just to stop the potential of some children drowning. Life is life, and tragic accidents happen. Safetyism is worse.
> What if the parents don't turn up?
Vocational training as a rescue-chute maintenance technician.
Economic development leads to lower fertility. I definitely think population and fertility rates in Africa are huge social problems, but the best way to address them is to make Africans more prosperous so they adopt the norms about sex and family size that other countries have adopted as they get richer.
They refuse to pay and have you arrested for kidnapping. It turns out the LAW generally adopts the Copenhagen view.
in the dam lobbying example, surely lobbying for a different dam is touching the children
Yeah, probably. But what about voting for the Different Dam Party? Or voting for a party whose headline 5 policies you greatly support but also have a nasty line in their manifesto about building a different dam?
I think at some point Scott has to accept that people reading this blog are exactly the types of people to optimize for their own coolness and not at all for truth seeking or morality, when you see them go into contortions to avoid intuition pumps. The problem is upstream of logical argument, and in whatever thought process preventing them from thinking they could be at all immoral.
I would think his readership is on average more into truth seeking or morality vs coolness maximation than the average person
If so, that just shows how low a bar that is. :(
Depends. Some people are here for the careful explorations of morality. Some people are here because they heard it was where all the smart kids hang out, and they are desparate to prove they belong, which often means showing off your ability to do cognitive somersaults over things like empathy or basic moral intuition. Its essentially trangressive intellectualism as psychic fashion.
Yup, this. Thank you for explaining it.
Although I am being bad for not mentioning that I'm really talking about the commenters. If you were persuaded, the most likely time you mention it (if you do *at all* which you probably don't because mentioning donations are gauche) is a random start or end of year open thread, probably with no direct direct link back to the persuasive post. If you weren't persuaded, you likely fall into the above failure mode. (Edit: and therefore immediately respond)
> Some people are here for
I also think the proportions have shifted over time, and are still shifting.
"on average"
Have you actually met any of the 'people reading this blog'? Try coming to an SSC Meetup, or Less online or Manifest, rather than just making shit up.
People who actually come to meetups may not be representative of readers.
Yup, people who go to meetups are several tiers above the average commentator, who cannot seem to grasp the purpose of hypotheticals and post things like "well this just makes me WANT to drown people (unstated subtext) because I don't like your arguments". Even if those types of people went to meetups, they'd know better than to say things like that!
And "seeming cool" doesn't mean "fashionable" or "obviously pandering to populist sentiments" (both of which I agree would be a bad way to describe even the current commentators) in this context, but something more like "self conception preserving" or "alliance affirming". Someone replying a post about morality about how obviously they love their family, and obviously giving is local because then it'd be reciprocal are not thinking truth seeking thoughts but "yay friends" or "yay status quo".
If you think you have a simpler explanation of why over 50% of the replies are point missing or explicitly talk about how they don't want to engage with the hypotheticals, with only reference to the replier's surface level feelings rather than marshaling object level arguments on why it'd be inappropriate to use hypotheticals then I'm all ears. But saying "people just make mistakes" is not an answer when the mistakes are all correlated in this fashion.
>when you see them go into contortions to avoid intuition pumps
Funny enough it was Scott himself in his What We Owe The Future review that broached the idea you probably should just hit da bricks and stop playing the philosophy game! He wanted to avoid the intuition pumps because they're bad. When you *know* someone is rigging the game in ways that aren't beneficial to you, you are not obligated to go along with the rigging.
Ever-more-contrived thought experiments are not about truth-seeking, either.
>whatever thought process preventing them from thinking they could be at all immoral.
The phrase you're looking for is "human nature."
I’m confused by the use of ethical thought experiments designed to hone our moral intuitions, but which rely on increasingly fantastical scenarios and ethical epicycles upon epicycles. Mid-way through I was wondering if you were going to say “gotcha! this was all a way of showing that the drowning-child mode of talking about ethics is getting a bit out of hand.” Aren’t there more realistic examples we could be using? Or is the unreality part of the point?
Like with scientific experiments, you try to get down to testing just one variable in thought experiments. The realism isn't the point, just like when a scientist who is studying the effects of some chemical on mice ensures that they each get perfectly identical and unchanging diets over the course of the experiment. The scientist isn't worried about whether it is realistic that their diets would be so static because that's not what's being tested right now.
You can build back to realistic scenarios after you've gotten answers to some of the core questions. But reality is usually messy and involves lots of variables at once, so unless you have done the work to answer some of those more basic questions, you're going to get stuck in the muck, unsure what factors are really at play. Same as if the scientist just splashed unmeasured amounts of the chemical onto random field mice in a local park.
The problem is, the drowning child thought experiment, in its *original* form, is already the most free of confounders, as it is much simpler than the scenarios Scott proposed here. So the equivalent of your mouse science example would be: I give my mice a certain drug, and the mice are held und the most supremely controlled circumstances such as their diet. But the drug did not have any effect. So now instead I let my mice roam free in the garden and feed them leftovers from the employee canteen, and then I give them the drug again and see if it works now.
The original Drowning Child thought experiment is "you'd save a child if you saw it drowning, wouldn't you?" and the majority of people will go "of course I would".
*Then* it sandbags you with "okay, so now you have agreed to save *all* the drowning children forever" and people not unreasonably go "hold on, that's not what I agreed to!"
And then the proposers go "oh how greedy and selfish and immoral those people are, not like wonderful me who optimises for truth seeking and morality".
No, it asks you _why_ you feel so sure you have to save the one drowning child, but you never even think about the others. The point is to make you realize that _is_ what you (implicitly) agree with; that it's _your_ judgement that thinks you're greedy and selfish for not saving children.
Some people actually respond in the desired way to the thought experiment; they can't think of any compelling answer to the question "what's the difference?"
Other people propose answers like: "the difference is spatial proximity", and so Scott counter-proposes other thought experiments to try and isolate that variable and discovers that it actually doesn't seem very explanatory.
The point of these iterated versions is to isolate different variables that have been proposed to see if they actually work to answer the question; and if we can discover an answer, figure out what it suggests about our actual moral obligations vis a vis funding AIDS reduction in Africa or whatever.
But Scott *isn't* isolating any variables, nor is he trying to. He's just constantly changing all the variables on a whim, including "variables" that aren't actually variable to begin with (e.g. laws of physics). Continuing the analogy from before, what Scott is doing here is like if one of the scientists were to notice that the mice seem to be becoming unhealthy, and another scientist proposes that it might be because their diets don't contain enough protein. Then the first scientist says, "okay, let's test for that. We'll send the mice to an alternate universe where the speed of light is 2 m/s slower than it is in our world, genetically modify them to have pink fur with purple polka dots, give them all tiny ear piercings, and start adding protein to their diets -- if your theory is correct, this should resolve their health issues."
I guess I disagree? People claimed that the clear difference between drowning kids and malarial kids in Kenya is distance, so Scott lists some (not even all that unrealistic) examples where you're physically distant to see if the intuition holds?
After rejecting physical distance he tries to think of some other factors: Copenhagen-style "entanglement", the repeated nature of the malaria situation as opposed to the (usually) one-off nature of the drowning child. He decides that these are indeed the operative intuitions, and then challenges them, finding all versions unsatisfying of using these as a complete basis for moral action, before laying out his preferred resolution.
I agree the examples come fast and thick, and sometimes it feels like we're nested a few levels deep, but I think he's exactly looking at the variables "physical distance", "declining marginal value of moral action", "entanglement with the situation" , and trying to isolate them individually and then in various combinations/interpretations.
This is a completely inaccurate presentation of the original argument which makes me think you’ve never even seen/read it.
In point of fact, I have read it, but thank you for your interest.
Actually, what's going on here is that we observed some effect X in the original experiment (the drowning child). Then someone claimed "yes, but that effect only occurs when their living space is a small cage. In more naturally-sized living spaces, the effect X would vanish. The chemical isn't sufficient on its own." And so we go on to run the test again, but now the scientist builds a (still contained and controlled in other ways) testing environment where the mice live in burrows made of dirt instead of cages.
It's trying to apply rules of logic to something that is squishy and not made of logic.
Really there are no reason to believe that our moral intuitions are coherent. They probably aren't. Thought experiments are fun and useful for trying to explore the edges and reasons of our intuitions, but they have their limits. This article may have gracefully (or not gracefully, depending on your perspective) bumped up against them.
You could have a framework where you expect yourself, and hopefully others, to donate a portion of thier time and/or money to helping others (call it the old 10 percent tithe, although admittedly everyone has thier own number). If you already expect yourself to do this, then adding on saving a drowning kid once hardly costs you more in the big picture, and is the right thing to do since you're uniquely positioned to do it. if it's really important to you, you can just take it out of your mental tithe ledger and skip one life unit of donation that month (although you probably won't because it's in the noise anyway). But if you're by the drowning river and this is happening so often it's significantly cutting into your tithe, it's perfectly reasonable to start actually taking your lifeguard duties out of your mental tithe, and start wondering if this is the most effective way for your tithe to save lives. And if not, then we all reasonably conclude you're fine (even better off) not doing it.
Also this reminds me of my favorite short story:
https://www.newyorker.com/magazine/1996/01/22/the-falls
"... doesn't seem to be a simple one-to-one correspondence where you’re the only person who can help: [sociopathic jerk thought experiment]"
I'm not sure if this tells us tooo much about the effect of other people in real-world moral dilemmata; one might bite the bullet and say "sure, /in that case/, where you know you're the only one who can help, you should; but in any real situation, there will be 1000 other people of whom you know little—any one of which /could/ help."
That is, if we're considering whether there is some sort of dilution of moral responsibility, I don't think the S.J.C. example really captures the salient considerations/intuitions.
-------------
I disagree with the other commenters about the utility of these thought-experiments in general, though.
They're /supposed/ to be extreme, so as to isolate the effect of x or y factor upon moral judgments—the only other options are to (a) waste all your time arguing small details & becoming confused (or, perhaps, just becoming frustrated by arguing with someone who's become confused) by the interplay of the thousand messy complications in real-world scenarios, or (b) throw up your hands & say "there's no way to systematize it, man, it's just... like... ineffable!"
If there is some issue with one of the thought experiments, such that it does not apply / isn't quite analogous / *is* isomorphic in structure but *isn't* analyzed correctly / etc., it ought to be possible to point it out. (Compare: "Yo, Einstein, man, these Gedankenexperimente are too extreme to ever be useful in reality! Speed of LIGHT? Let's think about more PRACTICAL stuff!")
I can't help but feel that some reactions of the "these are too whacky, bro" sort must come from a sense of frustration at the objector's inability to articulate why the (argument from the) scenario isn't convincing.
I'm sympathetic, though, because I think that sometimes one /can correctly/ dismiss such a scenario—intuiting that there's something wrong—without necessarily being able to put it to one's interlocutor in a convincing way.
Still—no reason to throw the bathwater out with the baby. It's still warm enough to soak in for a while!
edit: removing this post, it is being misinterpreted so I am going to give up
In the Christian tradition, Jesus explains precisely what decides someone's eternal fate in Matthew 25 -- suffice it to say, it really is just meeting the material and social needs of the worst off people. No requirement you're religious in any way, and Jesus does mention that it'll lead to a lot of surprise both from disappointed "devout" people and confused but delighted skeptics.
Obviously there are other traditions and legends, but presuming Heaven is a Judeo-Christian term of art for a specific kind of eternal fate, it seemed relevant.
John 5:24
I'm not sure what it would mean to believe the gospel, but like absolutely refuse to care for a neighbor as though they were yourself. It is a gibberish idea.
Yeah that’s what James says in James 2:18. Contrast Ephesians 2:6-10. Seems like a contradiction! But it’s not. Paul explains in some detail in Romans 3.
Martin Luther would like to inform you about the epistle of straw 😁
https://www.thegospelcoalition.org/themelios/article/the-epistle-of-straw-reflections-on-luther-and-the-epistle-of-james/
The "actually existing Christian tradition" would say that the morally relevant aspect of action is the act of the will, not the change in external circumstances brought about. This is why the charity of the poor woman who gave her last coin was of greater merit than those of the rich.
Obviously one cannot harden one's heart to the poor and still be righteous. What I am saying is that external impact is in some cases disconnected from moral goodness; thus, the rich man who gives 1000 dollars has not done a moral act 100 times better than the poor man who gives 10 dollars.
> What if it is primarily the cultivation of a certain purity or nobility of soul?
Interesting theory. How much does that cost to do, at a median level of success? In terms of man-hours, or purchasing power parity, or wattage for sacred hot springs, or whatever standard fits best.
Would soul-cultivation be directly incompatible with funding antimalarial bed nets, or is there room for Pareto improvements in someone hedging between the possibilities? "Tithe 10%, and also do these weekly exercises, from which you'll personally benefit" isn't an obviously harsher standard to live up to than tithing by itself.
> After all, if the soul is immortal, its quality is infinitely more valuable than any material and temporal vicissitudes.
That's not how discount rates work. It's entirely possible for something of infinite duration to have finite net present value. https://en.wikipedia.org/wiki/Gabriel%27s_horn
Giving up on explaining the nobility of soul formulation. However, I will say that immortality of the soul is not shaped like the linked image; the amount of suffering in Hell or Purgatory or the amount of joy in Heaven is far greater than anything terrestrial.
Here is a hole that I think is relevant.
I would argue that saving drowning children is actually a very-high-utility action, because you can call the child's parents to pick the child up and they'll be super grateful, and even if they don't pay money, you'll accrue social and reputational benefits. Tacking on "...oh, but your water-damaged suit!" is misleading, because even with a water-damaged suit, saving the child is still obviously net-positive-utility.
(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)
The Drowning Child story takes a situation in which saving the drowning child is obviously high-utility, and conflates it with a situation in which saving the person-with-a-disease is obviously negative-utility.
I don't have a moral about whether you should give lots of money to charity. I just think the drowning child story is misleading us, because it says "...you chose to save the drowning child, so for consistency you should make the same moral decision in other equivalent situations" but the situations are not actually equivalent.
I would argue that it's mostly false that society gives you kudos for saving drowning children. Society gives you very little. The *child's parents* are the people who are rewarding you.
You can get an award in many countries for saving a drowning child or saving someone from a burning building. Not that much utility granted but some?
In steps the entrepreneurial nonprofit Singer Fineries, the world's first designer of high-end suits, gowns, and other formalwear that are all 100% waterproof. For the go-getting drowning-child-saver on the go! Ethically sourced materials made by fair trade certified artisans vetted by GiveWell, all proceeds donated to effective charities, carbon-neutral, etc.
Even better, the SF corporation will provide training in practical needlework, tailoring, and seamstressing for every saved child and hold a position open for them to work on the waterproof clothing. Sweatshops, you say? No, not at all! Ethical pro-child independence, we say! Earn your own living, live your own life, free of the neglectful parents who let you tumble into the lake and left it up to a stranger to save you!
"(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)"
I like the cut of your jib, young lion, but I think the EA and those inspired by Singer would be appalled. You're not supposed to *benefit* from this, you are supposed to engage in it via scrupulosity-evoked guilt! You should be paring yourself down to the bone to save drowning children every spare minute! You surely should *not* be making a nice little living from being a professional lifeguard! 😁
I have to say, if you must live beside a river full of dumb brats whose inattentive parents can't be bothered to keep them from drowning themselves, you may as well make a go of it how you can. Venmo those negligent caretakers for every cent you can, and don't forget to shame them on social media if they don't cough up!
Unless of course, by benefiting, you end up doing more total good over the long term.
Replace the original thought experiment with an orphan, who has no-one in the world and no social capital whatsoever.
Do your intuitions change? Mine don't.
Yes the "touching" thing is dumb but:
"But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable."
What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!
If this is happening I genuinely think that someone living in this cabin needs to realize their life has been turned upside down by fate and that their new main life goal has to be "saving the 24 lives that are ending per day" by whatever means possible. Calling every newspaper to make sure people are aware of the 24 daily drowning kids in the river. Begging any person you see to trade with you in saving 12 kids a day so you can sleep. Make other people "touch the problem." Whatever the solution is--if a problem this immediate, substantial, and solvable appears and no one else is doing anything about it, you have to do what you can to get every kid saved.
I took it as "personally stop whatever else you may be doing to physically save the kids, despite the effect on your own life, sleep deprivation, etc." (until you pass out and drown)
if other means are available, damn right I'm making sure there are lifeguards
I think this speaks to a completely sane worldview, but it is less commonly used to navigate the world than espoused.
"What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!"
Why not? You're one person, there's a kid in the river every hour, it's physically impossible for you to save every kid in 24 hours in perpetuity. You have to eat, sleep, catch your breath after jumping into the river and pulling the last kid out, etc., never mind working at your job.
So most people would agree that yeah, you can't save them all, not on your own. Maybe after saving 37 kids straight you collapse from fatigue and end up in hospital. That means all the rest of the kids drown unless someone takes over from you. Or you work a reasonable rate of "I save one kid every daylight hour for ten hours, then that's it".
If you're discounting the need to have some connection to the harms in order to be responsible for curing them, be it causality or proximity or association, then you're stuck back into the original problem we're trying to escape here. Other than your proximity to the river, there's nothing special about your situation unless or until you've assumed a duty. You are best positioned to intervene if it's just physically jumping in 24 times a day, but we're advanced humans with technology and an economy, so your neighbor a half mile in from the river could just as equally hire a person or pay for a contraption to save the kids as you could. If there is no need for a connection, merely awareness, then why isn't your new main life goal saving the 2800 children under the age of 5 who die every day from dysentery? Because there are other people doing something about it? Not very well, it would seem!
So... the thing we should do is rebuild the coalition and the general pot, yes?
I was amazed that this essay wasn't about /didn't get to USAID. USAID is a global aid program Trump is essentially closing. As a result he (and America) are being blamed for killing poor foreigners who will apparently no longer survive due to not receiving the aid. Would it not be our problem at all if we'd never given any aid? Are we really the ones killing people by no longer voluntarily providing aid?
https://www.bu.edu/articles/2025/mathematician-tracks-deaths-from-usaid-medicaid-cuts/
https://www.nytimes.com/interactive/2025/03/15/opinion/foreign-aid-cuts-impact.html
https://www.reuters.com/world/us/usaid-official-warns-unnecessary-deaths-trumps-foreign-aid-block-then-says-hes-2025-03-03/
Yes, because if the actual aid is shut down all of a sudden without prior warning, you are exhibiting the vice of akrasia, and not giving people who you now have an obligation to time to adjust or plan out their response. Now the USA does have at least a little obligation towards poorer countries, so when it goes to start fulfilling those obligations again, people will not trust it.
There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
A sufficiently competent and powerful statesman. It would take a great deal of power to be able to pick and choose when dismantling these organizations.
Yes, so the deaths are really the fault of the people opposing Trump becoming all-powerful dictator-for-life.
If we make Trump an eternal dictator of the entire planet, all the drowning children will become his personal property, and then he will have an incentive to save them. Perhaps he will order Elon Musk to build a fleet of self-driving boats to rescue those kids.
Yes, and if they keep dying, you can just declare it another instance of those classic "theodicy" things.
> There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
Yes, this is something that frankly appalled me about Musk's handling of the situation. So far as I can tell, the percentage of USAID funding that was actually being spent on DEI or wokeness-related programs was small, and it's not like Musk couldn't have afforded to hire auditors with basic reading comprehension to go in and surgically remove the objectionable items. He chose to go in with a hatchet on day one for the sake of cheap political theatre.
https://newsable.asianetnews.com/gallery/world/usaid-craziest-spends-revealed-millions-on-condoms-for-taliban-afghan-poppy-farms-to-drag-shows-in-ecuador-shk-srb8eg#image7
I don't think $47,000 for a Transgender Opera in Colombia is a wise use of taxpayer funding, but every item on that list combined amounts to less than half a billion dollars, and USAID was spending 40 billion a year.
There are even items on that list that I'm not sure should have been axed. Is anyone going to die because Afghanistan lacks condoms? Not directly, but there might be some risky abortions avoided, not to mention that Afghan TFR is well-above replacement and could place demographic pressure on limited agricultural resources, possibly triggering war or famine. I don't have a high opinion of Palestinian society, but unless the plan is to liquidate the region's population then constructing a pier to facilitate imports of essential food items isn't an automatically terrible idea.
Here are some at least perceived serious problems with USAID and rationale for the rapid action:
1) Lots of the funding was not going directly to overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
2) In many cases USAID funding directly sponsored and supported poltical ambitions and patrons of one party, not both parties in the US. This rendered it perceived as not just neutral but actively harmful to the opposition party.
3) Because the first 100 days of a lame-duck US President's term are widely perceived to be much more effective and important than the remainder, it is/was necessary to move very quickly to shut it down, both to actually succeed (it is already tied up in courts), and to see the impact on individual recipients, and use the impulse response of the system to better-understand the fraud and patronage that might be involved.
Fixing e.g. PEPFAR after the fact is not ideal, but letting the perfect be the enemy of the good is also not ideal.
Then why not do this in a non-lame-duck Presidential term?
Because a presidential term where all branches are controlled by one party is incredibly rare and hard to predict, and certainly that term will not be controlled by 'you' (whoever you is), and might not lead to the same desired outcome.
For example, right now, the house of representatives is balanced on a knife edge of control where any absences render it evenly split or controlled by the opposition.
If the control of multiple branches was so important, then why try to invest the all-important 100 days in shutting down the programs by executive order without involving Congress? That could have been done in the first term just fine.
> a lame-duck US President
lame duck
1. An elected officeholder or group continuing in office during the period between failure to win an election and the inauguration of a successor.
2. An officeholder who has chosen not to run for reelection or is ineligible for reelection.
Wow. Today I learned about definition #2. God, do I hate stupid extra definitions of terms that ruin the first, good definition of those terms (see also: literally)
This is interesting, as I have always understand lame duck-ness to be definition 2, not 1. I would have reversed their order based on my own experience.
This whole "we have to do it quickly or it'll be impossible" / "we'll fix the rest later" part seems incoherent to me.
First, they shut down USAID in *two weeks*, not 100 days.
Second, if they only have power for a brief moment and must use it... how will they bring back the good parts later?
Third, how do you even KNOW if a $50b-a-year agency is more bad then good in two weeks?
This just seems like a post-hoc justification for a staggering level of carelessness and in curiosity in a Chesterton's Fence scenario.
I could maybe buy the limited-time-window argument, but people in Scott's comment section were saying it would only have taken a few interns a couple of weeks to read through all the axed NSF grant proposals, so... even under time pressure, I think Musk could likely have done better.
> Lots of the funding was not going directly to overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
If you're paying the staff who run a charity NGO, and they talk to their patrons and vote for the party who funds them, then... yes, you will be 'enriching politically-connected individuals of the opposition party', almost by definition. I don't know a solution to this problem other than the GOP being less negligent when it comes to institution-building.
A lot of those 40 billion is NGO color revolution money though.
>condoms
So like I said, evil filth. Do you have anything unambiguously morally good besides PEPFAR which the USAID was doing?
At least on paper, less than 10% of US foreign aid is/was allocated to 'human rights and democracy', or anything that could plausibly be interpreted as 'NGO color revolution money'.
The sexual revolution debate aside, I don't think any and all birth control is wrong, so... gonna have to differ on the condoms.
The problem is trying to disentangle the good parts from the bad parts, since any attempt to question it is met with the "people will die!" defence and asking the civil servants "so what did you do last week?" is seemingly intolerable interference.
Nothing wrong with gutting fat or rot. Some servants however, really do say "I stopped HIV babies from dying," and it is competent statesmanship to be able to distinguishing between the two, or at least undo the problem when there is just cause.
So what is refusing to state whether or not you did save HIV babies from dying?
An admission that you are useless and need to go. My problem is that Musk seems to not have actually asked, or if he did, he did not do it well.
I would think it is worse to take action to foreseeably cause death, as opposed to neglecting to take action to foreseeably prevent death. (If this weren't the case, the answer to the trolley problem would be obvious)
I do admire that you continue to advocate for some version of "EA values" in these increasingly screw-you-got-mine times, even if it's largely academic to me as a Poor with limited resources wrt the scale of drowning children. Not having any realistic path towards ameliorating that state of affairs means it's even more important to Be Excellent To Others in whatever small capacities present themselves, I think. Everyone can do the mundane things somebody has to and nobody else will, if one cares to notice them, Copenhagen be damned. (While acknowledging that yes, there's real value in blissful ignorance! Premature burnout from the daunting scale of worldwide lifeguard duties is worse than at least helping the local drownees and ignoring the ones in the next city over.)
The real problem comes with coordinating others to act similarly, so the burden is collective but light, versus an endless uphill battle for a few heroic souls. That always feels missing from such philosophical musings - the kind of people susceptible to Singerian arguments aren't the ones who most needed convincing. Classic memes like religion sort of work for instilling the charitable drive, but come with a whole host of other "entanglements" that aren't all desirable.
I think a core objection to giving lots of money to be charity might be skepticism that the people being saved actually exist.
Like... the Effective Altruism page about malaria bednets has this long string of numbers they multiply together, to figure out how many dollars it takes to save a life. And that's legit cool. Of course, when you multiply a string of numbers like that, you tend to get huge error bars because all the uncertainties multiply. But they're smart people, I assume, and they're trying really hard, so I'm sure they're not trying to be deceptive. I have to respect that they've done as much as they have.
But... I'm in an environment where people will say anything to get me to give them money, and I guess I've gotten used to expecting evidence that people's claims are real? And I know that, if I buy a bunch of bednets to prevent malaria, no evidence will ever be provided that any malaria was prevented. At best they'll have some statistics and some best-guess counterfactuals.
And -- I mean, I'm sure the bednets people are good people. I've never met any of them personally, but they're working on a charity that does really good things, so they must be really good people with the best of intentions. But it sort of feels like they don't really have an incentive structure that aligns with communicating honestly.
I dunno. The internet in general isn't a high-trust place. I guess probably the people in the charity part of the internet are especially honest and trustworthy, so rationally I'd probably have to concede that the charity really is saving lives. But I don't feel it.
>I'm in an environment where people will say anything to get me to give them money<
So... got a lot of it laying around then, eh?
Hey, unrelated but FYI, I've been meaning to tell you ever since we last saw each other at that university or high-school wherein we were real good friends: if you give me some money, I'll write you into my next book as a badass superhero. Also, I may be on the verge of solving world peace and stuff, if only I had the funds... ah, the tragedy of it all—to have the solution in my hands, yet be stymied by a mere want of filthy, filthy lucre–
"I'm in an environment where people will say anything to get me to give them money"
Begging emails from charities. Gave one donation to a specific cause one time, got hailshowers of "donate donate please give donate we need money for this special thing donate donate" begging emails until I directed them all to spam.
That sort of nagging turns me away more than anything; I don't have a zillion dollars to donate to all the good causes, and I'm going to give to what I judge most in need/most effective. I am not an ATM you can hit up every time you want a donation for anything and everything. And of course they used the tearjerking heartstrings tugging route: here's little Conchita or whomever who is the human equivalent of a one-legged blind puppy, don't you feel sorry for her? Here's Anita who is the homeless mother of twenty in a war zone who has to pick up grains from bird droppings to feed her earless quadruplets, don't you feel sorry for her?
No, in fact, because you've so desensitised me with all the begging and all the hard cases, I have no problems shrugging and going "not my circus, not my monkeys".
There's a thought experiment, where someone runs up to you and says: "Give me a hundred dollars right now or else TEN TRILLION BILLION GAZILLION people will die horribly!"
And the thought experiment says: "Okay, that's a crazy claim, but as a Bayesian you have to assign some probability to the chance that it's true. And then you have to multiply that by the disutility of ten trillion billion gazillion people dying horribly, and check if it's worse than giving a hundred dollars to an obvious scammer. And what if the number was even more than that?"
But in practice people don't do this, we just say "no thanks, I don't believe you" and walk away. I'm not sure what rule we're applying here, but it seems to work pretty well.
And when I think about buying anti-malaria bednets, I feel like that same sort of rule is getting applied.
GiveWell is mostly advertising that you donate to charities that are not them. So it really seems like your thought experiment is in the opposite direction: someone tells you to give to an unrelated third party and you're trying to come up with reasons why the third party isn't really a third party.
The easy out for this is that because the claim is physically impossible the actual expected utility is always 0. Probability doesn't innately have to be an asymptote of 0, it can just be 0.
Our knowledge of what's physically possible is probabilistic, though, so this out doesn't really work. I think a more realistic out is that even though we don't have the cognitive resources to correctly estimate a probability at 1/3^^^3 or something by reasoning explicitly, conservation of evidence implies that most general statements about what's happening to about 3^^^3 entities are going to have a probability of about ~1/3^^^3 or lower so failing a straightforward logical argument why it's much larger in this case then if you have any risk averseness at all, and probably even if you don't, you should ignore such possibilities.
Pascal’s mugging is indeed a bad way to go.
Bed nets do appear to pencil out give reasonable bounds on utility, but that doesn’t mean everything crosses that threshold.
I don't think this is the core objection, it's more often an excuse. If everyone trusted the EA people's figures, most people still wouldn't donate anywhere near as much as EA people say they should.
GiveDirectly has a web-page with a real-time-updated list of testimonials from people who receive money and saying what they did with it, so I don't think this is the main blocker.
What sort of evidence would convince you? What do you think is missing?
After thinking it over somewhat, sadly I think I have to admit that this *was* an excuse.
I recant the above statement. I do think that statistics are easy to lie with, or easy to get confused and report overly optimistic numbers despite the best of intentions. But I don't think it was my core objection.
Proximity based moral obligations work because the further away something is, the less power you have to actually affect things, and therefore the less responsible you are for them. You may say 'give to effective charities' but how do I know that those charities are actually effective and are not lying to me, or covering up horrific side effects, or ignorant of their side effects? Therefore, it would seem that I have more of an obligation to give to charities whose effects I can easily check up on in my day to day life*.
By this principle, the person in the NYC portal has an obligation, since he can actually see and actually help. If the guy screws up following your instructions, the situation is not worse than before. If you come up with a highly implausible scenario where his screwup can cause massive damage, then it becomes more morally complicated.
Same for the robot operator, since he is in control of the robot and knows what it is doing, assuming he knows it won't risk the patient's life. If you were a non-surgeon robot operator who came across the robot in the middle of an operation (the surgeon took an urgent bathroom break?) it would be immoral for you to help, since you wouldn't know what effect messing with the surgery would have.
In the same way, if I am simply told that going into a pond and pressing a button would save a drowning child halfway across the world, well, I have no way to verify that now do I? It could blow up a dam for all I know.
For the drowning child question, you always have a moral obligation if it occurs, but you don't necessarily have an obligation to put yourself into situations where moral obligations occur. Going out of your way to avoid touching things however is the sin of denying/attempting to subvert God's providence, see Book of Jonah.
So my Copenhagen answer is as follows: if a morally tough situation happens to occur to you, it is God's providence, and He wants you to do it.
>God notices there is one extra spot in Heaven, and plans to give it to either you or your neighbor. He knows that you jump in the water and save one child per day, and your neighbor (if he were in the same situation) would save zero. He also knows that you are willing to pay 80% of the cost of a lifeguard, and your neighbor (who makes exactly the same amount of money as you) would pay 0%
The neighbor judged himself when he called you a monster for not doing enough to save people didn't he? He also was touched by the situation when he involved himself in it by commenting and refusing the counteroffer. Its also seems fairly proximate to him for him to be auto-involved, and he is cognizant of this and in denial. Problem solved.
I recognize where you are going with this, and my point is not that you are a monster for not doing enough, but that your donations can have side effects which you cannot detect and cannot evaluate to adjust for in time, or they can end up not doing anything. Sure you can export it to other EAs to verify, but how can you trust them to be honest or competent? The crypto fiasco is a good example here.
>Alice vs Bob
God is Omnipotent and Omnibenevolent, he can have infinite spots in heaven and design life by his providence so that both Alice and Bob can have the appropriate moral tests which they can legitimately succeed or fail at. Bob would likely have a nicer spot in heaven though assuming he succeeded, because he had more opportunity for virtuous merit.
*Note that this argument is not an argument against giving to charity, only against giving to 'untrusted' charities, which I classify EAs as because they seem to be focused on minmaxing a single issue as if life is designed like a video game without considering side effects they can't see, and are prone to falling for things that smell suspiciously like the St Petersburg Paradox.
My logic leads me to conclude that it is optimal to use your money to help the homeless near you since you have the most knowledge and power-capacity towards it, which I have been half-heartedly doing but should put more effort into.
I've helped homeless people sometimes but more often than not I haven't. Homeless people sometimes have simple problems that you can help with (e.g. need a coat) but often it would require an expert to actually help them out as much as a malaria net would help someone in Africa.
This is true if the distribution of problems are the same near and far, but if you live in a rich country and are thinking of donating to a far away poor country, that's probably not true: the people near you with _real_ problems are people with medical conditions that require expert knowledge to solve, or mental problems that we may not know how to help, and so forth. While the people in poor countries may have problems like vitamin A deficiency which is easily solved by giving them vitamin A, or endemic malaria which is relatively easily solved by giving them bednets.
Even with the distance, I'm pretty confident it's much easier for me to get a hundred vitamin A capsules to Kenya than to cure whatever it is that affects the homeless guy who stays in the shelter a few blocks away from me.
Indeed, the whole point of charity evaluators like GiveWell is to quantify how easily a dollar of yours will translate into meaningful effects on the lives of others.
You lost me when you brought Jonah into your argument. IIRC, God's brief to Jonah was that he specifically was to go to Ninevah and preach against the evil there. After trying to avoid the task, Jonah finally slogged his way to Ninevah and preached what God had told him to preach. But he failed God because didn't preach about His mercy, as well. But nowhere in the story do I remember God telling Jonah to preach about His mercy.
How can we know God's will? God didn't tell Jonah that there was an additional item on his SoW. The only takeaway I get from Johah, is that if I rescue a drowning child, I need to preach about God's mercy as I pull the kid out of the water. In the trolly scenario, God's will may be that the five people tied to the track die, and the bystander lives. But His providence put us in the control of a trolly car, and he left us the choice between killing five people tied to the track or a single bystander. We don't know what God's optimal solution is.
You misunderstood my point. God gave Jonah a job, he tried to evade it entirely, and that was clearly the sin, which indicates that trying to cleverly dodge moral responsibility by removing proximity is bad.
Jonah's behavior in chapter 4 is not relevant to the point.
>How can we know God's will
Study moral theology and you can guesstimate what the correct action in a given situation is.
>preach about God's mercy as I pull the kid out of the water
As others pointed out, you can recruit the kid to help you pull more kids out of the water.
>trolley problem
See double effect.
So, what does Christian moral theology indicate we should do if we find ourselves in a trolley problem scenario? Bear in mind that this is also the type of ethical koan that has troubled Talmudic scholars. For instance, Rabbi Avrohom Karelitz asked whether it is ethical to deflect a projectile from a larger crowd toward a smaller one. Karelitz maintained that Jewish law does not allow actively causing harm to others, even to save a greater number of people. He argued that human beings cannot know God's intent, so they do not have the authority to make calculations about whose life is more valuable. Thus, according to Karelitz, it would be neither ethically or halachically permissible to deflect a projectile from a larger crowd toward a smaller one because doing so would constitute an act of direct harm.
As for me, I'd say the heck with the Karelitz, I'd deflect the projectile toward the fewest number of victims. I don't know what I'd do if my child was in the group receiving the deflection, though. But I'd probably make my decision by reflex without considering the downstream ramifications. Ethical problems do not lend themselves to logical analysis because human existence is greater than logical formulas. Sure, we could all be Vulcans and obey the logical constructs of our culture, but the minute we encountered a Gödelian paradox, we'd be helpless.
You are free to flip the lever, but not push the fat man, since the trolley running over a single person is a side effect, while pushing the fat man is directly evil.
The trolley running over a single person is a side effect of moving the trolley, the fat man dying is a side effect of moving the fat man. There isn't really a sharp line here.
Its not a side effect though. You are actively choosing to push the fat man, ie it is you active will that the fat man be pushed, and the trolley is stopped by the means of pushing the fat man.
I will point out that I think a more nuanced framing of Rawlsian ethics is inter-temporal Rawlsian ethics where we both don’t know **where** we will be born or **when** we will be born.
Instead of the argument of keeping taxes on the rich low so they don’t defect, those in the future will want as much growth in the past as possible to maximize the total wealth of the world and the total number of medical breakthroughs available to them.
There is now a balance of being fair and equitable at a single spacetime slice and people farther in the future wanting more growth and investment in previous time slices that better benefit them.
I think this makes the tradeoffs we often confront in redistribution vs investment more salient and makes the correct policy more difficult to easily figure out.
(Sorry if this was mentioned in another comment, I looked at about half.)
I think intertemporal Rawlsian ethics is a wonderful idea, but it’s *really* sensitive to your discount function and error bars on the probabilities of stable growth and maintenance of a functioning civilization , isn’t it?
Yes! That’s why it’s so hard to know what to do!
And that’s the tradeoff we face here existing in the world now too.
> First, she could lobby the megacity to redirect the dam; this would cause the drowning children to go somewhere else - they would be equally dead, but it’s not your problem.
By the standards of morality in the thought experiment this is the correct solution. The prevailing standards in this hypothetical world allow for magical child flushing rivers accepted without significant protest or mitigation. Objectively, you are not doing anything wrong.
Morality is not an abstract thing to be discovered and while basic survivorship bias means that societies with a sense of morality that result in Child River O'Death are unlikely to be tremendously advanced we can say that certain moral codes can be more effective than others at human flourishing, you cannot use thought experiments to find the rules because there are none. It's all a blur of human messiness.
If I got to choose, I would rather give 1000 people a sandwich or something instead of torturing 1000 people by cutting off their fingers.
Sure, you can argue "you cannot use thought experiments to find the rules because there are none".
Yes, it's "all a blur of human messiness".
Would you rather give 1000 people a sandwich or torture 1000 people? If you prefer one, well, you might even have a reason, let's get to the bottom of it. I'll call it "morality". And if this hypothetical seems to have a preference, we can probably assume other hypotheticals do, like the ones Scott is using.
If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
> If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
Yes, I fully understand that you're gesturing at the normal human tendency towards being pro-social in the case of Sandwich V. Torture. But the issue with these "thought experiments" which consist of implausible setup followed by binary choice of options designed to be as far apart as possible is that the human tendency you're referencing does not operate according to rules of logic and any answer I can give provides no information on how morality or decision making works. Or: if I'm in a position to torture 1,000 people by cutting off their fingers, I need more information before I can tell you my choice because my actual choices - the thing we are trying to model and/or predict - depend on those variables.
Crafting a hypothetical to try to disprove that someone's objection to a previous hypothetical - or even worse, a concrete course of action which comes with all of the contingent details that do matter a great deal - wasn't their real objection is useless, because it requires inventing situations that are outside the distribution of the thing being studied.
If someone came to you with an idealized method of calculating an object's trajectory and you point out that it is unlikely to be correct because it doesn't take gravity into account, them producing a thought experiment where the object is in a perfect vacuum without the influence of gravity does not mean that gravity isn't your real objection to their method.
A new reign of terror is needed. this comment section sucks. I think I saw someone advocating the defunding of pepfar on grounds of "the kids mother shouldn't have been wrong about sexual hygeine" or something.
more productively, I disagree with the veil of ignorance approach. Just be the kind of person that the future you hope for would admire you (or at least not condemn you). much simpler and more emotionally compelling, and I think it leads to better behavior.
I think this points at something important, but the intuition is sharper if you also stipulate the future knowing your context and thoughts and being much wiser and much, much kinder. Some people believe this means it is "good to believe" in a religion, but I think that is sort of silly and arrogant. Of course there are many people who have enough empathy to know your thoughts and there are very moral people.
People utterly refusing to engage really indicates the change in audience from people who find this kind of discussion interesting on its own merits (SA is clearly doing this to probe the limits of moral thinking as an intellectual exercise) and people who view this kind of moral discussion as a personal attack on them. Discussing morality like this feels like a core part of the rationalist movement and refusing to do so is not a good sign.
To flip this a little: I think it's maybe good that Scott is spreading EA ideas outside their natural constituency. In the spirit of, "if you never miss a flight you're spending too much time in airports", I propose "if you're not getting bad faith pushback and refusal to engage, you're not doing enough to spread pro-social ideas".
While I think the commenting population has gotten worse since the Substack move, I also think the drowning child is a terrible thought experiment and more complicated versions are not so much enlightening as they are a mild form of torture, like that episode of The Good Place where the gang gets explores a hundred variations on the trolley problem.
Discussing morality is interesting. *This particular branch* is exhausted and everyone is entrenched in the degree to which they admire or despise Singer's Mugging. The juice has been squeezed.
Were they advocating, or playing devils advocate?
I am rabidly opposed to the rapid abolition of USAID.
But I am, in fact, quite struck by how appalling the continuation of the AIDS crisis in Southern Africa is and how little we are willing to condemn the sexual behavior that appears to be the driving factor in this crisis.
Babies may be blameless, but it is legitimately fucked up that a very-easy-to-prevent disease has such a high prevalence. AIDS is not malaria. The prevalence does not appear to be reduced by PEPFAR over multiples decades
Failing to engage with the thought experiment is a failure to examine your own moral system, and a failure to contribute anything useful to the discussion. Any of these comments (which there are way too many) that say something like "it's too abstract/too weird what if I change the premise of the thought experiment so I don't have to choose any bad option/ignore the thought experiment because it's dumb" are missing the whole point.
If your answer to the trolley problem is "this wouldn't happen to me, why would I think about it" then you're failing to find what your moral system prioritizes. If your answer to a would-you-rather have snakes for arms or snakes for legs is "neither to be honest" you're being annoying. If your answer to "what superpower would you have" is "the superpower to create superpowers" you're not being clever, you're avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say "well um technically the rules state *any* superpower" then change the rules yourself so you can't choose the thing that's the most boring, obviously unintended, easily-avoided-if-the-question-is-just-phrased-a-different-way option. Choose! Pull the lever to kill 1 person instead of 5 or not! What are you so afraid of? Learning about yourself?
Scott says this in the article:
"Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
And I think it's worth a whole post by itself why people are so reluctant to choose. Anybody unwilling to take these steps to try to figure out what they genuinely prioritize is *actively avoiding* setting up a framework for their own priorities, moral or otherwise. It’s not just that these people about dodging an uncomfortable choice — they're also refusing to engage with the process of decision-making itself. I cannot imagine setting up any reasonable moral system if I didn't do something so simple as *imagine decisions I don't have to make right now, but could have to make*. If I don't do that I'm basically letting whatever random emotions or vibes I feel in the moment, when I really really have to choose, BE my moral system. Why would people do that to themselves? Something something defense mechanisms? Something something instinctually keeping their options open when there's no pressing need to choose?
I don't know. I would choose snakes for legs though.
>But people are making decisions just fine.
This is not my experience. People in the comments are talking about how it's "far beyond the actual utility of moral thought experiments", "How is bringing up all of these absurd hypotheticals supposed to help your interests?", "never encountered a hypothetical that wasn't profoundly unrealistic". This is a post about hypotheticals. If they don't engage with it, instead dismissing the use of hypotheticals altogether, well, refer to my main post.
Many other comments dismiss the hypotheticals with "but, like, we're all atoms and stuff, man, so like, what means anything?" I have a hard time believing these people wouldn't care if their loved ones were tortured. If they say they would care, great, they've just made a decision in a hypothetical, hopefully they're willing to make decisions in other hypotheticals.
>Religion and culture already control their actions enough to make civilized society possible. Nothing more is needed.
Nothing more is needed? There are things that I think are bad that exist in the world (malaria, starvation, poverty, torture) that I would prefer that there is less of. If I can make it so there's less of this stuff, then I'd like to. To do that, it seems I first have to decide what this bad stuff is, and to quantify how bad bad stuff is compared to each other (paperclip vs cutting off fingers with a knife). That's morality, and it can help guide decisions, too!
Locally they may find it more fulfilling, but it makes things globally less fulfilling for everyone.
The state of global utility has very little to do with individual choices.
It's completely determined by individual choices.
Would you believe that Scott actually wrote that post?
https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world
This is the post I've been thinking the entire time I'm reading the comments
In 2009 on LessWrong, no less! I love it, I missed this one. Yeah I guess you can’t force all readers to read this before each article dealing with hypotheticals.
Maybe a disclaimer like “if you’re about to dismiss the use of hypotheticals, visit this lesswrong post” at the top? But I imagine the comment section would then also have people arguing against this lesswrong post, which seems kinda dumb. Also, do you really want homework on every post? “Read the entire sequences before this post to understand this best”. Ehhhhhhh
Maybe I’m going about this all wrong and I should just be ignoring all the comments that don’t engage with the hypothetical, because *I’m* not discussing the hypothetical either, I’m countering people who aren’t discussing it. So I’ve made the comment section EVEN LESS about the actual post. I don’t know, ignoring a huge chunk of comments who just don’t get the post feels weird though.
I agree that people who do this are annoying. Though too many thought experiments are also annoying. In my experience the reason that the type of person who refuses to answer a hypothetical does that is because they interpret it as a "gotcha" type question that is being asked by the asker for the purpose of pinning them down and then lording it over them by explaining why they're wrong or inferior in some manner. I don't think that's always, or even often, the intention of the asker, but that is how the reluctant askee tends to view it.
Particularly when it's explicitly being set up for maximum psychological discomfort.
Yeah, this. Scott has a good track record of not using antagonistic thought experiments, but elsewhere online, that's not the case. It makes sense some commenters would apply a general purpose cached response to not indulge it.
Agreed. I think moral reasoning of this sort is worthless at convincing others and the methods of analytic moral philosophy in general are not good. But failing to engage with hypotheticals (including explaining why they're irrelevant or undermotivated) is like a guy with a big club saying "Me no understand why engage in 'abstract' reasoning, I just hit you with club."
I have 15,000 USD in credit card debt, near-term plans for marriage and children, plus a genetic predisposition towards cancer that may be passed on to my children. To what degree is my money my own?
If I always knew that I would have obligations to my family, and I could never fully predict my capacity to meet those obligations, then how should I think about the money that I gave to effective altruism when I was younger?
I think Scott is correct, to a first approximation, and that there is virtue in buying bed nets, but no obligation. I also agree with the comment about bed nets being a rare example where we can be confident that we're doing a good thing, despite how distant the problem is from our everyday lives.
Even so, I think the rhetoric around effective altruism is sometimes a bit... I don't know, maybe tone deaf or something? Because lots of people aren't great with money, and when you ask them to tithe 10% they're going to think of all the times they couldn't afford to help a loved one, and they're going to extrapolate that into the future, and they might decide that virtue is poor compensation for an increased risk of struggling to feed their future children, or whatever.
And, yeah, it's too easy to use this as an excuse for being lazy and not demonstrating virtue. And people who aren't good with money could sometimes be better, with a bit more effort. I really do think that Scott is mostly right, here. But it also feels like he's missing something important.
If it helps, I think you're accidental collateral damage. He's mostly talking to the types of person who say they wouldn't donate to charity even if they had means, and brag about this fact. I think insofar as EA is concerned, you should put on your own parachute first. There's no point in ruining your life when someone else can do similarly without ruin.
In theory if we lived in a world where everyone was already donating a lot, yeah maybe that would be a concern (but probably not since societal effort has nonlinear effects). But we're very far from that world, and I think it's wrong to think we are.
In my tradition, when it comes to questions of the heart, even one penny from a struggling widow is more than all the billions the hyper rich donate. There are important questions about how to help people in need, but that is the landscape, and we are travelers on it. Your heart isn't defined by magnitude of impact. It's beauty is captured in that wordless prayer that others might be better off than you.
You are among the richest people to have ever lived.
Don’t believe me? Look here: https://www.givingwhatwecan.org/how-rich-am-i
> To what degree is my money my own?
How about, to the degree that it’s not from luck or circumstance? Including of course the country and time of your birth.
Doesn’t this produce a paradox though? If I believe that as a median American I’m expected to donate $32,000 per year to reduce myself to the global median of $8,000 why would I bother working at all?
You could of course conclude that not only is every $ I fail to donate a theft from the global poor but that every hour I fail to work is an equivalent theft. Except, even as an EA sympathetic person that feels ridiculous.
I’m not sure there’s a clean solution to this whole paradox, but I’m also not sure there’s a clean above model works well.
I already knew that. And I think you missed the point of my question, which is "To what degree does my money belong to me, as opposed to my family? And how will I justify my altruism to my family if I find myself unable to pay their medical bills in the future?"
Your philosophy would be more convincing if I could reasonably expect strangers to altruistically help me if I find myself in need, such that the selflessness isn't so unilateral. But Scott already pointed out that I can do no such thing, and at best I can pretend. But by pretending, I would be gambling with the lives of the people I love.
I know it's possible to be okay with that. I might even agree that it's noble. But they say that the strong do what they can while the weak suffer what they must, and there's as much truth in that as there is in effective altruism. The world isn't just. And I have neither the obligation nor the inclination to be a saint.
Objections this doesn't address:
- It costs much more money to actually save a life through charity ($4000+) than to save a drowning child in these thought experiments.
- A natural morality IMO is that helping people is never obligatory; it's supererogatory: nice but not obligatory. The only obligation is not to hurt others. Saving drowning children, and saving lives through charity, are equally good, but neither is obligatory. (Going further, of course there's the nihilist view that the concept of morality makes no sense; also various other moral non-cognitivist views, like that moral intuition is much like a taste, and not something that can be objectively correct or incorrect, so there's no reason to expect it to be consistent.)
Or they can abandon the intuition that saving the drowning child is obligatory; or abandon the meta-intuition that analogous situations ought to be treated analogously, and instead rely on their intuition in each case separately. Or of course abandon the intuition that charity is not obligatory, as the EAs would like them. If we find a contradiction between people's different moral intuitions, that doesn't tell which one should be abandoned.
Why would you ever abandon that intuition? It seems I would rather take that as axiomatic, and then work backwards from it.
I don't feel a pressing need to resolve metaethics wrt charity. And ultimately all of this discussion can easily be discounted as so much sophistry, but dear god let me not get to a point where I'm ever thinking that saving a drowning child is not obligatory lest it undermine my courage to act
In the original drowning child experiment, you are wearing an expensive suit at the time.
I've never encountered a hypothetical that wasn't profoundly unrealistic. You have put Bob in a world where there is no local government. No police force to call and no local volunteer fire department. There's no local Red Cross to lobby for a solution to the Death River. Delete these real aspects of the real world and there will be an abundance of problems that are too big for one guy in one cabin next to one river to solve.
Also. If Bob is so isolated in his cabin, where are the 35 kids floating down the river coming from, all of them still alive? You also omitted the impact of their grieving parents who would be lobbying and suing local government for its failure to take action.
This hypothetical is as unrealistic as speculation about the sex lives of worms on Pluto.
They aren't unrealistic. They are realistic but rare.
I'd describe it as farcical but directionally relevant to some elements of reality. There are indeed many people we can help, and to the one who suffers, it hardly matters if they're in our backyard or not, so long as either way we don't help them. And to the person in the cabin, it hardly matters if people suffer and die nearby or far, so long as they've resolved to ignore them. There is no governance covering both people, yes. That part is accurate to real life, for international aid at least. But they are indeed sharing a world, with capacity to save or be saved.
Realistic? Name a city or county in the United States that would not react to the fact that 35 children drowned every single day in a river within their boundaries.
Rare? Name one point in the history where such an event has taken place. These events are not rare—they are nonexistent.
> Delete these real aspects of the real world
Isn't DOGE in the process of doing just that?
> No police force to call and no local fire department
That would be the lifeguard, though. Somebody has to pay for all that, so they still have to address the question of whether they will.
A privately hired lifeguard has nothing in common with a publicly funded fire department, which exists in every city and every suburban county in the United States. In the United States, citizens are not expected to deal with large scale problems such as 35 kids floating down the river every single day.
Incorrect, at least on a few levels. Many if not most small municipalities throughout American history rely on volunteer and citizen-based fire departments.
Likewise, as a society that arguably aspires to maximal freedom in the US Constitution, Americans are very much expected to try to deal with large-scale problems through private mechanisms, and in fact our charities and charitable giving as a percent of our GDP, and in absolute dollar terms, are world-leading by a significant margin.
Also just to tie this back to a percentage figure, Americans give 1.47% of GDP to charity, roughly double second place New Zealand, at least per ChatGPT and assuming no hallucinations, and the dollar figures are two orders of magnitude larger.
Charitable Donations as a Percentage of GDP:
According to a 2016 report by CAF:
United States: Charitable giving constituted 1.44% of its GDP, totaling approximately $258.5 billion. (Wikipedia)
New Zealand: Donations amounted to 0.79% of GDP, around $1.1 billion.
Canada: Charitable contributions were 0.77% of GDP, equating to $12.4 billion.
First of all, modern volunteer fires departments almost always receive some municipal funds for equipment, building, et cetera. By happenstance, I once attended a community fundraiser for a volunteer fire department. That well attended community fundraiser brought the communal into the equation.
Second of all, charities represent a communal effort to solve communal problems. They are one gigantic step beyond a single man at a cabin expected to deal with an obviously communal problem,
The hypothetical also assumes that the parents of these hundreds of dying children will play no role in trying to achieve a communal solution to this communal problem.
While I don't disagree, you still haven't demonstrated that receiving municipal funds are a better solution. That it tends to evolve in that direction is meaningful, but might actually be an antipattern co-opted by Moloch rent seekers for example.
This is devils' advocacy - I have no experience in this area, but I do think that the volunteers throughout our history deserve due credit and may have done a great job relative to our current system.
I think volunteerism is great too. However, I distinguish individual efforts from communal volunteerism. Believe it or not, I had a dear friend who managed to reach out his unusually long arm and grab a kid just before he landed in a raging flooded river. This event happened once in his lifetime.
Communal volunteerism realizes that there is a recurring problem in society that could be helped by a self-organized group standing ready to help. The Red Cross, founded in the 19th century, served as a template for many of these organizations.
BTW, the hypothetical guy in the cabin could have built a weir 1/2 mile up the river from his cabin. A weir is a flow through dam used for catching fish. This one would be designed for catching drowning kids.
I view this from an evolutionary perspective. We are hard wired to react vigorously to events that happen in our presence, such as a lion attacking a child. We have no evolutionary wiring to respond to events outside of our personal experience. It's hard to go against evolutionary hardwiring.
Hypotheticals aren't meant to be realistic, they're meant to isolate one part of a logic chain so you can discuss it without those other factors being in consideration. People have a bad habit in debate of switching between arguments when they're losing. The hypothetical forces you to focus only on one specific part of your reasoning so you can either account for the faults the other person thinks it has, or admit that it's flawed and abandon it. It's a technique for reaching agreement.
A quick (sigh) hypothetical example:
"If you have $500 you must give it to me. I would use $500 to make rent. If I make rent I will be able to use my shower. I will look presentable for an interview. I will then get this high-paying job and I can pay you back. Also you do not need the $500. You have an emergency savings fund already and no immediate expenses."
Most of the time an argument would look like this:
"Yeah, dude I'm saving the money in case some big expense comes up, sorry."
"I need that money way more than you though!"
"It's not fair to ask me to pay your expenses."
"I'm going to get a job and then you won't have to!"
"Are you sure you'll get the job?"
"Yeah! So it's just this one time!"
"What if you don't?"
"I will but even if I don't I still need to make rent and you won't miss the money!"
"That's not the point though."
"Yes it is!"
etc.
See how the money requester is bouncing back and forth between his argument that he should get the money because he's going to get the job and pay it back, and his argument that you have an obligation to give him money because you don't immediately need it? You can isolate one of those arguments with a hypothetical:
"Let's say tomorrow the perfect candidate walks into their office and takes the job before you have a chance to have your interview. This guy's exactly who they're looking for, immaculately qualified, and they hire him on the spot. So you don't get the job. You can't pay me back. Do you *still* think I should give you the money?"
That's unlikely to happen, but now you can talk about *just* the argument that you owe this dude money because you have more, without having him constantly try to jump back to the job thing.
Of course, this assumes good faith and a willingness to actually explore the argument together. In this particular case you'd be better served by just saying "no" and leaving. But in this blog's community, there is significant interest in getting to the bottom of why we hold certain beliefs, and if those beliefs are wrong, changing them.
Scott wants to know the answer to a specific question: "There is an argument that you are only responsible for saving people you naturally encounter in day-to-day life. Is it wrong to structure your life in such a way that you don't naturally encounter people in urgent need? Do you have a duty to save people you choose not to be in proximity to?" He's well aware that someone else might save them, that the situation would likely be resolved without your influence, and that there are other considerations. He's trying to force you to set those considerations aside for the time being so you can focus on establishing your views on that one question in particular.
I can tell you from personal experience that in Uganda at least there is indeed no police or fire department to call.
Well, this hypothetical makes sense in Uganda, then.
> But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
Again my moral intuition straightforwardly disagrees with something! It says that not rescuing the kid in the hometown afterward is very excusable. I wonder why, though?
> I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out.
That feels like it resonates with my intuition, except my intuition *also* considers the kid in the hometown to be part of the same chain. Maybe by having done so much thankless positive moral work in the past, you've accumulated a massive credit that diminishes any moral necessity for you to take active steps like that in the future.
I notice if I swap the locations, so that it's going into the woods that results in seeing one drowning child while being in the city results in seeing them every day, this feels different—and it also feels closer to real-world situations that immediately come to mind. Maybe my mental image assumes the city is more densely populated? The more people there are who could help, the less each one is obligated to. Bystander effect is bad only when it doesn't come out to having at least a sufficient number of responders for the tradeoff to work out (though the usual presentation of bystander effect implies that it doesn't, so assuming that's true, applying the counterbias is still morally good). I bet there's something in here about comparing how many of these situations one agent can *reasonably expect to encounter* with how many that agent can handle before it reaches a certain burden threshold, then also dividing by the number of agents available in some way. This seems to extend partially across counterfactuals; being by chance the only person there at the time in the city feels different from being by chance the only person there at the time in the forest.
Or maybe it's because the drowning kids in the forest part of the water come *from* the city in the first place that affects it? Aha—that seems to make a larger difference! If I present the image of the protagonist moving from the forest to a *different*, more “normal” city, and *then* failing to rescue a random drowning child, it seems much worse than the original situation, though still not as bad as if the exceptional situation were being presented to the person for the first time, probably due to the credit in the meantime in some way over-discharging their responsibility. But if I assume the second city is structurally and socially indistinguishable from the first one, only with different individuals and with its stream of drowning kids passing by a different cabin that the protagonist never goes to, then it stops being so different again. So it's not due to the entanglement as such.
Maybe if the people in the city are already in an equilibrium where they aren't stopping the outflow of drowning kids, then it's supererogatory to climb too far above the average and compromise the agent's ability to engage in normal resource allocation (including competition) with the other people in the city—if I remove the important business meeting and change it to going out for drinks with a friend, not doing the rescue feels much worse than before, and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar, then the difference disappears again. This feels independent of the intention structure behind the city not saving the stream of drowning kids to me. If the city people are using those resources for something better, the protagonist should probably join in; if the city people are squandering their resources, the protagonist is not obliged to unique levels of self-sacrifice, though it would be morally good to try to convince the city people to do things differently.
Of course, possibly my moral intuition just treats the rescues as more supererogatory than most people's treat them as to begin with, too…
> and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar,
Bring the rescued kid along with you to the bar, hand 'em to the bouncer saying something like "she's your problem now," then tell that semi-estranged friend that if they don't believe your excuse for being an hour late, and covered in kelp, they can ask said bouncer.
Make a rule that the children you rescue as they pass by the cabin have to help you rescue future children who pass by. After rescuing a few kids, you've got a whole team who can rescue later kids without your help.
+1000. Building scalable solutions to problems almost literally IS civilization.
Scott, and I say this with love, has lost the thread here.
Like the point of the thought experiment was to draw attention to the parallels between potential actions, their costs and their benefit. These examples seem like they are meant to precisely deconstruct those parallels to identify quantum morality and bootstrap a new utilitarianism. It's putting way too much significance on a very particular thought experiment.
But even taken on its face, the answer to wthe apparent contradiction is obvious, right? Why the cost of a ruined suit feels worth a kids life to most people, but donating the same amount of money to save a life via charity is unappealing. It's not that lifes value is morally contingent on distance, or future discounting or causality or any of that. It's that when you save a drowning kid, you get a HUGE benefit as you are now the guy who saved a drowning kid. The costs might be the same, but the benefits are not remotely equal. I guarantee I get my suit money back in free drinks at a bar, and a maybe a GoFundMe , probably before the day is over.
And even if you want to cut the obvious social benefits out of the picture, self perception matters. Personally saving a kid is tangible and rewarding. Donating money to a charity is always undercut by doubts. Such as "is this money actually making a difference?” And ”why am I morally obligated to take on a financial burden that has been empirically rejected by the majority of the population?"
Because saving a drowning child is assumed to reveal something about the rescuer's moral character, while bragging about charity is viewed as performative. The former might be dubious, but the latter is usually correct.
Alternatively: because mentioning "one can save a child through charity" is an implicit moral attack on those who not given charity, whereas saving a drowning child is not such an attack because few of us will ever encounter a drowning child (and most people probably think they would save the drowning child if they ever encountered one).
Something that gets missed is that saving a drowning child is "heroic." Why is it heroic? Because even though most people say they would do it, in practice they don't. The hero takes action to gain social status. In the case of drowning children floating by a cabin, there's no heroism, since the person rescuing them consistently is now engaged in a hobby instead of a single act of will.
Also, people do move away to places like Martha's Vineyard for exactly this reason, to avoid the plebs complaining about them.
Interesting but these are all similar to the “all Cretans are liars; I’m a Cretan” self-reference trap (paradox).
Insert one word “predict”, as in “do you predict that you…” and the trap is closed because it clarifies that this is an endless regression paradox at “heart” IMHO.
All future statements are predictions, and it is self-referential. The gives away is the reference to the “37th child…”
There is no “moral choice” in infinite moral regress as there is no truth value to the statement “this statement is false”
Language is a semantic system which is incomplete under Gödel’s Incompleteness Theorems.
It’s relatively easy to generate paradoxes.
Morality is a personal, aesthetic choice.
Angelic superintelligences are like Chuck Schumer’s “the Baileys,” a mouthpiece for moral ventriloquism.
We are here on Earth by accident. Nothing happens when you die. We should take personal responsibility for our own moral sense. Share yours if that seems like the right thing to do, and express it in the way that seems right.
There won’t ever be an authority or conclusive argument, because we’re an assembly of talking atoms hallucinating a human experience. That is beautiful and strange. I think helping other sentient beings, and helping them at mass scale, dispassionately and in ways that seem plausibly highly effective, is lovely.
Is this a reasonable moral intuition?
If faced with a drowning child and you are the only person who can help it, you have a 100% obligation to save it. I'll leave open the question of what exactly a 100% obligation means, but it's obviously pretty strong.
If there's there's a drowning child and you're one of five people (all equally qualified in lifesaving) standing around, you have a 20% obligation to save the child.
If there's a child who's going to die of malaria and you're one of eight billion people on the planet who could save it, then you have a one over eight billion obligation to do so.
If there's millions of children going to die of something, and you're one of billions of people on the planet who can do something about it then you have something on the order of a 0.1% obligation to do something. That's not nothing, but it's a lot weaker than the obligation where there was a 1-1 dying child to capable adult ratio.
If there are 5 people, and for some reason the other 4 are jerks who refuse to help drowning children, is your obligation now 100% because your choice is guaranteed to decide the child's fate?
If 3 of them are jerks, is your obligation 50%? And can you make it 0% by becoming a jerk yourself, so that the remaining non-jerk now has 100% responsibility? Or is obligation not additive in this way, and if not, does that suggest a more complicated calculation is necessary?
Only if I can reduce my obligation to zero by declaring myself a jerk.
I think the angelic original position makes a lot of sense BUT the key is the pool of people who you might become.
If the pool includes not just humans, but livestock also, you would become vegans out of moral consideration.
If you and all your angelic buddies are limited to a pool of rich people, that also shifts the calculus.
Said another way, the consideration is that you can become anyone within a particular pool so you want to be fair towards everyone in the pool.
In reality, the closest thing to pools we have are our family, community, country, with each getting a greater level of care.
A zen story I once heard:
Thousands of fish have washed up on the beach. A monk is walking along the beach throwing fish into the ocean. A villager laughs at him — “You'll never save all the fish!”.
The monk answers as he throws another fish, “No, but this fish really appreciates it.”
Same reason police catch speeding drivers. When one asks 'Why me? Everyone else is speeding also!' the response is 'when I go fishing, I don't expect to catch EVERY fish!'.
Something something donuts and coffee
The Alice and Bob thought experiment feels rather strongly off to me. Yes, certainly a person who fails to do wrong due to lack of opportunity might be a worse person than another who actually does wrong. That seems to be a fine way to summarize moral luck, and we would expect eternal judgment to control for moral luck. So far so good. You then conclude that, therefore, moral luck is fake and the worse person must have actually done worse things.
I'm confused by the absence of positives to moral obligations. If someone fulfilled a moral obligation, I love and trust them more, my estimation of something like their honor or their heroism goes up. If someone who was not obliged to does the exact same thing, I "only" like them more, I don't get the same sense that they're worthy of my trust.
It's trite, but I think "moral challenges" is closer to how I feel about these dreadful scenarios. I want to be someone who handles his challenges well, to be heroic, this seems to me more primal and real than these framings of actions in these dreadful scenarios as attempts to avoid blame, in a way that I don't think reducing everything into a single dimension of heroic versus blameworthy can quite capture.
I largely agree -- I soften it down to invitations for stuff like this, because when it comes to helping strangers, it's not quite a challenge, as people avoid the question at no cost to themselves. But there is an invitation: care for the least among you. Some people see it as a beautiful thing to go to, and some do not. I largely chalk the latter up to their unbelievably poor taste hahaha
One of the things that disturbs me is: good intentions are often counterproductive. You mention Africa, and that is a whole basket of great examples.
Feed Africa! Save starving children! Great intentions, only: among other deleterious effects, free food drives local farmers out of business, which leads to more starving children, which leads to more well-meant (but counterproductive) aid.
Medical aid! Reduce infant mortality! Only, without cultural change and reduced birth rates, the population explodes, driving warfare, resulting in millions of deaths.
Far too much aid focuses on the short-term benefits, without considering the long-term consequences. Why? Because, cynically, a lot of aid work is really about making those "helping" feel benevolent, rather than actually making long-term, sustainable improvements.
In practice, reducing infant mortality leads rather directly to decreases in overall fertility; parents who can count on their children surviving to grow up don't have to have "extra" children to make sure that enough of them survive to become adults.
So give to the aid which has good long-term effects. As you are probably aware, most of the effort of the "effective altruist" movement is directed at figuring out which interventions are in fact helpful overall and which not. Follow them.
III. Alice and Bob: If Bob saves more sick people, he'll get exploited by needy "friends" into not-being-rich.
Whatever moral theory you use, it needs to be sustainable; any donation is a door open for unscrupulous agents to cause or simulate suffering in order to extract aid from you.
Not that Bob should do less - while it certainly would be coherent, it doesn't sound very moral - but I think the optimal point between saving no one and saving everyone is heavily skewed downward from maximal efficiency of lives saved per dollar because of this, even when you're altruistic.
For Alice, that applies too, though in a different ways: while there might be less expectations by scammers for her to spend everything if she spends a little, there will still be such expectations in her own opinion; this is common enough that many posts on EAF warn against burning out.
The drowning children thing never worked for me.
In the classic example you are the only passerby where the child drowning so you are the only one who can save them so according to the Copenhagen Interpretation of Ethics it's your duty to do so.
If we change the situation and on the river bank you find the child's parents and their uncles, lifeguards, firefighters, the local council and the mayor himself (they all have expensive manors overlooking the river) what duty do you have to save the child? According to the Copenhagen Interpretation of Ethics it's their duty to do so because they are first by the river but it's also their legal responsibility to care for the child. In the order of duty of saving the child you are at the end of the list.
Given the blatant purpose for which the drowning child thought experiment was created in the first place I propose the White Saviour Corollary to the Copenhagen Interpretation of Ethics:
"The Copenhagen Interpretation of Ethics says that if you're a westerner when you observe or interact with a Third World problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more."
I hate this just like I hate trolley problem questions. They all have the same stupid property that you’re asked to exercise moral intuitions in a situation so impossibly contrived that the assumptions you must accept unquestioningly could only hold in a world controlled by an evil experimenter who is perfectly capable of saving all the people he wants you to make choices between. The obvious timeless meta-strategy is to refuse to cooperate with all such experiments even hypothetically.
The SHORT VERSION of ALL these controversies is “many people suffer in poverty and medical risk because they live in bad societies but it is always and only the responsibility of better off people in good societies to help them within the system by donating as much as they can bear without ever challenging the system itself”.
In this example, you are not supposed to question the assumption that no one would save 24 children’s’ lives a day at the trivial cost of $50 per life saved by hiring $50/hr lifeguards to work shifts, that somehow no collective action is possible either to raise money for such a spectacularly good cost/benefit ratio charity or to get the relevant public maintenance department a slight budget increase to fix the drowning hazard, and only isolated random unlucky individuals have any power to rescue these children and must do so without ever making the public aware of their absurd plight and trying to fix things that way.
If you want to point to some current country with a shitty government which blocks or steals efforts to ameliorate its citizens’ suffering, don’t give me a f***ing trolley problem to show me I must donate in clever ways to get past the government, save your efforts of persuasion for the bad government, or for good governments elsewhere or the UN to intervene.
Yes. As I said in my comment, a lot of "aid" is there to make the contributors feel benevolent.
There is simply no point to providing aid to cultures and countries where it cannot have positive, long-term effects, and likely just supports the existing, malicious system.
Yes, this is a totally fine argument, but has already conceded that if that were not the case, you would have an obligation to provide aid!
And now we can argue whether the aid in question actually has the deleterious properties you assert.
Or you can say, not so fast! I think the aid is useless, but even if it weren't, I'd still have no obligation to provide it!, and then we can focus on that claim by isolating it from practical considerations, in the hopes that if we can resolve this narrow claim, we can return to discussing the actual effectiveness.
That's the point of the thought experiments, to resolve whether your objection is actually 1. aid is ineffective or 2. even if aid could be effective I'd have no reason to give, or both, or some third thing.
But by isolating which claim we're debating, we can stay focused and not jump around between 1 and 2 depending on how you feel the argument is going.
If your objection is truly 1., and that's why you find these hypotheticals useless, then great! But you better be prepared to do battle on the grounds of "is this aid effective" and not retreat to, "why am I giving aid to Africa when my neighbour..." as many others do.
Again, the point of hypotheticals is not to be realistic. It's to remove factors from consideration to laser focus on a single question. Generally in argument, it's easy to hop around in your logic chain or among the multiple reasons you believe something. This means you'll never change how you think because if you start "losing" on one point you don't concede "okay that one is bad, I will no longer use it" but instead hop to other considerations. These "contrived" situations are meant to keep you from hopping from "is it right to kill a person to save others?" to "Okay I think I can get out of this choice completely by doing X,Y,Z." Whether X, Y, and Z turn out to be flawed or not, you still never had to answer that question, which means that you still don't have clarity on your beliefs and under what circumstances you would change them.
Of course it seems like most people manage it even within the hypothetical, so opposed they are to systematically thinking through their beliefs one point at a time.
Certainty must be a factor here. Both the certainty of help being needed (not from general knowledge of world-poverty, but direct perceptional evidence) and the certainty of the help reaching its intended target make the responsibility more real and vice versa.
I think you're taking too seriously people's rationalizations as reasoning. The transparently correct moral reasoning is much more likely to be rooted in relationships -- you ought to love and care for drowning kids, so you simply *would*, if you have the right relationship with them, simply save every one you can. Which means, yes, in the modern world, living with minimal expenses and donating literally everything you earn to EA charities is a solid choice, one of the things you'd just naturally choose to do.
I believe role ethics (as seen in Stoicism - the concept is even more central in Confucianism, but I am less well read on that philosophy) offers a good descriptive account(*), more so than Copenhagen and declining marginal utility. The idea is that a person has moral obligations depending on what roles they play in the society. Some of those roles we choose (like a vocation, which may present obligations like a captain of a ship in the case of emergency seeing to the rescue of all passengers and crew even at risk of going down with the ship, or parenthood with equally strong obligations to our children), some of them are thrusts upon us by circumstances (like duty to rescue in the capacity of a fellow citizen in a position to do so), and some come down to us as part of being a human being living in the cosmopolis (helping those in need even if they are far off).
Now, there are and can be situations where virtues and obligations pull us in different directions and resolving the conflict can be a nontrivial task (indeed, the Stoics deny the existence of a sage - someone with perfect moral knowledge), but as a practical matter it is not unreasonable to establish a rough hierarchy where your obligations towards fellow citizens outweigh those towards other members of the cosmopolis, which is why you save the drowning child. That however doesn't mean those other obligations disappear: Alice, after doing her duty as a daughter, a neighbor, a member in her work community, perhaps as a mother, etc, would in fact have these obligations. Stoic perspective isn't prescriptive in the sense of saying outright "10% of her net income", but chances are a virtuous person in position of such prosperity would likely (but not necessarily: personal luxury clearly wouldn't be something a virtuous person would choose, but she might e.g. feel to be in an unique position to advance democracy domestically, and use her good fortunes to that cause instead) be moved to act.
* And I dare say prescriptive, insofar as they help make sense of our moral intuitions, which I'm inclined to treat as foundational.
This seems like a sane take. It accounts for both our intuition that Alice really ought to do her duty to her daughter, neighbor, and work community before engaging in telescopic charity, but it also accounts for the intuition that we really ought to help drowning children sometimes, even when they are very far away. It also accounts for the case where, Alice, living in the cabin is called away from saving children by a more moderate need of her own child--the baby's got colic, and needs tending--and we don't find Alice to be totally reprehensible.
The question I always have though about role ethics or Confucian derived ideas of li is how to work out what, in this increasingly atomized cosmopolis in which we all live, I--as an unattached, single person--my roles or li are. It also seems like there is some tension between my intuition that I ought to be pretty free: I believe in divorce, in allowing minors to be emancipated, in--less extremely--moving away from one's hometown community, breaking up with friends, business partners, employers. Those freedoms seem a little bit in tension with an ethics derived from my roles.
The mechanism by which new social roles are constructed is being pushed far beyond its historical peak throughput rate, and facing corresponding clogs and breakdowns.
The argument for the copinhagin interpretation would be that instead of optimizing for getting into heaven you should optomize for "being an empathetic person"
The person who sees dozens of drowning children every day and dozent save them becomes desensitized to drowning people and losses their capacity for empathy.
The person who lives far away from the drowning people doesn't.
That's unfair moral luck, but that's the truth.
I will always remember when I saw a rabbi I knew giving a dollar to a bigger at a wedding. I asked the rabbi why he did that. Doesn't he already give 10percent of his money to charity?
He said yes and indeed the single dollar wouldn't help the bigger that much. On the other hand giving nothing will train him to become un-empathetic. He quotes a biblical verse saying something like "when someone needs money you should be unable to refuse" or something like that (לא תוכל להתעלם) or something...not sure the exact context.
Of course he still gave the 10 percent as well. He didn't think you could completly remove moral obligations by not touching them. Just that the slight additional obligation you have twords situations you touch versus those you don't relates to self empathy training.
Seems like you'd train empathy even more effectively the more you help. The 10% of it makes little sense in comparison to, if you see a person is in need you shouldn't be able to refuse. Isn't donating, and then having a great abundance left and deciding not to continue helping, a form of refusal?
It is a form of refusal but it psychologicly doesn't feel like a form of refusal as saliently. So in terms of training your own psychology it doesn't have the same effect.
>what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it?
Nobody has found one as of yet, and not for the lack of trying. I'm pretty sure that there isn't a universally appealing one to be found, even if outright psychopaths aren't included in "us".
As far as I can tell, the moral principle that the average person operates on is "do what's expected from you to maintain respectability", with everything beyond that being strictly supererogatory. This is of course a meta-rule, with actual standards greatly differing throughout time and space, but I doubt that you'll do better at this level of generality.
About: The 37th vs 1st child aspect.
Never thought about that one before, but it feels natural instantly.
1.) it will help to more evenly distribute the "giving effort" among those that can help (in some situation, does not need to be the same situation)
2.) in real life the odds of success, the utility towards the saved one and the cost for the saving one - all have some uncertainty. Having a degressive motivation to help leads to a more stable equilibrium between "should have helped but did not" and "irrationally exhausts herself and thereby risks the tribe's success"
3.) limits the personal darwian disadvantage agaist freeriders in the own tribe (even if all drowning children are from a different tribe).
We are all in the cabin. The cabin is global. Despite the billions given in foreign aid and charity for decades, there are still people living in poverty and dying by preventable disease etc all over the world. And any given money is likely to go to corrupt state heads. And regardless, the only countries with a booming population are all in africa, despite the low quality of life, and we hardly need more Africans surviving.
"billions" is a superficially big number, but a tiny fraction of world GDP and collectively dwarfed by what countries spend on weapons to kill and maim other human beings
The global incidence of extreme poverty is in fact going steadily down, which is what we'd expect to see if those charitable interventions were working.
That is overwhelmingly due to economic growth and tech advances. For example due to Fritz haber there are billions more people alive than there would otherwise be. Individual charitable aid is so tiny it wouldnt even fix poverty within the nations of those that give it let alone fix the world.
Of course that doesnt mean we shouldn't all give more. But what is optimal? If we had that much more focus on charity we wouldn't have had such focus on the growth that allowed for the people to exist who need that charity.
Tech and charity aren't mutually exclusive causal factors. Pretty sure we didn't get modern insecticide-infused bed nets by banging rocks together, or having some enlightened saint pray for an otherworldly being to restock the warehouse. Norman Borlaug's dwarf wheat was paid for partly by the Rockefeller Foundation and is right up there with the Haber process in terms of ending food scarcity. Lots of EA charities are evaluated in terms of the economic growth they unlock.
Is the magical city of drowning children called "Omelas"?
The city is an amazing place to live in, with its high tech infrastructure and endless energy, but if you aren't living there, you might not have heard in your childhood that the city is directly powered by drowning children. Every child you rescue from the river, lost to the system, reduces life satisfaction of millions of citizens by about 1%. The children need to drown for the city to be as great as it is.
You're allowed to leave the city after your schoolteacher takes you all on a field trip to the hydro station, but it's only allowed until you turn 18, and if you walk away you might slip and fall into the river.
I have updated towards the following view on the drowning child thought experiment:
1. The current social norms are that you have to save the drowning child but don't have to save children in Africa
2. The current social norms are wrong in the sense that the idea moral ethics disagree with them, but not in the direction you would think. According to ideal moral ethics, the solution isn't that you have to save children in Africa. It's that you don't have to save the drowning child either.
2. Obviously, I would save a drowning child. But not because I have to due to the ideal moral ethics. It's because of a combination of my personal preferences together with the currently existing social norms.
It’s a big problem because society runs off people doing these nice one offs but then other people demand that you have to universalize it and then they stop doing nice things.
I basically agree with the conclusion (except for the "not letting capitalism collapse" part, of course, it will collapse anyway, as is in its nature, the point is to dismantle it in a way that does not tear society and its ethical foundations apart).
But the way you arrived at it was... wild. I was (figuratively) screaming at my screen halfway through the essay that if your hypothetical scenario for moral obligation involves an immensely potent institution sending dying people specifically your way, the obvious way to deflect moral blame, and the only natural reaction honestly, is asking why it doesn't simply use but a tiny fraction of its capabilities to save them itself instead.
Basically, there's a crucial difference between a chaotic, one-off event where you're unpredictably finding yourself as the only (or one of the only) person capable of intervening, and a systemic, predictable event where the system is either:
- alien and incomprehensible - e.g. you can save a drowning rabbit, but have no way of averting the way the nature works. Here, Copenhagen stands, you have no moral responsibility to act, but once you act, you should accept the follow up of caring after the rabbit you've taken out of the circle-of-life loop.
- local and familiar - in which case all notions of individual responsibility are just distractions, the only meaningful course of action is pushing for systemic change.
Why do you think that capitalism will collapse?
The most orthodox Marxist crisis theory, based on the tendency of the rate of profit to fall, depends on a labor of theory of value, which seems squiffy.
A revolution of the workers brought on by an intolerable intensification of their exploitation seems less likely in consumer capitalism, where the leisure of the workers, their consumption is an important part of the system.
I'm not opposed to the idea, but I guess I don't necessarily believe that the inherent structural contradictions of capital will lead to its collapse inevitably.
I personally have a different idea of why capitalism will collapse, which goes Capitalism > AI > ASI > Dominance by beings capable of central planning.
Interesting! That's something I've thought about as well, but I see that as a relatively hopeful outcome, a possibility, not something that for capitalism is "in its nature."
I suspect definitional and/or scope mismatch here. To clarify - I am (arguably) not a Marxist, more specifically - not a stereotypical Marxist operating on a grand theory of history where capitalism is a discrete stage of civilizational progress, to be replaced with a more advanced one. I am not saying that people will stop trading or [whatever you think capitalism means]. I am saying that societies based on capitalist principles are bound to experience a collapse - which alone, not saying much, all societies in history eventually collapse, due to more general social dynamics - and, more strongly, that in their specific case capitalism is the very mechanism bringing about this collapse, and as such, not worth trying to preserve.
(In a vastly simplified way, how this plays out is - wealth, and thus power, concentrates in fewer and fewer hands, eventually creating a financier class at top of the society. Because it's measured in money, the more concentrated wealth is, the more decoupled it is from [whatever we want from productive economy]. This eventually makes various destructive (war) or unproductive (financial instruments) "investments" more profitable than expanding productive capital, this makes economy stall, this makes the ruling class push everyone else into debt to maintain their profit levels, this further immiserates the population and bankrupts the state, causing a collapse, and this makes the financiers pack up their money and escape for greener pastures, while the regular citizens of once-wealthy society are left cleaning up their mess.)
(It happened several times in history, and we can observe it happening once again right now, in real time, in the so-called first world economic block, with US as the epicenter.)
Hm, interesting! So you don't have a stadial theory of history, but you believe that any society is eventually going to collapse, which in capitalism will come from too concentrated wealth becoming separated from what's really productive in an economy.
You gave one optimistic view of how AI could disrupt this, but couldn't it be possible that AI (-->ASI, as you put it) allows the financier class to keep consolidating forever? If they have something that makes more and more of the stuff they want, automates more and more of their economy: can't we just end up being cut out of the picture, with not much of a mess to clean up in the first place?
I agree with most of that, but I think it's solvable without a collapse. There are two different things financiers are doing, which the current system (and to some extent even the financiers themselves) mostly fails to distinguish: innovation, and extraction. Making the overall pie bigger vs. capturing a bigger piece of a fixed pie for yourself. Building capital vs. buying land to collect rent.
A Georgist land value tax skips straight to the end of that inevitable "extraction concentrates wealth" progression, compiles the resulting power into something publicly accountable. UBI keeps the local pastures green. Financiers who want more profit have to accept the risks of innovation, deliver products that some other successful business and/or the average UBI recipient is willing to pay for.
Georgist LVT in a modern society also needs to tax other sources of rent extraction like the network effects that keep Facebook afloat, and the theory there is not nearly as clear unfortunately.
There's an existing regulatory framework for public utilities.
Not Just a Thought Experiment
I live in a third world country with first world money. My wife runs a small animal rescue out of our property and sponsors animal neutering around the city. The country is also very poor and most humans here are in what most westerners would consider a very bad situation. I spend most of my time and money on research to cure aging since I believe aging is the number one cause of pain and suffering in the world and I believe curing it is within reach.
My wife had to almost entirely stop her animal rescue efforts because it got to the point where it was consuming all of her, and much of my, time to the point where it was significantly interfering with our lives. She has friends in the rescue community who have completely ruined their lives over it, living in squalid conditions because their money all goes to feeding an army of dogs and cats. She also used to volunteer to help homeless children, but that similarly was consuming her life.
Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming. You can very easily ruin your own life trying to save everyone one at a time, and from an optimization standpoint there are way bigger bangs for your bucks.
Funny? Story (and what prompted me to comment):
About 10 minutes after I finished reading this article I went out to walk around my property, which I haven't done in about a month or so. I got about 20 steps out when I ran into a terrified abandoned kitten. Since I have over-active mirror neurons I was essentially forced to pick it up and rescue it before returning to look for its mother or any siblings in the area. This is my punishment for leaving my walled garden that blocks out the sound of screaming kittens and starving children.
If I owned the property next to the child river I know exactly what I would do. I would build a wall and soundproof my house so I could ignore the problem, knowing that there are better uses of my time and money than completely ruining my life saving 24 children a day. I would strive to avoid going outside, but when I had to I would almost certainly rescue another child before hurriedly returning to my walled garden.
I don't have a solution to the moral dilemma, only a solution to my mirror neurons that make me do irrational things. I suspect that most humans with functioning mirror neurons are not applying some complicated moral philosophy, they are just responding the way they were evolved to respond when they witness pain and suffering of others. Now that we can witness and impact things further away, these mirror neurons can easily be overwhelmed and cease to function as a good solution on their own.
That’s why big social problems should be left to organizations, not individuals. If a social worker for a non profit gets overwhelmed, they can quit their job and go back home. No one will think less of themselves. But if you have to live with it, it becomes more difficult.
Organized social programs often get co-opted by Moloch, doing more harm than good. I don't have a solution to this, but I am unconvinced that organizations should be assumed to inherently do better than free will and law of large numbers.
Instead their net effect may be to breed cycles of dependency that rob populations of agency and personal responsibility and prevent progress at scale.
Clearly some cultural philosophies do better than others.
You seem to have a weird idea of what Moloch is. Moloch isn't just "everything bad", Moloch is when a Nash equilibrium of independent actors is an ethical or welfare race to the bottom. It's inherently harder to avoid a bad Nash equilibrium the more players there are in the game.
The original definition of Moloch, per Wikipedia is:
The fire god of the Ammonites in Canaan, to whom human sacrifices were offered; Molech. Also applied figuratively
This is almost precisely the example I give of a centralized power structure that destroys lives at scale.
I appreciate your definition, but these two things are not the same as far as I understand it, and yes, I use the Wikipedia version of Moloch as one of my mental models.
Around here the definition in common use comes from this post on Scott's old blog: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Thank you, and yes, I read that one a while ago but lost track of the specifics.
For the sake of argument, let's call my 'Moloch' Fred. Given that I have Fred here, does it make the point more worth considering? If we have both Fred and Moloch, my thesis is that my concerns as stated are still valid.
Why do you think that there isn't a equilibrium here?
* People seeking power and money are attracted to running/operating organizations with lots of power/money.
* People wanting to do good aren't as motivated to run large organizations with lots of power/money (it is miserable work).
* Any large social endeavor that has sufficient power or money to enact meaningful change will eventually be dominated by those seeking power and money, rather than those seeking to do good?
* Eventually any large social endeavor will no longer do good.
Note: The above is just a high level hand-wavy illustration, but I am not convinced that we cannot rule out Moloch here.
Large social endeavors do sometimes (often?) end up having all or the majority of their power and money skimmed off by insiders for their own use. However, this doesn't seem to happen 100% of the time. I mean, if principal-agent problems were this bad, corporations wouldn't function at all either and the economy would be reduced to humans acting as individual agents. (And corporations do also fall into ruin by this mechanism.) So I don't think this makes the argument that the optimum amount of non-market interventions is zero and we should just accept that Moloch wins everything always.
Are there examples of powerful/rich charitable organizations not running into this problem over the long term?
I can definitely believe that this can be delayed for quite a while if you have a strong ethos aligned leader, but eventually they need to be replaced and each time replacement happens you may not get lucky with the new pick. This would suggest that while Moloch will eventually win, there is a period of time between now and that inevitability where things can be good/useful. Perhaps there is value in accepting an inevitable fate if there are positive things that come of it along the way? Or perhaps we can try to find ways to shut things down once Moloch shows up?
> Organized social programs often get co-opted by Moloch, doing more harm than good.
That's a common meme, but I don't think it's always, or even often the case. I've personally worked with a organized social program of massive size, funded by a large network of individual donors, doing amazingly good work over decades.
> Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming.
huh. It's like Siddhartha Gautama's origin story, but in reverse.
(I'm not trying to be sardonic or condescending. It's just an interesting observation about how one man's modus ponens is another man's modus tollens.)
I wasn't aware of that origin story, but you are right it is exactly the opposite of my solution! Perhaps there is some optimal amount of exposure to pain and suffering one needs in order to take appropriate action to address it while not also being debilitated by it?
I have my own strong opinions on the Drowning Child experiment, though I've withheld them, so far. Because about a month ago, I basically said I'd tackle it on my own substack, and then procrastinated since I'm such a lazy layabout. Nonetheless, I'm confident that I've got it figured out, in a way that solves several other ethical questions and adds up to normality. At the highest level, it's tied together by expectation management. Which dovetails with Friston and Bayes Theorem. But it's a lot to explain, and a bit woo.
For now, I'll just say that ethics is basically social-engineering. "Actual" engineering disciplines (e.g. Civil Engineering) recognize that reality imposes hard constraints to what you can reasonably accomplish with the resources you have. If you wanna launch yourself to the moon with nothing but a coke bottle and a bag of mentos, you're not gonna make it. Likewise, any ethical system that says "donate literally 100% of your money to charity, such that your own person dies of starvation in a matter of weeks" is not sustainable. It's not sustainable individually, and it's not sustainable en mass. You have to consider how things interact on a local level. Which is yet another reason why Utilitiarism Is Bananas (TM). I.e. part of the appeal of Utilitarianism is the abstracting/universalizing/agglomerating instinct to shove all the particularities of a scenario conveniently under the rug. I.e. spherical cow syndrome [0].
"Let's save the shrimp! And build a dyson sphere, while we're at it!"
How?
"details... details... "
Sure, if you have a utility function that values the well-being of others, then perhaps you want to give a portion of your resources to charity. But you have to balance it with keeping your own system running. Both physically, and psychologically. And the burden you take upon yourself should, at most, equal the amount of stress you can handle. Which varies from person to person. E.g. commenter Dan Megill mentions [1] that the charity-doctors who denied themselves coke/chocolate/internet didn't have the mental fortitude to stay in Africa. To me, what this indicates is that Peter Singer psy-oped them into taking on more suffering than they could personally handle, and they buckled. Materials Science 101: different materials have different stress-strain curves [2]. Materials are not all created equal.
In sum, there's no objectively optimal amount of exposure. It entirely depends on what you can handle, and what you're willing to handle. I.e. it's subjective. I.e. the weight of the cross you bear is between you and your god.
[0] https://en.wikipedia.org/wiki/Spherical_cow
[1] https://www.astralcodexten.com/p/more-drowning-children/comment/102299122
[2] https://en.wikipedia.org/wiki/Yield_(engineering)
I thought this was mostly not about wanting to have your life ruined / being exploited? Which are closely related
If I see a drowning kid saving it would be inconvenient but it's not going to ruin my life. And this is partially because there is no Big Seamstress pushing kids into water to ruin people clothes and send their stock to the moon.
If a Megacity is skimping on lifeguard and creating a situation where I can save those kids (and also somehow there is no other person upstream or downstream willing to help them?) saving all the kids would ruin my life (I can't even sleep properly). And related to that is that the city is saving relatively little money (cost of 24/7 lifeguard so if you paid them SWE salaries maybe 2 M$/year) and getting a rather huge benefit. If they value life at 1M$ they get from me 24*365*1M$ = 8760M$ / year.
If a city spends millions to build a dam but they count on my free labor to extract billions per year from my unpaid labour then yeah they are kind of exploiting me.
With charities situation is trickier - if a charity is really saving lives at low cost then it would be great to donate to it (some amount, you probably don't want to ruin your life). But you're donating money so it's harder to verify that's actually happening. And people have obvious incentive (getting your money) to misrepresent the situation to you (so you should be more worried about your money actually being used in the way they claim to).
And people setting up a generic argument which, if accepted, would oblige you to potentially ruin your life (by giving away all your money) while potentially benefiting them (by directing money in their general direction) is extra suspicious.
I don't want to say that one should never give money to charity. I agree with what I think was the original premise of EA (find out what charities are effective and how effective exactly they are and what money you want to use charitably try to use effectively). But it's really hard!
I think most of the criticisms of these extreme life/death hypotheticals as teaching tools or thought experiments are valid, but I'll add another one I think is pretty important.
There never seems to be any scope for local or medium-scale collective action. It's always you, alone, with the power of life/death, or else Angels of Heaven making grand population-wide agreements. For example:
What if in the cabin by the river of children scenario, you found three "roommates" to live there with you (presumably all doing laptop work-from-home etc.) and you all did six-hour shifts as lifeguards, saving all the children? And why does it take a "lobbyist" to possibly get Omelas to do something about the drowning children problem? Ever see "Frankenstein"? You could pick up a drowned kid and walk to City Hall with her body, that might get some attention.
And in reality, that is how things usually improve in human society. Some local group takes the initiative to starting reducing harms and improving people's lives. Sometimes they grow and found Pennsylvania. Mostly they gain status and attention and can have their work helped by or taken up by governments (assuming any govt is not COMPLETELY corrupt.) Global-level co-ordination only happens in like The Silmarrilion -- here on earth see previous about TOTAL corruption.
BR
PS --- The second-most cynical take here would be to get an EPA ruling classifying the stream of children as an illegal "discharge" into a public waterway, getting an injunction against the city for polluting the river with dead kids, which honestly at the rate of one/hr would be some VERY significant contamation indeed, even if you had some horrible mutant species of carrion-eating beavers downstream building their dams out of small human bones.
A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you with them? This quickly become completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown.
"A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you with them? This quickly become completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown."
Even more cynical take: sell 'em to child sex/labour trafficking gangs. The parents obviously don't care, since they all continue to live in a city that allows children to fall into waterways and get swept downriver to drown unless a random stranger saves them. The megacity even more obviously doesn't care about what happens to its minor citizens. The problem is set up such that if you, personally, singlehandedly don't intervene the kids will drown. So clearly nobody is looking for them or dealing with them or trying to prevent the drowning. We don't even know if the bodies are collected for burial or if that too is left to whoever is downstream when the corpses wash ashore.
So who is going to miss one more (or 168 more) 'dead' kids? Profit!
That was my immediate thought, but my comment was already too long. Maybe the Clinton Foundation could send a van a couple times a day to scoop up this free resource. Or establish a Jonestown style mini-nation someplace and train your infinite steam of children who owe your their lives to be an invincible army. So many possibilities.
Exactly! You now have this never-ending (it would seem) source of free labour literally floating down the river to you. For the megacity, this is only 999th on the list of "we're in really deep doo-doo now", so it could even be argued that you are giving the children a better life (how bad must life in the megacity be, if there are 998 *worse* things than drowning kids on the hour every hour all year round?) no matter where you send them or what you do with them.
Yes, an army of infinitely-replenishing children would be great, kind of like Hector (?) and the Dragon's Teeth in Greek mythology. But for maximum Dark Lord chaos points, I think my own army of mutant carrion-eating beavers would slightly edge it out. Add in some weaponized raccoons and it's "Halo 4: The River Strikes Back"
Split the difference: hand the rescued kids over to your cadre of mad scientists (you *do* have a cadre of mad scientists, don't you?) as experimental material to help create the next generation of mutant carrion-eating beavers and weaponized raccoons! After all, the carrion for the beavers has to come from somewhere, right, and where better to ensure a steady supply than the spare parts from the lab experiments?
Train the kids to train the vulture-beavers to construct the weir to simplify the rescue process, then treat the overall situation's rank among that megacity's problems like a scoreboard for your raiders to climb.
https://viruscomix.com/page464.html
The observer will destroy reality so no problem right
>People love trying to find holes in the drowning child thought experiment. [..] So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.
Alternative way of framing this article:
> People love trying to find holes in the drowning child thought experiment (DCTE) counter-arguments. So allow me to present even more contrived scenarios that are not the DCTE, and apply the DCTE counter-arguments to those instead and see how they fail.
My takeaway is that you're nerd-sniping yourself by employing ever more sophisticated arguments to a minimally sophisticated "I'll know it when I see it" approach to life in general and the DCTE in particular that most people have.
My intuition goes towards : accident vs systemic issues.
In IT, we have a saying : "your lack of planning is not my emergency".
Similar vibes here. Why is there a drowning child in front of me ? Is it an unfortunate accident, or the predictable and logical consequences of a really poor system ? I feel absolutely no responsibility for the second. In this example :
> Every time a child falls into any of the megacity’s streams, lakes, or rivers, they get swept away and flow past your cabin; there’s a new drowning child every hour or so.
Not my problem. Fix. Your. Damn. System. Or don’t — at this point I don’t care.
The point of the hypothetical is that this isn't really a *source* of drowning children, it's just that all the children that would normally drown in that big city end up in one place.
Still not my problem: what if my cabin is situated such that before the children coming down river reach it, they all get swallowed up by a sinkhole? So I don't even have any drowning children to save, but they're still drowning. The city is the source of the drowning children, let them sort out why one child every hour falls into their damn rivers and lakes.
The absence in general of a Duty to Rescue stems from the principle that one shouldn't be obliged to put oneself at risk on behalf of a stranger to whom one has no allegiance or duty of care, and that might not be the same risk as the stranger's predicament (assuming one didn't cause the latter).
Even with the example of the kid drowning in a puddle, who is to say there isn't a bare mains electrical cable under the water that would electrocute a rescuer as soon as they touched the water or the child?
There's also the snow shovelling example, in which if you public-spiritedly clear the snow from the sidewalk adjoining your dwelling (a sort of anticipatory rescue) and a passer by slips on the patch you cleared then they can sue you for creating the potential hazard, which they could not had they slipped on the uncleared snow!
Or you could pull someone from a crashed car that was in imminent danger of catching fire or being rammed by another vehicle, but in the process break their dislocated neck so they ended up paralyzed for life, again risking a lawsuit.
I gotta be honest with you fam: all that such posts do is make me steel my heart and resolve to not rescue local drowning children either, in the interest of fairness. One man's modus ponens is another's modus tollens and all that.
What you're trying to do here is to erase the concept of supererogatory duty. It's inherently subjective and unquantifiable so every time you say "well, you don't have to do it, but objectively you should donate exactly 12.7% of your income to African orphans, but you don't have to do it," you're not fooling anyone, you just converted that opportunity for charity to ordinary duty.
So here's an alternative you have not even considered: I decide that I have a duty to rescue my own drowning children, I decide that I have a duty to rescue my neighbors' drowning children for reciprocal reasons (and rescuing any drowning child in sight is merely a heuristic in service of that goal), but rescuing African drowning children is entirely supererogatory, I might do it when and how I feel like it, but it's not my obligation that can be objectively quantified. This solves all your "river of death" problems without any mental gymnastics required.
In conclusion: https://knowyourmeme.com/photos/2996252-moral-circles-heatmap
It sounds like you are perhaps saying that everything is commensurable? Except like it’s some kind of gotcha?
https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/
Kind of, but not really.
What I'm getting at is, when someone proposes that from assumptions A, B, and C follows conclusion D, you can agree that it does logically follow, but disagree that D is factually true and instead reject some of the original assumptions.
So when someone proposes that I have a moral duty to save a drowning child in front of me, and that the life of a drowning child in Africa has an equivalent moral worth, I can disagree with their conclusion (that I must be miserable because I donate all my money to malaria nets and that still doesn't make a perceptible dent in African suffering) and declare that no, for my purposes the children are not fungible, and also that I don't have a duty to save the local child. What's going to happen if I don't, will Peter Singer put me into an ethical prison? Even usual police, if I were to look at them as the source of morality, would leave me alone in most jurisdictions, or all if I tell them that I don't swim very well.
Then someone might ask me, won't I feel terrible watching the child drown? Sure, *that's* why I'll try to save it, but I don't feel particularly terrible about knowing that thousands of children drown in Africa because I *don't see them*, and why would I try to rewire myself about that? Reaching https://en.wikipedia.org/wiki/Reflective_equilibrium goes both ways and nothing about the process suggests that the end result will be maximally altruistic. So I can choose to retain my normal human reactions to suffering that I can see and alleviate, but harden my heart against infinite suffering elsewhere.
Similarly, wouldn't I get ostracized by the people of my town for letting the child drown, because we have an understanding about saving each other's children? Sure, and that's another good reason to save the local child that doesn't generalize to saving African children because Africans won't help me with anything and my fellow townspeople won't be upset about me not helping Africans without expecting reciprocity.
Then Scott says, all right, but watch this, and adds a bunch of different epicycles, which he then invalidates with more convoluted thought experiments and replaces with further epicycles, but I still find the end result unsatisfactory.
The solution proposed here has a fatal flaw: Rawls' Veil of Ignorance doesn't actually exist. I understand that it would be very nice if it existed, it would let us ground utilitarian ethics pretty soundly, but unfortunately it's completely made up.
The solution in the post you linked, to donate 10% of your income to charity, is also kind of incomplete, because it still tries to make an utilitarian argument, but then suddenly forgets all its principles and says that it's OK to donate 10% because most people donate less, so you can just do that and sleep soundly. Why?
What I think is if not outright missing (upon rereading both posts) then at least not properly articulated is the distinction between ordinary duty and supererogatory duty, such as donating to charity. Ordinary duty, and I'm willing to walk back my objection and include saving a local drowning child, you are obligated to fulfill. Anything above and beyond that you can do if you want, but it's not mandatory.
And that's the crucial part that allows you to have arbitrarily whimsical justifications, such as: really I'm just satisfying my desire to make the world better, so donating exactly 10% of my income scratches my itch, poor Africans are welcome. Or you can imagine that there's a God that will reward you with a place in heaven, or that you entered an esoteric compact before your angelic soul incorporated in a body, or whatever that satisfies your desire to feel like a nice person without having too much troublesome thorny edge cases.
There's something charming about an aethist solving ethical dilemmas by recommending we behave as if an angelic coalition is still in effect.
Good post. And I approve of thoughtfully engaging with the substantive details of all ideas. But throughout the post I couldn't help but constantly think "OK, but the main point is that the Copenhagen Intepretation of Ethics has nothing to recommend it as a prescriptive theory. That seems to be the bigger issue."
The person saving the children washing out of the mega city is obviously acting extremely immorally.
"this is only #999 on their list of causes of death"
By saving those children they are neglecting 998 higher priority interventions. For every child saved from drowning they are willfully killing a much higher number of children.
The drowning child saver is a monster by Scott's reckoning.
I feel you're neglecting neglectedness as a consideration, and tractability seems like a straightforward consideration to add.
I am very fond of Scott, but these sort of thought experiments just feel meaningless to me. This is probably a function of different starting premises. I have been reading and reflecting a lot of moral philosophy in the last years, and the place I've (nor dogmatically) arrived at is some type of non-realist contractualism, which means questions of 'ethical' behavior are basically meaningless. There are contracts (formal and informal) one submits to when part of a society, and beyond them there are preferences where people are unconstrained to change them except if they want to. Morality is just a strategically useful evolutionary strategy (both natural and cultural) that allows individuals and the groups they belong to to prosper.
Tbh I find such discussions rather tiresome. Moral intuition evolved not to help us make better moral choices, but to improve our chances of reproduction. Thus the inherent moral framework build into every human is "be as selfish as possible as long as it does not reduce your social standing in your tribe of max. 50 people".
So either go and donate most of your money for mosquito nets for African children or admit that you are not trying to maximize morality in your decisions.
I can easily admit that I like to eat fast food even though I know it's not healthy because it triggers evolved cravings and it's easier than to make the right choices. Moral frameworks like the Copenhagen theory are the intelectual equivalent of saying "if you eat with friends, you only have to count the calories that you eat more than everyone else". It's bullshit and you know it. Stop rationalizing poor decisions and own them, if nothing else.
Actually by relocating to the drowning child cabin you are given a wondrous opportunity to be in the top .01% of life-savers historically and you should really be taking advantage of it, unless you are retiring to your study to do irreplaceable work on AI safety or malaria eradication.
Yeah I kept thinking about this. Perhaps the broader world of the hypothetical is extremely strange -- certainly our glimpse is -- but it would be absurd for anyone to be so sure of their work that the cabin isn't an amazing opportunity. Even the few (less than a hundred) people who have saved more lives could not have had this level of certainty in their impact. The real question is, how does the cabin not get bid up in price by people most willing to take the opportunity? Then it should be allocated to someone who would use it to the max and have low opportunity costs, I would think. You only need like ten sane/normal/good people in the entire world to get a fairly good outcome in that situation, assuming the context isn't saturated with even better opportunities.
I think you're all missing the obvious solution: drain the river. Kids can't drown if there's no river to drown in, now can they?
I think this is the pretty obvious problem with the whole post. It's an appeal to ethical intuitions, but ethical intuitions are formed by experience and interaction with the world as it exists. In a world without gravity, my horror at seeing a child falling off a cliff would be entirely inappropriate. So the extreme hypotheticals don't "isolate the variables," they just trigger the realization that "this world is different."
I strongly suspect that there is no such thing as a compete and internally consistent moral framework. The obsession that EA types have with trying to come up with a set of moral axioms that can be mapped to all situations is pointless.
Moral frameworks are an emergent property of society which make them effectively determined by consensus, weighted by status and proximity. The problem is that the individual judgements that coalesce into a consensus are not derived from some abstract fundamental set of principles isolated from reality, they're determined by countless factors that can't be formalized or predicted.
For instance..
I could walk past a drowning child and suffer reputation damage
A priest could walk past in a deeply religious society and declare that the child is the devil and so deserves to drown.
The child could be drowning in a holy river that is not to be touched so a passerby is praised for their virtue in ignoring the child and respecting the river gods.
An exceptionally charismatic individual could start a cult in ancient Rome around not saving drowning children because Neptune demands sacrifices. This cult outcompetes Christianity and becomes the foundation of all of western civilization. That passerby is not evil, he's just religious and very orthodox.
an even more charismatic individual could convince an entire nation to adopt a set of beliefs within which saving a drowning child is dysgenic because a healthy child would know how to swim.
You can keep going on and on..
It's better to adopt the general consensus of the society within which you exist or if you insist on changing the status quo, play the status game to increase you and your group's influence on the consensus. Trying to come up with a logical framework is not it because that's not what normal people are basing their judgements on.
ISTM this model is failing to capture all the variables involved. Why on earth /wouldn't/ we be obligated to save the hourly drowning child, forever?
We have a habit of excluding physical and mental health from these calculations. The wet suit and missed lunch don't matter, but dustspecks in the eye, forever, with no prospect of an end, add up.
Consider a model where people generate a finite amount of some resource each day. Let's call it "copium" for convenience. Self-maintenance costs some variable amount of the resource over time. This amount varies randomly, in accordance with some distribution. You can approximate some upper and lower bounds on how much copium you're likely to need to get through the day, but you can't know ahead of time. All decisions and actions you perform cost copium. If you incur a copium cost when you have no copium left, you take permanent damage. If you accumulate enough damage, you die.
This brings the difference between the one-off case and the hourly case into focus: the one-off scenario is worth the copium spend, but in the ongoing scenario you predictably die unless you can make it fit your copium budget.
The rule then becomes - help however many people you sustainably can, and no more than that unless you'd also be willing to sacrifice yourself for them in more immediate ways (the answer to "would you be willing to die to save hundreds of children?", for many people, isn't "no"!)
In the moment, though, when forced to actually decide, the difference between whether you act like a Singer or a Sociopathic Jerk is down to the amount of copium you have left for the day.
Another part of the difference why the cabin in the woods (and saving lives through charity on the other side of the world) feels different from the other examples is that millions or billions of other people could act to prevent the deaths (even if they don’t).
Whilst if a child is drowning in front of you only you can stop them dying.
The other element that I would add is reciprocal moral obligations. We all have a different sets of moral obligations to our direct family, extended family, friends, neighbours, town, country, humanity etc.
Whilst it might be great if everyone in the world treated everyone else like family, it would quickly fall apart to defection.
In most nice societies, you have a moral obligation to help someone whose life is in danger if you are one of the few people who can help and it is relatively simple to do so. This is a great thing and has, along with other social ties, taken hundreds (or thousands) of years to create. To prevent moral hazard (and also defection) it doesn’t really apply if someone has repeatedly got themselves into the situation - it is about extraordinary aid when something goes accidentally wrong.
This explains why in the cabin situation I feel morally mixed - the population of the megacity know this is happening and have clearly chosen to let it happen despite it being easily preventable. However I feel bad for the children (they haven’t made that decision) and at the time of their drowning I am the only one who could save them. But it wouldn’t be simple to save all of them.
This also explains why I don’t naturally feel much of a moral obligation to give to effective charities saving lives on the other side of the world. They are not in any of the communities I have varying degrees of moral obligation to (other than humanity as a whole). Furthermore those with much stronger moral obligations to those people are clearly failing them (although this varies a bit by country). There are also many others who could save them.
The big question is whether this notion of reciprocal moral obligations to differing extents to different communities we are part of, that most of us who have been brought up in ‘nice’ circumstances feel, is logically correct? I think Scott would say they are all very well, but that we should fulfill our obligations to them and then focus on how we can do the most good for humanity as a whole, from a broadly utilitarian perspective. Clearly in a direct impact sense this is correct, but thinking through secondary impacts I’m less sure.
Most directly and specifically around charitable donations from wealthier people in western democracies, if people in a country feel like the successful aren’t giving back to them and the country, this undermines support for the capitalist policies that enable the wealth to be generated in the first place.
More broadly I don’t really think you can just ‘fulfill’ your obligations to those other communities. Part of those obligations are that the more you have, the more you give back (e.g. a rich person donates to the school they attended, if you have more free time than your siblings you are expected to help out your ageing grandparents more etc). So choosing to help humanity as a whole is a form of defection (e.g. if rich people decide to switch their philanthropy to donating to charities abroad rather than at home) from these moral obligations in some sense.
By defecting from these ties and norms you are causing damage to the social fabric (or ‘social trust’ in economic terms) that ultimately created that wealth. In most ‘not nice’ countries the only reciprocal moral obligations that are adhered to are those around the extended family. A key part of why rich countries are rich is that they created strong moral responsibilities to wider communities, particularly your town, your country and other institutions within your country. Rather than a government official being obligated to cut their cousin in, in these countries they are morally obligated not to.
Personally think this is part of the reason for Trump and the populist swing in recent years. ‘Elites’ increasingly have a morality focused on utilitarianism or helping those most sidelined/discriminated against, whilst ordinary people see it more in terms of these communities, which from their perspective the elites are defecting from. For instance in the past the rich people in a town facing issues might have worked together to sort them out, whilst now they are probably more likely to just leave. Or the kids of the rich and powerful would had have a decent chance of being in the military (the death rate of the British aristocracy in WW1 was incredibly high), so ordinary people were more likely to trust elites on foreign policy decisions.
These norms and obligations only work if everyone feels like everyone else feels them and mostly acts on them (rather than being for ‘suckers’), and messing with something that is such a key part of what makes societies stable, rich and ‘nice’ is very dangerous.
This is a huge and underrated driver of NIMBYism. People are willing to destroy housing affordability and massively reduce total prosperity if it means they are more insolated from drowning children.
It’s really about the number of children drowning. One, yes you can save. Many, you cannot.
Singer - the original author of the thought experiment - argues that to impoverish ourselves to the extent that we ourselves are close to impoverishment but not quite there, as the only moral solution.
There are multiple drowning children though, not one. I imagine myself on a boat on the sea or lake with the drowning children. I can rescue the children. However, I can myself drown by capsizing my boat if I take on too many.
I am also in danger of capsizing in future if I take less than capacity, it’s not clear how many but it becomes risky to take on anything near the limit, as the boat is rickety and storms occasionally occur. People who have not maintained their boats have drowned.
All around me, though, are much bigger boats towering over my boat, these boats either don’t help the drowning children, or take on a number of children - which while admittedly more than me is nowhere near their carrying capacity - and the large boats are at no danger of sinking in future storm damage either.
Also on the lake are military who I fund with taxes who are actively drowning the children. Just jumping in and drowning a few every so often. I can’t stop this. It’s for geopolitical reasons.
None of this means you shouldn’t help the drowning children but I wouldn’t worry about relative morality here either. Rescue some, but not to the capacity of the boat, not to put the boat in danger.
I deleted my old blog, but the essay is still around:
https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics
Sorry about the dead links though :-/
From the subreddit:
I think morality originally started, and still functions for most people, for two things:
a) To pressure friends and strangers around you into helping you and not harming you, and
b) To signal to friends and strangers around you that you're the type of person who'll help and not harm people around you, so that you're worth cultivating as a friend
This has naturally resulted in all sorts of incoherent prescriptions, because to best accomplish those goals, you'll want to say selflessness is an ultimate virtue. But the real goal of moral prescriptions isn't selfless altruism, it's to benefit yourself. And it works out that way because behaviors that aren't beneficial will die out and not spread.
But everything got confused when philosophers, priests, and other big thinkers got involved and took the incoherent moral prescriptions too literally, and tried to resolve all the contradictions in a consistent manner.
There's a reason why you help a kid you pass by drowning, and not a starving African child. It's because you'd want your neighbor to help your kid in such a situation so you tell everyone saving local drowning kids is a necessity, and it's because you want to signal you're a good person who can be trusted in a coalition. The African kid's parent is likely in no position to ever help your kid, and there's such an endless amount of African kids to help that pouring your resources into the cause will outweigh any benefits of good reputation you gain.
Our moral expectations are also based on what we can actually get away with expecting our friends to do. If my child falls into the river, I can expect my friend to save my child, because that's relatively low cost to my friend, high benefit to me. If my child falls into the river 12 times a day, it'll be harder to find a friend who thinks my loyalty is worth diving into the river 12 times a day. If I can't actually get a friend who meets my moral standards, then there's no point in having those moral standards.
Essentially ethics makes sense when centered around a community but we in the west don’t really have communities anymore. Hence the incoherent philosophy.
I've never really seen this version of ethical egoism that's like "it's Moral Mazes all the way down" espoused other than here. Although now that I think of it, Rawlsianism basically assumes that this is what would happen without deliberation behind the Veil of Ignorance, and nobody but maybe Mormons believes the deliberation actually happens. Nonetheless I don't think this is plausible on a human level, even if it probably is from a gene's-eye view, because sympathy and guilt are things. If you suffer for ignoring others' well-being, then others' well-being is at least sometimes more-than-instrumentally important to you.
I subscribe to this as an explanatory theory but not a prescriptive one. Sometimes you have to be better than the soulless, brainless and hopeless forces that made you, because you do have a soul, a brain and a hope. Sometimes you see that you're being puppeted and you think that's the best of all possible worlds.
The most important part of bravery as a virtue isn't that you have ridiculous amounts of it for situations that rarely happen, but that you have enough of it to face the parts of you that are imperfect and acknowledge that you are imperfect, so that fixes and changes can happen at all. And you can't argue someone into being brave. I don't know how else to explain why else people flinch away from being better than what they were designed for.
Yes - and even more so. "Morality" is not a rule system, it is a mishmash of loose heuristics that evolved to help us cooperate in small, local groups because cooperating groups outcompete non-cooperating groups.
With this in mind, I think most seemingly paradoxical moral intuitions make sense. It is all about what someone who saw or heard about some or all of what you did or did not do might be able to infer about your motivations (all in the context of a group of 20-30 people with only eyes, ears, and a theory of mind as evaluation tools).
Contorted moral scenarios are engineered to exploit the incoherencies of our moral system heuristics just like optical illusions show the incoherence of our visual system heuristics. These inconsistencies persisted because they were not relevant in our evolutionary past. There were neither Penrose Triangles nor robotic surgeons out on the savanna.
Right, I don't think Scott or others of an EA persuasion would dispute this, or any of the similar statements made above.
The point is that, we don't live in the savannah anymore, but we still live in networks of people that approximate the social structures we evolved with, and technology and culture put us in some kind of proximity to people who are distant from us, yet whom we also can't help but apply our moral instincts to.
Since our intuitions can't help but be incoherent, but we still want to live in a cooperating group (or to put it in the language of the comment you're responding to, we still want to signal to friends and strangers that we should be helped and not harmed), we have to build something coherent enough to achieve these aims, built out of our evolved moral intuitions.
That's necessarily gonna mean making tradeoffs between different moral intuitions, hence the convoluted thought experiments to figure out what exactly our moral intuitions are, and how we trade them off against each other.
From a prescriptivist standpoint, there won't come a time when it will *not* be more moral to save the next drowning baby-sutured-to-a-famous-violinist floating from the magical post-industrial bubble city filled with burning fertility clinics and infinite trolley switches or whatever the shit. The person who donates 11% of his wealth to mosquito nets is better than the person who donates 10%.
But I'm sorry, I can't do it. I'm flawed. I don't live for others as much as I could. I'm too attached to comfort. I (roughly) tithe but I could give more if I didn't pay for the Internet connection that I'm using to post this. I could be volunteering instead of posting.
Perhaps someday I'll grow in selflessness and I'll get to the point where I love radically, for the whole world. I think that's the call of Christianity in a fallen world I just hope that until I get there, my sins of omission aren't considered too great.
You raise a good point: what if, in order to save the drowning child, they have to be plugged in to your circulatory system for the next nine months (falling into this river automatically gives them kidney disease as well as risk of drowning)?
Are we then permitted to refuse to have attached the drowning child? Cage match between Singer and Thomson!
The fact that Singer, who is fine with killing toddlers, is taken seriously for his "ethical intuitions" is also a tell.
I have been writing about aphantasia and hyperphantasia and what it might mean, for these thought experiments, if you actually *see* the drowning child or the terrible thing that needed intervention. Our reactions are not wholly philosophical. https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-the-sixth-sense
I feel bad admonishing Scott for not being universal enough when there's this much opposition in the comments to having even slightly more rational ethics. And I realise he has to take into account that you can't expect anyone to be fully rational or alturisitc. But if you really take Rawl's veil seriously the conclusion should obviously be world communism for all sentient beings.
If Earth was populated with perfectly rational, perfectly allutristic Rawlsians they wouldn't just be donating 10% to bed nets, they'd also be spending like 25% of world gdp building social housing for wild mice etc.
>How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse.
I feel like like the 10% level of alturism Scott's proposing is way lower than could be justified by constraints on maintaning economic growth and he's really considering psychological opostion to being more alturistic than anything theoretical here. The top rate of tax used to 90% for a lot of places in the post war period, and modern gdp per capita is about 20x above subsistane level. The theorietically ideal Ralwsians could easily be spending 50%+ of gdp on charitable redistribution imo.
>I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.
Considering the posibilty of defections seems to defeat the point of the though experiment since that's no longer behind the veil.
The idea that it’s more “rational” to base your ethics on an obligation to spend all your resources saving millions of strangers around the world is preposterous.
Scott has been consistently saying that it's not an obligation to give anything away, and that you should do it if you have enough of a super ego to do it. Does that change your opinion by what the OP means by "more rational ethics"?
If you take the drowning child argument seriously, then the logically conclusion is you should never spend money on leisure goods. If I spend 10% of my money on charity but I go out to eat once a week, I’m letting a child drown. Every EA, including Scott, knows this but they also know that if you tell people that, then they’re just not going to be persuaded by the arguments. So they make up ad hoc reasons why we really only have to spend limited amounts on charity to get more people on board.
No EA has ever had a good answer to the demandingness objection. They just try to downplay it.
The obvious answer to the demandingness objection is that caving into demandingness results in less charity as people burn out or lose opportunities to leverage themselves. Since you want to ensure that there is globally, across all time more charity than there is otherwise, and that people go into spiraling dynamics as a matter of fact about moral imperatives, you set a guideline that ensures the spiral doesn't happen and then go on your way.
It's known that if you try and break this guideline down you will predictably burn out and cease being an EA, so it is strongly encouraged to follow it (I personally don't follow it but I can see why this is valuable).
And also if you go out to eat once a week., the accounting is something like 1/500 to 1/50 drowning children at most. Which, yeah it isn't great. But if it's what it takes to continue donating, it's closer to an amortization of the donation cost rather than a binary donate / not donation opportunity. So assuming you donate 5500 every year, that goes to something like 11000 per life, which is still pretty damn good considering you're spending a hundred bucks per time you eat out.
If it were true that humans were perfect executors of willpower and don't need creature comforts, then I don't see how the demandingness objection applies, because they don't deeply want the luxury anyway. But they're not, so your morality system has to work with squishy humans.
Ok let’s back up a second. Let’s say I give 10% of my income to charity. I decide to go out and eat a few times a week. Over a certain amount of time, is this not morally equivalent to letting a child drown?
Yes, but if you stop donating, that's letting *more* children drown.
The counterfactual isn't between a world where you're morally perfect and you now, it's between a world where you feel miserable and end up not saving children or a world where you spend more on yourself and end up saving more children, albeit at reduced efficiency compared to a perfect moral agent, which you are not.
I mean it's more rational in that it attempts to base ethics on reason and not the various evolved, sub-conscious biases that humans default to like egoism, tribalism, kin-preference etc. The same way rationalism attempts to rise above evolved psychological biases to have better epistemics. Which then leads to a more universalist view that entails spending your resources on saving stangers.
I'm sure you don't think that the kind of reasoning that Scott's doing here is less rational than the normal, "common sense", intuitive morality that most people have.
The idea that in coming up with normative prescriptions we should ignore human nature is patently absurd. Should ants and bees “rise above” their nature and be more individualistic?
Looks like it's going to be difficult for me to say anything that's not preposterous or absurd. But, yes, if ants and bees were instinctively committing acts that would look unethical under conscious reflection it would be better for them to be more rational beings.
Wow I did not think you were going to bite that bullet because it’s so crazy. If bees were more individualistic, they would all die. Is that more “rational” to you?
Depends on how much more, and other factors. Moderate dose of individualism and forethought might be exactly what's needed to break out of an https://en.wikipedia.org/wiki/Ant_mill
But what about the suffering shrimp, Brandon?
Well shrimp are obviously higher on the moral hierarchy so we should put all our resources towards them.
Singer: Billions for shrimp, but kill them babies!
Billions of shrimp must die so that whales may live.
And if you eat the shrimp afterwards, people are much less judgmental than if you eat the kids 😁
Rawls was in favor of national borders, and opposed immigration as letting countries irresponsibly increase their own populations at the expense of other countries.
I'm willing to discuss saving children's lives if you are willing to discuss the budget for childhood education. Both are socialized spending to do a first order good that should serve greater second order goods. Yet when discussing saving children's lives, most people's brains are hijacked and only focus on the first order good. But with childhood education, most people can look at the second order goods and trade-offs and consequences and see that the initial investment on the first order good is not all we should consider. I consider people that are unwilling to discuss childhood education and only talk about saving child's lives to be engaging in (perhaps unintentional) emotional blackmail
Have I got good news for you about deworming! One of the primary ways deworming helps (on top of not having <censored censored censored> painfully happening to children) is that they attend way more days of school because they are not in pain, and even though *technically* it looks like a low value intervention with large error bars, the fact that those error bars lie in the direction of big educational flow through effects was why GiveWell thought they were good. Sorry I can't seem to find the analysis with about 5 minutes of work (because they are no longer the top charity) but https://blog.givewell.org/2024/08/21/raffles-deworming-and-statistics/ covers the reasoning pretty well given a skim on my part.
With respect to people's actual behavior, there is an article in the Journal of Political Economy by Andreoni, Rao, and Trachtman, "Avoiding the Ask: A Field Experiment on Altruism, Empathy, and Charitable Giving" (June 2017, https://www.journals.uchicago.edu/doi/10.1086/691703 ) with the following abstract:
"If people enjoy giving, then why do they avoid fund-raisers? Partnering with the Salvation Army at Christmastime, we conducted a randomized field experiment placing bell ringers at one or both main entrances to a supermarket, making it easy or difficult to avoid the ask. Additionally, bell ringers either were silent or said “please give.” Making avoidance difficult increased both the rate of giving and donations. Paradoxically, the verbal ask dramatically increased giving but also led to dramatic avoidance. We argue that this illustrates sophisticated awareness of the empathy-altruism link: people avoid empathic stimulation to regulate their giving and guilt."
Personally, pushy bellringers increase my avoidance greatly, not due to guilt, but because I give to other causes and in other ways.
Their intrusive behavior both delays me, causes advertising pollution, doesn't scale, isn't well-documented with an audit trail, and from a virtue signaling perspective, tries to shame me publicly where I have no recourse to challenge the presumption.
Given that the Dublin portal had to be shut down temporarily due to people acting the maggot, I think this isn't a great example:
https://www.youtube.com/watch?v=j_8wZhXjAGc
Neither is the Chinese telesurgery. Anyone who says "I would prefer to let that guy choke rather than be five minutes late for lunch" is going to be deemed the worst kind of monster. But what if it's "there's no-one else in the room, if I pause the surgery to save his life the patient will die"? *Now* should the surgeon divert the robot?
The problem with the Drowning Child scenario is that it's easy to create thought experiments that go "tiny trivial little inconvenience this side, big huge effect on life that side" and put your thumb on the scale that way. Once you arm-twist people into agreeing "Why no I am not a horrible monster", *then* you hit them with "and now the choice is not a trivial inconvenience, but you have already agreed that refusing makes you a horrible monster, so now you've committed yourself, (evil laughter 'heh-heh-heh' optional here)".
And that's why people feel that they're being suckered into something and want to find an argument contra the Drowning Child. Me, I'm going to bite the bullet, go "why yes I *am* a horrible monster, however could you tell?" and refuse to be arm-twisted in the first place.
I'm not buying a pig in a poke, which is what most of these thought experiments are.
It also seems pertinent that in the examples above it costs a trivial amount of time/ money to save a life whereas in the actually existing world the single most effective charity (per givewell) manages to save a life for $4.5k. This would correspond to multiple days/weeks of most people's income.
Perhaps it's no coincidence that there are no low hanging fruits left (that is, opportunities to help people at scale cheaply)
I mean, sounds like a bargain tbh. There are also lots of less-than-lethal stinky situations you avoid along the way. Like $7 to protect a kid from getting malaria, which is unlikely to kill them but quite intensely unpleasant. Just because that's the amount to statistically have one fewer person die doesn't mean that's the increment of help.
The second sentence ofthe post is "This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply)."
I fail to see how the implication works here. It doesn't follow from
"I'm happy to spend 5 minutes to save a drowning child"
That i should give "lots of charity" (presumably much more than the average).
What is the inconsistency that I'm missing here?
Hmmm - the megacity one makes me go "if there are children falling into lakes every hour and drowning, isn't that the problem of the megacity? haven't they noticed yet? aren't the parents complaining?" and I would begin to think maybe the megacity is using this as a method of population control or something. If it's happening this frequently all the time - and even more so if the damn place can afford to build a whopping great dam but not put some fences up around lakes - then now it definitely is *their* problem not mine to solve. I did not commit in any way to be a volunteer lifeguard.
And that makes me think of USAID - why aren't those countries solving their own problems? Why is the USA now in the position of "you saved one drowning child, now you are committed forever to save all the drowning children"? If it's understandable that I don't want to save the 37th or 337th or 3,3337th drowning child because damn it, I just inherited this cabin and isn't it up to the megacity to finally childproof their own rivers or at least hire their own lifeguards, why isn't it understandable for a new administration to go "well we just inherited this and we want to change it"?
This actually begins to sound like a justification for what Trump/DOGE is doing!
When you realize that the West didn't get to be rich via some richer region giving it aid, you then start to wonder about the same process happening elsewhere. And since I think the lack of tropical diseases is one reason why Europe (particularly northern Europe) got rich, anti-malaria charities is one thing I do donate to.
I'm not arguing against giving to anti-malaria charities, what I am arguing about is that the Drowning Child argument and the variations thereof try to do too much. They jump from "this is a once-off, exceptional, very impactful situation - you see a child drowning, won't you save them?" to "okay now you have agreed that it is better to suffer a trivial inconvenience than to permit someone else to suffer a major loss, you are now obligated to constantly do this good thing we want to impose on you".
That's a very big shift: for instance, is having malaria comparable to drowning? If you gave people the choice between "you can have malaria or you can drown", I think most would pick malaria. Malaria in children under five does seem to lead to high mortality:
https://data.unicef.org/topic/child-health/malaria/
"Nearly every minute, a child under 5 dies of malaria. Many of these deaths are preventable and treatable. In 2022, there were 249 million malaria cases globally that led to 608,000 deaths in total. Of these deaths, 76 per cent were children under 5 years of age. This translates into a daily toll of over 1,000 children under age 5."
So contributing to anti-malaria charities is well worth it, and you don't even have to get your shoes wet, never mind an expensive suit. *But* that's where the problem lies: the example is chosen to shame people about "for a relatively small inconvenience to you, thousands of lives can be saved, yet you the public are not doing it" and hence why the shock example of "would you let a child drown rather than get your good suit wet?".
That's perfectly fine, as far as it goes. But *how* far is that? Because I don't see that putting a moral obligation on anyone about "and now you must stack up as many inconveniences as possible to save as many lives as possible, and you can never stop, and you can never choose for yourself, you are now obliged to always pick the biggest bang for the buck and to hand over as many bucks as won't result in you starving to death".
*That's* the problem here: that the thought experiment has seemed to expand into this principle of general obligation at the maximum forever, instead of the single example of "you can save a lot of lives by donating to bed nets".
The drowning child situation does a good job of highlighting why it doesn't apply to many other real world scenarios.
A child drowning in the river needs help. If you rescue them, they are safe once they are on dry land. They'll be scared out of their wits and know not to go jump in the river again. Or their parents will.
Financial commitments to provide medicine are nothing like drowning children. A child can choose not to jump in the river, so there is generally not an epidemic of drowning children. Mosquitoes are being born all the time and buzz around. A malaria net reduces some risk of being bitten, but there are always more mosquitoes and malaria is not going anywhere. The closest analogy is not a child drowning in the river, but instead tossing individual inflatable tubes to a sinking boat in the ocean. Maybe it'll help a little bit, but it probably won't help that much.
The distinction seems to be between random unfortunate issues where you can help people versus a systematic issue.
The magical town should build netting to catch the drowning kids in a way they can save themselves. A once off cost with maybe some minor maintenance costs. Then they should try and actually fix it so the kids don't slip into the fast moving water in the first case, although that's a larger task.
With such a setup maybe you get 1 drowning child a month and it's a lot less of an issue and burden on yourself than having them hourly, including when you sleep.
Well, you might still have drowning kids go past when you sleep but no one should expect you to do anything about that. The burden is on the city and it's systems.
The group payment into shared fund is obviously the way of dealing with most systemic issues. The other is to build things with less such issues in the first place.
CORENA (Citizens Own Renewable Energy Network Australia) is a shared revolving funding pool. People like myself donate to CORENA and the donations are used to pay for funding Solar PV and other energy saving systems. The NGOs they provide the interest free funds to pay back the loan based on their electricity cost savings.
Basically the NGO buys solar power and for a couple of years they still pay the previous electricity bill amount, but you CORENA. After a couple of years they've paid it off and their bills go to zero.
This means my initial donation can go to multiple funds over a few years and so it's more effective over time.
https://corenafund.org.au/ is the CORENA website.
I think there's a collarary to luck.
If you are in your cabin in the woods and you notice a large bag full of money 💰 floating down the river, then rescue it, there's some expectation around that being your lucky day.
If bags of money float down the river regularly, you'll probably pickup a few for the first few days, but eventually you'll have grabbed enough.
But you should actually tell the bank or people upstream that their system is broken.
The difference is, if you instead tell your neighbour about the bags of money then they and their friends will come and help clean them from the stream very happily. Until eventually word gets to the bank who finds they've got some hole that the bags keep falling into.
There's likely some expectation that the bags of money are the banks. But I'm the original thought experiment the kids have parents and they should be distraught and trying to do something.... Like putting up poles, netting or whatever.
I think in reality if you tell the local community about the kids floating down the river then there will be a lot of people who will try to help. Creating a jetty, putting up some makeshift poles across the river and spending some amount of time on the issue. It won't just all fall onto the burden of a single person.
I have thoughts. Probably going to be too far down the comment thread for anyone else to notice them, but still.
Okay the size of the problem matters. If you are in a situation where a kid drowns every hour, you should have no obligation to any individual child. You instead have a bigger problem and a different solution is needed then rescuing each child. So in this case it would be moral to ignore a bunch of kids drowning while I go to the store and buy some kind of netting that will catch children and let them climb out on their own. If a bunch of children die while I try for a complete solution it's okay. In fact saving any individual kid is almost wrong, that will help someone but it won't solve the actual problem. This often the issue with aid programs, if we send you seeds you can farm, and feed yourself, but sending you food just keeps you starving forever.
I do think distance vs size of a problem leads to some weird places. Imagine tomorrow we discover there is a planet in the Alpha Centari system just 2 light years away that has 100 billion human like aliens, further we know a comet will hit their planet and they will all die unless we stop it from hitting them in 150 years. It turns out to mount a rescue mission to save this planet will take all of earths GDP for the next 100 years. What do you do, does the distance matter, does the fact we know about the problem matter, does the size of the population matter.
I think there is no right answer to that question, and even saying one is moral is a category error. Morality isn't a thing that exists outside of humans, it's how we create rules that allow us to cooperate and coordinate allowing more humans to exist. So if a rule doesn't help more humans cooperate better and exist in greater numbers, or increases the overall level of misery, it perhaps is a bad rule. To apply our instincts to situations that it was never meant to handle is a category error. The right question is what response to the threatened alien planet makes humans better able to cooperate with each other. Perhaps saying let them die, makes it harder for us to trust and shrinks human florishing. Perhaps giving all our resources to theoretically save them, does the same. Looking good is important because it creates trust that enables cooperation. Acting good is important if people are watching. But if you only act good when people are watching, you'll mess up, better to just want to do good.
Regarding the neighbour, I suppose the Copenhagen interpretation could hold he "touched" the problem when he was made aware of it by you, and even more so when a solution was offered which he refused to engage in. If the neighbour is ignorant of the drowning children, he is not at fault and can qualify for the spot in heaven. But if he knows and refuses to help, then he loses out.
Now where did I read something like this before, then? 😀
https://www.biblegateway.com/passage/?search=James%202&version=ESV
Faith Without Works Is Dead
"14 What good is it, my brothers, if someone says he has faith but does not have works? Can that faith save him? 15 If a brother or sister is poorly clothed and lacking in daily food, 16 and one of you says to them, “Go in peace, be warmed and filled,” without giving them the things needed for the body, what good is that? 17 So also faith by itself, if it does not have works, is dead.
18 But someone will say, “You have faith and I have works.” Show me your faith apart from your works, and I will show you my faith by my works. 19 You believe that God is one; you do well. Even the demons believe — and shudder! 20 Do you want to be shown, you foolish person, that faith apart from works is useless? 21 Was not Abraham our father justified by works when he offered up his son Isaac on the altar? 22 You see that faith was active along with his works, and faith was completed by his works; 23 and the Scripture was fulfilled that says, “Abraham believed God, and it was counted to him as righteousness” — and he was called a friend of God. 24 You see that a person is justified by works and not by faith alone. 25 And in the same way was not also Rahab the prostitute justified by works when she received the messengers and sent them out by another way? 26 For as the body apart from the spirit is dead, so also faith apart from works is dead."
As for the other point:
"For another, if you’re even slightly religious, actually getting the literal spot in Heaven should be one of the top things on your mind when you’re deciding whether to be moral or not."
Ah, no? You are not supposed to act based on "will this get me good boy points or not?", but rather because it is the right thing to do. Avoiding sin and doing good because you fear Hell and desire Heaven is better than nothing, but best of all is to do what is commanded because it is right and because "you shall love the Lord your God with your whole heart and your whole soul and your whole mind, and you shall love your neighbour as yourself".
That's the part the West Wing episode got wrong: Bartlett is complaining about "I kept the rules, I was supposed to get the prize!" (I can't remember if he said it in front of a crucifix, it was set in the Protestant National Cathedral, but if there were a crucifix there complaining 'it's not fair, I was a good guy!' is even more ironic). Not how it works, there is no guarantee that "I check off the list and God makes sure nothing bad happens to me".
It looks to me like the real issue is Singer starting with saving a life at a fairly small cost, and then cranking up the acceptable cost higher and higher. (Merely an impression, just that's how it seems to work.)
I'm also not sure where demands for efficacy fit into this. What if it turns out you're not very good at saving children?
This makes the idea of tithing seem attractive. You have some obligation, but it's at a level which is noticeable but not debilitating. As I understand it, the Jewish version is that you may not wreck yourself. The Christian version is that martyrdom is admirable but not required.
What people should do themselves and what they should blame other people for might be separate questions.
The big solutions seem to be sideways from saving people at personal cost. Clean drinking water isn't the same sort of thing as taking care of sick people. It's not obvious how resources should be divided between research and saving people. By definition, you can't predict which research will pay off, though there are reasonable estimates about the odds once a field is established.
The cabin by the river of hourly drowning children is silly because it assumes that in order to save each child you have to do it by yourself, with no help, and no technological assistance or tools. The actual, practical solution is to enlist the help of the previous children you saved for saving the next ones. Since a new child arrives every hour you will have 10 helpers after just 10 hours. After a week you have 168 helpers! Obviously you don’t need that many helpers so you can let most of them go after they’ve helped rescue one or two kids.
As for technological solutions, the answer is quite simple as well: floating ropes like they use at many outdoor swimming areas. Put enough of these across the river, anchored down at both ends, and children being carried down the river past your cabin can just grab a rope and rescue themselves. This is ultimately the solution to the river problem and why it fails to be as dramatic a burden on the cabin owner as it would seem. I would not want the burden of having to personally rescue every child, every hour, 24/7 forever. But I wouldn’t mind setting up a rope system and a toweling off station nearby so the children could survive.
Think of a more realistic situation where people are dying regularly and the solution is not so easy: traffic fatalities. Sure, we can suggest “simple” solutions such as advanced public transit, better street designs, lower speed limits, etc. But ultimately the solution is very messy and political and not really amenable to quick fixes. I think this is why it fails to be useful for an ethics discussion yet remains a much more important problem in real life.
Maybe I don't get the copenhagen model but it seems like if you stumble across a drowning child you have not asserted power and thus don't have an obligation to save the child.
Have you ever read The Godfather? (Or watched the movies?)
Don Corleone was a problem-solver. When someone had a big problem, they would often ask the Don help. And the Don would often oblige. E.g. I think the first instance of this is when a mortician's daughter gets her jaw broken by two men, and the two men are "only" sentenced to prison for 3 years. And the mortician feels humiliated by the court's light sentence. So he goes to Don for a favor, to seek "justice" on behalf of his daughter.
However, asking the Don for a favor had a cost. When called upon in the future, you were implicitly expected to reciprocate. In other words, Don Corleone had built a patronage network. And this gave the mafia gang extra maneuverability. To frame this in Copenhagen terms, The Don's ostensible altruism increased his power.
It’s been a while since I read the book, but I recall Rawls being pretty insistent that his theory is *only* a theory of justice, not a full theory of morality. I think you have to take his warnings seriously.
I agree very much that slipping between descriptive and proscriptive ethics is a real problem.
As others have commented, some of these thought experiments fail to engage my intuitions. But as an attempt to build a chain of reasoning around political ethics, I like the attempt.
People have the obligation to save nearby children in typical situations. And the limitation to typical situations inherently prevents problems like having to save enough children that it impacts your life. If you create a hypothetical that is not typical, there is nothing to prevent such problems, so you'll have to manually add a lot of clauses to prevent exploits that can't exist in typical situations.
And no, you don't just get to handwave away the taxes objection. "It's an insult to some hypothetical being's intelligence" is not a logical argument, and Scott knows better than that.
(I came up with the taxes objection. I wonder if Scott is trying to reply to me.)
Ignore drowning children every day. But I plug my brain into a matrix that rewrites my memory such that I DO save the children every day.
If life is just matter and energy the outcomes are the same for me vs actually saving children.
> My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible?
I'm also a huge fan of original position, but limiting it to humans misses a huge part of the moral landscape. Any sentient being is one we'd have a chance of waking up as.
I'm confused. Isn't the copenhagen interpretation of ethics supposed to be a reductio ad absurdum? Surely no one actually thinks of it as a good normative theory?
I've always thought that empathy is highly conditioned by proximity/immediacy. This certainly doesn't have to be physical and is, in fact, more often mental proximity/immediacy, I think. We can certainly be very moved by a documentary which vividly portrays suffering, though it's happening on the other side of the world, but feel little empathy for our neighbour suffering from cancer next door because he never leaves the house and we rarely see him.
The right to kill unborn human babies is not only accepted by the majority of western society nowadays but is even being bandied about as a "fundamental human right". Meanwhile cooking a live lobster in a pot of boiling water has been made illegal in many countries. We might be standing right beside (physical proximity) to the pregnant woman having an abortion but we can't see or hear the baby, so we feel little or no empathy for it. But we can hear the lobster "screaming" and see it writhing in the pot...
Since empathy is highly skewed by mental immediacy, it needs to be tempered with logic to create morality. For this we need a clear idea of the final objective - a stated goal as to what our morality seeks to achieve - to which logic can then be applied. Eg. "all human life is inviolable", or "all suffering should be stopped, whether animal or human", (two ideas which are often incompatible...) or whatever. I think we talk a lot about morality without having a clearly defined goal of what that morality is supposed to accomplish. The result is that it is easy to confuse our instinctive empathy with a higher idea of morality.
Here it seems like a hot/cold test. If you are directly experiencing a hot situation like a child drowning and you do nothing you are a cold person. It's probably a tell as to character.
One of the two books a have with me on vacation is Anne Dufourmantelle's In Praise of Risk. She was a philosopher and a psychiatrist who lost her life attempting to save a drowning child.
That makes me less willing to listen to her.
I think you're circling around questions of scale -- is a moral responsibility personal, neighborhood, town, or even higher level? And that is related to Shannon entropy -- how surprising is the event that triggers the moral responsibility? But it's also related to how catastrophic the event is. How large is child mortality in the magical city? Would you react differently if the consequence of falling in the river was a bad cold, or if only children who had never learned to swim drowned?
On distance, you keep coming up with these examples where someone is in your presence even though they are really far away to get that intuition. There are almost no real life scenarios where that would be the case, certainly not with the charities that Give Well recommends.
What about the guy working at Citadel who argues he can save more children by ignoring all of them to work but tithing 50% of his income to the Against Malaria foundation?
I feel like my moral intuitions are significantly different than yours. I do happen to believe that "touching" a situation seems to have significant moral implications. I do not believe that solving a portion of the drowning children is sufficient, but saving *all* (or as many as we could) would be required of being a moral person. (Not really the thought experiment, but I believe the most moral case would be to work something out with the local city to either fix the problem or get paid to do the work you're already doing. Surely they would value the regular saving of their children at a huge sum, but even $1,000/child would make you quite wealthy and would be enough to hire multiple shifts of lifeguards and solve the problem without ruining your life).
As I understand my moral intuitions, the issue of proximity is not about physical closeness but purely about your ability to understand and interact with the situation. A portal where you can see and interact with Dublin, or control a robot, still offers clear evidence that there is a specific need and that you can meet it.
I do not know if there exists a particular child that my 10% donation could or would help. The money could be wasted, embezzled, stolen, as could whatever was purchased with the money. There could have been a good month for donations or a good month for fewer diseases, and my particular donation was extra and unneeded. Maybe the particular charity I would donate to is well financed and doesn't need more money.
The point is not to gish gallop the reasons why not to give, but to express *uncertainty*. I would save an infinite number of drowning children in front of me, until my body gave out. I would die doing it, if that saved more children. I would rearrange my life to accommodate that need. Learning to swim, rearranging my schedule to be available on a moment's notice, keeping my swim gear close at hand. The need is real and legible.
The thought experiments you presented where someone could choose to ignore the known and legible "children regularly drown here" cabin to intentionally avoid "touching" the situation is evil. Ignoring the 37th drowning child is evil. Saving one child a day while the others drown is evil.
So how do I reconcile not learning more about specific problems in foreign areas? Bandwidth, mostly. There's only so much you can ever learn about specific issues. You would likely need to physically go to a place, learn about the local needs, customs, expectations, etc. Once there, you may learn of great specific need, or you may learn that they are generally fine and think Westerners trying to send them bed nets is silly (or worse, they've all got 100 bed nets and it's filling up their town with unwanted waste).
If a need outside of your personal knowledge and physical proximity seems to you legible enough that it becomes a moral imperative, then I would not hold someone back from donating to that cause. I also do not consider it a moral imperative to learn about potential needs. When it comes to foreign areas (implication that you can't have a detailed and legible knowledge of needs), then I don't think there's an obligation at all to learn about or give. A shorthand explanation would be Kony 2012. Westerners getting worked up about a problem doesn't mean the problem is what we think it is or that our attempts to "help" will do any good or even just not cause harm.
As far as moral behavior, this is a good example of something I value: https://www.bbc.com/news/articles/c5y4xqe60gyo (the guy with special blood who donated enough over his lifetime to save millions of children).
"As I understand my moral intuitions, the issue of proximity is not about physical closeness but purely about your ability to understand and interact with the situation"
I think there's a lot to this, but an obvious problem is that you have control over your ability to understand and interact with a situation--so you have to decide which problems to have this proximity to, and how much!
In some ways, I think the point of GiveWell and similar institutions is to increase your proximity to malaria in Africa: by summarizing the situation and quantifying the effect of the marginal donation, they increase your ability to understand!
"I also do not consider it a moral imperative to learn about potential needs"
This sort of covers the point above, but has the obvious problem that it incentivizes you to avoid learning about potential needs--to deliberately reduce your proximity to issues.
Indeed, I think an uncharitable reading of some of the anti-EA arguments is they are deliberately anti-intellectual in order to prevent oneself from learning something that they _would_ be committed otherwise.
But we almost all agree that in some cases, a certain threshold of proximity commits you to learning more about potential needs; past some point failure to investigate is just a rationalization. If you're facing away from the pond where the child is drowning and your friend (who can't swim) yells, "Turn around! That kid is drowning!", if you refuse to turn around to confirm, you're not "avoiding learning about a potential need" you're avoiding a moral duty you're already entangled with.
My reason for basically agreeing with EA even though I'm sympathetic to your basic point is: if you're here in these comments arguing about bednets you already have enough proximity! You're past that horizon! It's too late to pretend you're just avoiding learning about a new problem. You've learned about it! You are the guy holding your hands up in front of your eyes, saying "I don't see it, I have no obligation to learn about it".
On your fifth ACT comment thread about EA, you almost certainly are in position to know more about bednets than how best to help your second-cousin twice-removed to kick his cocaine addiction, or whatever "local" issue you supposedly are meant to care more about.
I agree with a lot of this, but I'll make a few responses.
The first is about bandwidth, as I mentioned. There's only so much we can process and understand. Picking which topics, if any, to research is not a moral question on its own (though I agree with you entirely that covering your eyes to avoid seeing is a different category and is a moral failing).
The second is the concept of legibility I was trying to get across. I am, obviously, well aware of GiveWell and bed nets. I have no particular reason to be against them, and think they are generally moral people trying to do good.
There are two specific problems with the general approach to foreign aid in their sense. One is the potential for waste/inefficiency or just generally failing to be as helpful as you intend or think you can be. A specific story about bed nets that I heard a while ago was that some people were using them as fishing nets instead of bed nets. Locally in Africa, they were making a different decision about what was important. Maybe they were wrong, but maybe they were entirely correct and the legibility from being local and being part of that culture allowed them to make a better decision.
The second is unintended consequences that can be downright negative. An easy one is that shipping free external production destroys local manufacturing and prohibits the local economy from growing. No one in Africa should be making bed nets, given the costs and incentives of fighting against free Western imports. But it would be much better for Africa if locals were making bed nets! They would benefit economically and long term be better off against malaria. This is also true for food donations, etc. (I happen to think that infrastructure donations can still be good, especially if local crews are hired for the work). A more difficult one can be related to international relations. Western nations have often propped up local leaders in order to try to "help" and made things worse. Autocratic leaders, warlords, corruption, sustained by outside forces in order to make the situation more legible and accessible to Western interests (including charity!). "Should the US arrest/assassinate [African Warlord]?" is a good question that may have extremely positive end results, or it could result in more chaos and instability or just bloodshed increasing before returning to pre-intervention levels - all of which we have seen from US involvement over the last 20 years (living memory for many of us). Even killing/arresting a really evil warlord may not be a good idea, morally or practically.
I'm certainly not against trying to learn more or make good decisions. I think we need to be honest about our first level results and also the unintended consequences of our actions. At the far stage of being a citizen in a Western nation, the steps necessary to make those decisions *well* is a massive ask with no necessarily positive conclusions. I don't think there's any moral failing to not do that research. I see that as very different from seeing and knowing about a particular issue and intentionally ignoring it. I also recognize that some people feel like they've done the research and have come to correct conclusions and are willing to give money or otherwise help. I do not try to talk them out of helping, even if I doubt the efficacy of their long term actions.
Hundred percent agree about bandwidth, but it often cuts in the other direction. I give ~10% of my income to GiveWell, I volunteer with local charities that service people in my immediate community, and I try be a good family member/friend/coworker.
I can assure you that the anonymous giving takes up the *least* mental bandwidth *by far*. Once a year, I skim some blog posts and log into a website and make some quarter-assed decisions about how much to give based on my blog skimming.
The virtuous obligation that I've been shirking because it's more demanding and I'm not sure what I should even be doing is a duty to help a friend through some rough personal circumstances--I think they need both some tough love and some genuine support and I'm not even sure how to provide the right mix of those.
Because of GiveWell and the like, even anonymous donation locally feels more illegible and hard to parse than giving far away: I am far more confident in the value of vitamin A supplementation charities in Africa than in the value of my local food bank or addiction hospital.
As for unintended consequences, again, it's really not clear to me that this pushes more against anonymous far donation than local donation/volunteering. Is it really more likely that I'll be creating bad incentives by buying bednets for poor Africans than by funding my local addiction hospital, and possibly subsidize and lower the cost for drug use?
Am I more confident that I know the right thing to say to my friend that won't on the one hand be too supportive and won't impress upon them the need to make different choices, and on the other won't be too harsh and alienate them from me?
Like, I get that there are abstract reasons why things close to you should be more legible than things far away, but those aren't the only considerations--as I noted elsewhere the problems in poor countries are often lower hanging fruit, instances of problems that rich countries have already solved. Maybe that makes those problems more legible.
At the end of the day, we have to actually *evaluate the legibility* of different things, not just fall back on first principles.
We can actually look at the track record of US interventions and see if we think it's a good use of money to get what we want! If we look at other interventions and don't see catastrophes, then that is itself information!
Elsewhere it's been pointed out that the fishing with bednets issue is well-known, and mostly dismissed as a serious concern by people who have looked into it. Maybe they're wrong, but at some point you have to actually evaluate the object level arguments, whereas to me, way too much of the legibility discussion ignores *what we actually know* about different interventions. And if people want to dispute whether we actually know what we think we know, it's on grounds of basically general skepticism.
I'm not totally against this; I don't like the turn towards longtermism for that reason, but I think that's because it's really hard to evaluate interventions whose payoff is by hypothesis in a world totally different from ours. But malaria, vitamin A, cash to poor people--they don't have that problem. You can study them, and people do! You should use that information in your decision-making!
Again, it's uncharitable, but it's hard not to interpret some of the responses as being isolated demands for rigour: oh sure this intervention is well studied and here's twenty GiveWell reports on it, but like, c'mon, can you ever really _know_ anything about rural Kenya?
TBC I'm not accusing you of this, I think the basic point you're making is reasonable, and from what comes through your comments I don't think you have an unreasonable attitude. But I think what I'm describing is a real failure mode, and anyway even in the more reasonable case, I still think it's better to try stick to object level here: does the evaluation that GW and others do give us sufficient basis for knowledge to drive our moral decision-making? I think the answer is clearly, "yes, at least somewhat" for basically anyone in these comments who isn't a radical skeptic, until such time as someone makes an actual object level argument to the contrary.
Thanks for your response. Again I agree with a lot of it, particularly what you're saying about a studied problem at distance compared to a novel problem close by. I agree that it can be really hard to give advice to a close friend while sometimes easier to determine the needs of people far away - immediate disaster relief comes to mind.
My major reason for pushing back on the thought experiment has to do with how legible and immediate "drowning children in a pond next to you" is, compared to just about every other possible intervention. Giving to GiveWell, or honestly just about every other charity and intervention possible, is not particularly close to that. As a thought experiment for "should I help people in need?" I guess it helps us get to a "yes." But is that really a surprising take? Did we need a world famous philosopher to answer that question? Not really - it's been baked into most moral systems for about all of human history.
The legibility implied by the thought experiment requires understanding how real the problem is, and how much we can positively affect the solution. The further away the problem is physically, the harder it is to do both. And again, not because distance matters so much, but because the further away you are the less you truly understand what's going on. Culture, customs, economic conditions, moral systems, all can be different than you expect. Offering pre-natal care in Ancient Greece would seem like an easy win. Keep those babies alive, when they would otherwise die. We have the tech, and we know the babies are dying. But it would turn out that exposing babies with the intention of them dying was a cultural habit in much of the ancient world. If you were saving these babies you might be causing communal strife and the people there might kill you or chase you away. We consider that morally abhorrent, but in their society it made sense.
That's not to say that we cannot understand the needs elsewhere. We often can. But never so clearly as a child literally dying in front of us where the solutions are obvious, the need is clear, and we're the only ones who can handle it. Trying to extrapolate from such a clear example to far less obvious examples can lead to unexpected consequences. Again, I don't have a problem with people giving to charity, or GiveWell specifically. I think they're good people doing good. But I also don't have a problem with people saying something like "I'll stick to helping people closer to myself [culturally/physically/philosophically] as I better understand what is needed."
I don't think we're actually that far apart, but I just want to say two things:
"Did we need a world famous philosopher to answer that question? Not really - it's been baked into most moral systems for about all of human history.”
As far as I can tell, Scott is on this whole kick because he keeps running into people who don't answer this question affirmatively, and who keep asserting that it's _not_ because of the practical considerations, but because they truly don't believe anyone has any moral duties to people who live in a different country/have different ancestry/whatever. Maybe these people aren't important, maybe they're just trolling, who knows? But it strikes me as at least possible that there is an audience here on this very blog who do need this lesson.
" But I also don't have a roblem with people saying something like "I'll stick to helping people closer to myself [culturally/physically/philosophically] as I better understand what is needed"
I don't either, but I would encourage people to make sure they're being honest with themselves about whether that is actually their reason, and whether it's true that they really do understand the local problem better. And most importantly, that they actually are doing something local! Not just using it rhetorically!
Peter Singer may be needed to talk about helping African adults instead of local kids, but a kid drowning in a pond where you're walking doesn't bridge that gap. Just about every moral system in the world will tell adherents to save the local drowning kid, no need for anything special.
I think that's where the breakdown exists. Some people hear about the drowning kid and decide that also means they should save African adults. The people who disagree with Scott say that there's a leap in logic involved there, and don't see the connection.
What I feel is that because of the disconnect in certainty (about both the certainty of the need and the ability to fix it), there's a lesser moral requirement to help people who are further away. The less you know about them the less moral duty (but I still agree with your earlier comment that intentionally not knowing to avoid moral complications is also immoral). And I think on some level everyone agrees.
If I told you that on one of Jupiter's moons there was a species of intelligent alien life who were living in misery and desperately needed our help, you would give that significantly less credence than humans on earth. And you should. You don't even know for sure that they exist, and it would be really really hard for humans to help them. Should we spend trillions of dollars developing a space program that can reach them? That's an extreme exaggeration, of course, but I think is in the same direction as mine and a lot of other people's hesitation about treating distant issues equally with local. There are levels of knowing that can make saving that species more and more important, for instance proving they exist, that they are miserable, and that we have the ability to help them. Each of those steps is important, though, and as a contrived example none of them has been or at this moment can be proved. How much money, time, and effort should be spent figuring out whether there is such a species and what their needs are? How sure are we that our intervention in their species would be a good thing, even if they are miserable? I can understand and agree with the argument that the number is above $0, but I believe both of us agree it should not be a significant number even if I had not just made them up for this conversation.
Not been to Zimbabwe, but rich/well-off Indians seem to be fine with ignoring the poor around them. As are Russian oligarchs. And mostly, I agree: the 1 drowning child is in a unique situation. Rescued, it is fine. While the dirt-poor of today are dirt-poor tomorrow. - Also, as we now are aware how many lives were saved by USAID each year: paying taxes might be a sufficient sacrifice. It sure is a substantial sacrifice - and all for the "common good" we are told.
First of this sounds like: “Others in other countries are not as good as us, so we don’t need to be good”. Second it doesn’t even seem to be true.
The world’s most philanthropic individual, (100B+ donated) is from India. 2 of the 10 largest philanthropic organizations are from India. It also seems like 800 million people in India are availing free or subsidized food from the Indian Government. To note, only 2.2% of Indian population pays income tax, while it accounts for ~40% of government revenue.
It seems like Indians are doing what they can to fight the poverty around them, and helping them by providing a fraction of 1% of the world’s richest countries budget, doesn’t sound that wrong.
Tata died in 1904. Good to remember him. My intention was not to let "the rich" in any poor country look "bad"; we should learn from them. 🙏
Ok, naive take: in the nearby child case you are the only person that can save the child. In the far away child case, you are one of very many people that could act on it, so your responsibility is much smaller, due to a large denominator. If a child was drowning near you, while 100 other swimmers and life guards were in the pool, it seems fine not to jump in and assume someone else will take care of it.
Did the child smirk as it looked back, or did it not - and was the child *never cruel*?
That seemingly innocent child will grow up to eat a hundred thousand innocent shrimp. But it's OK because those seemingly innocent shrimp would have grown up to eat ten billion diatoms.
It was a silly Severance reference - shameless effort to reintegrate my online world and my televisual streaming world - but are shrimp charismatic megafauna compared to pill bugs? Could people eat pill bugs instead?
Haven't seen Severance myself. Integrate away!
And there's nothing new under the sun, pill bugs are just repackaged trilobites. Would bet trilobites were delicious.
I had a pet trilobite as a child, that I got at the natural history museum - I guess because I could afford it. I really loved that thing, without even realizing it was a fossil of a living creature.
ETA: I don't see pill/doodle bugs as much as I did when I was younger. Maybe it's because I'm not so fixated on the ground.
Individually, I think it's a mistake to look at this as a question of obligation. Instead I'd say that moving to a global community provides way more opportunities for good, but those opportunities are failed by our traditional moral intuition. As often as I think utilitarian thought fails as a practical matter, it's really the only thing that can help in this expanded context.
When I de-converting from fundamentalist Christianity, one of my bugbears was the missionary question. Essentially, Baptists believe that a person who is "innocent" - who doesn't know Jesus existed and so cannot reject him, like a baby, cannot be sent to hell. There's no concept of purgatory in Baptist faith so these people get basically free admittance into heaven. The rate of successful proselytizing is really very low. So mostly Baptist missionaries are just condemning souls to hell.
There's no good answer to that in theology because you're not God. But if you were, you might redesign the punishment and rewards system a tad. You might say "okay well learning about a new problem doesn't obligate me to it, so I shouldn't get a negative morality score for failing to solve it. On the other hand, I can get some morality points by helping to solve it.
I don't like this viewpoint because I do feel there is some obligation to solve problems you become aware of, and that it's blameworthy not to seek out problems you can help with solving. But in my mind we can get around that by basically saying "every person has an obligation to use some portion of their resources on others but not all of them. However, going above this obligation is praiseworthy."
I notice this doesn't solve the subtextual issue in this piece - Tax dollars are a communal resource that I *have* to pay even if I don't agree with their use. We don't require that I give to charity generally. Why should we include charitable causes in our tax expenditures? And the answer is "well, because it's a tiny portion of those expenditures and the government can do this better than any private group, because a majority of people agreed to do it this way, and because giving in this way increases our soft power in the world at large and makes it more likely we'll get reciprocal benefits."
All of those are a bit shaky, though. The first doesn't answer the question, the second is...really bad, because the whole point of having the debate is to convince a majority to do it, and the last is both not a moral argument and not really falsifiable. My actual rationale is more like, "Jesus Christ, dude, we spend a trillion dollars a year on pension programs that have grown far beyond their means and you're arguing over the $0.05 we took from you to cure TB and save multiple lives? What the hell is wrong with you?" But that is also not an argument, just my own moral intuition.
> If you end up at the death cabin, you don’t have an obligation to save every single child who passes by
And if the cabin owner *does* save supererogatorily many children, then they should be widely recognized as heroic and saintly, have movies¹ made about them, etc.
¹ https://en.wikipedia.org/wiki/Schindler%27s_List
I mean, *I* think they're rad people, but I'm not sure The Courageous Heart Of Irene Sendler was a big hit, and that's like the closest possible analogy. If generic wealthy-nation-dwelling people thought that straightforward acts of goodness were intrinsically valuable they'd probably be doing more of them.
I prefer the many worlds interpretation of ethics. That drowning child, like everyone else, is a quantum immortal. They're going to be just fine in their branch of the wave function no matter what you do, so you don't have any particular moral duty to help them.
In fact, for some children in distress, the many worlds interpretation of ethics says you should probably try to kill the child to reduce their chances of continuing in a branch of the wave function where they have survived but are maimed or traumatized...
> "Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
Then wouldn't it be a more effective use of time to go help with the other 998 causes of death in the city? You can alter these unmentioned details in whatever way you like to make the thought experiment less convenient, but in real life, there are typically better uses of your time than saving lives one-by-one at a linear rate, and our moral intuitions reflect that.
Maybe rather than buying infinite mosquito nets, it's better to invest in a biotech startup that could eradicate malaria entirely. Or maybe providing constant support to someone can lead to a Black Queen's Race where they lose their own capabilities. My reasoning is motivated here, but that's the beauty of capitalism: in the same way you don't need a pricing czar to decide what everything should cost, you don't need a morality czar to figure out the best charity from first principles. Let the market compute what people need, and if Our World In Data is right, everyone benefits.
To give an extreme example: if you actually think the singularity is happening in two years, then all of this is a moot point anyway, for better or worse. The dude in the cabin should pivot to AI, or better yet, put a net in the river to catch all these kids.
For a different framing of a similar argument, I have a Kantian friend whose argument for why we have to save a drowning child but not every drowning child runs like this:
We all have a perfect duty to avoid servility. If we were all servile, then nobody would be working towards their own ends, and so servility as a concept would become nonsense: everybody is just spooning soup into everyone else's mouths forever.
We have an imperfect duty to avoid indifference. That means, when we can, we need to avoid indifference: if everyone is a Sociopathic Jerk, then nobody is going to do anything moral ever, which is not in anyone's interests. But if the duty to avoid indifference were perfect, that is, if it can never be annulled, then we would violate our perfect duty to avoid servility, because we would all be waist deep in so many rivers 24/7 pulling out children, who would then grow up and wade into their own rivers, and so on.
***
David Friedman objects to Rawlsianism on the basis of its assumption that angelic beings would be so risk averse. In response to this post, I can imagine he would say that we would not agree to the demands of a just society, because that might not be in our interest; if we are willing to accept the risk that we are going to be the one kid in the hole, then we get to live in Omelas.
My intuition is that as an angel, I would have a greater risk appetite than a Rawlsian angel would--and yet I also have a strong intuition that society's should be just. I wonder what would convince my angelic self to assent to the demands of justice, or what a non-angel-powered argument for justice might look like.
I don't think you need the full Rawls conclusion that you only care about the worst off; the point is just that you'd almost certainly conclude you have some reciprocal duties to other people, that would imply the responsibility to save drowning kids in "generic" situations.
I think that's right, and saves the thought experiment for me in this case.
Great article!
The obvious difference between all those drowning children and saving the lives of third-world people is immediacy and ability to judge effectiveness. Whenever a thing is mediated before it actually does any good, and whenever its effectiveness is in reasonable doubt, it is less morally urgent.
For all you know, sending money to save the lives results in the regional war-lords living high on the hog and deliberately keeping people dying because they make good advertising for aid grants.
I would agree, foreign aid is difficult and sometimes results in harm. If only there were some organization of experts who rigorously vetted charities to ensure that they do good responsibly and effectively!
https://www.givewell.org/
[edited this comment bcs somehow I made FOUR TYPOS in two sentences!]
Even there there is the question of trusting third parties to be not only reliable themselves but to not be fooled.
Of course you shouldn't necessarily trust third parties, but Givewell is not a black box you have to trust. if you're skeptical you can consult their published research.
https://www.givewell.org/international/technical/programs/insecticide-treated-nets is a good taste of what their research looks like, see for yourself how likely it is they're fooled.
I don't have to do research to know about saving the child.
If you did have to do research to know how to save a child in imminent danger, do you think that wouldn't be morally obligated? I remember reading a Spider-man comic some time ago where he saved a civilian in need. The civilian was in a bus that was dangling off the edge of a bridge, and to save him Spider-man needed to make a bunch of quick mental calculations on how far the distance was, what the right type of webbing to use is, etc. The illustrations had visual representations of complex math in the background, like that confused math lady meme https://static.fanpage.it/wp-content/uploads/sites/6/2019/09/math-lady.jpg
Is the fact that Spider-man had to calculations enough to make it so that he's not obligated to save them? What if he wasn't able to do them mentally, and had to take a few minutes to do the calculations by hand? What if he forgot a physics formula and had to Google it? It seems like none of that matters when a person's life is at stake. The reason you need to save drowning children is not because it specifically costs you a wet suit instead of time doing research, the reason you should save them is because it saves a life at trivial cost.
I could literally spend my life in research. And not help anyone.
That's the advantage of the near things. You can spend a lot more time helping and less discerning whether you can help.
I find so much value in the rationalist toolbox and community, but sometimes it's good to read things like this to remind myself why I'm still a virtue ethicist instead of utilitarian :).
Maybe I'm weird, but I'd love to live in the cabin. In real life, doing good things in person is hard, because you have to figure out how to put yourself in the position to good things, and doing good things by donating to charity is hard, because you have to ask how much to donate and where and so forth. If you live in the cabin, you just get to rescue drowning children all the time and straightforwardly do life-saving levels of good.
But if you miss a single hour, a kid dies and that weighs on your conscience. You can't go to bed without thinking of the children just outside who will die as you sleep. It's much easier psychologically for me to donate to AMF and then compartmentalize that; if tens of thousands of malaria-stricken children were right outside my doorstep I'd go mad. It's rewarding to save one or two drowning children, but a never-ending stream would be hell.
I can make it my problem and do my best to solve it. I'm not sure *how* I'd solve the problem of the children dying when I sleep, but at the very least I could hire a lifeguard or two for a night shift and find the money somehow. Maybe I'd ask the parents for donations to cover it. Maybe I'd put an ad out in the newspaper for people that want to volunteer to save drowning children for 4 hours a night pro bono.
I can't solve malaria.
You could address malaria. There are anti-malarial drugs. There are insecticide treated bed nets. There are people who can treat pools of water where mosquitos breed. There are insecticidal foggers that treat neighborhoods where disease is detected. There's are under-leveraged technologies for fighting mosquitos. (sterile males, etc.) And there are charities set up to implement all this if you're not able to be there in person. With a charity, you don't get the benefit of personal recognition. But if you have resources you're willing to direct to the problem there are people who can make use of those resources.
https://www.gatesfoundation.org/our-work/programs/global-health/malaria
I am aware of all these things. It's a motivation problem. In practice, if we compare the amount I currently donate to address malaria to the number of children I'd be saving if someone made the drowning children cabin my problem, I think the thought experiment version of me does better.
Or you might build your little cabin and explore the remnant forest and write a great book that many people find inspirational, so much so that it furnishes one of the texts instrumental in fostering environmental awareness - which further leads to some non-human life being allowed to keep living their lives, instead of losing them to extinction, at least for a time, until people forgot they ever cared about that.
But then people would go online and complain that your friend's wife did your laundry because she was in love with you, and you did nothing to stop her; and instead of nature study your time would have been better spent on laundry.
So you'll want to make sure that you carve out time from saving children to do laundry, otherwise your efforts will be judged wanting, and possibly "illegitimate" due to your laundry privileges.
Alternative moral construction: We evaluate our relative morality by comparing it to what an average person in a given situation would do.
The average person seeing a drowning child in an isolated situation would, at least we would like to think, rescue that child; so we see a moral obligation to rescue that child.
In a society full of drowning children, the average person has become desensitized and keeps their head down. So there is no moral obligation to rescue any children.
But the trade-off there is that we see the person who rescues drowning children when nobody else is doing so as "good" in some deeper sense. In the first case, the person who rescues the isolated drowning child might be seen as heroic, but not necessarily morally "good", because we'd expect anybody, even a relatively bad person, to make such an effort; we want to recognize some kind of virtue in the act, however. They'll probably make at least the local news, and we'd upvote stories about them on social media.
In a world of constantly drowning children, the person who goes around saving children may never make the news, but we will instead regard them as morally good.
Morality is, in some significant sense, deviation from the average; an evil person does worse than the average person. A normal person just does what everybody else does. And a good person goes above and beyond the average.
Trying to impose an obligation to be good is fundamentally confusing what goodness is, in this framework: To be good is to exceed your obligations.
1. Is there some element of area knowledge or certainty of delivering effective help at play? If we rescue a drowning child, we know that we've done such a thing. If we go into an unfamiliar neighborhood, we don't know if the scared person running past our house is escaping a murderer or fleeing from a crime scene of their own making. Ted Bundy lured some of his victims by playing at being injured and asking for help. Telemedicine would be similar to presence unless we think that the choking Chinese medical student is more likely to be pulling a scam than an in-person one.
2. I agree that, in terms of social obligations at least, the demands we make on people need to be limited and sustainable, if nothing else than to protect people from altruistic paperclip maximization. There are people with their one issue and a kind of tunnel vision who want all resources diverted to their pet issue and have little concern for other people's values.
3. I personally think that the notion you describe of having to help people just because they're close to you is a heuristic that many people *do* employ and that it *does,* explicitly, lead to socioeconomic segregation for *exactly* the reasons you point at. The question then becomes: who is your community? We could justifiably say that someone who got rich by running a factory might have a stronger obligation to their workers or their workers community and play to that. Some people argue for state or national level taxes to try and expand a person's community to fight this trend, as you mention. But I have a bit of sympathy for supposed 'slum lords' who do horrible things but who also likely have many more horrible things done to them for trying to service a poor neighborhood. There is a lot of real, justifiable incentive to just get out of a problematic area and not be associated with that area at all, and a kind of 'tying people to a sinking ship' type of outcome if that's not allowed. "Touching a problem" really can drain everything you have, or leave you dead. If community standards are pathological, it may be justified for a person to cut ties with that pathological community and find some other community which better matches their notion of what a social contract looks like. If they go back to their old community, they can go back on their own terms. I feel the common expectation for unbalanced reciprocity between one group and another is harmful for creating exactly these types of conflicts.
4. There's an old standard of 'raising a hue and a cry.' If you're uniquely aware of a problem, there may be more of an impetus to rally the community for help. To use your phrasing, you may bear a greater burden of trying to get a lifeguard hired if you live near the river even if you're not supposed to save every child yourself.
5. I like Rawls and his veil of ignorance. I think that, practically, it tends to lead to disagreements because different subgroups have different values in terms of what is good which are directly tied to their existence as subgroups. The person who owns a factory is more likely, statistically, to value investment. The person who works at the factory is more likely to value immediate consumption. To an extent, this difference in culture, either individually or generationally, is how the two people found themselves in their disparate positions. We can rail against over-consumption in either scenario, of course, so there are also some potential common values in either case. To return to your analogy, there will be some practical differences about which children are worth saving. For example: Save one child today vs save two tomorrow. And going back to the standard of certainty, do we trust the person who says that they're planning on saving two children tomorrow? Our moral heuristics really should account for the fact that we're all imperfect people with imperfect understandings of what is true and what is likely and what is good.
I think the third puzzle piece that you're missing is that our moral intuitions judge people by their intentions and not by the results of their actions. Constructing elaborate scenarios where the outcomes are certain and easily measurable misses this. In the real world the homeowner protests both the construction of the dam and the society that allows children to drown and is considered morally just regardless of the efficacy of their protest.
For there to be a moral obligation, I think you have to not merely "touch" the situation, but touch it **in a way that people would implicitly rely on**.
I spot a gold bar in a pit, and yell aloud "whoa, look, a gold bar in that pit." Another guy walking nearby hears me, we both stand there and look at the gold bar, he grabs a rope and says "I'm gonna go down there and see if it's real!" I never suggested he do that. But by pointing it out and standing there with him, saying nothing when he proposed his plan, and watching him descend, he now has a pretty reasonable assumption that if the rope snaps and he needs aid I won't just walk away. I am acting consistent with somebody who is on Team Gold Bar Retrieval.
If I'm the doctor working remotely, I'm on Team Surgery. If I'm hanging around the neighborhood pool, I'm on Team Pool Party. Even if my only explicit undertaking had nothing to do with the person actually in danger, everyone else there would expect implicitly that I'm on The Team. If you're alone at the pool at night, do you feel differently than if you're surrounded by strangers, with regard to your risk, because presumably somebody will aid you? Of course. If you want to do a crazy risky dive and everyone there says "ok dude but that's stupid and it's on you if you kill yourself", they are no longer on your Team.
I don't think you have any obligation whatsoever to save any of the kids in Drowning Kid River because under the original facts nobody is relying even implicitly on the assumption that you'll aid them. But let's say your uncle, out of generosity and wholly at his own expense, operated a Kid Catching Contraption with a net and a hydraulic platform that successfully saved 99% of the children, and that Megacity was aware of that. In fact, that's why it's #999 on their list, the Kid Catching Contraption does a fair job all things considered, 1.5 dead kids per week in a town of 50M is plausibly the 999th most important thing to deal with. Your uncle never promised to run it forever for free, but when the city sent him a letter of thanks signed by the Mayor and a giant gold-plated ceremonial key to the city, your uncle didn't object. Parents in Megacity start to talk about the Kid Catcher Cabin downriver, everyone knows the directions, a lot of people have had to go pick up their sopping wet brats from the west bank, waving at your uncle as they drive away. At this point, operating the Kid Catching Contraption is impliedly part of the deal for being the guy who owns Kid Catcher Cabin. I think in this scenario, you would have an obligation to continue to save 99% of the children if you accepted the Cabin, because everyone involved is relying to their detriment on the assumption that you will continue to do so. However if you published a notice in the paper explicitly saying the Kid Catching Contraption would shut down in 60 days, cc'd to the desk of the mayor, put up a public notice at the Kid Collection Site across from your cabin, making it super clear that somebody else is going to have to address this problem, then I think you're free.
I have never once represented being on Team Starving African Kids. (The closest I have ever come is a one-time UNICEF donation, made to prove a point to somebody back during the covid hysteria about the relative global danger of dysentery vs covid.) The starving African kids have no expectation of my aid, and no reason explicit or implicit to believe I am obligated to aid them. Absolutely nothing they do is done in reliance on any expectation of my taking positive actions to aid them. Even Singer's original drowning kid may have, in the back of his mind, the idea that it's okay to swim in pools because you could yell for help-- like maybe the fact that foot traffic in the area means somebody may happen to walk by and so it's 5% safer and that tilted the scales to where the kid does it. I don't even have that extremely attenuated connection in regard to starving African kids.
I have little to comment, other than that I largely agree, and would like to point out that this line of thinking sounds a lot like role ethics (as seen in Stoic and Confucian philosophy, which I discussed in another post). E.g. "team surgery" is equivalent to adopting the social role of a doctor.
Thanks, I went back and read that comment, and it is interesting, I had never looked into that "role ethics" as it wasn't covered in my Ethics course at college (we did the usual Western philosophy survey of virtue theory, divine command theory, natural law theory, consequentialism).
I spent most of my adult life working in a job that had some public utility function, so I was always on Team Civil Rights or Team Public Safety just by virtue of showing up 40 hrs a week, and it's easy to determine your professional obligations in that context. What you describe as the situational roles are where the harder decisions enter, deciding if you have assumed a duty under these circumstances to perform in this role. I think where I'd disagree is that "being a human being in the cosmopolis" doesn't seem like a well-defined role, and if you define it to require aid to remote strangers then the definition is just swallowing the question. But it's an interesting framework, and I'm glad you brought it to my attention.
there are two reasons that I, personally, have put a pause on helping the drowning child.
I don't know if you can make this into a unitary thought experiment, but to me it seems like I was helping this drowning child, and we got out of the lake and I went on my merry way only to see him sprint back to the lake and throw himself in again. how many times am I obligated to save the suicidal child? maybe he he grows out of it, but there's every chance he becomes a suicidal adult.
The other one: I was contributing into a pot for a lifeguard to save all these drowning children, when suddenly a coalition formed of people with much more power and money than me who have committed to not using the money to hire a lifeguard, and further plan to make it as hard as possible for such a whip round to ever happen again.
So, now I only help drowning children where I know I'm not shoveling sand into a bottomless pit, and I know I'm not contributing money to what will eventually become the not hiring a lifeguard pool.
I would assume that the people like Scott who call themselves "effective altruists" have no problem with people discontinuing altruism that isn't effective and being more selective about it. If you throw away some of the weirder EA stuff like longtermism and shrimp welfare, your objection is basically the whole point of what they're doing.
I agree that people are sloppy about their assumptions, you'll hear that Narcan/naloxone saves X lives per year, but if you consider how many of those people just OD again a few weeks later it's not as effective as advertised. I don't know to what extent EAs factor such things into the equation. Theoretically, if you did factor that in, and it were still cost effective, I think they'd be fine with it, whereas maybe you'd have a principled objection to throwing good money repeatedly at bad people to save them from their own volitional acts, and if so that's where your objection would diverge from theirs.
"I went on my merry way only to see him sprint back to the lake and throw himself in again. how many times am I obligated to save the suicidal child? maybe he he grows out of it, but there's every chance he becomes a suicidal adult."
Have you considered that maybe this is just motivated reasoning to avoid the obligation to donate? I don't think it works as an analogy for the actual situation these Africans find themselves in. Not like the warlords hold elections.
It's not *motivated reasoning*. It's an attempt to *formalize* why we don't want to donate. The original proposed moral rule doesn't fit our intuitions well about when we have to donate, so we try to figure out why we don't want to and modify the rule to take it into account.
Rationalists are notorious for creating rules and trying to follow them without sanity checking them. A rule that says that we must donate under circumstances where nobody thinks they must donate has failed the sanity check.
The solution, in both cases, is in forming a coalition that can pool it's resources and effect a larger scale solution: institutionalizing the suicidal child, or mobilizing political opposition to the "Never hire a lifeguard" coalition. You aren't morally responsible in isolation anymore, but your moral obligation has shifted to becoming part of an organized effort to effect larger scale change.
When you make statements like "it seems" and use pronouns like "we," you are making implicit psychological claims about an underspecified group of people without any empirical evidence. If this is your methodology, then a priori you should be explaining why most people DON'T think like you - otherwise what are you doing if people a priori already agree with your moral values?
Also, seemings, like tastes, are agent relative. It doesn't make sense to just say "Carrots are tasty" like this is just a plain fact. It also will do nothing to persuade people who hate carrots or have pica and eat weird shit rather than carrots.
Your inquiry is propped up by a house of cards that crumbles quickly because the presuppositions don't have much rational appeal to people not already stacking cards in the way you are - which again, if was true of most people a priori, you likely wouldn't be defending things that "seem obvious."
To be fair to Scott, he's arguing prescriptively, trying to come to a conclusion about what people *should* think, not objectively explain what people *do* think. Whether or not this is a worthwhile pursuit will vary according to one's opinion, I suppose.
The people who care a lot about the suffering of farmed shrimp don’t consider it their responsibility to do anything at all about the suffering of shrimps that get eaten by whales. I suspect that many of these paradoxes are due to a similar effect. The suffering of people in Africa is perceived as “natural” and not too different from the suffering that anyone would probably face if they started living in the woods away from civilization, modern medicine, etc.
A related but more concrete effect is the “it’s not gonna change anything” effect. A few (or even a lot of) mosquito nets won’t really change the fact that those people and many of their descendants will keep living in a malaria infested country, it will just be a bit more bearable. Malaria used to be common in many European countries, but looking back, any amount of mosquito nets would have been a fairly inconsequential blip in history, compared to civilization finally progressing to the level where it can just decide to eradicate it completely.
This might be a fallacy, but it at least has a psychological effect.
Another thing more directly related to the article: the coalition of angelic intelligences thing was pretty convincing to me as a prescriptive general rule, and I think it’s a good description of what many people actually think in practice, consciously or not. The thing is that while the post took it very abstractly, people might be thinking about a more concrete coalition of real people. And in that context, they might perceive someone living a life of subsistence farming in Africa as not really part of the coalition at all.
I think this is exactly right, only I'd dispute that "natural" is a good category--I'd say "within our control".
The issue is, we have some control over what is within our control; after all it was once natural for all of us to be completely at the mercy of disease, the elements, etc. but now if a crocodile escapes from the zoo and eats me, that's not perceived as natural anymore.
The point is that whales eating shrimp is a hard problem to bring under our control--insofar as we can imagine any avenues at all, they have huge costs, huge tradeoffs (like killing all whales), and may not even affect the scale of wild shrimp suffering.
But malaria in Africa *is not like that!* Or at least, there's plausible reasons to think it's not. Malaria in Africa may be natural, but it's likely that with a not overwhelming amount of resources we could make it otherwise. The EA argument is that when something is plausibly in our control with moderate resources, we should do it--don't let the only thing stopping you from ending a disease be the ill-defined consideration that it's "natural".
The second point I'm a little more sympathetic to, but I think is still debatable: most of us presumably would like our cancer treated even before we live in the world where cancer is perfectly prevented, or police to arrest assailants even before we live in the police abolitionists's dream world where no one is compelled to commit crimes in the first place.
Moreover, is it not possible that nets are part of a strategy precisely to eliminate malaria in poor countries? It should be pretty difficult for an intervention to be cost effective, have a big impact on malaria incidence, and yet be unable to roll out as part of a strategy of complete malaria eradication.
"The people who care a lot about the suffering of farmed shrimp don’t consider it their responsibility to do anything at all about the suffering of shrimps that get eaten by whales."
Bring back commercial whaling! How many shrimp (poor, innocent, suffering, cute little shrimpies) does one big bad whale eat over its lifetime? Clearly the utilitarian calculation of the greatest good for the greatest number means the whales must die!
I mean, Rawls' original position led to a different system than what most rats believe. For what it's worth, his original position led to a different system than what most people believe, to wit. I think the ultimate point is that we as humans aren't very moral--but viz, what normatively should happen (we live in a world where positions are given to merit, ethnostates, working class dictatorships) is generally diffracted and most certainly not one to one with the real world as a whole, which isn't... which is pretty normal. I mean, it's a feature of Christianity. I'm not sure how much the whole real-world/moral-world disjunct is thought about in the literature, but at least on an empirical basis qua central limit theorem if there were some objective scale to "goodness", we'd all fall within 1-2 SDs of it.
The drowning child thought experiment leads to what some would consider a reductio ad absurdum. We should spend all of our money (or almost all of it; I suppose it would be okay to eke out a meager existence) on saving children in the most cost-effective way (let's say malaria nets). Of course, if we did that, we'd be essentially impoverished ourselves. And all of those other people who are living in precarious situations are also morally obligated to help out those people who are even worse off. Why buy food for your malnourished child when it would be more cost-effective to chip in for malaria nets for some other poor people? There's only one person in the world who's the poorest. All the other 8.2 billion people have someone they should be helping. They should cast aside every sort of luxury, every extra article of clothing, every bit of food beyond the barest minimum of calories and nutrients needed to live. Do whatever is necessary to help people who are worse off, even if you, in the grand scheme of things, are incredibly poor yourself. Eventually nobody has anything and we're living in some sort of post-scarcity utopia or something. Except I don't know how we'd have anything resembling civilization at that point.
I consider this somewhat parallel to Parfit's repugnant conclusion. And if utilitarianism keeps getting us to these absurd conclusions, maybe the problem is with utilitarianism.
The problem is that no one single moral principle, including utilitarianism, explains all human moral intuitions. We are attempting to reconcile competing principles, all held in the mind simultaneously, all more or less equally powerful in certain contexts, and which most likely evolved at different times and selected for different environments.
Morality is inherently context dependent, that is to say.
I always thought the Copenhagen Interpretation of Ethics was some sort of straw man argument that no one took seriously. I categorized it as one of those ethical koans that have no obvious best answer that people get obsessed with.
Speaking of koans, do non-Western cultures obsess over ethical thought experiments? If so, do they play headgames with cultural obsessions other than human lives?
Apologies if I sound dismissive of ethical thought experiments, but unless one is totally attached to a particular (unprovable) philosophical view, there are no right or wrong answers to these questions.
I would argue that no society obsesses over it, including ours. The people in this thread, and those who obsess over ethical dilemmas, are a pretty narrow demographic.
The universe doesn't care about drowning children.
https://imgur.com/pictures-sad-children-by-john-campbell-2ODpSOm
Unless you believe it all God's plan, the universe doesn't care who lives or who dies. And the rules of our universe ensure that everything will die.
But humans are social animals. And as a general rule, we extend the most effort to helping our family, our friends, our community, and our tribe in that order. And we generally extend extra assistance to children who may have no social connection to ourselves. Some humans may walk away from a psychological inability to connect to the fundamental behavioral codings of our species, but for the most part, if we saw a child of another tribe drowning we'd instinctively try to help the kid.
I think the Copenhagen interpretation of ethics is using the wrong metric. IMHO a better way to think about it - your moral obligation to help fix a situation should not be greater than the amount of information you have about the situation. This is a useful heuristic even under a pure utilitarian perspective - if you do not really know what is going on, your misguided attempts to help can easily make things worse! If you know for sure a child is drowning and you know enough about the situation to believe you are capable of helping (as opposed to getting in the way of mire capable rescuers) - yes, you should do it. Same for the chocking examples in this post. But if you are being Pascal mugged - well you do not actually know what is going on, so no major obligation to help. Starving children in Africa - do you actually know how to make the problem better without side effects that could make things worse overall? Will your money support food for children, or be stolen and support political corruption, which would result in more starving children in the long run? Will your money be used to buy food from wealthy countries in a way that would reduce demand for local produce, drive local farmers out of business and make things worse in the long run?
Traditional Copenhagen interpretation is then just a second-order heuristic - once you've interacted with the problem sufficiently, surely you have a lot of information about it, and should know how to properly help.
The problem here is that there is no way to know if you know enough--unknown factors are unknown. This is generally true, but esp. so in emergencies.
Morality is not generalizable like that, it's more specific. To put it in stark terms, if it's a child of your people, yes, if it's a child of foreigners, no.
The key point of morality, the extreme that limns it, is the case of self-sacrifice. But self-sacrifice has no "engine" outside of "us" vs. "them." You self-sacrifice for us (family, kin, clan, folk, nation), not for them.
Of course in real life, if the cost is small enough, you may sacrifice for a "them" if you're not specifically at war with "them;" if you can swim and it's no huge risk to you, you might rescue a child of ANY genetic distance, or even a puppy or a kitten ...
And this is connected to the relativity of morality: "us" and "them" are not fixed, absolute terms, they depend on various conditions (e.g. with the hypothetical alien invasion, the human race finally does actually become an "us," one race that we're all part of, but since under normal circumstances there is no greater oppositionality, we make do with the relative oppositionalities we have).
You are neglecting the effect of empathic distress. For very many people, the facial expression of any child in distress is enough to take personal risks in order to help them. Some people extend this to members of other species, as you point out. People have died attempting to save complete strangers.
This doesn't undermine your argument that morality is relative, and that the terms are not fixed. I'm just expanding the factors that affect the outcome.
Yeah point taken, although I should think that even that (and things like mirror neurons perhaps) are probably modulated somewhat by relative genetic closeness vs. distance.
Essentially, because having sex isn't the only way of raising the likelhood of your genes being passed along (for there is nothing special about the copies of those genes in your gonads as opposed to your cousin's say, or even more diffusely, someone in the same town), there will be an in-built bias or preference towards one's own kind in general (or to put it another way, gene cluster groups that DIDN'T have a fair number of people unconsciously thinking that way, would be that bit less likely to be helpful for the inclusive fitness of the members comprising them).
But it's probably true that for some people the markers of "innocent progeny needing protection" can be salient to the point of overriding the preference from genetic closeness vs. distance.
Still even those signals, strong as they are, can be overriden for the sake of racial preference (e.g. cf. the direction in some of the more hardcore Jewish religious teaching like the ultra-Zionist "Torah of Kings," that it's ESPECIALLY the children of enemy groups that should be killed, for obvious reasons).
Everything is an interaction, I find. The influence of genetic factors depends upon the environment we live in, and the effect of the environment we live in depends upon genetic factors. I would point out that while a preference for genetic relatives is one way to promote a gene line, it isn't the only way, an impulse toward cooperative norms with strangers can, in certain circumstances, also promote one's family (if we all feel a generalizable desire to help young children, everyone's children stand to benefit). It's a complex interplay of different factors, to be sure.
That's true, but for most traits the "nature" input is more impactful on variation than the "nurture." This is of course generally accepted for traits in individuals, but it's politically verboten to say for races and ethnic groups - yet I can see no reason whatsoever not to extend the same percentage weightings that are generally accepted for individuals these days (e.g. intelligence 60-70%/30-40%) to groups.
That being the case, stranger inclusion is always going to have less importance the more genetically distant the stranger. There are other good reasons for that too (e.g. avoidance of groups who have different diseases than yours, the difficulty of communication the greater the genetic distance, the more different the average thought habits and psychological traits are, etc.)
Again, the importance of "everyone's children benefiting" has limits. Kin altruism is pretty strong, but that fades gradually. The nation strictly so-called (the ethnostate) is the largest feasible grouping where "caring for strangers" (within that national grouping) could still have some relevance to one's inclusive fitness - but beyond that, as an extension to "humanity," not really, UNLESS (again) you were talking about an alien invasion or some great natural disaster - something that bodies up against humanity as a whole and firms up the outline of what "humanity" is, something that might affect everybody, like for example a catastrophic climate change or something like that.
Of course this being a trait like any other that will likely have a strong genetic component, some people will be altruistic to more distant strangers; some are even altruistic to other species. But that's not really where the average is (just like it's not at the other extreme of "kill everybody you meet" :) ).
"That's true, but for most traits the "nature" input is more impactful on variation than the "nurture."
First off, where do you think that has been established? If you are referring to the twin study methodology, then I disagree the research supports the conclusions you have drawn. In particular, we cannot conclude that if 70% of variance in IQ is predicted by genes, then only 30% is due to the environment. If you are referring to something else, could you share it?
Obviously, it makes sense that extension of benefits at some sacrifice to oneself should fall off as genetic distance increases, but the problem is that the individual has no direct way to reliably measure genetic distance. We use proxy indicators instead, including physical appearance but also more subtle things like facial expressions and similarity of espoused beliefs. This was not selected against because benefiting strangers has second order effects on one's own in-groups (esp. if there are other ties like similarity of beliefs). This can even extend to all humanity, if all humanity is linked by economic and other ties. Note that these ties were created by human behavior in the first place, which is presumably just as influenced by our genetic inheritance as any other set of behaviors. It isn't that surprising if many people have the same sort of generalized positive feelings toward humanity that they have toward strangers they pass on the street.
Whatever "secondary effects" you're talking about with strangers are going to be stronger with more familiar faces.
In some ways we don't disagree that much, I just think there's a steeper "falloff" than you do, the more strange the stranger in question. Emotionally I'm not against kumbaya, but it's basically rot: in terms of triage, min-maxing, you know more about and can better help those who are genetically and culturally closer to you, and bar emergencies those more distant can look after themselves better than you can too :)
Re. "nature" being more impactful, it's not just twin studies, it's the entire trend of science these days, and there are more and more layman's books about it (there's a particuarly good one that came out in the last 5 years or so but for the life of me I can't remember it offhand, it was a NYT bestseller type of book). I think it's generally accepted that the "weighting" for most traits is towards Nature. This was fought tooth and nail for a long time, but it's just become impossible for conscientious scientists to ignore.
The sticking point is racial and ethnic average group differences, and that's why the battle was fought at the individual level for so loing, because once that dam breaks, it's curtains for the idea of social engineering, and people are going to realize with horror just how horrific the 20th century was, dominated as it was by that idea. "Nurture" (uber alles) reigned for a long time, mainly because the job of demagogues, educators, etc., would be better if it were true, but it's just not.
Before we are anything else we're a certain "build" of body and brain, for which the blueprint is DNA. Obviously that "expects" a certain kind of environment (e.g. not a lava hellscape); obviously too, it's ULTIMATELY environment that shapes genetics, but that's over a longer term. But in individual and group terms (politically and morally) we have to come to terms with the fact that the late 19th/early 20th century view was more correct than what came after it (which stemmed from various forms of Left-wing wishful thinking, fromk Boasian anthropology etc.)
This post feels personal. I was an aid worker in DR Congo and South Sudan, and now I have a comfortable life in suburbia. And I still know so many aid workers who are, eg, doctors working long hours in terrible conditions with minimal supplies in those countries. They save lives everyday, and they could get an easier, better paying job in America tomorrow and have a more pleasant life. Many do.
$100 in Congo could buy vital supplies for a dying baby, or it could ship the American doctor some chocolate bars from home. But that framing takes a lot of expats down and sends them back to America where they don't have to think about it. The doctors that last are the ones who decide that they can have chocolate, cold cokes, and good internet in the field without being wracked by guilt.
Faith in God helps.
This is a great post I hope Scott reads it
That's because being a good person is a spectrum, not a binary, and very few of us expect anyone to be perfect.
"$100 in Congo could buy vital supplies for a dying baby, or it could ship the American doctor some chocolate bars from home. But that framing takes a lot of expats down and sends them back to America where they don't have to think about it. The doctors that last are the ones who decide that they can have chocolate, cold cokes, and good internet in the field without being wracked by guilt."
i've worked in the aid sector before (Peace Corps) and my best friend was actually an aid worker in South Sudan as well, so I can definitely identify wth this strongly.
What years were you in South sudan? Wondering if you overlapped with him (he was working for the relief and development arm of a very conservative Christian denomination, though that wasn't his religious view himself).
I was there 2014-2015, plus a short-term assignment in 2016. But I spent all my time in Juba, and mostly only saw expats from other orgs at church and frisbee.
He left slightly before you, i think, so you wouldn't have overlapped.
How did you like your experience? He liked the work he was doing but IIRC didn't feel like the people there really appreciated it.
My experience was overall positive. I did finance, so I didn't see much of beneficiaries.
Like most things, aid gets messier, more complicated, and less like you imagined the closer you get, but we did good work.
If the point of this is to convince someone who would save a drowning child that they should also send bed nets to africa, it fails to engage the core distinction -- saving the drowning child is obviously net positive for the saver (reciprocity from the parents, hero on local news, etc), whereas sending money to africa is at best neutral, beyond the direct monetary cost (status might be negative due to "better-than-thou" social cost to friends, whose own "minimum socially acceptable charity" has now been raised a bit by you).
If the point of this is to convince someone, even someone perfectly altruistic, that, in general, their marginal dollar is best spent on the top rated GiveWell charity, it fails here as well. Historically, in civilization, humanitarian aid would have been mostly net neutral -- in a malthusian world aid simply accelerates the arrival to population collapse. Only technological and institutional progress can actually raise the long-term well-being of a population. So the question is not whether saving the life in africa is net good -- we'll assume it is -- but whether that marginal dollar is better spent on an OpenAI engineer's doordash, because it accelerates the arrival of advanced AI by a few hours -- AI that will eliminate scarcity at the human scale permanently and solve disease and hunger in africa forever -- which will save 100 african lives instead of 1. (Or vice-versa, depending on your beliefs, that marginal dollar is better spent attempting to food-poison the OpenAI engineer's doordash, so that advanced AI is delayed by a few hours, and the extermination of all humans is thus postponed equivalently.)
The meta point is that epistemic uncertainty is a 100% valid reason to not give money to africa, since it is not clear that direct aid is the best marginal use of a charity dollar at all, versus fundamental research, or indeed, just contributing to the economy in the general way of pursuing your own self interest. This is deeper and more complex and profound than we're tempted to think it is: to extend the AI example, if you do assume that AI is either some force for enormous good or bad and that almost nothing else matters for the future well-being of humanity (a perfectly rational position), then the moral coloring of _playing video games_ in 1996 -- the actual economic activity that enabled the current AI boom -- becomes quite stark. So complex and uncertain is the post-hoc measuring of moral quality, that a purely selfish, wasteful and innocuous activity like playing Age of Empires as a kid becomes the necessary accelerant for the transcendence (or extermination) of all of humanity.
I also think aid needs to account for role of Malthusian dynamics, but that doesn't mean that aid is futile. Even if each prevented death causes one more tragic death down the road due to resource constraints, you can still have a big impact by preventing, say, blindness (e.g. Helen Keller Intl).
Or, what you could do is contribute to a charity (or a government agency) which seeks to improve other nations institutional and technological progress (while still saving some lives).
First, great post—I like the idea of explaining some of our moral intuitions in terms of the practicalities of coalition-building.
Now I’m wondering this: Have other people noticed that the thought experiments intended to make utilitarianism look impossibly burdensome assume a world in which (like ours) altruistic behavior is rare, limited, and haphazardly deployed?
If we lived, instead, in a world in which (say) half the population believed in an obligation to help the neediest, that would easily be enough to ensure that only expensively cured diseases are fatal. In that world, the altruist doesn’t have much extra to do, because the marginal utility of looking out for one’s own interests (which we all do much more efficiently than addressing the problems of strangers—go capitalism!) would only occasionally be outweighed by the possibility of helping someone else. And when it is, it would almost always be to help someone nearby—far-away people will have other altruists near them who will help them more quickly and easily.
I’m not sure if the people making these anti-utilitarian arguments are aware of how much they depend the contingent circumstances that prevail on our planet. I understand that this is the only reality there is, but don’t ethicists like to think that their abstract arguments would apply to all rational beings, in whatever social configuration they find themselves in? Yet arguments around utilitarian demandingness seem to be clustered in a region of social possibility space where the behavior they want to argue against is already rare.
(The decreasing burden of altruism as more other people become altruistic suggests there could be a tipping-point dynamic. OTOH, that cultural equilibrium would *not* be *evolutionarily* stable, obviously. I digress.)
Scott’s support for Rawls’s original position hints at this. Our rational pre-selves would choose a world (not just individual behavior but a whole world) in which the rich agree to take cost-effective measures to help the needy. Scott says that it’s “virtuous, but not obligatory” to behave that way here in our world. I don’t know where to draw the virtue/obligation line, but surely one of the benefits of the more altruistic world is that our moral burdens are much lighter.
>I’m not sure if the people making these anti-utilitarian arguments are aware of how much they depend the contingent circumstances that prevail on our planet. I understand that this is the only reality there is, but don’t ethicists like to think that their abstract arguments would apply to all rational beings, in whatever social configuration they find themselves in?
I don't get what your point in bringing this up is. Even if we grant the premise that most or all ethicists are morons -- or that they "like to" be morons, whatever that means -- how does that weaken the case against utilitarianism? These seem to be two wholly unrelated questions.
I'm not even sure what to make of section 1. The point of geographic distance (and in the case of helping the future, time distance) is that those distances are also inferential gaps, ie the causal connection between you and the benefit passes through many uncertain nodes.
My take. In order to be morally obligated to save the drowning child, you need to be in a unique position to save the child. If you are an old man in an expensive suit, and you are watching a child fall in a river together with a team of professional swimmers wearing swimsuits, you should likely not be the first to jump in the river - as the swimmers are in a better position to save the child. If all the professional swimmers are psychopaths and does nothing, you are now obligated, because you are now in a unique position again.
If the problem is systemic, such as the case of the children drowning every hour by your cabin - now your obligation is to alert the local authorities (or society at large), and save all the children until you can get help, and eventually a permanent solution is put in place. If the authorities/society fails to provide such a solution, or are not interested in doing so, you are no longer in a unique position to save the children. Therefore you also have no special obligation anymore.
If you're the only one who cares about saving children in your society at large, I don't think think it makes sense to say you're under a moral obligation to do so. Of course you may still do that, and that would make you a hero, or at least a very good person. But I don't think failing to do so makes you immoral.
A good example would be a doctor in a third world country as others have mentioned. You can choose to work long hours for low pay in bad conditions saving lives of poor children. This makes you a hero. However, you are under no moral obligation to continue doing so. This is because you are in no unique position to solve this systematic issue (poverty). Other doctors, working normal jobs at home could do the same, but are not willing to - and in my view, they are not immoral to fail to do so.
However, if a doctor is on a flight and someone suffers a heart attack, the doctor is obligated to help - because in this situation you are likely to be the only one who can.
That's an interesting expansion of the typical way to frame these problems: there isn't just "good" vs. "bad", there's "heroic", "minimally acceptable," and "bad."
And of course it's just a small additional step to make the whole thing a spectrum with no clear boundaries at all.
Did you read something to change your view on Rawls from November when you wrote "veil of ignorance seems neither very original nor very good"? Genuinely curious
Think more consequentially. Think policy.
A policy is not just a guideline. A policy is a pre-commitment which reshapes the decision environment, to prevent bad outcomes which would predictably come from doing what seems to be the right thing at the moment.
For instance, the US used to have a policy of not negotiating with terrorists. You don't pay them money, or release their terrorist friends from prison, in exchange for their releasing hostages, because it both encourages more terrorism, and gives those particular terrorists more resources to commit more terrorism.
The problems posed in this post are designed to arrive at the conclusion that the people in the US should give their money to poorer people in other countries until those people are as well off as those in the US. The conclusion appears inevitable because all of the problems are posed in ways which ignore the future consequences, to the US and, looking even further ahead, to the world, and to the Universe, of everyone in the US embracing an ethics in which everyone acted in that manner.
Continuing to ask these questions in public is counter-productive, because the vast majority of people are incapable of looking far enough ahead to get a well-justified answer, and therefore will probably condemn as immoral anyone who does, since they're unlikely to get a satisficing answer by what is basically chance. Social pressures thus practically guarantee that only very suboptimal answers will be given publicly, leading to a consensus around harmful morals.
It's better to let people muddle along with their instinctive morality, which evolved to be both beneficial and evolutionarily stable, than to go all High Modernist on morality, and try to logically deduce how we all ought to behave, in a social environment which guarantees getting a poor answer. That's what gave us Marxist-Leninism.
There's a third option: Find a policy that effectively changes another society to become less of a burden on themselves and on us (in fact, a net positive).
Channeling my annoying teenage Objectivism: I think Ayn Rand's "The Ethics of Emergencies" is on point here. She says to help a drowning stranger in an emergency, but only if it's a *real* emergency — a rare, unanticipated thing. I think this helps resolve the "people drown in the river on the regular" issue as well as the Alice vs. Bob in heaven issue.
Part of our drowning child intuitions come from notions about "what kind of person" would let a child drown, as opposed to the narrow ethics of the situation. Someone (Sam Harris maybe?) has an analogy of learning that both of your grandfathers fought in WWII:
Grandpa Bob was a bomber pilot, and flew many missions dropping bombs on Dresden. Bob tells you that intelligence later confirmed that his bombing missions killed hundreds of civilians. "That's just war," he says.
Grandpa Jim was an infantryman and fought in ferocious urban combat during the liberation of France. Grandpa Jim tells you that one day in the heat of battle, he was fighting house-to-house, shooting and bayoneting enemies at point-blank range. One of the people he impaled with his bayonet turned out to be a teenage girl trying to escape. "That's just war," he says.
Even though Grandpa Bob clearly did a lot more harm from a utilitarian perspective, somehow your impression of Grandpa Jim changes more when you hear his war stories. I think this is because we have some general notions about what kind of person would be able to shrug off impaling a teenage girl with a bayonet, and we make some extrapolations about their expected future behavior (or we recharacterize their past behavior in light of new information).
Ditto for the drowning child analogy: we intuit that ignoring the drowning child is monstrous (partly) because of *what kind of person who would do that*. But that doesn't mean the intuition is correct!
That's an interesting example because it plays with our temporal bias the same way the drowning child argument plays with our spatial bias. Clearly you'd be at least equally horrified to live next to Josef Mengele in 1944 vs. a serial rapist.
I largely agree with John's analysis here, but the complicating factor is what the neighbors will think. You have become, after all, the kind of person who would live next to a Nazi, or a rapist. The deciding factor will probably be community norms.
This Copenhagen school is terrible and sets up really perverse incentives. It gives everyone an incentive to "touch" or become "entangled " with as little as possible to avoid responsibility or ever bring blamed for anything. This actually happens very regularly in organizations and because of the incentives of our legal system. Everyone just won't touch something, even when everyone knows it's bad and everyone wants to stop it, because the touchee is the one who will be blamed. For optimal outcomes you would implement a policy that was almost a total 180 of the Copenhagen school, such that touching something when one has a choice not to absolves you of some level of responsibility. Because at least you helped, or tried when you didn't have to, even if your efforts were a total failure. And I think that that is actually a lot more how people actually react on an intuitive level, when they're not sitting around trying to think of perfect rules that will apply to any given crazy hypothetical.
You and everyone else, are trying to make this more complicated than it is. Morality is a localized (to those people who follow your same rules or those in very close pshsical proximity) set of informal rules about what is "Good" and "Bad". Sets of moral rules are rarely totally consistent and are evolved through memetic selection to judge common issues. It makes no sense to ask about a river of the drowning damned, moral rules just don't cover that scenario.
In my moral system it is "Good" to rescue children from drowning. It is also "Bad" to ruin a $3000 suit by getting it wet. That's about where the morality of my society ends. It's up to you to consider tradeoffs and mechanisms.
"No mother, no Father only nothiness above"
The first thing that comes to mind is that by saving every single child that goes by in the river, the megacity faces no consequences for its actions and so will never bother to fix their problem. This differs from the occasional act of child-saving in that occurs in one-off scenarios because we are happy to accept that no system of child-protection is completely perfect.
The moral responsibility for such a systemic failure lies with the people having the children in the first place.
The only sustainable thing is for the megacity to eventually help themselves. Sure, you can spend a bit of time helping them help themselves but the vast majority of the work needs to be done by the megacity and the parents of the children. If this doesn't happen, all that occurs is resources are taken away from societies (and gene pools) who have figured out that caring for their children is a good idea and are given to societies what don't care enough about their children. This leads to the "don't care for children" society to relatively flourish while the "care a lot about children" society relatively declines. This is the exact opposite of the long-term morally good thing to happen.
Now obviously, this assumes relative equality between societies. If one society is composed of people who just had most of their arms and legs blown off as a result of a war they had no part in, it makes sense to help this party more with childcare since this isn't a failure of having a good society but of external circumstances.
>The first thing that comes to mind is that by saving every single child that goes by in the river, the megacity faces no consequences for its actions and so will never bother to fix their problem. This differs from the occasional act of child-saving in that occurs in one-off scenarios because we are happy to accept that no system of child-protection is completely perfect.
I think you might be missing the part where the megacity is unreasonably large, so all these kids *are* just the ones slipping through the cracks.
Then Megacity needs to be broken up, by force if necessary. In any case, moral choice is an inherently collective action. My city and Megacity need to sit down and work out how to save more children.
>And suppose that if Alice was in Bob’s situation, she would do even less, but in fact in real life she satisfies all of her (zero) moral obligations. If there’s only one spot in Heaven, should it go to Alice or Bob?
C. S. Lewis wrote on the Christian answer to this question in "Mere Christianity":
"Human beings judge one another by their external actions. God judges them by their moral choices. When a neurotic who has a pathological horror of cats forces himself to pick up a cat for some good reason, it is quite possible that in God's eyes he has shown more courage than a healthy man may have shown in winning the Victoria Cross. When a man who has been perverted from his youth and taught that cruelty is the right thing, does some tiny little kindness, or refrains from some cruelty he might have committed, and thereby, perhaps, risks being sneered at by his companions, he may, in God's eyes, be doing more than you and I would do if we gave up life itself for a friend.
"It is as well to put this the other way round. Some of us who seem quite nice people may, in fact, have made so little use of a good heredity and a good upbringing that we are really worse than those whom we regard as fiends. Can we be quite certain how we should have behaved if we had been saddled with the psychological outfit, and then with the bad upbringing, and then with the power, say, of Himmler? That is why Christians are told not to judge.
"We see only the results which a man's choices make out of his raw material. But God does not judge him on the raw material at all, but on what he has done with it. Most of the man's psychological make-up is probably due to his body: when his body dies all that will fall off him, and the real central man, the thing that chose, that made the best or the worst out of this material, will stand naked. All sorts of nice things which we thought our own, but which were really due to a good digestion, will fall off some of us: all sorts of nasty things which were due to complexes or bad health will fall off others. We shall then, for the first time, see every one as he really was. There will be surprises."
Thanks. That’s new to me, more so than Drowning Child.
The sad thing is, this is all just a multiplayer Prisoner's Dilemma.
If each of us saved one drowning child, the problem would be solved. Instead, there is one person complaining that saving 1000 drowning children is too much work, and there are 999 bystanders yelling at him: "of course you idiot, saving 1000 drowning children is too much work, therefore we have rationally decided to save none".
Very well put. USAID was the equivalent of “everyone put a few dollars in this jar, and we’ll use it to hire some lifeguards” before Elon fed it into the wood chipper.
But also USAID was supposed to be "we'll hire some lifeguards" and then blossomed into "well and besides lifeguards, some of that money will go to my cousin Moe for hosting a party for the well-heeled in a foreign city, don't worry, it's all the same as if it went to the lifeguards!"
And instead of a collection jar, it’s funded via theft through intimidation and the threat of imprisonment.
Could this argument also be made without hysterical exaggeration, or would the normal version just make you sound like an asshole?
If Elon had just cancelled the party funds, we wouldn't be having this conversation. In fact, I'm pretty sure this specific post is in response to the people saying, "it's good Elon cancelled the lifeguard fund because the river was far away".
I always took the Copenhagen Interpretation of Ethics to be a joke, meant to help people realize their ethical inconsistencies. I don't think morality makes much sense if you take it seriously, as your various scenarios illustrate.
It seems clear that we have particular duties to certain people that we don't have towards strangers: our families, those we have made promises to, etc. But I don't think you can get around the drowning child experiment: yes, I owe him fewer duties than I do to my brother yet it is still the moral thing to do to rescue him. And the same for those overseas that I can help, but can't see.
C. S. Lewis had this to say about charitable giving:
"In the passage where the New Testament says that every one must work, it gives as a reason 'in order that he may have something to give to those in need.' Charity—giving to the poor — is an essential part of Christian morality: in the frightening parable of the sheep and the goats it seems to be the point on which everything turns. Some people nowadays say that charity ought to be unnecessary and that instead of giving to the poor we ought to be producing a society in which there were no poor to give to. They may be quite right in saying that we ought to produce that kind of society. But if anyone thinks that, as a consequence, you can stop giving in the meantime, then he has parted company with all Christian morality. I do not believe one can settle how much we ought to give. I am afraid the only safe rule is to give more than we can spare. In other words, if our expenditure on comforts, luxuries, amusements, etc, is up to the standard common among those with the same income as our own, we are probably giving away too little. If our charities do not at all pinch or hamper us, I should say they are too small There ought to be things we should like to do and cannot do because our charitable expenditure excludes them. I am speaking now of "charities" in the common way. Particular cases of distress among your own relatives, friends, neighbours or employees, which God, as it were, forces upon your notice, may demand much more: even to the crippling and endangering of your own position. For many of us the great obstacle to charity lies not in our luxurious living or desire for more money, but in our fear — fear of insecurity. This must often be recognised as a temptation."
Lewis walked the walk, as well as talking the talk. He set up a charitable trust and 2/3rds of his book royalties were paid into it. When he died his estate was only worth 38,000 pounds: pretty small potatoes considered he had sold millions of books.
That’s a wonderful passage by C.S. Lewis, thank you for sharing!
"When he died his estate was only worth 38,000 pounds: pretty small potatoes considered he had sold millions of books."
He was childless, right? So it was porbably an easier thing for him than for many other people (not to say that it's any less admirable).
He never had any biological children, though he did have two young stepsons after he married their mother in 1956. That was seven years before his own death, and four years before his wife died of cancer.
He did have an odd living situation before that. After returning from WWI service he lived with the mother of a friend of his who died in the war. He promised him that he would take care of his mother if he died, and he lived with her until her death over thirty years later. She had a teenage daughter at the time, so in a sense Lewis was helping to support her as well: though this was before he became a Christian, and by the time he converted I believe she would have been an adult.
His brother moved in with them in 1930 and stayed in that house until Lewis died. He likely wasn't much of a burden, since he was a retired army captain who I assume had a pension.
> Again sticking to a purely descriptive account of intuitions, I think this represents a sort of declining marginal utility of moral good
This is backwards. It's the increasing marginal cost of your time and/or money. Spending 1 hour to save 1 kid makes sense, spending 16 hours to save 16 kids means it's your entire life.
The Dublin example seems like a strawman to me; physical distance is less important than travel time / effort to get somewhere. If there's a magical portal connecting you to Dublin then for all intents and purposes the distance is zero. If you're on a zoom call with somebody across they planet and they start having a medical emergency you are obligated to call emergency services, even if costs money for some reason.
There's also something of a geometric issue. If your radius of moral concern is 1 km and that contains P people, doubling the radius to 2 km means 4x the area and 4P people. So there's a really high cost to expanding your circle of concern, and it *increases* the more moral you are. (This is obviously simplified but I think the general point stands).
> Well, every day, I’ll rescue one child, then not worry about it. This is better for the children, since it increases their survival rate from 0 to 1/24
Do you just save the first child you see, or when you see a kid do you roll a die and save if it comes up 6? (Assume you otherwise live your life and so simply don't notice many of the kids floating by). The average person considers random methods like this to be very unfair. Some of the kids survive for no legitimate reason. Refusing to save anyone lets more kids die but it's more fair. This is definitely bad logic in this contrived example, but in the real world caring about fairness usually makes things better.
The other issue is about setting boundaries. Saving zero children is a natural Schelling fence. There aren't any other natural Schelling fences. This applies to most of the other examples.
In reality I think people are just generally reactive and not proactive. Most people would save the drowning child (at least once) but don't really seek out opportunities to save lives. And when they donate to charity it's because somebody asked them to, or a cause affecting themselves or a loved one. This is descriptive, and any moral theory people come up with to justify these actions is just rationalization. It does make for a convenient Schelling fence too; help people when explicitly asked but not otherwise.
Copenhagen ethics is deleted?!?! It's such a good piece! Does the author prefer it that way or is it just a hosting thing? Does anyone have it backed up?
It's backed up multiple places that you can find in this comment thread. Here is one such place I don't think others have linked yet https://gwern.net/doc/philosophy/ethics/2015-06-24-jai-thecopenhageninterpretationofethics.html
Human psyches are fragile. In theory someone 'should' save all 37 children. However, people whose behaviors stray from social consensus in extreme enough ways for long enough are often prone to having psychiatric breakdowns, which obviously ends up saving zero children. I feel like the optimal equilibrium point here is 'hard utilitarianism is true but most people are too psychologically fragile to do much about it outside of donating a little to charity and lightly advocating for political causes without running a serious risk of having a psychiatric breakdown'.
Some reasons closeness makes a difference.
If something happens in a place that you know well, you have a much better chance of making the correct choice. If you are in a foreign land in a foreign culture, I generally look to others for how to act.
As an individual you are not likely to do much harm acting with your body locally. If you are in Zimbabwe and give a starving child a piece of fruit. Likely a positive act short term with minimal long term consequences. Help an individual overseas via some American group. Lots of variables. Is the group just a fraud, how much goes to the person, is my contact info sold, etc. Then there is giving tax money as massive food grants. You have no choice in the matter to support it even if you think it is wrong. Mass food subsidies depress local food production. So many levels ripe for corruption, etc.
A third is that if you ignore something bad that right in front of you it will corrode your soul, harden your heart. Your brain has many deep processing levels that we/you/I have little knowledge of or control over.
Consider Chesterton's Fence. It's no coincidence that a kid floats past once an hour, on the hour, regular as clockwork. This is the result of deliberate action.
Make no attempt to return the children sacrificed to Neptune. Atlantis City does not want them back; and if you sneak them in anyway you risk dooming the entire population to suffer his watery wrath.
Or more ambitiously: find Neptune and kill him.
I've long been a fan of the "Copenhagen Interpretation of Ethics" concept, but always as a description of a *broken* view of ethics.
It's a way our ethics instincts lead us to believe bad and nonsensical things, not a set of rules for a moral life.
I'm not sure Scott took the essay the same way...?
My intuition is that once you get into scenarios where innocent children are drowning every hour, this is a systems issue and not the responsibility of any one individual. No one should be expected to take full personal responsibility for this. Reasonable responses would include lobbying the government/city/wealthy individuals to set up a collective to hire lifeguards or build rails around the river to stop kids from falling in.
This all goes back to the question of foreign aid and cutting that. This is where that all started. The fact is, nationalists like myself care very much about whether the child in question is a domestic child or a foreign child. Hypotheticals about children going down the river would lead to me wanting to save as many of them as I could. The situation vis a vis funding malaria nets or HIV medicine is more complicated.
You don't give charity to the enemy during a war. That much most people can agree with (at least until after you've defeated them, then you airdrop food). Most people would just find it absurd to say that Europe and (to a lesser degree) North America are in a biological soft-war with Africa (and the Middle East/Near Asia but that's not the main subject of aid discussions). I naturally don't want to see children suffer, but every gain to Africa is a loss to Europe, and European children, so long as massive amounts of foreigners aren't deported and immigration from these regions isn't totally shut down (we can do that exchange: Africans are never allowed to set foot in Europe again, in exchange for massive amounts of aid, but this isn't in the policy overton window, so cutting aid it is).
I think that giving aid to Africans has a large negative knock on effect on the children of my region, and the children of my family, in the long run. Our fertility rate, like most developed regions in the world is below replacement. The fertility rate of many African countries is lowering over time but still sky high (aid that only involved condoms, birth control pills, anti-abrahamic propaganda, and not food and medicine would be beneficial here).
Since this is a soft-war (or a cold war, but it's a bit different from the Soviet case) it would be immoral to actively start bombing Africa and hurting them, but it would also be immoral to your own side to help the opponent ("I won't kill you, but I don't have to save you"). If you imagine a sliding scale where full on war requires the moral response of killing the aggressors (and inevitably their children even if they aren't deliberate targets) to save your people/children, and no war at all should necessitate giving aid and help, a soft-war represents the inbetween territory. It's not even as high on the scale as to say private charity to Africa should be banned, but certainly if you are using the state to provide charity and we have input into where the state diverts resources it has gained from the people, then it's not wrong to decide you want those resources to go somewhere else instead.
It may be callous to not give people who are suffering help, but if those people will as a large group (even if individually kind) destroy your civilization (the one best equipped for dealing charity and improving the state of technology anyway), then acting callous in the near frame helps slow the ruin that is accumulating and will eventually destroy us in the far frame. That children aren't being helped is unfortunate, but significantly less unfortunate than if they were killed in a war, which could easily be justifiable (again, even if children aren't direct targets). Their parents must act responsibily to save their children, but since these parents have been turned into a horde on a geopolitical level, it would be foolish and immoral to our own people to give them fuel for survival, so long as our leaders insist on allowing them to colonize our countries (in return for our colonization of theirs - even so there's no reason to accept it).
If anything, the problem with DOGE (which started this whole debate) is that cut foreign medicine funds should be returned domestically - a nationalist gesture, rather than the libertarian one Musk prefers. Put all of that money into American cancer research instead.
I do not expect to change your opinion specifically, but if anyone reading this is on the fence, there is plenty of data documenting the economic benefits of immigration. One factor in particular: economic growth is connected to population growth, and the loss of one will negatively impact the other.
Isn't that data highly slanted by country of origin? Not all immigration is equal (European migration between countries, or the immigration of Chinese or Indians might result in net contributions, especially assuming selection effects). How does this data square with African and ME/NA immigrants not being net contributors? How does it square with Canada's massive drive in the last 10 years?
The other issue is that if you purely measure things according to near term economic gain, then immigrants coming to work will obviously always be good, because you are benefiting from far more specialization. Diversity, economically speaking, very often IS a strength. If those immigrants go on welfare, not so much, but it could be that more immigrants contribute than leech. Of course, life isn't just about near term economics and "line go up". These are nice things to have, but there are trade-offs. For the UK, this has meant heavily stressing social services and housing. You could argue "Oh, but then the real problem is zoning" but England is a small and very dense country, where the nearest road is always less than a few miles away. This will just turn our cities into even worse places to live, and increase ethnic conflict.
As for population growth: this is one reason to be very pro-AI/pro-automation, since this can take up the slack of an ageing population, and actually allow some level of population decline safely, without stressing the dependency ratio too much. If we have a good chance of, in the next 50 years, achieving full automation and a basic income guarantee (I believe we do), then it's short sighted to open the floodgates to immigration as a solution, when the trade-offs are so severe, especially given that the number of immigrants needed is so large (since their fertility rate starts to converge with ours once they live here).
The final issue is that mass immigration has demonstrably led to more support for far-right parties, and unless you want to ban democracy, at some point, it's going to be reversed, and the larger the inflows were, and the longer this goes on, the worse and more ethically fraught the reversal process is likely to be.
Well, the studies I've seen have been mostly focused on US illegal immigration from Mexico and Central America, so not exactly high skilled labor. Nevertheless, such illegal immigration appears to helpful overall to the economy, while being a small drain on government budgets (this includes services, including homeless services, which do not always include housing in the US). A large fraction of hispanic immigrants to the US end up working as migrant agricultural laborers (I believe this is the case in Europe as well). Remember that immigrants are consumers as well as employees, so you have to take into account their net positive effect on demand, even those on welfare (which creates employment opportunities for other people). Illegals, so far as I know, can't go on welfare. In any case, the employment rate of immigrants in the US is nearly the same (slightly higher in fact) than native born.
Immigration contributes to economic growth because it contributes to population growth. This is esp. the case in countries (like the US) with declining native birth rates. At some point in the future, we will fall below replacement rate, at which point we will be competing with similar nations (the EU, China) for immigrants from nations with relatively higher birth rates. That's pretty much the underdeveloped nations (due to the demographic transition). The US has a strong position here, if we don't blow it. This factor will only increase if at some point the US experiences net emigration.
AI/Automation does very little to alleviate the effect of declining birth rates, because AI doesn't buy stuff (it does little good to produce things more cost effectively if there is no one to purchase it). Even a basic guaranteed income doesn't address the problem, unless you plan to hand out more money per capital as the population declines. I have no idea how such an arrangement would work, and I doubt anybody does. Certainly there is little public support for such a policy.
As for far-right resistance, well, that's a political issue, not an economic one. It would be sad if it turned out that isolationist policies became more popular, and that ends up what crashes the economy. Certainly I agree that as a sovereign nation the US has a right to regulate who crosses it's borders, but a long term net reduction in immigrants would not be wise.
I'm very much a nationalist too (some would say an ultranationalist), and I'm extremely opposed to migration (immigration, emigration and internal migration), but I don't see that as a reason to reduce foreign aid, either public or private. Quite the contrary. European countries need to be donating tons of money to Africa precisely so that (among other reasons) Africans *don't* try to move to Europe.
But Europeans already donate tons of money to Africa, so I'm sceptical this works. It's also the case that it's not the very poorest Africans that will legally or even illegally immigrate. Even with illegal immigration, it can cost 1000s of pounds equivalent or dollar equivalent to pay the smugglers.
There also would probably be fewer Africans in Sub-Saharan Africa if it wasn't for aid over the decades. Their birthrates are effectively subsidized. Yes, that has a moral dimension (free medicine and other help allows more to survive to adulthood), but then that just feeds back into what I'm talkiing about where we are in a soft biological war, and you are actually weighting two sets of children against each other (since I disagree with the notion that African existence is mechanistically/consequentially neutral).
You’re again neglecting the fact that fertility rates go down as a country becomes more prosperous. They have gone down in Africa too, although sllower than I’d like. Everywhere the demographic transition has started- including in Africa- it seems to proceed to completion (i.e. below replacement fertility) given enough time.
"I naturally don't want to see children suffer, but every gain to Africa is a loss to Europe, and European children, so long as massive amounts of foreigners aren't deported and immigration from these regions isn't totally shut down (we can do that exchange: "
European countries are going to shut down immigration eventually, when the situation gets bad enough. And eventually, start deporting minorities. Maybe it should have happened sooner, but late is better than never. The exchange you mention is what the eventual equilibrium will be, in future.
Hey Scott I think you should read https://www.gutenberg.org/cache/epub/35354/pg35354-images.html
There's a hidden assumption in the hypotheticals that I think is important to point out because, while I'm not sure *I* disagree with it, I know that a very large number of people do.
Specifically, the assumption that moral weight transfers across transactions; that if I hire a purely-selfish man to save a child for $10, that I'm the one doing good (by giving up $10 to save a child) and not him (by doing a paid job).
Empirically, most people *do not believe this* - this is why doctors are high-status, because their large salaries are *not* considered to buy off the weight of the people they save and distribute it to the people paying. And failing to accept that assumption creates a disequivalence between almost all the hypotheticals here and the actual case of donating to charity, because *you're the one rescuing the kids from the river* whereas *you're not the one building or distributing the bednets*.
I think that's the effect of applying virtue ethics (that it's at least if not more important to be a good person than it is to achieve objectively good outcomes). Humans have evolved multiple ethical standards and they are not all perfectly compatible.
"I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible."
Doesn't that mean you have to consider the game theory implications for rape and drowning-children-saving too?
I'd rather drown the child, thanks.
This sort of thing may have worked even as recently as a year or two ago, but as many new competitors join, and many of those not interested in competing leave, you really have to put more effort in to win the edgiest comment award these days.
The site gets as much effort as it deserves.
I have been puzzling over the situation in Africa and would appreciate some help understanding what’s going on over there.
A couple questions:
1) what is the real impact of the recent interruption of America aid to Africa?
2) since mosquito nets are so cost effective, why hasn’t the gates foundation fully funded the distribution of mosquito nets?
So (according to Google) the Gates foundation has spent around $2 billion combatting HIVAIDS, TB and malaria including funding research to develop more effective nets and to distribute these nets. I think the issue with the statement why hasn’t “X” just given “X” billion dollars to do “X” is matter of how we talk about the problem. Usually people will say something like “it costs X billion dollars so give every child in Africa a mosquito net.” Even if this accounts for the costs of distribution, now money is now longer the bottle neck, it’s time and finding competent non-corrupt people to actually do the work. The problem is that having the money to solve problems doesn’t immediately solve the problem.
So for instance the impacting of losing USAID has less to do with dollars and more to do with loss of institutional memory and personnel. The money from USAID often pays the salaries of people who have been working in foreign aid for their entire careers. They are in charge of transporting and delivering goods and training local volunteers and workers. For instance they may be training people “how to effectively use mosquito nets” because even if you send 1000 mosquito nets to a village if they aren’t used properly then the intervention is ineffective. In terms of how bad this actually is I don’t know enough about foreign aid (not sure anybody does) to give an OOM estimate of something like additional lives lost. I just know enough to know it’s bad.
So I guess my question is: should I donate money for mosquito nets if money is no longer the bottle neck?
I think the obvious answer to this dilemma is if society is imposing an obligation on you through persistent inaction on predictable events, your moral responsibility is greatly negated.
There doesn’t need to be anything deeper than that.
Here's a rephrasing of the pond story that does not omit the hard part.
Every day for your entire life you walk past a pond absolutely full of drowning children.
Every day, there are about five people pulling kids out. But you never see any difference - it's as full every day as it was the day before.
I think I get where you're going with this. Sure, when you're confronted with a problem that seems so intractable it does feel like there's nothing you can do so why do anything. But this feeling is an effect of not having exact numbers on the drowning children.
So, it's a pond but how big exactly - does "absolutely full" mean dozens, hundreds or thousands of children? What exactly is the amount of children dropped in every day? How exactly does the pond manage to stay full? If some mysterious phenomenon or nefarious villain is dumping in exactly as many children as are saved, then saving them is indeed pointless. But somebody should investigate that, shouldn't they?
One day, some intrepid nerd starts counting how many kids fall in each day and measuring how fast the average person pulls kid out. They declare that 50 people pulling kids out every day would match the amount of kids falling in and prevent any deaths. Now there's concrete numbers showing why the problem never got any better: not enough dakka. With that info in hand, every additional person joining in now knows that they're inching toward salvation instead of being stuck in an endless struggle.
I'm personally, I like the Talmud's version of the problem - The City's Spring (here including Steinsaltz's interpretation):
"a spring belonging to the residents of a city, if the water was needed for their own lives, i.e., the city’s residents required the spring for drinking water, and it was also needed for the lives of others, their own lives take precedence over the lives of others. Likewise, if the water was needed for their own animals and also for the animals of others, their own animals take precedence over the animals of others. And if the water was needed for their own laundry and also for the laundry of others, their own laundry takes precedence over the laundry of others. However, if the spring water was needed for the lives of others and their own laundry, the lives of others take precedence over their own laundry. Rabbi Yosei disagrees and says: Even their own laundry takes precedence over the lives of others, as the wearing of unlaundered clothes can eventually cause suffering and pose a danger."
https://www.sefaria.org.il/Nedarim.80b.7?lang=en&with=Steinsaltz
Great post. I've written a reply, 'Moral Intuitions Track Virtue Signals', which explores more virtue-ethical psychological explanations for our moral intuitions in these sorts of cases (and Trolley cases too):
https://www.goodthoughts.blog/p/moral-intuitions-track-virtue-signals
We're assuming here that everyone agrees that saving drowning children is the right and moral thing to do. But there have been times and cultures where that was not at all evident to people, indeed where the moral thing to do was let the children drown.
We all know our friend Moloch, but the Aztecs had both a high value on children (see this piece on children and childbirth) *and* that their deaths (with maximum suffering) were also necessary at times.
https://www.worldinvisible.com/library/chesterton/everlasting/part1c7.htm
"In the New Town which the Romans called Carthage, as in the parent cities of Phoenicia, the god who got things done bore the name Moloch, who was perhaps identical with the other deity whom we know as Baal, the Lord. The Romans did not at first quite know what to call him or what to make of him; they had to go back to the grossest myth of Greek or Roman Origins and compare him to Saturn devouring his children. But the worshippers of Moloch were not gross or primitive. They were members of a mature and polished civilization abounding in refinements and luxuries; they were probably far more civilized than the Romans. And Moloch was not a myth; or at any rate his meal was not a myth. These highly civilized people really met together to invoke the blessing of heaven on their empire by throwing hundreds of their infants into a large furnace. We can only realize the combination by imagining a number of Manchester merchants with chimneypot hats and mutton-chop whiskers, going to church every Sunday at eleven o'clock to see a baby roasted alive."
Childbirth was celebrated, but if you had twins, one would be killed.
https://www.eiu.edu/historia/Thoele.pdf
"This verse relates how blessed the newborn was to be brought into the world. Yet the midwife’s words clearly warn the child that there will be insecurity and grief throughout life. If the mother delivered twins, one of the babies was killed at birth, as twins were feared to be an earthly threat to their parents in Aztec society.
...Children played a large role in the ritual dedicated to the rain god, Tlaloc, which was performed to bring needed rain for the crops. Blood from children was obligatory and this was acquired through small incisions, such as in the tongue. Actual child sacrifice was also performed at the end of the dry season. Two children were selected to be offered up to the rain gods. The tears that they invariably shed before their sacrifice were offered to Tlaloc so that he released much needed rain. In one year of a particularly dire drought, forty-two children between the ages of two and six were sacrificed. It was believed that the earth needed more than just a small sip of water, as represented by the crying children’s tears. In such a dire circumstance, much water was needed; therefore the quantity of children sacrificed was greatly increased. This is the only time that such a large number of children were offered, it would never happen again."
The rain god requires sacrifices, the more weeping the better. Sometimes there is exceptional drought so you need even more sacrifices.
https://en.wikipedia.org/wiki/Human_sacrifice_in_pre-Columbian_cultures
"According to Bernardino de Sahagún, the Aztecs believed that, if sacrifices were not given to Tlaloc, the rain would not come and their crops would not grow. Archaeologists have found the remains of 42 children sacrificed to Tlaloc (and a few to Ehecátl Quetzalcóatl) in the offerings of the Great Pyramid of Tenochtitlan. In every case, the 42 children, mostly males aged around six, were suffering from serious cavities, abscesses or bone infections that would have been painful enough to make them cry continually. Tlaloc required the tears of the young so their tears would wet the earth. As a result, if children did not cry, the priests would sometimes tear off the children's nails before the ritual sacrifice.
...In History of the Things of New Spain Sahagún confesses he was aghast by the fact that, during the first month of the year, the child sacrifices were approved by their own parents, who also ate their children."
The more I know about the Aztecs, the more I think that those who accused them of worshiping devil were... just making a rational conclusion based on the available evidence.
Why are you not comfortable with committing the is/ought fallacy at the level of “when am I obligated to save a child” but totally comfortable with it in the view that saving drowning children is good at all? Isn’t the base intuition just a descriptive rule itself?
Separately, I wonder in the cabin example (let’s assume it’s impossible automate, you personally have to ruin a suit and jump in the river 24x day) how many people you (or anyone else) would actually save. I think by day 3 of being wet/tired/unhappy all the time the number would approach 0 (and indeed one would leave the cabin).
Yep, and go call the police, because clearly something criminal is happening. This is also likely to save more children than you ever could.
I think this essay and every other such argument underestimates how easy it is for people to refrain from helping a nearby person in distress if (1) they too are in some kind of different distress (2) they tell themselves that they're not responsible for it (3) they know helping that one person won't change anything about the underlying problem that put the person in the situation (4) they tell themselves that other people are also not helping so I'm not bad (5) they can't brag about it or show off their goodheartedness. Besides, if everyone is kind then no one is. Cruelty and insensitivity are the reasons why kindness is valued in the first place.
There's a reason why families (or close-knit genetically related groups) are highly selected for when it comes to personal or group resource allocation: it is highly morally economical. Globalization was partly meant to transcend this atavistic instinct but I'm not sure it has properly succeeded at shifting this centre of gravity of primal loyalties.
It's also easy to argue yourself into helping if 1) You thereby relieve your own empathic distress 2) You gain reputation and social status in the eyes of people who hear about what you did 3) Saving this person helps you persuade others to pursue some sort of larger, institutional solution 4) The in-groups of the person you save now feel they are in debt to you and your in-groups 5) You make a new friend.
In which Scott, having failed an Ideological Turing Test, knocks down a bunch of straw men with implausible thought experiments involving ignoring the main solutions indicated by the premise, then pats himself on the back a couple times. What a sad show.
Please explain - what are the main solutions indicated by the premise?
It is yet another comment of the form "you are wrong, but I am not telling you why, I just came here to make a note that I am intellectually superior to you".
Low hanging fruit for moderators.
Always behave as if the coalition is still intact. Why? Game theory! Imagine that the coalition will exist in the future, but does not exist now. By acting as a member of the coalition, you bring the coalition closer to existing. When you cross paths with another person who is also acting as a member of this virtuous coalition of the future, your abilities to cooperate with each other will be nearly frictionless. And you speed up the immantizing of the coalition.
If you believe, like I do, that the coalition has nonlinear effects on the world, then discovering even a few members of the coalition in the course of your life is more valuable than most of the good you will do individually. Cells of cooperation can pull large chunks of non-coalition members into coalition aligned action.
The "Coalition" are actually factions within society, and you obtain effectiveness in life by persuading a plurality of society's members to cooperate with you in pursuit of values you approve of. The problem is that there is more than one coalition out there, each competing for that plurality.
An unaddressed flaw of these thought experiments is that they presume the hypothetical drowning children are worth saving. Would they save you, if you were in their position? And does this affect how deserving they are of being saved?
Answering these questions is crucial for the thought experiments to be relevant to real life, because in reality, many—if not most—of these drowning children would not help you if the situation were reversed.
Fair point, but if someone's true objection is "kids should die, I don't care", they should state that objection clearly. As opposed to making up complicated arguments for why the kids kinda matter, but you shouldn't try saving them because... 5D-chess reasons.
The debate gets needlessly complicated because some people on one hand want to let the kids drown, but on the other hand do not want to be (correctly) perceived as the kind of person who wants to let the kids drown.
Fine, let's be blunt. Assume the drowning child will grow up to be Jack the Ripper—do you save him, or hold him under yourself?
I take this as a concession some children should indeed be drowned. Now we're just haggling over how much certainty is required that a given child falls into this category.
Consider this thought experiment, somewhat more complex than "baby Jack the Ripper is drowning":
You're a barbarian on a lonely island. Scott's river flows past your hut back into enemy territory. If you do nothing, the children will be rescued by their own family. But drown them now, and you deprive your enemies of future warriors who will raid your village, rape your women, and drown your children.
Of course this isn't 100% certain. It's possible that your tribes form a truce before that happens, but you estimate this has only a 25% chance of occurring.
You could try kidnapping the children, but you're all vectors of a disease against which the enemy tribe is not genetically immune. An estimated 25% of the children you kidnap would succumb to this illness.
To make matters worse, they're lactose-intolerant, but your tribe's success depends largely on dairy farming. Your ability to exploit cattle gives you a major advantage over your enemies. Any kidnapped children who did not die from your diseases would grow up to produce hybrid Ooga-Booga offspring with your tribe's women. As a result, future generations will become progressively less fit, from a lactase and leukocyte perspective.
You estimate that this reduction in fitness will raise the chance of being exterminated from 5% to 80% within the next century.
Do you drown the children?
> the Gulf Arab states that got rich from oil recently give way more development aid than Western countries with similar GDP per capita.
This is an interesting analogy, but I'm skeptical that it works when applied to governments. Especially when one is Arab and the other is not.
What is your estimate about what fraction of e.g. African children are future Hitlers? Are we talking approximately about 1% or 10% or 50%?
In the scenario, 100% of the drowning children who survive to adulthood will eventually try to exterminate your tribe.
*You reach for the drowning child, in that moment, you see with a flash of insight that seems to come from heaven, of this child, grown up, slaughtering thousands.
You pull your hand away in disgust and horror. The child sees this, his eyes lock with yours in confusion and sadness before he vanishes beneath the water.
You go away, convinced of your righteousness in this difficult moment.
Unbeknownst to you, a mile downstream, the child miraculously washes up on a bank, coughing up water. But his once-innocent eyes are black, burned out by the knowledge that in the moral calculus of a stranger who could have saved him, not all lives are equal. He will carry this knowledge on his crusade of slaughter.
Watching from the woods, Satan laughs. Two souls damned for the price of one vision.*
I would like this comment, but liking appears to be disabled.
If you want a larger, better written exploration along the same themes as your original thought experiment, I would suggest "Monster" by Naoki Urasawa.
I'll bite this bullet. Even a serial killer's life has value, you should save him from drowning and then lock him up.
But he wouldn't be Jack the Ripper if he didn't grow up to rip anyone.
He also wouldn't be Jack the Ripper if he didn't grow up at all.
Fine. Imagine there are a million of these drowning future serial killers. Are they so valuable that you lobby the government to create a reservation just for their preservation?
I thought of another way you could have meant this that I didn't think of before. If Omega has informed me that if I save this kid from drowning he will definitely grow up to kill, say, 10 people, regardless of whether I try to stop this in another way, then this just becomes a reverse trolley problem. I don't pull the switch to divert the trolley from a track with 1 person to a track with 10 people, but this doesn't depend on lives having a different amount of value.
I have no idea who Omega is. But since you've pointed out this dilemma can be resolved even if all lives are equal, let's change the scenario to one where the drowning child will grow up to kill you. Do you drown him now, or do you admit your own life is more valuable than his?
Your 1 year old child is about to eat a Tide pod - do you intervene even though your child wouldn’t return the favor? What about a random 1 year old you see about to do that, without any parents around - would you intervene, even though no 1 year old on Earth would help you if the situation were reversed?
Expecting a one-year-old to save anyone from drowning is an unfair interpretation of my comment. When I speak of the situation being reversed, naturally I mean when the child is an adult, and you are the one drowning.
You are a neurosurgeon and have the opportunity to perform lifesaving brain surgery on a child for a small fee. Should it occur to you to decide whether to perform the surgery based on whether you think this child would perform lifesaving brain surgery on you if he were an adult OR if he were a neurosurgeon OR if he were a neurosurgeon offered the same fee or less OR if he had all of your exact characteristics, including your age, your current emotional state, etc. There is no immediately graspable version of what a “reverse” situation would be, and so thinking about this at all when making a moral decision, especially a time sensitive one, makes no sense to me.
Is there no scenario where it would actually be wrong to help the child, even if it cost you no effort or expense? In a recent comment I suggested a thought experiment where the child will grow up to be Jack the Ripper.
Your original objection was that Scott’s thought experiments did not sufficiently mirror real life. You’ve countered with a thought experiment in which the drowning child grows up to be a serial killer. Given that there’s no way to predict (without an alarmingly high false positive rate) which drowning child will grow up to be a killer or even just a drag on society, should this sort of thinking be part of your real life moral calculus?
Right, my original thought experiment was intended to question the egalitarian ideology underlying every iteration of Scott's scenarios—the presumption that all drowning children are equal. But my first interlocutor insinuated I was trying to disguise my misopedia behind "5D-chess", so I figured I'd make my point clearer at the expense of realism.
P.S. I googled "word that means hatred of children" just so I could use it in this comment.
I think Misopedia is a Japanese recipe inventory
This is empirically false. Most people make some attempt to help each other, even at some risk to themselves. It's complex, because it depends on the context (the out group bias gets in the way in certain cases), but on average most people are modest altruists.
It's not "empirically false" that sampling randomly from the billions of humans alive would result in "many" people who would not save you from drowning. Neither is it inconsistent with "most" people being altruist.
If one is highly selective, one could probably find thousands of people unwilling to rescue a given person.
You are selectively quoting yourself--you also said "if not most", which is what I am arguing against. Most people make some attempt to help.
Empirical studies, for example, have shown that humans often cooperate in anonymous one-shot prisoner’s dilemma games despite the fact that not contributing is the optimal solution.
Here's another study, using field methodology: https://www.jstor.org/stable/pdf/27858212.pdf?casa_token=EAloEDeZmgYAAAAA:WwcQQVRiTFuhIb6u2zT1kmyoFLHHJSa4Khriiq60kjMlrW8fbKYU7AJ5BoKbrOVJgrnAVS0zjsd9jEvxe7mt9FNFv4uVlmKYIy_q_knBoNeK12P-t1tS
Helping rates around the world are highly variable and context dependent, but on average it appears that rates of helping strangers in most places are somewhat above 50%.
I've participated in academic one-shot prisoner's dilemma studies. In my experience, anonymous participants always screw each other over. I wouldn't consider them relevant to real life anyway.
The field study is more interesting, but I think you would agree it's hardly representative (Africa and India total 40% of the global population but only get one experiment each), and suffers from serious nonresponse bias (i.e. Kiev).
I didn't read the whole thing, but it's unclear what race the experimenters belonged to. This seems pretty relevant.
my first thought is there is irony here in that people talk about drowning kids but no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now. The morality is always a bit ethereal: "saving lives" but its more like optimizing widgets.
there really isn't near/far in a realistic sense but more like a single issue a person is convicted of to make sacrifice for, and individual ability to make it. Someone is a part time lifeguard, someone pushes to stock narcan in libraries, someone becomes a full time emt, someone uses wealth to fund or create overseas aid programs. You can't universalize this into obligation. A sickly person can't be a lifeguard.
Charity is a heroic act but you can't obligate it. That's duty, and people will ask "who are you, O philosopher, to be the officer who commands us?" Too many variables to that heroic act to try and universalize it.
> no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now.
I think there are memes out there that point approximately in this direction. The general moral obligation to "become stronger", "clean up your room", etc. (And people are specifically taught to provide first aid in various occasions, for example when they get a driving license.)
> Someone is a part time lifeguard, someone pushes to stock narcan in libraries, someone becomes a full time emt, someone uses wealth to fund or create overseas aid programs.
Yeah, division of labor is a good idea.
It can also be a convenient excuse, when some people do X, some people do Y, some people do Z, and most people just say "I am not doing X or Y or Z because... uhm... I am doing... something else... no, don't ask me what specifically".
> my first thought is there is irony here in that people talk about drowning kids but no one ever suggests we should all get red cross lifeguard certification or first aid certs to be ready to save lives now.
Tons of people say everyone should have first-aid training. Here's the AMA, for example: "Why everyone should be trained to give CPR" (https://www.ama-assn.org/delivering-care/public-health/why-everyone-should-be-trained-give-cpr)
The most important overlooked factor in all of these scenarios is moral hazard, pascal muggings, functional decision theory, etc.
It is very unlikely in practice that a drowning child near me is being used to systematically extort all my time and money.
By contrast, far away needy people can be used that way, and them being and staying needy lines the pockets of middlemen taking a cut of charity, and it isn't clear that the charity reduces long run suffering summed over all of their future descendants.
"Is this moral obligation serving as a money pump?" Is the important distinction.
There's this weird verse in the Bible that illustrates how I think of this. Jesus spend all this time helping and healing people, he goes to the poor and downtrodden. Then this woman spends some money washing his feet and Judas complains that the money could have gone to help the poor. Jesus then said, "the poor you have with you always".
I know there are other ways of rain this, but one way I read this is that there's a difference between how you should treat systemic problems versus one-off suffering. It's not that Jesus is totally into alleviating suffering most of the time, but then he ignores it. It's that he sees people suffer and, knowing he can help, had compassion on them. But systemic problems are ... well presumably he could solve them, just like he could have solved the whole Roman conquest thing, but I guess that was outside the scope of his mission. So his obligation didn't extend to solving that problem.
What about the rest of us? I think we have an obligation to work on solving the systemic problem, but that is foolish to treat it like a one-off situation. If you live far from the death river, you should work on the source of the problem. If you live in the cabin, compassion may prevent you from ever leaving the shore.
Here's a scenario:
As president, you can donate $1 billion to feeding African children, vaccinating them, and generally extending their lifespan and lowering the mortality rate. Or, you can donate $1 billion to some AI research that will make AI 1% better.
In the former case, you keep donating the money year after year, but then you hit a recession, and can't donate that year. The aid unfortunately gets cut from the budget, and the billions of Africans who were dependent on aid all die suddenly.
In the latter case, the donation to AI increases global GDP in a sustainable way that is recession-independent. Whether or not a recession happens, the AI research has a net year-over-year improvement, so the highs are higher, and the lows are higher.
The question of which policy you should support has to do with the likelihood of each scenario. If the benefit to AI is minimal or negligible, then it would be a waste. If the aid to Africa helps the African economy develop overtime and becomes self-sustaining, then it is worth it.
These are empirical questions that are worth hedging. A $0 foreign aid policy seems ridiculous, while dedicating 100% of the budget to foreign aid also seems ridiculous. But since no one is advocating 100% of the budget to foreign aid, we are really just arguing between 0% and 1%, in most cases, which seems somewhat pedantic and overly skeptical/cautious.
For most good causes you can find a better cause, and then argue that the former does not deserve the money. But I suspect that most actual government spending does not go towards research and similar things, but instead to things that are clearly worse than both research *and* effective charity.
Yeah that’s what’s driving me a bit crazy about PEPFAR: how is it possible that those countries are still reliant on USAID 20 years later? Is there an end in site? Does anyone care?
Fair point! I wish we would act more like China and focus on building lasting infrastructure and education resources rather than trying to cure AIDS, but I would rather we try to cure AIDS than do nothing at all.
Seems to me like the big thing missing from these contrived hypotheticals is any sense of uncertainty. There are very few real situations where there's a clear choice between "save a life, P=1" or "save 10 minutes of your own time, P=1". Most people dealing with moral reasoning have to account for the fact that they don't always have confidence in the predicted outcomes of their choices.
So, let's say that you're not a great swimmer, so every time you jump in the river, there's only a 40% chance that you're actually able to save the child, and also a 5% chance that you end up drowning yourself.
Also, let's say that in addition to their regular outflow of drowning children, the magical megacity also has a culture that is very fond of lifelike dolls, and their old discarded dolls also flow down the same river as the drowning children. As far as you can tell from your cabin, any given childlike shape you see in the river has a 10% chance of being a child, and a 90% chance of turning out to a doll.
Hopefully this cabin is starting to sound a lot less like a deal that any sane person would take at this point.
(It also doesn't help that this specific hypothetical has plenty of trivially superior solutions, like "just build a giant rope net across the river.")
I find it natural for thought experiments to simplify, and thus appear contrived, in order to isolate the specific dimension of morality that it wants to discuss.
As with all kinds of consequentialist thinking, you would indeed "in reality" reason in terms of expected values rather than concrete, known outcomes, since you don't already know them.
But it might not necessarily add value to a specific question you want to discuss to replace "if you do x, you save a child" with "if you do x, you in 90% of cases save the child and in 5% of cases you die, and by the way in 1% of the cases where you save the child the child grows up to be a mass murderer".
A different way to think about the problem is evolutionary psychology.
In the evolutionary environment, people might sometimes come across 1 drowning child. And if they did, it was likely the child of someone else in the tribe and the social status reward would be significant.
A constant stream of 1 drowning child per hour didn't exist, and the social status rewards wouldn't exist if the children did.
Humans also had constant memetic selection. We have an immune system to protect against viruses. What do we have to protect against bad memes?
We have a lip-service response. We say the socially accepted lines. We often even "believe" them. Because to do otherwise risks being punished. But then our actions show a limited or token effort. An extreme case being a religious person who believes in heaven and hell, but thinks that going to church every week is too much of a bother.
Accurate information about distant lands you had never been to was not a feature of the evolutionary environment. (And might or might not be a feature of the modern environment)
Thus Utilitarianism falls into the same mental defenses as so many other beliefs. The mental mechanism for saying things, and "believing" them, but not taking any action too self detrimental.
There is no such thing as a "morally correct prescriptive theory" because moral rules are negotiated by members of a community, and are context dependent. "What should one person in isolation do if X" is an artificial construct because our moral intuitions didn't evolve to work that way. Morality is a way for people who live and work together to get along. Communities are morally responsible for taking care of their members, and one primary way they do that is by setting up a common set of behavioral norms that will help everyone act in a compatible manner. This means that every conceivable set of moral rules that anyone could adopt will only make sense depending on the moral rules adopted by the people we live and work with.
Of course, individual people still need to make decisions governing their own behavior. I imagine that what happens inside someone's head is a kind of cognitive algorithm that weighs various inputs including what we were taught during our upbringing, our observations of how other people we interact with behave, immediate sensory stimuli, and emotional impulses inherited from our ancestors. The brain is dynamic--cognitive impulses compete with each other to cause behavior and when successful are remembered along with what the outcome was in a given context, the more satisfying the outcome the more likely we are to remember it later in a similar context. This isn't going to result in a logically coherent set of moral rules to follow, any more than our past satisfying experiences cause us to enact the exact same set of behaviors every day.
One such factor is psychological salience: we expect to be more strongly affected by a dying child we see before us than by hearing or reading about one second hand. This is so natural and automatic a response that there is probably no sense trying to fight it, you would just exhaust yourself psychologically. The thirty seventh child is harder to save than the first.
On the other hand, you do have agency: what we can do in this situation is reduce 37 actions per day into one big action, by collaborating with other people in an organized fashion. The real solution is to demand that the denizens of the magic city take care of their kids, and if they can't do it, we should send the police to take them away. By outsourcing moral action to a community institution, we save our own resources and simultaneously gain power and effectiveness. This explains why we evolved this way.
That's why we don't blame the person who failed to save every child in the world--we blame the parents and their communities who failed to prevent that situation from occurring in the first place. This can also be questioned: What if it's a poor community that can't afford to undertake the actions that would be required to save their kids? Humans do not have a clear answer for this, that's why we get into arguments about the utility of foreign aid, but it's clear that there is no single logical moral principle one could formulate that will answer this question unambiguously.
Chasing one is a fool's errand.
I would intuitively look at these edge cases from a capacity perspective - I lack the power to even think about the suffering of millions of people, or to physically rescue 10000 children drifting down the river next to my house. My intuition requires me to run into a burning house, but Im then allowed a break even if the next house also catches fire.
My first introduction to what's now referred to as the Copenhagen Interpretation of Ethics was when I was a kid, and while walking around the grounds of my school, I noticed some sort of flyer or something on the ground, and picked it up to look at it out of curiosity, then when finding its contents entirely uninteresting, dropped it back where I found it; a passing teacher yelled at me for littering, saying that because I had picked it up it was now my responsibility to properly dispose of it, while if I'd walked past it without doing anything I'd have no responsibility.
tl:dr
Copenhagen Ethics are wring reification of intuition that the directness of the act and the predictability if the result matters, and the way to convince people is try to understand what they actually care about, and this attempt to understand Copenhagen Ethics failed at that.
the long version:
from my point of view, all this post is born in error. Copenhagen Error is failed attempt and reification intuition, and the right reification is about the directness of the contact, and the sureness in the result. how many intermediate steps there are, and how sure you are of the result is the explanation that satisfy the intuition from the examples, and also explain [people who against donating to beggars because they may buy drugs.
at the 4th part of the post it come to morality as coordination vs morality as altruism.
https://www.lesswrong.com/posts/PsHgyC4b2PsE3QyxL/morality-as-coordination-vs-altruism
the problem i have with Scott's take is that he ignore that not al people accept the deal, enter the coordination.
and here the conservative objection that those third-worlders are not part of out coordination agreement sound true. and there is a lot to write about that.
the post also wrongly reify the reason that one child rescue is different then many/ it's not utility decline, but the difference between emergency and normal life. you can ask more of people in emergency, in something that happen very rarely. you can't ask the same thing in everyday life. this is why changing the drowning child from rare occurrence to something that happen all the time change the intuition considerably.
and... here are my long-form thoughts on that. in Hebrew.
https://hadoveretharishona.wordpress.com/2024/07/18/%d7%90%d7%99%d7%a0%d7%98%d7%95%d7%90%d7%99%d7%a6%d7%99%d7%94-%d7%94%d7%99%d7%92%d7%99%d7%95%d7%9f-%d7%95%d7%98%d7%99%d7%a2%d7%95%d7%9f-%d7%94%d7%99%d7%9c%d7%93-%d7%94%d7%98%d7%95%d7%91%d7%a2/
https://hadoveretharishona.wordpress.com/2024/07/22/%d7%9e%d7%95%d7%a1%d7%a8-%d7%9b%d7%a7%d7%95%d7%90%d7%95%d7%a8%d7%93%d7%99%d7%a0%d7%a6%d7%99%d7%94-%d7%95%d7%94%d7%99%d7%9c%d7%93-%d7%94%d7%98%d7%95%d7%91%d7%a2/
These all add a major cost, while the point is should you do it if it had no major cost.
I think you're not applying the "touching" rule accurately.
My intuition as to how the touching rule goes would be that it's on a spectrum. For example, drowning child in front of you and no-one around = 100% touched.
Letter through your letterbox explaining the problem and requesting funding for a lifeguard = 10% touched.
If you then look at the problems through this lens, the touching rule lines up with our sense of what is right. You can't escape being slightly touched by the drowning children in any of the scenarios - ownership of the cabin isn't the same as the 100% of seeing the drowning child in front of you, but it's somewhere towards that.
Having "touchedness" on a spectrum obviously is a tool that can be used to fudge the equations to give the outcome that feels right, so in that sense it's a cop out. But I think there's also some truth to it in how some people conceptualise duty to charities. For example, a load of charities send round letters, and some people feel a bit bad if they don't give in response to a begging letter. It's as though all the time, we all have so many moral choices that no one choice is foregrounded, and receiving a letter forces you into a binary choice about that one charity.
I guess "touched" by a problem could be reframed as "forced into a finite moral decision". We're always able to donate our time/ money/ energy to an infinite number of moral causes, and somehow this series gets added up to a very small moral imperative to do something vaguely good with our resources, but without specifying what in particular. The moral space has an infinite number of dimensions of causes and ways in which we could do good, and the obligations associated with the vectors associated with every possible option sum to about zero at any standard point in time. When we are "touched", it forces us to engage with a few alternatives, and summing these alternatives gives a stronger moral obligation in one direction. We're touched by the drowning child, but we're also touched by being friends with a lobbyist who asks us for our opinion. In these occasions, the situation of "I have an infinite number of ways to do good or bad with my life" reduces to "I have 3 options for how to behave, and one of them clearly does a lot less harm than the others". These three options don't sum to zero in the moral vector space.
Where this perhaps breaks down is when comparing Alice and Bob, but I think this does match our natural intuition. Whether or not there's any logical basis for it, most people from nice Western countries who go on holidays to poorer places feel a bit of revulsion towards the ultra-rich living next door to the slums. You can justify this by saying that a realistic Bob would be benefitting a lot from the poor people living nearby, whereas Alice doesn't directly benefit from people living in poverty. The extreme version of this is South Africa - you look at people living fabulously and you wonder whether their wealth came from Apartheid. If we modified the thought experiment and Bob's wealth came directly from stolen land, we'd probably all agree he had a moral duty not just out of being touched by these people, but because he owed them something.
I don't know if this is how the Copenhagen interpretation is supposed to be, but I think what I've laid out about is a consistent framework for morality that doesn't get destroyed by the thought experiments in the post.
What are we to make of the fact that, undoubtedly, if 5,000 years ago all humans had dedicated themselves to the saving of drowning children, etc. with the full fervor of an ideal EAer, humanity would still be backwards bronze agers (at best)?
While it perhaps makes some sense to not aggressively carry out the selection, I think modern humanity’s attempt to completely nullify selection is a bad thing. If this magical city can’t be bothered to save their drowning kids maybe we shouldn’t help keep their genes in larger numbers.
Am I missing some explanation of why the rational take isn’t to bite the bullet and admit that common sense / intuitive moral reasoning is bullshit and leads us to ethically indefensible conclusions? This has the air of a hypothetical but comes around to what appears to be an actual prescriptive policy.
The easiest way to reject this argument is to point out that we are NOT, in fact, embodied angelic entities and so are not bound by any agreement that such creatures might have made (via a negotiation which took place under very unspecified epistemic circumstances, I might add). Neither is it clear why we should act as though we were. I fail to understand what abstract quantity is optimized by reasoning from such an axiom.
I think it IS instructive to think about what intuitions such a hypothetical appeals to. The angelic contract is an agreement between equals with expectations of reciprocity. The hypothetical creates that scenario (out of thin air) because, in my view, *those are the conditions required* to make the agreement seem intuitively reasonable. Agreements have to be equitable, otherwise people don't enter into them! Our intuitions tell us that people *shouldn't* enter into them! In actual non-hypothetical reality those conditions, unfortunately, do *not* exist: we are not (cognitively, economically, or culturally) equal to far-flung third-worlders and there is no real possibility of reciprocity: when it comes to international charity, it's clear which direction the money will always be flowing. It's like saying "imagine what energy policy would be optimal in a world in which the second law of thermodynamics didn't hold" and then using those conclusions to direct actual policy in this world. From an intellectual honesty point of view this is no better than a time-share hard-sell. It's a thought experiment that smuggles its argument in via its axioms.
The Veil of Ignorance is only supposed to veil our knowledge of personal identity, not our understanding of game theory, human nature, and economics. If *I* was an angelic entity that knew everything about the world except who I was then I would hedge against the possibility of being born in sub-Saharan Africa by agreeing, in the event that I'm born in the first world, to advocate for aggressive neo-colonialism. That is the policy that maximally improves living conditions there.
This was rather confusing to read, as my own moral intuition is that at some point your obligation to save this or that individual child shifts to creating a coalition to address the problem.
A real world analogue would be if you discovered an infectious disease that killed one child per hour, but for which you had a cure. Should you keep manufacturing this cure and saving children? That’s almost a weird question to ask, because obviously you should tell everyone about it so that society can mobilize resources to solve the problem at scale.
It’s heinous to imagine someone might save a few kids and then stop because it’s too much work. In my view their moral obligation shifts from “save the individual” to “raise awareness”, and to fail to do the latter is at least as reprehensible as failing to save the child.
What's the purpose building a tower of assumptions on the moral intuition of the drowning child scenario[1] only to come back around and try to override the same moral intuition in other cases with a set of rules derived from it?
Either:
- The rules are defined by the feeling
Or:
- The feeling should be constrained by a set of rules (which?)
You start with "it’s obvious you should save the child in the scenario" (why? because you feel like it?) but later you "think this is a gigantic error, the worst thing you could possibly do in this situation."
If this is an error, why do we consider the drowning child scenario in the first place?
[1] https://www.astralcodexten.com/p/effective-altruism-as-a-tower-of
I fully agree. John Rawls's 'initial position' argument is the best argument regarding this kind of dilemmas. Why is it the best? Because, by putting the moral agent outside of the world and ignorant of his future self-interests, no bias or partiality can be invoked. It's the eye-of-god position, without any god implied. This essentially aligns with the Kantian categorical imperative, but provides a kind of procedure, in the form of a thought experiment, to achieve this excessively theoretical and abstract kantian ideal.
Well yes, that's pretty much how my moral intuitions actually work, except they start from a choice, with very little prescription or coercion. It's surprising how far you can go if you just frame things as "what kind of world I want to live in?".
I want to live in a world where drowning children are saved. I know that human capacity is limited, which means that 1. if I start thinking that it's my responsibility to save every child I'll just burn out and 2. if I'm in a situation to help, then it's my turn - this is well within my capabilities and so it's something I've precommited to do. I really wish more people would take the time to read about and think through acausal decision theory stuff. It makes a lot of sense once you get through the initial wtf.
The "cabin in the woods" thing is actually pretty good at poking at the limits of this model, because it exists in reality. EMTs do live there. Which significantly strains the model: should they eat that sandwich, if it ups the chance of a child dying by 1%? I hesitate to say either yes or no. I'm going for a cope here, but a practical cope: the system should be set up in such a way that they have time to eat sandwiches. And if it's not, well, I'd hope they skip the sandwich, but won't blame them if they don't.
I think that a stronger match to most peoples intuition would be something like a “weighted original position”. It is a hard and somewhat weird mental move to un-know everything about yourself. There is no separating "yourself" from who concretely you are. So, you also give some vote to later more informed versions of yourself - form implicit coalitions with people that you are actually likely to interact with and where you actually don't know who is going to benefit from the interaction.
So you move to a city, and implicitly put yourself in an original position where you don't know if you are going to drawn or to save others. You non-causally negotiate for others to save you and for you to save them, and rewrite yourself to be a person who follow that contract. Then come the first drawn child. You are the same person, so you save her. And since this incident was uncorrelated with future drowning children, you still want to have the same contract. Then you find yourself in a long-term "bad moral luck" position. The "past you" who signed the original contract still have some weight, but there are more and more "past you" who wouldn't have signed, and they too get a vote.
Some other things that this moral theory explain:
- Discriminating against you because your first name is "Scott" is more outrageous than discriminating against you because you lost a lottery. Because your yesterday self would object much more to an anti-Scott policy than to an equally arbitrary lottery-based policy.
- Unless you are a very hard-core libertarian, you wouldn't honor even an actual contract where someone sells himself to slavery. Because you are not willing to look him in the eyes 10 years from now and tell him that he is the same person who agreed to it. Because he isn't really.
"It seems like Alice got lucky by not being Bob; she has no moral obligations, whereas he has many. "
Welcome to animal rescue. Or to steal from Jalaketu..." Somebody has to and no one else will." There comes a point where it doesn't matter whether it's your obligation, because if you're the only one who sees them and cares enough to help, you're going to do it, to the limits of your time and finances. The child floating by every hour is not an exaggeration when you change species. </3
Wait, are we still trying to construct formal, logically coherent theories about what is moral / what people think is moral? I thought we all more or less agreed that morality is a vague, socially trained intuition based on neural classifiers. Am I mistaken?
My intuition, which I gained by reading this blog, tells me that thinking of things as moral / immoral is more like a habit that people pick up from their peers, more a result of cultural evolution than rational design.
There is a lot of training data saying that saving drowning children is good. There is barely any training data saying that you should donate all your money to charity. And people act accordingly.
The theory is that you should try to refine your morality to a logically coherent theory at least as far as you can tell, so that someone would actually have to be smarter than you to make you morally obligated to act as their money pump.