> A moral rule of say spend 1% of your time and money on altruism and try to make that as effective as possible would be better...
Maybe I'm missing something obvious, but isn't that almost word-for-word the goal of effective altruism, only with a 0 after the 1 (and they'll help figure out how best to spend it)?
Yeah, right-wing commentators often make the argument that aid to Sub-Saharan Africa is just facilitating the Negroid Population Bomb, and I don't think that's a crazy thing to be concerned about.
However, (A) they vastly overestimate the extent to which the average African currently gets their calories from western aid, (B) ignore that mortality reductions and economic gains reduce TFR, (C) ignore that aid can be used directly to encourage smaller family sizes, and (D) look pretty fucking barbaric when suggesting that mass starvation is just the natural antidote to this problem, in preference to spending 0.5% of western GDP.
I mean... I'm an HBD-pilled pro-eugenics neo-Darwinist. I know that the high-IQ populations of the planet need to prioritise their genetic continuation, SSA's TFR needs to come down, and I don't assign equal value to all human life any more than I equally value all animal life. But unless you value black lives about as much as bacteria, I don't see how slashing these aid programs can be morally justified.
I think this is why so much of Christianity is about forgiveness and change and acceptance. The people writing the manuals desperately wanted to be good, and that's the dynamic you need. Of course if being good isn't a priority, it sort of becomes pointless self-justification, which is why the average American atheist is cynical about the project -- seeing what is there at the local church is a grim vision indeed -- but there's a roadmap there. The requirement isn't to be perfect. The goal is to be perfect. The aesthetic is to be perfect.
"The person who saves the 37th child is more moral than the person who doesn't"
I don't think anyone disagrees that saving the nth child will give you some morality points. The disagreement is whether refusing to save the nth child will lose you morality points.
Not if it's the Internet Atheist version of "I don't believe any of this sky-fairy crap but I will quote it to force you to do something I want you to do".
Not all they that cry "Lord, Lord" will be saved, remember?
I think "obligated" is a difficult word here and can be avoided, as descriptively we don't require this of anyone.
It would be more accurate to say, "The more you do, the more value your life has" or something similar. You need phrasing strong enough to communicate the vital importance of doing this, but not blame-based, to avoid basically saying "it doesn't matter what you did if you didn't do everything."
But once we start discussing morality, we're wading into an entire morass. Morality is good, okay, but what counts as moral? If I think homosexuality is immoral, am I good or bad? How do we determine if it is or is not immoral? If not saving a drowning child is immoral, is not saving a pregnancy from being aborted immoral? How do we distinguish between the two lives there?
Because the people on here telling me to "go back to the hypothetical, engage with the hypothetical" don't want any nuance or grey areas or contemplation of the real world; we are supposed to just go "child drowning, must save". Okay then, child in womb about to be killed, must save. Engage with that and then talk to me about morality.
Oh and you can't argue "it's not a child", "it's only a potential person", "it depends on the stage of development" and the rest of such arguments because nuh-uh, that's dodging the hypothetical. After all, we don't list off what age the drowning child is, whether they're a genius or a Downs Syndrome child, who their parents are, or any of the rest of it. So now define morality for me based on actions deliberately chosen or inaction deliberately chosen with no refinements other than "this is a life, you are obliged to save life".
As a worshipper of Tlaloc, I feel my moral duty is to drown as many children as possible and so if I'm not pushing a kid into a pond 24/7, can I really say that my life has value? 😁
Bringing up the actual real-life effects of actual charities doesn't seem to motivate anyone, because they fall back onto abstract arguments about why it's not good to do charity that on average saves a life per 6000 dollars. And obviously, as you can see, it's pointless to discuss hypotheticals when there are real-life details to talk about instead.
So yeah, I agree that EA refusing to obey social mores is cultish. Normal people drop it when they see you aren't interested in conversation.
I do think you can persuade people, but it's much closer to discovering existing EAs than it is making them. Doesn't invalidate your point though, especially since this essay is targeted at someone who probably thinks they think a lot about morality.
Pardon me if I'm missing something obvious, but don't “split-brain” patients still potentially have a ton of mutual feedback via the rest of the nervous system and body?
Oh yeah, completely separately, I'd like to apologize for embodying the failure mode you're talking about here. I'm not good, and I use this place as a cathartic dumping ground for my frustrations, whoops.
Sometimes the brain worms get me, but I'll try to keep in mind that sometimes third parties have to scroll past my garbage. I need to imagine a stern-looking Scott telling me to think about whether it's a good comment before posting.
Arbitrariness is a matter of degree. The fewer convoluted assumptions are required before logical implication can take over, the less arbitrary some idea is. Saying "still ultimately arbitrary" and then justifying "ultimately" on the grounds of the is-ought problem being a thing at all... by that standard, the phrase "arbitrary ethical rules" is about as redundant as "wet lakes" or "spherical planets" - unclear what it would even mean for the descriptor not to apply, so using it anyway is more likely a matter of smuggling in misleading connotations.
If someone told me their own hamburger had ketchup on it, just after having taken a bite, I'd be inclined to believe them even if I couldn't see any ketchup there myself - it's not an intrinsically implausible claim, and they'd know as well as anyone would.
Similarly, having observed it directly I consider my own life to have value, and I'm willing to extend the benefit of the doubt to pretty much everyone else's.
It was originally, long before Substack was founded, at a different URL that's no longer online. Possibly people don't know that there's now a Substack.
Oh, thank goodness - I'd have been sad if a "foundational reference" essay that I reread periodically was gone for good. Link rot comes for everything in the end...
This kind of thing is getting far beyond the actual utility of moral thought experiments. Once you're bringing in blatantly nonsensical constructs like the river where all the drowning children from a magical megacity go, you've passed the point where you can get any useful insight from thinking about this hypothetical.
If you want to actually make a moral point around this, it's better to find real-life situations that illustrate your preferred point, even if they're messier or have inconvenient details. The fact that reality has inconvenient details in it is actually germane to moral decision-making.
So much this. My moral intuition just completely checks out somewhere between examples 2 and 3 and goes "blah, whatever, this is all mega-contrived nonsense, I might just as well imagine myself a spaceship while I'm at it". Even though I'm already convinced of the argument Scott makes.
True that it's hard to learn from these--but they're not for *learning* morality. Thought experiments are the edge cases by which you *test* what you've learned or concluded. In that analogy, it's like looking at what architecture *can't* do by studying an Escher lithograph.
Practically speaking, no one has been persuaded into actually looking into the details when they say things like "why would I donate to malaria nets". They fall back onto their preconceptions about how charities are corrupt and nothing productive ever happens when it comes to charities, despite those points being addressed in exhausting detail on GiveWell's website.
So when people say that hypotheticals are useless and that it takes too much time to find out germane details, it sure does seem like people have a gigantic preference for not letting anything damage their self-image as a fundamentally morally good person, and this preference kicks in before any rules about the correct level of meta- or object-level detail arise.
I mean, that's obvious, right? What's your point? That most people don't seem especially saintly when scrutinized by Singer or similarly scrupulous utilitarians?
If it was obvious, there'd be way more pushback re: discussion norms against bad faith. Coming into a discussion with your bottom line already written down, and being unwilling to update on germane facts that someone else has to find for you, is rude and should be condemned under most ethical systems, not just utilitarianism (or is being stubborn a virtue?)
I'm not saying that they're at fault for being less virtuous, but for *not even attempting to be virtuous by most definitions of virtue*. Neither deontology nor virtue ethics says that it's okay to ignore rules or virtues because it feels uncomfortable. And this isn't a deep-seated discomfort that's hard to hide, it's an obvious-by-your-own-accounting one!
Plenty of people think of things like maintaining faith and hope in conditions where they are challenged as virtuous, rather than as opportunities to reconsider your beliefs. Usually this is couched in terms of being ultimately right, contra the immediate evidence - seems like a pretty good definition of stubbornness to me.
You're wrong. I was persuaded precisely by the details, specifically by Scott back on SSC - the post which finally pushed me over was *Beware Systemic Change*, oddly enough, but the fuel was all of his writing about poverty and the effectiveness and so on in a specific detailed fashion.
What I think you're saying is "people want to be selfish and will engage in whatever tortured pseudo-logic that lets them indulge in this urge with minimal guilt". And on a purely descriptive level, I agree. I also think that's bad, and we should not in any way encourage that behavior.
Thank you so much for proving me wrong. I should not have been hyperbolic.
And I also agree this shouldn't be encouraged, but I have no idea what a productive way of going about this would be. The unproductive way I've been doing is to post snark and dunks, which I agree is bad and also should not be encouraged but what if it makes me feel a tiny bit better for one moment? have you considered that.
But no seriously, you can't see the exact degree to which someone is bad faith in this way until you've engaged with them substantially, at which point they usually get bored and call you names instead of responding. Any ideas would be welcome
Politics is the mind-killer. It is the little death that precedes total obliteration. I will face the hot takes and I will permit them to pass over me and through me. And when the thinkpieces and quips have gone past, I will turn the inner eye to see its path. Where the dunks have gone there will be nothing. Only I will remain.
But to your point, yes, broadly speaking I agree. Claims that you have an obligation to be Perfectly Rational or Perfectly Moral-Maximising or whatever at all times, and that to fall short by a hair's breadth is equivalent to having never tried at all or to having tried as hard as possible to do the opposite, are utterly Not Helpful and also patently stupid. If I came across as saying that, I strongly apologise. And implied within that position is that falling short from time to time is less than maximally damning - not *good*, maybe, but you do get credit for the Good Things.
And yes, I agree that there is a lot of bad faith on this topic, because people want to justify their urges to have another six-pack of dubiously-enjoyable beer rather than helping someone else, an urge which only gets greater with greater psychological distance. Construal level theory is applicable here, I think. Frankly, I'm getting pretty hacked off with people arguing in what is obviously bad faith trying to justify both being selfish and viewing themselves as not-selfish.
The basic way I ground things out is "do you accept that, barring incurring some greater Bad Thing, to a first approximation we have some degree of moral obligation to help others in bad situations?" If yes, then we can discuss specifics and frameworks and so forth. If not, we're from such totally different moral universes that our differences are far more fundamental.
> If I came across as saying that, I strongly apologise.
You did not come across this way.
I actually do think I'm not being helpful, and like, surely there exist norms that we can push for where people don't post such bad faith takes.
> If not, we're from such totally different moral universes
To a certain extent, this is not what Scott believes and it's to his great credit that he doesn't, because it's what motivated him to be persuasive and argue cogently for his point.
Agreed. The day I first encountered Peter Singer's original drowning child essay, I went home and donated to malaria nets. I've been donating 10% of my income to global health charities ever since. Hypothetical situations aren't inherently unpersuasive, even if you can't persuade all the people all the time.
I truly think that most people just don't have money to donate to charity after all of the taxes they pay. People may believe that if spending isn't taken care of immediately, the government will go bankrupt within 1-5 years, and if that happens the entire western world will collapse overnight and a whole lot of people, the entire planet, will be suffering a whole lot. People may also believe DOGE actually will make things more efficient, and if that ends up being the case it's completely fine to continue to help the rest of the world in a streamlined and technologically up-to-date way.
I honestly haven't kept up with DOGE and what's going on, but it seems like they're going full Shiva on everything and then reinstating things they make mistakes on. It's not the way I think anyone would prefer, but if it really is true that the US could go bankrupt within 1-5 years then this absolutely had to happen, and one can be a moral person that supports this.
I think the mega-death river is actually a pretty reasonable analogy for many real-life situations. Scott has mentioned the rich Zimbabweans who ignore the suffering of their countrymen. These are analogies for simply turning a blind eye to suffering, and the point being illustrated is that morality does not reasonably have any *actual* relationship with distance or entanglement or whatever, it's just more convenient to request that people close to a situation respond to it.
Of course there are plenty of ordinary Angolan businessmen, but I think the assumption must be that the rich Angolan is probably not a legitimate businessman but someone who skims or completely appropriates Western aid or the oil revenues that themselves owe to Western businessmen.
I would mostly agree. It's the distillation of some moral hypothetical into a specific (albeit wholly artificial and nonsensical) scenario that makes it a PARABLE.
I think people are apt to ignore problems if they think they can't do anything useful. They might or might not be right about whether they can do anything useful.
Sometimes the locals are the only ones who can help. Oskar Schindler was in the right place and at the right time to save a good number of Jews. Henry Ford wasn't in a place where he could do much. What he could do, make weapons for the Allies, was entirely different from what Oskar could do (making defective shells for the Nazis as a cover for saving Jews).
Even assuming Ford was a moral person who was genuinely interested in helping, he didn't have an avenue to do so in a direct way. I don't consider that a moral failing. That he instead chose to help the war effort (which maybe not coincidentally also gave him a lot of money) is not a moral failing either.
And sometimes we just make mistakes, which we cannot determine at the time. The US returned several boatloads of Jews to Europe at a time when it didn't seem like that was likely a big deal. Hindsight wants us to call the action evil, but that's a kind of bias. It was 1939. Very little of Europe was under the control of the Nazis and there wasn't much reason to think that would change. Even less reason to think that the Nazis planned to exterminate Jews in lands they conquered. The solution of "always accept boatloads of foreigners" is not a reasonable policy and comes with its own negatives and evils, which again would be noticed in hindsight.
Which means that "sometimes accept boatloads of foreigners" is a reasonable policy. That does not imply that "always accept boatloads of foreigners" is as well.
Yes, I think so. Even more than by physical closeness, altruism is boosted by the factors below. (To me, closeness includes all the examples with remote bots, portals, and any other techno-magical way to experience things and jump in as easily and quickly as if you were physically close. So the thought experiments are not ruling out closeness; it's very clear those alternatives have the same effect as physical closeness for many things, not only altruism. They just make precise what closeness is, when existing or hypothetical things make it more complex than physical distance.) Altruism is boosted by:
- innate empathy (higher for children, higher for people more like you, higher for women, lower for enemies)
- the impression you can help (your efforts are not likely to be in vain)
- the impression you will not lose too much by helping
- this includes the fear of establishing a precedent for such help, which indeed can cost a lot if the issue is ultra-common. To me this is a better explanation of the lack of empathy for common misery than habituation...
- the impression you can gain social status as the "good guy" (direct or indirect bystanders).
On the other hand, it is decreased (decreased a lot, I think) by:
- the impression you are being taken advantage of, scammed in a way (i.e. your rescue would super-benefit the victim, leaving them better off than merely fixing the issue (like drowning) would, or, more commonly, it benefits a third party, especially if that third party caused the problem in the first place). This is linked to the "lose too much" point, but not only; also a little bit to social status (hero vs. "trop bon trop con", too good = too dumb). But I feel it really is an altruism killer in its own instinctual way. Maybe THE killer.
I use "instinctual" a lot, because I am fully in the camp of morality being an instinct first, an axiom-based construction (distant) second. So, like other instincts/innate things (like sensory perception), it is easy to construct moral illusions, especially in situations impossible (or unlikely) to happen during human evolution.
You're a doctor working at a hospital, putting in superhuman effort and working round the clock to save as many people as you possibly can. Once you finish your residency, do you have a moral obligation to keep doing this?
You have a moral obligation to be a good person. There are many ways to do that, of which backbreaking labor at a hospital is both not the only option and perhaps not the best option.
You don't have a moral obligation to be a good person - to be a good person is to go above and beyond your obligations. Meeting your obligations doesn't make you good, it makes you normal.
This attitude is toxic and feeds into tribalism and "no cookies" arguments, where treating the other tribe well earns credit even for a little, while treating your own tribe with anything but the most delicate kid gloves invites excoriation.
I'm not sure it works as descriptivist either--there are plenty of people who divide the world into "good people" and "bad people", not "the good, the bad, and the average".
I didn't respond at first because in some sense you're right - or we could quibble over what "good" or "Good" mean, which probably isn't productive.
I will say that I don't consider moral to be neutral. Just being a normal person who does normal stuff doesn't make you moral. It doesn't make you immoral, either.
For me to consider someone moral, I believe that they have to do things that are morally positive that are not natural or easy. There has to be at least some effort at doing other than go-with-the-flow.
Again, not doing that doesn't make you evil (usually), but I don't want to dilute the idea of morality to make it natural and easy. It lets everybody get off too easily and with no benefit to society. We should expect more, in the sense of "leave the place better than you found it."
Does it matter? Does the fact that someone else's lack of moral obligation left you in this situation mean you don't need to help?
Maybe you see a drowning child because someone didn't fulfill their moral obligation to add fences. Or because someone pushed the child into the river. Does that change your moral obligation to save them?
Strongly disagree. The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Suppose you're Newton trying to explain how orbits work with the cannon thought experiment, but the person you're talking with keeps bringing up ways in which the example is unrealistic. "What sort of gunpowder could propel a cannonball out of the atmosphere?" they ask, and "What about air resistance slowing the cannonball down?" and so on.
It's not unreasonable to say in that situation "No, ignore all of that and focus on the idea the thought experiment is trying to communicate. If it helps, imagine that the cannon is in a vacuum and the gunpowder is magic."
And sure, if Newton thought hard enough, maybe he could have come up with the concept of rockets and provided an entirely realistic example of the principle - but if someone had demanded that of him, they'd still have been missing the point.
>The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Even the most simple, original Drowning Child thought experiment is drawn from messy reality. It asks us to avoid many questions that any person in that situation might ask themselves, intuitively or not: What is the risk to myself, other than the financial risk of ruining my suit? Am I a good enough swimmer to get to the child and pull it to shore? Is the child, once I reach it, going to drown *me* by thrashing around in panic? Is the child 10 meters from shore, or 100? Are there any tools around that could help, like a rope?
Plenty of complications there already, and no need to introduce even more. Or, if you do need to introduce more, start asking yourself if it's really a good thought experiment to begin with.
But Newton never claimed that his cannonball experiment was, by itself, proof of his theories, only that it helped to illustrate an idea that he'd separately demonstrated from real examples. Scott doesn't have the real-world demonstration.
I'd have thought the opposite is true, or can be, in that well-chosen idealized scenarios can help clarify and emphasize moral points. It's analogous to a cartoon or diagram, in which a few lines can vividly convey all the relevant information in a photo without any extraneous detail.
I actually have found a lot of utility in it because I seem to disagree with basically everyone in this thread, and it has given me context on why I find EA so uncompelling.
By relocating to the drowning child cabin you are given an incredibly rare chance to save many lives, and you should really be taking advantage of it.
On the other hand, you only get the opportunity because the megacity is so careless about the lives of its children. Obviously, saving the drowning children is a good thing, but what would be even better is if the megacity does something to prevent the kids falling into lakes and streams in the first place.
And if they don't bother because "well that sucker downstream will save the kids for us, and we can then spend the money that should go to fencing off dangerous waterways and having lifeguards stationed around pools on trips to Dubai for the rulers of the city", then are we really saving lives in the long run?
You are not really engaging with the thought experiment. Maybe think of this experiment instead: you suddenly develop the superpower of being able to be aware of any time somebody is drowning within say, 100 miles, and being able to teleport to them and teleport back afterward. If you think 100 miles is so little that a significant number of people drowning within that area is the government being lazy or corrupt, then imagine it was 150, or 200, or 1000, or the whole planet if you must. Would you have an obligation to ever use the active part of your powers to save drowning victims, and how much if so?
"You are not really engaging with the thought experiment."
Because it's rigged. It's not honest. It's trying to force me along the path to the pre-determined conclusion: "we think X is the right thing to do and we want you to agree".
I don't try to convert people to Catholicism on here, even though I do think in that case X is right, because I have too much respect for their own minds and souls. I'll be jiggered if I let some thought experiment that is as rigged as a Las Vegas roulette wheel manhandle me into "well of course I agree with everything your cult says".
EDIT: You want me to engage with the thought experiment? Fine. Let's forget the megacity.
Outside my door is a river, and every hour down this river comes a drowning child. Am I obligated to save one of them?
I answer no.
Am I obligated to save every single one of them, morning noon and night, twenty-four drowning children a day every day for the foreseeable future?
Again I answer, no.
But that's not what I'm supposed to answer? Well then make the terms clearer: you're not asking me "do you think you are morally obligated?", you're telling me I'm morally obligated. And you have NOT made that case at all.
There's a group of people who think that if you live in a regular old cabin in the woods in the real world, see a single child drowning in the river outside, and can save them with only a mild inconvenience, you are morally obligated to do so.
The child-drowning-in-a-river-every-hour thought experiment is a way to further explore that belief and discuss where that moral obligation comes from. Of course it's going to sound absurd to you, because you don't agree with the original premise. It's convoluted because it's a distortion of a previous thought experiment.
I'm not a huge fan of the every hour version because it implies an excessive burden on the person who would have to save a child every hour, completely disrupting their life and removing moral obligation to some degree. I think the comparison of the moralities of the two people earning $200k is a much more interesting example.
Save sixteen kids the first day, then formally adopt those. Have them stand watch in shifts, with long bamboo poles for rescuing their future siblings from safely back on shore. If their original parents show up, agree to an out-of-court settlement, conditional on a full-time lifeguard being hired to solve the problem properly.
I mean, if you seriously think you're not morally obligated to save any drowning children (and elsewhere in the thread you said it applies to the original hypothetical with just one child too), then fine, you've finally engaged instead of talking around the thing.
This, and your attitude to moral questions in general, does affect my opinion of the effectiveness of Catholicism, and religion in general, in instilling morals in people though, and I can't be the only one. You're not just a non-missionary, you're an anti-missionary.
Oh dearie, dearie me. You now have a poor opinion of Catholicism, huh? As distinct from up to ten minutes ago when you were on the point of converting?
Yeah, I'm afraid my only reaction here is 😁😁😁😁😁😁😁😁😁😁
Now, who's the one not engaging with the hypothetical? "Just because you think it's bad doesn't mean it's wrong", remember that when it comes to imposing one's own morals or religious beliefs on others in such instances as trans athletes in women's sports, polyamory, no-fault divorce, capitalism, communism, abortion, child-free movement and a lot more.
You don't like the conclusion I come to when faced with the hypothetical original Drowning Child or the variants with the Megacity Drowning Children River? Tough for you, that does not make my view wrong *unless* you can demonstrate from whence comes the moral obligation.
"If you agree to X you are morally obliged to agree to Y". Fine. Demonstrate to me where you get the moral obligation about X in the first instance. You haven't done that, you (and the thought experiment) are assuming we all share Western, Christianity-derived, social values about the importance of life, the duty towards one's neighbour, and what is moral and ethical to do.
That's a presumption, not a proof. Indeed, whether there are universal values and objective moral standards at all is the very thing we're arguing about!
I can be just as disappointed about "the effectiveness of Effective Altruism, and rationalism in general, in instilling morals in people" if you refuse to agree with me that "if you agree to save the Drowning Child, you are morally obligated to agree to ban abortion".
Malaria killed 608,000 children globally in 2022. Abortion killed 609,360 children in the USA alone in 2022. Now who cares more about the sacred value of life and the duty to save children?
That's the fun with hypotheticals - someone elsewhere said "the choice is snake hands or snake feet and you're going 'I want to pick snake tail'" but why not? It's a hypothetical, nobody in reality is going to get snake hands or snake feet! So why not "Oh I think I'd rather be Rahu instead!" with the snakey tail?
These things never seem to bother with considering that the value of a human life is not a universal constant, any more than is the value of other life on this planet.
Oh, sure. It's an arm-twisting argument about "you should give to charity", not anything more. Same way the thought experiment about "suppose a famous violinist was connected up to your circulatory system" or "suppose people got pregnant from dandelion seeds floating in the window" is about abortion.
It's set up to force you along a path to the conclusion the experimenter wants you to arrive at. You have three choices:
(1) Agree with the conclusion - good, moral person, here's a pat on the head for you
(2) Agree with X but not with Y - tsk, tsk, you are being inconsistent! You don't want to be inconsistent, do you? Only bad and stupid people are inconsistent!
(3) Recognise the trap lying in wait and refuse to agree with X in the first place - and we get what FeaturelessPoint above pulls with me - oh you bad and wicked and evil monster, how could you?
Many people go along with (1) because nobody (or very few) is willing to be called a monster by people they have been habituated to regard as Authorities (hence why it's always a Famous Philosopher or Big Name University coming out with these dumb experiments; we'd all laugh and ignore it if it were Joe Schmoe on the Innertubes), and most people want to get along with others, so they'll cave on (2). We all want to think of ourselves as moral and good people, after all, and if the Authority says "only viewpoint 1 is acceptable for good and moral people to hold", most of us will go along meekly enough.
You have to be hardened enough to go "okay, I'm a monster? fine, I'm a monster!" but it becomes a lot easier if your views have had you called a monster for decades (same way every Republican candidate was "Hitler for real this time", eventually people stop paying attention).
I'm willing to bite that bullet in a hypothetical, because I know it's a hypothetical and what I might or might not do in a spherical cow world of runaway trolleys and nobody in sight for miles around a pond except me and a drowning child, is completely different from what I'd do in real life.
In real life, maybe I don't jump into the pond because I can't swim. Maybe this is my only good suit and if I ruin it, I can't easily afford to replace it, and then I can't go to that interview to get the job that means now I can pay rent and feed my own kids. Maybe I'm scared of water. Maybe I think the kid is just messing around and isn't really drowning. Real life is fucking complicated*, so I have no problem being a contrarian in a simplified thought experiment that I can tell is trying to steer me down path A and not path B.
In real life, I acknowledge the duty to give to charity, because my religion tells me to do so. That's a world away from some smug thought experiment.
*Which is why there is a field called moral theology in Catholicism, and why for instance orthodox Jews get around Sabbath prohibitions by using automated switches to turn on lights etc. The bare rule says X. Real life makes it hard to do X, is Y acceptable? How about Z? "You're a bad Jew and make me think badly of Judaism as instilling moral values if you use automation on the Sabbath" is easy to say when it's not you trying to live your values.
I'm loving your ability to enunciate what the rest of us can only mutely feel.
"Suppose people got pregnant from dandelion seeds floating in the window" - hadn't heard that one but it's funny to me because it puts the thought experimenters about at the level of some adolescent girls circa 1984 - when my fellow schoolgirls earnestly discussed whether one could get pregnant "sitting in the ocean" lol.
"Again, suppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don't want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective, and a seed drifts in and takes root. Does the person-plant who now develops have a right to the use of your house? Surely not--despite the fact that you voluntarily opened your windows, you knowingly kept carpets and upholstered furniture, and you knew that screens were sometimes defective. Someone may argue that you are responsible for its rooting, that it does have a right to your house, because after all you could have lived out your life with bare floors and furniture, or with sealed windows and doors. But this won't do--for by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army."
Interestingly, she seems to argue *against* the Drowning Child scenario, though not by mentioning it:
"For we should now, at long last, ask what it comes to, to have a right to life. In some views having a right to life includes having a right to be given at least the bare minimum one needs for continued life. But suppose that what in fact IS the bare minimum a man needs for continued life is something he has no right at all to be given? If I am sick unto death, and the only thing that will save my life is the touch of Henry Fonda's cool hand on my fevered brow. then all the same, I have no right to be given the touch of Henry Fonda's cool hand on my fevered brow. It would be frightfully nice of him to fly in from the West Coast to provide it. It would be less nice, though no doubt well meant, if my friends flew out to the West coast and brought Henry Fonda back with them. But I have no right at all against anybody that he should do this for me."
So by her logic, if you live by the river of drowning children, nobody in the world can force or expect you to rush out and save them every hour, or indeed at all. Just because your cabin is located beside the river, where there is a megacity upstream where the children all tumble into lakes and get washed downstream, puts no obligation whatsoever on you. You didn't do anything to create the river or the city, or the careless parents and negligent city government.
I appreciated this perspective and was surprised it wasn't brought up earlier or given greater weight.
Deontological details are important, but a core part of all of this revolves around who is accountable for stopping an atrocity. I loved Scott's article, but we focused on pushing the extreme boundaries on how to evaluate a hapless individual's response to the megacity drowning machine while literally ignoring the rest of the society.
I've waved this part off as avoiding the pitfalls of the bystander effect; plus the point of the article seems to be answering the question "what should I as an individual do?" as well. But sometimes a problem requires a mobilized, community response.
I also appreciated Deiseach pointing out that when you altruistically remove pain from a dysfunctional system, you can remove the incentives for the system to change, which can have a worse outcome.
If it needs to be in the form of a thought experiment:
A high-profile, reckless, constantly gallivanting child of a powerful lawmaker falls in the river. If you save them, you know the child will stay mum about it to avoid backlash from their parents; but if they drown, the emotionally vexed lawmaker will attempt to re-prioritize riparian safety laws. What do you do?
The megacity is a vibrant democracy. Every child who drowns traumatizes the entire family and their immediate relations and galvanizes them to vote against the status quo and demand policy change, which is the only thing that will ultimately stop the jeopardy to the children long term. Do you save an arbitrary child that afternoon? How about at night after saving every child during your standard waking hours?
No one wants to see an atrocity occur. But sometimes letting things burn allows enough smoke to get in the air that meaningful action can finally happen. We should at least consider this if we're doing an elaborate moral calculus.
I went to that page and entered my information, but it didn't tell me whether I was on the global rich list or not, and it didn't say how rich someone would have to be in order to be on the global rich list (which I assume is not a real list, but a metaphor meaning in the top 0.01% or something). Do you know?
Scott has to keep making up fantastical situations because it’s the only way to pump up the drowning child intuition. I don’t regularly encounter strangers who I can see right in front of me having an emergency where they have seconds before dying but are also thousands of miles away.
Hmm, I don't have an ethic where I judge hypotheticals in terms of their realism. In fact, isn't the beauty of the hypothetical the fact that it is so malleable?
It really is a different mode of thinking. For some people, abstract situations are clarifying because they eliminate the ancillary details that obscure the general principle. For others, it's necessary to have all the ancillary details to make the impact of the general principle evident.
I've always favored the former, but I regularly encounter folks who only process things via the latter. Communicating effectively and convincingly across different types requires the ability to switch modes. Sorta like talking to a physicist vs. an engineer.
I read about this on r/askphilosophy (not sure why this scenario is resurfacing so much lately) and was struck by this comment:
"Singer isn't writing for people walking by ponds that children have fallen into, though. It's a thought experiment ... Singer's point isn't "Intuition, yay!" it's that our intuition privileges the people close to us but we should consider distant folks the same. It's that our intuition is wrong."
That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable.
It seems entirely disingenuous to me. He (Singer) should state his point, assert that he knows the truth and you know a lie, and let the chips fall where they may.
"That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable."
It's a gotcha, and why my withers remain resolutely unwrung by those telling me I'm immoral if I don't fall into line about "if X, then by necessity and compulsion Y".
I kind of know what you mean, but I kind of also feel like thought experiments lay bare uncomfortable truths about ourselves that we can typically hide from behind "germane details"
Is it a good thing to aspire to be the moral equivalent of the Siberian peasant who can’t do math word problems because he rejects hypotheticals? The thought experiments are useful for crystallizing what principles are relevant and how. Most people don’t intuitively think in terms of symbolic abstractions, that’s why hypothetical scenarios. Their practical absurdity is beside the point.
Given Russian history, I sorta suspect the Siberian peasant is capable of doing math word problems in private, but has developed an exceptionally vigilant spam filter. Smooth-talking outsider comes along, saying things without evidence? Don't try to figure out the scam, just play dumb, avoid giving offense, and wait for him to leave.
I agree. Maybe the Siberian peasant is too stupid to do maths problems, or maybe he remembers the last time some government guy from the Big City turned up and asked the villagers to agree to a harmless imaginary statement.
They're still scrubbing the bloodstains out of the floor in that hut.
Perhaps more precisely, people discount help according to their social circle's accounting of that help. Distance is part of it, relatedness is another (especially in collectivist cultures like mine)
That's a good point. Intuitively, the kid you can see drowning is probably your neighbor's kid or your second cousin twice removed or something. The kid you can't see drowning has nothing to do with you, and you have nothing to do with any of the people that would be grateful for them being saved
Honestly, that's not intuitive to me? I've never considered the drowning child thought experiment and gone "wow, that's probably related to someone I know!", and if we imagine that the drowning child is not at all related, e.g. they're a Nigerian tourist or something, it still seems like I'm just as obligated to save them.
So people intuitively recognize that you should save drowning children because that intuition evolved to help people related to you pass on their genes. They don't have that intuition for people far away because it had no reason to evolve: helping people hundreds of miles away doesn't help your genes.
In older times people just said that other tribes didn't really matter and only their own did, so that's why they only helped their own tribe. Nowadays people are more egalitarian and recognize that everyone has moral worth, so they have to twist themselves into knots to justify their intuition that you don't have to help far-off strangers.
If there are two people choking, one 10cm away inside a bank vault you don't know how to unlock (and might be charged with a felony for trying), the other a hundred meters away across clear open ground, who do you have the greater responsibility to?
Available bandwidth and ping times are more important than literal spatial distance.
I would agree with this idea. It also seems like a near vs far mode thing. The suffering of children in Africa is very conceptually distant, and we perceive it with a very low resolution. A child drowning right next to you just feels a lot more real.
In Garrett Cullity's The Moral Demands of Affluence his argument is that the "Extreme Argument" (Singer's drowning child) would require us to compromise our own impartially acceptable goods. And we don't even ask that of the people we are saving, so they can't ask it of us. (Kind of hard to do a tl;dr on it because his entire 300 page book is solely on this topic.)
"My strategy is to begin by describing certain personal goods—friendships and commitments to personal projects will be my leading examples—that are in an important sense constituted by attitudes of personal partiality. Focusing on these goods involves no bias towards the well-off: they are goods that have fundamental importance to people’s lives, irrespective of the material standard of living of those who possess them. Next, I shall point out the way in which your pursuit of these goods would be fundamentally compromised if you were attempting to follow the Extreme Demand. Your life would have to be altruistically focused, in a distinctive way that I shall describe. The rest of the chapter then demonstrates that this is not just a tough consequence of the Extreme Demand: it is a reason for rejecting it. If other people’s interests in life are to ground a requirement on us to save them—as surely they do—then, I shall argue, it must be impartially acceptable to pursue the kinds of good that give people such interests. An ethical outlook will be impartially rejectable if it does not properly accommodate the pursuit of these goods—on any plausible conception of appropriate impartiality."
I don't know if you were the one to recommend it in a prior ACX post, but I saw a comment about that book and devoured it. I am shocked that it doesn't seem to have any influence on modern EA, because it deals a strong counterargument to the standard rejection of the infinite demand problem.
I don't think it was me. I live in an Asian timezone so rarely post here (or on any other moderately popular American thing since they always have 500 comments by the time I wake up).
But maybe we both saw the same original recommendation? I read it probably 5-6 years ago, so maybe I saw it back on SSC in the day??
I'm not well versed on the matter--isn't the entire point of Singer's drowning child that it is not an extreme demand? It doesn't require you to sacrifice important personal goods like friendships or personal projects, just that you accept a minor inconvenience.
Edit--I read more about it, and Singer's drowning child is not the extreme demand, but the child-per-hour argument could be. The iterative vs aggregative question to moral duty seems particularly relevant to Scott's post.
With the disclaimer that it has been many years since I read the book, and Cullity's "life-saving analogy" is slightly different from Singer's for reasons he explains in the book. But part of Singer's argument is that he isn't actually asking us "just" to save one single child with his life-saving analogy. That's just the wedge entry point, and his logic then requires you to iterate on it.
"I do not claim to have invented what I am calling the ‘life-saving analogy’. Peter Singer did that, in 1972, when he compared the failure to donate money towards relief of the then-recent Bengal famine with the failure to stop to pull a drowning child from a shallow pond."
Singer certainly wouldn't say "you donated to the Bengal famine in 1972 so you are relieved from all further charity for the rest of your life." He would ask you to iterate for the next thing. After all his other books advocate for recurring monthly donations, not one offs.
"No matter how many lives I may have saved already, the wrongness of not saving the next one is to be determined by iterating the same comparison."
Singer doesn't let you off the hook if you see a school bus plunge into the river and have saved a single child from it. You can't just say "eh, I did my part, time to get to work, I'm already running late for morning standup".
And then once you iterate it seems to lead inexorably to the Extreme Demand.
"An iterative approach to the life-saving analogy leads to the conclusion that
you are required to get as close as you productively can to meeting the
Extreme Demand"
So I think Cullity, at least, believes that (some variation of) Singer's argument requires the Extreme Demand.
I think it also abolishes the "why help one near person when the same resources would help one hundred far persons?" arguments I see bruited about.
If you don't get to say "I donated once/I pulled one kid out of a river once" and that's it, no more obligations, then neither do you get to argue that people far away should be prioritised in giving over people near to you (and I've seen plenty of arguments about 'this is why you shouldn't give to the soup kitchen on your street when that same money would help many more people ten thousand miles away').
If I'm obliged to help those far away, I am *also* obliged to help those near to me. I'm obliged to donate to malaria net charities to save 100 children, but I'm *also* obliged to give that one homeless guy money when he begs from me.
Distance is not an excuse in either case; if I'm obliged to help one, I am obliged to help all, and I don't get off the hook by saying "but I just transferred 10% of my wages to GiveWell" when confronted by the beggar at my bus stop.
If you're going to get moralistic about (I assume) HIV you should also bear in mind it gets transmitted from mothers to newborns, who obviously have no moral responsibility for their plight.
"An entire society focuses on a sexual practice that spreads an incurable disease. The disease is also passed on to the children."
->
"An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society?"
This is standard motte and bailey. You cannot consider one without the other in this thought experiment and turn around and apply it to real life.
This is kind of an absurd argument given that the sexual practice in question is just "having sex" and infecting children is in no way a necessary consequence for people to fulfill their desire.
This is again a question of moral luck: people in the United States can, relatively trivially, get the medicine necessary to have sex without passing on HIV and people in some nations cannot.
Okay, fine - we should definitely discourage this. I don't think that gives us the license to ignore newborns getting HIV or that this is tantamount to deliberately drowning children.
Do you really think the world's assembled anti-HIV efforts would have ignored this out of embarrassment or stupidity? It's largely a sexually transmitted disease - they are not squeamish when it comes to studying which sexual activities are associated with increased risk. It is easy to find tables with estimated infection rates for anal sex, sex while carrying open STI sores, and so on.
I suggest you come up with some other way of blaming HIV incidence on the backwards culture of Africans.
>An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society
Those are two separate questions. You should save the child and do what you can to encourage the societal change that you think would be beneficial.
I get that this is some kind of tortured analogy to HIV, so I guess the real question is do you actually think non-profits aren’t also spending money on safer sex education in addition to HAART?
That doesn't mean long-term utilitarian arguments about the consequences of policy go away. It is conceivable that refusing to pay for HIV medication would ultimately produce a society where the risks of HIV are so terrifying that no-one engages in unprotected sex, and thus the number of infected newborns drops to zero. Even if, in the short term, more newborns die.
I don't know if this is actually true, but 'moralism' isn't the correct frame to analyse this problem with. "Fewer people should die of HIV" is a 'moralising' position.
Yeah, some countries do have wildly high rates, but were these countries with zero access to HIV meds? How do you know this didn't exacerbate the problem?
I don't know what the correct solution to the calculus here would look like, I'm just pointing out that calling your critic a 'moraliser' in response is nonsensical. There's no utilitarian calculus without a moral definition of utility.
Countries in the first world with ready access to these medications are way better off than much poorer countries who rely on foreign aid to get them.
And my point was not that it's absurd to invoke morality, my point was that invoking it to assign responsibility to victims was incorrect for the population of victims who have no agency.
EDIT: Also antiretrovirals can virtually eliminate transmission so it's hard to see how the "moral hazard" argument would work here, at least in a scenario where you adequately supply everyone.
> Countries in the first world with ready access to these medications...
Are, among other things, overwhelmingly white/asian, and I'm HBD-pilled, so I don't assume that identical policies are going to yield identical outcomes in both areas.
> EDIT: Also antiretrovirals can virtually eliminate transmission
If they remember to take them, sure. IQ has an impact on how reliably that happens, though.
So there is a question here of judging people as a society. If the newborns in a society all get HIV because the society is bad, but then the newborns grow up to be part of the same society, how do we judge them?
There's a reasonable counterargument that any specific newborn isn't blameworthy because he's a blank slate that might not grow up like that society, so we shouldn't judge him as a social average. But then this is already effectively a newborn we know nothing about except that he's from that society, so maybe judging him by the priors of his society (instead of global priors) makes more sense.
(There's also, in this particular case, a second objection that AIDS aid might gradually help push that society away from the disease as a group; this is a practical question I have no particular insight about)
Umm, if you're talking about HIV, don't all sexual practices that involve the exchange of semen, saliva, or vaginal secretions increase the spread of the incurable disease? Doesn't this include all societies that allow or encourage physical sexual connection?
Does it make any difference whether or not you're a member of the society?
If you're going to say this and repeat it, then get it right. The 0.08% figure applies to the situation where the infected male is asymptomatic. If he is sick, the chance of transmission is 8x higher, or 0.64%, enough bigger to be harder to shrug at. That works out to about a 12% chance of infection if the woman has sex with the symptomatic man 20 times.
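(For anyone who wants to verify that arithmetic, here's a minimal sketch. The 0.64% per-act figure and the 20 exposures come from the comment above; the only added assumption is that each act is an independent trial, which is itself a simplification.)

```python
# Cumulative risk over repeated exposures, treating each act as an
# independent trial (an assumption; real per-act risk varies).
per_act = 0.0064   # 0.64% per-act transmission risk, symptomatic partner
n_acts = 20

cumulative = 1 - (1 - per_act) ** n_acts
print(f"Risk after {n_acts} exposures: {cumulative:.1%}")  # -> 12.1%
```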
The blaming was down to opposition to condoms. Condoms are the highest good, you see, and being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people (see the comments above about people wanting to believe they're good and moral while not doing something to help) so they use cases like "condoms reduce spread of AIDS, the church wants people to die" to make themselves feel justified.
It's not so much that the church is against condoms in committed relationships, it's the position that even using condoms in extramarital or promiscuous sex makes those things a worse sin instead of not as bad, that really riles people up wrt AIDS and other diseases.
"being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people"
All this is an accurate description of the Catholic church's perspective. Judge and ye shall be called judgey.
Can you please post a link to the stat you quote of 37x increased risk of HIV transmission when dry sex is practiced vs intercourse with no interference with the vagina's secretions? I have looked quickly on Google Scholar and asked the "deep research" version of GPT and cannot find any figures remotely like what you are quoting.
Yeah, I found that too. That 37x-as-likely figure sounded like bullshit to me from the start, because it's too big and too precise. This just isn't the kind of data from which you can extract such a precise number, or find such a large difference between groups. To get such a big number, and to trust it, you'd need a control group of couples and a dry-sex group, assign couples randomly to the groups, and then follow them for a year doing HIV tests. Obviously such a study was not done, and it's not possible to get data you're confident of just by interviewing people about practices and number of partners, etc. In fact it's probably not even possible to group the people studied into dry-sex and regular-sex groups: you'd have to find people who have *only* done dry sex or *only* plain vanilla intercourse. In one of the studies I looked at, the best they could do was look at women who reported having had dry sex at least once in the period they asked about.
I really don't doubt that dry sex creates abrasions on the genitals of both partners and that this ups the chance of HIV transmission. Really irritates me when people spout bullshit that supports it though.
Isn't it only non-monogamous sex that can really spread any of these diseases? (Okay, you could be born with it, and give it to your one sexual partner, but none of these diseases can long survive on such a paltry pathway; all in practice require promiscuity.) Pre-1960 there have been a lot of societies (damn near all of them, in theory if not in practice?) that encourage only monogamous sexual connections.
We don't have time series data, but we have extensive literary evidence. That's usually the case with history. Demanding time series data smells of an isolated demand for rigour. After all, you seemed comfortable ruling out any society being entirely monogamous in your previous comment.
It wasn't "all societies", not even close. Read some anthropology.
It really depends on the kind of sex you're having much more than the number of partners- anal sex is *massively* more dangerous than vaginal. In Australia, for example, gay men are 100x more likely to have HIV than female sex workers (and most of the female sex workers most likely got it through drug use rather than sex. Most sex workers in Australia don't use IV drugs, but there's a somewhat significant number who do).
The point in war is to kill or injure enemy soldiers, so no, I don't see any intrinsic value in doing the opposite of that (treating the wounds of the enemy) when you are at war. (Nor would I expect the enemy to treat our wounded, although that would be a nice bonus.) War is brutal in and of itself, and so if you want to avoid brutality and cruelty, my suggestion is to avoid war.
Our shallow societal norms about war crimes are a joke because 1. as demonstrated countless times, we throw the norm out the window when it is convenient to do so, and 2. it perpetrates a gigantic fraud to imagine criminal war vs. just war when war itself is a crime.
Is your argument that no child actually dies which could have been reasonably prevented without incurring greater moral wrong? Because that's patently false.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is useless? Because that could work, but would need a greater explanation.
Is your argument that discussions about the nuances of moral theory and intuitions and how they cash out in actual object-level behavior is rhetorically not effective? Because that's false - I myself was persuaded by precisely that kind of argument and to this day find them more persuasive than other forms. We can talk about if other forms would be more effective, but that'd need more explanation.
Is your argument that worrying about malaria is inefficient compared to worrying about HIV? Because any even somewhat reasonable numbers say that's not true.
Is your argument that worrying about HIV is a waste of time because people with HIV are too ignorant and engage in risky behaviour? Because then the solution is education.
Is your argument that worrying about HIV is a waste of time because people with HIV are too stupid to avoid engaging in risky behaviour? Because then you'd need more evidence to support this claim.
Or do you just want to beat your own hobby horse about how African People Bad? I assume I don't need to bother saying why that position is odious.
Hi Alan, I would like to see a single piece of writing from Scott on the importance of education against practices like dry sex or having sex with infants to "cure" HIV. I would also like to see where in this post, his analogies are anything like societal practices encouraging dry sex or having sex with infants to "cure" HIV. Please make it so that a five year old can understand, thank you!
I'm South African. South Africa has for decades had major public awareness campaigns about AIDS that (among other things) explain and warn against those specific practices. Everyone who went to a South African high school heard the spiel repeatedly, and wrote exams testing their understanding of it. It was on TV, the radio, newspapers, the internet.
Those awareness campaigns are funded by the international aid that Scott has repeatedly endorsed in writing. I would have thought it fairly obvious to everyone that said funding is allocated to local education programs, in addition to condom and ARV distribution, etc.
Here is an additional hypothetical. I won't call it an analogy, because it describes a strictly worse situation than reality.
A child is drowning in a river, because their parents pushed them in. Should you save the child? Or should you let the child die because clearly those parents are terrible people who don't deserve to have children?
According to a source I find online, South Africa funds just over 70% of its anti-AIDS budget directly, 18-24% comes from PEPFAR depending on the year, and the rest from something called the Global Fund.
So your argument is that current educational programs don't exist (not true, as Synchrotron describes at least in the case of SA, and a cursory search finds similar programs in at least a dozen African countries) or that they're not effective? Because again, even a cursory glance at the literature suggests that while they're obviously far from perfect, rates of safer sex practices do improve with education, albeit very unevenly depending on country and specific program.
Actually, I'll make it easier for you. What, precisely, is your actual argument?
I think you are on the right track here. The common issue with extrapolating all these drowning child experiments is that the child presumably has no agency in the matter. The intuitions change very quickly if they do.
"You save the child drowning in the pond and point out the "Danger: No Swimming" sign. The child thanks you, then immediately jumps back into the pond and swims out towards the middle, and proceeds towards drowning. Do you save them again, and how often do you let them repeat that?"
"You see some adult men swimming in a pond, and one of them starts to drown. You save him, and then the next day you see him swimming in the pond again, apparently not having learned his lesson. Do you hang around in case he starts to drown? If he does start to drown, do you save him again? How often do you repeat that?"
All that before you get to the questions of "Can you actually save the person?" "Will going out to help only drown both of you?" "How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse.
Another question is: how well do you know the situation? Is the child actually drowning? Is he swimming? Filming a movie?
Another is: how much responsibility does the Megacity bear for all the drownings?
I think what happens is Alexander took a case that is rare and unpredictable and said it happens all the time. This of course inverts our intuitions.
In this case, in real life, it would be like:
"We have no responsibility to save YOUR children, but we don't like to hear them crying so we added a net at the border so they can drown at your end".
Indeed. In fairness it is Singer’s base example, and people just use it because it seems to be difficult for most to grapple with. Singer is not someone I feel good about based on his writing that I have read, but maybe he is a decent person.
"How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse."
"The “Law of Unintended Consequences” reared its ugly head despite the best of intentions. For example, when the US flew in free food for the starving people of Port Au Prince, it put the farmers out of business. They just couldn’t compete against free food. Many were forced to abandon their farms and move to tent cities so that they could get fed and obtain the services they needed."
I was reading the obituary of a neighbor’s father, a doctor, and I learned that he had always had a special passion for Haiti, from way back in the 80s, and “had made over 150 trips there”.
How admirable, I thought. And there was really nothing else to think of in connection with that, beyond its evidence of his compassion.
Among the many things I don’t understand is why people look so hard for (and frequently find) unintended consequences when talking about ostensibly altruistic acts, but rarely when talking about “selfish” ones. The example taken from the blurb of Scott’s father’s book is a single paragraph among others, most of which extol the virtue of voluntarism (although I haven’t read the book, so it may include a lot of similar examples of do-gooding gone wrong.) But even in the case of the farmers who lost their market, we don’t know for sure that that itself wasn’t a blessing in disguise – maybe some of them went on to find different, better paying and less arduous work. Maybe some of the people who were prevented from starving went on to do good works far in excess of saving a drowning child.
But as soon as it comes to “selfish” acts – starting a business with the aim of becoming rich, a business that fills a societal need or want – we don’t try to look for unintended consequences (we call them externalities); instead we point to the good they are doing. Even if we admit the negative externalities (the classic case is pollution, but another more modern one is social media platforms’ responsibility for increased political polarization), we still say “but look at all the good they’re doing,” or at least the potential good, if the benefits are still in the future.
One reason for saving a drowning child might be so that you don’t hate yourself for not doing it, which is only tangentially related to desiring others to see you as virtuous. Should that count as an argument against altruism? Why does the argument against the possibility of true altruism not also get applied to selfishness? Even the most selfish, sociopathic and least self-aware person will bring on themself *some* negative consequences of their actions – the loss of opportunities for even more selfishness; the loss of the possibility of truly mutually beneficial relationships; a victim who seeks revenge. Even if they die before realizing these negative consequences, their legacy and the reputation of their descendants will be tarnished.
Unintended consequences are not synonymous with externalities. The reason people focus on them with regards to altruistic motives is that the general default mode towards apparently altruistic acts is “do it” when in fact it might make things worse, whereas there is a default of assuming selfish acts are harmful to others, often in excess of what is really there.
Yes, I agree, unintended consequences are not synonymous with externalities -- externalities can be unintended, which is the rationale for environmental review of projects, but some of them are planned for, and some of them are intended to be mitigated whereas others are ignored or covered up. I don't agree that the default mode toward selfish acts is "don't do it," however. Selfishness is in many cases held up as a virtue (e.g. the selfish gene; the profit motive; the adversarial process in legal proceedings; the notion of survival of the fittest and competition for an ecological niche).
The point is the Drowning Child argument tries to hit us over the head with "do this self-evidently good thing or else what kind of monster are you?" without consideration of unintended effects. Donating to anti-malaria charities is a good thing.
So is feeding the hungry. And yet the intervention in Haiti ended up causing *more* hunger and undermining local food production. So was the self-evidently good thing an unalloyed good, or should we maybe look before we leap into the pond?
I think this is obviously the wrong analysis of PEPFAR, but even if it were right, this wouldn't be a good argument against the Against Malaria Foundation.
" 'An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease.'
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?"
Even if we granted the premise that it's not our moral responsibility to save people who recklessly endangered themself or others, many of the people who are getting HIV were not reckless. Some of them are literal babies or women and children who were raped. Many others didn't have the education to know how HIV is spread and how to avoid being infected; if someone mistakenly believes that sex with a virgin renders them immune to HIV, can you blame them for getting HIV when they thought they couldn't?
But I would definitely contest that premise. If someone is drowning in front of you, you're obligated to save them. It doesn't matter if they got there by recklessly playing near the lake or through no fault of their own. If someone will die unless you intervene, you have to help regardless of how they got into that position.
> ...behave as if the coalition is still intact...
I think you may have snuck Kant in through the back door. Isn't this kind of what his ethics is? Behave according to those principles that you could reasonably wish were inflexible laws of nature (or, in this case, were agreed to by the angelic coalition).
No, Kant relies on the idea of immoral actions being illogical, because they contradict the rules that also provide the environment where the action even makes sense to do.
Lies only make sense if people trust you to tell the truth.
Theft only makes sense if you think you get to keep what you take.
>My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
No, it's trivially obviously false that we would agree to that (or anything else) in this scenario. If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists, so any agreement premised on the assumption that we want to minimize suffering for either ourselves or others is dead on arrival. All other conceivable agreements are also trivially DOA in this scenario, since we also don't have any information about whether we're going to want or care about any possible outcomes that might result. Consistently applied Rawlsianism is just roundabout moral nihilism.
In order for it to be possible that the intelligences behind the veil of ignorance might have any reason to agree to anything, you have to add as a kludge that they know masochism, suicidality, and other such preferences will be highly unusual among humans in the society they're born into, and that it's therefore highly unlikely they'll end up with such traits. But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless.
Masochists tend to be very picky about the kind of pain they want. I have no idea whether this is as true about what kind of pain sadists want to impose.
I think that's a misstatement of what the veil makes you ignorant of. The point isn't that you don't know anything about the society into which you will be incarnated; the point is that you don't know what role in that society you will have.
Firstly, as a masochist myself, you are heavily misrepresenting masochism. Secondly, as someone who's met a weirdly large number of people who have committed rape, I'm pretty sure the net utility *for rapists* is at least slightly negative - some of them get something out of it, but some of them are deeply traumatized by it and very seriously regret it (and that's ignoring the ones who actually get reported and charged and go to prison, because I haven't met any of those).
I've wondered whether there were people who committed rape once and found they didn't like it and never did it again, or maybe once more to be certain and then never again.
It makes no difference to the victims, but it might make a difference to rape prevention strategy.
Yeah, that sounds about right. I definitely meant that the majority of people who commit rape only do so once, not that the majority of rapes are committed by one-time offenders. Probably should have clarified, though, so thanks for that.
You can try to invent an alternate version of the VOI where you arbitrarily know what your values will be without knowing anything else, but I'm not sure how such a blatantly arbitrary thought experiment is supposed to be a compelling argument for anything.
The point isn't that you know what values will be, but that you know the distribution of values/preferences and circumstances, from which yours will be randomly chosen.
I already explained in my original post why this doesn't work. If you grant the souls this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
It does work. If you don’t know whether you will be born a sexual predator or a victim, you should assume you’ll be a victim and therefore advocate for a society that prevents sexual assault.
The whole point of the veil is to be arbitrary. You only know *this* which is what the constructor of the thought experiment has predetermined is the important thing.
> we're equally likely as not to end up being sadomasochists
I think a lot of ethical thought experiments are pointless too, but the point that you could be a masochist is complete nonsense. Sadomasochists are a small minority of people, full-time ones even more so. Rawls’ angels could assume their human avatars wouldn’t like pain. The point is to apply that frame to actual human ethical questions, and humans can assume that the drowning child doesn’t enjoy drowning and that children in Africa don’t enjoy starving or dying of malaria. Otherwise it’s just silly sophistry.
I already explained in my original post why this doesn't work. If you grant the "angels" this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
I don't understand. How does probabilistic information about the personality makeup of the human species mean you can't be incarnated at random? Are they supposed to be making decisions with no knowledge of the world whatsoever?
>Are they supposed to be making decisions with no knowledge of the world whatsoever?
Not exactly. Souls behind the VOI are allowed to know general rules that apply to all human interactions; there's no reason why they can't know that humans inhale oxygen and exhale carbon dioxide, or other such things. They just aren't allowed any information that might incentivize them to favour the interests of any one person or group of people over those of any other person or group of people. So they can't know that "sadomasochists are a small minority of people", because then it would be rational for them to treat the interests of non-sadomasochists as a group as more important than those of sadomasochists as a group.
So... yeah, it looks like your quote is accurate, Rawls intended for the VoI to preclude any information about group size and relative probability of who you'd incarnate as.
At a glance, Rawls does seem to be making a lot of stipulations or assumptions about the value system of the angels, though (maximin principle, conservative harm avoidance, some stipulation of 'monetary gain' as if he were doing economics), so... it looks like "maybe you all incarnate as hellraiser cenobites" would contradict his thought experiment. But maybe I'd have to read it again.
There's perhaps a more fundamental objection to "you can't know how common different groups are", which is that subgroups are in principle infinitely sub-divisible. Is the "ginger Swedish lesbians born with twelve fingers" group supposed to be exactly as common as "people over five feet tall"?
I have never heard it claimed that Rawls prohibits probabilistic knowledge. Indexical ignorance is precisely the ignorance Rawls seems to be requiring.
Then you have not actually read Rawls, because not only does he state this prohibition explicitly, but he also explicitly acknowledges that removing this prohibition would make his argument completely nonsensical.
From "A Theory of Justice", pages 134-135 in the latest edition:
>Now there appear to be three chief features of situations that give plausibility to this unusual rule. First, since the rule takes no account of the likelihoods of the possible circumstances, there must be some reason for sharply discounting estimates of these probabilities. Offhand, the most natural rule of choice would seem to be to compute the expectation of monetary gain for each decision and then to adopt the course of action with the highest prospect. (This expectation is defined as follows: let us suppose that g_ij represents the numbers in the gain-and-loss table, where i is the row index and j is the column index; and let p_j, j = 1, 2, 3, be the likelihoods of the circumstances, with Σ_j p_j = 1. Then the expectation for the i-th decision is equal to Σ_j p_j g_ij.) Thus it must be, for example, that the situation is one in which a knowledge of likelihoods is impossible, or at best extremely insecure.
>[...]
>Let us review briefly the nature of the original position with these three special features in mind. To begin with, the veil of ignorance excludes all knowledge of likelihoods. The parties have no basis for determining the probable nature of their society, or their place in it. Thus they have no basis for probability calculations. [...] Not only are they unable to conjecture the likelihoods of the various possible circumstances, they cannot say much about what the possible circumstances are, much less enumerate them and foresee the outcome of each alternative available. Those deciding are much more in the dark than illustrations by numerical tables suggest.
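To make the contrast in that passage concrete, here's a minimal sketch with made-up numbers (the table and probabilities are mine, not Rawls'): expected value needs the likelihoods p_j, while the maximin rule Rawls is defending deliberately ignores them and looks only at each decision's worst case.

```python
# Hypothetical gain-and-loss table: rows are decisions, columns are circumstances.
gains = [
    [10, 2, -50],  # decision 0: high upside, catastrophic worst case
    [3, 3, 1],     # decision 1: modest but safe
]
likelihoods = [0.5, 0.3, 0.2]  # the p_j the veil of ignorance withholds

def expected_value(row, p):
    return sum(pj * gij for pj, gij in zip(p, row))

def maximin_choice(table):
    # Choose the decision whose worst-case payoff is highest.
    return max(range(len(table)), key=lambda i: min(table[i]))

print([expected_value(row, likelihoods) for row in gains])  # ≈ [-4.4, 2.6]
print(maximin_choice(gains))                                # 1
```

With the likelihoods in hand, expected value favors whichever row computes highest; strip them away, as the quote says the veil does, and maximin is about all that's left.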
The Rawls veil of ignorance works even if the "angelic intelligences" know every single fact about what will result from the society they choose except which human they will end up being. In that case it's basically rule total utilitarianism. It also works, somewhat, if there's only one intelligence doing the choosing, although there it ends up looking like rule average utilitarianism.
I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life, which can leak information about which human you are, causing a failure to come to agreement, but part of the experiment is that behind the veil you have a completely standardized mind.
>I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life,
...What? The fact that you're *not* doing this is my whole point!
Then I fail to understand what you mean by "But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless." The only thing they're supposed to not know is which particular human they end up as. Bayesian estimates of what a generic human is likely to experience are on the table! (The original Rawls book does handle this badly, but it's because Rawls has a particular (and common) blind spot about probability rather than it being an inherent defect of the thought experiment.)
What I mean is that the whole goal of the VOI is to justify some kind of egalitarian intuition. But this only sort-of appears to work in Rawls' original version because the souls lack *any* ability to guess, even probabilistically, what sort of people they're going to be (a point which Rawls states explicitly). If they're allowed to make informed guesses as to what sorts of people they'll most likely be, then there's no reason for them not to make rules where an East Asian's interests count for 36x more than a Pacific Islander's, or where a Christian's interests count for 31000x more than a Zoroastrian's, or where an autistic person's interests count for only 1% those of an allistic, or to any number of the other sorts of discriminatory rules which the whole point of proposing the VOI is to avoid.
If you're trying to maximise your expected utility, you don't want a scenario "where an autistic person's interests count for only 1% those of an allistic".
This is because in a world with 99 allistics/1 autistic, and a dispute between an autistic and an allistic in which the autistic loses 50x as much as the allistic gains, you have:
a 1% chance of being the autistic and losing 50
a 1% chance of being *the specific* allistic in the dispute and gaining 1
a 98% chance of being someone else
...which is an EV of -49/100.
You'd be in support of a measure that hurt the autistic by 50 in order to make the lives of *all* the allistics better by 1 each, but that's not valuing an autistic's interests at 1% of an allistic's; it's just not valuing them as twice as important as everyone else's.
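Spelled out as expected-value arithmetic (a minimal sketch of the two cases above):

```python
# 100 souls behind the veil: 1 autistic, 99 allistics.
P_AUTISTIC = 0.01

# Case 1: a one-on-one dispute. Only the autistic and one specific allistic
# are affected; the other 98 people get nothing either way.
ev_dispute = P_AUTISTIC * (-50) + 0.01 * 1 + 0.98 * 0
print(ev_dispute)  # -0.49, so you'd oppose it behind the veil

# Case 2: the autistic loses 50, but *every* allistic gains 1.
ev_broad = P_AUTISTIC * (-50) + 0.99 * 1
print(ev_broad)    # ≈ +0.49, so you'd support it
```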
Why would privileges only accrue to the specific allistic in the dispute in this scenario? That's never been how discrimination has worked. If you were born white in Apartheid South Africa, you wouldn't need to get into a specific, identifiable dispute with a black person to be favoured over them for the highest-paying jobs, for your vote to count more than theirs in an election, etc. you'd just get all that automatically.
"So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender."
Unless, of course, the rapist got much more pleasure than the victim felt suffering, so the total amount of happiness in the world increased...
I broadly agree that we should "do unto others as we would have them do unto us" but yeah, depends on the tastes of both ourselves and the other person.
I would draw a distinction between "observing a problem" and "touching a problem" in jai's original post. Trace is commenting on the "touching" side of things, specifically the pattern where a charity solicits money to solve a problem, spends that money making poor progress on the problem, and defends this as "everyone's mad at us for trying to help even though not trying would be worse". It is possible to fruitfully spend money in distant, weird-to-you circumstances you don't properly understand, but if you think you're helping somewhere you're familiar with, you're more likely to be right.
I think the distance objection does not refer to literal distance, but to our lack of knowledge, and the increased risk of harm, the further we are from the people we're trying to help.
For example, consider the classic insecticide-treated mosquito nets to prevent malaria. Straightforward lifesaving intervention that GiveWell loves, right? It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide. We didn't save as many drowning children as we hoped, and may have even pushed more of them underwater, because we were epistemically too far away to appreciate the entire socioeconomic context of the problem.
The further you are in physical and social space and time from the people you're trying to help, the greater the risk that your intervention might not only fail to help, but might actually harm. This is the main reason for discount rates. It's not that people in the far future are worth less morally, but that our interventions become more uncertain and risky. We're discounting our actions, not the goals of our actions. Yes, this is learned epistemic helplessness, but it is justified epistemic helplessness.
> It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide.
The best study we have on bed net toxicity—as opposed to one 2015 NYT article that made a guess based on one observation in one community—is from a 2021 paper that’s linked in the Vox article. It does a thorough job summarizing all known evidence regarding the issue, and concludes with a lot of uncertainty. However:
> I asked the study’s lead author, David Larsen, chair of the department of public health at Syracuse’s Falk College of Sport & Human Dynamics and an expert on malaria and mosquito-borne illnesses, for his reaction to Andreessen citing his work. He found the idea that one should stop using bednets because of the issues the paper raises ridiculous:
> “Andreessen is missing a lot of the nuance. In another study we discussed with traditional leaders the damage they thought ITNs [insecticide-treated nets] were doing to the fisheries. Although the traditional leaders attributed fishery decline to ITN fishing, they were adamant that the ITNs must continue. Malaria is a scourge, and controlling malaria should be the priority. In 2015 ITNs were estimated to have saved more than 10 million lives — likely 20-25 million at this point.
>“… ITNs are perhaps the most impactful medical intervention of this century. Is there another intervention that has saved so many lives? Maybe the COVID-19 vaccine. ITNs are hugely effective at reducing malaria transmission, and malaria is one of the most impactful pathogens on humanity. My thought is that local communities should decide for themselves through their processes. They should know the potential risk that ITN fishing poses, but they also experience the real risk of malaria transmission.”
There’s no good evidence that bed net toxicity kills a lot of people, and there’s extremely good evidence that they’re one of the best interventions out there for reducing child mortality. See also the article’s comments on nets getting used for fishing; the studies on net effectiveness account for this. Even if the nets do cause some level of harm, the downsides are enormously outweighed by the upsides, which are massive:
> A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets.
This doesn’t mean that we should stop studying possible downsides of bed nets or avoid finding ways to improve them, but it does mean that 1) they do prevent malaria, extremely well, and 2) they save pretty much as many children as we thought.
To add, the Against Malaria Foundation specifically knows about this failure mode and sends someone to randomly check up on households to see if they're using the nets correctly. The rate of observed compliance failure isn't close to zero, but it isn't close to a high number either. See: https://www.givewell.org/charities/amf#Monitoring_and_evaluation_2
Maybe I'm too cynical, but I haven't seen anyone change their mind when you add context that defies their expectation. I feel like they either sputter about how that's not their real objection (which, if you think about it, is pretty damn rude: to say "this is why I believe in X" and then immediately go "I don't believe in X, why would you think I believe in X") or they just stop engaging.
But I think we agree that the general principle still stands that moral interventions further in time and space from ourselves generally have more risk. We can reduce the risk with careful study, but helping people far away is rarely as straightforward as "saving a child from drowning" where the benefit is clear and immediate. I find the "drowning child" thought experiment to be unhelpful as a metaphor for that reason.
We're not saving drowning children. We're writing policies to gather resources to hire technicians to build machines to pluck children from rivers at some point in the future. In expectation we aim to save children from drowning, but unlike the thought experiment there are many layers and linkages where things can go wrong, and that should be acknowledged and respected.
Sure—but then shouldn’t we respond by being very careful about international health interventions and trying as hard as we can to make sure that they’re evidence-based, as opposed to throwing up our hands and giving up on ever helping people in other countries? The former is basically the entire goal of the organizations that Scott is asking people to listen to (GiveWell, etc). Hell, GiveWell’s AMF review is something like 30 pages long with well over 100 citations.
There has to be some point where it’s acceptable to say “Alright, we’ve done a pretty good job trying to assess whether this intervention works and it still looks good, let’s do it.” Going back again to the organizations that Scott wants people to donate to, I think that bar has been met.
I believe that where the bar lies should be for each person to decide for themself. Also, it's not enough for an intervention to have positive effect, but it must have a more positive effect than doing what we would otherwise do anyway. That's a much harder bar to clear.
I personally do think many international interventions have positive effects in expectation. But I am skeptical that they have more positive effect than the "null hypothesis" of simply acting as the market incentivises. I'm honestly really not sure if sending bed nets to Uganda helps save more lives in the long run than just buying Ugandan exports when they make sense to buy and thereby encouraging Ugandan economic development, or just keeping my money in the bank and thereby lowering international interest rates and helping Uganda and all other countries.
The market is a superintelligent artificial intelligence that is meant to optimize exactly this. To be fair, part of the process of optimization is precisely people sometimes deciding that donating is best. Market efficiency is achieved by individuals taking advantage of inefficiencies. But I don't think I have any comparative advantage.
The market optimizes something very different from "human flourishing". Economic resources and productivity are conducive enough to human flourishing that we've been able to gain a lot by taking advantage of the market being smarter than individuals, but now it's taking us down the path of racing toward AI, so in the end we're very likely to lose more than we ever gained by listening to the market. And in the meantime, Moloch is very much an aspect of "who" the market is.
Moloch is an aspect of everything. It would be cherry-picking to say that it uniquely destroys the efficient market hypothesis vs. all other solutions. Efficiently functioning markets have been demonstrated in the real world to lead to vastly better outcomes than any other known system of resource allocation.
This argument proves too much, though. If the maximally efficient way to save lives is sitting back and letting markets do their thing, wouldn’t that also mean that we should get rid of food stamps, welfare, and every other social program in the US? After all, these programs weren’t created by market forces—they were created by voters who wanted to help the unfortunate (or help themselves) and who probably weren’t thinking all that hard about the economic consequences of these policies. The true market-based approach would be to destroy the social safety net, lower taxes by a proportional amount, ban all private charities that give things away at below-market prices, and let the chips fall where they may.
Markets are good at doing what they do, but there’s no law of economics that says markets must maximize human welfare. They maximize economic efficiency, which is somewhat correlated with human welfare but a very imperfect proxy for it. I don’t think that I can beat the market at what it does best (which is why I’m mostly invested in the S&P), but when it comes to something the market isn’t designed for and doesn’t really care about, I trust it far less.
Moreover: Is that your true objection? If someone came out with a miracle study proving that donations to the AMF save more lives than investments in the S&P (I know this is sort of impossible to quantify, but let’s say they did), would you then agree that donating to the AMF is a good idea if you want to improve human welfare?
The market does an impressive job at optimizing for the welfare of people who have money. LVT + UBI would neatly sort out most of the associated misalignment problems.
Stories about clothing donation - unsorted heaps of your old sportsball and fun-run and corporate team building tee shirts having more value than the most beautiful locally-produced textiles - are depressing in this regard, and bring to mind the African economist who - 20 years ago or so - received a tiny bit of attention for asking Western do-gooders to basically leave Africa alone.
Do you also apply this heuristic to acts that we might call selfish? Starting a clothing business to make lots of money by jumping on microtrends in fashion carries the risk of encouraging young people to overextend their credit. Discarded, no-longer-in-fashion garments may end up clogging landfills. And yet it’s the ostensibly altruistic projects that we attack for ending up “doing more harm than good." The others we praise for their entrepreneurial spirit.
> insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets
I'm curious as to how the math here works out. If they're reducing child mortality by 17%, how does that not imply 170 lives saved per 1000 children? Everyone goes through an infant stage during their lives, right?
17 percent of the total risk of child mortality. If the total risk of child mortality without bednets was 100% then Africa wouldn't have made it long enough for this even to become a charity.
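A quick back-of-the-envelope (my numbers, not the Cochrane review's) shows the two figures are consistent:

```python
# If a 17% relative reduction saves 5.6 lives per 1,000 children, the implied
# baseline all-cause child mortality is about 33 per 1,000 — nowhere near 100%.
lives_saved_per_1000 = 5.6
relative_reduction = 0.17

baseline_deaths_per_1000 = lives_saved_per_1000 / relative_reduction
print(baseline_deaths_per_1000)  # ~32.9
```

The "170 per 1,000" figure would only follow if every child protected would otherwise have died.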
That's exactly the answer. When you're helping the child in the lake across the street there are a lot of implied social contracts at play, between you and your neighbors, you and your city, you and your country. That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
There's no such contract with the far away child. You don't know if the child's drowning because their society keeps throwing children in lakes. You don't know that the money you send won't be used to throw even more children in lakes. You don't even know if that child will be saved just to grow up and come make war with your own society.
There’s something to this, but I’m not sure if it’s enough. Suppose you’re American and taking a vacation (or on a business trip, or working there temporarily) in rural China and you see a drowning child.
Would you decide not to save them because it’s not your country? What if Omega tells you that it’s a genuine accident and the locals are not routinely leaving children to drown?
If you read the article, many of the Chinese bystanders were not passive. Obviously the drowning child scenario assumes a shallow lake; very few people would dive into a fast-moving river without some training (the diplomat competed in triathlons).
Unironically yes. When you travel to a foreign country like this, you are an outsider and you aren’t really supposed to interact with the locals very much. I wouldn’t talk to them, I wouldn’t make friends with them, so I sure as hell am not going to get involved in their private affairs like this. It’s none of my business and as an outsider I wouldn’t be welcome to participate. I’m pretty sure that if I saved the child, I would be called a creep for laying hands on him. Without knowledge of the actual language, I wouldn’t have the tools to explain myself otherwise.
Honestly, I think your thought experiment kind of illuminates why we save local children and not distant ones. The local children are theoretically members of our community, and though community bonds are weaker than ever, they aren’t non-existent, they still matter. Ergo, we save the child to reinforce this community norm, and we hope someone else saves our own children from drowning some day.
That doesn’t transfer if we do it in a foreign country.
"I'm pretty sure that if I saved the child, I would be called a creep for laying hands on him." I'm not a mind reader, but this sure reads like a bad faith argument to me.
Have you read the fable of the snow child? It’s a story about a fox who saves a girl who was lost in the woods in the winter. Upon bringing the girl home the parents shoot the fox because he’s a fox and they were afraid that he was going to steal their hens. The girl of course admonished the parents for this, but it didn’t change the fact that the fox was dead.
Do not underestimate the power of xenophobia.
Communicating with foreigners can be a very high stakes situation. When people are naturally suspicious of you, it’s critical that you stick to pre-approved socially accepted scripts and to not deviate from them, otherwise the outcomes can be very unpredictable. Drowning children is a rare enough event that we don’t have universally agreed upon procedures for how to handle them.
There have been, not commonly I think but still non-zero, honor killings because a (male) foreigner interacted with a local (female) child. The interaction that the tourist probably thought was merely being polite was enough for her to be marked unclean.
If you happen to stumble into a living thought experiment where a child is drowning in a shallow pond, it's worth risking a cultural misunderstanding and the child's later death rather than accepting the certainty of their drowning now. But such cultures do exist.
As someone who would like to be saved from any hypothetical future drownings, even if they were to happen in foreign countries, or in my own country in instances where the only potential saviours are foreigners, I very much dispute your last sentence as logically following from the previous.
Indeed, I would like the community of people who would feel obligated to save my children from drowning to be as large as possible, all else equal.
To deal with the objections below, switch it up so it's in your home country but a visiting tourist: a cruise ship docks at your local beach; you know at this time of year the majority of swimmers are tourists not locals. You see a kid drowning... Do you ignore them because they're from a different country?
>That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
Would you accept that our moral circle should expand as the economy becomes more globalised then? It's standard in the modern economy for kids on the other side of the world to grow up to make your clothes, grow your coffee etc.
Yes, but the economic bond is not enough. You need a cultural bond. None of these variables are binary in practice, so the amount of economic ties, cultural ties, and amount of help we send tend to be proportional to each other.
"You don't know if the child's drowning because their society keeps throwing children in lakes." That doesn't seem like a good reason to not save the child.
"You don't know that the money you send won't be used to throw even more children in lakes." This would be an argument against dropping money randomly, but we have fairly robust ways of evaluating charitable giving.
"You don't even know if that child will be saved just to grow up and come make war with your own society." Saving them with a life preserver that says 'Your drowning prevented by [my society]" seems like an excellent way to prevent that, with the added benefit that they'll tell all their friends to not make war on your society, too.
This is an over-simplification. Children are routinely indoctrinated by their societies throughout early adulthood to become warriors or to create more warriors. There is absolutely a real risk that a random person saved will be your enemy in the future. Saving them from a vast distance can indeed be seen by that society as a great way of helping their future takeover while impoverishing their enemy. Moloch is everywhere.
Allowing for the sake of argument that that's a significant problem, seems to me the obvious patch would be forking some percentage of nets off the main production line before the insecticide-treatment step, adding a cheap but visible feature to them (maybe weights? A cord for retrieval?) which would make the insecticide-free nets more useful for fishing and/or less useful for beds, then distributing those through the same channel until fish-net demand is saturated.
I think uncertainty in outcome and timing explains a lot, at least for my own behavior.
If I am certain of a benefit to others while uncertain about how grumpy I will be after the good deed, the finger is on the balance to help.
The inverse is also true. Giving for the certainty of relief is very different from giving with the non-zero chance that funds get diverted to wars, corruption, or criminal organisations.
Thank you. This is very much my intuition as well, and I'm glad somebody else laid it out clearly. The biggest flaw in all these thought experiments, IMO, is that you're assumed to have 100% accurate knowledge of the situation. Accurately knowing the details of the river and the megacity and the drowning children is FAR more important to moral culpability than whether you happen to have a cabin nearby, or whether you happen to live there.
Sounds like we need some kind of social arrangement where we are gently compelled to work together to solve social problems cooperatively with roughly equal burdens and benefits, determined by need and ability to contribute. What would we call this...rule of Social cooperation? Perhaps social...ism?
Nah, sounds scary. Let's just keep letting the rules be defined by the Sociopathic Jerks Convention, with voting shares determined by capital contributions.
Right, the trick is that the altruistic people need to make rules that exclude the sociopathic jerks from accumulating power and which make cooperation the better choice from their own selfish perspective.
Perhaps a good start, just a bare minimum, would be to strictly limit the amount of capital that any one person can control (could be through wealth taxes, could be through expropriation, could be through enforcement of antitrust... whatever, trying to keep this on a higher level). The extreme inequality here leads to further multiplication of the power of the wealthy, and because the Sociopathic Jerks Convention (e.g. the behavior of corporations, which are amoral) is running the show, their rules allow them to further multiply their power.
The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin.
Better yet, why not leverage the ambitions of entrepreneurs to invest their time, money and creativity to solve problems for consumers? Big investments require big bets and huge risks which need to be offset with immense potential rewards.
I think a lower bound is more important - and more feasible to enforce - than an upper bound.
When you go to the most powerful capitalist in the world and tell him "your net worth is above this arbitrary line, so a new law almost everyone else agreed on says you have to give some of it up," is he going to actually cooperate with that policy in good faith? Or is he going to hire the best lawyers and accountants in the world to find some loophole, possibly involving his idiot nephew becoming (on paper) the new second or third wealthiest capitalist in the world?
One trouble with Rawlsian veils (better yet, Harsanyian veils) is that networks of billions of interacting people are complex adaptive systems with emergent characteristics and outcomes. If we want to establish morality by what actions would lead to the best outcomes, then we need to actually play through the system and see how it develops.
May I suggest that a world where everyone gave everything beyond basic sustenance to anyone worse off than them would scale into a world where nobody invested or saved for the future, and everyone felt like a slave to humanity, because they would be. It would be a world of complete and total destitution, devoid of any ability to help people across the world.
I think it is more realistic to take real humans with their real natures and find rules and ethics and institutions which build upon this human nature in a productive way. I would offer that this is more of a world where altruism, utilitarianism and egoism overlap. Science does this by rewarding scientists with reputation for creating knowledge beneficial to humanity. Free markets do this by rewarding producers for solving problems for consumers. Democracy does this (in theory at least) by aligning the welfare of the politician with the citizenry voting for them. Charities do this by recognizing the benefactors with praise and bronze inscriptions.
There are good reasons why pretty much nobody gives everything to charity. Effective Altruists need to take it up a level.
Socialism has a history far broader than whatever particular example you are thinking of.
Economic systems and body counts don't seem meaningfully correlated... mercantilism and capitalism get colonialism (and neocolonialism) and slavery, plus fun wars like Vietnam and Iraq; communists get the Great Purge and the Great Leap Forward.
Authoritarianism has the body count, regardless of if it's socialist, capitalist, theocratic, mercantilism or whatever you prefer.
>Socialism has a history far broader than whatever particular example you are thinking of.
And /none/ of it has been notably successful, in any significant respect (& particularly compared to its competitor(s)—just the opposite, in fact)—so... well, I remain skeptical.
Of course, it depends on what you call "socialism". Is "capitalism but with government services" deserving of the name? If so, it DOES work! (...but I, of course, would credit that to the other component at work.)
>mercantilism and capitalism gets colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq, communists get the great purge and great leap forward.
I think there are many things wrong with this attempt at "death-toll parity", but no one ever changes their minds on this topic & it's always a huge exhausting slog... so I'm just registering my objection; those who agree can nod wisely, those who don't can frown severely, and we all end up exactly where we would've anyway except without spending hours flinging studies and arguments and so forth at each other!
Well, I don't want to turn this thread into a debate on socialism, and you're very right that how we define our terms is contested and critical.
I would suggest that there are many examples, such as with Allende, where it seems like it was going to work really well and the CIA simply could not have that.
I'd also note that life for the average Cuban is far, far better under Communism than it was under Batista, for example, and possibly the results in the other countries you are thinking about are better than you think when looked at from the perspective of the average poor person rather than the very small middle and upper classes, who typically control the narrative.
Regardless, I was just saying that authoritarianism is orthogonal to economic model. And it is authoritarianism, regardless of the economic model, which is "scary." The Nazis were not less horrific simply because they had a right-wing social and economic program.
Would Venezuela be an example of a country where non-authoritarian socialism has gone badly? (May be you can wriggle out of this by saying it's a bit authoritarian?)
I suppose Norway would be an example of a country where socialism has gone pretty well (though with a fairly large dose of capitalism, and the advantage of massive oil reserves - not that those saved Venezuela).
Norway is not Socialism, per ChatGPT at least. It is a Social Democracy, not a Socialist government, though one may quibble about the distinction. Norway has:
Private Property and Business: individuals can own businesses and land, and a market economy drives most goods and services.
Stock Market and Investment: Norway has a well-functioning stock exchange and encourages entrepreneurship and foreign investment.
Profit Incentives: while taxes are high, businesses still operate for profit; wealth creation is encouraged, though heavily taxed and redistributed.
I would personally argue that it would function even better with higher profit motive and less government intervention, but it is a misnomer to claim it is Socialism.
Venezuela went very authoritarian, but also, I wouldn't claim that every socialist experiment, even the less authoritarian ones, is good. Norway is a possible example of a good one, as you mention. Cuba is an obvious example. One could argue China is doing really well, and you can say it's capitalist, but they also haul off billionaires to labor camps if they get too out of line, so I would push back on that.
But Venezuela failed. This stuff is complicated. Anyone who says HURR DURR SOCIALISM BAD is ignorant, many of them proudly so.
At my workplace (before I quit, angrily—and, as it turns out, unwisely), we had several Cubans who had come over here to 'Merica on rafts and the like. They were bigger fans of America than most Americans—the yard manager bought a Corvette and had it emblazoned with a giant American flag, always wore "America: Land of the FREE!" or "...of Opportunity!" etc. T-shirts, and so on (& I once witnessed his eyes get wet at the anthem before a game!)—and... uh... well, they would talk about Cuban food, women, weather, vistas, but to a man they said they'd die trying to sneak back into the U.S. rather than accept being forced to go back & remain.
Anecdote, of course. But I get the impression that this is the modal Cuban, over here; granted, they're self-selected—but one doesn't see very many Proud Cuban Forever, "I'd die before leaving my adopted Cuba!", etc., expats, going the other direction.
Speaking personally, I can feel that the appeal of both socialism and effective altruism are linked to the same set of intuitions about solving social problems.
To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with those intuitions than to actually figuring out the best way to operationalize them.
Socialists tend to presume they know the answer even in cases where their preferred answer does not seem like it actually achieves the goals they're supposed to be working towards.
Or, maybe a different way of saying it: I think ~120 years ago, socialism would have felt a lot like EA today: the ideology of smart, scientific, conscientious but not sentimentalist, universalists. But the actual history of socialism means that a lot of the intellectual energy of socialism has gone into retroactively justifying the USSR and Mao and whatever, so that original core has become very diluted.
TBC, I don't mean this as a complete dismissal of socialism. I think there are lots of people who consider themselves socialists who basically have the right moral intuitions and attitudes, and I absolutely feel the pull of socialist ideas... But I often find myself frustrated by how quickly so many socialists refuse to engage with the fact that capitalism has been absolutely necessary to generate the resources required for a universalist moral program, or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned.
I'd say "hollowed out" rather than "diluted." Anybody who got sufficiently sick of trying to justify the USSR, and still cared about the original virtuous goal, started calling their personal agenda something else and focusing it in different directions.
"To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them."
Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms.
I'm opposed to capitalism partly for evidence-based reasons and partly because of basic values (I think it's morally wrong to derive most of your income from non-labor sources), and you couldn't convince me out of my values even if you changed my opinion about some facts.
"or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned."
what facts, or "facts", are you thinking of, and why would you expect they would change my mind?
I'm aware socialist countries tend to be authoritarian (not necessarily "totalitarian," whatever you think that means), but I'm not really bothered by that in principle, since I don't view political freedom as self-evidently good.
"Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms."
That's totally fair, but in the context of the original comment, which implied that "socialism" was just a method to implement the strategy of gently compelling people to work together to solve social problems, it's worth pointing out that socialism has other moral axioms that may be unrelated to the project of solving those problems--or at least, that the problems socialism sees itself as solving might be different from the problems suggested by Scott's post.
"what facts, or "facts", are you thinking of, and why would you expect they would change my mind?"
The usual ones about gulags and the Cultural Revolution and so forth; I'm sure you already know them. And I didn't say that they should make you change your mind, I said that socialists abandon their conscientiousness in the face of those facts: they tend to defend actions and outcomes that are canonically the sort of thing our hypothetical strategy of "gently compelling people to cooperatively solve problems" is meant to be *solving*.
Again, this is fine, you're allowed to think that the occasional gulag is justified to make sure that no one derives income from non-labour sources. I'm not saying you shouldn't be a socialist, I'm saying that being a socialist is *different* from the project that Scott loosely alludes to and that the top-level commenter suggests is basically achieved by socialism.
I'm explaining to the top-level commenter why some people who are sympathetic to the goal that Scott outlines, and who have some sympathy for the intuition that this has something in common with socialism, might still not consider themselves to be socialist, or at least, might think that the two projects aren't exactly identical.
Okay, you've changed my mind. I'm now convinced that promoting a social norm of saving strangers is actively evil because of second-order effects. Thanks!
Reading "More Drowning Children", the thought that came up for me was, "Damn, he has greatest ability to write reticulated hypotheticals which primarily serve to justify his priors of any one I've ever read!"
My second thought: For me, the issue is more, "At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?" Not what you're getting at, I know, but what you're doing seems to me to point in that direction.
I might as well take the role of the angel on your shoulder, whispering into your ear to tempt you, saying, why not give all you have to help those in extreme need just once, to see how it feels? What if your material comfort was always at the whims of strange coincidence, and goodness was the true measure of man? What if you found out you liked being a penniless saint more than a Respectable Person? You might enjoy it more than you think. Just think about it. :)
Penniless saints have done far less good in the world as a whole than wealthy countries and wealthy billionaires who then had enough time and capacity to look beyond their near term needs.
Sounds like something you'd hear from media sponsored by billionaires, or in history books written by billionaires, or in a society which overemphasizes the achievements of billionaires while ignoring the harm they are doing, etc.
I actually completely agree with this post. You shouldn't take your own sense of "feeling good" as the entire idea behind morality. Yes, billionaires giving to charity will do more good than a penniless saint (being influential can make up for this gap -- Gandhi may have done more good than the amount of money in his pocket would suggest -- but the random penniless saint won't outweigh $100,000,000 to charity).
That being said, billionaires can save 100,000 lives, but you personally could save 1 life. If you don't save that one life you could, it seems like you're saying you don't value saving lives at all.
You could say "one of my highest utility action is to become a billionaire first, AND THEN donate all my money to the causes which are the most effective" and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead, you say "well it's difficult to do the maximally efficient thing so I'm not even going to save ONE LIFE", then you're giving an excuse for not saving a life even if you wanted to.
You could say "one of my highest utility actions is to CONVINCE all the billionaires to donate their money to charity". and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead you say "well, most people who say they're moral aren't doing that, so clearly the idea of morality is bunk and I'm not a bad person for not following the natural conclusion of my morality" then that's a problem.
Someone who weaves complicated webs to avoid doing anything different from what they already wanted to do IS, IN FACT, a worse person than if that same person donated enough to charity to save one life.
No matter what, all morality says you need to either *try*, OR say you don't value saving any lives (an internally consistent moral position that wouldn't care if your mom were tortured with a razor blade), OR do what Scott says in the post and assume that looking cool/feeling good about yourself IS morality, and therefore there's no moral difference between saving 0 lives, 1 life, or 10,000 lives if they provide the same societal benefit and warm feeling of fuzziness about being a good person in your gut.
I'm not sure what complexity there is. The invisible hand makes free societies wealthy, and wealthy societies give more to charity. No external effort, no waiting, no convincing, marketing, sales, or anything else needed. Lowest effort, highest utility.
There is more in heaven and earth than billionaires. There are also a lot more millionaires, and even more hundred-thousand-aires than there are billionaires. Grow the whole pie. This isn't zero-sum.
"At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?"
In the thought example, some of the saved children should take over the job, and the others maybe give thanks for saving their lives.
In real life, no one is ever going to reward you, because the kind of people with the capacity and desire to reward you are probably too busy saving kids themselves. Until the day comes when there's finally no more kids to save anywhere, then MAYBE society will throw you a bone, but we'll probably all be dead before that happens.
This is the point of the first issue of Kurt Busiek's Astro City comic. Samaritan, a Superman-like hero, can never rest, literally (I think), because with his super-hearing he can always, 24/7, hear a drowning child in Africa, and he can get there in 2 seconds, so he feels compelled to do so.
Seems like the superior solution would be finding an EA / moral entrepreneur who will happily pay market value for the cabin, and then set up a net or chute or some sort of ongoing technological solution that diverts drowning children into an area with towels, snacks, and a phone where they can call their parents for pickup. Parents are charged an entrance fee to enter and retrieve their saved children.
I unironically think the moral equivalent of this for Scott's favorite African use cases is something like "sweatshops."
"Parents are charged an entrance fee to enter and retrieve their saved children."
What if the parents don't turn up?
"Look, do you really think that by now *nobody* has realised all the missing children are due to them falling into lakes and streams? One child per hour every hour every day every month all year? 8,760 child fatalities due to drowning per year for this one city alone? Come on, haven't *you* figured out by now that this is happening on purpose?
Don't want to be bothered with your kids anymore? Don't worry, eventually they'll wander off and fall into one of our many unsecured lakes, streams, ponds, and waterways, and that's that problem off your hands! Your kid is too stupid to figure out that they shouldn't go near this body of water? Then they're too stupid to live, but Nature - in the guise of drowning - will take care of that.
You keep saving all these kids, our population is going to explode! And the genetically unfit will survive and reproduce! It will ruin our society!"
Bodies of water have inherent danger. Yet it is a worthwhile tradeoff not to post lifeguards at every single river, pond, and stream just to prevent the possibility of some children drowning. Life is life, and tragic accidents happen. Safetyism is worse.
Economic development leads to lower fertility. I definitely think population and fertility rates in Africa are huge social problems, but the best way to address them is to make Africans more prosperous so they adopt the norms about sex and family size that other countries have adopted as they get richer.
Yeah, probably. But what about voting for the Different Dam Party? Or voting for a party whose headline 5 policies you greatly support but also have a nasty line in their manifesto about building a different dam?
I think at some point Scott has to accept that people reading this blog are exactly the types of people to optimize for their own coolness and not at all for truth-seeking or morality, when you see them go into contortions to avoid intuition pumps. The problem is upstream of logical argument, and lies in whatever thought process prevents them from thinking they could be at all immoral.
Depends. Some people are here for the careful explorations of morality. Some people are here because they heard it was where all the smart kids hang out, and they are desperate to prove they belong, which often means showing off your ability to do cognitive somersaults over things like empathy or basic moral intuition. It's essentially transgressive intellectualism as psychic fashion.
Although I am being unfair in not mentioning that I'm really talking about the commenters. If you were persuaded, the most likely time you mention it (if you do *at all*, which you probably don't, because mentioning donations is gauche) is a random start- or end-of-year open thread, probably with no direct link back to the persuasive post. If you weren't persuaded, you likely fall into the above failure mode. (Edit: and therefore immediately respond)
Yup, people who go to meetups are several tiers above the average commenter, who cannot seem to grasp the purpose of hypotheticals and posts things like "well, this just makes me WANT to drown people (unstated subtext: because I don't like your arguments)". Even if those types of people went to meetups, they'd know better than to say things like that!
And "seeming cool" doesn't mean "fashionable" or "obviously pandering to populist sentiments" (both of which I agree would be a bad way to describe even the current commenters) in this context, but something more like "self-conception preserving" or "alliance affirming". Someone replying to a post about morality with how obviously they love their family, and how obviously giving should be local because then it'd be reciprocal, is not thinking truth-seeking thoughts but "yay friends" or "yay status quo".
If you think you have a simpler explanation of why over 50% of the replies are point-missing or explicitly talk about how they don't want to engage with the hypotheticals, with reference only to the replier's surface-level feelings rather than marshaling object-level arguments for why it'd be inappropriate to use hypotheticals, then I'm all ears. But saying "people just make mistakes" is not an answer when the mistakes are all correlated in this fashion.
>when you see them go into contortions to avoid intuition pumps
Funny enough it was Scott himself in his What We Owe The Future review that broached the idea you probably should just hit da bricks and stop playing the philosophy game! He wanted to avoid the intuition pumps because they're bad. When you *know* someone is rigging the game in ways that aren't beneficial to you, you are not obligated to go along with the rigging.
Ever-more-contrived thought experiments are not about truth-seeking, either.
>whatever thought process prevents them from thinking they could be at all immoral.
I’m confused by the use of ethical thought experiments designed to hone our moral intuitions, but which rely on increasingly fantastical scenarios and ethical epicycles upon epicycles. Mid-way through I was wondering if you were going to say “gotcha! this was all a way of showing that the drowning-child mode of talking about ethics is getting a bit out of hand.” Aren’t there more realistic examples we could be using? Or is the unreality part of the point?
Like with scientific experiments, you try to get down to testing just one variable in thought experiments. The realism isn't the point, just like when a scientist who is studying the effects of some chemical on mice ensures that they each get perfectly identical and unchanging diets over the course of the experiment. The scientist isn't worried about whether it is realistic that their diets would be so static because that's not what's being tested right now.
You can build back to realistic scenarios after you've gotten answers to some of the core questions. But reality is usually messy and involves lots of variables at once, so unless you have done the work to answer some of those more basic questions, you're going to get stuck in the muck, unsure what factors are really at play. Same as if the scientist just splashed unmeasured amounts of the chemical onto random field mice in a local park.
The problem is, the drowning child thought experiment, in its *original* form, is already the most free of confounders, as it is much simpler than the scenarios Scott proposed here. So the equivalent of your mouse science example would be: I give my mice a certain drug, and the mice are held under the most supremely controlled circumstances, such as a fixed diet. But the drug did not have any effect. So now instead I let my mice roam free in the garden and feed them leftovers from the employee canteen, and then I give them the drug again and see if it works now.
The original Drowning Child thought experiment is "you'd save a child if you saw it drowning, wouldn't you?" and the majority of people will go "of course I would".
*Then* it sandbags you with "okay, so now you have agreed to save *all* the drowning children forever" and people not unreasonably go "hold on, that's not what I agreed to!"
And then the proposers go "oh how greedy and selfish and immoral those people are, not like wonderful me who optimises for truth seeking and morality".
No, it asks you _why_ you feel so sure you have to save the one drowning child, but you never even think about the others. The point is to make you realize that _is_ what you (implicitly) agree with; that it's _your_ judgement that thinks you're greedy and selfish for not saving children.
Some people actually respond in the desired way to the thought experiment; they can't think of any compelling answer to the question "what's the difference?"
Other people propose answers like: "the difference is spatial proximity", and so Scott counter-proposes other thought experiments to try and isolate that variable and discovers that it actually doesn't seem very explanatory.
The point of these iterated versions is to isolate different variables that have been proposed to see if they actually work to answer the question; and if we can discover an answer, figure out what it suggests about our actual moral obligations vis a vis funding AIDS reduction in Africa or whatever.
But Scott *isn't* isolating any variables, nor is he trying to. He's just constantly changing all the variables on a whim, including "variables" that aren't actually variable to begin with (e.g. laws of physics). Continuing the analogy from before, what Scott is doing here is like if one of the scientists were to notice that the mice seem to be becoming unhealthy, and another scientist proposes that it might be because their diets don't contain enough protein. Then the first scientist says, "okay, let's test for that. We'll send the mice to an alternate universe where the speed of light is 2 m/s slower than it is in our world, genetically modify them to have pink fur with purple polka dots, give them all tiny ear piercings, and start adding protein to their diets -- if your theory is correct, this should resolve their health issues."
I guess I disagree? People claimed that the clear difference between drowning kids and malarial kids in Kenya is distance, so Scott lists some (not even all that unrealistic) examples where you're physically distant to see if the intuition holds?
After rejecting physical distance he tries to think of some other factors: Copenhagen-style "entanglement", the repeated nature of the malaria situation as opposed to the (usually) one-off nature of the drowning child. He decides that these are indeed the operative intuitions, and then challenges them, finding all versions unsatisfying of using these as a complete basis for moral action, before laying out his preferred resolution.
I agree the examples come fast and thick, and sometimes it feels like we're nested a few levels deep, but I think he's exactly looking at the variables "physical distance", "declining marginal value of moral action", "entanglement with the situation" , and trying to isolate them individually and then in various combinations/interpretations.
Actually, what's going on here is that we observed some effect X in the original experiment (the drowning child). Then someone claimed "yes, but that effect only occurs when their living space is a small cage. In more naturally-sized living spaces, the effect X would vanish. The chemical isn't sufficient on its own." And so we go on to run the test again, but now the scientist builds a (still contained and controlled in other ways) testing environment where the mice live in burrows made of dirt instead of cages.
It's trying to apply rules of logic to something that is squishy and not made of logic.
Really, there is no reason to believe that our moral intuitions are coherent. They probably aren't. Thought experiments are fun and useful for trying to explore the edges and reasons of our intuitions, but they have their limits. This article may have gracefully (or not gracefully, depending on your perspective) bumped up against them.
You could have a framework where you expect yourself, and hopefully others, to donate a portion of their time and/or money to helping others (call it the old 10 percent tithe, although admittedly everyone has their own number). If you already expect yourself to do this, then adding on saving a drowning kid once hardly costs you more in the big picture, and is the right thing to do since you're uniquely positioned to do it. If it's really important to you, you can just take it out of your mental tithe ledger and skip one life-unit of donation that month (although you probably won't, because it's in the noise anyway). But if you're by the drowning river and this is happening so often it's significantly cutting into your tithe, it's perfectly reasonable to start actually taking your lifeguard duties out of your mental tithe, and to start wondering if this is the most effective way for your tithe to save lives. And if not, then we all reasonably conclude you're fine (even better off) not doing it.
"... doesn't seem to be a simple one-to-one correspondence where you’re the only person who can help: [sociopathic jerk thought experiment]"
I'm not sure if this tells us too much about the effect of other people in real-world moral dilemmata; one might bite the bullet and say "sure, /in that case/, where you know you're the only one who can help, you should; but in any real situation, there will be 1000 other people of whom you know little—any one of whom /could/ help."
That is, if we're considering whether there is some sort of dilution of moral responsibility, I don't think the S.J.C. example really captures the salient considerations/intuitions.
-------------
I disagree with the other commenters about the utility of these thought-experiments in general, though.
They're /supposed/ to be extreme, so as to isolate the effect of x or y factor upon moral judgments—the only other options are to (a) waste all your time arguing small details & becoming confused (or, perhaps, just becoming frustrated by arguing with someone who's become confused) by the interplay of the thousand messy complications in real-world scenarios, or (b) throw up your hands & say "there's no way to systematize it, man, it's just... like... ineffable!"
If there is some issue with one of the thought experiments, such that it does not apply / isn't quite analogous / *is* isomorphic in structure but *isn't* analyzed correctly / etc., it ought to be possible to point it out. (Compare: "Yo, Einstein, man, these Gedankenexperimente are too extreme to ever be useful in reality! Speed of LIGHT? Let's think about more PRACTICAL stuff!")
I can't help but feel that some reactions of the "these are too whacky, bro" sort must come from a sense of frustration at the objector's inability to articulate why the (argument from the) scenario isn't convincing.
I'm sympathetic, though, because I think that sometimes one /can correctly/ dismiss such a scenario—intuiting that there's something wrong—without necessarily being able to put it to one's interlocutor in a convincing way.
Still—no reason to throw the bathwater out with the baby. It's still warm enough to soak in for a while!
In the Christian tradition, Jesus explains precisely what decides someone's eternal fate in Matthew 25 -- suffice it to say, it really is just meeting the material and social needs of the worst off people. No requirement you're religious in any way, and Jesus does mention that it'll lead to a lot of surprise both from disappointed "devout" people and confused but delighted skeptics.
Obviously there are other traditions and legends, but presuming Heaven is a Judeo-Christian term of art for a specific kind of eternal fate, it seemed relevant.
I'm not sure what it would mean to believe the gospel, but like absolutely refuse to care for a neighbor as though they were yourself. It is a gibberish idea.
Yeah that’s what James says in James 2:18. Contrast Ephesians 2:6-10. Seems like a contradiction! But it’s not. Paul explains in some detail in Romans 3.
The "actually existing Christian tradition" would say that the morally relevant aspect of action is the act of the will, not the change in external circumstances brought about. This is why the charity of the poor woman who gave her last coin was of greater merit than those of the rich.
Obviously one cannot harden one's heart to the poor and still be righteous. What I am saying is that external impact is in some cases disconnected from moral goodness; thus, the rich man who gives 1000 dollars has not done a moral act 100 times better than the poor man who gives 10 dollars.
> What if it is primarily the cultivation of a certain purity or nobility of soul?
Interesting theory. How much does that cost to do, at a median level of success? In terms of man-hours, or purchasing power parity, or wattage for sacred hot springs, or whatever standard fits best.
Would soul-cultivation be directly incompatible with funding antimalarial bed nets, or is there room for Pareto improvements in someone hedging between the possibilities? "Tithe 10%, and also do these weekly exercises, from which you'll personally benefit" isn't an obviously harsher standard to live up to than tithing by itself.
> After all, if the soul is immortal, its quality is infinitely more valuable than any material and temporal vicissitudes.
Giving up on explaining the nobility of soul formulation. However, I will say that immortality of the soul is not shaped like the linked image; the amount of suffering in Hell or Purgatory or the amount of joy in Heaven is far greater than anything terrestrial.
I would argue that saving drowning children is actually a very-high-utility action, because you can call the child's parents to pick the child up and they'll be super grateful, and even if they don't pay money, you'll accrue social and reputational benefits. Tacking on "...oh, but your water-damaged suit!" is misleading, because even with a water-damaged suit, saving the child is still obviously net-positive-utility.
(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)
The Drowning Child story takes a situation in which saving the drowning child is obviously high-utility, and conflates it with a situation in which saving the person-with-a-disease is obviously negative-utility.
I don't have a moral about whether you should give lots of money to charity. I just think the drowning child story is misleading us, because it says "...you chose to save the drowning child, so for consistency you should make the same moral decision in other equivalent situations" but the situations are not actually equivalent.
I would argue that it's mostly false that society gives you kudos for saving drowning children. Society gives you very little. The *child's parents* are the people who are rewarding you.
In steps the entrepreneurial nonprofit Singer Fineries, the world's first designer of high-end suits, gowns, and other formalwear that are all 100% waterproof. For the go-getting drowning-child-saver on the go! Ethically sourced materials made by fair trade certified artisans vetted by GiveWell, all proceeds donated to effective charities, carbon-neutral, etc.
Even better, the SF corporation will provide training in practical needlework, tailoring, and seamstressing for every saved child and hold a position open for them to work on the waterproof clothing. Sweatshops, you say? No, not at all! Ethical pro-child independence, we say! Earn your own living, live your own life, free of the neglectful parents who let you tumble into the lake and left it up to a stranger to save you!
"(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)"
I like the cut of your jib, young lion, but I think the EAs and those inspired by Singer would be appalled. You're not supposed to *benefit* from this, you are supposed to engage in it via scrupulosity-evoked guilt! You should be paring yourself down to the bone to save drowning children every spare minute! You surely should *not* be making a nice little living as a professional lifeguard! 😁
I have to say, if you must live beside a river full of dumb brats whose inattentive parents can't be bothered to keep them from drowning themselves, you may as well make a go of it how you can. Venmo those negligent caretakers for every cent you can, and don't forget to shame them on social media if they don't cough up!
"But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable."
What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation", or that there is a diminishing marginal utility to saving drowning kids!
If this is happening I genuinely think that someone living in this cabin needs to realize their life has been turned upside down by fate and that their new main life goal has to be "saving the 24 lives that are ending per day" by whatever means possible. Calling every newspaper to make sure people are aware of the 24 daily drowning kids in the river. Begging any person you see to trade with you in saving 12 kids a day so you can sleep. Make other people "touch the problem." Whatever the solution is--if a problem this immediate, substantial, and solvable appears and no one else is doing anything about it, you have to do what you can to get every kid saved.
I took it as "personally stop whatever else you may be doing to physically save the kids, despite the effect on your own life, sleep deprivation, etc." (until you pass out and drown)
if other means are available, damn right I'm making sure there are lifeguards
"What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!"
Why not? You're one person, there's a kid in the river every hour, it's physically impossible for you to save every kid in 24 hours in perpetuity. You have to eat, sleep, catch your breath after jumping into the river and pulling the last kid out, etc., never mind working at your job.
So most people would agree that yeah, you can't save them all, not on your own. Maybe after saving 37 kids straight you collapse from fatigue and end up in hospital. That means all the rest of the kids drown unless someone takes over from you. Or you work a reasonable rate of "I save one kid every daylight hour for ten hours, then that's it".
If you're discounting the need to have some connection to the harms in order to be responsible for curing them, be it causality or proximity or association, then you're stuck back into the original problem we're trying to escape here. Other than your proximity to the river, there's nothing special about your situation unless or until you've assumed a duty. You are best positioned to intervene if it's just physically jumping in 24 times a day, but we're advanced humans with technology and an economy, so your neighbor a half mile in from the river could just as equally hire a person or pay for a contraption to save the kids as you could. If there is no need for a connection, merely awareness, then why isn't your new main life goal saving the 2800 children under the age of 5 who die every day from dysentery? Because there are other people doing something about it? Not very well, it would seem!
I was amazed that this essay wasn't about, or at least didn't get to, USAID. USAID is a global aid program Trump is essentially closing. As a result, he (and America) are being blamed for killing poor foreigners who will apparently no longer survive due to not receiving the aid. Would it not be our problem at all if we'd never given any aid? Are we really the ones killing people by no longer voluntarily providing aid?
Yes, because if the actual aid is shut down all of a sudden without prior warning, you are exhibiting the vice of akrasia, and not giving the people to whom you now have an obligation time to adjust or plan out their response. Now, the USA does have at least a little obligation towards poorer countries, so when it goes to start fulfilling those obligations again, people will not trust it.
There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
A sufficiently competent and powerful statesman. It would take a great deal of power to be able to pick and choose when dismantling these organizations.
If we make Trump an eternal dictator of the entire planet, all the drowning children will become his personal property, and then he will have an incentive to save them. Perhaps he will order Elon Musk to build a fleet of self-driving boats to rescue those kids.
> There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
Yes, this is something that frankly appalled me about Musk's handling of the situation. So far as I can tell, the percentage of USAID funding that was actually being spent on DEI or wokeness-related programs was small, and it's not like Musk couldn't have afforded to hire auditors with basic reading comprehension to go in and surgically remove the objectionable items. He chose to go in with a hatchet on day one for the sake of cheap political theatre.
I don't think $47,000 for a Transgender Opera in Colombia is a wise use of taxpayer funding, but every item on that list combined amounts to less than half a billion dollars (barely more than 1% of the budget), and USAID was spending 40 billion a year.
There are even items on that list that I'm not sure should have been axed. Is anyone going to die because Afghanistan lacks condoms? Not directly, but there might be some risky abortions avoided, not to mention that Afghan TFR is well-above replacement and could place demographic pressure on limited agricultural resources, possibly triggering war or famine. I don't have a high opinion of Palestinian society, but unless the plan is to liquidate the region's population then constructing a pier to facilitate imports of essential food items isn't an automatically terrible idea.
Here are some (at least perceived) serious problems with USAID, and the rationale for the rapid action:
1) Lots of the funding was not going directly overseas, but into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
2) In many cases, USAID funding directly sponsored and supported the political ambitions and patrons of one US party, not both. This made it perceived as not merely non-neutral but actively harmful to the opposition party.
3) Because the first 100 days of a lame-duck US President's term are widely perceived to be much more effective and important than the remainder, it was necessary to move very quickly to shut it down: both to actually succeed (it is already tied up in the courts), and to see the impact on individual recipients, using the impulse response of the system to better understand the fraud and patronage that might be involved.
Fixing e.g. PEPFAR after the fact is not ideal, but letting the perfect be the enemy of the good is also not ideal.
Because a presidential term where all branches are controlled by one party is incredibly rare and hard to predict, and certainly that term will not be controlled by 'you' (whoever you is), and might not lead to the same desired outcome.
For example, right now, the house of representatives is balanced on a knife edge of control where any absences render it evenly split or controlled by the opposition.
If the control of multiple branches was so important, then why try to invest the all-important 100 days in shutting down the programs by executive order without involving Congress? That could have been done in the first term just fine.
1. An elected officeholder or group continuing in office during the period between failure to win an election and the inauguration of a successor.
2. An officeholder who has chosen not to run for reelection or is ineligible for reelection.
Wow. Today I learned about definition #2. God, do I hate stupid extra definitions of terms that ruin the first, good definition of those terms (see also: literally)
This is interesting, as I have always understood lame-duck-ness to be definition 2, not 1. I would have reversed their order based on my own experience.
I could maybe buy the limited-time-window argument, but people in Scott's comment section were saying it would only have taken a few interns a couple of weeks to read through all the axed NSF grant proposals, so... even under time pressure, I think Musk could likely have done better.
> Lots of the funding was not going directly overseas, but into Washington Beltway NGOs in the US. Yes, presumably much ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
If you're paying the staff who run a charity NGO, and they talk to their patrons and vote for the party who funds them, then... yes, you will be 'enriching politically-connected individuals of the opposition party', almost by definition. I don't know a solution to this problem other than the GOP being less negligent when it comes to institution-building.
At least on paper, less than 10% of US foreign aid is/was allocated to 'human rights and democracy', or anything that could plausibly be interpreted as 'NGO color revolution money'.
The sexual revolution debate aside, I don't think any and all birth control is wrong, so... gonna have to differ on the condoms.
The problem is trying to disentangle the good parts from the bad parts, since any attempt to question it is met with the "people will die!" defence and asking the civil servants "so what did you do last week?" is seemingly intolerable interference.
Nothing wrong with gutting fat or rot. Some servants, however, really do say "I stopped HIV babies from dying," and it is competent statesmanship to be able to distinguish between the two, or at least to undo the problem when there is just cause.
I would think it is worse to take action to foreseeably cause death, as opposed to neglecting to take action to foreseeably prevent death. (If this weren't the case, the answer to the trolley problem would be obvious)
I do admire that you continue to advocate for some version of "EA values" in these increasingly screw-you-got-mine times, even if it's largely academic to me as a Poor with limited resources wrt the scale of drowning children. Not having any realistic path towards ameliorating that state of affairs means it's even more important to Be Excellent To Others in whatever small capacities present themselves, I think. Everyone can do the mundane things somebody has to and nobody else will, if one cares to notice them, Copenhagen be damned. (While acknowledging that yes, there's real value in blissful ignorance! Premature burnout from the daunting scale of worldwide lifeguard duties is worse than at least helping the local drownees and ignoring the ones in the next city over.)
The real problem comes with coordinating others to act similarly, so the burden is collective but light, versus an endless uphill battle for a few heroic souls. That always feels missing from such philosophical musings - the kind of people susceptible to Singerian arguments aren't the ones who most needed convincing. Classic memes like religion sort of work for instilling the charitable drive, but come with a whole host of other "entanglements" that aren't all desirable.
I think a core objection to giving lots of money to charity might be skepticism that the people being saved actually exist.
Like... the Effective Altruism page about malaria bednets has this long string of numbers they multiply together, to figure out how many dollars it takes to save a life. And that's legit cool. Of course, when you multiply a string of numbers like that, you tend to get huge error bars because all the uncertainties multiply. But they're smart people, I assume, and they're trying really hard, so I'm sure they're not trying to be deceptive. I have to respect that they've done as much as they have.
But... I'm in an environment where people will say anything to get me to give them money, and I guess I've gotten used to expecting evidence that people's claims are real? And I know that, if I buy a bunch of bednets to prevent malaria, no evidence will ever be provided that any malaria was prevented. At best they'll have some statistics and some best-guess counterfactuals.
And -- I mean, I'm sure the bednets people are good people. I've never met any of them personally, but they're working on a charity that does really good things, so they must be really good people with the best of intentions. But it sort of feels like they don't really have an incentive structure that aligns with communicating honestly.
I dunno. The internet in general isn't a high-trust place. I guess probably the people in the charity part of the internet are especially honest and trustworthy, so rationally I'd probably have to concede that the charity really is saving lives. But I don't feel it.
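To make the "all the uncertainties multiply" worry concrete, here is a minimal sketch in Python, using invented numbers rather than GiveWell's actual inputs, of how the error bars on a cost-per-life estimate widen when several uncertain factors are multiplied together:

```python
import random

# All factor values below are invented for illustration; they are NOT
# GiveWell's real inputs. Each factor gets ~30% log-normal noise.
def sample_cost_per_life() -> float:
    cost_per_net = 2.0 * random.lognormvariate(0, 0.3)              # dollars
    nets_per_person_covered = 0.5 * random.lognormvariate(0, 0.3)
    deaths_averted_per_1000 = 0.2 * random.lognormvariate(0, 0.3)   # per 1000 covered
    return cost_per_net * nets_per_person_covered * 1000 / deaths_averted_per_1000

samples = sorted(sample_cost_per_life() for _ in range(100_000))
print(f"median cost per life: ${samples[50_000]:,.0f}")
print(f"90% interval:         ${samples[5_000]:,.0f} to ${samples[95_000]:,.0f}")
```

Even three modestly uncertain factors stretch the 90% interval across roughly a factor of five, and a real model with a dozen factors is wider still. Note, though, that wide error bars are an argument about the precision of the headline number, not by themselves evidence of deception.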
>I'm in an environment where people will say anything to get me to give them money<
So... got a lot of it laying around then, eh?
Hey, unrelated but FYI, I've been meaning to tell you ever since we last saw each other at that university or high-school wherein we were real good friends: if you give me some money, I'll write you into my next book as a badass superhero. Also, I may be on the verge of solving world peace and stuff, if only I had the funds... ah, the tragedy of it all—to have the solution in my hands, yet be stymied by a mere want of filthy, filthy lucre–
"I'm in an environment where people will say anything to get me to give them money"
Begging emails from charities. Gave one donation to a specific cause one time, got hailshowers of "donate donate please give donate we need money for this special thing donate donate" begging emails until I directed them all to spam.
That sort of nagging turns me away more than anything; I don't have a zillion dollars to donate to all the good causes, and I'm going to give to what I judge most in need/most effective. I am not an ATM you can hit up every time you want a donation for anything and everything. And of course they used the tearjerking heartstrings tugging route: here's little Conchita or whomever who is the human equivalent of a one-legged blind puppy, don't you feel sorry for her? Here's Anita who is the homeless mother of twenty in a war zone who has to pick up grains from bird droppings to feed her earless quadruplets, don't you feel sorry for her?
No, in fact, because you've so desensitised me with all the begging and all the hard cases, I have no problems shrugging and going "not my circus, not my monkeys".
There's a thought experiment, where someone runs up to you and says: "Give me a hundred dollars right now or else TEN TRILLION BILLION GAZILLION people will die horribly!"
And the thought experiment says: "Okay, that's a crazy claim, but as a Bayesian you have to assign some probability to the chance that it's true. And then you have to multiply that by the disutility of ten trillion billion gazillion people dying horribly, and check if it's worse than giving a hundred dollars to an obvious scammer. And what if the number was even more than that?"
But in practice people don't do this, we just say "no thanks, I don't believe you" and walk away. I'm not sure what rule we're applying here, but it seems to work pretty well.
And when I think about buying anti-malaria bednets, I feel like that same sort of rule is getting applied.
GiveWell is mostly advertising that you donate to charities that are not them. So it really seems like your thought experiment is in the opposite direction: someone tells you to give to an unrelated third party and you're trying to come up with reasons why the third party isn't really a third party.
The easy out for this is that, because the claim is physically impossible, the actual expected utility is always 0. Probability doesn't have to asymptotically approach 0; it can just be 0.
Our knowledge of what's physically possible is probabilistic, though, so this out doesn't really work. I think a more realistic out is that even though we don't have the cognitive resources to correctly estimate a probability at 1/3^^^3 or something by reasoning explicitly, conservation of evidence implies that most general statements about what's happening to about 3^^^3 entities are going to have a probability of about ~1/3^^^3 or lower. So, failing a straightforward logical argument for why the probability is much larger in this case, if you have any risk-aversion at all (and probably even if you don't) you should ignore such possibilities.
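As a toy sketch of that rule (the constant and the 1/N scaling are assumptions for illustration, not anything canonical): if your prior for claims about N entities falls off like ~1/N, the expected stakes stay bounded no matter how large a number the mugger names.

```python
def expected_lives_at_stake(claimed_lives: float) -> float:
    # Assumed prior: P(a claim about N lives is true) ~ k / N,
    # following the conservation-of-evidence argument above.
    k = 1e-6  # arbitrary small constant; its exact value doesn't matter
    prior = min(1.0, k / claimed_lives)
    return prior * claimed_lives

# Naming a bigger number doesn't help: expected stakes are capped at k.
for n in (1e3, 1e12, 1e100):
    print(f"claimed {n:.0e} lives -> expected {expected_lives_at_stake(n):.1e}")
```

Under such a prior, handing over the $100 never beats walking away, which matches the "no thanks, I don't believe you" behavior people actually exhibit.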
I don't think this is the core objection, it's more often an excuse. If everyone trusted the EA people's figures, most people still wouldn't donate anywhere near as much as EA people say they should.
GiveDirectly has a web-page with a real-time-updated list of testimonials from people who receive money and saying what they did with it, so I don't think this is the main blocker.
After thinking it over somewhat, sadly I think I have to admit that this *was* an excuse.
I recant the above statement. I do think that statistics are easy to lie with, or easy to get confused and report overly optimistic numbers despite the best of intentions. But I don't think it was my core objection.
Proximity-based moral obligations work because the further away something is, the less power you have to actually affect things, and therefore the less responsible you are for them. You may say "give to effective charities," but how do I know that those charities are actually effective and are not lying to me, or covering up horrific side effects, or ignorant of their side effects? Therefore, it would seem that I have more of an obligation to give to charities whose effects I can easily check up on in my day-to-day life*.
By this principle, the person in the NYC portal has an obligation, since he can actually see and actually help. If the guy screws up following your instructions, the situation is not worse than before. If you come up with a highly implausible scenario where his screwup can cause massive damage, then it becomes more morally complicated.
Same for the robot operator, since he is in control of the robot and knows what it is doing, assuming he knows it won't risk the patient's life. If you were a non-surgeon robot operator who came across the robot in the middle of an operation (the surgeon took an urgent bathroom break?) it would be immoral for you to help, since you wouldn't know what effect messing with the surgery would have.
In the same way, if I am simply told that going into a pond and pressing a button would save a drowning child halfway across the world, well, I have no way to verify that now do I? It could blow up a dam for all I know.
For the drowning child question, you always have a moral obligation if it occurs, but you don't necessarily have an obligation to put yourself into situations where moral obligations occur. Going out of your way to avoid touching things, however, is the sin of denying/attempting to subvert God's providence; see the Book of Jonah.
So my Copenhagen answer is as follows: if a morally tough situation happens to occur to you, it is God's providence, and He wants you to do it.
>God notices there is one extra spot in Heaven, and plans to give it to either you or your neighbor. He knows that you jump in the water and save one child per day, and your neighbor (if he were in the same situation) would save zero. He also knows that you are willing to pay 80% of the cost of a lifeguard, and your neighbor (who makes exactly the same amount of money as you) would pay 0%
The neighbor judged himself when he called you a monster for not doing enough to save people, didn't he? He was also touched by the situation when he involved himself in it by commenting and refusing the counteroffer. It also seems fairly proximate to him, enough for him to be auto-involved, and he is cognizant of this and in denial. Problem solved.
I recognize where you are going with this, and my point is not that you are a monster for not doing enough, but that your donations can have side effects which you cannot detect and cannot evaluate to adjust for in time, or they can end up not doing anything. Sure you can export it to other EAs to verify, but how can you trust them to be honest or competent? The crypto fiasco is a good example here.
>Alice vs Bob
God is Omnipotent and Omnibenevolent, he can have infinite spots in heaven and design life by his providence so that both Alice and Bob can have the appropriate moral tests which they can legitimately succeed or fail at. Bob would likely have a nicer spot in heaven though assuming he succeeded, because he had more opportunity for virtuous merit.
*Note that this argument is not an argument against giving to charity, only against giving to 'untrusted' charities, which I classify EAs as, because they seem to be focused on minmaxing a single issue as if life were designed like a video game, without considering side effects they can't see, and are prone to falling for things that smell suspiciously like the St. Petersburg Paradox.
My logic leads me to conclude that it is optimal to use your money to help the homeless near you since you have the most knowledge and power-capacity towards it, which I have been half-heartedly doing but should put more effort into.
I've helped homeless people sometimes but more often than not I haven't. Homeless people sometimes have simple problems that you can help with (e.g. need a coat) but often it would require an expert to actually help them out as much as a malaria net would help someone in Africa.
This is true if the distribution of problems is the same near and far, but if you live in a rich country and are thinking of donating to a faraway poor country, that's probably not true: the people near you with _real_ problems are people with medical conditions that require expert knowledge to solve, or mental problems that we may not know how to help, and so forth. Meanwhile, the people in poor countries may have problems like vitamin A deficiency, which is easily solved by giving them vitamin A, or endemic malaria, which is relatively easily solved by giving them bednets.
Even with the distance, I'm pretty confident it's much easier for me to get a hundred vitamin A capsules to Kenya than to cure whatever it is that affects the homeless guy who stays in the shelter a few blocks away from me.
Indeed, the whole point of charity evaluators like GiveWell is to quantify how easily a dollar of yours will translate into meaningful effects on the lives of others.
You lost me when you brought Jonah into your argument. IIRC, God's brief to Jonah was that he specifically was to go to Nineveh and preach against the evil there. After trying to avoid the task, Jonah finally slogged his way to Nineveh and preached what God had told him to preach. But he failed God because he didn't preach about His mercy as well. Yet nowhere in the story do I remember God telling Jonah to preach about His mercy.
How can we know God's will? God didn't tell Jonah that there was an additional item on his SoW. The only takeaway I get from Jonah is that if I rescue a drowning child, I need to preach about God's mercy as I pull the kid out of the water. In the trolley scenario, God's will may be that the five people tied to the track die and the bystander lives. But His providence put us at the controls of a trolley car, and He left us the choice between killing five people tied to the track or a single bystander. We don't know what God's optimal solution is.
You misunderstood my point. God gave Jonah a job, he tried to evade it entirely, and that was clearly the sin, which indicates that trying to cleverly dodge moral responsibility by removing proximity is bad.
Jonah's behavior in chapter 4 is not relevant to the point.
>How can we know God's will
Study moral theology and you can guesstimate what the correct action in a given situation is.
>preach about God's mercy as I pull the kid out of the water
As others pointed out, you can recruit the kid to help you pull more kids out of the water.
So, what does Christian moral theology indicate we should do if we find ourselves in a trolley problem scenario? Bear in mind that this is also the type of ethical koan that has troubled Talmudic scholars. For instance, Rabbi Avrohom Karelitz asked whether it is ethical to deflect a projectile from a larger crowd toward a smaller one. Karelitz maintained that Jewish law does not allow actively causing harm to others, even to save a greater number of people. He argued that human beings cannot know God's intent, so they do not have the authority to make calculations about whose life is more valuable. Thus, according to Karelitz, it would be neither ethically nor halachically permissible to deflect a projectile from a larger crowd toward a smaller one, because doing so would constitute an act of direct harm.
As for me, I'd say the heck with Karelitz; I'd deflect the projectile toward the fewest number of victims. I don't know what I'd do if my child were in the group receiving the deflection, though. But I'd probably make my decision by reflex, without considering the downstream ramifications. Ethical problems do not lend themselves to logical analysis, because human existence is greater than logical formulas. Sure, we could all be Vulcans and obey the logical constructs of our culture, but the minute we encountered a Gödelian paradox, we'd be helpless.
You are free to flip the lever, but not push the fat man, since the trolley running over a single person is a side effect, while pushing the fat man is directly evil.
The trolley running over a single person is a side effect of moving the trolley, the fat man dying is a side effect of moving the fat man. There isn't really a sharp line here.
It's not a side effect though. You are actively choosing to push the fat man, i.e., it is your active will that the fat man be pushed, and the trolley is stopped by means of pushing the fat man.
I will point out that I think a more nuanced framing of Rawlsian ethics is inter-temporal Rawlsian ethics where we both don’t know **where** we will be born or **when** we will be born.
Instead of the argument of keeping taxes on the rich low so they don’t defect, those in the future will want as much growth in the past as possible to maximize the total wealth of the world and the total number of medical breakthroughs available to them.
There is now a balance between being fair and equitable at a single spacetime slice and accommodating people farther in the future, who want more growth and investment in previous time slices to better benefit them.
I think this makes the tradeoffs we often confront in redistribution vs investment more salient and makes the correct policy more difficult to easily figure out.
(Sorry if this was mentioned in another comment, I looked at about half.)
I think intertemporal Rawlsian ethics is a wonderful idea, but it's *really* sensitive to your discount function and to the error bars on the probabilities of stable growth and maintenance of a functioning civilization, isn't it?
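To put a rough number on that sensitivity, here is a toy calculation (discount rates chosen arbitrarily for illustration) of the weight exponential discounting assigns to a person born 500 years from now:

```python
# Weight assigned to someone born t years in the future under
# exponential discounting at annual rate r (illustrative values only).
def weight(t: int, r: float) -> float:
    return (1.0 - r) ** t

for r in (0.001, 0.01):  # 0.1% vs 1% per year
    print(f"r = {r:.1%}: weight at t = 500 is {weight(500, r):.2e}")
```

Moving the annual rate from 0.1% to 1% shrinks the weight of people five centuries out by roughly two orders of magnitude, so the intertemporal calculation is dominated by a parameter that is very hard to pin down.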
> First, she could lobby the megacity to redirect the dam; this would cause the drowning children to go somewhere else - they would be equally dead, but it’s not your problem.
By the standards of morality in the thought experiment this is the correct solution. The prevailing standards in this hypothetical world allow for magical child flushing rivers accepted without significant protest or mitigation. Objectively, you are not doing anything wrong.
Morality is not an abstract thing to be discovered. While basic survivorship bias means that societies whose sense of morality produces a Child River O'Death are unlikely to be tremendously advanced, and while we can say that certain moral codes are more effective than others at promoting human flourishing, you cannot use thought experiments to find the rules, because there are none. It's all a blur of human messiness.
If I got to choose, I would rather give 1000 people a sandwich or something instead of torturing 1000 people by cutting off their fingers.
Sure, you can argue "you cannot use thought experiments to find the rules because there are none".
Yes, it's "all a blur of human messiness".
Would you rather give 1000 people a sandwich or torture 1000 people? If you prefer one, well, you might even have a reason, let's get to the bottom of it. I'll call it "morality". And if this hypothetical seems to have a preference, we can probably assume other hypotheticals do, like the ones Scott is using.
If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
> If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
Yes, I fully understand that you're gesturing at the normal human tendency toward being pro-social in the case of Sandwich v. Torture. But the issue with these "thought experiments", which consist of an implausible setup followed by a binary choice between options designed to be as far apart as possible, is that the human tendency you're referencing does not operate according to rules of logic, and any answer I can give provides no information on how morality or decision-making works. Or: if I'm in a position to torture 1,000 people by cutting off their fingers, I need more information before I can tell you my choice, because my actual choices - the thing we are trying to model and/or predict - depend on those variables.
Crafting a hypothetical to try to prove that someone's objection to a previous hypothetical - or worse, to a concrete course of action, which comes with all of the contingent details that do matter a great deal - wasn't their real objection is useless, because it requires inventing situations that are outside the distribution of the thing being studied.
If someone came to you with an idealized method of calculating an object's trajectory and you point out that it is unlikely to be correct because it doesn't take gravity into account, them producing a thought experiment where the object is in a perfect vacuum without the influence of gravity does not mean that gravity isn't your real objection to their method.
A new reign of terror is needed. This comment section sucks. I think I saw someone advocating the defunding of PEPFAR on grounds of "the kid's mother shouldn't have been wrong about sexual hygiene" or something.
More productively, I disagree with the veil-of-ignorance approach. Just be the kind of person that the future you hope for would admire (or at least not condemn). Much simpler and more emotionally compelling, and I think it leads to better behavior.
I think this points at something important, but the intuition is sharper if you also stipulate the future knowing your context and thoughts and being much wiser and much, much kinder. Some people believe this means it is "good to believe" in a religion, but I think that is sort of silly and arrogant. Of course there are many people who have enough empathy to know your thoughts and there are very moral people.
People utterly refusing to engage really indicates the change in audience, from people who find this kind of discussion interesting on its own merits (SA is clearly doing this to probe the limits of moral thinking as an intellectual exercise) to people who view this kind of moral discussion as a personal attack on them. Discussing morality like this feels like a core part of the rationalist movement, and refusing to do so is not a good sign.
To flip this a little: I think it's maybe good that Scott is spreading EA ideas outside their natural constituency. In the spirit of, "if you never miss a flight you're spending too much time in airports", I propose "if you're not getting bad faith pushback and refusal to engage, you're not doing enough to spread pro-social ideas".
While I think the commenting population has gotten worse since the Substack move, I also think the drowning child is a terrible thought experiment, and more complicated versions are not so much enlightening as they are a mild form of torture, like that episode of The Good Place where the gang explores a hundred variations on the trolley problem.
Discussing morality is interesting. *This particular branch* is exhausted and everyone is entrenched in the degree to which they admire or despise Singer's Mugging. The juice has been squeezed.
I am rabidly opposed to the rapid abolition of USAID.
But I am, in fact, quite struck by how appalling the continuation of the AIDS crisis in Southern Africa is and how little we are willing to condemn the sexual behavior that appears to be the driving factor in this crisis.
Babies may be blameless, but it is legitimately fucked up that a very-easy-to-prevent disease has such a high prevalence. AIDS is not malaria. The prevalence does not appear to have been reduced by PEPFAR over multiple decades.
Failing to engage with the thought experiment is a failure to examine your own moral system, and a failure to contribute anything useful to the discussion. All of these comments (of which there are way too many) that say something like "it's too abstract", "it's too weird", "what if I change the premise of the thought experiment so I don't have to choose any bad option", or "ignore the thought experiment because it's dumb" are missing the whole point.
If your answer to the trolley problem is "this wouldn't happen to me, why would I think about it" then you're failing to find what your moral system prioritizes. If your answer to a would-you-rather have snakes for arms or snakes for legs is "neither to be honest" you're being annoying. If your answer to "what superpower would you have" is "the superpower to create superpowers" you're not being clever, you're avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say "well um technically the rules state *any* superpower" then change the rules yourself so you can't choose the thing that's the most boring, obviously unintended, easily-avoided-if-the-question-is-just-phrased-a-different-way option. Choose! Pull the lever to kill 1 person instead of 5 or not! What are you so afraid of? Learning about yourself?
Scott says this in the article:
"Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
And I think it's worth a whole post by itself why people are so reluctant to choose. Anybody unwilling to take these steps to try to figure out what they genuinely prioritize is *actively avoiding* setting up a framework for their own priorities, moral or otherwise. It's not just that these people are dodging an uncomfortable choice; they're also refusing to engage with the process of decision-making itself. I cannot imagine setting up any reasonable moral system if I didn't do something so simple as *imagine decisions I don't have to make right now, but could have to make*. If I don't do that, I'm basically letting whatever random emotions or vibes I feel in the moment, when I really really have to choose, BE my moral system. Why would people do that to themselves? Something something defense mechanisms? Something something instinctually keeping their options open when there's no pressing need to choose?
I don't know. I would choose snakes for legs though.
This is not my experience. People in the comments are talking about how it's "far beyond the actual utility of moral thought experiments", "How is bringing up all of these absurd hypotheticals supposed to help your interests?", "never encountered a hypothetical that wasn't profoundly unrealistic". This is a post about hypotheticals. If they don't engage with it, instead dismissing the use of hypotheticals altogether, well, refer to my main post.
Many other comments dismiss the hypotheticals with "but, like, we're all atoms and stuff, man, so like, what means anything?" I have a hard time believing these people wouldn't care if their loved ones were tortured. If they say they would care, great, they've just made a decision in a hypothetical, hopefully they're willing to make decisions in other hypotheticals.
>Religion and culture already control their actions enough to make civilized society possible. Nothing more is needed.
Nothing more is needed? There are things that I think are bad that exist in the world (malaria, starvation, poverty, torture) that I would prefer that there is less of. If I can make it so there's less of this stuff, then I'd like to. To do that, it seems I first have to decide what this bad stuff is, and to quantify how bad bad stuff is compared to each other (paperclip vs cutting off fingers with a knife). That's morality, and it can help guide decisions, too!
In 2009 on LessWrong, no less! I love it, I missed this one. Yeah I guess you can’t force all readers to read this before each article dealing with hypotheticals.
Maybe a disclaimer like “if you’re about to dismiss the use of hypotheticals, visit this lesswrong post” at the top? But I imagine the comment section would then also have people arguing against this lesswrong post, which seems kinda dumb. Also, do you really want homework on every post? “Read the entire sequences before this post to understand this best”. Ehhhhhhh
Maybe I’m going about this all wrong and I should just be ignoring all the comments that don’t engage with the hypothetical, because *I’m* not discussing the hypothetical either, I’m countering people who aren’t discussing it. So I’ve made the comment section EVEN LESS about the actual post. I don’t know, ignoring a huge chunk of comments who just don’t get the post feels weird though.
I agree that people who do this are annoying. Though too many thought experiments are also annoying. In my experience the reason that the type of person who refuses to answer a hypothetical does that is because they interpret it as a "gotcha" type question that is being asked by the asker for the purpose of pinning them down and then lording it over them by explaining why they're wrong or inferior in some manner. I don't think that's always, or even often, the intention of the asker, but that is how the reluctant askee tends to view it.
Yeah, this. Scott has a good track record of not using antagonistic thought experiments, but elsewhere online, that's not the case. It makes sense some commenters would apply a general purpose cached response to not indulge it.
Agreed. I think moral reasoning of this sort is worthless at convincing others and the methods of analytic moral philosophy in general are not good. But failing to engage with hypotheticals (including explaining why they're irrelevant or undermotivated) is like a guy with a big club saying "Me no understand why engage in 'abstract' reasoning, I just hit you with club."
I have 15,000 USD in credit card debt, near-term plans for marriage and children, plus a genetic predisposition towards cancer that may be passed on to my children. To what degree is my money my own?
If I always knew that I would have obligations to my family, and I could never fully predict my capacity to meet those obligations, then how should I think about the money that I gave to effective altruism when I was younger?
I think Scott is correct, to a first approximation, and that there is virtue in buying bed nets, but no obligation. I also agree with the comment about bed nets being a rare example where we can be confident that we're doing a good thing, despite how distant the problem is from our everyday lives.
Even so, I think the rhetoric around effective altruism is sometimes a bit... I don't know, maybe tone deaf or something? Because lots of people aren't great with money, and when you ask them to tithe 10% they're going to think of all the times they couldn't afford to help a loved one, and they're going to extrapolate that into the future, and they might decide that virtue is poor compensation for an increased risk of struggling to feed their future children, or whatever.
And, yeah, it's too easy to use this as an excuse for being lazy and not demonstrating virtue. And people who aren't good with money could sometimes be better, with a bit more effort. I really do think that Scott is mostly right, here. But it also feels like he's missing something important.
If it helps, I think you're accidental collateral damage. He's mostly talking to the type of person who says they wouldn't donate to charity even if they had the means, and brags about this fact. I think insofar as EA is concerned, you should put on your own parachute first. There's no point in ruining your life when someone else can do similarly without ruin.
In theory if we lived in a world where everyone was already donating a lot, yeah maybe that would be a concern (but probably not since societal effort has nonlinear effects). But we're very far from that world, and I think it's wrong to think we are.
In my tradition, when it comes to questions of the heart, even one penny from a struggling widow is more than all the billions the hyper-rich donate. There are important questions about how to help people in need, but that is the landscape, and we are travelers on it. Your heart isn't defined by magnitude of impact. Its beauty is captured in that wordless prayer that others might be better off than you.
Doesn’t this produce a paradox though? If I believe that as a median American I’m expected to donate $32,000 per year to reduce myself to the global median of $8,000 why would I bother working at all?
You could of course conclude that not only is every $ I fail to donate a theft from the global poor but that every hour I fail to work is an equivalent theft. Except, even as an EA sympathetic person that feels ridiculous.
I’m not sure there’s a clean solution to this whole paradox, but I’m also not sure the above model works well.
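To make the incentive problem concrete, here is a minimal sketch, taking the commenter's figures as given (roughly $40,000 of median American income, an $8,000 global median) and assuming a strict "donate everything above the global median" rule; both the figures and the rule are the hypothetical under discussion, not anyone's actual prescription:

```python
# Sketch of the incentive problem under a strict "level down to the global
# median" rule. Figures are the commenter's (assumed), not authoritative.
GLOBAL_MEDIAN = 8_000  # dollars/year, assumed global median income

def take_home(earned: int) -> int:
    """Income kept after donating everything above the global median."""
    return min(earned, GLOBAL_MEDIAN)

for earned in (8_000, 40_000, 100_000):
    print(f"earn ${earned:>7,} -> keep ${take_home(earned):,}")
# earn $  8,000 -> keep $8,000
# earn $ 40,000 -> keep $8,000
# earn $100,000 -> keep $8,000
```

Marginal retention above the median is zero, i.e. the rule acts as a 100% tax on additional work, which is exactly why "why would I bother working at all?" bites.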
I already knew that. And I think you missed the point of my question, which is "To what degree does my money belong to me, as opposed to my family? And how will I justify my altruism to my family if I find myself unable to pay their medical bills in the future?"
Your philosophy would be more convincing if I could reasonably expect strangers to altruistically help me if I find myself in need, such that the selflessness isn't so unilateral. But Scott already pointed out that I can do no such thing, and at best I can pretend. But by pretending, I would be gambling with the lives of the people I love.
I know it's possible to be okay with that. I might even agree that it's noble. But they say that the strong do what they can while the weak suffer what they must, and there's as much truth in that as there is in effective altruism. The world isn't just. And I have neither the obligation nor the inclination to be a saint.
- It costs much more money to actually save a life through charity ($4000+) than to save a drowning child in these thought experiments.
- A natural morality IMO is that helping people is never obligatory; it's supererogatory: nice but not obligatory. The only obligation is not to hurt others. Saving drowning children, and saving lives through charity, are equally good, but neither is obligatory. (Going further, of course there's the nihilist view that the concept of morality makes no sense; also various other moral non-cognitivist views, like that moral intuition is much like a taste, and not something that can be objectively correct or incorrect, so there's no reason to expect it to be consistent.)
Or they can abandon the intuition that saving the drowning child is obligatory; or abandon the meta-intuition that analogous situations ought to be treated analogously, and instead rely on their intuition in each case separately. Or, of course, abandon the intuition that charity is not obligatory, as the EAs would like them to. If we find a contradiction between people's different moral intuitions, that doesn't tell us which one should be abandoned.
Why would you ever abandon that intuition? It seems I would rather take that as axiomatic, and then work backwards from it.
I don't feel a pressing need to resolve metaethics wrt charity. And ultimately all of this discussion can easily be discounted as so much sophistry, but dear god let me not get to a point where I'm ever thinking that saving a drowning child is not obligatory, lest it undermine my courage to act.
I've never encountered a hypothetical that wasn't profoundly unrealistic. You have put Bob in a world where there is no local government. No police force to call and no local volunteer fire department. There's no local Red Cross to lobby for a solution to the Death River. Delete these real aspects of the real world and there will be an abundance of problems that are too big for one guy in one cabin next to one river to solve.
Also. If Bob is so isolated in his cabin, where are the 35 kids floating down the river coming from, all of them still alive? You also omitted the impact of their grieving parents who would be lobbying and suing local government for its failure to take action.
This hypothetical is as unrealistic as speculation about the sex lives of worms on Pluto.
I'd describe it as farcical but directionally relevant to some elements of reality. There are indeed many people we can help, and to the one who suffers, it hardly matters if they're in our backyard or not, so long as either way we don't help them. And to the person in the cabin, it hardly matters if people suffer and die nearby or far, so long as they've resolved to ignore them. There is no governance covering both people, yes. That part is accurate to real life, for international aid at least. But they are indeed sharing a world, with capacity to save or be saved.
Realistic? Name a city or county in the United States that would not react to the fact that 35 children drowned every single day in a river within their boundaries.
Rare? Name one point in the history where such an event has taken place. These events are not rare—they are nonexistent.
A privately hired lifeguard has nothing in common with a publicly funded fire department, which exists in every city and every suburban county in the United States. In the United States, citizens are not expected to deal with large scale problems such as 35 kids floating down the river every single day.
Incorrect, at least on a few levels. Many if not most small municipalities throughout American history have relied on volunteer and citizen-based fire departments.
Likewise, as a society that arguably aspires to maximal freedom in the US Constitution, Americans are very much expected to try to deal with large-scale problems through private mechanisms, and in fact our charities and charitable giving as a percent of our GDP, and in absolute dollar terms, are world-leading by a significant margin.
Also, just to tie this back to a percentage figure: Americans give 1.47% of GDP to charity, roughly double second-place New Zealand, at least per ChatGPT and assuming no hallucinations, and the dollar figures are two orders of magnitude larger.
Charitable donations as a percentage of GDP, according to a 2016 report by CAF:
- United States: 1.44% of GDP, totaling approximately $258.5 billion (Wikipedia)
- New Zealand: 0.79% of GDP, around $1.1 billion
- Canada: 0.77% of GDP, equating to $12.4 billion
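A quick arithmetic check of the "roughly double" and "two orders of magnitude" claims, taking the 2016 figures above at face value (they are the commenter's ChatGPT-sourced numbers, not independently verified):

```python
# Sanity-check the ratios implied by the (unverified) 2016 CAF figures above.
us_pct, nz_pct = 1.44, 0.79          # charitable giving as % of GDP
us_usd, nz_usd = 258.5e9, 1.1e9      # total giving in dollars

print(us_pct / nz_pct)   # ~1.82: "roughly double" checks out
print(us_usd / nz_usd)   # ~235: a bit over two orders of magnitude
```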
First of all, modern volunteer fire departments almost always receive some municipal funds for equipment, buildings, et cetera. By happenstance, I once attended a community fundraiser for a volunteer fire department. That well-attended community fundraiser brought the communal into the equation.
Second of all, charities represent a communal effort to solve communal problems. They are one gigantic step beyond a single man at a cabin expected to deal with an obviously communal problem.
The hypothetical also assumes that the parents of these hundreds of dying children will play no role in trying to achieve a communal solution to this communal problem.
While I don't disagree, you still haven't demonstrated that receiving municipal funds is a better solution. That it tends to evolve in that direction is meaningful, but might actually be an antipattern co-opted by Moloch-style rent-seekers, for example.
This is devil's advocacy - I have no experience in this area, but I do think that the volunteers throughout our history deserve due credit and may have done a great job relative to our current system.
I think volunteerism is great too. However, I distinguish individual efforts from communal volunteerism. Believe it or not, I had a dear friend who managed to reach out his unusually long arm and grab a kid just before he landed in a raging flooded river. This event happened once in his lifetime.
Communal volunteerism realizes that there is a recurring problem in society that could be helped by a self-organized group standing ready to help. The Red Cross, founded in the 19th century, served as a template for many of these organizations.
BTW, the hypothetical guy in the cabin could have built a weir half a mile up the river from his cabin. A weir is a flow-through dam used for catching fish. This one would be designed for catching drowning kids.
I view this from an evolutionary perspective. We are hard wired to react vigorously to events that happen in our presence, such as a lion attacking a child. We have no evolutionary wiring to respond to events outside of our personal experience. It's hard to go against evolutionary hardwiring.
Hypotheticals aren't meant to be realistic, they're meant to isolate one part of a logic chain so you can discuss it without those other factors being in consideration. People have a bad habit in debate of switching between arguments when they're losing. The hypothetical forces you to focus only on one specific part of your reasoning so you can either account for the faults the other person thinks it has, or admit that it's flawed and abandon it. It's a technique for reaching agreement.
A quick (sigh) hypothetical example:
"If you have $500 you must give it to me. I would use $500 to make rent. If I make rent I will be able to use my shower. I will look presentable for an interview. I will then get this high-paying job and I can pay you back. Also you do not need the $500. You have an emergency savings fund already and no immediate expenses."
Most of the time an argument would look like this:
"Yeah, dude I'm saving the money in case some big expense comes up, sorry."
"I need that money way more than you though!"
"It's not fair to ask me to pay your expenses."
"I'm going to get a job and then you won't have to!"
"Are you sure you'll get the job?"
"Yeah! So it's just this one time!"
"What if you don't?"
"I will but even if I don't I still need to make rent and you won't miss the money!"
"That's not the point though."
"Yes it is!"
etc.
See how the money requester is bouncing back and forth between his argument that he should get the money because he's going to get the job and pay it back, and his argument that you have an obligation to give him money because you don't immediately need it? You can isolate one of those arguments with a hypothetical:
"Let's say tomorrow the perfect candidate walks into their office and takes the job before you have a chance to have your interview. This guy's exactly who they're looking for, immaculately qualified, and they hire him on the spot. So you don't get the job. You can't pay me back. Do you *still* think I should give you the money?"
That's unlikely to happen, but now you can talk about *just* the argument that you owe this dude money because you have more, without having him constantly try to jump back to the job thing.
Of course, this assumes good faith and a willingness to actually explore the argument together. In this particular case you'd be better served by just saying "no" and leaving. But in this blog's community, there is significant interest in getting to the bottom of why we hold certain beliefs, and if those beliefs are wrong, changing them.
Scott wants to know the answer to a specific question: "There is an argument that you are only responsible for saving people you naturally encounter in day-to-day life. Is it wrong to structure your life in such a way that you don't naturally encounter people in urgent need? Do you have a duty to save people you choose not to be in proximity to?" He's well aware that someone else might save them, that the situation would likely be resolved without your influence, and that there are other considerations. He's trying to force you to set those considerations aside for the time being so you can focus on establishing your views on that one question in particular.
> But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
Again my moral intuition straightforwardly disagrees with something! It says that not rescuing the kid in the hometown afterward is very excusable. I wonder why, though?
> I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out.
That feels like it resonates with my intuition, except my intuition *also* considers the kid in the hometown to be part of the same chain. Maybe by having done so much thankless positive moral work in the past, you've accumulated a massive credit that diminishes any moral necessity for you to take active steps like that in the future.
I notice if I swap the locations, so that it's going into the woods that results in seeing one drowning child while being in the city results in seeing them every day, this feels different—and it also feels closer to real-world situations that immediately come to mind. Maybe my mental image assumes the city is more densely populated? The more people there are who could help, the less each one is obligated to. Bystander effect is bad only when it doesn't come out to having at least a sufficient number of responders for the tradeoff to work out (though the usual presentation of bystander effect implies that it doesn't, so assuming that's true, applying the counterbias is still morally good). I bet there's something in here about comparing how many of these situations one agent can *reasonably expect to encounter* with how many that agent can handle before it reaches a certain burden threshold, then also dividing by the number of agents available in some way. This seems to extend partially across counterfactuals; being by chance the only person there at the time in the city feels different from being by chance the only person there at the time in the forest.
Or maybe it's the fact that the drowning kids in the forest stretch of the river come *from* the city in the first place that affects it? Aha—that seems to make a larger difference! If I present the image of the protagonist moving from the forest to a *different*, more “normal” city, and *then* failing to rescue a random drowning child, it seems much worse than the original situation, though still not as bad as if the exceptional situation were being presented to the person for the first time, probably due to the credit in the meantime in some way over-discharging their responsibility. But if I assume the second city is structurally and socially indistinguishable from the first one, only with different individuals and with its stream of drowning kids passing by a different cabin that the protagonist never goes to, then it stops being so different again. So it's not due to the entanglement as such.
Maybe if the people in the city are already in an equilibrium where they aren't stopping the outflow of drowning kids, then it's supererogatory to climb too far above the average and compromise the agent's ability to engage in normal resource allocation (including competition) with the other people in the city—if I remove the important business meeting and change it to going out for drinks with a friend, not doing the rescue feels much worse than before, and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar, then the difference disappears again. This feels independent of the intention structure behind the city not saving the stream of drowning kids to me. If the city people are using those resources for something better, the protagonist should probably join in; if the city people are squandering their resources, the protagonist is not obliged to unique levels of self-sacrifice, though it would be morally good to try to convince the city people to do things differently.
Of course, possibly my moral intuition just treats the rescues as more supererogatory than most people's treat them as to begin with, too…
> and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar,
Bring the rescued kid along with you to the bar, hand 'em to the bouncer saying something like "she's your problem now," then tell that semi-estranged friend that if they don't believe your excuse for being an hour late, and covered in kelp, they can ask said bouncer.
Make a rule that the children you rescue as they pass by the cabin have to help you rescue future children who pass by. After rescuing a few kids, you've got a whole team who can rescue later kids without your help.
Scott, and I say this with love, has lost the thread here.
Like the point of the thought experiment was to draw attention to the parallels between potential actions, their costs and their benefit. These examples seem like they are meant to precisely deconstruct those parallels to identify quantum morality and bootstrap a new utilitarianism. It's putting way too much significance on a very particular thought experiment.
But even taken on its face, the answer to the apparent contradiction is obvious, right? Why does the cost of a ruined suit feel worth a kid's life to most people, while donating the same amount of money to save a life via charity is unappealing? It's not that life's value is morally contingent on distance, or future discounting, or causality, or any of that. It's that when you save a drowning kid, you get a HUGE benefit: you are now the guy who saved a drowning kid. The costs might be the same, but the benefits are not remotely equal. I guarantee I get my suit money back in free drinks at a bar, and maybe a GoFundMe, probably before the day is over.
And even if you want to cut the obvious social benefits out of the picture, self-perception matters. Personally saving a kid is tangible and rewarding. Donating money to a charity is always undercut by doubts, such as "is this money actually making a difference?" and "why am I morally obligated to take on a financial burden that has been empirically rejected by the majority of the population?"
Because saving a drowning child is assumed to reveal something about the rescuer's moral character, while bragging about charity is viewed as performative. The former might be dubious, but the latter is usually correct.
Alternatively: because mentioning "one can save a child through charity" is an implicit moral attack on those who have not given to charity, whereas saving a drowning child is not such an attack, because few of us will ever encounter a drowning child (and most people probably think they would save the drowning child if they ever encountered one).
Something that gets missed is that saving a drowning child is "heroic." Why is it heroic? Because even though most people say they would do it, in practice they don't. The hero takes action to gain social status. In the case of drowning children floating by a cabin, there's no heroism, since the person rescuing them consistently is now engaged in a hobby instead of a single act of will.
Also, people do move away to places like Martha's Vineyard for exactly this reason, to avoid the plebs complaining about them.
Interesting but these are all similar to the “all Cretans are liars; I’m a Cretan” self-reference trap (paradox).
Insert one word “predict”, as in “do you predict that you…” and the trap is closed because it clarifies that this is an endless regression paradox at “heart” IMHO.
All future statements are predictions, and it is self-referential. The giveaway is the reference to the “37th child…”
There is no “moral choice” in infinite moral regress as there is no truth value to the statement “this statement is false”
Language is a semantic system which is incomplete under Gödel’s Incompleteness Theorems.
Angelic superintelligences are like Chuck Schumer’s “the Baileys,” a mouthpiece for moral ventriloquism.
We are here on Earth by accident. Nothing happens when you die. We should take personal responsibility for our own moral sense. Share yours if that seems like the right thing to do, and express it in the way that seems right.
There won’t ever be an authority or conclusive argument, because we’re an assembly of talking atoms hallucinating a human experience. That is beautiful and strange. I think helping other sentient beings, and helping them at mass scale, dispassionately and in ways that seem plausibly highly effective, is lovely.
If faced with a drowning child and you are the only person who can help it, you have a 100% obligation to save it. I'll leave open the question of what exactly a 100% obligation means, but it's obviously pretty strong.
If there's a drowning child and you're one of five people (all equally qualified in lifesaving) standing around, you have a 20% obligation to save the child.
If there's a child who's going to die of malaria and you're one of eight billion people on the planet who could save it, then you have a one over eight billion obligation to do so.
If there's millions of children going to die of something, and you're one of billions of people on the planet who can do something about it then you have something on the order of a 0.1% obligation to do something. That's not nothing, but it's a lot weaker than the obligation where there was a 1-1 dying child to capable adult ratio.
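Read literally, the model this comment sketches is just victims divided by capable rescuers, capped at a full obligation; a toy version (my gloss on the comment, not anything the commenter formalized) reproduces all four numbers:

```python
# Toy "diluted obligation" model: each capable rescuer's share of the duty
# is victims per rescuer, capped at 1.0 (a full, 100% obligation).
def obligation(victims: int, rescuers: int) -> float:
    return min(1.0, victims / rescuers)

print(obligation(1, 1))                      # lone passerby: 1.0 (100%)
print(obligation(1, 5))                      # five bystanders: 0.2 (20%)
print(obligation(1, 8_000_000_000))          # one child, whole planet: ~1.25e-10
print(obligation(5_000_000, 5_000_000_000))  # millions of victims: 0.001 (0.1%)
```

The reply below probes exactly the weak spot of this formulation: whether `rescuers` should count everyone physically capable or only those actually willing, and whether the shares must sum to one.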
If there are 5 people, and for some reason the other 4 are jerks who refuse to help drowning children, is your obligation now 100% because your choice is guaranteed to decide the child's fate?
If 3 of them are jerks, is your obligation 50%? And can you make it 0% by becoming a jerk yourself, so that the remaining non-jerk now has 100% responsibility? Or is obligation not additive in this way, and if not, does that suggest a more complicated calculation is necessary?
Thousands of fish have washed up on the beach. A monk is walking along the beach throwing fish into the ocean. A villager laughs at him — “You'll never save all the fish!”.
The monk answers as he throws another fish, “No, but this fish really appreciates it.”
Same reason police catch speeding drivers. When one asks 'Why me? Everyone else is speeding also!' the response is 'when I go fishing, I don't expect to catch EVERY fish!'.
The Alice and Bob thought experiment feels rather strongly off to me. Yes, certainly a person who fails to do wrong due to lack of opportunity might be a worse person than another who actually does wrong. That seems to be a fine way to summarize moral luck, and we would expect eternal judgment to control for moral luck. So far so good. You then conclude that, therefore, moral luck is fake and the worse person must have actually done worse things.
I'm confused by the absence of positives to moral obligations. If someone fulfilled a moral obligation, I love and trust them more, my estimation of something like their honor or their heroism goes up. If someone who was not obliged to does the exact same thing, I "only" like them more, I don't get the same sense that they're worthy of my trust.
It's trite, but I think "moral challenges" is closer to how I feel about these dreadful scenarios. I want to be someone who handles his challenges well, to be heroic; this seems to me more primal and real than framing actions in these dreadful scenarios as attempts to avoid blame, in a way that I don't think reducing everything to a single dimension of heroic versus blameworthy can quite capture.
I largely agree -- I soften it down to invitations for stuff like this, because when it comes to helping strangers, it's not quite a challenge, as people avoid the question at no cost to themselves. But there is an invitation: care for the least among you. Some people see it as a beautiful thing to go to, and some do not. I largely chalk the latter up to their unbelievably poor taste hahaha
One of the things that disturbs me is: good intentions are often counterproductive. You mention Africa, and that is a whole basket of great examples.
Feed Africa! Save starving children! Great intentions, only: among other deleterious effects, free food drives local farmers out of business, which leads to more starving children, which leads to more well-meant (but counterproductive) aid.
Medical aid! Reduce infant mortality! Only, without cultural change and reduced birth rates, the population explodes, driving warfare, resulting in millions of deaths.
Far too much aid focuses on the short-term benefits, without considering the long-term consequences. Why? Because, cynically, a lot of aid work is really about making those "helping" feel benevolent, rather than actually making long-term, sustainable improvements.
In practice, reducing infant mortality leads rather directly to decreases in overall fertility; parents who can count on their children surviving to grow up don't have to have "extra" children to make sure that enough of them survive to become adults.
So give to the aid which has good long-term effects. As you are probably aware, most of the effort of the "effective altruist" movement is directed at figuring out which interventions are in fact helpful overall and which not. Follow them.
III. Alice and Bob: If Bob saves more sick people, he'll get exploited by needy "friends" into not-being-rich.
Whatever moral theory you use, it needs to be sustainable; any donation is a door open for unscrupulous agents to cause or simulate suffering in order to extract aid from you.
Not that Bob should do less - while it certainly would be coherent, it doesn't sound very moral - but I think the optimal point between saving no one and saving everyone is heavily skewed downward from maximal efficiency of lives saved per dollar because of this, even when you're altruistic.
For Alice, that applies too, though in a different way: while there might be fewer expectations from scammers for her to spend everything if she spends a little, there will still be such expectations in her own mind; this is common enough that many posts on EAF warn against burning out.
In the classic example you are the only passerby when the child is drowning, so you are the only one who can save them, and so according to the Copenhagen Interpretation of Ethics it's your duty to do so.
If we change the situation and on the river bank you find the child's parents and their uncles, lifeguards, firefighters, the local council and the mayor himself (they all have expensive manors overlooking the river) what duty do you have to save the child? According to the Copenhagen Interpretation of Ethics it's their duty to do so because they are first by the river but it's also their legal responsibility to care for the child. In the order of duty of saving the child you are at the end of the list.
Given the blatant purpose for which the drowning child thought experiment was created in the first place I propose the White Saviour Corollary to the Copenhagen Interpretation of Ethics:
"The Copenhagen Interpretation of Ethics says that if you're a westerner when you observe or interact with a Third World problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more."
I hate this just like I hate trolley problem questions. They all have the same stupid property that you’re asked to exercise moral intuitions in a situation so impossibly contrived that the assumptions you must accept unquestioningly could only hold in a world controlled by an evil experimenter who is perfectly capable of saving all the people he wants you to make choices between. The obvious timeless meta-strategy is to refuse to cooperate with all such experiments even hypothetically.
The SHORT VERSION of ALL these controversies is “many people suffer in poverty and medical risk because they live in bad societies but it is always and only the responsibility of better off people in good societies to help them within the system by donating as much as they can bear without ever challenging the system itself”.
In this example, you are not supposed to question the assumption that no one would save 24 children's lives a day at the trivial cost of $50 per life saved by hiring $50/hr lifeguards to work shifts, that somehow no collective action is possible either to raise money for such a spectacularly good cost/benefit-ratio charity or to get the relevant public maintenance department a slight budget increase to fix the drowning hazard, and that only isolated random unlucky individuals have any power to rescue these children and must do so without ever making the public aware of their absurd plight and trying to fix things that way.
If you want to point to some current country with a shitty government which blocks or steals efforts to ameliorate its citizens’ suffering, don’t give me a f***ing trolley problem to show me I must donate in clever ways to get past the government, save your efforts of persuasion for the bad government, or for good governments elsewhere or the UN to intervene.
Yes. As I said in my comment, a lot of "aid" is there to make the contributors feel benevolent.
There is simply no point to providing aid to cultures and countries where it cannot have positive, long-term effects, and likely just supports the existing, malicious system.
Yes, this is a totally fine argument, but has already conceded that if that were not the case, you would have an obligation to provide aid!
And now we can argue whether the aid in question actually has the deleterious properties you assert.
Or you can say, not so fast! I think the aid is useless, but even if it weren't, I'd still have no obligation to provide it!, and then we can focus on that claim by isolating it from practical considerations, in the hopes that if we can resolve this narrow claim, we can return to discussing the actual effectiveness.
That's the point of the thought experiments, to resolve whether your objection is actually 1. aid is ineffective or 2. even if aid could be effective I'd have no reason to give, or both, or some third thing.
But by isolating which claim we're debating, we can stay focused and not jump around between 1 and 2 depending on how you feel the argument is going.
If your objection is truly 1., and that's why you find these hypotheticals useless, then great! But you better be prepared to do battle on the grounds of "is this aid effective" and not retreat to, "why am I giving aid to Africa when my neighbour..." as many others do.
Again, the point of hypotheticals is not to be realistic. It's to remove factors from consideration to laser focus on a single question. Generally in argument, it's easy to hop around in your logic chain or among the multiple reasons you believe something. This means you'll never change how you think because if you start "losing" on one point you don't concede "okay that one is bad, I will no longer use it" but instead hop to other considerations. These "contrived" situations are meant to keep you from hopping from "is it right to kill a person to save others?" to "Okay I think I can get out of this choice completely by doing X,Y,Z." Whether X, Y, and Z turn out to be flawed or not, you still never had to answer that question, which means that you still don't have clarity on your beliefs and under what circumstances you would change them.
Of course it seems like most people manage it even within the hypothetical, so opposed they are to systematically thinking through their beliefs one point at a time.
Certainty must be a factor here. Both the certainty of help being needed (not from general knowledge of world-poverty, but direct perceptional evidence) and the certainty of the help reaching its intended target make the responsibility more real and vice versa.
I think you're taking people's rationalizations too seriously as reasoning. The transparently correct moral reasoning is much more likely to be rooted in relationships -- you ought to love and care for drowning kids, so if you had the right relationship with them, you simply *would* save every one you can. Which means, yes, in the modern world, living with minimal expenses and donating literally everything you earn to EA charities is a solid choice, one of the things you'd just naturally choose to do.
I believe role ethics (as seen in Stoicism - the concept is even more central in Confucianism, but I am less well read on that philosophy) offers a good descriptive account(*), more so than Copenhagen and declining marginal utility. The idea is that a person has moral obligations depending on what roles they play in society. Some of those roles we choose (like a vocation, which may impose obligations, as when a ship's captain in an emergency must see to the rescue of all passengers and crew even at the risk of going down with the ship, or parenthood, with equally strong obligations to our children), some of them are thrust upon us by circumstances (like the duty to rescue in the capacity of a fellow citizen in a position to do so), and some come down to us as part of being a human being living in the cosmopolis (helping those in need even if they are far off).
Now, there are and can be situations where virtues and obligations pull us in different directions and resolving the conflict can be a nontrivial task (indeed, the Stoics deny the existence of a sage - someone with perfect moral knowledge), but as a practical matter it is not unreasonable to establish a rough hierarchy where your obligations towards fellow citizens outweigh those towards other members of the cosmopolis, which is why you save the drowning child. That however doesn't mean those other obligations disappear: Alice, after doing her duty as a daughter, a neighbor, a member of her work community, perhaps as a mother, etc., would in fact have these obligations. The Stoic perspective isn't prescriptive in the sense of saying outright "10% of her net income", but chances are a virtuous person in a position of such prosperity would likely be moved to act (but not necessarily: personal luxury clearly wouldn't be something a virtuous person would choose, but she might e.g. feel she is in a unique position to advance democracy domestically, and use her good fortune to that cause instead).
* And I dare say prescriptive, insofar as they help make sense of our moral intuitions, which I'm inclined to treat as foundational.
This seems like a sane take. It accounts both for our intuition that Alice really ought to do her duty to her daughter, neighbor, and work community before engaging in telescopic charity, and for the intuition that we really ought to help drowning children sometimes, even when they are very far away. It also accounts for the case where Alice, living in the cabin, is called away from saving children by a more moderate need of her own child--the baby's got colic, and needs tending--and we don't find Alice to be totally reprehensible.
The question I always have, though, about role ethics or Confucian-derived ideas of li is how I--as an unattached, single person in this increasingly atomized cosmopolis in which we all live--am to work out what my roles or li are. There also seems to be some tension with my intuition that I ought to be pretty free: I believe in divorce, in allowing minors to be emancipated, in--less extremely--moving away from one's hometown community, breaking up with friends, business partners, employers. Those freedoms seem a little bit in tension with an ethics derived from my roles.
The mechanism by which new social roles are constructed is being pushed far beyond its historical peak throughput rate, and facing corresponding clogs and breakdowns.
The argument for the Copenhagen interpretation would be that instead of optimizing for getting into heaven, you should optimize for being an empathetic person.
The person who sees dozens of drowning children every day and doesn't save them becomes desensitized to drowning people and loses their capacity for empathy.
The person who lives far away from the drowning people doesn't.
That's unfair moral luck, but that's the truth.
I will always remember when I saw a rabbi I knew giving a dollar to a beggar at a wedding. I asked the rabbi why he did that. Doesn't he already give 10 percent of his money to charity?
He said yes, and indeed the single dollar wouldn't help the beggar that much. On the other hand, giving nothing would train him to become un-empathetic. He quoted a biblical verse saying something like "when someone needs money you should be unable to refuse" (לא תוכל להתעלם, roughly "you may not ignore it")...not sure the exact context.
Of course he still gave the 10 percent as well. He didn't think you could completely remove moral obligations by not touching them. Just that the slight additional obligation you have towards situations you touch, versus those you don't, relates to training your own empathy.
Seems like you'd train empathy even more effectively the more you help. The 10% figure makes little sense next to "if you see a person in need you should be unable to refuse." Isn't donating, and then having a great abundance left and deciding not to continue helping, a form of refusal?
It is a form of refusal, but psychologically it doesn't feel as saliently like a form of refusal. So in terms of training your own psychology it doesn't have the same effect.
>what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it?
Nobody has found one as of yet, and not for lack of trying. I'm pretty sure that there isn't a universally appealing one to be found, even if outright psychopaths aren't included in "us".
As far as I can tell, the moral principle that the average person operates on is "do what's expected from you to maintain respectability", with everything beyond that being strictly supererogatory. This is of course a meta-rule, with actual standards greatly differing throughout time and space, but I doubt that you'll do better at this level of generality.
Never thought about that one before, but it feels natural instantly.
1.) It will help to more evenly distribute the "giving effort" among those who can help (in some situation; it does not need to be the same situation).
2.) In real life the odds of success, the utility to the saved one, and the cost to the saving one all carry some uncertainty. Having a degressive motivation to help leads to a more stable equilibrium between "should have helped but did not" and "irrationally exhausts herself and thereby risks the tribe's success".
3.) It limits the personal Darwinian disadvantage against freeriders in one's own tribe (even if all the drowning children are from a different tribe).
We are all in the cabin. The cabin is global. Despite the billions given in foreign aid and charity for decades, there are still people living in poverty and dying of preventable diseases, etc., all over the world. And any money given is likely to go to corrupt heads of state. And regardless, the only countries with a booming population are all in Africa, despite the low quality of life, and we hardly need more Africans surviving.
"billions" is a superficially big number, but a tiny fraction of world GDP and collectively dwarfed by what countries spend on weapons to kill and maim other human beings
The global incidence of extreme poverty is in fact going steadily down, which is what we'd expect to see if those charitable interventions were working.
That is overwhelmingly due to economic growth and tech advances. For example, thanks to Fritz Haber there are billions more people alive than there would otherwise be. Individual charitable aid is so tiny it wouldn't even fix poverty within the nations of those that give it, let alone fix the world.
Of course that doesn't mean we shouldn't all give more. But what is optimal? If we had had that much more focus on charity, we wouldn't have had such focus on the growth that allowed the people who need that charity to exist in the first place.
Tech and charity aren't mutually exclusive causal factors. Pretty sure we didn't get modern insecticide-infused bed nets by banging rocks together, or having some enlightened saint pray for an otherworldly being to restock the warehouse. Norman Borlaug's dwarf wheat was paid for partly by the Rockefeller Foundation and is right up there with the Haber process in terms of ending food scarcity. Lots of EA charities are evaluated in terms of the economic growth they unlock.
The city is an amazing place to live in, with its high tech infrastructure and endless energy, but if you aren't living there, you might not have heard in your childhood that the city is directly powered by drowning children. Every child you rescue from the river, lost to the system, reduces life satisfaction of millions of citizens by about 1%. The children need to drown for the city to be as great as it is.
You're allowed to leave the city after your schoolteacher takes you all on a field trip to the hydro station, but it's only allowed until you turn 18, and if you walk away you might slip and fall into the river.
I have updated towards the following view on the drowning child thought experiment:
1. The current social norms are that you have to save the drowning child but don't have to save children in Africa
2. The current social norms are wrong in the sense that ideal moral ethics disagrees with them, but not in the direction you would think. According to ideal moral ethics, the solution isn't that you have to save children in Africa. It's that you don't have to save the drowning child either.
3. Obviously, I would save a drowning child. But not because I have to according to ideal moral ethics. It's because of a combination of my personal preferences together with the currently existing social norms.
It’s a big problem because society runs on people doing these nice one-offs, but then other people demand that you universalize it, and then they stop doing nice things.
I basically agree with the conclusion (except for the "not letting capitalism collapse" part, of course, it will collapse anyway, as is in its nature, the point is to dismantle it in a way that does not tear society and its ethical foundations apart).
But the way you arrived at it was... wild. I was (figuratively) screaming at my screen halfway through the essay that if your hypothetical scenario for moral obligation involves an immensely potent institution sending dying people specifically your way, the obvious way to deflect moral blame, and honestly the only natural reaction, is to ask why it doesn't simply use but a tiny fraction of its capabilities to save them itself instead.
Basically, there's a crucial difference between a chaotic, one-off event where you're unpredictably finding yourself as the only (or one of the only) person capable of intervening, and a systemic, predictable event where the system is either:
- alien and incomprehensible - e.g. you can save a drowning rabbit, but have no way of averting the way nature works. Here, Copenhagen stands: you have no moral responsibility to act, but once you act, you should accept the follow-up of caring for the rabbit you've taken out of the circle-of-life loop.
- local and familiar - in which case all notions of individual responsibility are just distractions, the only meaningful course of action is pushing for systemic change.
The most orthodox Marxist crisis theory, based on the tendency of the rate of profit to fall, depends on a labor theory of value, which seems squiffy.
A revolution of the workers brought on by an intolerable intensification of their exploitation seems less likely in consumer capitalism, where the workers' leisure and consumption are an important part of the system.
I'm not opposed to the idea, but I guess I don't necessarily believe that the inherent structural contradictions of capital will lead to its collapse inevitably.
I personally have a different idea of why capitalism will collapse, which goes Capitalism > AI > ASI > Dominance by beings capable of central planning.
Interesting! That's something I've thought about as well, but I see that as a relatively hopeful outcome, a possibility, not something that for capitalism is "in its nature."
I suspect definitional and/or scope mismatch here. To clarify - I am (arguably) not a Marxist, or more specifically, not a stereotypical Marxist operating on a grand theory of history where capitalism is a discrete stage of civilizational progress, to be replaced with a more advanced one. I am not saying that people will stop trading or [whatever you think capitalism means]. I am saying that societies based on capitalist principles are bound to experience a collapse - which alone is not saying much, since all societies in history eventually collapse due to more general social dynamics - and, more strongly, that in their specific case capitalism is the very mechanism bringing about this collapse, and as such, not worth trying to preserve.
(In a vastly simplified way, how this plays out is: wealth, and thus power, concentrates in fewer and fewer hands, eventually creating a financier class at the top of society. Because it's measured in money, the more concentrated wealth is, the more decoupled it is from [whatever we want from a productive economy]. This eventually makes various destructive (war) or unproductive (financial instruments) "investments" more profitable than expanding productive capital; this makes the economy stall; this makes the ruling class push everyone else into debt to maintain their profit levels; this further immiserates the population and bankrupts the state, causing a collapse; and this makes the financiers pack up their money and escape for greener pastures, while the regular citizens of the once-wealthy society are left cleaning up their mess.)
(It has happened several times in history, and we can observe it happening once again right now, in real time, in the so-called first-world economic block, with the US as the epicenter.)
Hm, interesting! So you don't have a stadial theory of history, but you believe that any society is eventually going to collapse, which in capitalism will come from too concentrated wealth becoming separated from what's really productive in an economy.
You gave one optimistic view of how AI could disrupt this, but couldn't it be possible that AI (-->ASI, as you put it) allows the financier class to keep consolidating forever? If they have something that makes more and more of the stuff they want, automates more and more of their economy: can't we just end up being cut out of the picture, with not much of a mess to clean up in the first place?
I agree with most of that, but I think it's solvable without a collapse. There are two different things financiers are doing, which the current system (and to some extent even the financiers themselves) mostly fails to distinguish: innovation, and extraction. Making the overall pie bigger vs. capturing a bigger piece of a fixed pie for yourself. Building capital vs. buying land to collect rent.
A Georgist land value tax skips straight to the end of that inevitable "extraction concentrates wealth" progression, compiles the resulting power into something publicly accountable. UBI keeps the local pastures green. Financiers who want more profit have to accept the risks of innovation, deliver products that some other successful business and/or the average UBI recipient is willing to pay for.
Georgist LVT in a modern society also needs to tax other sources of rent extraction like the network effects that keep Facebook afloat, and the theory there is not nearly as clear unfortunately.
I live in a third world country with first world money. My wife runs a small animal rescue out of our property and sponsors animal neutering around the city. The country is also very poor and most humans here are in what most westerners would consider a very bad situation. I spend most of my time and money on research to cure aging since I believe aging is the number one cause of pain and suffering in the world and I believe curing it is within reach.
My wife had to almost entirely stop her animal rescue efforts because it got to the point where it was consuming all of her, and much of my, time to the point where it was significantly interfering with our lives. She has friends in the rescue community who have completely ruined their lives over it, living in squalid conditions because their money all goes to feeding an army of dogs and cats. She also used to volunteer to help homeless children, but that similarly was consuming her life.
Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming. You can very easily ruin your own life trying to save everyone one at a time, and from an optimization standpoint there are way bigger bangs for your bucks.
Funny? Story (and what prompted me to comment):
About 10 minutes after I finished reading this article I went out to walk around my property, which I haven't done in about a month or so. I got about 20 steps out when I ran into a terrified abandoned kitten. Since I have over-active mirror neurons I was essentially forced to pick it up and rescue it before returning to look for its mother or any siblings in the area. This is my punishment for leaving my walled garden that blocks out the sound of screaming kittens and starving children.
If I owned the property next to the child river I know exactly what I would do. I would build a wall and soundproof my house so I could ignore the problem, knowing that there are better uses of my time and money than completely ruining my life saving 24 children a day. I would strive to avoid going outside, but when I had to I would almost certainly rescue another child before hurriedly returning to my walled garden.
I don't have a solution to the moral dilemma, only a solution to my mirror neurons that make me do irrational things. I suspect that most humans with functioning mirror neurons are not applying some complicated moral philosophy, they are just responding the way they were evolved to respond when they witness pain and suffering of others. Now that we can witness and impact things further away, these mirror neurons can easily be overwhelmed and cease to function as a good solution on their own.
That’s why big social problems should be left to organizations, not individuals. If a social worker for a non-profit gets overwhelmed, they can quit their job and go back home. No one will think less of them. But if you have to live with it, it becomes more difficult.
Organized social programs often get co-opted by Moloch, doing more harm than good. I don't have a solution to this, but I am unconvinced that organizations should be assumed to inherently do better than free will and the law of large numbers.
Instead their net effect may be to breed cycles of dependency that rob populations of agency and personal responsibility and prevent progress at scale.
Clearly some cultural philosophies do better than others.
You seem to have a weird idea of what Moloch is. Moloch isn't just "everything bad", Moloch is when a Nash equilibrium of independent actors is an ethical or welfare race to the bottom. It's inherently harder to avoid a bad Nash equilibrium the more players there are in the game.
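For what it's worth, here's a minimal sketch of that definition in code (the payoff numbers are entirely invented for illustration): two actors each choose to "restrain" or "exploit"; exploiting strictly dominates, so the only Nash equilibrium is mutual exploitation, even though mutual restraint leaves both better off. That's the race to the bottom, not just generic badness:

```python
# Toy two-player game (invented payoffs): first element = my move, second = the other's move.
payoff = {
    ("restrain", "restrain"): 3,  # mutual restraint: both do well
    ("restrain", "exploit"):  0,  # the sucker's payoff
    ("exploit",  "restrain"): 4,  # exploiting always pays a bit more...
    ("exploit",  "exploit"):  1,  # ...so both land here: worse for everyone
}

def best_response(other_move):
    return max(("restrain", "exploit"), key=lambda my: payoff[(my, other_move)])

# "Exploit" is the best response whatever the other player does,
# so (exploit, exploit) is the unique Nash equilibrium:
print(best_response("restrain"), best_response("exploit"))  # exploit exploit
```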
The original definition of Moloch, per Wikipedia, is:
The fire god of the Ammonites in Canaan, to whom human sacrifices were offered; Molech. Also applied figuratively
This is almost precisely the example I give of a centralized power structure that destroys lives at scale.
I appreciate your definition, but these two things are not the same as far as I understand it, and yes, I use the Wikipedia version of Moloch as one of my mental models.
Thank you, and yes, I read that one a while ago but lost track of the specifics.
For the sake of argument, let's call my 'Moloch' Fred. Given that I have Fred here, does it make the point more worth considering? If we have both Fred and Moloch, my thesis is that my concerns as stated are still valid.
Why do you think that there isn't an equilibrium here?
* People seeking power and money are attracted to running/operating organizations with lots of power/money.
* People wanting to do good aren't as motivated to run large organizations with lots of power/money (it is miserable work).
* Any large social endeavor that has sufficient power or money to enact meaningful change will eventually be dominated by those seeking power and money, rather than those seeking to do good?
* Eventually any large social endeavor will no longer do good.
Note: The above is just a high level hand-wavy illustration, but I am not convinced that we cannot rule out Moloch here.
Large social endeavors do sometimes (often?) end up having all or the majority of their power and money skimmed off by insiders for their own use. However, this doesn't seem to happen 100% of the time. I mean, if principal-agent problems were this bad, corporations wouldn't function at all either and the economy would be reduced to humans acting as individual agents. (And corporations do also fall into ruin by this mechanism.) So I don't think this makes the argument that the optimum amount of non-market interventions is zero and we should just accept that Moloch wins everything always.
Are there examples of powerful/rich charitable organizations not running into this problem over the long term?
I can definitely believe that this can be delayed for quite a while if you have a strong, ethos-aligned leader, but eventually they need to be replaced, and each time replacement happens you may not get lucky with the new pick. This would suggest that while Moloch will eventually win, there is a period of time between now and that inevitability where things can be good/useful. Perhaps there is value in accepting an inevitable fate if there are positive things that come of it along the way? Or perhaps we can try to find ways to shut things down once Moloch shows up?
> Organized social programs often get co-opted by Moloch, doing more harm than good.
That's a common meme, but I don't think it's always, or even often, the case. I've personally worked with an organized social program of massive size, funded by a large network of individual donors, doing amazingly good work over decades.
> Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming.
huh. It's like Siddhartha Gautama's origin story, but in reverse.
(I'm not trying to be sardonic or condescending. It's just an interesting observation about how one man's modus ponens is another man's modus tollens.)
I wasn't aware of that origin story, but you are right it is exactly the opposite of my solution! Perhaps there is some optimal amount of exposure to pain and suffering one needs in order to take appropriate action to address it while not also being debilitated by it?
I have my own strong opinions on the Drowning Child experiment, though I've withheld them, so far. Because about a month ago, I basically said I'd tackle it on my own substack, and then procrastinated since I'm such a lazy layabout. Nonetheless, I'm confident that I've got it figured out, in a way that solves several other ethical questions and adds up to normality. At the highest level, it's tied together by expectation management. Which dovetails with Friston and Bayes Theorem. But it's a lot to explain, and a bit woo.
For now, I'll just say that ethics is basically social engineering. "Actual" engineering disciplines (e.g. Civil Engineering) recognize that reality imposes hard constraints on what you can reasonably accomplish with the resources you have. If you wanna launch yourself to the moon with nothing but a coke bottle and a bag of mentos, you're not gonna make it. Likewise, any ethical system that says "donate literally 100% of your money to charity, such that your own person dies of starvation in a matter of weeks" is not sustainable. It's not sustainable individually, and it's not sustainable en masse. You have to consider how things interact on a local level. Which is yet another reason why Utilitarianism Is Bananas (TM). I.e. part of the appeal of Utilitarianism is the abstracting/universalizing/agglomerating instinct to shove all the particularities of a scenario conveniently under the rug. I.e. spherical cow syndrome [0].
"Let's save the shrimp! And build a dyson sphere, while we're at it!"
How?
"details... details... "
Sure, if you have a utility function that values the well-being of others, then perhaps you want to give a portion of your resources to charity. But you have to balance it with keeping your own system running. Both physically, and psychologically. And the burden you take upon yourself should, at most, equal the amount of stress you can handle. Which varies from person to person. E.g. commenter Dan Megill mentions [1] that the charity-doctors who denied themselves coke/chocolate/internet didn't have the mental fortitude to stay in Africa. To me, what this indicates is that Peter Singer psy-oped them into taking on more suffering than they could personally handle, and they buckled. Materials Science 101: different materials have different stress-strain curves [2]. Materials are not all created equal.
In sum, there's no objectively optimal amount of exposure. It entirely depends on what you can handle, and what you're willing to handle. I.e. it's subjective. I.e. the weight of the cross you bear is between you and your god.
I thought this was mostly about not wanting to have your life ruined / not being exploited? Which are closely related.
If I see a drowning kid, saving it would be inconvenient, but it's not going to ruin my life. And this is partially because there is no Big Seamstress pushing kids into water to ruin people's clothes and send their stock to the moon.
If a Megacity is skimping on lifeguards and creating a situation where I can save those kids (and also somehow there is no other person upstream or downstream willing to help them?), saving all the kids would ruin my life (I can't even sleep properly). And related to that is that the city is saving relatively little money (the cost of 24/7 lifeguards, maybe $2M/year even at SWE salaries) and getting a rather huge benefit. If they value a life at $1M, they get from me 24 * 365 * $1M = $8,760M/year.
If a city spends millions to build a dam but counts on my unpaid labour to extract billions per year, then yeah, they are kind of exploiting me.
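Running that back-of-the-envelope math explicitly (every number here - one drowning child per hour, $1M per statistical life, ~$2M/year for round-the-clock lifeguards - is the comment's assumption, not an established figure):

```python
children_per_day = 24            # one drowning child per hour (assumed)
value_per_life = 1_000_000       # assumed statistical value of a life, in dollars
lifeguard_payroll = 2_000_000    # assumed yearly cost of a paid 24/7 lifeguard crew

yearly_value_of_rescues = children_per_day * 365 * value_per_life
print(f"value of lives saved per year: ${yearly_value_of_rescues:,}")                  # $8,760,000,000
print(f"ratio to just paying lifeguards: {yearly_value_of_rescues // lifeguard_payroll:,}x")  # 4,380x
```

So on these numbers the city captures several thousand times the cost of simply staffing the river, which is what makes it feel like exploitation rather than mere negligence.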
With charities the situation is trickier - if a charity is really saving lives at low cost then it would be great to donate to it (some amount; you probably don't want to ruin your life). But you're donating money, so it's harder to verify that's actually happening. And people have an obvious incentive (getting your money) to misrepresent the situation to you (so you should be more worried about whether your money is actually being used in the way they claim).
And people setting up a generic argument which, if accepted, would oblige you to potentially ruin your life (by giving away all your money) while potentially benefiting them (by directing money in their general direction) is extra suspicious.
I don't want to say that one should never give money to charity. I agree with what I think was the original premise of EA (find out which charities are effective, and exactly how effective, and try to use whatever money you want to spend charitably as effectively as possible). But it's really hard!
I think most of the criticisms of these extreme life/death hypotheticals as teaching tools or thought experiments are valid, but I'll add another one I think is pretty important.
There never seems to be any scope for local or medium-scale collective action. It's always you, alone, with the power of life/death, or else Angels of Heaven making grand population-wide agreements. For example:
What if in the cabin by the river of children scenario, you found three "roommates" to live there with you (presumably all doing laptop work-from-home etc.) and you all did six-hour shifts as lifeguards, saving all the children? And why does it take a "lobbyist" to possibly get Omelas to do something about the drowning children problem? Ever see "Frankenstein"? You could pick up a drowned kid and walk to City Hall with her body, that might get some attention.
And in reality, that is how things usually improve in human society. Some local group takes the initiative to start reducing harms and improving people's lives. Sometimes they grow and found Pennsylvania. Mostly they gain status and attention and can have their work helped by or taken up by governments (assuming any govt is not COMPLETELY corrupt.) Global-level co-ordination only happens in like The Silmarillion -- here on earth see previous about TOTAL corruption.
BR
PS --- The second-most cynical take here would be to get an EPA ruling classifying the stream of children as an illegal "discharge" into a public waterway, getting an injunction against the city for polluting the river with dead kids, which honestly at the rate of one/hr would be some VERY significant contamination indeed, even if you had some horrible mutant species of carrion-eating beavers downstream building their dams out of small human bones.
A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you do with them? This quickly becomes completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown.
"A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you with them? This quickly become completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown."
Even more cynical take: sell 'em to child sex/labour trafficking gangs. The parents obviously don't care, since they all continue to live in a city that allows children to fall into waterways and get swept downriver to drown unless a random stranger saves them. The megacity even more obviously doesn't care about what happens to its minor citizens. The problem is set up such that if you, personally, singlehandedly don't intervene the kids will drown. So clearly nobody is looking for them or dealing with them or trying to prevent the drowning. We don't even know if the bodies are collected for burial or if that too is left to whoever is downstream when the corpses wash ashore.
So who is going to miss one more (or 168 more) 'dead' kids? Profit!
That was my immediate thought, but my comment was already too long. Maybe the Clinton Foundation could send a van a couple times a day to scoop up this free resource. Or establish a Jonestown-style mini-nation someplace and train your infinite stream of children who owe you their lives to be an invincible army. So many possibilities.
Exactly! You now have this never-ending (it would seem) source of free labour literally floating down the river to you. For the megacity, this is only 999th on the list of "we're in really deep doo-doo now", so it could even be argued that you are giving the children a better life (how bad must life in the megacity be, if there are 998 *worse* things than drowning kids on the hour every hour all year round?) no matter where you send them or what you do with them.
Yes, an army of infinitely-replenishing children would be great, kind of like Cadmus and the Dragon's Teeth in Greek mythology. But for maximum Dark Lord chaos points, I think my own army of mutant carrion-eating beavers would slightly edge it out. Add in some weaponized raccoons and it's "Halo 4: The River Strikes Back"
Split the difference: hand the rescued kids over to your cadre of mad scientists (you *do* have a cadre of mad scientists, don't you?) as experimental material to help create the next generation of mutant carrion-eating beavers and weaponized raccoons! After all, the carrion for the beavers has to come from somewhere, right, and where better to ensure a steady supply than the spare parts from the lab experiments?
Train the kids to train the vulture-beavers to construct the weir to simplify the rescue process, then treat the overall situation's rank among that megacity's problems like a scoreboard for your raiders to climb.
>People love trying to find holes in the drowning child thought experiment. [..] So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.
Alternative way of framing this article:
> People love trying to find holes in the drowning child thought experiment (DCTE) counter-arguments. So allow me to present even more contrived scenarios that are not the DCTE, and apply the DCTE counter-arguments to those instead and see how they fail.
My takeaway is that you're nerd-sniping yourself by employing ever more sophisticated arguments to a minimally sophisticated "I'll know it when I see it" approach to life in general and the DCTE in particular that most people have.
My intuition goes towards: accident vs systemic issues.
In IT, we have a saying: "your lack of planning is not my emergency".
Similar vibes here. Why is there a drowning child in front of me? Is it an unfortunate accident, or the predictable and logical consequences of a really poor system? I feel absolutely no responsibility for the second. In this example:
> Every time a child falls into any of the megacity’s streams, lakes, or rivers, they get swept away and flow past your cabin; there’s a new drowning child every hour or so.
Not my problem. Fix. Your. Damn. System. Or don’t — at this point I don’t care.
The point of the hypothetical is that this isn't really a *source* of drowning children, it's just that all the children that would normally drown in that big city end up in one place.
Still not my problem: what if my cabin is situated such that before the children coming down river reach it, they all get swallowed up by a sinkhole? So I don't even have any drowning children to save, but they're still drowning. The city is the source of the drowning children, let them sort out why one child every hour falls into their damn rivers and lakes.
The absence in general of a Duty to Rescue stems from the principle that one shouldn't be obliged to put oneself at risk on behalf of a stranger to whom one has no allegiance or duty of care, and the risk to the rescuer might not even be the same kind of risk as the stranger's predicament (assuming one didn't cause the latter).
Even with the example of the kid drowning in a puddle, who is to say there isn't a bare mains electrical cable under the water that would electrocute a rescuer as soon as they touched the water or the child?
There's also the snow shovelling example, in which if you public-spiritedly clear the snow from the sidewalk adjoining your dwelling (a sort of anticipatory rescue) and a passer by slips on the patch you cleared then they can sue you for creating the potential hazard, which they could not had they slipped on the uncleared snow!
Or you could pull someone from a crashed car that was in imminent danger of catching fire or being rammed by another vehicle, but in the process break their dislocated neck so they end up paralyzed for life, again risking a lawsuit.
I gotta be honest with you fam: all that such posts do is make me steel my heart and resolve to not rescue local drowning children either, in the interest of fairness. One man's modus ponens is another's modus tollens and all that.
What you're trying to do here is to erase the concept of supererogatory duty. It's inherently subjective and unquantifiable so every time you say "well, you don't have to do it, but objectively you should donate exactly 12.7% of your income to African orphans, but you don't have to do it," you're not fooling anyone, you just converted that opportunity for charity to ordinary duty.
So here's an alternative you have not even considered: I decide that I have a duty to rescue my own drowning children, I decide that I have a duty to rescue my neighbors' drowning children for reciprocal reasons (and rescuing any drowning child in sight is merely a heuristic in service of that goal), but rescuing African drowning children is entirely supererogatory, I might do it when and how I feel like it, but it's not my obligation that can be objectively quantified. This solves all your "river of death" problems without any mental gymnastics required.
What I'm getting at is, when someone proposes that from assumptions A, B, and C follows conclusion D, you can agree that it does logically follow, but disagree that D is factually true and instead reject some of the original assumptions.
So when someone proposes that I have a moral duty to save a drowning child in front of me, and that the life of a drowning child in Africa has an equivalent moral worth, I can disagree with their conclusion (that I must be miserable because I donate all my money to malaria nets and that still doesn't make a perceptible dent in African suffering) and declare that no, for my purposes the children are not fungible, and also that I don't have a duty to save the local child. What's going to happen if I don't, will Peter Singer put me into an ethical prison? Even the regular police, if I were to look at them as the source of morality, would leave me alone in most jurisdictions, or in all of them if I tell them that I don't swim very well.
Then someone might ask me, won't I feel terrible watching the child drown? Sure, *that's* why I'll try to save it, but I don't feel particularly terrible about knowing that thousands of children drown in Africa because I *don't see them*, and why would I try to rewire myself about that? Reaching https://en.wikipedia.org/wiki/Reflective_equilibrium goes both ways and nothing about the process suggests that the end result will be maximally altruistic. So I can choose to retain my normal human reactions to suffering that I can see and alleviate, but harden my heart against infinite suffering elsewhere.
Similarly, wouldn't I get ostracized by the people of my town for letting the child drown, because we have an understanding about saving each other's children? Sure, and that's another good reason to save the local child that doesn't generalize to saving African children because Africans won't help me with anything and my fellow townspeople won't be upset about me not helping Africans without expecting reciprocity.
Then Scott says, all right, but watch this, and adds a bunch of different epicycles, which he then invalidates with more convoluted thought experiments and replaces with further epicycles, but I still find the end result unsatisfactory.
The solution proposed here has a fatal flaw: Rawls' Veil of Ignorance doesn't actually exist. I understand that it would be very nice if it existed, it would let us ground utilitarian ethics pretty soundly, but unfortunately it's completely made up.
The solution in the post you linked, to donate 10% of your income to charity, is also kind of incomplete, because it still tries to make a utilitarian argument, but then suddenly forgets all its principles and says that it's OK to donate 10% because most people donate less, so you can just do that and sleep soundly. Why?
What I think is if not outright missing (upon rereading both posts) then at least not properly articulated is the distinction between ordinary duty and supererogatory duty, such as donating to charity. Ordinary duty, and I'm willing to walk back my objection and include saving a local drowning child, you are obligated to fulfill. Anything above and beyond that you can do if you want, but it's not mandatory.
And that's the crucial part that allows you to have arbitrarily whimsical justifications, such as: really I'm just satisfying my desire to make the world better, so donating exactly 10% of my income scratches my itch, poor Africans are welcome. Or you can imagine that there's a God that will reward you with a place in heaven, or that you entered an esoteric compact before your angelic soul incorporated in a body, or whatever satisfies your desire to feel like a nice person without too many troublesome thorny edge cases.
Good post. And I approve of thoughtfully engaging with the substantive details of all ideas. But throughout the post I couldn't help but constantly think "OK, but the main point is that the Copenhagen Interpretation of Ethics has nothing to recommend it as a prescriptive theory. That seems to be the bigger issue."
The person saving the children washing out of the mega city is obviously acting extremely immorally.
"this is only #999 on their list of causes of death"
By saving those children they are neglecting 998 higher priority interventions. For every child saved from drowning they are willfully killing a much higher number of children.
The drowning child saver is a monster by Scott's reckoning.
I am very fond of Scott, but these sorts of thought experiments just feel meaningless to me. This is probably a function of different starting premises. I have been reading and reflecting on a lot of moral philosophy in the last few years, and the place I've (not dogmatically) arrived at is some type of non-realist contractualism, which means questions of 'ethical' behavior are basically meaningless. There are contracts (formal and informal) one submits to when part of a society, and beyond them there are preferences, which people are free to change or not as they wish. Morality is just a strategically useful evolutionary strategy (both natural and cultural) that allows individuals and the groups they belong to to prosper.
Tbh I find such discussions rather tiresome. Moral intuition evolved not to help us make better moral choices, but to improve our chances of reproduction. Thus the inherent moral framework built into every human is "be as selfish as possible as long as it does not reduce your social standing in your tribe of max. 50 people".
So either go and donate most of your money for mosquito nets for African children or admit that you are not trying to maximize morality in your decisions.
I can easily admit that I like to eat fast food even though I know it's not healthy, because it triggers evolved cravings and it's easier than making the right choices. Moral frameworks like the Copenhagen theory are the intellectual equivalent of saying "if you eat with friends, you only have to count the calories that you eat more than everyone else". It's bullshit and you know it. Stop rationalizing poor decisions and own them, if nothing else.
Actually by relocating to the drowning child cabin you are given a wondrous opportunity to be in the top .01% of life-savers historically and you should really be taking advantage of it, unless you are retiring to your study to do irreplaceable work on AI safety or malaria eradication.
Yeah I kept thinking about this. Perhaps the broader world of the hypothetical is extremely strange -- certainly our glimpse is -- but it would be absurd for anyone to be so sure of their work that the cabin isn't an amazing opportunity. Even the few (less than a hundred) people who have saved more lives could not have had this level of certainty in their impact. The real question is, how does the cabin not get bid up in price by people most willing to take the opportunity? Then it should be allocated to someone who would use it to the max and have low opportunity costs, I would think. You only need like ten sane/normal/good people in the entire world to get a fairly good outcome in that situation, assuming the context isn't saturated with even better opportunities.
I think this is the pretty obvious problem with the whole post. It's an appeal to ethical intuitions, but ethical intuitions are formed by experience and interaction with the world as it exists. In a world without gravity, my horror at seeing a child falling off a cliff would be entirely inappropriate. So the extreme hypotheticals don't "isolate the variables," they just trigger the realization that "this world is different."
I strongly suspect that there is no such thing as a complete and internally consistent moral framework. The obsession that EA types have with trying to come up with a set of moral axioms that can be mapped to all situations is pointless.
Moral frameworks are an emergent property of society which make them effectively determined by consensus, weighted by status and proximity. The problem is that the individual judgements that coalesce into a consensus are not derived from some abstract fundamental set of principles isolated from reality, they're determined by countless factors that can't be formalized or predicted.
For instance...
I could walk past a drowning child and suffer reputation damage
A priest could walk past in a deeply religious society and declare that the child is the devil and so deserves to drown.
The child could be drowning in a holy river that is not to be touched so a passerby is praised for their virtue in ignoring the child and respecting the river gods.
An exceptionally charismatic individual could start a cult in ancient Rome around not saving drowning children because Neptune demands sacrifices. This cult outcompetes Christianity and becomes the foundation of all of western civilization. That passerby is not evil, he's just religious and very orthodox.
An even more charismatic individual could convince an entire nation to adopt a set of beliefs within which saving a drowning child is dysgenic because a healthy child would know how to swim.
You can keep going on and on...
It's better to adopt the general consensus of the society within which you exist, or, if you insist on changing the status quo, to play the status game to increase your and your group's influence on the consensus. Trying to come up with a logical framework is not it, because that's not what normal people are basing their judgements on.
ISTM this model is failing to capture all the variables involved. Why on earth /wouldn't/ we be obligated to save the hourly drowning child, forever?
We have a habit of excluding physical and mental health from these calculations. The wet suit and missed lunch don't matter, but dust specks in the eye, forever, with no prospect of an end, add up.
Consider a model where people generate a finite amount of some resource each day. Let's call it "copium" for convenience. Self-maintenance costs some variable amount of the resource over time. This amount varies randomly, in accordance with some distribution. You can approximate some upper and lower bounds on how much copium you're likely to need to get through the day, but you can't know ahead of time. All decisions and actions you perform cost copium. If you incur a copium cost when you have no copium left, you take permanent damage. If you accumulate enough damage, you die.
This brings the difference between the one-off case and the hourly case into focus: the one-off scenario is worth the copium spend, but in the ongoing scenario you predictably die unless you can make it fit your copium budget.
The rule then becomes - help however many people you sustainably can, and no more than that unless you'd also be willing to sacrifice yourself for them in more immediate ways (the answer to "would you be willing to die to save hundreds of children?", for many people, isn't "no"!)
In the moment, though, when forced to actually decide, the difference between whether you act like a Singer or a Sociopathic Jerk is down to the amount of copium you have left for the day.
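A minimal sketch of that model in code, with every number invented purely for illustration: each day you get a fixed copium income, self-maintenance draws a random amount, each rescue costs a fixed chunk, overdrafts become permanent damage, and enough damage kills. The point is only that the one-off expense is survivable while the recurring one predictably isn't:

```python
import random

def days_survived(rescues_per_day, days=365, daily_income=10.0,
                  rescue_cost=3.0, damage_limit=50.0, seed=0):
    """Toy 'copium budget' model: overdrafts become permanent damage; enough damage kills."""
    rng = random.Random(seed)
    damage = 0.0
    for day in range(days):
        budget = daily_income
        budget -= rng.uniform(2.0, 8.0)          # self-maintenance cost varies randomly
        budget -= rescues_per_day * rescue_cost  # every decision/action costs copium
        if budget < 0:
            damage += -budget                    # spending past zero causes permanent damage
        if damage >= damage_limit:
            return day + 1                       # accumulated damage is fatal
    return days

print(days_survived(0))    # no rescues: comfortably survives the year
print(days_survived(1))    # one rescue a day: tight, but usually sustainable
print(days_survived(24))   # a rescue every hour: dead almost immediately
```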
Another part of why the cabin in the woods (and saving lives through charity on the other side of the world) feels different from the other examples is that millions or billions of other people could act to prevent the deaths (even if they don’t).
Whilst if a child is drowning in front of you, only you can stop them dying.
The other element that I would add is reciprocal moral obligations. We all have a different sets of moral obligations to our direct family, extended family, friends, neighbours, town, country, humanity etc.
Whilst it might be great if everyone in the world treated everyone else like family, it would quickly fall apart to defection.
In most nice societies, you have a moral obligation to help someone whose life is in danger if you are one of the few people who can help and it is relatively simple to do so. This is a great thing and has, along with other social ties, taken hundreds (or thousands) of years to create. To prevent moral hazard (and also defection) it doesn’t really apply if someone has repeatedly got themselves into the situation - it is about extraordinary aid when something goes accidentally wrong.
This explains why in the cabin situation I feel morally mixed - the population of the megacity know this is happening and have clearly chosen to let it happen despite it being easily preventable. However I feel bad for the children (they haven’t made that decision) and at the time of their drowning I am the only one who could save them. But it wouldn’t be simple to save all of them.
This also explains why I don’t naturally feel much of a moral obligation to give to effective charities saving lives on the other side of the world. They are not in any of the communities I have varying degrees of moral obligation to (other than humanity as a whole). Furthermore those with much stronger moral obligations to those people are clearly failing them (although this varies a bit by country). There are also many others who could save them.
The big question is whether this notion of reciprocal moral obligations to differing extents to different communities we are part of, that most of us who have been brought up in ‘nice’ circumstances feel, is logically correct? I think Scott would say they are all very well, but that we should fulfill our obligations to them and then focus on how we can do the most good for humanity as a whole, from a broadly utilitarian perspective. Clearly in a direct impact sense this is correct, but thinking through secondary impacts I’m less sure.
Most directly and specifically around charitable donations from wealthier people in western democracies, if people in a country feel like the successful aren’t giving back to them and the country, this undermines support for the capitalist policies that enable the wealth to be generated in the first place.
More broadly I don’t really think you can just ‘fulfill’ your obligations to those other communities. Part of those obligations are that the more you have, the more you give back (e.g. a rich person donates to the school they attended, if you have more free time than your siblings you are expected to help out your ageing grandparents more etc). So choosing to help humanity as a whole is a form of defection (e.g. if rich people decide to switch their philanthropy to donating to charities abroad rather than at home) from these moral obligations in some sense.
By defecting from these ties and norms you are causing damage to the social fabric (or ‘social trust’ in economic terms) that ultimately created that wealth. In most ‘not nice’ countries the only reciprocal moral obligations that are adhered to are those around the extended family. A key part of why rich countries are rich is that they created strong moral responsibilities to wider communities, particularly your town, your country and other institutions within your country. Rather than a government official being obligated to cut their cousin in, in these countries they are morally obligated not to.
Personally, I think this is part of the reason for Trump and the populist swing in recent years. ‘Elites’ increasingly have a morality focused on utilitarianism or helping those most sidelined/discriminated against, whilst ordinary people see it more in terms of these communities, which from their perspective the elites are defecting from. For instance in the past the rich people in a town facing issues might have worked together to sort them out, whilst now they are probably more likely to just leave. Or the kids of the rich and powerful would have had a decent chance of being in the military (the death rate of the British aristocracy in WW1 was incredibly high), so ordinary people were more likely to trust elites on foreign policy decisions.
These norms and obligations only work if everyone feels like everyone else feels them and mostly acts on them (rather than being for ‘suckers’), and messing with something that is such a key part of what makes societies stable, rich and ‘nice’ is very dangerous.
This is a huge and underrated driver of NIMBYism. People are willing to destroy housing affordability and massively reduce total prosperity if it means they are more insulated from drowning children.
It’s really about the number of children drowning. One, yes you can save. Many, you cannot.
Singer - the original author of the thought experiment - argues that impoverishing ourselves to the point where we are close to destitution ourselves, but not quite there, is the only moral solution.
There are multiple drowning children though, not one. I imagine myself on a boat on the sea or lake with the drowning children. I can rescue the children. However, I can myself drown by capsizing my boat if I take on too many.
I am also in danger of capsizing in future even if I take on less than my capacity; it's not clear how many is safe, but it becomes risky to take on anything near the limit, as the boat is rickety and storms occasionally occur. People who have not maintained their boats have drowned.
All around me, though, are much bigger boats towering over my boat; these boats either don't help the drowning children, or take on a number of children which, while admittedly more than me, is nowhere near their carrying capacity - and the large boats are in no danger of sinking in future storms either.
Also on the lake are military who I fund with taxes who are actively drowning the children. Just jumping in and drowning a few every so often. I can’t stop this. It’s for geopolitical reasons.
None of this means you shouldn’t help the drowning children but I wouldn’t worry about relative morality here either. Rescue some, but not to the capacity of the boat, not to put the boat in danger.
I think morality originally started, and still functions for most people, for two things:
a) To pressure friends and strangers around you into helping you and not harming you, and
b) To signal to friends and strangers around you that you're the type of person who'll help and not harm people around you, so that you're worth cultivating as a friend
This has naturally resulted in all sorts of incoherent prescriptions, because to best accomplish those goals, you'll want to say selflessness is an ultimate virtue. But the real goal of moral prescriptions isn't selfless altruism, it's to benefit yourself. And it works out that way because behaviors that aren't beneficial will die out and not spread.
But everything got confused when philosophers, priests, and other big thinkers got involved and took the incoherent moral prescriptions too literally, and tried to resolve all the contradictions in a consistent manner.
There's a reason why you help a drowning kid you pass by, and not a starving African child. It's because you'd want your neighbor to help your kid in such a situation, so you tell everyone saving local drowning kids is a necessity, and it's because you want to signal you're a good person who can be trusted in a coalition. The African kid's parent is likely in no position to ever help your kid, and there's such an endless amount of African kids to help that the cost of pouring your resources into the cause will outweigh any benefits of the good reputation you gain.
Our moral expectations are also based on what we can actually get away with expecting our friends to do. If my child falls into the river, I can expect my friend to save my child, because that's relatively low cost to my friend, high benefit to me. If my child falls into the river 12 times a day, it'll be harder to find a friend who thinks my loyalty is worth diving into the river 12 times a day. If I can't actually get a friend who meets my moral standards, then there's no point in having those moral standards.
Essentially ethics makes sense when centered around a community but we in the west don’t really have communities anymore. Hence the incoherent philosophy.
I've never really seen this version of ethical egoism that's like "it's Moral Mazes all the way down" espoused other than here. Although now that I think of it, Rawlsianism basically assumes that this is what would happen without deliberation behind the Veil of Ignorance, and nobody but maybe Mormons believes the deliberation actually happens. Nonetheless I don't think this is plausible on a human level, even if it probably is from a gene's-eye view, because sympathy and guilt are things. If you suffer for ignoring others' well-being, then others' well-being is at least sometimes more-than-instrumentally important to you.
I subscribe to this as an explanatory theory but not a prescriptive one. Sometimes you have to be better than the soulless, brainless and hopeless forces that made you, because you do have a soul, a brain and a hope. Sometimes you see that you're being puppeted and you think that's the best of all possible worlds.
The most important part of bravery as a virtue isn't that you have ridiculous amounts of it for situations that rarely happen, but that you have enough of it to face the parts of you that are imperfect and acknowledge that you are imperfect, so that fixes and changes can happen at all. And you can't argue someone into being brave. I don't know how else to explain why people flinch away from being better than what they were designed for.
Yes - and even more so. "Morality" is not a rule system, it is a mishmash of loose heuristics that evolved to help us cooperate in small, local groups because cooperating groups outcompete non-cooperating groups.
With this in mind, I think most seemingly paradoxical moral intuitions make sense. It is all about what someone who saw or heard about some or all of what you did or did not do might be able to infer about your motivations (all in the context of a group of 20-30 people with only eyes, ears, and a theory of mind as evaluation tools).
Contorted moral scenarios are engineered to exploit the incoherencies of our moral system heuristics just like optical illusions show the incoherence of our visual system heuristics. These inconsistencies persisted because they were not relevant in our evolutionary past. There were neither Penrose Triangles nor robotic surgeons out on the savanna.
Right, I don't think Scott or others of an EA persuasion would dispute this, or any of the similar statements made above.
The point is that, we don't live in the savannah anymore, but we still live in networks of people that approximate the social structures we evolved with, and technology and culture put us in some kind of proximity to people who are distant from us, yet whom we also can't help but apply our moral instincts to.
Since our intuitions can't help but be incoherent, but we still want to live in a cooperating group (or to put it in the language of the comment you're responding to, we still want to signal to friends and strangers that we should be helped and not harmed), we have to build something coherent enough to achieve these aims, built out of our evolved moral intuitions.
That's necessarily gonna mean making tradeoffs between different moral intuitions, hence the convoluted thought experiments to figure out what exactly our moral intuitions are, and how we trade them off against each other.
From a prescriptivist standpoint, there won't come a time when it will *not* be more moral to save the next drowning baby-sutured-to-a-famous-violinist floating from the magical post-industrial bubble city filled with burning fertility clinics and infinite trolley switches or whatever the shit. The person who donates 11% of his wealth to mosquito nets is better than the person who donates 10%.
But I'm sorry, I can't do it. I'm flawed. I don't live for others as much as I could. I'm too attached to comfort. I (roughly) tithe but I could give more if I didn't pay for the Internet connection that I'm using to post this. I could be volunteering instead of posting.
Perhaps someday I'll grow in selflessness and I'll get to the point where I love radically, for the whole world. I think that's the call of Christianity in a fallen world. I just hope that until I get there, my sins of omission aren't considered too great.
You raise a good point: what if, in order to save the drowning child, they have to be plugged in to your circulatory system for the next nine months (falling into this river automatically gives them kidney disease as well as risk of drowning)?
Are we then permitted to refuse to have attached the drowning child? Cage match between Singer and Thomson!
I have been writing about aphantasia and hyperphantasia and what it might mean, for these thought experiments, if you actually *see* the drowning child or the terrible thing that needed intervention. Our reactions are not wholly philosophical. https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-the-sixth-sense
I feel bad admonishing Scott for not being universal enough when there's this much opposition in the comments to having even slightly more rational ethics. And I realise he has to take into account that you can't expect anyone to be fully rational or altruistic. But if you really take Rawls' veil seriously, the conclusion should obviously be world communism for all sentient beings.
If Earth was populated with perfectly rational, perfectly altruistic Rawlsians, they wouldn't just be donating 10% to bed nets, they'd also be spending like 25% of world gdp building social housing for wild mice etc.
>How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse.
I feel like the 10% level of altruism Scott's proposing is way lower than could be justified by constraints on maintaining economic growth, and he's really considering psychological opposition to being more altruistic rather than anything theoretical here. The top rate of tax used to be 90% in a lot of places in the post-war period, and modern gdp per capita is about 20x above subsistence level. The theoretically ideal Rawlsians could easily be spending 50%+ of gdp on charitable redistribution imo.
>I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.
Considering the possibility of defections seems to defeat the point of the thought experiment, since that's no longer behind the veil.
That is a sensible position - basically one of diminishing returns to your free time, which seems reasonable.
We need a better term for this than Singerism, the guy himself has never, even once, sold all he has to give alms. Perhaps what you mean is sainthood?
I'm having a hard time parsing what you're trying to say here?
What is "the hole" supposed to represent here, exactly?
<sarc>Turning this into a cry of 'racism' against your political opponents is the very definition of rationally altruistic behavior</sarc>
Seriously, it's awfully tired and nobody introduced race into this discussion but you. Stop projecting.
I think you're mistaking me for someone who cares about being called racist, but sure, the case is more general than just the situation in SSA.
"Of course you have a moral obligation to save every single child that you can."
Whence comes this moral obligation? Personally, I'll take it from God telling me so, but in a secular world, some guy off the Internet can go whistle.
There is nothing else for me to be. I've tried to be atheist and it never stuck.
I think "obligated" is a difficult word here and can be avoided, as descriptively we don't require this of anyone.
It would be more accurate to say, "The more you do, the more value your life has" or something similar. You need strong phrasing to communicate the vital importance of doing this but not blame-based to avoid basically saying "it doesn't matter what you did if you didn't do everything."
But once we start discussing morality, we're wading into an entire morass. Morality is good, okay, but what counts as moral? If I think homosexuality is immoral, am I good or bad? How do we determine if it is or is not immoral? If not saving a drowning child is immoral, is not saving a pregnancy from being aborted immoral? How do we distinguish between the two lives there?
Because the people on here telling me to "go back to the hypothetical, engage with the hypothetical" don't want any nuance or grey areas or contemplation of real world, we are supposed to just go "child drowning, must save". Okay then, child in womb about to be killed, must save. Engage with that and then talk to me about morality.
Oh and you can't argue "it's not a child", "it's only a potential person", "it depends on the stage of development" and the rest of such arguments because nuh-uh, that's dodging the hypothetical. After all, we don't list off what age the drowning child is, whether they're a genius or a Downs Syndrome child, who their parents are, or any of the rest of it. So now define morality for me based on actions deliberately chosen or inaction deliberately chosen with no refinements other than "this is a life, you are obliged to save life".
As a worshipper of Tlaloc, I feel my moral duty is to drown as many children as possible and so if I'm not pushing a kid into a pond 24/7, can I really say that my life has value? 😁
Curious what your stance is on potential-humans vs already-existent humans.
Are we also obligated to bring as many people into existence as possible, since we're "killing" them by accepting the alternative course?
This isn't meant to be a gotcha. I'm just curious on your worldview.
I'm also curious, do you strive to be 100% moral under this code yourself?
Bringing up the actual real-life effects of charities doesn't seem to motivate anyone, because they fall back onto abstract arguments about why it's not good to do charity that on average saves a life per $6,000. And obviously, as you can see, it's also pointless to discuss hypotheticals when there are real-life details one could be talking about instead.
So yeah, I agree that EA refusing to obey social mores is cultish. Normal people drop it when they see you aren't interested in conversation.
I do think you can persuade people, but it's much closer to discovering existing EAs than to making new ones. That doesn't invalidate your point, though, especially since this essay is targeted at someone who probably thinks they think a lot about morality.
Not relevant to the point you are making, but apparently the split brain experiments are not as well founded as previously believed: https://www.uva.nl/shared-content/uva/en/news/press-releases/2017/01/split-brain-does-not-lead-to-split-consciousness.html?cb
Pardon me if I'm missing something obvious, but don't “split-brain” patients still potentially have a ton of mutual feedback via the rest of the nervous system and body?
Oh yeah completely separately I'd like to apologize for embodying the failure mode you're talking about here. I'm not good and I use this place as a cathartic dumping ground for my frustrations, whoops.
Sometimes the brain worms get me but I'll try to keep in mind that sometimes third parties have to scroll past my garbage. Need to imagine a stern looking Scott telling me to think if it's a good comment before posting.
"This obsession with arbitrary ethical rules"
I would argue that this piece, and EA in general, is trying to make those rules less arbitrary.
Arbitrariness is a matter of degree. The fewer convoluted assumptions are required before logical implication can take over, the less arbitrary some idea is. Saying "still ultimately arbitrary" and then justifying "ultimately" on the grounds of the is-ought problem being a thing at all... by that standard, the phrase "arbitrary ethical rules" is about as redundant as "wet lakes" or "spherical planets" - unclear what it would even mean for the descriptor not to apply, so using it anyway is more likely a matter of smuggling in misleading connotations.
If someone told me their own hamburger had ketchup on it, just after having taken a bite, I'd be inclined to believe them even if I couldn't see any ketchup there myself - it's not an intrinsically implausible claim, and they'd know as well as anyone would.
Similarly, having observed it directly I consider my own life to have value, and I'm willing to extend the benefit of the doubt to pretty much everyone else's.
Doesn't seem to be deleted: https://laneless.substack.com/p/the-copenhagen-interpretation-of-ethics
It was originally, long before Substack was founded, at a different URL that's no longer online. Possibly people don't know that there's now a Substack.
I linked to it at the time https://entitledtoanopinion.wordpress.com/2015/07/17/you-said-it-better-than-my-years-of-attempts/
Oh, thank goodness - I'd have been sad if a "foundational reference" essay that I reread periodically was gone for good. Link rot comes for everything in the end...
I made sure that it was on the EA forum before the old blog went offline, and that copy I expect to be Extremely Permanent:
https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics
Sorry about the trail of dead links I've carelessly left in my wake.
"Sorry about the trail of dead links I've carelessly left in my wake." - Me, the first time I played a Zelda game
This kind of thing is getting far beyond the actual utility of moral thought experiments. Once you're bringing in blatantly nonsensical constructs like the river where all the drowning children from a magical megacity go, you've passed the point where you can get any useful insight from thinking about this hypothetical.
If you want to actually make a moral point around this, it's better to find real-life situations that illustrate your preferred point, even if they're messier or have inconvenient details. The fact that reality has inconvenient details in it is actually germane to moral decision-making.
So much this. My moral intuition just completely checks out somewhere between examples 2 and 3 and goes "blah, whatever, this is all mega-contrived nonsense, I might just as well imagine me a spaceship while I'm at it". Even though I'm already convinced of the argument Scott makes.
I feel the same about many trolley problems.
Having said that, doing thought experiments is a good discipline.
They're a great way to consider things in a stripped-down way. They just hurt the brain a bit.
Learning morality from these thought experiments is like learning architecture from an Escher painting.
True that it's hard to learn from these--but they're not for *learning* morality. Thought experiments are the edge cases by which you *test* what you've learned or concluded. In that analogy, it's like looking at what architecture *can't* do by studying an Escher lithograph.
Practically speaking, no one has been persuaded into actually looking into the details when they say things like "why would I donate to malaria nets". They fall back onto their preconceptions about how charities are corrupt and how nothing productive ever comes of charity, despite those points being addressed in exhaustive detail on GiveWell's website.
So when people say that hypotheticals are useless and that it takes too much time to find out germane details, it sure does seem like people have a gigantic preference for not having anything damage their self-image as a fundamentally morally good person, and this preference kicks in before any rules about the correct level of meta or object-level detail do.
I mean, that's obvious, right? What's your point? That most people don't seem especially saintly when scrutinized by Singer or similarly scrupulous utilitarians?
If it was obvious, there'd be way more pushback re: discussion norms against bad faith. Coming into a discussion with your bottom line written down and being unwilling to update on germane facts that someone has to find for you is rude and should be banned via most ethical systems and not just utilitarianism (or is being stubborn a virtue?)
I'm not saying that they're at fault for being less virtuous, but for *not even attempting to be virtuous by most definitions of virtue*. Neither deontology nor virtue ethics says that it's okay to ignore rules or virtues because it feels uncomfortable. And this isn't a deep-seated discomfort that's hard to hide, it's an obvious-by-your-own-accounting one!
You're just saying "bad people should be good people" at great length here. So yeah, I'd say it's pretty obvious.
> or is being stubborn a virtue?
Plenty of people think of things like maintaining faith and hope in conditions where they are challenged as virtuous, rather than as opportunities to reconsider your beliefs. Usually this is couched in terms of being ultimately right, contra the immediate evidence - seems like a pretty good definition of stubbornness to me.
You're wrong. I was persuaded precisely by the details, specifically by Scott back on SSC - the post which finally pushed me over was *Beware Systemic Change*, oddly enough, but the fuel was all of his writing about poverty and effectiveness and so on, in a specific, detailed fashion.
What I think you're saying is "people want to be selfish and will engage in whatever tortured pseudo-logic that lets them indulge in this urge with minimal guilt". And on a purely descriptive level, I agree. I also think that's bad, and we should not in any way encourage that behavior.
Thank you so much for proving me wrong. I should not have been hyperbolic.
And I also agree this shouldn't be encouraged, but I have no idea what a productive way of going about this would be. The unproductive way I've been doing is to post snark and dunks, which I agree is bad and also should not be encouraged but what if it makes me feel a tiny bit better for one moment? have you considered that.
But no seriously, you can't see the exact degree to which someone is bad faith in this way until you've engaged with them substantially, at which point they usually get bored and call you names instead of responding. Any ideas would be welcome
Politics is the mind-killer. It is the little death that precedes total obliteration. I will face the hot takes and I will permit them to pass over me and through me. And when the thinkpieces and quips have gone past, I will turn the inner eye to see its path. Where the dunks have gone there will be nothing. Only I will remain.
But to your point, yes, broadly speaking I agree. Claims that you have an obligation to be Perfectly Rational or Perfectly Moral-Maximising or whatever at all times, and that to fall short by a hair's breadth is equivalent to having never tried at all or having tried as hard as possible to do the opposite, are utterly Not Helpful and also patently stupid. If I came across as saying that, I strongly apologise. And implied within that position is that it is less than maximally damning to fall short from time to time - not *good* maybe, but you do get credit for the Good Things.
And yes, I agree that there is a lot of bad faith on this topic, because people want to justify their urges to have another six-pack of dubiously-enjoyable beer rather than helping someone else, an urge which only gets greater with greater psychological distance. Construal level theory is applicable here, I think. Frankly, I'm getting pretty hacked off with people arguing in what is obviously bad faith trying to justify both being selfish and viewing themselves as not-selfish.
The basic way I ground things out is "do you accept that, barring incurring some greater Bad Thing, to a first approximation we have some degree of moral obligation to help others in bad situations?" If yes, then we can discuss specifics and frameworks and so forth. If not, we're from such totally different moral universes that our differences are far more fundamental.
> If I came across as saying that, I strongly apologise.
You did not come across this way.
I actually do think I'm not being helpful, and like, surely there exist norms that we can push for where people don't post such bad faith takes.
> If not, we're from such totally different moral universes
To a certain extent, this is not what Scott believes and it's to his great credit that he doesn't, because it's what motivated him to be persuasive and argue cogently for his point.
Agreed. The day I first encountered Peter Singer's original drowning child essay, I went home and donated to malaria nets. I've been donating 10% of my income to global health charities ever since. Hypothetical situations aren't inherently unpersuasive, even if you can't persuade all the people all the time.
I truly think that most people just don't have money to donate to charity after all of the taxes they pay. People may believe that if spending isn't taken care of immediately, the government will go bankrupt within 1-5 years, and if that happens the entire Western world will collapse overnight and a whole lot of people, the entire planet, will be suffering a whole lot. People may also believe DOGE actually will make things more efficient, and if that ends up being the case it's completely fine to continue to help the rest of the world in a streamlined and technologically up-to-date way.
I honestly haven't kept up with DOGE and what's going on, but it seems like they're going full Shiva on everything and then reinstating things they make mistakes on. It's not the way I think anyone would prefer, but if it really is true that the US could go bankrupt within 1-5 years then this absolutely had to happen, and one can be a moral person that supports this.
I think the mega-death river is actually a pretty reasonable analogy for many real-life situations. Scott has mentioned the rich Zimbabweans who ignore the suffering of their countrymen. These are analogies for simply turning a blind eye to suffering, and the point being illustrated is that morality does not reasonably have any *actual* relationship with distance or entanglement or whatever, it's just more convenient to request that people close to a situation respond to it.
Of course there are plenty of ordinary Angolan businessmen, but I think the assumption must be that the rich Angolan is probably not a legitimate businessman but someone who skims or completely appropriates Western aid or the oil revenues that themselves owe to Western businessmen.
I would mostly agree. It's the distillation of some moral hypothetical into a specific (albeit wholly artificial and nonsensical) scenario that makes it a PARABLE.
I think people are apt to ignore problems if they think they can't do anything useful. They might or might not be right about whether they can do anything useful.
Sometimes the locals are the only ones who can help. Oskar Schindler was in the right place and at the right time to save a good number of Jews. Henry Ford wasn't in a place where he could do much. What he could do, make weapons for the Allies, was entirely different from what Oskar could do (making defective shells for the Nazis as a cover for saving Jews).
Even assuming Ford was a moral person who was genuinely interested in helping, he didn't have an avenue to do so in a direct way. I don't consider that a moral failing. That he instead chose to help the war effort (which maybe not coincidentally also gave him a lot of money) is not a moral failing either.
And sometimes we just make mistakes, which we cannot determine at the time. The US returned several boatloads of Jews to Europe at a time when it didn't seem like that was likely a big deal. Hindsight wants us to call the action evil, but that's a kind of bias. It was 1939. Very little of Europe was under the control of the Nazis and there wasn't much reason to think that would change. Even less reason to think that the Nazis planned to exterminate Jews in lands they conquered. The solution of "always accept boatloads of foreigners" is not a reasonable policy and comes with its own negatives and evils, which again would be noticed in hindsight.
Maybe Henry Ford isn't the best example to use here.
https://www.si.edu/object/american-axis-henry-ford-charles-lindbergh-and-rise-third-reich-max-wallace%3Asiris_sil_1094433
I'm totally aware of that. Hence the "even assuming Ford was a moral person" part.
"The solution of "always accept boatloads of foreigners" is not a reasonable policy"
It was America's policy up until 1882 and, for white people, up until 1921.
Which means that "sometimes accept boatloads of foreigners" is a reasonable policy. That does not imply that "always accept boatloads of foreigners" is as well.
Yes. And to be clear, I'd count as "physical closeness" all the examples with remote robots, portals, and any techno-magical way of experiencing things and jumping in as easily and quickly as if you were physically close - the thought experiments are not ruling out closeness, because those alternatives clearly have the same effect as physical closeness for many things, not only altruism. They just make precise what closeness is, once (existing or hypothetical) things make it more complex than physical distance. That said, even more than physical closeness, altruism is boosted by:
- innate empathy (higher for children, higher for people more like you, higher for women, lower for enemies)
- the impression you can help (your efforts are not likely to be in vain)
- the impression you will not lose too much by helping
- this includes the fear of establishing a precedent for such help, which indeed can cost a lot if the issue is ultra-common. To me this is a better explanation for the lack of empathy toward commonplace misery than habituation...
- the impression you can gain social status as the "good guy" (direct or indirect bystanders).
On the other hand, it is decreased (decreased a lot, I think) by:
- the impression you are being taken advantage of, scammed in a way... (i.e. your rescue will super-benefit the victim, leaving them better off than merely having the issue fixed (like drowning), or, more commonly, it benefits a third party, especially if that third party caused the problem in the first place). This is linked to "losing too much", but not only that; also a little to social status (hero vs. "trop bon trop con" - too good = too dumb). But I feel it really is an altruism killer in its own instinctual way. Maybe THE killer.
I use "instinctual" a lot, because I am fully in the camp of morality being an instinct first, an axiom-based construction (distant) second. So, like other instincts/innate things (like sensory perception), it is easy to construct moral illusions, especially in situations impossible (or unlikely) to happen during human evolution.
I think it's a good illustration of how to think about this problem.
Here's a real-life situation:
You're a doctor working at a hospital, putting in superhuman effort and working round the clock to save as many people as you possibly can. Once you finish your residency, do you have a moral obligation to keep doing this?
You have a moral obligation to be a good person. There are many ways to do that, of which backbreaking labor at a hospital is both not the only option and perhaps not the best option.
You don't have a moral obligation to be a good person - to be a good person is to go above and beyond your obligations. Meeting your obligations doesn't make you good, it makes you normal.
This attitude is toxic and feeds into tribalism and "no cookies" arguments, where treating the other tribe well earns credit even for a little, while treating your own tribe with anything but the most delicate kid gloves invites excoriation.
That is not a prescriptivist statement, but a descriptivist statement, about what the words actually mean to people.
I'm not sure it works as descriptivist either--there are plenty of people who divide the world into "good people" and "bad people", not "the good, the bad, and the average".
I didn't respond at first because in some sense you're right - or we could quibble over what "good" or "Good" mean, which probably isn't productive.
I will say that I don't consider moral to be neutral. Just being a normal person who does normal stuff doesn't make you moral. It doesn't make you immoral, either.
For me to consider someone moral, I believe that they have to do things that are morally positive that are not natural or easy. There has to be at least some effort at doing other than go-with-the-flow.
Again, not doing that doesn't make you evil (usually), but I don't want to dilute the idea of morality to make it natural and easy. It lets everybody get off too easily and with no benefit to society. We should expect more, in the sense of "leave the place better than you found it."
Agreed.
Does the hospital administrator have a moral obligation to hire enough people to finish all the necessary work without sleep deprivation and burnout?
Does it matter? Does the fact that someone else's lack of moral obligation left you in this situation mean you don't need to help?
Maybe you see a drowning child because someone didn't fulfill their moral obligation to add fences. Or because someone pushed the child into the river. Does that change your moral obligation to save them?
Strongly disagree. The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Suppose you're Newton trying to explain how orbits work with the cannon thought experiment, but the person you're talking with keeps bringing up ways in which the example is unrealistic. "What sort of gunpowder could propel a cannonball out of the atmosphere?" they ask, and "What about air resistance slowing the cannonball down?" and so on.
It's not unreasonable to say in that situation "No, ignore all of that and focus on the idea the thought experiment is trying to communicate. If it helps, imagine that the cannon is in a vacuum and the gunpowder is magic."
And sure, if Newton thought hard enough, maybe he could have come up with the concept of rockets and provided an entirely realistic example of the principle - but if someone had demanded that of him, they'd still have been missing the point.
You are making a sound argument for using extreme examples where the details stop mattering.
Unfortunately, what we have here are highly contrived examples with lots of extra details thrown in to muddle up the analysis.
They may both be hypotheticals/thought experiments, but otherwise they are not similar.
>The utility of unrealistically simple toy models is that they can explain principles that the messiness of real-world examples conceals.
Even the most simple, original Drowning Child thought experiment is drawn from messy reality. It asks us to avoid many questions that any person in that situation might ask themselves, intuitively or not: What is the risk to myself, other than the financial risk of ruining my suit? Am I a good enough swimmer to get to the child and pull it to shore? Is the child, once I reach it, going to drown *me* by thrashing around in panic? Is the child 10 meters from shore, or 100? Are there any tools around that could help, like a rope?
Plenty of complications there already, and no need to introduce even more. Or, if you do need to introduce more, start asking yourself if it's really a good thought experiment to begin with.
But Newton never claimed that his cannonball experiment was, by itself, proof of his theories, only that it helped to illustrate an idea that he'd separately demonstrated from real examples. Scott doesn't have the real-world demonstration.
How do you do a "real-world demonstration" of an ethical principle?
Complete agreement.
I'd have thought the opposite is true, or can be, in that well-chosen idealized scenarios can help clarify and emphasize moral points. It's analogous to a cartoon or diagram, in which a few lines can vividly convey all the relevant information in a photo without any extraneous detail.
I actually have found a lot of utility in it because I seem to disagree with basically everyone in this thread, and it has given me context on why I find EA so uncompelling.
By relocating to the drowning child cabin you are given an incredibly rare chance to save many lives, and you should really be taking advantage of it.
On the other hand, you only get the opportunity because the megacity is so careless about the lives of its children. Obviously, saving the drowning children is a good thing, but what would be even better is if the megacity does something to prevent the kids falling into lakes and streams in the first place.
And if they don't bother because "well that sucker downstream will save the kids for us, and we can then spend the money that should go to fencing off dangerous waterways and having lifeguards stationed around pools on trips to Dubai for the rulers of the city", then are we really saving lives in the long run?
You are not really engaging with the thought experiment. Maybe think of this experiment instead: you suddenly develop the superpower of being able to be aware of any time somebody is drowning within say, 100 miles, and being able to teleport to them and teleport back afterward. If you think 100 miles is so little that a significant number of people drowning within that area is the government being lazy or corrupt, then imagine it was 150, or 200, or 1000, or the whole planet if you must. Would you have an obligation to ever use the active part of your powers to save drowning victims, and how much if so?
"You are not really engaging with the thought experiment."
Because it's rigged. It's not honest. It's trying to force me along the path to the pre-determined conclusion: "we think X is the right thing to do and we want you to agree".
I don't try to convert people to Catholicism on here, even though I do think in that case X is right, because I have too much respect for their own minds and souls. I'll be jiggered if I let some thought experiment that is as rigged as a Las Vegas roulette wheel manhandle me into "well of course I agree with everything your cult says".
EDIT: You want me to engage with the thought experiment? Fine. Let's forget the megacity.
Outside my door is a river, and every hour down this river comes a drowning child. Am I obligated to save one of them?
I answer no.
Am I obligated to save every single one of them, morning noon and night, twenty-four drowning children a day every day for the foreseeable future?
Again I answer, no.
But that's not what I'm supposed to answer? Well then make the terms clearer: you're not asking me "do you think you are morally obligated?", you're telling me I'm morally obligated. And you have NOT made that case at all.
There's a group of people who think that if you live in a regular old cabin in the woods in the real world, see a single child drowning in the river outside, and can save them with only a mild inconvenience, you are morally obligated to do so.
The child-drowning-in-a-river-every-hour thought experiment is a way to further explore that belief and discuss where that moral obligation comes from. Of course it's going to sound absurd to you, because you don't agree with the original premise. It's convoluted because it's a distortion of a previous thought experiment.
I'm not a huge fan of the every hour version because it implies an excessive burden on the person who would have to save a child every hour, completely disrupting their life and removing moral obligation to some degree. I think the comparison of the moralities of the two people earning $200k is a much more interesting example.
Save sixteen kids the first day, then formally adopt those. Have them stand watch in shifts, with long bamboo poles for rescuing their future siblings from safely back on shore. If their original parents show up, agree to an out-of-court settlement, conditional on a full-time lifeguard being hired to solve the problem properly.
I mean, if you seriously think you're not morally obligated to save any drowning children (and elsewhere in the thread you said it applies to the original hypothetical with just one child too), then fine, you've finally engaged instead of talking around the thing.
This, and your attitude to moral questions in general, does affect my opinion of the effectiveness of Catholicism, and religion in general, in instilling morals in people though, and I can't be the only one. You're not just a non-missionary, you're an anti-missionary.
Oh dearie, dearie me. You now have a poor opinion of Catholicism, huh? As distinct from up to ten minutes ago when you were on the point of converting?
Yeah, I'm afraid my only reaction here is 😁😁😁😁😁😁😁😁😁😁
Now, who's the one not engaging with the hypothetical? "Just because you think it's bad doesn't mean it's wrong", remember that when it comes to imposing one's own morals or religious beliefs on others in such instances as trans athletes in women's sports, polyamory, no-fault divorce, capitalism, communism, abortion, child-free movement and a lot more.
You don't like the conclusion I come to when faced with the hypothetical original Drowning Child or the variants with the Megacity Drowning Children River? Tough for you, that does not make my view wrong *unless* you can demonstrate from whence comes the moral obligation.
"If you agree to X you are morally obliged to agree to Y". Fine. Demonstrate to me where you get the moral obligation about X in the first instance. You haven't done that, you (and the thought experiment) are assuming we all share Western, Christianity-derived, social values about the importance of life, the duty towards one's neighbour, and what is moral and ethical to do.
That's a presumption, not a proof. Indeed, we are arguing about universal values and objective moral standards in the first place!
I can be just as disappointed about "the effectiveness of Effective Altruism, and rationalism in general, in instilling morals in people" if you refuse to agree with me that "if you agree to save the Drowning Child, you are morally obligated to agree to ban abortion".
Malaria killed 608,000 people, most of them young children, globally in 2022. Abortion killed 609,360 children in the USA alone in 2022. Now who cares more about the sacred value of life and the duty to save children?
I think she's engaging, but the experiment seems to be going sideways ;-).
That's the fun with hypotheticals - someone elsewhere said "the choice is snake hands or snake feet and you're going 'I want to pick snake tail'" but why not? It's a hypothetical, nobody in reality is going to get snake hands or snake feet! So why not "Oh I think I'd rather be Rahu instead!" with the snakey tail?
https://pureprayer-web.s3.ap-south-1.amazonaws.com/2024/08/xvJUkEcy-Rahu-Ketu-Dosha-Parihara-Homam-Creatives-2024-600-%C3%97600px.jpg
These things never seem to bother with considering that the value of a human life is not a universal constant, any more than is the value of other life on this planet.
Oh, sure. It's an arm-twisting argument about "you should give to charity", not anything more. Same way the thought experiments about "suppose a famous violinist was connected up to your circulatory system" or "suppose people got pregnant from dandelion seeds floating in the window" are about abortion.
It's set up to force you along a path to the conclusion the experimenter wants you to arrive at. You have three choices:
(1) Agree with the conclusion - good, moral person, here's a pat on the head for you
(2) Agree with X but not with Y - tsk, tsk, you are being inconsistent! You don't want to be inconsistent, do you? Only bad and stupid people are inconsistent!
(3) Recognise the trap lying in wait and refuse to agree with X in the first place - and we get what FeaturelessPoint above pulls with me - oh you bad and wicked and evil monster, how could you?
Many people go along with (1) because nobody (or very, very few) is willing to be called a monster by people they have been habituated to regard as Authorities (hence why it's always Famous Philosopher or Big Name University coming out with the dumb experiments; we'd all laugh and ignore it if it were Joe Schmoe on the Innertubes), and most people want to get along with others, so they'll cave on (2). We all want to think of ourselves as moral and good people, after all, and if the Authority says "only viewpoint 1 is acceptable for good and moral people to hold", most of us will go along meekly enough.
You have to be hardened enough to go "okay, I'm a monster? fine, I'm a monster!" but it becomes a lot easier if your views have had you called a monster for decades (same way every Republican candidate was "Hitler for real this time", eventually people stop paying attention).
I'm willing to bite that bullet in a hypothetical, because I know it's a hypothetical and what I might or might not do in a spherical cow world of runaway trolleys and nobody in sight for miles around a pond except me and a drowning child, is completely different from what I'd do in real life.
In real life, maybe I don't jump into the pond because I can't swim. Maybe this is my only good suit and if I ruin it, I can't easily afford to replace it, and then I can't go to that interview to get the job that means now I can pay rent and feed my own kids. Maybe I'm scared of water. Maybe I think the kid is just messing around and isn't really drowning. Real life is fucking complicated*, so I have no problem being a contrarian in a simplified thought experiment that I can tell is trying to steer me down path A and not path B.
In real life, I acknowledge the duty to give to charity, because my religion tells me to do so. That's a world away from some smug thought experiment.
*Which is why there is a field called moral theology in Catholicism, and why for instance orthodox Jews get around Sabbath prohibitions by using automated switches to turn on lights etc. The bare rule says X. Real life makes it hard to do X, is Y acceptable? How about Z? "You're a bad Jew and make me think badly of Judaism as instilling moral values if you use automation on the Sabbath" is easy to say when it's not you trying to live your values.
I'm loving your ability to enunciate what the rest of us can only mutely feel.
"Suppose people got pregnant from dandelion seeds floating in the window" - hadn't heard that one but it's funny to me because it puts the thought experimenters about at the level of some adolescent girls circa 1984 - when my fellow schoolgirls earnestly discussed whether one could get pregnant "sitting in the ocean" lol.
Thank you for the compliment!
Yeah, the dandelion seeds comes from Judith Thomson's "A Defense of Abortion"
https://spot.colorado.edu/~heathwoo/Phil160,Fall02/thomson.htm
"Again, suppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don't want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective, and a seed drifts in and takes root. Does the person-plant who now develops have a right to the use of your house? Surely not--despite the fact that you voluntarily opened your windows, you knowingly kept carpets and upholstered furniture, and you knew that screens were sometimes defective. Someone may argue that you are responsible for its rooting, that it does have a right to your house, because after all you could have lived out your life with bare floors and furniture, or with sealed windows and doors. But this won't do--for by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army."
Interestingly, she seems to argue *against* the Drowning Child scenario, though not by mentioning it:
"For we should now, at long last, ask what it comes to, to have a right to life. In some views having a right to life includes having a right to be given at least the bare minimum one needs for continued life. But suppose that what in fact IS the bare minimum a man needs for continued life is something he has no right at all to be given? If I am sick unto death, and the only thing that will save my life is the touch of Henry Fonda's cool hand on my fevered brow. then all the same, I have no right to be given the touch of Henry Fonda's cool hand on my fevered brow. It would be frightfully nice of him to fly in from the West Coast to provide it. It would be less nice, though no doubt well meant, if my friends flew out to the West coast and brought Henry Fonda back with them. But I have no right at all against anybody that he should do this for me."
So by her logic, if you live by the river of drowning children, nobody in the world can force or expect you to rush out and save them every hour, or indeed at all. Just because your cabin is located beside the river, where there is a megacity upstream where the children all tumble into lakes and get washed downstream, puts no obligation whatsoever on you. You didn't do anything to create the river or the city, or the careless parents and negligent city government.
I appreciated this perspective and was surprised it wasn't brought up earlier or given greater weight.
Deontological details are important, but a core part of all of this revolves around who is accountable for stopping an atrocity. I loved Scott's article, but we focused on pushing the extreme boundaries of how to evaluate a hapless individual's response to the megacity drowning machine while literally ignoring the rest of society.
I've waved this part off as avoiding the pitfalls of the bystander effect; plus the point of the article seems to be answering the question "what should I as an individual do?" as well. But sometimes a problem requires a mobilized, community response.
I also appreciated Deiseach pointing out that when you altruistically remove pain from a dysfunctional system, you can remove the incentives for the system to change, which can lead to a worse outcome.
If it needs to be in the form of a thought experiment:
A high-profile, reckless child belonging to a powerful lawmaker, constantly gallivanting about, falls in the river. If you save them, you know the child will stay mum about it to avoid backlash from their parents; but if they drown, the emotionally vexed lawmaker will attempt to re-prioritize riparian safety laws. What do you do?
The megacity is a vibrant democracy. Every child who drowns traumatizes the entire family and their immediate relations and galvanizes them to vote against the status quo and demand policy change, which is the only thing that will ultimately stop the jeopardy to the children long term. Do you save an arbitrary child that afternoon? How about at night after saving every child during your standard waking hours?
No one wants to see an atrocity occur. But sometimes letting things burn allows enough smoke to get in the air that meaningful action can finally happen. We should at least consider this if we're doing an elaborate moral calculus.
Isn’t this “incredibly rare chance” basically equivalent to being on the global rich list? Which most people on this forum are on?
https://www.givingwhatwecan.org/how-rich-am-i
I went to that page and entered my information, but it didn't tell me whether I was on the global rich list or not, and it didn't say how rich someone would have to be in order to be on the global rich list (which I assume is not a real list, but a metaphor meaning in the top 0.01% or something). Do you know?
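I don't know the calculator's internals, but the mechanics of that kind of percentile lookup are easy to sketch. A toy version, assuming (purely for illustration) that world income is roughly log-normal - both parameters below are invented, whereas the real site presumably uses actual survey data:

```python
# Toy income-percentile calculator. The log-normal assumption and both
# parameters are invented for illustration, not taken from the real site.
from math import erf, log, sqrt

WORLD_MEDIAN = 3000.0   # hypothetical world median annual income, USD
SIGMA = 1.5             # hypothetical spread of log-incomes

def world_percentile(income_usd: float) -> float:
    z = (log(income_usd) - log(WORLD_MEDIAN)) / SIGMA
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

print(f"{world_percentile(60_000):.1%}")  # ~97.7% under these made-up numbers
```

Under any assumptions in this ballpark, a typical Western professional salary lands in the top few percent worldwide, which is presumably all that "global rich list" is meant to gesture at.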
You mean to tell me that people *don't* wake up sutured to violinists?
And here was me convinced my sister got pregnant from the baby-seeds floating in the window!
Scott has to keep making up fantastical situations because it’s the only way to pump up the drowning child intuition. I don’t regularly encounter strangers who I can see right in front of me having an emergency where they have seconds before dying but are also thousands of miles away.
The important part isn't physical distance, it's ping times.
Hmm, I don't have an ethic where I judge hypotheticals in terms of their realism. In fact, isn't the beauty of the hypothetical the fact that it is so malleable?
It really is a different mode of thinking. For some people, abstract situations are clarifying because they eliminate the ancillary details that obscure the general principle. For others, it's necessary to have all the ancillary details to make the impact of the general principle evident.
I've always favored the former, but I regularly encounter folks who only process things via the latter. Communicating effectively and convincingly across different types requires the ability to switch modes. Sorta like talking to a physicist vs. an engineer.
I read about this on r/askphilosophy (not sure why this scenario is resurfacing so much lately) and was struck by this comment:
"Singer isn't writing for people walking by ponds that children have fallen into, though. It's a thought experiment ... Singer's point isn't "Intuition, yay!" it's that our intuition privileges the people close to us but we should consider distant folks the same. It's that our intuition is wrong."
That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable.
It seems entirely disingenuous to me. He (Singer) should state his point, assert that he knows the truth and you know a lie, and let the chips fall where they may.
"That comes very close to sounding like there is no "thought" either sought or required - that he had a point, and smuggled it into a parable."
It's a gotcha, and why my withers remain resolutely unwrung by those telling me I'm immoral if I don't fall into line about "if X, then by necessity and compulsion Y".
I kind of know what you mean, but I kind of also feel like thought experiments lay bare uncomfortable truths about ourselves that we can typically hide from behind "germane details"
Is it a good thing to aspire to be the moral equivalent of the Siberian peasant who can’t do math word problems because he rejects hypotheticals? The thought experiments are useful for crystallizing what principles are relevant and how. Most people don’t intuitively think in terms of symbolic abstractions, that’s why hypothetical scenarios. Their practical absurdity is beside the point.
Given Russian history, I sorta suspect the Siberian peasant is capable of doing math word problems in private, but has developed an exceptionally vigilant spam filter. Smooth-talking outsider comes along, saying things without evidence? Don't try to figure out the scam, just play dumb, avoid giving offense, and wait for him to leave.
I agree. Maybe the Siberian peasant is too stupid to do maths problems, or maybe he remembers the last time some government guy from the Big City turned up and asked the villagers to agree to a harmless imaginary statement.
They're still scrubbing the bloodstains out of the floor in that hut.
As we are hyperbolic discounters of time, perhaps we similarly discount space.
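For concreteness: hyperbolic discounting of time is usually written V = A / (1 + kD), where D is the delay and k a fitted constant, and the spatial analogue would just swap distance in for delay. A minimal sketch, with a made-up k (not fitted to anything):

```python
# Hyperbolic discounting applied to distance rather than time.
# V = A / (1 + k * D); the k value is invented purely for illustration.

def discounted_value(amount: float, distance_km: float, k: float = 0.1) -> float:
    """Perceived value of a benefit, discounted hyperbolically by distance."""
    return amount / (1 + k * distance_km)

print(discounted_value(1.0, 0))        # 1.0    - a life saved right in front of you
print(discounted_value(1.0, 10_000))   # ~0.001 - the same life, continents away
```

The characteristic hyperbolic shape carries over: value falls off steeply at first and then flattens, so "here vs. the next town" feels like a bigger moral difference than "one continent away vs. two".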
Perhaps more precisely, people discount help according to their social circle's accounting of that help. Distance is part of it, relatedness is another (especially in collectivist cultures like mine)
That's a good point. Intuitively, the kid you can see drowning is probably your neighbor's kid or your second cousin twice removed or something. The kid you can't see drowning has nothing to do with you, and you have nothing to do with any of the people that would be grateful for them being saved
Honestly, that's not intuitive to me? I've never read the drowning child thought experiment and thought "wow, that's probably related to someone I know!", and if we imagine that the drowning child is not at all related, e.g. they're a Nigerian tourist or something, it still seems like I'm just as obligated to save them.
I think they are trying to explain why our moral intuitions discount space.
https://www.lesswrong.com/w/adaptation-executors
In other words, caring about space (for space's sake) may have evolved because it was a near-enough proxy for relatedness.
ohh that makes more sense, like kin selection
So people intuitively recognize that you should save drowning children because that intuition evolved to help people related to you pass their genes on. They don't have that intuition for people far away because it had no reason to evolve, since helping people hundreds of miles away doesn't help your genes.
In older times people just said that other tribes didn't really matter and only they did, so that's why they only helped their own tribe. Nowadays people are more egalitarian and recognize that everyone has moral worth, so they have to twist themselves into knots to justify their intuition that you don't have to help far-off strangers.
If there are two people choking, one 10cm away inside a bank vault you don't know how to unlock (and might be charged with a felony for trying), the other a hundred meters away across clear open ground, who do you have the greater responsibility to?
Available bandwidth and ping times are more important than literal spatial distance.
I would agree with this idea. It also seems like a near vs far mode thing. The suffering of children in Africa is very conceptually distant, and we perceive it with a very low resolution. A child drowning right next to you just feels a lot more real.
In Garrett Cullity's The Moral Demands of Affluence his argument is that the "Extreme Argument" (Singer's drowning child) would require us to compromise our own impartially acceptable goods. And we don't even ask that of the people we are saving, so they can't ask it of us. (Kind of hard to do a tl;dr on it because his entire 300 page book is solely on this topic.)
"My strategy is to begin by describing certain personal goods—friendships and commitments to personal projects will be my leading examples—that are in an important sense constituted by attitudes of personal partiality. Focusing on these goods involves no bias towards the well-off: they are goods that have fundamental importance to people’s lives, irrespective of the material standard of living of those who possess them. Next, I shall point out the way in which your pursuit of these goods would be fundamentally compromised if you were attempting to follow the Extreme Demand. Your life would have to be altruistically focused, in a distinctive way that I shall describe. The rest of the chapter then demonstrates that this is not just a tough consequence of the Extreme Demand: it is a reason for rejecting it. If other people’s interests in life are to ground a requirement on us to save them—as surely they do—then, I shall argue, it must be impartially acceptable to pursue the kinds of good that give people such interests. An ethical outlook will be impartially rejectable if it does not properly accommodate the pursuit of these goods—on any plausible conception of appropriate impartiality."
Never heard of it before, but I think you chose a good excerpt, thanks.
I don't know if you were the one to recommend it in a prior ACX post, but I saw a comment about that book and devoured it. I am shocked that it doesn't seem to have any influence on modern EA, because it deals a strong counterargument to the standard rejection of the infinite demand problem.
I don't think it was me. I live in an Asian timezone so rarely post here (or on any other moderately popular American thing since they always have 500 comments by the time I wake up).
But maybe we both saw the same original recommendation? I read it probably 5-6 years ago, so maybe I saw it back on SSC in the day??
No, this was within a few months ago.
I'm not well versed on the matter--isn't the entire point of Singer's drowning child that it is not an extreme demand? It doesn't require you to sacrifice important personal goods like friendships or personal projects, just that you accept a minor inconvenience.
Edit--I read more about it, and Singer's drowning child is not the extreme demand, but the child-per-hour argument could be. The iterative vs aggregative question to moral duty seems particularly relevant to Scott's post.
With the disclaimer that it has been many years since I read the book, and that Cullity's "life-saving analogy" is slightly different from Singer's for reasons he explains there. But part of Singer's argument is that he isn't actually asking us "just" to save one single child with his life-saving analogy. That's just the wedge entry point, and you are then required by his logic to iterate on it.
"I do not claim to have invented what I am calling the ‘life-saving analogy’. Peter Singer did that, in 1972, when he compared the failure to donate money towards relief of the then-recent Bengal famine with the failure to stop to pull a drowning child from a shallow pond."
Singer certainly wouldn't say "you donated to the Bengal famine in 1972 so you are relieved from all further charity for the rest of your life." He would ask you to iterate for the next thing. After all his other books advocate for recurring monthly donations, not one offs.
"No matter how many lives I may have saved already, the wrongness of not saving the next one is to be determined by iterating the same comparison."
Singer doesn't let you off the hook if you see a school bus plunge into the river and have saved a single child from it. You can't just say "eh, I did my part, time to get to work, I'm already running late for morning standup".
And then once you iterate it seems to lead inexorably to the Extreme Demand.
"An iterative approach to the life-saving analogy leads to the conclusion that
you are required to get as close as you productively can to meeting the
Extreme Demand"
So I think Cullity, at least, believes that (some variation of) Singer's argument requires the Extreme Demand.
I think it also abolishes the "why help one near person when the same resources would help one hundred far persons?" arguments I see bruited about.
If you don't get to say "I donated once/I pulled one kid out of a river once" and that's it, no more obligations, then neither do you get to argue that people far away should be prioritised in giving over people near to you (and I've seen plenty of arguments about 'this is why you shouldn't give to the soup kitchen on your street when that same money would help many more people ten thousand miles away').
If I'm obliged to help those far away, I am *also* obliged to help those near to me. I'm obliged to donate to malaria net charities to save 100 children, but I'm *also* obliged to give that one homeless guy money when he begs from me.
Distance is not an excuse in either case; if I'm obliged to help one, I am obliged to help all, and I don't get off the hook by saying "but I just transferred 10% of my wages to GiveWell" when confronted by the beggar at my bus stop.
There are no drowning children.
The correct scenario and question to ask is:
"An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease."
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?
If you're going to get moralistic about (I assume) HIV you should also bear in mind it gets transmitted from mothers to newborns, who obviously have no moral responsibility for their plight.
We can focus on the newborns:
"An entire society focuses on a sexual practice that spreads an incurable disease. The disease is also passed on to the children."
->
"An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society?"
This is standard motte and bailey. You cannot consider one without the other in this thought experiment and then turn around and apply it to real life.
This is kind of an absurd argument, given that the sexual practice in question is just "having sex" and infecting children is in no way a necessary consequence of people fulfilling their desire.
This is again a question of moral luck: people in the United States can, relatively trivially, get the medicine necessary to have sex without passing on HIV and people in some nations cannot.
This is false:
> use of a cloth to remove vaginal secretions during intercourse (dry sex) (relative risk, 37.95)
Dry sex has a risk ratio that is higher than blood transfusion (relative risk, 10.89).
The prevalence of dry sex is over 50% in places with high HIV rates.
"Just sex" has a transmission rate of 0.08%, which means someone needs to have sex with an infected person OVER 1000 times to be infected with HIV.
Okay, fine - we should definitely discourage this. I don't think that gives us the license to ignore newborns getting HIV or that this is tantamount to deliberately drowning children.
Why not colonization for the greater good, then?
The figure you cite is based on a study which observed 19 cases of HIV in total: https://pubmed.ncbi.nlm.nih.gov/2391598/
A much larger meta-analysis found much, much lower hazard ratios for dry sex practices (maybe 0-80% higher): https://pubmed.ncbi.nlm.nih.gov/21358808/
Do you really think the world's assembled anti-HIV efforts would have ignored this out of embarrassment or stupidity? It's largely a sexually transmitted disease - they are not squeamish when it comes to studying which sexual activities are associated with increased risk. It is easy to find tables with estimated infection rates for anal sex, sex while carrying open STI sores, and so on.
I suggest you come up with some other way of blaming HIV incidence on the backwards culture of Africans.
>An entire society has collectively decided that drowning children is sexually pleasurable. Should you save the child and ignore the sexual practices of the society
Those are two separate questions. You should save the child and do what you can to encourage the societal change that you think would be beneficial.
I get that this is some kind of tortured analogy to HIV, so I guess the real question is do you actually think non-profits aren’t also spending money on safer sex education in addition to HAART?
I hope you mean the newborns, because mothers obviously have moral responsibility for their newborns' plight.
I was referring to the newborns (although it's definitely not universally true that their mothers are at fault).
And from rapists to rapees, often, they say.
That doesn't mean long-term utilitarian arguments about the consequences of policy go away. It is conceivable that refusing to pay for HIV medication would ultimately produce a society where the risks of HIV are so terrifying that no-one engages in unprotected sex, and thus the number of infected newborns drops to zero. Even if, in the short term, more newborns die.
I don't know if this is actually true, but 'moralism' isn't the correct frame to analyse this problem with. "Fewer people should die of HIV" is a 'moralising' position.
I think "maybe if we let them all have AIDS it'll work itself out" is:
1. Very ghoulish
2. Obviously wrong, given that some countries already have wildly high rates
Yeah, some countries do have wildly high rates, but were these countries with zero access to HIV meds? How do you know this didn't exacerbate the problem?
I don't know what the correct solution to the calculus here would look like, I'm just pointing out that calling your critic a 'moraliser' in response is nonsensical. There's no utilitarian calculus without a moral definition of utility.
Countries in the first world with ready access to these medications are way better off than much poorer countries who rely on foreign aid to get them.
And my point was not that it's absurd to invoke morality, my point was that invoking it to assign responsibility to victims was incorrect for the population of victims who have no agency.
EDIT: Also antiretrovirals can virtually eliminate transmission so it's hard to see how the "moral hazard" argument would work here, at least in a scenario where you adequately supply everyone.
> Countries in the first world with ready access to these medications...
Are, among other things, overwhelmingly white/asian, and I'm HBD-pilled, so I don't assume that identical policies are going to yield identical outcomes in both areas.
> EDIT: Also antiretrovirals can virtually eliminate transmission
If they remember to take them, sure. IQ has an impact on how reliably that happens, though.
So there is a question here of judging people as a society. If the newborns in a society all get HIV because the society is bad, but then the newborns grow up to be part of the same society, how do we judge them?
There's a reasonable counterargument that any specific newborn isn't blameworthy because he's a blank slate that might not grow up like that society, so we shouldn't judge him as a social average. But then this is already effectively a newborn we know nothing about except that he's from that society, so maybe judging him by the priors of his society (instead of global priors) makes more sense.
(There's also, in this particular case, a second objection that AIDS aid might gradually help push that society away from the disease as a group; this is a practical question I have no particular insight about)
Judging someone who has taken no actions is pretty incoherent.
Umm, if you're talking about HIV, don't all sexual practices that involve the exchange of semen, saliva, or vaginal secretions increase the spread of the incurable disease? Doesn't this include all societies that allow or encourage physical sexual connection?
Does it make any difference whether or not you're a member of the society?
Reposting:
This is false:
> use of a cloth to remove vaginal secretions during intercourse (dry sex) (relative risk, 37.95)
Dry sex has a risk ratio that is higher than blood transfusion (relative risk, 10.89).
The prevalence of dry sex is over 50% in places with high HIV rates.
"Just sex" has a transmission rate of 0.08%, which means someone needs to have sex with an infected person OVER 1000 times to be infected with HIV.
If you’re going to say this and repeat it, then get it right. The 0.08% figure applies to the situation where the infected male is asymptomatic. If he is sick, the chance of transmission is 8x higher, or 0.64%, enough bigger to be harder to shrug at. That works out to a 12% chance of infection if the woman has sex with the symptomatic man 20 times.
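A quick sanity check of that arithmetic, as a minimal sketch; the per-act figures are the ones quoted in this thread, not independently verified, and independence of acts is assumed:

```python
# Cumulative infection risk over n independent exposures with a
# fixed per-act transmission probability.
def cumulative_risk(per_act_risk: float, n_acts: int) -> float:
    return 1 - (1 - per_act_risk) ** n_acts

asymptomatic = 0.0008           # 0.08% per act (the figure quoted above)
symptomatic = 8 * asymptomatic  # 0.64% per act

print(cumulative_risk(symptomatic, 20))     # ~0.121, i.e. about 12%
print(cumulative_risk(asymptomatic, 1000))  # ~0.551: "over 1000 times" is even odds, not certainty
```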
The spread of HIV depends on multiple sexual partners. That’s why I thought the blaming of the Catholic Church in Africa was odd.
The blaming was down to opposition to condoms. Condoms are the highest good, you see, and being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people (see the comments above about people wanting to believe they're good and moral while not doing something to help) so they use cases like "condoms reduce spread of AIDS, the church wants people to die" to make themselves feel justified.
It's not so much that the church is against condoms in committed relationships, it's the position that even using condoms in extramarital or promiscuous sex makes those things a worse sin instead of not as bad, that really riles people up wrt AIDS and other diseases.
Sex that makes two people happy is good on that basis in and of itself. Is it in all circumstances net good? No.
"being opposed to them means that you are a no-fun wet blanket who thinks having fun sex with no consequences like kids is a bad thing. This makes Westerners mad about their current attitudes to sex, because it makes them feel like they're being blamed and are bad people"
All this is an accurate description of the Catholic church's perspective. Judge and ye shall be called judgey.
The real reason to blame the Catholic Church in Africa is that Islam is so negatively correlated with HIV rates there :)
Can you please post a link to the stat you quote of 37x increased risk of HIV transmission when dry sex is practiced vs intercourse with no interference with the vagina’s secretions? I have looked quickly on Google Scholar and asked the “deep research” version of GPT and cannot find any figures remotely like what you are quoting.
All I found was this study saying there was no difference. https://pubmed.ncbi.nlm.nih.gov/8562002/
Yeah, I found that too. That 37x-as-likely figure sounded like bullshit to me from the start, because it's too big and too precise. This just isn't the kind of data from which you can extract such a precise number, or find such a large difference between groups. To get such a big number, and to trust it, you'd have to have a control group of couples and a dry sex group, assign couples randomly to the groups, and then follow them for a year doing HIV tests. Obviously such a study was not done, and it's not possible to get data you're confident of just by interviewing people about practices and number of partners, etc. In fact, it's probably not possible even to group the people studied into dry sex and regular sex groups: you'd have to find people who have *only* done dry sex or only plain vanilla intercourse. In one of the studies I looked at, the best they could do was look at women who reported they had had dry sex at least once in the period they asked about.
I really don't doubt that dry sex creates abrasions on the genitals of both partners and that this ups the chance of HIV transmission. Really irritates me when people spout bullshit that supports it though.
The odds of spreading HIV only through semen, saliva, or vaginal secretions are practically nil, which is why HIV is not transmitted through oral sex.
Isn't it only non-monogamous sex that can really spread any of these diseases? (Okay, you could be born with it and give it to your one sexual partner, but none of these diseases can long survive on such a paltry pathway; all in practice require promiscuity.) Pre-1960 there have been a lot of societies (damn near all of them, in theory if not in practice?) that encouraged only monogamous sexual connections.
You can also share needles. No sex necessary.
Sure, but aren't we in the context of responding to the comment "don't all sexual practices increase the spread? Doesn't this include all societies?"
Pretty hard to think of any that came close to it in practice, though, ain't it?
Absolutely, but also pretty easy to think of several that were much closer than the West today.
Do we have any real idea how many sexual partners the average 19th century Venetian shop-owner, US cowboy, or 1920s flapper had?
We don't have time series data, but we have extensive literary evidence. That's usually the case with history. Demanding time series data smells of an isolated demand for rigour. After all, you seemed comfortable ruling out any society being entirely monogamous in your previous comment.
it wasn't "all societies", not even close. Read some anthropology.
It really depends on the kind of sex you're having much more than the number of partners- anal sex is *massively* more dangerous than vaginal. In Australia, for example, gay men are 100x more likely to have HIV than female sex workers (and most of the female sex workers most likely got it through drug use rather than sex. Most sex workers in Australia don't use IV drugs, but there's a somewhat significant number who do).
> Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?
You're a battlefield medic. As you triage the incoming casualties, you realise that some of them are members of the enemy forces.
Do you help them, or do you toss them out of the tent?
I would feel no obligation to treat the enemy soldiers. If I did treat them, I'd be going above and beyond my moral duty.
You might be interested in the knowledge that your personal ethics are at odds with the Geneva convention.
I know. It doesn't keep me up at night.
Would you not want enemy medics to treat captured friendlies?
Do you value not treating enemy soldiers more than you value society maintaining such a norm?
The point in war is to kill or injure enemy soldiers, so no, I don't see any intrinsic value in doing the opposite of that (treating the wounds of the enemy,) when you are at war. (Nor would I expect the enemy to treat our wounded, although that would be a nice bonus.) War is brutal in and of itself and so if you want to avoid brutality and cruelty, my suggestion is to avoid war.
Our shallow societal norms about war crimes are a joke because 1. As demonstrated countless times, we throw the norm out of the window when it is convenient to do so. And 2. It's perpetrating a gigantic fraud to imagine criminal war vs. just war when war itself is a crime.
Is your argument that no child actually dies which could have been reasonably prevented without incurring greater moral wrong? Because that's patently false.
Is your argument that discussions about the nuances of moral theory and intuitions, and how they cash out in actual object-level behavior, are useless? Because that could work, but would need a greater explanation.
Is your argument that discussions about the nuances of moral theory and intuitions, and how they cash out in actual object-level behavior, are rhetorically ineffective? Because that's false - I myself was persuaded by precisely that kind of argument and to this day find them more persuasive than other forms. We can talk about whether other forms would be more effective, but that'd need more explanation.
Is your argument that worrying about malaria is inefficient compared to worrying about HIV? Because any even somewhat reasonable numbers say that's not true.
Is your argument that worrying about HIV is a waste of time because people with HIV are too ignorant and engage in risky behaviour? Because then the solution is education.
Is your argument that worrying about HIV is a waste of time because people with HIV are too stupid to avoid engaging in risky behaviour? Because then you'd need more evidence to support this claim.
Or do you just want to beat your own hobby horse about how African People Bad? I assume I don't need to bother saying why that position is odious.
Hi Alan, I would like to see a single piece of writing from Scott on the importance of education against practices like dry sex or having sex with infants to "cure" HIV. I would also like to see where in this post, his analogies are anything like societal practices encouraging dry sex or having sex with infants to "cure" HIV. Please make it so that a five year old can understand, thank you!
I'm South African. South Africa has for decades had major public awareness campaigns about AIDS that (among other things) explain and warn against those specific practices. Everyone who went to a South African high school heard the spiel repeatedly, and wrote exams testing their understanding of it. It was on TV, the radio, newspapers, the internet.
Those awareness campaigns are funded by the international aid that Scott has repeatedly endorsed in writing. I would have thought it fairly obvious to everyone that said funding is allocated to local education programs, in addition to condom and ARV distribution, etc.
Here is an additional hypothetical. I won't call it an analogy, because it describes a strictly worse situation than reality.
A child is drowning in a river, because their parents pushed them in. Should you save the child? Or should you let the child die because clearly those parents are terrible people who don't deserve to have children?
South Africa can't even reliably supply its cities with electricity, my dude.
According to a source I find online, South Africa funds just over 70% of its anti-AIDS budget directly, 18-24% comes from PEPFAR depending on the year, and the rest from something called the Global Fund.
Deserve or no, if the child is going back to those parents, it would seem to have good odds of being pushed in the river again.
The child will also inevitably die of old age if nothing else, so I guess we don't have any moral obligation to them whatsoever?
The standard unit used to measure such things is a QALY (quality-adjusted life year).
So your argument is that current educational programs don't exist (not true, as Synchrotron describes at least in the case of SA, and a cursory search finds similar programs in at least a dozen African countries) or that they're not effective? Because again, even a cursory glance at the literature suggests that while they're obviously far from perfect, rates of safer sex practices do improve with education, albeit very unevenly depending on country and specific program.
Actually, I'll make it easier for you. What, precisely, is your actual argument?
I think he just wants to bitch about Africans. He's not making an argument. His posts are kinda pathetic
I think you are on the right track here. The common issue with extrapolating all these drowning child experiments is that the child presumably has no agency in the matter. The intuitions change very quickly if they do.
"You save the child drowning in the pond and point out the "Danger: No Swimming" sign. The child thanks you, then immediately jumps back into the pond and swims out towards the middle, and proceeds towards drowning. Do you save them again, and how often do you let them repeat that?"
"You see some adult men swimming in a pond, and one of them starts to drown. You save him, and then the next day you see him swimming in the pond again, apparently not having learned his lesson. Do you hang around in case he starts to drown? If he does start to drown, do you save him again? How often do you repeat that?"
All that before you get to the questions of "Can you actually save the person?" "Will going out to help only drown both of you?" "How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse.
Another question is: how well do you know the situation? Is the child actually drowning? Is he swimming? Filming a movie?
Another is: how much responsibility does the Megacity bear for all the drowning?
I think what happens is Alexander took a case that is rare and unpredictable and said it happens all the time. This of course inverts our intuitions.
In this case, in real life, it would be like:
"We have no responsibility to save YOUR children, but we don't like to hear them crying so we added a net at the border so they can drown at your end".
Indeed. In fairness it is Singer’s base example, and people just use it because it seems to be difficult for most to grapple with. Singer is not someone I feel good about based on his writing that I have read, but maybe he is a decent person.
"How likely are you to make things worse by trying to save them?" That last one doesn't fit the metaphor at all, but is in fact usually what happens with foreign aid: the situation is made somewhat worse."
That's in the blurb for Scott's dad's book:
https://www.amazon.com/Edge-Everyday-Adventures-Disaster-Medicine/dp/B0F1CL61T9
"The “Law of Unintended Consequences” reared its ugly head despite the best of intentions. For example, when the US flew in free food for the starving people of Port Au Prince, it put the farmers out of business. They just couldn’t compete against free food. Many were forced to abandon their farms and move to tent cities so that they could get fed and obtain the services they needed."
I was reading the obituary of a neighbor’s father, a doctor, and I learned that he had always had a special passion for Haiti, from way back in the 80s, and “had made over 150 trips there”.
How admirable, I thought. And there was really nothing else to think of in connection with that, beyond its evidence of his compassion.
Among the many things I don’t understand is why people look so hard for (and frequently find) unintended consequences when talking about ostensibly altruistic acts, but rarely when talking about “selfish” ones. The example taken from the blurb of Scott’s father’s book is a single paragraph among others, most of which extol the virtue of voluntarism (although I haven’t read the book, so it may include a lot of similar examples of do-gooding gone wrong.) But even in the case of the farmers who lost their market, we don’t know for sure that that itself wasn’t a blessing in disguise – maybe some of them went on to find different, better paying and less arduous work. Maybe some of the people who were prevented from starving went on to do good works far in excess of saving a drowning child.
But as soon as it comes to “selfish” acts – starting a business with the aim of becoming rich, a business that fills a societal need or want – we don’t try to look for unintended consequences (we call them externalities); instead we point to the good they are doing. Even if we admit the negative externalities (the classic case is pollution, but another more modern one is social media platforms’ responsibility for increased political polarization), we still say “but look at all the good they’re doing,” or at least the potential good, if the benefits are still in the future.
One reason for saving a drowning child might be so that you don’t hate yourself for not doing it, which is only tangentially related to desiring others to see you as virtuous. Should that count as an argument against altruism? Why does the argument against the possibility of true altruism not also get applied to selfishness? Even the most selfish, sociopathic and least self-aware person will bring on themself *some* negative consequences of their actions – the loss of opportunities for even more selfishness; the loss of the possibility of truly mutually beneficial relationships; a victim who seeks revenge. Even if they die before realizing these negative consequences, their legacy and the reputation of their descendants will be tarnished.
Unintended consequences are not synonymous with externalities. The reason people focus on them with regards to altruistic motives is that the general default mode towards apparently altruistic acts is “do it” when in fact it might make things worse, whereas there is a default of assuming selfish acts are harmful to others, often in excess of what is really there.
Yes, I agree, unintended consequences are not synonymous with externalities -- externalities can be unintended, which is the rationale for environmental review of projects, but some of them are planned for, and some of them are intended to be mitigated whereas others are ignored or covered up. I don't agree that the default mode toward selfish acts is "don't do it," however. Selfishness is in many cases held up as a virtue (e.g. the selfish gene; the profit motive; the adversarial process in legal proceedings; the notion of survival of the fittest and competition for an ecological niche).
Read that again, please.
The point is the Drowning Child argument tries to hit us over the head with "do this self-evidently good thing or else what kind of monster are you?" without consideration of unintended effects. Donating to anti-malaria charities is a good thing.
So is feeding the hungry. And yet the intervention in Haiti ended up causing *more* hunger and undermining local food production. So was the self-evidently good thing an unalloyed good, or should we maybe look before we leap into the pond?
I think this is obviously the wrong analysis of PEPFAR, but even if it were right, this wouldn’t be a good argument against the Against Malaria Foundation.
Can you please describe to me why this is obviously wrong for PEPFAR? Please explain it to me like I'm five, thank you.
" 'An entire society focuses on a sexual practice that spreads an incurable disease. People are now dying because of this disease.'
Is it your moral responsibility to pay to reduce the disease incidence for people in this society given that they are spreading the disease?"
Even if we granted the premise that it's not our moral responsibility to save people who recklessly endangered themself or others, many of the people who are getting HIV were not reckless. Some of them are literal babies or women and children who were raped. Many others didn't have the education to know how HIV is spread and how to avoid being infected; if someone mistakenly believes that sex with a virgin renders them immune to HIV, can you blame them for getting HIV when they thought they couldn't?
But I would definitely contest that premise. If someone is drowning in front of you, you're obligated to save them. It doesn't matter if they got there by recklessly playing near the lake or through no fault of their own. If someone will die unless you intervene, you have to help regardless of how they got into that position.
A drowning person taken out of the water is safe. An AIDS patient needs lifelong treatment.
Are you also opposed to paying for a family member's medical care if they were injured playing sports?
> ...behave as if the coalition is still intact...
I think you may have snuck Kant in through the back door. Isn't this kind of what his ethics is? Behave according to those principles that you could reasonably wish were inflexible laws of nature (or, in this case, were agreed to by the angelic coalition).
No, Kant relies on the idea of immoral actions being illogical, because they contradict the rules that also provide the environment where the action even makes sense to do.
Lies only make sense if people trust you to tell the truth.
Theft only makes sense if you think you get to keep what you take.
Etc.
>My favorite heuristic for thinking about this is John Rawls’ “original position” - if we were all pre-incarnation angelic intelligences, knowing we would go to Earth and become humans but ignorant of which human we would become, what deals would we strike with each other to make our time on Earth as pleasant as possible? So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender.
No, it's trivially obviously false that we would agree to that (or anything else) in this scenario. If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists, so any agreement premised on the assumption that we want to minimize suffering for either ourselves or others is dead on arrival. All other conceivable agreements are also trivially DOA in this scenario, since we also don't have any information about whether we're going to want or care about any possible outcomes that might result. Consistently applied Rawlsianism is just roundabout moral nihilism.
In order for it to be possible that the intelligences behind the veil of ignorance might have any reason to agree to anything, you have to add as a kludge that they know masochism, suicidality, and other such preferences will be highly unusual among humans in the society they're born into, and that it's therefore highly unlikely they'll end up with such traits. But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless.
Masochists tend to be very picky about the kind of pain they want. I have no idea whether this is as true about what kind of pain sadists want to impose.
I think that's a misstating of what the veil makes you ignorant of. The point isn't that you don't know anything about the society into which you will be incarnated; the point is that you don't know what role in that society you will have.
Firstly, as a masochist myself, you are heavily misrepresenting masochism. Secondly, as someone who's met a weirdly large number of people who have committed rape, I'm pretty sure the net utility *for rapists* is at least slightly negative - some of them get something out of it, but some of them are deeply traumatized by it and very seriously regret it (and that's ignoring the ones who actually get reported and charged and go to prison, because I haven't met any of those).
I've wondered whether there were people who committed rape once and found they didn't like it and never did it again, or maybe once more to be certain and then never again.
It makes no difference to the victims, but it might make a difference to rape prevention strategy.
I *think* the stats I read showed that most rapists only do it once? But I don't remember super clearly, and I don't have a link to that source.
From what I can tell, it is both that the majority of rapists only have one victim, and the majority of rapes are committed by serial offenders.
Yeah, that sounds about right. I definitely meant that the majority of people who commit rape only do so once, not that the majority of rapes are committed by one-time offenders. Probably should have clarified, though, so thanks for that.
The veil of ignorance is about circumstances not values. So you know what you value you just don't know what circumstances you'll end up in.
It's about both. Rawls is very clear on this.
You can try to invent an alternate version of the VOI where you arbitrarily know what your values will be without knowing anything else, but I'm not sure how such a blatantly arbitrary thought experiment is supposed to be a compelling argument for anything.
The point isn't that you know what values will be, but that you know the distribution of values/preferences and circumstances, from which yours will be randomly chosen.
I already explained in my original post why this doesn't work. If you grant the souls this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
It does work. If you don’t know whether you will be born a sexual predator or a victim, you should assume you’ll be a victim and therefore advocate for a society that prevents sexual assault.
Why?
Remember the point of this experiment is to determine the rules.
Including of course the false positives.
The whole point of the veil is to be arbitrary. You only know *this* which is what the constructor of the thought experiment has predetermined is the important thing.
I mostly agree with this, but still some versions of the veil make the arbitrariness more obvious than others.
Ah, but what is the problem? Obviousness or arbitrariness?
That depends on what the goal is.
> we're equally likely as not to end up being sadomasochists
I think a lot of ethical thought experiments are pointless too, but the point that you could be a masochist is complete nonsense. Sadomasochists are a small minority of people, full-time ones even more so. Rawls’ angels could assume their human avatars wouldn’t like pain. The point is to apply that frame to actual human ethical questions, and humans can assume that the drowning child doesn’t enjoy drowning and that children in Africa don’t enjoy starving or dying of malaria. Otherwise it's just silly sophistry.
I already explained in my original post why this doesn't work. If you grant the "angels" this kind of probabilistic information, then there's no reason why they can't also make well-informed probabilistic guesses regarding all the other things they're supposed to remain ignorant of, which makes their "ignorance" functionally meaningless.
I don't understand. How does probabilistic information about the personality makeup of the human species mean you can't be incarnated at random? Are they supposed to be making decisions with no knowledge of the world whatsoever?
>Are they supposed to be making decisions with no knowledge of the world whatsoever?
Not exactly. Souls behind the VOI are allowed to know general rules that apply to all human interactions; there's no reason why they can't know that humans inhale oxygen and exhale carbon dioxide, or other such things. They just aren't allowed any information that might incentivize them to favour the interests of any one person or group of people over those of any other person or group of people. So they can't know that "sadomasochists are a small minority of people", because then it would be rational for them to treat the interests of non-sadomasochists as a group as more important than those of sadomasochists as a group.
Okay, sorry I didn't see your reply here:
https://www.astralcodexten.com/p/more-drowning-children/comment/102705348
https://archive.org/details/a-theory-of-justice-john-rawls-1971/page/133/mode/2up
So... yeah, it looks like your quote is accurate, Rawls intended for the VoI to preclude any information about group size and relative probability of who you'd incarnate as.
At a glance, Rawls does seem to be making a lot of stipulations or assumptions about the value system of the angels, though (maximin principle, conservative harm avoidance, some stipulation of 'monetary gain' as if he were doing economics), so... it looks like "maybe you all incarnate as hellraiser cenobites" would contradict his thought experiment. But maybe I'd have to read it again.
There's perhaps a more fundamental objection to "you can't know how common different groups are", which is that subgroups are in principle infinitely subdividable. Is the "ginger Swedish lesbians born with twelve fingers" group supposed to be exactly as common as "people over five feet tall"?
I have never heard it claimed that Rawls prohibits probabilistic knowledge. Indexical ignorance is precisely the ignorance Rawls seems to be requiring.
Then you have not actually read Rawls, because not only does he state this prohibition explicitly, but he also explicitly acknowledges that removing this prohibition would make his argument completely nonsensical.
Could you quote where he says that?
From "A Theory of Justice", pages 134-135 in the latest edition:
>Now there appear to be three chief features of situations that give plausibility to this unusual rule. First, since the rule takes no account of the likelihoods of the possible circumstances, there must be some reason for sharply discounting estimates of these probabilities. Offhand, the most natural rule of choice would seem to be to compute the expectation of monetary gain for each decision and then to adopt the course of action with the highest prospect. (This expectation is defined as follows: let us suppose that g_ij represents the numbers in the gain-and-loss table, where i is the row index and j is the column index; and let p_j, j = 1, 2, 3, be the likelihoods of the circumstances, with Σ_j p_j = 1. Then the expectation for the ith decision is equal to Σ_j p_j·g_ij.) Thus it must be, for example, that the situation is one in which a knowledge of likelihoods is impossible, or at best extremely insecure.
>[...]
>Let us review briefly the nature of the original position with these three special features in mind. To begin with, the veil of ignorance excludes all knowledge of likelihoods. The parties have no basis for determining the probable nature of their society, or their place in it. Thus they have no basis for probability calculations. [...] Not only are they unable to conjecture the likelihoods of the various possible circumstances, they cannot say much about what the possible circumstances are, much less enumerate them and foresee the outcome of each alternative available. Those deciding are much more in the dark than illustrations by numerical tables suggest.
The Rawls veil of ignorance works even if the "angelic intelligences" know every single fact about what will result from the society they choose except which human they will end up being. In that case it's basically rule total utilitarianism. It also works, somewhat, if there's only one intelligence doing the choosing, although there it ends up looking like rule average utilitarianism.
I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life, which can leak information about which human you are, causing a failure to come to agreement, but part of the experiment is that behind the veil you have a completely standardized mind.
>I think the mistake you're making is assuming that behind the veil of ignorance you're choosing with the same intelligence and values that you have in life,
...What? The fact that you're *not* doing this is my whole point!
Then I fail to understand what you mean by "But if they can know that, then there's no reason why they can't also know the commonality of other traits, and then there's no reason why they shouldn't be able to at least make a well-informed Bayesian estimate of whether they're more likely to end up the offender or victim in a rape, or whatever else you want them not to know, and so the whole experiment becomes pointless." The only thing they're supposed to not know is which particular human they end up as. Bayesian estimates of what a generic human is likely to experience are on the table! (The original Rawls book does handle this badly, but it's because Rawls has a particular (and common) blind spot about probability rather than it being an inherent defect of the thought experiment.)
What I mean is that the whole goal of the VOI is to justify some kind of egalitarian intuition. But this only sort-of appears to work in Rawls' original version because the souls lack *any* ability to guess, even probabilistically, what sort of people they're going to be (a point which Rawls states explicitly). If they're allowed to make informed guesses as to what sorts of people they'll most likely be, then there's no reason for them not to make rules where an East Asian's interests count for 36x more than a Pacific Islander's, or where a Christian's interests count for 31000x more than a Zoroastrian's, or where an autistic person's interests count for only 1% those of an allistic, or to any number of the other sorts of discriminatory rules which the whole point of proposing the VOI is to avoid.
If you're trying to maximise your expected utility, you don't want a scenario "where an autistic person's interests count for only 1% those of an allistic".
This is because in a world with 99 allistics/1 autistic, and a dispute between an autistic and an allistic in which the autistic loses 50x as much as the allistic gains, you have:
a 1% chance of being the autistic and losing 50
a 1% chance of being *the specific* allistic in the dispute and gaining 1
a 98% chance of being someone else
...which is an EV of -49/100.
You'd be in support of a measure that hurt the autistic by 50 in order to make the lives of *all* the allistics better by 1, but that's not valuing an autistic's interests at 1% of an allistic's; it's just not valuing them as twice as important as everyone else's.
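To make that explicit, here's a minimal sketch of the expected-value arithmetic, using the toy numbers above (the 99/1 population split and the payoffs are the comment's hypotheticals, nothing more):

```python
# Expected value of a policy from behind the veil, assuming you're equally
# likely to be incarnated as any one of the 100 people (99 allistic, 1 autistic).
def veil_ev(outcomes: list[float]) -> float:
    return sum(outcomes) / len(outcomes)

# Dispute case: the autistic loses 50, one specific allistic gains 1,
# and the other 98 people are unaffected.
print(veil_ev([-50, 1] + [0] * 98))   # -0.49 -> you reject this policy

# Broad-benefit case: the autistic loses 50, all 99 allistics gain 1 each.
print(veil_ev([-50] + [1] * 99))      # +0.49 -> you accept it, despite the single large loss
```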
Why would privileges only accrue to the specific allistic in the dispute in this scenario? That's never been how discrimination has worked. If you were born white in Apartheid South Africa, you wouldn't need to get into a specific, identifiable dispute with a black person to be favoured over them for the highest-paying jobs, for your vote to count more than theirs in an election, etc. you'd just get all that automatically.
"So for example, we would probably agree not to commit rape, because we wouldn’t know if we would be the offender or the victim, and we would expect rape to hurt the victim more than it helped the offender."
Unless, of course, the rapist got much more pleasure than the victim felt suffering, so the total amount of happiness in the world increased:
https://pbs.twimg.com/media/DOinMW5UQAA_omS.jpg:large
I broadly agree that we should "do unto others as we would have them do unto us" but yeah, depends on the tastes of both ourselves and the other person.
"There is at least one sado-masochist on earth, therefore when I'm born my chances of being a sado-masochist are around 50%" is certainly a take.
"If we don't have any information about which humans we are, then we're equally likely as not to end up being sadomasochists"
Huh, I think way fewer than 50% of humans are sadomasochists actually
This is among the more hilarious misunderstandings of VOI I've seen. Scott is correct.
Your assumption that you can go from knowledge of population level traits/desires to (probabilistic) knowledge of circumstance also doesn't follow.
I would draw a distinction between "observing a problem" and "touching a problem" in jai's original post. Trace is commenting on the "touching" side of things, specifically the pattern where a charity solicits money to solve a problem, spends that money making poor progress on the problem, and defends this as "everyone's mad at us for trying to help even though not trying would be worse". It is possible to fruitfully spend money in distant, weird-to-you circumstances you don't properly understand, but if you think you're helping somewhere you're familiar with, you're more likely to be right.
I think the distance objection does not refer to literal distance, but our lack of knowledge and increase in risk of harm the further we are from the people we're trying to help.
For example, consider the classic insecticide-treated mosquito nets to prevent malaria. Straightforward lifesaving intervention that GiveWell loves, right? It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide. We didn't save as many drowning children as we hoped, and may have even pushed more of them underwater, because we were epistemically too far away to appreciate the entire socioeconomic context of the problem.
The further you are in physical and social space and time from the people you're trying to help, the greater the risk that your intervention might not only fail to help, but might actually harm. This is the main reason for discount rates. It's not that people in the far future are worth less morally, but that our interventions become more uncertain and risky. We're discounting our actions, not the goals of our actions. Yes, this is learned epistemic helplessness, but it is justified epistemic helplessness.
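To illustrate what "discounting our actions, not the goals of our actions" might look like, here is a toy sketch; the exponential form and the decay rate are arbitrary assumptions chosen for illustration, not empirical estimates:

```python
import math

# Risk-adjusted expected impact: the nominal benefit stays the same, but the
# chance the intervention works as intended falls with epistemic "distance"
# from the beneficiaries.
def expected_impact(raw_impact: float, distance: float, decay: float = 0.5) -> float:
    p_success = math.exp(-decay * distance)  # hypothetical success probability
    return raw_impact * p_success

print(expected_impact(100, distance=0))  # 100.0: the child in front of you
print(expected_impact(100, distance=4))  # ~13.5: same nominal benefit, far less certain
```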
> It turns out that many of the hungry families who received such nets decided to use them to catch fish instead. This not only failed to prevent malaria, but also poisoned fish and people with insecticide.
I think this Vox article does a good job deflating this claim: https://www.vox.com/future-perfect/2024/1/25/24047975/malaria-mosquito-bednets-prevention-fishing-marc-andreessen
The best study we have on bed net toxicity—as opposed to one 2015 NYT article that made a guess based on one observation in one community—is from a 2021 paper that’s linked in the Vox article. It does a thorough job summarizing all known evidence regarding the issue, and concludes with a lot of uncertainty. However:
> I asked the study’s lead author, David Larsen, chair of the department of public health at Syracuse’s Falk College of Sport & Human Dynamics and an expert on malaria and mosquito-borne illnesses, for his reaction to Andreessen citing his work. He found the idea that one should stop using bednets because of the issues the paper raises ridiculous:
> “Andreessen is missing a lot of the nuance. In another study we discussed with traditional leaders the damage they thought ITNs [insecticide-treated nets] were doing to the fisheries. Although the traditional leaders attributed fishery decline to ITN fishing, they were adamant that the ITNs must continue. Malaria is a scourge, and controlling malaria should be the priority. In 2015 ITNs were estimated to have saved more than 10 million lives — likely 20-25 million at this point.
>“… ITNs are perhaps the most impactful medical intervention of this century. Is there another intervention that has saved so many lives? Maybe the COVID-19 vaccine. ITNs are hugely effective at reducing malaria transmission, and malaria is one of the most impactful pathogens on humanity. My thought is that local communities should decide for themselves through their processes. They should know the potential risk that ITN fishing poses, but they also experience the real risk of malaria transmission.”
There’s no good evidence that bed net toxicity kills a lot of people, and there’s extremely good evidence that they’re one of the best interventions out there for reducing child mortality. See also the article’s comments on nets getting used for fishing; the studies on net effectiveness account for this. Even if the nets do cause some level of harm, the downsides are enormously outweighed by the upsides, which are massive:
> A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets.
This doesn’t mean that we should stop studying possible downsides of bed nets or avoid finding ways to improve them, but it does mean that 1) they do prevent malaria, extremely well, and 2) they save pretty much as many children as we thought.
To add, the Against Malaria Foundation specifically knows about this failure mode and sends someone to randomly check up on households to see if they're using the nets correctly. The rate of observed compliance failure isn't close to zero, but it isn't close to a high number either. See: https://www.givewell.org/charities/amf#Monitoring_and_evaluation_2
Maybe I'm too cynical, but I haven't seen anyone change their mind when you add context that defies their expectation. I feel like they either sputter about how that's not their real objection (which, if you think about it, is pretty damn rude: saying "this is why I believe in X" and then immediately going "I don't believe in X, why would you think I believe in X") or they just stop engaging.
That's good news! Thanks.
But I think we agree that the general principle still stands that moral interventions further in time and space from ourselves generally have more risk. We can reduce the risk with careful study, but helping people far away is rarely as straightforward as "saving a child from drowning" where the benefit is clear and immediate. I find the "drowning child" thought experiment to be unhelpful as a metaphor for that reason.
We're not saving drowning children. We're writing policies to gather resources to hire technicians to build machines to pluck children from rivers at some point in the future. In expectation we aim to save children from drowning, but unlike the thought experiment there are many layers and linkages where things can go wrong, and that should be acknowledged and respected.
Sure—but then shouldn’t we respond by being very careful about international health interventions and trying as hard as we can to make sure that they’re evidence-based, as opposed to throwing up our hands and giving up on ever helping people in other countries? The former is basically the entire goal of the organizations that Scott is asking people to listen to (GiveWell, etc). Hell, GiveWell’s AMF review is something like 30 pages long with well over 100 citations.
There has to be some point where it’s acceptable to say “Alright, we’ve done a pretty good job trying to assess whether this intervention works and it still looks good, let’s do it.” Going back again to the organizations that Scott wants people to donate to, I think that bar has been met.
I believe that where the bar lies should be for each person to decide for themself. Also, it's not enough for an intervention to have a positive effect; it must have a more positive effect than whatever we would otherwise do anyway. That's a much harder bar to clear.
I personally do think many international interventions have positive effects in expectation. But I am skeptical that they have more positive effect than the "null hypothesis" of simply acting as the market incentivises. I'm honestly really not sure if sending bed nets to Uganda helps save more lives in the long run than just buying Ugandan exports when they make sense to buy and thereby encouraging Ugandan economic development, or just keeping my money in the bank and thereby lowering international interest rates and helping Uganda and all other countries.
The market is a superintelligent artificial intelligence that is meant to optimize exactly this. To be fair, part of the process of optimization is precisely people sometimes deciding that donating is best. Market efficiency is achieved by individuals taking advantage of inefficiencies. But I don't think I have any comparative advantage.
The market optimizes something very different from "human flourishing". Economic resources and productivity are conducive enough to human flourishing that we've been able to gain a lot by taking advantage of the market being smarter than individuals, but now it's taking us down the path of racing toward AI, so in the end we're very likely to lose more than we ever gained by listening to the market. And in the meantime, Moloch is very much an aspect of "who" the market is.
The market is not selecting for AI.
Moloch is an aspect of everything. It would be cherry-picking to say that it uniquely destroys the efficient market hypothesis vs. all other solutions. Efficiently functioning markets are demonstrated in the real world to lead to vastly better outcomes than any other known system of resource allocation.
This argument proves too much, though. If the maximally efficient way to save lives is sitting back and letting markets do their thing, wouldn’t that also mean that we should get rid of food stamps, welfare, and every other social program in the US? After all, these programs weren’t created by market forces—they were created by voters who wanted to help the unfortunate (or help themselves) and who probably weren’t thinking all that hard about the economic consequences of these policies. The true market-based approach would be to destroy the social safety net, lower taxes by a proportional amount, ban all private charities that give things away at below-market prices, and let the chips fall where they may.
Markets are good at doing what they do, but there’s no law of economics that says markets must maximize human welfare. They maximize economic efficiency, which is somewhat correlated with human welfare but a very imperfect proxy for it. I don’t think that I can beat the market at what it does best (which is why I’m mostly invested in the S&P), but when it comes to something the market isn’t designed for and doesn’t really care about, I trust it far less.
Moreover: Is that your true objection? If someone came out with a miracle study proving that donations to the AMF save more lives than investments in the S&P (I know this is sort of impossible to quantify, but let’s say they did), would you then agree that donating to the AMF is a good idea if you want to improve human welfare?
The market does an impressive job at optimizing for the welfare of people who have money. LVT + UBI would neatly sort out most of the associated misalignment problems.
Stories about clothing donation - unsorted heaps of your old sportsball and fun-run and corporate team building tee shirts having more value than the most beautiful locally-produced textiles - are depressing in this regard, and bring to mind the African economist who - 20 years ago or so - received a tiny bit of attention for asking Western do-gooders to basically leave Africa alone.
Do you also apply this heuristic to acts that we might call selfish? Starting a clothing business to make lots of money by jumping on microtrends in fashion carries the risk of encouraging young people to overextend their credit. Discarded, no-longer-in-fashion garments may end up clogging landfills. And yet it’s the ostensibly altruistic projects that we attack for ending up “doing more harm than good." The others we praise for their entrepreneurial spirit.
> insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets
I'm curious as to how the math here works out. If they're reducing child mortality by 17%, how does that not imply 170 lives saved per 1000 children? Everyone goes through an infant stage during their lives, right?
17 percent of the total risk of child mortality. If the total risk of child mortality without bednets was 100% then Africa wouldn't have made it long enough for this even to become a charity.
Oh, I see. I thought it was 17% in absolute terms, not as a reduction of prior risk.
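For anyone else tripped up by the same reading, the implied arithmetic is below; note that the ~33 per 1,000 baseline is inferred from the two quoted figures, not taken directly from the Cochrane review:

```python
# The 17% is a *relative* reduction in all-cause child mortality.
lives_saved_per_1000 = 5.6
relative_reduction = 0.17

# Implied baseline mortality among unprotected children:
print(lives_saved_per_1000 / relative_reduction)  # ~32.9 deaths per 1,000

# A 17% *absolute* reduction would instead require a baseline of at least
# 170 deaths per 1,000 children, far above observed under-5 mortality rates.
```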
That's exactly the answer. When you're helping the child in the lake across the street there are a lot of implied social contracts at play, between you and your neighbors, you and your city, you and your country. That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
There's no such contract with the far away child. You don't know if the child's drowning because their society keeps throwing children in lakes. You don't know that the money you send won't be used to throw even more children in lakes. You don't even know if that child will be saved just to grow up and come make war with your own society.
There’s something to this, but I’m not sure if it’s enough. Suppose you’re American and taking a vacation (or on a business trip, or working there temporarily) in rural China and you see a drowning child.
Would you decide not to save them because it’s not your country? What if Omega tells you that it’s a genuine accident and the locals are not routinely leaving children to drown?
Something like that really happened: a British diplomat saving a Chinese woman among a bunch of passive Chinese bystanders, IIRC. https://www.bbc.com/news/world-asia-china-54961075
If you read the article, many of the Chinese bystanders were not passive. Obviously the drowning child scenario assumes a shallow lake; very few people would dive into a fast-moving river without some training (the diplomat competed in triathlons).
Unironically yes. When you travel to a foreign country like this, you are an outsider and you aren’t really supposed to interact with the locals very much. I wouldn’t talk to them, I wouldn’t make friends with them, so I sure as hell am not going to get involved in their private affairs like this. It’s none of my business and as an outsider I wouldn’t be welcome to participate. I’m pretty sure that if I saved the child, I would be called a creep for laying hands on him. Without knowledge of the actual language, I wouldn’t have the tools to explain myself otherwise.
Honestly, I think your thought experiment kind of illuminates why we save local children and not distant ones. The local children are theoretically members of our community, and though community bonds are weaker than ever, they aren’t non-existent, they still matter. Ergo, we save the child to reinforce this community norm, and we hope someone else saves our own children from drowning some day.
That doesn’t transfer if we do it in a foreign country.
"I'm pretty sure that if I saved the child, I would be called a creep for laying hands on him." I'm not a mind reader, but this sure reads like a bad faith argument to me.
Have you read the fable of the snow child? It’s a story about a fox who saves a girl who was lost in the woods in the winter. Upon bringing the girl home the parents shoot the fox because he’s a fox and they were afraid that he was going to steal their hens. The girl of course admonished the parents for this, but it didn’t change the fact that the fox was dead.
Do not underestimate the power of xenophobia.
Communicating with foreigners can be a very high stakes situation. When people are naturally suspicious of you, it’s critical that you stick to pre-approved socially accepted scripts and to not deviate from them, otherwise the outcomes can be very unpredictable. Drowning children is a rare enough event that we don’t have universally agreed upon procedures for how to handle them.
This is middling sophistry.
There have been honor killings, not commonly I think but still non-zero, because a (male) foreigner interacted with a local (female) child. The interaction that the tourist probably thought was merely being polite was enough for her to be marked unclean.
If you happen to stumble into a living thought experiment where a child is drowning in a shallow pond, it's worth the risk of a cultural misunderstanding and the child's later death instead of the certainty of their current drowning. But such cultures do exist.
As someone who would like to be saved from any hypothetical future drownings, even if they were to happen in foreign countries, or in my own country in instances where the only potential saviours are foreigners, I very much dispute your last sentence as logically following from the previous.
Indeed, I would like the community of people who would feel obligated to save my children from drowning to be as large as possible, all else equal.
To deal with the objections below, switch it up so it's in your home country but a visiting tourist: a cruise ship docks at your local beach; you know at this time of year the majority of swimmers are tourists not locals. You see a kid drowning... Do you ignore them because they're from a different country?
>That child will grow up to pay taxes and be the teacher of your grandchildren or the doctor that will take care of you as you age.
Would you accept that our moral circle should expand as the economy becomes more globalised then? It's standard in the modern economy for kids on the other side of the world to grow up to make your clothes, grow your coffee etc.
Yes, but the economic bound is not enough. You need a cultural bound. None of these variables are binary in practice, so the amount of economic ties, cultural ties and amount of help we send tend to be proportional to each other.
"You don't know if the child's drowning because their society keeps throwing children in lakes." That doesn't seem like a good reason to not save the child.
"You don't know that the money you send won't be used to throw even more children in lakes." This would be an argument against dropping money randomly, but we have fairly robust ways of evaluating charitable giving.
"You don't even know if that child will be saved just to grow up and come make war with your own society." Saving them with a life preserver that says 'Your drowning prevented by [my society]" seems like an excellent way to prevent that, with the added benefit that they'll tell all their friends to not make war on your society, too.
This is an over-simplification. Children are routinely indoctrinated by their societies throughout early adulthood to become warriors or to create more warriors. There is absolutely a real risk that a random person saved will be your enemy in the future. Saving them from a vast distance can indeed be seen by that society as a great way of helping their future takeover while impoverishing their enemy. Moloch is everywhere.
Allowing for the sake of argument that that's a significant problem, seems to me the obvious patch would be forking some percentage of nets off the main production line before the insecticide-treatment step, adding a cheap but visible feature to them (maybe weights? A cord for retrieval?) which would make the insecticide-free nets more useful for fishing and/or less useful for beds, then distributing those through the same channel until fish-net demand is saturated.
I think uncertainty in outcome and timing explains a lot, at least for my own behavior.
If I am certain of a benefit to others while uncertain about how grumpy I will be after the good deed, the finger is on the balance to help.
The inverse is also true. Giving for the certainty of a relief is very different from giving with the non-zero chance that funds get diverted to wars, corruption or crime organisations.
Thank you. This is very much my intuition as well, and I'm glad somebody else laid it out clearly. The biggest flaw in all these thought experiments, IMO, is that you're assumed to have 100% accurate knowledge of the situation. Accurately knowing the details of the river and the megacity and the drowning children is FAR more important to moral culpability than whether you happen to have a cabin nearby, or whether you happen to live there.
Sounds like we need some kind of social arrangement where we are gently compelled to work together to solve social problems cooperatively with roughly equal burdens and benefits, determined by need and ability to contribute. What would we call this...rule of Social cooperation? Perhaps social...ism?
Nah, sounds scary. Let's just keep letting the rules be defined by the Sociopathic Jerks Convention, with voting shares determined by capital contributions.
Right, the trick is that the altruistic people need to make rules that exclude the sociopathic jerks from accumulating power and which make cooperation the better choice from their own selfish perspective.
Perhaps a good start, just a bare minimum, would be to strictly limit the amount of capital that any one person can control (could be through wealth taxes, could be through expropriation, could be through enforcement of antitrust... whatever; trying to keep this at a higher level). The extreme inequality here leads to further multiplication of the power of the wealthy, and because the Sociopathic Jerks Convention (e.g. the behavior of corporations, which are amoral) is running the show, their rules allow them to further multiply their power.
The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin.
"The altruistic people need to be assertive and willing to fight. There are more of us than there are of them by a huge margin."
Where do you get this from?
Better yet, why not leverage the ambitions of entrepreneurs to invest their time, money and creativity to solve problems for consumers? Big investments require big bets and huge risks which need to be offset with immense potential rewards.
I think a lower bound is more important - and more feasible to enforce - than an upper bound.
When you go to the most powerful capitalist in the world and tell him "your net worth is above this arbitrary line, so a new law almost everyone else agreed on says you have to give some of it up," is he going to actually cooperate with that policy in good faith? Or is he going to hire the best lawyers and accountants in the world to find some loophole, possibly involving his idiot nephew becoming (on paper) the new second or third wealthiest capitalist in the world?
One trouble with Rawlsian veils (better yet, Harsanyian veils) is that networks of billions of interacting people are complex adaptive systems with emergent characteristics and outcomes. If we want to establish morality by which actions would lead to the best outcomes, then we need to actually play through the system and see how it develops.
May I suggest that a world where everyone gave everything beyond basic sustenance to anyone worse off than them would scale into a world where nobody invested or saved for the future, and everyone felt like a slave to humanity, because they would be. It would be a world of complete and total destitution, devoid of any ability to help people across the world.
I think it is more realistic to take real humans with their real natures and find rules and ethics and institutions which build upon this human nature in a productive way. I would offer that this is more of a world where altruism, utilitarianism and egoism overlap. Science does this by rewarding scientists with reputation for creating knowledge beneficial to humanity. Free markets do this by rewarding producers for solving problems for consumers. Democracy does this (in theory at least) by aligning the welfare of the politician with the citizenry voting for them. Charities do this by recognizing the benefactors with praise and bronze inscriptions.
There are good reasons why pretty much nobody gives everything to charity. Effective Altruists need to take it up a level.
>Nah, sounds scary.
Given the history of actually existing socialism, very scary indeed.
Socialism has a history far broader than whatever particular example you are thinking of.
Economic system and body count don't seem correlated meaningfully... mercantilism and capitalism get colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq; communists get the Great Purge and Great Leap Forward.
Authoritarianism has the body count, regardless of whether it's socialist, capitalist, theocratic, mercantilist, or whatever you prefer.
>Socialism has a history far broader than whatever particular example you are thinking of.
And /none/ of it has been notably successful in any significant respect, particularly compared to its competitor(s); just the opposite, in fact. So... well, I remain skeptical.
Of course, it depends on what you call "socialism". Is "capitalism but with government services" deserving of the name? If so, it DOES work! (...but I, of course, would credit that to the other component at work.)
>mercantilism and capitalism get colonialism (and neocolonialism) and slavery, also fun wars like Vietnam and Iraq; communists get the Great Purge and Great Leap Forward.
I think there are many things wrong with this attempt at "death-toll parity", but no one ever changes their minds on this topic & it's always a huge exhausting slog... so I'm just registering my objection; those who agree can nod wisely, those who don't can frown severely, and we all end up exactly where we would've anyway except without spending hours flinging studies and arguments and so forth at each other!
Well, I don't want to turn this thread into a debate on socialism, and you're very right that how we define our terms is contested and critical.
I would suggest that there are many examples, such as with Allende, where it seems like it was going to work really well and the CIA simply could not have that.
I'd also note that life for the average Cuban is far, far better under Communism than it was under Batista, for example, and possibly the results in the other countries you are thinking about are better than you think when looked at from the perspective of the average poor person rather than the very small middle and upper classes, who typically control the narrative.
Regardless, I was just saying that authoritarianism is orthogonal to economic model. And it is authoritarianism, regardless of the economic model, which is "scary." The Nazis were not less horrific simply because they had a right-wing social and economic program.
I'm not sure how your comment addressed that.
Would Venezuela be an example of a country where non-authoritarian socialism has gone badly? (May be you can wriggle out of this by saying it's a bit authoritarian?)
I suppose Norway would be an example of a country where socialism has gone pretty well (though with a fairly large dose of capitalism, and the advantage of massive oil reserves - not that those saved Venezuela).
Norway is not Socialism, per ChatGPT at least. It is a Social Democracy, not a Socialist government, though one may quibble about the distinction. Norway has:
Private Property and Business:
Individuals can own businesses and land
Market economy drives most goods and services.
Stock Market and Investment:
Norway has a well-functioning stock exchange
Encourages entrepreneurship and foreign investment
Profit Incentives:
While taxes are high, businesses still operate for profit
Wealth creation is encouraged, though it's heavily taxed and redistributed
I would personally argue that it would function even better with higher profit motive and less government intervention, but it is a misnomer to claim it is Socialism.
Venezuela went very authoritarian, but also, I wouldn't claim that every socialist experiment, even the less authoritarian ones, is good. Norway is a possible example of a good one, as you mention. Cuba is an obvious example. One could argue China is doing really well; you can say it's capitalist, but they also haul off billionaires to labor camps if they get too out of line, so I would push back on that.
But Venezuela failed. This stuff is complicated. Anyone who says HURR DURR SOCIALISIM BAD is ignorant, many of them proudly so.
Chile was not working "very well" under Allende. The CIA was not capable of causing a general strike, nor of making the legislature request his removal.
As for Batista vs Castro, https://www.bradford-delong.com/2008/02/lets-get-even-m.html
Let's look at how people vote with their feet. Do we see people moving on-net to socialist countries from capitalist ones?
Cuba has a *much* lower emigration rate than capitalist countries in the region, in fact.
Cuba restricts emigration, but do people immigrate to there? My understanding is that even Haitians don't want to.
Haitians tried migrating to Cuba at one point (I forget the exact year), but they weren't allowed in.
At my workplace (before I quit, angrily—and, as it turns out, unwisely), we had several Cubans who had come over here to 'Merica on rafts and the like. They were bigger fans of America than most Americans: the yard manager bought a Corvette and had it emblazoned with a giant American flag, always wore "America: Land of the FREE!" or "...of Opportunity!" etc. T-shirts, and so on (I once witnessed his eyes get wet at the anthem before a game!). And... uh... well, they would talk about Cuban food, women, weather, vistas, but to a man they said they'd die trying to sneak back into the U.S. rather than accept being forced to go back and remain.
Anecdote, of course. But I get the impression that this is the modal Cuban, over here; granted, they're self-selected—but one doesn't see very many Proud Cuban Forever, "I'd die before leaving my adopted Cuba!", etc., expats, going the other direction.
Why, and how?
I'd much prefer actually existing socialism to actually existing capitalism.
Speaking personally, I can feel that the appeal of both socialism and effective altruism are linked to the same set of intuitions about solving social problems.
To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with those intuitions than to actually figuring out the best way to operationalize them.
Socialists tend to presume they know the answer even in cases where their preferred answer does not seem like it actually achieves the goals they're supposed to be working towards.
Or, maybe a different way of saying it: I think ~120 years ago, socialism would have felt a lot like EA today: the ideology of smart, scientific, conscientious but not sentimentalist, universalists. But the actual history of socialism means that a lot of the intellectual energy of socialism has gone into retroactively justifying the USSR and Mao and whatever, so that original core has become very diluted.
TBC, I don't mean this as a complete dismissal of socialism. I think there are lots of people who consider themselves socialists who basically have the right moral intuitions and attitudes, and I absolutely feel the pull of socialist ideas... But I often find myself frustrated by how quickly so many socialists refuse to engage with the fact that capitalism has been absolutely necessary to generate the resources required for a universalist moral program, or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned.
I'd say "hollowed out" rather than "diluted." Anybody who got sufficiently sick of trying to justify the USSR, and still cared about the original virtuous goal, started calling their personal agenda something else and focusing it in different directions.
I'm not sure that's 100% true, especially if you consider young people whose identities on these things aren't totally formed yet.
"To me, the big difference is: (many) socialists seem more attached to a specific idea of how to act in accordance with that intuition than with actually figuring out the best way to operationalize them."
Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms.
I'm opposed to capitalism partially for evidence-based reasons and partly because of basic values (I think it's morally wrong to derive most of your income from non-labor sources) and you couldn't convince me out of having my values even if you changed my opinion about some facts.
"or will completely abandon any pretence of conscientiousness as soon as awkward facts about communist totalitarianism are mentioned."
what facts, or "facts", are you thinking of, and why would you expect they would change my mind?
I'm aware socialist countries tend to be authoritarian (not necessarily "totalitarian", whatever you think that means), but I'm not really bothered by that in principle, since I don't view political freedom as self evidently good.
"Yes, clearly. That's because socialism (and capitalism) includes a large component of moral axioms and value claims as well as claims about facts, and you are not going to argue someone out of their moral axioms."
That's totally fair, but in the context of the original comment, which implied that "socialism" was just a method to implement the strategy of gently compelling people to work together to solve social problems, the point is that socialism has other moral axioms that may be unrelated to the project of solving those problems--or at least, that the problems socialism sees itself as solving might be different from the problems suggested by Scott's post.
"what facts, or "facts", are you thinking of, and why would you expect they would change my mind?"
The usual ones about gulags and the Cultural Revolution and so forth; I'm sure you already know them. And I didn't say that they should make you change your mind, I said that socialists abandon their conscientiousness in the face of those facts: they tend to defend actions and outcomes that are canonically the sort of thing our hypothetical strategy of "gently compelling people to cooperatively solve problems" is meant to be *solving*.
Again, this is fine, you're allowed to think that the occasional gulag is justified to make sure that no one derives income from non-labour sources. I'm not saying you shouldn't be a socialist, I'm saying that being a socialist is *different* from the project that Scott loosely alludes to and that the top-level commenter suggests is basically achieved by socialism.
I'm explaining to the top-level commenter why some people who are sympathetic to the goal that Scott outlines, and who have some sympathy for the intuition that this has something in common with socialism, might still not consider themselves to be socialist, or at least, might think that the two projects aren't exactly identical.
Okay, you've changed my mind. I'm now convinced that promoting a social norm of saving strangers is actively evil because of second-order effects. Thanks!
Reading "More Drowning Children", the thought that came up for me was, "Damn, he has greatest ability to write reticulated hypotheticals which primarily serve to justify his priors of any one I've ever read!"
My second thought: For me, the issue is more, "At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?" Not what you're getting at, I know, but what you're doing seems to me to point in that direction.
I might as well take the role of the angel on your shoulder, whispering into your ear to tempt you, saying, why not give all you have to help those in extreme need just once, to see how it feels? What if your material comfort was always at the whims of strange coincidence, and goodness was the true measure of man? What if you found out you liked being a penniless saint more than a Respectable Person? You might enjoy it more than you think. Just think about it. :)
Penniless saints have done far less good in the world as a whole than wealthy countries and wealthy billionaires who then had enough time and capacity to look beyond their near term needs.
Sounds like something you'd hear from media sponsored by billionaires, or in history books written by billionaires, or in a society which overemphasizes the achievements of billionaires while ignoring the harm they are doing, etc.
Yeah, well, that's just, like, your opinion, man. Maybe bring some examples of poorer or different societies that do less harm and more good?
I actually completely agree with this post. You shouldn't take your own feeling of "feeling good" as the entire idea behind morality. Yes, billionaires giving to charity will do more good than a penniless saint (being influential can make up for this gap -- Gandhi may have done more good than the amount of money in his pocket would suggest -- but the random penniless saint won't outweigh $100,000,000 to charity).
That being said, billionaires can save 100,000 lives, but you personally could save 1 life. If you don't save that one life you could, it seems like you're saying you don't value saving lives at all.
You could say "one of my highest utility action is to become a billionaire first, AND THEN donate all my money to the causes which are the most effective" and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead, you say "well it's difficult to do the maximally efficient thing so I'm not even going to save ONE LIFE", then you're giving an excuse for not saving a life even if you wanted to.
You could say "one of my highest utility actions is to CONVINCE all the billionaires to donate their money to charity". and yes! I might even agree with you! If you dedicate yourself to that then you're doing good! But if instead you say "well, most people who say they're moral aren't doing that, so clearly the idea of morality is bunk and I'm not a bad person for not following the natural conclusion of my morality" then that's a problem.
Someone who weaves complicated webs to not do anything different than what they wanted to IS, IN FACT a worse person than if that same person donated enough to charity to save one life.
No matter what, all morality says you need to either *try*, OR say you don't value saving any lives (an internally consistent moral position that wouldn't care if your mom was tortured with a razor blade), OR do what Scott says in the post and assume that looking cool/feeling good about yourself IS morality, and therefore there's no moral difference between saving 0 lives or 1 life and 10,000 lives if they provide the same societal benefit and warm feeling of fuzziness about being a good person in your gut.
You do, in fact, have to choose one of those 3.
I'm not sure what complexity there is. The invisible hand makes free societies wealthy, and wealthy societies give more to charity. No external effort, no waiting, no convincing, marketing, sales, or anything else needed. Lowest effort, highest utility.
There is more in heaven and earth than billionaires. There are also a lot more millionaires, and even more hundred-thousand-aires than there are billionaires. Grow the whole pie. This isn't zero-sum.
"At the end of this ever-escalating set of drowning children, do I ever get to do anything other than the minimal activities that allow me to survive to rescue more drowning children?"
In the thought experiment, some of the saved children should take over the job, and the others maybe give thanks for having their lives saved.
In real life, no one is ever going to reward you, because the kind of people with the capacity and desire to reward you are probably too busy saving kids themselves. Until the day comes when there's finally no more kids to save anywhere, then MAYBE society will throw you a bone, but we'll probably all be dead before that happens.
This is the point of the first issue of Kurt Busiek's Astro City comic. Samaritan, a Superman-like hero, can never rest, literally (I think), because with his super-hearing he can always, 24/7, hear a drowning child in Africa, and he can get there in 2 seconds, so he feels compelled to do so.
Seems like the superior solution would be finding an EA / moral entrepreneur who will happily pay market value for the cabin, and then set up a net or chute or some sort of ongoing technological solution that diverts drowning children into an area with towels, snacks, and a phone where they can call their parents for pickup. Parents are charged an entrance fee to enter and retrieve their saved children.
I unironically think the moral equivalent of this for Scott's favorite African use cases is something like "sweatshops."
"Parents are charged an entrance fee to enter and retrieve their saved children."
What if the parents don't turn up?
"Look, do you really think that by now *nobody* has realised all the missing children are due to them falling into lakes and streams? One child per hour every hour every day every month all year? 8,760 child fatalities due to drowning per year for this one city alone? Come on, haven't *you* figured out by now that this is happening on purpose?
Don't want to be bothered with your kids anymore? Don't worry, eventually they'll wander off and fall into one of our many unsecured lakes, streams, ponds, and waterways, and that's that problem off your hands! Your kid is too stupid to figure out that they shouldn't go near this body of water? Then they're too stupid to live, but Nature - in the guise of drowning - will take care of that.
You keep saving all these kids, our population is going to explode! And the genetically unfit will survive and reproduce! It will ruin our society!"
This would be a strong incentive for parents to teach their children how to swim, and not to jump in fast moving rivers.
And yet some would still fall, just as occurs in our world.
Bodies of water have inherent danger. Yet, it is worth the tradeoff of not posting lifeguards at every single river, every pond and stream, just to stop the potential of some children drowning. Life is life, and tragic accidents happen. Safetyism is worse.
> What if the parents don't turn up?
Vocational training as a rescue-chute maintenance technician.
Economic development leads to lower fertility. I definitely think population and fertility rates in Africa are huge social problems, but the best way to address them is to make Africans more prosperous so they adopt the norms about sex and family size that other countries have adopted as they get richer.
They refuse to pay and have you arrested for kidnapping. It turns out the LAW generally adopts the Copenhagen view.
In the dam lobbying example, surely lobbying for a different dam is touching the children.
Yeah, probably. But what about voting for the Different Dam Party? Or voting for a party whose headline 5 policies you greatly support but also have a nasty line in their manifesto about building a different dam?
I think at some point Scott has to accept that people reading this blog are exactly the types of people to optimize for their own coolness and not at all for truth-seeking or morality, when you see them go into contortions to avoid intuition pumps. The problem is upstream of logical argument, in whatever thought process prevents them from thinking they could be at all immoral.
I would think his readership is, on average, more into truth-seeking or morality vs. coolness maximization than the average person.
If so, that just shows how low a bar that is. :(
Depends. Some people are here for the careful explorations of morality. Some people are here because they heard it was where all the smart kids hang out, and they are desperate to prove they belong, which often means showing off your ability to do cognitive somersaults over things like empathy or basic moral intuition. It's essentially transgressive intellectualism as psychic fashion.
Yup, this. Thank you for explaining it.
Although I am being bad for not mentioning that I'm really talking about the commenters. If you were persuaded, the most likely time you mention it (if you mention it *at all*, which you probably don't, because mentioning donations is gauche) is a random start- or end-of-year open thread, probably with no direct link back to the persuasive post. If you weren't persuaded, you likely fall into the above failure mode. (Edit: and therefore immediately respond.)
> Some people are here for
I also think the proportions have shifted over time, and are still shifting.
"on average"
Have you actually met any of the 'people reading this blog'? Try coming to an SSC Meetup, or Less online or Manifest, rather than just making shit up.
People who actually come to meetups may not be representative of readers.
Yup, people who go to meetups are several tiers above the average commentator, who cannot seem to grasp the purpose of hypotheticals and posts things like "well this just makes me WANT to drown people (unstated subtext: because I don't like your arguments)". Even if those types of people went to meetups, they'd know better than to say things like that!
And "seeming cool" doesn't mean "fashionable" or "obviously pandering to populist sentiments" (both of which I agree would be a bad way to describe even the current commentators) in this context, but something more like "self conception preserving" or "alliance affirming". Someone replying a post about morality about how obviously they love their family, and obviously giving is local because then it'd be reciprocal are not thinking truth seeking thoughts but "yay friends" or "yay status quo".
If you think you have a simpler explanation of why over 50% of the replies are point-missing or explicitly talk about how they don't want to engage with the hypotheticals, with reference only to the replier's surface-level feelings rather than marshaling object-level arguments on why it'd be inappropriate to use hypotheticals, then I'm all ears. But saying "people just make mistakes" is not an answer when the mistakes are all correlated in this fashion.
>when you see them go into contortions to avoid intuition pumps
Funny enough it was Scott himself in his What We Owe The Future review that broached the idea you probably should just hit da bricks and stop playing the philosophy game! He wanted to avoid the intuition pumps because they're bad. When you *know* someone is rigging the game in ways that aren't beneficial to you, you are not obligated to go along with the rigging.
Ever-more-contrived thought experiments are not about truth-seeking, either.
>whatever thought process preventing them from thinking they could be at all immoral.
The phrase you're looking for is "human nature."
I’m confused by the use of ethical thought experiments designed to hone our moral intuitions, but which rely on increasingly fantastical scenarios and ethical epicycles upon epicycles. Mid-way through I was wondering if you were going to say “gotcha! this was all a way of showing that the drowning-child mode of talking about ethics is getting a bit out of hand.” Aren’t there more realistic examples we could be using? Or is the unreality part of the point?
Like with scientific experiments, you try to get down to testing just one variable in thought experiments. The realism isn't the point, just like when a scientist who is studying the effects of some chemical on mice ensures that they each get perfectly identical and unchanging diets over the course of the experiment. The scientist isn't worried about whether it is realistic that their diets would be so static because that's not what's being tested right now.
You can build back to realistic scenarios after you've gotten answers to some of the core questions. But reality is usually messy and involves lots of variables at once, so unless you have done the work to answer some of those more basic questions, you're going to get stuck in the muck, unsure what factors are really at play. Same as if the scientist just splashed unmeasured amounts of the chemical onto random field mice in a local park.
The problem is, the drowning child thought experiment, in its *original* form, is already the most free of confounders, as it is much simpler than the scenarios Scott proposed here. So the equivalent of your mouse-science example would be: I give my mice a certain drug, and the mice are held under the most supremely controlled circumstances, such as their diet. But the drug did not have any effect. So now instead I let my mice roam free in the garden and feed them leftovers from the employee canteen, and then I give them the drug again and see if it works now.
The original Drowning Child thought experiment is "you'd save a child if you saw it drowning, wouldn't you?" and the majority of people will go "of course I would".
*Then* it sandbags you with "okay, so now you have agreed to save *all* the drowning children forever" and people not unreasonably go "hold on, that's not what I agreed to!"
And then the proposers go "oh how greedy and selfish and immoral those people are, not like wonderful me who optimises for truth seeking and morality".
No, it asks you _why_ you feel so sure you have to save the one drowning child, but you never even think about the others. The point is to make you realize that _is_ what you (implicitly) agree with; that it's _your_ judgement that thinks you're greedy and selfish for not saving children.
Some people actually respond in the desired way to the thought experiment; they can't think of any compelling answer to the question "what's the difference?"
Other people propose answers like: "the difference is spatial proximity", and so Scott counter-proposes other thought experiments to try and isolate that variable and discovers that it actually doesn't seem very explanatory.
The point of these iterated versions is to isolate different variables that have been proposed to see if they actually work to answer the question; and if we can discover an answer, figure out what it suggests about our actual moral obligations vis a vis funding AIDS reduction in Africa or whatever.
But Scott *isn't* isolating any variables, nor is he trying to. He's just constantly changing all the variables on a whim, including "variables" that aren't actually variable to begin with (e.g. laws of physics). Continuing the analogy from before, what Scott is doing here is like if one of the scientists were to notice that the mice seem to be becoming unhealthy, and another scientist proposes that it might be because their diets don't contain enough protein. Then the first scientist says, "okay, let's test for that. We'll send the mice to an alternate universe where the speed of light is 2 m/s slower than it is in our world, genetically modify them to have pink fur with purple polka dots, give them all tiny ear piercings, and start adding protein to their diets -- if your theory is correct, this should resolve their health issues."
I guess I disagree? People claimed that the clear difference between drowning kids and malarial kids in Kenya is distance, so Scott lists some (not even all that unrealistic) examples where you're physically distant to see if the intuition holds?
After rejecting physical distance he tries to think of some other factors: Copenhagen-style "entanglement", the repeated nature of the malaria situation as opposed to the (usually) one-off nature of the drowning child. He decides that these are indeed the operative intuitions, and then challenges them, finding all versions unsatisfying of using these as a complete basis for moral action, before laying out his preferred resolution.
I agree the examples come fast and thick, and sometimes it feels like we're nested a few levels deep, but I think he's exactly looking at the variables "physical distance", "declining marginal value of moral action", "entanglement with the situation" , and trying to isolate them individually and then in various combinations/interpretations.
This is a completely inaccurate presentation of the original argument which makes me think you’ve never even seen/read it.
In point of fact, I have read it, but thank you for your interest.
Actually, what's going on here is that we observed some effect X in the original experiment (the drowning child). Then someone claimed "yes, but that effect only occurs when their living space is a small cage. In more naturally-sized living spaces, the effect X would vanish. The chemical isn't sufficient on its own." And so we go on to run the test again, but now the scientist builds a (still contained and controlled in other ways) testing environment where the mice live in burrows made of dirt instead of cages.
It's trying to apply rules of logic to something that is squishy and not made of logic.
Really, there's no reason to believe that our moral intuitions are coherent. They probably aren't. Thought experiments are fun and useful for trying to explore the edges and reasons of our intuitions, but they have their limits. This article may have gracefully (or not gracefully, depending on your perspective) bumped up against them.
You could have a framework where you expect yourself, and hopefully others, to donate a portion of their time and/or money to helping others (call it the old 10 percent tithe, although admittedly everyone has their own number). If you already expect yourself to do this, then adding on saving a drowning kid once hardly costs you more in the big picture, and is the right thing to do since you're uniquely positioned to do it. If it's really important to you, you can just take it out of your mental tithe ledger and skip one life-unit of donation that month (although you probably won't, because it's in the noise anyway). But if you're by the drowning river and this is happening so often it's significantly cutting into your tithe, it's perfectly reasonable to start actually taking your lifeguard duties out of your mental tithe, and start wondering if this is the most effective way for your tithe to save lives. And if not, then we all reasonably conclude you're fine (even better off) not doing it.
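(If it helps, the ledger logic above can be made concrete. A toy sketch, with every number invented for illustration, not a real cost-effectiveness estimate:)

```python
# Toy version of the "mental tithe ledger" rule described above.
# All numbers are invented; swap in your own.
COST_PER_LIFE_VIA_DONATION = 5000  # hypothetical dollars to save a life by donating

def keep_lifeguarding(hours_spent, wage_per_hour, lives_saved):
    """Is riverside rescue beating the best donation you could make instead?"""
    forgone_earnings = hours_spent * wage_per_hour
    lives_via_donation = forgone_earnings / COST_PER_LIFE_VIA_DONATION
    return lives_saved >= lives_via_donation

# A one-off rescue on your way home: trivially worth it.
print(keep_lifeguarding(hours_spent=2, wage_per_hour=50, lives_saved=1))      # True
# Full-time lifeguarding at a high wage: the ledger says donate instead.
print(keep_lifeguarding(hours_spent=2000, wage_per_hour=200, lives_saved=50)) # False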
Also this reminds me of my favorite short story:
https://www.newyorker.com/magazine/1996/01/22/the-falls
"... doesn't seem to be a simple one-to-one correspondence where you’re the only person who can help: [sociopathic jerk thought experiment]"
I'm not sure if this tells us too much about the effect of other people in real-world moral dilemmata; one might bite the bullet and say "sure, /in that case/, where you know you're the only one who can help, you should; but in any real situation, there will be 1000 other people of whom you know little—any one of which /could/ help."
That is, if we're considering whether there is some sort of dilution of moral responsibility, I don't think the S.J.C. example really captures the salient considerations/intuitions.
-------------
I disagree with the other commenters about the utility of these thought-experiments in general, though.
They're /supposed/ to be extreme, so as to isolate the effect of x or y factor upon moral judgments—the only other options are to (a) waste all your time arguing small details & becoming confused (or, perhaps, just becoming frustrated by arguing with someone who's become confused) by the interplay of the thousand messy complications in real-world scenarios, or (b) throw up your hands & say "there's no way to systematize it, man, it's just... like... ineffable!"
If there is some issue with one of the thought experiments, such that it does not apply / isn't quite analogous / *is* isomorphic in structure but *isn't* analyzed correctly / etc., it ought to be possible to point it out. (Compare: "Yo, Einstein, man, these Gedankenexperimente are too extreme to ever be useful in reality! Speed of LIGHT? Let's think about more PRACTICAL stuff!")
I can't help but feel that some reactions of the "these are too whacky, bro" sort must come from a sense of frustration at the objector's inability to articulate why the (argument from the) scenario isn't convincing.
I'm sympathetic, though, because I think that sometimes one /can correctly/ dismiss such a scenario—intuiting that there's something wrong—without necessarily being able to put it to one's interlocutor in a convincing way.
Still—no reason to throw the bathwater out with the baby. It's still warm enough to soak in for a while!
edit: removing this post, it is being misinterpreted so I am going to give up
In the Christian tradition, Jesus explains precisely what decides someone's eternal fate in Matthew 25 -- suffice it to say, it really is just meeting the material and social needs of the worst off people. No requirement you're religious in any way, and Jesus does mention that it'll lead to a lot of surprise both from disappointed "devout" people and confused but delighted skeptics.
Obviously there are other traditions and legends, but presuming Heaven is a Judeo-Christian term of art for a specific kind of eternal fate, it seemed relevant.
John 5:24
I'm not sure what it would mean to believe the gospel, but like absolutely refuse to care for a neighbor as though they were yourself. It is a gibberish idea.
Yeah that’s what James says in James 2:18. Contrast Ephesians 2:6-10. Seems like a contradiction! But it’s not. Paul explains in some detail in Romans 3.
Martin Luther would like to inform you about the epistle of straw 😁
https://www.thegospelcoalition.org/themelios/article/the-epistle-of-straw-reflections-on-luther-and-the-epistle-of-james/
The "actually existing Christian tradition" would say that the morally relevant aspect of action is the act of the will, not the change in external circumstances brought about. This is why the charity of the poor woman who gave her last coin was of greater merit than those of the rich.
Obviously one cannot harden one's heart to the poor and still be righteous. What I am saying is that external impact is in some cases disconnected from moral goodness; thus, the rich man who gives 1000 dollars has not done a moral act 100 times better than the poor man who gives 10 dollars.
> What if it is primarily the cultivation of a certain purity or nobility of soul?
Interesting theory. How much does that cost to do, at a median level of success? In terms of man-hours, or purchasing power parity, or wattage for sacred hot springs, or whatever standard fits best.
Would soul-cultivation be directly incompatible with funding antimalarial bed nets, or is there room for Pareto improvements in someone hedging between the possibilities? "Tithe 10%, and also do these weekly exercises, from which you'll personally benefit" isn't an obviously harsher standard to live up to than tithing by itself.
> After all, if the soul is immortal, its quality is infinitely more valuable than any material and temporal vicissitudes.
That's not how discount rates work. It's entirely possible for something of infinite duration to have finite net present value. https://en.wikipedia.org/wiki/Gabriel%27s_horn
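To spell that out: with a positive per-period discount rate r and a constant per-period value c (both hypothetical placeholders), the present value of an infinite stream is

$$\sum_{t=0}^{\infty} \frac{c}{(1+r)^t} \;=\; c \cdot \frac{1+r}{r} \;<\; \infty \quad \text{for } r > 0,$$

so even an everlasting soul gets a finite valuation once you discount.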
Giving up on explaining the nobility of soul formulation. However, I will say that immortality of the soul is not shaped like the linked image; the amount of suffering in Hell or Purgatory or the amount of joy in Heaven is far greater than anything terrestrial.
Here is a hole that I think is relevant.
I would argue that saving drowning children is actually a very-high-utility action, because you can call the child's parents to pick the child up and they'll be super grateful, and even if they don't pay money, you'll accrue social and reputational benefits. Tacking on "...oh, but your water-damaged suit!" is misleading, because even with a water-damaged suit, saving the child is still obviously net-positive-utility.
(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)
The Drowning Child story takes a situation in which saving the drowning child is obviously high-utility, and conflates it with a situation in which saving the person-with-a-disease is obviously negative-utility.
I don't have a moral about whether you should give lots of money to charity. I just think the drowning child story is misleading us, because it says "...you chose to save the drowning child, so for consistency you should make the same moral decision in other equivalent situations" but the situations are not actually equivalent.
I would argue that it's mostly false that society gives you kudos for saving drowning children. Society gives you very little. The *child's parents* are the people who are rewarding you.
You can get an award in many countries for saving a drowning child or saving someone from a burning building. Not that much utility granted but some?
In steps the entrepreneurial nonprofit Singer Fineries, the world's first designer of high-end suits, gowns, and other formalwear that are all 100% waterproof. For the go-getting drowning-child-saver on the go! Ethically sourced materials made by fair trade certified artisans vetted by GiveWell, all proceeds donated to effective charities, carbon-neutral, etc.
Even better, the SF corporation will provide training in practical needlework, tailoring, and seamstressing for every saved child and hold a position open for them to work on the waterproof clothing. Sweatshops, you say? No, not at all! Ethical pro-child independence, we say! Earn your own living, live your own life, free of the neglectful parents who let you tumble into the lake and left it up to a stranger to save you!
"(So, for example, if you get the chance to move to a cabin and rescue drowning children all day, you could totally just do that and make a living off it. Start a Patreon, have a little website with a heartwarming story about how you're able to save all these children thanks to the generosity of your patrons. When you save a child, send them back to their parents with a link to your venmo.)"
I like the cut of your jib, young lion, but I think the EA and those inspired by Singer would be appalled. You're not supposed to *benefit* from this, you are supposed to engage in it via scrupulosity-evoked guilt! You should be paring yourself down to the bone to save drowning children every spare minute! You surely should *not* be making a nice little living from being a professional lifeguard! 😁
I have to say, if you must live beside a river full of dumb brats whose inattentive parents can't be bothered to keep them from drowning themselves, you may as well make a go of it how you can. Venmo those negligent caretakers for every cent you can, and don't forget to shame them on social media if they don't cough up!
Unless of course, by benefiting, you end up doing more total good over the long term.
Replace the original thought experiment with an orphan, who has no-one in the world and no social capital whatsoever.
Do your intuitions change? Mine don't.
Yes the "touching" thing is dumb but:
"But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable."
What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!
If this is happening I genuinely think that someone living in this cabin needs to realize their life has been turned upside down by fate and that their new main life goal has to be "saving the 24 lives that are ending per day" by whatever means possible. Calling every newspaper to make sure people are aware of the 24 daily drowning kids in the river. Begging any person you see to trade with you in saving 12 kids a day so you can sleep. Make other people "touch the problem." Whatever the solution is--if a problem this immediate, substantial, and solvable appears and no one else is doing anything about it, you have to do what you can to get every kid saved.
I took it as "personally stop whatever else you may be doing to physically save the kids, despite the effect on your own life, sleep deprivation, etc." (until you pass out and drown)
If other means are available, damn right I'm making sure there are lifeguards.
I think this speaks to a completely sane worldview, but it is less commonly used to navigate the world than espoused.
"What?!?! I cannot for a second imagine that a majority of people would say "just picking a number of kids you're down to save is fine in this situation". That there is a diminishing marginal utility of saving dead kids!"
Why not? You're one person, there's a kid in the river every hour, it's physically impossible for you to save every kid in 24 hours in perpetuity. You have to eat, sleep, catch your breath after jumping into the river and pulling the last kid out, etc., never mind working at your job.
So most people would agree that yeah, you can't save them all, not on your own. Maybe after saving 37 kids straight you collapse from fatigue and end up in hospital. That means all the rest of the kids drown unless someone takes over from you. Or you work a reasonable rate of "I save one kid every daylight hour for ten hours, then that's it".
If you're discounting the need to have some connection to the harms in order to be responsible for curing them, be it causality or proximity or association, then you're stuck back into the original problem we're trying to escape here. Other than your proximity to the river, there's nothing special about your situation unless or until you've assumed a duty. You are best positioned to intervene if it's just physically jumping in 24 times a day, but we're advanced humans with technology and an economy, so your neighbor a half mile in from the river could just as equally hire a person or pay for a contraption to save the kids as you could. If there is no need for a connection, merely awareness, then why isn't your new main life goal saving the 2800 children under the age of 5 who die every day from dysentery? Because there are other people doing something about it? Not very well, it would seem!
So... the thing we should do is rebuild the coalition and the general pot, yes?
I was amazed that this essay wasn't about / didn't get to USAID. USAID is a global aid program Trump is essentially closing. As a result, he (and America) are being blamed for killing poor foreigners who will apparently no longer survive due to not receiving the aid. Would it not be our problem at all if we'd never given any aid? Are we really the ones killing people by no longer voluntarily providing aid?
https://www.bu.edu/articles/2025/mathematician-tracks-deaths-from-usaid-medicaid-cuts/
https://www.nytimes.com/interactive/2025/03/15/opinion/foreign-aid-cuts-impact.html
https://www.reuters.com/world/us/usaid-official-warns-unnecessary-deaths-trumps-foreign-aid-block-then-says-hes-2025-03-03/
Yes, because if the actual aid is shut down all of a sudden without prior warning, you are exhibiting the vice of akrasia, and not giving the people to whom you now have an obligation time to adjust or plan out their response. Now, the USA does have at least a little obligation towards poorer countries, so when it goes to start fulfilling those obligations again, people will not trust it.
There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
A sufficiently competent and powerful statesman. It would take a great deal of power to be able to pick and choose when dismantling these organizations.
Yes, so the deaths are really the fault of the people opposing Trump becoming all-powerful dictator-for-life.
If we make Trump an eternal dictator of the entire planet, all the drowning children will become his personal property, and then he will have an incentive to save them. Perhaps he will order Elon Musk to build a fleet of self-driving boats to rescue those kids.
Yes, and if they keep dying, you can just declare it another instance of those classic "theodicy" things.
> There is an actual argument against USAID (it is used to spew evil filth into the rest of the world) but I actually agree with Scott on the exact points of good which he highlighted it was doing, so a sufficiently competent statesman should be able to shut down the bad parts and keep the good parts.
Yes, this is something that frankly appalled me about Musk's handling of the situation. So far as I can tell, the percentage of USAID funding that was actually being spent on DEI or wokeness-related programs was small, and it's not like Musk couldn't have afforded to hire auditors with basic reading comprehension to go in and surgically remove the objectionable items. He chose to go in with a hatchet on day one for the sake of cheap political theatre.
https://newsable.asianetnews.com/gallery/world/usaid-craziest-spends-revealed-millions-on-condoms-for-taliban-afghan-poppy-farms-to-drag-shows-in-ecuador-shk-srb8eg#image7
I don't think $47,000 for a Transgender Opera in Colombia is a wise use of taxpayer funding, but every item on that list combined amounts to less than half a billion dollars, and USAID was spending 40 billion a year.
There are even items on that list that I'm not sure should have been axed. Is anyone going to die because Afghanistan lacks condoms? Not directly, but there might be some risky abortions avoided, not to mention that Afghan TFR is well-above replacement and could place demographic pressure on limited agricultural resources, possibly triggering war or famine. I don't have a high opinion of Palestinian society, but unless the plan is to liquidate the region's population then constructing a pier to facilitate imports of essential food items isn't an automatically terrible idea.
Here are some at least perceived serious problems with USAID and rationale for the rapid action:
1) Lots of the funding was not going directly overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much of it ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
2) In many cases USAID funding directly sponsored and supported the political ambitions and patrons of one party in the US, not both. This rendered it perceived as not merely non-neutral but actively harmful to the opposition party.
3) Because the first 100 days of a lame-duck US President's term are widely perceived to be much more effective and important than the remainder, it was necessary to move very quickly to shut it down: both to actually succeed (it is already tied up in courts), and to see the impact on individual recipients, using the impulse response of the system to better understand the fraud and patronage that might be involved.
Fixing e.g. PEPFAR after the fact is not ideal, but letting the perfect be the enemy of the good is also not ideal.
Then why not do this in a non-lame-duck Presidential term?
Because a presidential term where all branches are controlled by one party is incredibly rare and hard to predict, and certainly that term will not be controlled by 'you' (whoever you is), and might not lead to the same desired outcome.
For example, right now, the house of representatives is balanced on a knife edge of control where any absences render it evenly split or controlled by the opposition.
If the control of multiple branches was so important, then why try to invest the all-important 100 days in shutting down the programs by executive order without involving Congress? That could have been done in the first term just fine.
> a lame-duck US President
lame duck
1. An elected officeholder or group continuing in office during the period between failure to win an election and the inauguration of a successor.
2. An officeholder who has chosen not to run for reelection or is ineligible for reelection.
Wow. Today I learned about definition #2. God, do I hate stupid extra definitions of terms that ruin the first, good definition of those terms (see also: literally)
This is interesting, as I have always understood lame-duck-ness to be definition 2, not 1. I would have reversed their order based on my own experience.
This whole "we have to do it quickly or it'll be impossible" / "we'll fix the rest later" part seems incoherent to me.
First, they shut down USAID in *two weeks*, not 100 days.
Second, if they only have power for a brief moment and must use it... how will they bring back the good parts later?
Third, how do you even KNOW whether a $50b-a-year agency is more bad than good in two weeks?
This just seems like a post-hoc justification for a staggering level of carelessness and incuriosity in a Chesterton's Fence scenario.
I could maybe buy the limited-time-window argument, but people in Scott's comment section were saying it would only have taken a few interns a couple of weeks to read through all the axed NSF grant proposals, so... even under time pressure, I think Musk could likely have done better.
> Lots of the funding was not going directly overseas, but directly into Washington Beltway NGOs in the US. Yes, presumably much of it ended up overseas, but certainly parts of it simply enriched politically-connected individuals of the opposition party.
If you're paying the staff who run a charity NGO, and they talk to their patrons and vote for the party who funds them, then... yes, you will be 'enriching politically-connected individuals of the opposition party', almost by definition. I don't know a solution to this problem other than the GOP being less negligent when it comes to institution-building.
A lot of that 40 billion is NGO color revolution money, though.
>condoms
So like I said, evil filth. Do you have anything unambiguously morally good besides PEPFAR which the USAID was doing?
At least on paper, less than 10% of US foreign aid is/was allocated to 'human rights and democracy', or anything that could plausibly be interpreted as 'NGO color revolution money'.
The sexual revolution debate aside, I don't think any and all birth control is wrong, so... gonna have to differ on the condoms.
The problem is trying to disentangle the good parts from the bad parts, since any attempt to question it is met with the "people will die!" defence and asking the civil servants "so what did you do last week?" is seemingly intolerable interference.
Nothing wrong with gutting fat or rot. Some servants, however, really do say "I stopped HIV babies from dying," and it is competent statesmanship to be able to distinguish between the two, or at least to undo the problem when there is just cause.
So what is refusing to state whether or not you saved HIV babies from dying?
An admission that you are useless and need to go. My problem is that Musk seems to not have actually asked, or if he did, he did not do it well.
I would think it is worse to take action to foreseeably cause death, as opposed to neglecting to take action to foreseeably prevent death. (If this weren't the case, the answer to the trolley problem would be obvious)
I do admire that you continue to advocate for some version of "EA values" in these increasingly screw-you-got-mine times, even if it's largely academic to me as a Poor with limited resources wrt the scale of drowning children. Not having any realistic path towards ameliorating that state of affairs means it's even more important to Be Excellent To Others in whatever small capacities present themselves, I think. Everyone can do the mundane things somebody has to and nobody else will, if one cares to notice them, Copenhagen be damned. (While acknowledging that yes, there's real value in blissful ignorance! Premature burnout from the daunting scale of worldwide lifeguard duties is worse than at least helping the local drownees and ignoring the ones in the next city over.)
The real problem comes with coordinating others to act similarly, so the burden is collective but light, versus an endless uphill battle for a few heroic souls. That always feels missing from such philosophical musings - the kind of people susceptible to Singerian arguments aren't the ones who most needed convincing. Classic memes like religion sort of work for instilling the charitable drive, but come with a whole host of other "entanglements" that aren't all desirable.
I think a core objection to giving lots of money to charity might be skepticism that the people being saved actually exist.
Like... the Effective Altruism page about malaria bednets has this long string of numbers they multiply together, to figure out how many dollars it takes to save a life. And that's legit cool. Of course, when you multiply a string of numbers like that, you tend to get huge error bars because all the uncertainties multiply. But they're smart people, I assume, and they're trying really hard, so I'm sure they're not trying to be deceptive. I have to respect that they've done as much as they have.
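(To make the error-bar point concrete, here's a toy Monte Carlo, with numbers I made up; this is not GiveWell's actual model:)

```python
import random

# Toy cost-per-life model: a central estimate times five uncertain factors.
# Every number here is invented for illustration.
def sample_cost():
    cost = 5000  # hypothetical central estimate, in dollars per life saved
    for _ in range(5):
        cost *= random.uniform(0.7, 1.3)  # each factor uncertain by +/-30%
    return cost

samples = sorted(sample_cost() for _ in range(100_000))
low, high = samples[2_500], samples[97_500]  # central 95% of the samples
print(f"95% of estimates fall between ${low:,.0f} and ${high:,.0f}")
```

Each factor is only ±30% on its own, but the spread of the product comes out far wider than any single factor's.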
But... I'm in an environment where people will say anything to get me to give them money, and I guess I've gotten used to expecting evidence that people's claims are real? And I know that, if I buy a bunch of bednets to prevent malaria, no evidence will ever be provided that any malaria was prevented. At best they'll have some statistics and some best-guess counterfactuals.
And -- I mean, I'm sure the bednets people are good people. I've never met any of them personally, but they're working on a charity that does really good things, so they must be really good people with the best of intentions. But it sort of feels like they don't really have an incentive structure that aligns with communicating honestly.
I dunno. The internet in general isn't a high-trust place. I guess probably the people in the charity part of the internet are especially honest and trustworthy, so rationally I'd probably have to concede that the charity really is saving lives. But I don't feel it.
>I'm in an environment where people will say anything to get me to give them money<
So... got a lot of it laying around then, eh?
Hey, unrelated but FYI, I've been meaning to tell you ever since we last saw each other at that university or high-school wherein we were real good friends: if you give me some money, I'll write you into my next book as a badass superhero. Also, I may be on the verge of solving world peace and stuff, if only I had the funds... ah, the tragedy of it all—to have the solution in my hands, yet be stymied by a mere want of filthy, filthy lucre–
"I'm in an environment where people will say anything to get me to give them money"
Begging emails from charities. Gave one donation to a specific cause one time, got hailshowers of "donate donate please give donate we need money for this special thing donate donate" begging emails until I directed them all to spam.
That sort of nagging turns me away more than anything; I don't have a zillion dollars to donate to all the good causes, and I'm going to give to what I judge most in need/most effective. I am not an ATM you can hit up every time you want a donation for anything and everything. And of course they used the tearjerking, heartstring-tugging route: here's little Conchita or whomever, the human equivalent of a one-legged blind puppy, don't you feel sorry for her? Here's Anita, the homeless mother of twenty in a war zone who has to pick up grains from bird droppings to feed her earless quadruplets, don't you feel sorry for her?
No, in fact, because you've so desensitised me with all the begging and all the hard cases, I have no problems shrugging and going "not my circus, not my monkeys".
There's a thought experiment, where someone runs up to you and says: "Give me a hundred dollars right now or else TEN TRILLION BILLION GAZILLION people will die horribly!"
And the thought experiment says: "Okay, that's a crazy claim, but as a Bayesian you have to assign some probability to the chance that it's true. And then you have to multiply that by the disutility of ten trillion billion gazillion people dying horribly, and check if it's worse than giving a hundred dollars to an obvious scammer. And what if the number was even more than that?"
But in practice people don't do this, we just say "no thanks, I don't believe you" and walk away. I'm not sure what rule we're applying here, but it seems to work pretty well.
And when I think about buying anti-malaria bednets, I feel like that same sort of rule is getting applied.
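To see why the naive arithmetic is so easy to weaponize, here's a toy sketch of the expected-value calculation the thought experiment describes. Every number in it is made up for illustration; none comes from any real charity estimate.

```python
# Toy sketch of the Pascal's-mugging arithmetic described above.
# Every number here is illustrative, not a real estimate.

COST_OF_COMPLYING = 100      # dollars handed over to the claimant
VALUE_PER_LIFE = 1_000_000   # dollars; an arbitrary conversion rate

def naive_expected_loss(claimed_deaths: float, p_claim_true: float) -> float:
    """Expected dollar-equivalent loss from walking away, naively computed."""
    return claimed_deaths * VALUE_PER_LIFE * p_claim_true

# Even at an absurdly small probability, a big enough claimed number
# swamps the $100 cost, because the claimant controls one side of the product.
for deaths in (10, 10**9, 10**30):
    loss = naive_expected_loss(deaths, p_claim_true=1e-20)
    print(f"claimed deaths={deaths:.0e}: expected loss ${loss:.2e} "
          f"vs cost ${COST_OF_COMPLYING}")
```

The "no thanks, I don't believe you" rule amounts to refusing to let the claimant pick the `claimed_deaths` side of that multiplication.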
GiveWell is mostly advertising that you donate to charities that are not them. So it really seems like your thought experiment is in the opposite direction: someone tells you to give to an unrelated third party and you're trying to come up with reasons why the third party isn't really a third party.
The easy out for this is that, because the claim is physically impossible, the actual expected utility is always 0. A probability doesn't have to merely approach 0 asymptotically; it can just be 0.
Our knowledge of what's physically possible is probabilistic, though, so this out doesn't really work. I think a more realistic out is that even though we don't have the cognitive resources to correctly estimate a probability at 1/3^^^3 or so by explicit reasoning, conservation of evidence implies that most general statements about what's happening to about 3^^^3 entities are going to have a probability of about ~1/3^^^3 or lower. So, failing a straightforward logical argument for why the probability is much larger in this particular case, if you have any risk aversion at all (and probably even if you don't) you should ignore such possibilities.
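One way to make that precise (my own sketch, and it leans on an assumption: that your prior over the number of people $N$ a random stranger can actually affect has some finite mean $\mu$). Markov's inequality then bounds any claim about $n$ or more people:

$$P(N \ge n) \le \frac{\mu}{n}, \qquad \text{so} \qquad n \cdot P(N \ge n) \le \mu.$$

The expected stakes of the mugger's claim stay bounded by $\mu$ no matter how large an $n$ he names, 3^^^3 included, which is the "probability of about ~1/3^^^3 or lower" intuition in bound form.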
Pascal’s mugging is indeed a bad way to go.
Bed nets do appear to pencil out given reasonable bounds on utility, but that doesn’t mean everything crosses that threshold.
I don't think this is the core objection; it's more often an excuse. If everyone trusted the EA people's figures, most people still wouldn't donate anywhere near as much as EA people say they should.
GiveDirectly has a web page with a real-time-updated list of testimonials from people who received money, saying what they did with it, so I don't think this is the main blocker.
What sort of evidence would convince you? What do you think is missing?
After thinking it over somewhat, sadly I think I have to admit that this *was* an excuse.
I recant the above statement. I do think that statistics are easy to lie with, or easy to get confused and report overly optimistic numbers despite the best of intentions. But I don't think it was my core objection.
Proximity based moral obligations work because the further away something is, the less power you have to actually affect things, and therefore the less responsible you are for them. You may say 'give to effective charities' but how do I know that those charities are actually effective and are not lying to me, or covering up horrific side effects, or ignorant of their side effects? Therefore, it would seem that I have more of an obligation to give to charities whose effects I can easily check up on in my day to day life*.
By this principle, the person in the NYC portal has an obligation, since he can actually see and actually help. If the guy screws up following your instructions, the situation is not worse than before. If you come up with a highly implausible scenario where his screwup can cause massive damage, then it becomes more morally complicated.
Same for the robot operator, since he is in control of the robot and knows what it is doing, assuming he knows it won't risk the patient's life. If you were a non-surgeon robot operator who came across the robot in the middle of an operation (the surgeon took an urgent bathroom break?) it would be immoral for you to help, since you wouldn't know what effect messing with the surgery would have.
In the same way, if I am simply told that going into a pond and pressing a button would save a drowning child halfway across the world, well, I have no way to verify that now do I? It could blow up a dam for all I know.
For the drowning child question, you always have a moral obligation if it occurs, but you don't necessarily have an obligation to put yourself into situations where moral obligations occur. Going out of your way to avoid touching things however is the sin of denying/attempting to subvert God's providence, see Book of Jonah.
So my Copenhagen answer is as follows: if a morally tough situation happens to occur to you, it is God's providence, and He wants you to do it.
>God notices there is one extra spot in Heaven, and plans to give it to either you or your neighbor. He knows that you jump in the water and save one child per day, and your neighbor (if he were in the same situation) would save zero. He also knows that you are willing to pay 80% of the cost of a lifeguard, and your neighbor (who makes exactly the same amount of money as you) would pay 0%
The neighbor judged himself when he called you a monster for not doing enough to save people, didn't he? He was also touched by the situation when he involved himself in it by commenting and refusing the counteroffer. It also seems fairly proximate to him, enough for him to be auto-involved, and he is cognizant of this and in denial. Problem solved.
I recognize where you are going with this, and my point is not that you are a monster for not doing enough, but that your donations can have side effects which you cannot detect and cannot evaluate to adjust for in time, or they can end up not doing anything. Sure you can export it to other EAs to verify, but how can you trust them to be honest or competent? The crypto fiasco is a good example here.
>Alice vs Bob
God is Omnipotent and Omnibenevolent, he can have infinite spots in heaven and design life by his providence so that both Alice and Bob can have the appropriate moral tests which they can legitimately succeed or fail at. Bob would likely have a nicer spot in heaven though assuming he succeeded, because he had more opportunity for virtuous merit.
*Note that this argument is not an argument against giving to charity, only against giving to 'untrusted' charities, which I classify EAs as because they seem to be focused on minmaxing a single issue as if life is designed like a video game without considering side effects they can't see, and are prone to falling for things that smell suspiciously like the St Petersburg Paradox.
My logic leads me to conclude that it is optimal to use your money to help the homeless near you since you have the most knowledge and power-capacity towards it, which I have been half-heartedly doing but should put more effort into.
I've helped homeless people sometimes but more often than not I haven't. Homeless people sometimes have simple problems that you can help with (e.g. need a coat) but often it would require an expert to actually help them out as much as a malaria net would help someone in Africa.
This is true if the distribution of problems are the same near and far, but if you live in a rich country and are thinking of donating to a far away poor country, that's probably not true: the people near you with _real_ problems are people with medical conditions that require expert knowledge to solve, or mental problems that we may not know how to help, and so forth. While the people in poor countries may have problems like vitamin A deficiency which is easily solved by giving them vitamin A, or endemic malaria which is relatively easily solved by giving them bednets.
Even with the distance, I'm pretty confident it's much easier for me to get a hundred vitamin A capsules to Kenya than to cure whatever it is that affects the homeless guy who stays in the shelter a few blocks away from me.
Indeed, the whole point of charity evaluators like GiveWell is to quantify how easily a dollar of yours will translate into meaningful effects on the lives of others.
You lost me when you brought Jonah into your argument. IIRC, God's brief to Jonah was that he specifically was to go to Nineveh and preach against the evil there. After trying to avoid the task, Jonah finally slogged his way to Nineveh and preached what God had told him to preach. But he failed God because he didn't preach about His mercy as well. Yet nowhere in the story do I remember God telling Jonah to preach about His mercy.
How can we know God's will? God didn't tell Jonah that there was an additional item on his SoW. The only takeaway I get from Jonah is that if I rescue a drowning child, I need to preach about God's mercy as I pull the kid out of the water. In the trolley scenario, God's will may be that the five people tied to the track die, and the bystander lives. But His providence put us in control of a trolley car, and left us the choice between killing five people tied to the track or a single bystander. We don't know what God's optimal solution is.
You misunderstood my point. God gave Jonah a job, he tried to evade it entirely, and that was clearly the sin, which indicates that trying to cleverly dodge moral responsibility by removing proximity is bad.
Jonah's behavior in chapter 4 is not relevant to the point.
>How can we know God's will
Study moral theology and you can guesstimate what the correct action in a given situation is.
>preach about God's mercy as I pull the kid out of the water
As others pointed out, you can recruit the kid to help you pull more kids out of the water.
>trolley problem
See double effect.
So, what does Christian moral theology indicate we should do if we find ourselves in a trolley problem scenario? Bear in mind that this is also the type of ethical koan that has troubled Talmudic scholars. For instance, Rabbi Avrohom Karelitz asked whether it is ethical to deflect a projectile from a larger crowd toward a smaller one. Karelitz maintained that Jewish law does not allow actively causing harm to others, even to save a greater number of people. He argued that human beings cannot know God's intent, so they do not have the authority to make calculations about whose life is more valuable. Thus, according to Karelitz, it would be neither ethically nor halachically permissible to deflect a projectile from a larger crowd toward a smaller one, because doing so would constitute an act of direct harm.
As for me, I'd say the heck with Karelitz: I'd deflect the projectile toward the fewest number of victims. I don't know what I'd do if my child were in the group receiving the deflection, though. But I'd probably make my decision by reflex without considering the downstream ramifications. Ethical problems do not lend themselves to logical analysis, because human existence is greater than logical formulas. Sure, we could all be Vulcans and obey the logical constructs of our culture, but the minute we encountered a Gödelian paradox, we'd be helpless.
You are free to flip the lever, but not push the fat man, since the trolley running over a single person is a side effect, while pushing the fat man is directly evil.
The trolley running over a single person is a side effect of moving the trolley, the fat man dying is a side effect of moving the fat man. There isn't really a sharp line here.
It's not a side effect, though. You are actively choosing to push the fat man, i.e., it is your active will that the fat man be pushed, and the trolley is stopped by means of pushing the fat man.
I will point out that I think a more nuanced framing of Rawlsian ethics is inter-temporal Rawlsian ethics where we both don’t know **where** we will be born or **when** we will be born.
Instead of the argument of keeping taxes on the rich low so they don’t defect, those in the future will want as much growth in the past as possible to maximize the total wealth of the world and the total number of medical breakthroughs available to them.
There is now a balance of being fair and equitable at a single spacetime slice and people farther in the future wanting more growth and investment in previous time slices that better benefit them.
I think this makes the tradeoffs we often confront in redistribution vs investment more salient and makes the correct policy more difficult to easily figure out.
(Sorry if this was mentioned in another comment, I looked at about half.)
I think intertemporal Rawlsian ethics is a wonderful idea, but it’s *really* sensitive to your discount function and to the error bars on the probabilities of stable growth and maintenance of a functioning civilization, isn’t it?
Yes! That’s why it’s so hard to know what to do!
And that’s the tradeoff we face here existing in the world now too.
> First, she could lobby the megacity to redirect the dam; this would cause the drowning children to go somewhere else - they would be equally dead, but it’s not your problem.
By the standards of morality in the thought experiment this is the correct solution. The prevailing standards in this hypothetical world allow for magical child flushing rivers accepted without significant protest or mitigation. Objectively, you are not doing anything wrong.
Morality is not an abstract thing to be discovered. While basic survivorship bias means that societies whose sense of morality results in a Child River O'Death are unlikely to be tremendously advanced, and we can say that certain moral codes can be more effective than others at human flourishing, you cannot use thought experiments to find the rules, because there are none. It's all a blur of human messiness.
If I got to choose, I would rather give 1000 people a sandwich or something instead of torturing 1000 people by cutting off their fingers.
Sure, you can argue "you cannot use thought experiments to find the rules because there are none".
Yes, it's "all a blur of human messiness".
Would you rather give 1000 people a sandwich or torture 1000 people? If you prefer one, well, you might even have a reason, let's get to the bottom of it. I'll call it "morality". And if this hypothetical seems to have a preference, we can probably assume other hypotheticals do, like the ones Scott is using.
If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
> If you don't prefer either, cause "nothing means anything, man, we're all like, dust in space" then I hope that I'm not one of the 1000 people.
Yes, I fully understand that you're gesturing at the normal human tendency towards being pro-social in the case of Sandwich V. Torture. But the issue with these "thought experiments" which consist of implausible setup followed by binary choice of options designed to be as far apart as possible is that the human tendency you're referencing does not operate according to rules of logic and any answer I can give provides no information on how morality or decision making works. Or: if I'm in a position to torture 1,000 people by cutting off their fingers, I need more information before I can tell you my choice because my actual choices - the thing we are trying to model and/or predict - depend on those variables.
Crafting a hypothetical to try to show that someone's objection to a previous hypothetical - or even worse, to a concrete course of action, which comes with all of the contingent details that do matter a great deal - wasn't their real objection is useless, because it requires inventing situations that are outside the distribution of the thing being studied.
If someone came to you with an idealized method of calculating an object's trajectory and you point out that it is unlikely to be correct because it doesn't take gravity into account, them producing a thought experiment where the object is in a perfect vacuum without the influence of gravity does not mean that gravity isn't your real objection to their method.
A new reign of terror is needed. This comment section sucks. I think I saw someone advocating the defunding of PEPFAR on grounds of "the kid's mother shouldn't have been wrong about sexual hygiene" or something.
More productively, I disagree with the veil of ignorance approach. Just be the kind of person that the future you hope for would admire (or at least not condemn). Much simpler and more emotionally compelling, and I think it leads to better behavior.
I think this points at something important, but the intuition is sharper if you also stipulate that the future knows your context and thoughts and is much wiser and much, much kinder. Some people believe this means it is "good to believe" in a religion, but I think that is sort of silly and arrogant. Of course, there are already many people who have enough empathy to know your thoughts, and there are already very moral people.
People utterly refusing to engage really indicates the change in audience from people who find this kind of discussion interesting on its own merits (SA is clearly doing this to probe the limits of moral thinking as an intellectual exercise) and people who view this kind of moral discussion as a personal attack on them. Discussing morality like this feels like a core part of the rationalist movement and refusing to do so is not a good sign.
To flip this a little: I think it's maybe good that Scott is spreading EA ideas outside their natural constituency. In the spirit of, "if you never miss a flight you're spending too much time in airports", I propose "if you're not getting bad faith pushback and refusal to engage, you're not doing enough to spread pro-social ideas".
While I think the commenting population has gotten worse since the Substack move, I also think the drowning child is a terrible thought experiment, and more complicated versions are not so much enlightening as they are a mild form of torture, like that episode of The Good Place where the gang explores a hundred variations on the trolley problem.
Discussing morality is interesting. *This particular branch* is exhausted and everyone is entrenched in the degree to which they admire or despise Singer's Mugging. The juice has been squeezed.
Were they advocating, or playing devil's advocate?
I am rabidly opposed to the rapid abolition of USAID.
But I am, in fact, quite struck by how appalling the continuation of the AIDS crisis in Southern Africa is and how little we are willing to condemn the sexual behavior that appears to be the driving factor in this crisis.
Babies may be blameless, but it is legitimately fucked up that a very-easy-to-prevent disease has such a high prevalence. AIDS is not malaria. The prevalence does not appear to have been reduced by PEPFAR over multiple decades.
Failing to engage with the thought experiment is a failure to examine your own moral system, and a failure to contribute anything useful to the discussion. Any of these comments (of which there are way too many) that say something like "it's too abstract/too weird/what if I change the premise of the thought experiment so I don't have to choose any bad option/ignore the thought experiment because it's dumb" are missing the whole point.
If your answer to the trolley problem is "this wouldn't happen to me, why would I think about it" then you're failing to find what your moral system prioritizes. If your answer to a would-you-rather have snakes for arms or snakes for legs is "neither to be honest" you're being annoying. If your answer to "what superpower would you have" is "the superpower to create superpowers" you're not being clever, you're avoiding having to make a choice. Just make a choice! Choose one of the options given to you in any of these scenarios, please! And if you still say "well um technically the rules state *any* superpower" then change the rules yourself so you can't choose the thing that's the most boring, obviously unintended, easily-avoided-if-the-question-is-just-phrased-a-different-way option. Choose! Pull the lever to kill 1 person instead of 5 or not! What are you so afraid of? Learning about yourself?
Scott says this in the article:
"Assume that all unmentioned details are resolved in whatever way makes the thought experiment most unsettling - so for example, maybe the megacity inhabitants are well-intentioned, but haven’t hired their own lifeguards because their city is so vast that this is only #999 on their list of causes of death and nobody’s gotten around to it yet."
And I think it's worth a whole post by itself why people are so reluctant to choose. Anybody unwilling to take these steps to try to figure out what they genuinely prioritize is *actively avoiding* setting up a framework for their own priorities, moral or otherwise. It's not just that these people are dodging an uncomfortable choice; they're also refusing to engage with the process of decision-making itself. I cannot imagine setting up any reasonable moral system if I didn't do something so simple as *imagine decisions I don't have to make right now, but could have to make*. If I don't do that, I'm basically letting whatever random emotions or vibes I feel in the moment, when I really really have to choose, BE my moral system. Why would people do that to themselves? Something something defense mechanisms? Something something instinctually keeping their options open when there's no pressing need to choose?
I don't know. I would choose snakes for legs though.
>But people are making decisions just fine.
This is not my experience. People in the comments are talking about how it's "far beyond the actual utility of moral thought experiments", "How is bringing up all of these absurd hypotheticals supposed to help your interests?", "never encountered a hypothetical that wasn't profoundly unrealistic". This is a post about hypotheticals. If they don't engage with it, instead dismissing the use of hypotheticals altogether, well, refer to my main post.
Many other comments dismiss the hypotheticals with "but, like, we're all atoms and stuff, man, so like, what means anything?" I have a hard time believing these people wouldn't care if their loved ones were tortured. If they say they would care, great, they've just made a decision in a hypothetical, hopefully they're willing to make decisions in other hypotheticals.
>Religion and culture already control their actions enough to make civilized society possible. Nothing more is needed.
Nothing more is needed? There are things that I think are bad that exist in the world (malaria, starvation, poverty, torture) that I would prefer that there is less of. If I can make it so there's less of this stuff, then I'd like to. To do that, it seems I first have to decide what this bad stuff is, and to quantify how bad bad stuff is compared to each other (paperclip vs cutting off fingers with a knife). That's morality, and it can help guide decisions, too!
Locally they may find it more fulfilling, but it makes things globally less fulfilling for everyone.
The state of global utility has very little to do with individual choices.
It's completely determined by individual choices.
Would you believe that Scott actually wrote that post?
https://www.lesswrong.com/posts/neQ7eXuaXpiYw7SBy/the-least-convenient-possible-world
This is the post I've been thinking of the entire time I've been reading the comments.
In 2009 on LessWrong, no less! I love it, I missed this one. Yeah I guess you can’t force all readers to read this before each article dealing with hypotheticals.
Maybe a disclaimer like “if you’re about to dismiss the use of hypotheticals, visit this lesswrong post” at the top? But I imagine the comment section would then also have people arguing against this lesswrong post, which seems kinda dumb. Also, do you really want homework on every post? “Read the entire sequences before this post to understand this best”. Ehhhhhhh
Maybe I’m going about this all wrong and I should just be ignoring all the comments that don’t engage with the hypothetical, because *I’m* not discussing the hypothetical either, I’m countering people who aren’t discussing it. So I’ve made the comment section EVEN LESS about the actual post. I don’t know, ignoring a huge chunk of comments who just don’t get the post feels weird though.
I agree that people who do this are annoying. Though too many thought experiments are also annoying. In my experience the reason that the type of person who refuses to answer a hypothetical does that is because they interpret it as a "gotcha" type question that is being asked by the asker for the purpose of pinning them down and then lording it over them by explaining why they're wrong or inferior in some manner. I don't think that's always, or even often, the intention of the asker, but that is how the reluctant askee tends to view it.
Particularly when it's explicitly being set up for maximum psychological discomfort.
Yeah, this. Scott has a good track record of not using antagonistic thought experiments, but elsewhere online, that's not the case. It makes sense some commenters would apply a general purpose cached response to not indulge it.
Agreed. I think moral reasoning of this sort is worthless at convincing others and the methods of analytic moral philosophy in general are not good. But failing to engage with hypotheticals (including explaining why they're irrelevant or undermotivated) is like a guy with a big club saying "Me no understand why engage in 'abstract' reasoning, I just hit you with club."
I have 15,000 USD in credit card debt, near-term plans for marriage and children, plus a genetic predisposition towards cancer that may be passed on to my children. To what degree is my money my own?
If I always knew that I would have obligations to my family, and I could never fully predict my capacity to meet those obligations, then how should I think about the money that I gave to effective altruism when I was younger?
I think Scott is correct, to a first approximation, and that there is virtue in buying bed nets, but no obligation. I also agree with the comment about bed nets being a rare example where we can be confident that we're doing a good thing, despite how distant the problem is from our everyday lives.
Even so, I think the rhetoric around effective altruism is sometimes a bit... I don't know, maybe tone deaf or something? Because lots of people aren't great with money, and when you ask them to tithe 10% they're going to think of all the times they couldn't afford to help a loved one, and they're going to extrapolate that into the future, and they might decide that virtue is poor compensation for an increased risk of struggling to feed their future children, or whatever.
And, yeah, it's too easy to use this as an excuse for being lazy and not demonstrating virtue. And people who aren't good with money could sometimes be better, with a bit more effort. I really do think that Scott is mostly right, here. But it also feels like he's missing something important.
If it helps, I think you're accidental collateral damage. He's mostly talking to the type of person who says they wouldn't donate to charity even if they had the means, and brags about this fact. I think insofar as EA is concerned, you should put on your own parachute first. There's no point in ruining your life when someone else can do the same without ruin.
In theory if we lived in a world where everyone was already donating a lot, yeah maybe that would be a concern (but probably not since societal effort has nonlinear effects). But we're very far from that world, and I think it's wrong to think we are.
In my tradition, when it comes to questions of the heart, even one penny from a struggling widow is more than all the billions the hyper rich donate. There are important questions about how to help people in need, but that is the landscape, and we are travelers on it. Your heart isn't defined by magnitude of impact. Its beauty is captured in that wordless prayer that others might be better off than you.
You are among the richest people to have ever lived.
Don’t believe me? Look here: https://www.givingwhatwecan.org/how-rich-am-i
> To what degree is my money my own?
How about, to the degree that it’s not from luck or circumstance? Including of course the country and time of your birth.
Doesn’t this produce a paradox, though? If I believe that, as a median American, I’m expected to donate $32,000 per year to reduce myself to the global median of $8,000, why would I bother working at all?
You could of course conclude that not only is every $ I fail to donate a theft from the global poor, but that every hour I fail to work is an equivalent theft. Except, even as an EA-sympathetic person, that feels ridiculous.
I’m not sure there’s a clean solution to this whole paradox, but I’m also not sure the model above works well.
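To make the incentive problem concrete, here's a minimal sketch of the rule being discussed ("donate everything above the global median"); the $8,000 figure is just the one quoted above, and the incomes are illustrative:

```python
# Toy model of a "donate everything above the global median" rule.
# The $8,000 global median is the figure quoted in the comment above;
# the earned incomes are illustrative.

GLOBAL_MEDIAN = 8_000

def kept_after_donating(earned: int) -> int:
    """Income kept if you give away everything above the global median."""
    return min(earned, GLOBAL_MEDIAN)

for earned in (8_000, 40_000, 400_000):
    print(f"earn ${earned:>7,} -> keep ${kept_after_donating(earned):,}")

# earn $  8,000 -> keep $8,000
# earn $ 40,000 -> keep $8,000
# earn $400,000 -> keep $8,000
```

Under this rule the marginal return on every extra hour worked is zero, which is exactly the "why bother working at all?" problem.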
I already knew that. And I think you missed the point of my question, which is "To what degree does my money belong to me, as opposed to my family? And how will I justify my altruism to my family if I find myself unable to pay their medical bills in the future?"
Your philosophy would be more convincing if I could reasonably expect strangers to altruistically help me if I find myself in need, such that the selflessness isn't so unilateral. But Scott already pointed out that I can do no such thing, and at best I can pretend. But by pretending, I would be gambling with the lives of the people I love.
I know it's possible to be okay with that. I might even agree that it's noble. But they say that the strong do what they can while the weak suffer what they must, and there's as much truth in that as there is in effective altruism. The world isn't just. And I have neither the obligation nor the inclination to be a saint.
Objections this doesn't address:
- It costs much more money to actually save a life through charity ($4000+) than to save a drowning child in these thought experiments.
- A natural morality IMO is that helping people is never obligatory; it's supererogatory: nice but not obligatory. The only obligation is not to hurt others. Saving drowning children, and saving lives through charity, are equally good, but neither is obligatory. (Going further, of course there's the nihilist view that the concept of morality makes no sense; also various other moral non-cognitivist views, like that moral intuition is much like a taste, and not something that can be objectively correct or incorrect, so there's no reason to expect it to be consistent.)
Or they can abandon the intuition that saving the drowning child is obligatory; or abandon the meta-intuition that analogous situations ought to be treated analogously, and instead rely on their intuition in each case separately. Or, of course, abandon the intuition that charity is not obligatory, as the EAs would like them to. If we find a contradiction between people's different moral intuitions, that doesn't tell us which one should be abandoned.
Why would you ever abandon that intuition? It seems I would rather take that as axiomatic, and then work backwards from it.
I don't feel a pressing need to resolve metaethics wrt charity. And ultimately all of this discussion can easily be discounted as so much sophistry, but dear god let me not get to a point where I'm ever thinking that saving a drowning child is not obligatory lest it undermine my courage to act
In the original drowning child experiment, you are wearing an expensive suit at the time.
I've never encountered a hypothetical that wasn't profoundly unrealistic. You have put Bob in a world where there is no local government. No police force to call and no local volunteer fire department. There's no local Red Cross to lobby for a solution to the Death River. Delete these real aspects of the real world and there will be an abundance of problems that are too big for one guy in one cabin next to one river to solve.
Also. If Bob is so isolated in his cabin, where are the 35 kids floating down the river coming from, all of them still alive? You also omitted the impact of their grieving parents who would be lobbying and suing local government for its failure to take action.
This hypothetical is as unrealistic as speculation about the sex lives of worms on Pluto.
They aren't unrealistic. They are realistic but rare.
I'd describe it as farcical but directionally relevant to some elements of reality. There are indeed many people we can help, and to the one who suffers, it hardly matters if they're in our backyard or not, so long as either way we don't help them. And to the person in the cabin, it hardly matters if people suffer and die nearby or far, so long as they've resolved to ignore them. There is no governance covering both people, yes. That part is accurate to real life, for international aid at least. But they are indeed sharing a world, with capacity to save or be saved.
Realistic? Name a city or county in the United States that would not react to the fact that 35 children drowned every single day in a river within their boundaries.
Rare? Name one point in the history where such an event has taken place. These events are not rare—they are nonexistent.
> Delete these real aspects of the real world
Isn't DOGE in the process of doing just that?
> No police force to call and no local fire department
That would be the lifeguard, though. Somebody has to pay for all that, so they still have to address the question of whether they will.
A privately hired lifeguard has nothing in common with a publicly funded fire department, which exists in every city and every suburban county in the United States. In the United States, citizens are not expected to deal with large scale problems such as 35 kids floating down the river every single day.
Incorrect, at least on a few levels. Many if not most small municipalities throughout American history have relied on volunteer and citizen-based fire departments.
Likewise, as a society that arguably aspires to maximal freedom in the US Constitution, Americans are very much expected to try to deal with large-scale problems through private mechanisms, and in fact our charities and charitable giving as a percent of our GDP, and in absolute dollar terms, are world-leading by a significant margin.
Also just to tie this back to a percentage figure, Americans give 1.47% of GDP to charity, roughly double second place New Zealand, at least per ChatGPT and assuming no hallucinations, and the dollar figures are two orders of magnitude larger.
Charitable donations as a percentage of GDP, according to a 2016 report by CAF (via Wikipedia):
- United States: 1.44% of GDP, approximately $258.5 billion
- New Zealand: 0.79% of GDP, around $1.1 billion
- Canada: 0.77% of GDP, about $12.4 billion
First of all, modern volunteer fire departments almost always receive some municipal funds for equipment, buildings, et cetera. By happenstance, I once attended a community fundraiser for a volunteer fire department. That well-attended community fundraiser brought the communal into the equation.
Second of all, charities represent a communal effort to solve communal problems. They are one gigantic step beyond a single man at a cabin expected to deal with an obviously communal problem.
The hypothetical also assumes that the parents of these hundreds of dying children will play no role in trying to achieve a communal solution to this communal problem.
While I don't disagree, you still haven't demonstrated that receiving municipal funds is a better solution. That things tend to evolve in that direction is meaningful, but it might actually be an antipattern co-opted by Moloch-style rent seekers, for example.
This is devil's advocacy - I have no experience in this area, but I do think that the volunteers throughout our history deserve due credit and may have done a great job relative to our current system.
I think volunteerism is great too. However, I distinguish individual efforts from communal volunteerism. Believe it or not, I had a dear friend who managed to reach out his unusually long arm and grab a kid just before he landed in a raging flooded river. This event happened once in his lifetime.
Communal volunteerism realizes that there is a recurring problem in society that could be helped by a self-organized group standing ready to help. The Red Cross, founded in the 19th century, served as a template for many of these organizations.
BTW, the hypothetical guy in the cabin could have built a weir 1/2 mile up the river from his cabin. A weir is a flow through dam used for catching fish. This one would be designed for catching drowning kids.
I view this from an evolutionary perspective. We are hard wired to react vigorously to events that happen in our presence, such as a lion attacking a child. We have no evolutionary wiring to respond to events outside of our personal experience. It's hard to go against evolutionary hardwiring.
Hypotheticals aren't meant to be realistic, they're meant to isolate one part of a logic chain so you can discuss it without those other factors being in consideration. People have a bad habit in debate of switching between arguments when they're losing. The hypothetical forces you to focus only on one specific part of your reasoning so you can either account for the faults the other person thinks it has, or admit that it's flawed and abandon it. It's a technique for reaching agreement.
A quick (sigh) hypothetical example:
"If you have $500 you must give it to me. I would use $500 to make rent. If I make rent I will be able to use my shower. I will look presentable for an interview. I will then get this high-paying job and I can pay you back. Also you do not need the $500. You have an emergency savings fund already and no immediate expenses."
Most of the time an argument would look like this:
"Yeah, dude I'm saving the money in case some big expense comes up, sorry."
"I need that money way more than you though!"
"It's not fair to ask me to pay your expenses."
"I'm going to get a job and then you won't have to!"
"Are you sure you'll get the job?"
"Yeah! So it's just this one time!"
"What if you don't?"
"I will but even if I don't I still need to make rent and you won't miss the money!"
"That's not the point though."
"Yes it is!"
etc.
See how the money requester is bouncing back and forth between his argument that he should get the money because he's going to get the job and pay it back, and his argument that you have an obligation to give him money because you don't immediately need it? You can isolate one of those arguments with a hypothetical:
"Let's say tomorrow the perfect candidate walks into their office and takes the job before you have a chance to have your interview. This guy's exactly who they're looking for, immaculately qualified, and they hire him on the spot. So you don't get the job. You can't pay me back. Do you *still* think I should give you the money?"
That's unlikely to happen, but now you can talk about *just* the argument that you owe this dude money because you have more, without having him constantly try to jump back to the job thing.
Of course, this assumes good faith and a willingness to actually explore the argument together. In this particular case you'd be better served by just saying "no" and leaving. But in this blog's community, there is significant interest in getting to the bottom of why we hold certain beliefs, and if those beliefs are wrong, changing them.
Scott wants to know the answer to a specific question: "There is an argument that you are only responsible for saving people you naturally encounter in day-to-day life. Is it wrong to structure your life in such a way that you don't naturally encounter people in urgent need? Do you have a duty to save people you choose not to be in proximity to?" He's well aware that someone else might save them, that the situation would likely be resolved without your influence, and that there are other considerations. He's trying to force you to set those considerations aside for the time being so you can focus on establishing your views on that one question in particular.
I can tell you from personal experience that in Uganda at least there is indeed no police or fire department to call.
Well, this hypothetical makes sense in Uganda, then.
> But I think most people would consider it common sense that refusing to rescue the 37th kid near the cabin is a minor/excusable sin, but refusing to rescue the one kid in your hometown is inexcusable.
Again my moral intuition straightforwardly disagrees with something! It says that not rescuing the kid in the hometown afterward is very excusable. I wonder why, though?
> I think this represents a sort of declining marginal utility of moral goods. The first time you rescue a kid, you get lots of personal benefits (feeling good about yourself, being regarded as a hero, etc). By the 37th time, these benefits are played out.
That feels like it resonates with my intuition, except my intuition *also* considers the kid in the hometown to be part of the same chain. Maybe by having done so much thankless positive moral work in the past, you've accumulated a massive credit that diminishes any moral necessity for you to take active steps like that in the future.
I notice if I swap the locations, so that it's going into the woods that results in seeing one drowning child while being in the city results in seeing them every day, this feels different—and it also feels closer to real-world situations that immediately come to mind. Maybe my mental image assumes the city is more densely populated? The more people there are who could help, the less each one is obligated to. Bystander effect is bad only when it doesn't come out to having at least a sufficient number of responders for the tradeoff to work out (though the usual presentation of bystander effect implies that it doesn't, so assuming that's true, applying the counterbias is still morally good). I bet there's something in here about comparing how many of these situations one agent can *reasonably expect to encounter* with how many that agent can handle before it reaches a certain burden threshold, then also dividing by the number of agents available in some way. This seems to extend partially across counterfactuals; being by chance the only person there at the time in the city feels different from being by chance the only person there at the time in the forest.
Or maybe it's because the drowning kids in the forest part of the water come *from* the city in the first place that affects it? Aha—that seems to make a larger difference! If I present the image of the protagonist moving from the forest to a *different*, more “normal” city, and *then* failing to rescue a random drowning child, it seems much worse than the original situation, though still not as bad as if the exceptional situation were being presented to the person for the first time, probably due to the credit in the meantime in some way over-discharging their responsibility. But if I assume the second city is structurally and socially indistinguishable from the first one, only with different individuals and with its stream of drowning kids passing by a different cabin that the protagonist never goes to, then it stops being so different again. So it's not due to the entanglement as such.
Maybe if the people in the city are already in an equilibrium where they aren't stopping the outflow of drowning kids, then it's supererogatory to climb too far above the average and compromise the agent's ability to engage in normal resource allocation (including competition) with the other people in the city—if I remove the important business meeting and change it to going out for drinks with a friend, not doing the rescue feels much worse than before, and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar, then the difference disappears again. This feels independent of the intention structure behind the city not saving the stream of drowning kids to me. If the city people are using those resources for something better, the protagonist should probably join in; if the city people are squandering their resources, the protagonist is not obliged to unique levels of self-sacrifice, though it would be morally good to try to convince the city people to do things differently.
Of course, possibly my moral intuition just treats the rescues as more supererogatory than most people's treat them as to begin with, too…
> and if I change *that* situation so that the friendship is on the rocks and might be lost if the protagonist doesn't make it to the bar,
Bring the rescued kid along with you to the bar, hand 'em to the bouncer saying something like "she's your problem now," then tell that semi-estranged friend that if they don't believe your excuse for being an hour late, and covered in kelp, they can ask said bouncer.
Make a rule that the children you rescue as they pass by the cabin have to help you rescue future children who pass by. After rescuing a few kids, you've got a whole team who can rescue later kids without your help.
+1000. Building scalable solutions to problems almost literally IS civilization.
Scott, and I say this with love, has lost the thread here.
Like the point of the thought experiment was to draw attention to the parallels between potential actions, their costs and their benefit. These examples seem like they are meant to precisely deconstruct those parallels to identify quantum morality and bootstrap a new utilitarianism. It's putting way too much significance on a very particular thought experiment.
But even taken on its face, the answer to the apparent contradiction is obvious, right? Why does the cost of a ruined suit feel worth a kid's life to most people, while donating the same amount of money to save a life via charity is unappealing? It's not that life's value is morally contingent on distance, or future discounting, or causality, or any of that. It's that when you save a drowning kid, you get a HUGE benefit: you are now the guy who saved a drowning kid. The costs might be the same, but the benefits are not remotely equal. I guarantee I get my suit money back in free drinks at a bar, and maybe a GoFundMe, probably before the day is over.
And even if you want to cut the obvious social benefits out of the picture, self-perception matters. Personally saving a kid is tangible and rewarding. Donating money to a charity is always undercut by doubts, such as "is this money actually making a difference?" and "why am I morally obligated to take on a financial burden that has been empirically rejected by the majority of the population?"
Because saving a drowning child is assumed to reveal something about the rescuer's moral character, while bragging about charity is viewed as performative. The former might be dubious, but the latter is usually correct.
Alternatively: because mentioning "one can save a child through charity" is an implicit moral attack on those who have not given to charity, whereas saving a drowning child is not such an attack, because few of us will ever encounter a drowning child (and most people probably think they would save the drowning child if they ever encountered one).
Something that gets missed is that saving a drowning child is "heroic." Why is it heroic? Because even though most people say they would do it, in practice they don't. The hero takes action to gain social status. In the case of drowning children floating by a cabin, there's no heroism, since the person rescuing them consistently is now engaged in a hobby instead of a single act of will.
Also, people do move away to places like Martha's Vineyard for exactly this reason, to avoid the plebs complaining about them.
Interesting, but these are all similar to the “all Cretans are liars; I’m a Cretan” self-reference trap (paradox).
Insert one word “predict”, as in “do you predict that you…” and the trap is closed because it clarifies that this is an endless regression paradox at “heart” IMHO.
All future statements are predictions, and it is self-referential. The giveaway is the reference to the “37th child…”
There is no “moral choice” in infinite moral regress as there is no truth value to the statement “this statement is false”
Language is a semantic system, and any such system rich enough to express arithmetic is incomplete under Gödel’s Incompleteness Theorems.
It’s relatively easy to generate paradoxes.
Morality is a personal, aesthetic choice.
Angelic superintelligences are like Chuck Schumer’s “the Baileys,” a mouthpiece for moral ventriloquism.
We are here on Earth by accident. Nothing happens when you die. We should take personal responsibility for our own moral sense. Share yours if that seems like the right thing to do, and express it in the way that seems right.
There won’t ever be an authority or conclusive argument, because we’re an assembly of talking atoms hallucinating a human experience. That is beautiful and strange. I think helping other sentient beings, and helping them at mass scale, dispassionately and in ways that seem plausibly highly effective, is lovely.
Is this a reasonable moral intuition?
If faced with a drowning child and you are the only person who can help it, you have a 100% obligation to save it. I'll leave open the question of what exactly a 100% obligation means, but it's obviously pretty strong.
If there's a drowning child and you're one of five people (all equally qualified in lifesaving) standing around, you have a 20% obligation to save the child.
If there's a child who's going to die of malaria and you're one of eight billion people on the planet who could save it, then you have a one over eight billion obligation to do so.
If there's millions of children going to die of something, and you're one of billions of people on the planet who can do something about it then you have something on the order of a 0.1% obligation to do something. That's not nothing, but it's a lot weaker than the obligation where there was a 1-1 dying child to capable adult ratio.
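In code, that intuition amounts to something like the following (a minimal sketch; the population figures are the illustrative ones from my examples above):

```python
# Minimal sketch of the "obligation divides among capable helpers" intuition.
# Figures are illustrative, matching the examples in the comment above.

def obligation_share(victims: int, capable_helpers: int) -> float:
    """Naive model: obligation = victims per capable helper, capped at 100%."""
    return min(1.0, victims / capable_helpers)

print(obligation_share(1, 1))                      # 1.0  -- lone passerby
print(obligation_share(1, 5))                      # 0.2  -- five bystanders
print(obligation_share(1, 8_000_000_000))          # ~1.25e-10 -- one child, all of humanity
print(obligation_share(5_000_000, 8_000_000_000))  # ~0.0006 -- millions dying, billions able to act
```

The obvious weak point is the denominator: it isn't clear whether it should count everyone capable of helping or only those actually willing to.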
If there are 5 people, and for some reason the other 4 are jerks who refuse to help drowning children, is your obligation now 100% because your choice is guaranteed to decide the child's fate?
If 3 of them are jerks, is your obligation 50%? And can you make it 0% by becoming a jerk yourself, so that the remaining non-jerk now has 100% responsibility? Or is obligation not additive in this way, and if not, does that suggest a more complicated calculation is necessary?
Only if I can reduce my obligation to zero by declaring myself a jerk.
I think the angelic original position makes a lot of sense BUT the key is the pool of people who you might become.
If the pool includes not just humans, but livestock also, you would become vegans out of moral consideration.
If you and all your angelic buddies are limited to a pool of rich people, that also shifts the calculus.
Said another way, the consideration is that you can become anyone within a particular pool so you want to be fair towards everyone in the pool.
In reality, the closest thing to pools we have are our family, community, country, with each getting a greater level of care.
A zen story I once heard:
Thousands of fish have washed up on the beach. A monk is walking along the beach throwing fish into the ocean. A villager laughs at him — “You'll never save all the fish!”.
The monk answers as he throws another fish, “No, but this fish really appreciates it.”
Same reason police catch speeding drivers. When one asks 'Why me? Everyone else is speeding also!' the response is 'when I go fishing, I don't expect to catch EVERY fish!'.
Something something donuts and coffee
The Alice and Bob thought experiment feels rather strongly off to me. Yes, certainly a person who fails to do wrong due to lack of opportunity might be a worse person than another who actually does wrong. That seems to be a fine way to summarize moral luck, and we would expect eternal judgment to control for moral luck. So far so good. You then conclude that, therefore, moral luck is fake and the worse person must have actually done worse things.
I'm confused by the absence of positives to moral obligations. If someone fulfilled a moral obligation, I love and trust them more, my estimation of something like their honor or their heroism goes up. If someone who was not obliged to does the exact same thing, I "only" like them more, I don't get the same sense that they're worthy of my trust.
It's trite, but I think "moral challenges" is closer to how I feel about these dreadful scenarios. I want to be someone who handles his challenges well, to be heroic. That seems more primal and real to me than framing the actions in these scenarios as attempts to avoid blame, and it isn't something that reducing everything to a single dimension of heroic versus blameworthy can quite capture.
I largely agree -- I soften it down to invitations for stuff like this, because when it comes to helping strangers, it's not quite a challenge, as people avoid the question at no cost to themselves. But there is an invitation: care for the least among you. Some people see it as a beautiful thing to go to, and some do not. I largely chalk the latter up to their unbelievably poor taste hahaha
One of the things that disturbs me is: good intentions are often counterproductive. You mention Africa, and that is a whole basket of great examples.
Feed Africa! Save starving children! Great intentions, only: among other deleterious effects, free food drives local farmers out of business, which leads to more starving children, which leads to more well-meant (but counterproductive) aid.
Medical aid! Reduce infant mortality! Only, without cultural change and reduced birth rates, the population explodes, driving warfare, resulting in millions of deaths.
Far too much aid focuses on the short-term benefits, without considering the long-term consequences. Why? Because, cynically, a lot of aid work is really about making those "helping" feel benevolent, rather than actually making long-term, sustainable improvements.
In practice, reducing infant mortality leads rather directly to decreases in overall fertility; parents who can count on their children surviving to grow up don't have to have "extra" children to make sure that enough of them survive to become adults.
So give to the aid which has good long-term effects. As you are probably aware, most of the effort of the "effective altruist" movement is directed at figuring out which interventions are in fact helpful overall and which not. Follow them.
III. Alice and Bob: If Bob saves more sick people, he'll get exploited by needy "friends" into not-being-rich.
Whatever moral theory you use, it needs to be sustainable; any donation is a door open for unscrupulous agents to cause or simulate suffering in order to extract aid from you.
Not that Bob should do less - while it certainly would be coherent, it doesn't sound very moral - but I think the optimal point between saving no one and saving everyone is heavily skewed downward from maximal efficiency of lives saved per dollar because of this, even when you're altruistic.
For Alice, that applies too, though in a different way: while there might be fewer expectations from scammers that she spend everything if she spends a little, there will still be such expectations in her own mind; this is common enough that many posts on EAF warn against burning out.
The drowning children thing never worked for me.
In the classic example you are the only passerby when the child is drowning, so you are the only one who can save them, and according to the Copenhagen Interpretation of Ethics it's your duty to do so.
If we change the situation and on the river bank you find the child's parents and their uncles, lifeguards, firefighters, the local council and the mayor himself (they all have expensive manors overlooking the river) what duty do you have to save the child? According to the Copenhagen Interpretation of Ethics it's their duty to do so because they are first by the river but it's also their legal responsibility to care for the child. In the order of duty of saving the child you are at the end of the list.
Given the blatant purpose for which the drowning child thought experiment was created in the first place I propose the White Saviour Corollary to the Copenhagen Interpretation of Ethics:
"The Copenhagen Interpretation of Ethics says that if you're a westerner when you observe or interact with a Third World problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more."
I hate this just like I hate trolley problem questions. They all have the same stupid property that you’re asked to exercise moral intuitions in a situation so impossibly contrived that the assumptions you must accept unquestioningly could only hold in a world controlled by an evil experimenter who is perfectly capable of saving all the people he wants you to make choices between. The obvious timeless meta-strategy is to refuse to cooperate with all such experiments even hypothetically.
The SHORT VERSION of ALL these controversies is “many people suffer in poverty and medical risk because they live in bad societies but it is always and only the responsibility of better off people in good societies to help them within the system by donating as much as they can bear without ever challenging the system itself”.
In this example, you are not supposed to question the assumption that no one would save 24 children's lives a day at the trivial cost of $50 per life saved by hiring $50/hr lifeguards to work shifts, that somehow no collective action is possible either to raise money for such a spectacularly good cost/benefit ratio charity or to get the relevant public maintenance department a slight budget increase to fix the drowning hazard, and that only isolated random unlucky individuals have any power to rescue these children and must do so without ever making the public aware of their absurd plight and trying to fix things that way.
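(For what it's worth, that cost/benefit arithmetic does check out. A quick sanity check, taking as inputs only the hypothetical's one-drowning-per-hour rate and the $50/hr wage figure I used above:)

```python
# Sanity check of the lifeguard arithmetic above. The only inputs are the
# hypothetical's one-drowning-per-hour rate and the assumed $50/hr wage.
wage_per_hour = 50        # $/hr per lifeguard
drownings_per_hour = 1    # rate stipulated by the thought experiment

daily_cost = wage_per_hour * 24           # $1,200/day for round-the-clock shifts
lives_saved_per_day = drownings_per_hour * 24
print(daily_cost / lives_saved_per_day)   # 50.0 -- i.e. $50 per life saved
```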
If you want to point to some current country with a shitty government which blocks or steals efforts to ameliorate its citizens’ suffering, don’t give me a f***ing trolley problem to show me I must donate in clever ways to get past the government, save your efforts of persuasion for the bad government, or for good governments elsewhere or the UN to intervene.
Yes. As I said in my comment, a lot of "aid" is there to make the contributors feel benevolent.
There is simply no point to providing aid to cultures and countries where it cannot have positive, long-term effects, and likely just supports the existing, malicious system.
Yes, this is a totally fine argument, but it has already conceded that if that were not the case, you would have an obligation to provide aid!
And now we can argue whether the aid in question actually has the deleterious properties you assert.
Or you can say: not so fast! "I think the aid is useless, but even if it weren't, I'd still have no obligation to provide it!" Then we can focus on that claim by isolating it from practical considerations, in the hopes that if we can resolve this narrow claim, we can return to discussing the actual effectiveness.
That's the point of the thought experiments: to resolve whether your objection is actually 1. aid is ineffective, or 2. even if aid could be effective I'd have no reason to give, or both, or some third thing.
But by isolating which claim we're debating, we can stay focused and not jump around between 1 and 2 depending on how you feel the argument is going.
If your objection is truly 1., and that's why you find these hypotheticals useless, then great! But you better be prepared to do battle on the grounds of "is this aid effective" and not retreat to, "why am I giving aid to Africa when my neighbour..." as many others do.
Again, the point of hypotheticals is not to be realistic. It's to remove factors from consideration to laser focus on a single question. Generally in argument, it's easy to hop around in your logic chain or among the multiple reasons you believe something. This means you'll never change how you think because if you start "losing" on one point you don't concede "okay that one is bad, I will no longer use it" but instead hop to other considerations. These "contrived" situations are meant to keep you from hopping from "is it right to kill a person to save others?" to "Okay I think I can get out of this choice completely by doing X,Y,Z." Whether X, Y, and Z turn out to be flawed or not, you still never had to answer that question, which means that you still don't have clarity on your beliefs and under what circumstances you would change them.
Of course, it seems like most people manage to dodge even within the hypothetical, so opposed are they to systematically thinking through their beliefs one point at a time.
Certainty must be a factor here. Both the certainty of help being needed (not from general knowledge of world poverty, but from direct perceptual evidence) and the certainty of the help reaching its intended target make the responsibility more real, and vice versa.
I think you're taking people's rationalizations too seriously as reasoning. The transparently correct moral reasoning is much more likely to be rooted in relationships -- you ought to love and care for drowning kids, so if you have the right relationship with them, you simply *would* save every one you can. Which means, yes, in the modern world, living with minimal expenses and donating literally everything you earn to EA charities is a solid choice, one of the things you'd just naturally choose to do.
I believe role ethics (as seen in Stoicism - the concept is even more central in Confucianism, but I am less well read on that philosophy) offers a good descriptive account(*), more so than Copenhagen and declining marginal utility. The idea is that a person has moral obligations depending on what roles they play in society. Some of those roles we choose (like a vocation, which may carry obligations such as a ship's captain seeing to the rescue of all passengers and crew in an emergency, even at risk of going down with the ship, or parenthood, with equally strong obligations to our children), some are thrust upon us by circumstances (like the duty to rescue in the capacity of a fellow citizen in a position to do so), and some come down to us as part of being a human being living in the cosmopolis (helping those in need even if they are far off).
Now, there are and can be situations where virtues and obligations pull us in different directions, and resolving the conflict can be a nontrivial task (indeed, the Stoics deny that any actual sage - someone with perfect moral knowledge - exists), but as a practical matter it is not unreasonable to establish a rough hierarchy in which your obligations towards fellow citizens outweigh those towards other members of the cosmopolis, which is why you save the drowning child. That however doesn't mean those other obligations disappear: Alice, after doing her duty as a daughter, a neighbor, a member of her work community, perhaps as a mother, etc., would in fact still have these obligations. The Stoic perspective isn't prescriptive in the sense of saying outright "10% of her net income", but chances are a virtuous person in a position of such prosperity would be moved to act (though not necessarily in that form: personal luxury clearly wouldn't be something a virtuous person would choose, but she might e.g. feel she is in a unique position to advance democracy domestically, and use her good fortune to that cause instead).
* And I dare say prescriptive, insofar as they help make sense of our moral intuitions, which I'm inclined to treat as foundational.
This seems like a sane take. It accounts both for our intuition that Alice really ought to do her duty to her daughter, neighbor, and work community before engaging in telescopic charity, and for the intuition that we really ought to help drowning children sometimes, even when they are very far away. It also accounts for the case where Alice, living in the cabin, is called away from saving children by a more moderate need of her own child -- the baby's got colic, and needs tending -- and we don't find Alice to be totally reprehensible.
The question I always have about role ethics, or Confucian-derived ideas of li, is how I -- as an unattached, single person in this increasingly atomized cosmopolis in which we all live -- am supposed to work out what my roles or li are. There also seems to be some tension with my intuition that I ought to be pretty free: I believe in divorce, in allowing minors to be emancipated, and -- less extremely -- in moving away from one's hometown community, breaking up with friends, business partners, employers. Those freedoms seem a little bit in tension with an ethics derived from my roles.
The mechanism by which new social roles are constructed is being pushed far beyond its historical peak throughput rate, and facing corresponding clogs and breakdowns.
The argument for the Copenhagen interpretation would be that instead of optimizing for getting into heaven you should optimize for "being an empathetic person".
The person who sees dozens of drowning children every day and doesn't save them becomes desensitized to drowning people and loses their capacity for empathy.
The person who lives far away from the drowning people doesn't.
That's unfair moral luck, but that's the truth.
I will always remember when I saw a rabbi I knew giving a dollar to a beggar at a wedding. I asked the rabbi why he did that. Doesn't he already give 10 percent of his money to charity?
He said yes, and indeed the single dollar wouldn't help the beggar that much. On the other hand, giving nothing would train him to become un-empathetic. He quoted a biblical verse saying something like "when someone needs money you should be unable to refuse" (לא תוכל להתעלם, roughly "you must not ignore it") -- I'm not sure of the exact context.
Of course he still gave the 10 percent as well. He didn't think you could completely remove moral obligations by not touching them. Just that the slight additional obligation you have towards situations you touch, versus those you don't, relates to training your own empathy.
Seems like you'd train empathy even more effectively the more you help. The 10% part makes little sense next to "if you see a person in need you should be unable to refuse". Isn't donating, and then having a great abundance left and deciding not to continue helping, a form of refusal?
It is a form of refusal, but psychologically it doesn't feel as saliently like a refusal. So in terms of training your own psychology it doesn't have the same effect.
>what is the right prescriptive theory that doesn’t just explain moral behavior, but would let us feel dignified and non-idiotic if we followed it?
Nobody has found one as of yet, and not for lack of trying. I'm pretty sure that there isn't a universally appealing one to be found, even if outright psychopaths aren't included in "us".
As far as I can tell, the moral principle that the average person operates on is "do what's expected from you to maintain respectability", with everything beyond that being strictly supererogatory. This is of course a meta-rule, with actual standards greatly differing throughout time and space, but I doubt that you'll do better at this level of generality.
About: The 37th vs 1st child aspect.
Never thought about that one before, but it feels natural instantly.
1.) it will help to more evenly distribute the "giving effort" among those that can help (in some situation or other; it does not need to be the same situation)
2.) in real life the odds of success, the utility to the one saved, and the cost to the one saving all carry some uncertainty. Having a degressive motivation to help leads to a more stable equilibrium between "should have helped but did not" and "irrationally exhausts herself and thereby risks the tribe's success" (see the toy sketch after this list)
3.) it limits the personal Darwinian disadvantage against freeriders in one's own tribe (even if all the drowning children are from a different tribe).
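Here's a minimal toy simulation of point 2; all the numbers (tribe of ten, geometric decay of motivation, exhaustion threshold) are my own illustrative assumptions, not anything rigorous:

```python
import random

# Toy sketch of point 2 above. Each drowning child is encountered by one
# random tribe member; the chance that member performs yet another rescue
# decays geometrically with the rescues they have already done that day.
TRIBE = 10
CHILDREN = 40   # children needing rescue in one day
BUDGET = 5      # rescues a member can perform before exhausting themselves

def simulate(decay, seed=0):
    rng = random.Random(seed)
    rescues = [0] * TRIBE
    for _ in range(CHILDREN):
        member = rng.randrange(TRIBE)
        if rng.random() < decay ** rescues[member]:
            rescues[member] += 1
    return sum(rescues), sum(r > BUDGET for r in rescues)

# (children saved, members exhausted)
print(simulate(decay=1.0))  # constant motivation: nearly all saved, helpers tend to get overrun
print(simulate(decay=0.5))  # degressive motivation: fewer saved, effort spread, nobody exhausted
```

The degressive run saves fewer children on any given day, but it keeps every member below the exhaustion line, which is the stable equilibrium the list gestures at.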
We are all in the cabin. The cabin is global. Despite the billions given in foreign aid and charity for decades, there are still people living in poverty and dying of preventable diseases etc. all over the world. And any money given is likely to go to corrupt heads of state. And regardless, the only countries with a booming population are all in Africa, despite the low quality of life, and we hardly need more Africans surviving.
"billions" is a superficially big number, but a tiny fraction of world GDP and collectively dwarfed by what countries spend on weapons to kill and maim other human beings
The global incidence of extreme poverty is in fact going steadily down, which is what we'd expect to see if those charitable interventions were working.
That is overwhelmingly due to economic growth and tech advances. For example, due to Fritz Haber there are billions more people alive than there would otherwise be. Individual charitable aid is so tiny it wouldn't even fix poverty within the nations of those that give it, let alone fix the world.
Of course that doesn't mean we shouldn't all give more. But what is optimal? If we had focused that much more on charity, we wouldn't have had such focus on the growth that allowed the people who now need that charity to exist in the first place.
Tech and charity aren't mutually exclusive causal factors. Pretty sure we didn't get modern insecticide-infused bed nets by banging rocks together, or having some enlightened saint pray for an otherworldly being to restock the warehouse. Norman Borlaug's dwarf wheat was paid for partly by the Rockefeller Foundation and is right up there with the Haber process in terms of ending food scarcity. Lots of EA charities are evaluated in terms of the economic growth they unlock.
Is the magical city of drowning children called "Omelas"?
The city is an amazing place to live in, with its high tech infrastructure and endless energy, but if you aren't living there, you might not have heard in your childhood that the city is directly powered by drowning children. Every child you rescue from the river, lost to the system, reduces life satisfaction of millions of citizens by about 1%. The children need to drown for the city to be as great as it is.
You're allowed to leave the city after your schoolteacher takes you all on a field trip to the hydro station, but it's only allowed until you turn 18, and if you walk away you might slip and fall into the river.
I have updated towards the following view on the drowning child thought experiment:
1. The current social norms are that you have to save the drowning child but don't have to save children in Africa.
2. The current social norms are wrong, in the sense that ideal moral ethics disagrees with them, but not in the direction you would think. According to ideal moral ethics, the solution isn't that you have to save children in Africa. It's that you don't have to save the drowning child either.
3. Obviously, I would still save a drowning child. But not because ideal moral ethics requires it; rather, because of a combination of my personal preferences and the currently existing social norms.
It's a big problem, because society runs on people doing these nice one-offs, but then other people demand that you universalize them, and then people stop doing nice things.
I basically agree with the conclusion (except for the "not letting capitalism collapse" part, of course - it will collapse anyway, as is its nature; the point is to dismantle it in a way that does not tear society and its ethical foundations apart).
But the way you arrived at it was... wild. I was (figuratively) screaming at my screen halfway through the essay that if your hypothetical scenario for moral obligation involves an immensely potent institution sending dying people specifically your way, the obvious way to deflect moral blame, and the only natural reaction honestly, is asking why it doesn't simply use but a tiny fraction of its capabilities to save them itself instead.
Basically, there's a crucial difference between a chaotic, one-off event where you're unpredictably finding yourself as the only (or one of the only) person capable of intervening, and a systemic, predictable event where the system is either:
- alien and incomprehensible - e.g. you can save a drowning rabbit, but have no way of averting the way nature works. Here, Copenhagen stands: you have no moral responsibility to act, but once you act, you should accept the follow-up of caring for the rabbit you've taken out of the circle-of-life loop.
- local and familiar - in which case all notions of individual responsibility are just distractions, the only meaningful course of action is pushing for systemic change.
Why do you think that capitalism will collapse?
The most orthodox Marxist crisis theory, based on the tendency of the rate of profit to fall, depends on a labor theory of value, which seems squiffy.
A revolution of the workers brought on by an intolerable intensification of their exploitation seems less likely in consumer capitalism, where the workers' leisure and consumption are an important part of the system.
I'm not opposed to the idea, but I guess I don't necessarily believe that the inherent structural contradictions of capital will lead to its collapse inevitably.
I personally have a different idea of why capitalism will collapse, which goes Capitalism > AI > ASI > Dominance by beings capable of central planning.
Interesting! That's something I've thought about as well, but I see that as a relatively hopeful outcome, a possibility, not something that for capitalism is "in its nature."
I suspect definitional and/or scope mismatch here. To clarify - I am (arguably) not a Marxist, more specifically - not a stereotypical Marxist operating on a grand theory of history where capitalism is a discrete stage of civilizational progress, to be replaced with a more advanced one. I am not saying that people will stop trading or [whatever you think capitalism means]. I am saying that societies based on capitalist principles are bound to experience a collapse - which alone is not saying much, since all societies in history eventually collapse, due to more general social dynamics - and, more strongly, that in their specific case capitalism is the very mechanism bringing about this collapse, and as such, not worth trying to preserve.
(In a vastly simplified way, how this plays out is - wealth, and thus power, concentrates in fewer and fewer hands, eventually creating a financier class at the top of society. Because it's measured in money, the more concentrated wealth is, the more decoupled it is from [whatever we want from a productive economy]. This eventually makes various destructive (war) or unproductive (financial instruments) "investments" more profitable than expanding productive capital; this makes the economy stall; this makes the ruling class push everyone else into debt to maintain their profit levels; this further immiserates the population and bankrupts the state, causing a collapse; and this makes the financiers pack up their money and escape for greener pastures, while the regular citizens of a once-wealthy society are left cleaning up their mess.)
(It happened several times in history, and we can observe it happening once again right now, in real time, in the so-called first world economic block, with US as the epicenter.)
Hm, interesting! So you don't have a stadial theory of history, but you believe that any society is eventually going to collapse, which in capitalism will come from too concentrated wealth becoming separated from what's really productive in an economy.
You gave one optimistic view of how AI could disrupt this, but couldn't it be possible that AI (-->ASI, as you put it) allows the financier class to keep consolidating forever? If they have something that makes more and more of the stuff they want, automates more and more of their economy: can't we just end up being cut out of the picture, with not much of a mess to clean up in the first place?
I agree with most of that, but I think it's solvable without a collapse. There are two different things financiers are doing, which the current system (and to some extent even the financiers themselves) mostly fails to distinguish: innovation, and extraction. Making the overall pie bigger vs. capturing a bigger piece of a fixed pie for yourself. Building capital vs. buying land to collect rent.
A Georgist land value tax skips straight to the end of that inevitable "extraction concentrates wealth" progression and consolidates the resulting power into something publicly accountable. UBI keeps the local pastures green. Financiers who want more profit have to accept the risks of innovation, delivering products that some other successful business and/or the average UBI recipient is willing to pay for.
Georgist LVT in a modern society also needs to tax other sources of rent extraction like the network effects that keep Facebook afloat, and the theory there is not nearly as clear unfortunately.
There's an existing regulatory framework for public utilities.
Not Just a Thought Experiment
I live in a third world country with first world money. My wife runs a small animal rescue out of our property and sponsors animal neutering around the city. The country is also very poor and most humans here are in what most westerners would consider a very bad situation. I spend most of my time and money on research to cure aging since I believe aging is the number one cause of pain and suffering in the world and I believe curing it is within reach.
My wife had to almost entirely stop her animal rescue efforts because it got to the point where it was consuming all of her, and much of my, time to the point where it was significantly interfering with our lives. She has friends in the rescue community who have completely ruined their lives over it, living in squalid conditions because their money all goes to feeding an army of dogs and cats. She also used to volunteer to help homeless children, but that similarly was consuming her life.
Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming. You can very easily ruin your own life trying to save everyone one at a time, and from an optimization standpoint there are way bigger bangs for your bucks.
Funny? Story (and what prompted me to comment):
About 10 minutes after I finished reading this article I went out to walk around my property, which I haven't done in about a month or so. I got about 20 steps out when I ran into a terrified abandoned kitten. Since I have over-active mirror neurons I was essentially forced to pick it up and rescue it before returning to look for its mother or any siblings in the area. This is my punishment for leaving my walled garden that blocks out the sound of screaming kittens and starving children.
If I owned the property next to the child river I know exactly what I would do. I would build a wall and soundproof my house so I could ignore the problem, knowing that there are better uses of my time and money than completely ruining my life saving 24 children a day. I would strive to avoid going outside, but when I had to I would almost certainly rescue another child before hurriedly returning to my walled garden.
I don't have a solution to the moral dilemma, only a solution to my mirror neurons that make me do irrational things. I suspect that most humans with functioning mirror neurons are not applying some complicated moral philosophy, they are just responding the way they were evolved to respond when they witness pain and suffering of others. Now that we can witness and impact things further away, these mirror neurons can easily be overwhelmed and cease to function as a good solution on their own.
That's why big social problems should be left to organizations, not individuals. If a social worker for a nonprofit gets overwhelmed, they can quit their job and go back home. No one will think less of them. But if you have to live with it, it becomes more difficult.
Organized social programs often get co-opted by Moloch, doing more harm than good. I don't have a solution to this, but I am unconvinced that organizations should be assumed to inherently do better than free will and law of large numbers.
Instead their net effect may be to breed cycles of dependency that rob populations of agency and personal responsibility and prevent progress at scale.
Clearly some cultural philosophies do better than others.
You seem to have a weird idea of what Moloch is. Moloch isn't just "everything bad", Moloch is when a Nash equilibrium of independent actors is an ethical or welfare race to the bottom. It's inherently harder to avoid a bad Nash equilibrium the more players there are in the game.
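To make the distinction concrete, here's a toy example of my own (not from the linked post): an n-player public-goods game in which defection dominates for every player, so the only Nash equilibrium is the outcome everyone likes least.

```python
# Toy bad-Nash-equilibrium ("Moloch") illustration: an n-player public-goods
# game. Contributions go into a pot that is multiplied and shared equally.
# Since multiplier / n_players < 1, each unit contributed returns less than
# it cost the contributor, so defecting is always individually better.
def payoff(contributed, total_pot, n_players, multiplier=1.5):
    return total_pot * multiplier / n_players - contributed

n = 10
print(payoff(1, n, n))      # everyone contributes: each nets 0.5
print(payoff(0, n - 1, n))  # you alone defect: you net 1.35, so defection dominates
print(payoff(0, 0, n))      # everyone defects: each nets 0.0 -- the Nash
                            # equilibrium, worse for all than the cooperation
                            # it crowds out
```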
The original definition of Moloch, per Wikipedia is:
The fire god of the Ammonites in Canaan, to whom human sacrifices were offered; Molech. Also applied figuratively
This is almost precisely the example I give of a centralized power structure that destroys lives at scale.
I appreciate your definition, but these two things are not the same as far as I understand it, and yes, I use the Wikipedia version of Moloch as one of my mental models.
Around here the definition in common use comes from this post on Scott's old blog: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Thank you, and yes, I read that one a while ago but lost track of the specifics.
For the sake of argument, let's call my 'Moloch' Fred. Given that I have Fred here, does it make the point more worth considering? If we have both Fred and Moloch, my thesis is that my concerns as stated are still valid.
Why do you think that there isn't an equilibrium here?
* People seeking power and money are attracted to running/operating organizations with lots of power/money.
* People wanting to do good aren't as motivated to run large organizations with lots of power/money (it is miserable work).
* Any large social endeavor that has sufficient power or money to enact meaningful change will eventually be dominated by those seeking power and money, rather than those seeking to do good.
* Eventually any large social endeavor will no longer do good.
Note: The above is just a high-level, hand-wavy illustration, but I am not convinced that we can rule out Moloch here.
Large social endeavors do sometimes (often?) end up having all or the majority of their power and money skimmed off by insiders for their own use. However, this doesn't seem to happen 100% of the time. I mean, if principal-agent problems were this bad, corporations wouldn't function at all either and the economy would be reduced to humans acting as individual agents. (And corporations do also fall into ruin by this mechanism.) So I don't think this makes the argument that the optimum amount of non-market interventions is zero and we should just accept that Moloch wins everything always.
Are there examples of powerful/rich charitable organizations not running into this problem over the long term?
I can definitely believe that this can be delayed for quite a while if you have a strong, ethos-aligned leader, but eventually they need to be replaced, and each time replacement happens you may not get lucky with the new pick. This would suggest that while Moloch will eventually win, there is a period of time between now and that inevitability where things can be good/useful. Perhaps there is value in accepting an inevitable fate if there are positive things that come of it along the way? Or perhaps we can try to find ways to shut things down once Moloch shows up?
> Organized social programs often get co-opted by Moloch, doing more harm than good.
That's a common meme, but I don't think it's always, or even often, the case. I've personally worked with an organized social program of massive size, funded by a large network of individual donors, doing amazingly good work over decades.
> Our solution: Build a big wall around our property and don't leave the house. Every time we leave the house, we see the suffering everywhere and it is overwhelming.
huh. It's like Siddhartha Gautama's origin story, but in reverse.
(I'm not trying to be sardonic or condescending. It's just an interesting observation about how one man's modus ponens is another man's modus tollens.)
I wasn't aware of that origin story, but you are right it is exactly the opposite of my solution! Perhaps there is some optimal amount of exposure to pain and suffering one needs in order to take appropriate action to address it while not also being debilitated by it?
I have my own strong opinions on the Drowning Child experiment, though I've withheld them, so far. Because about a month ago, I basically said I'd tackle it on my own substack, and then procrastinated since I'm such a lazy layabout. Nonetheless, I'm confident that I've got it figured out, in a way that solves several other ethical questions and adds up to normality. At the highest level, it's tied together by expectation management. Which dovetails with Friston and Bayes Theorem. But it's a lot to explain, and a bit woo.
For now, I'll just say that ethics is basically social engineering. "Actual" engineering disciplines (e.g. Civil Engineering) recognize that reality imposes hard constraints on what you can reasonably accomplish with the resources you have. If you wanna launch yourself to the moon with nothing but a coke bottle and a bag of mentos, you're not gonna make it. Likewise, any ethical system that says "donate literally 100% of your money to charity, such that your own person dies of starvation in a matter of weeks" is not sustainable. It's not sustainable individually, and it's not sustainable en masse. You have to consider how things interact on a local level. Which is yet another reason why Utilitarianism Is Bananas (TM). I.e. part of the appeal of Utilitarianism is the abstracting/universalizing/agglomerating instinct to shove all the particularities of a scenario conveniently under the rug. I.e. spherical cow syndrome [0].
"Let's save the shrimp! And build a dyson sphere, while we're at it!"
How?
"details... details... "
Sure, if you have a utility function that values the well-being of others, then perhaps you want to give a portion of your resources to charity. But you have to balance it with keeping your own system running. Both physically, and psychologically. And the burden you take upon yourself should, at most, equal the amount of stress you can handle. Which varies from person to person. E.g. commenter Dan Megill mentions [1] that the charity-doctors who denied themselves coke/chocolate/internet didn't have the mental fortitude to stay in Africa. To me, what this indicates is that Peter Singer psy-oped them into taking on more suffering than they could personally handle, and they buckled. Materials Science 101: different materials have different stress-strain curves [2]. Materials are not all created equal.
In sum, there's no objectively optimal amount of exposure. It entirely depends on what you can handle, and what you're willing to handle. I.e. it's subjective. I.e. the weight of the cross you bear is between you and your god.
[0] https://en.wikipedia.org/wiki/Spherical_cow
[1] https://www.astralcodexten.com/p/more-drowning-children/comment/102299122
[2] https://en.wikipedia.org/wiki/Yield_(engineering)
I thought this was mostly about not wanting to have your life ruined / being exploited? Which are closely related.
If I see a drowning kid, saving it would be inconvenient, but it's not going to ruin my life. And this is partially because there is no Big Seamstress pushing kids into water to ruin people's clothes and send their stock to the moon.
If a Megacity is skimping on lifeguards and creating a situation where I can save those kids (and also somehow there is no other person upstream or downstream willing to help them?), saving all the kids would ruin my life (I couldn't even sleep properly). And related to that, the city is saving relatively little money (the cost of a 24/7 lifeguard rotation, so maybe 2 M$/year if you paid them SWE salaries) and getting a rather huge benefit. If they value a life at 1M$, they get from me 24*365*1M$ = 8760M$ / year.
If a city spends millions to build a dam but counts on extracting billions per year from my unpaid labour, then yeah, they are kind of exploiting me.
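(Spelling out that back-of-the-envelope math, using only the rates and valuations assumed above:)

```python
# Back-of-the-envelope check of the exploitation figures above, using the
# comment's own assumptions: one drowning per hour, a $1M value per life,
# and ~$2M/year for round-the-clock lifeguard coverage at SWE salaries.
children_per_day = 24
value_of_life = 1_000_000       # $, the city's assumed valuation
lifeguard_cost = 2_000_000      # $/year the city avoids paying

benefit = children_per_day * 365 * value_of_life
print(f"${benefit / 1e6:,.0f}M/year extracted from the cabin-dweller")        # $8,760M/year
print(f"{benefit / lifeguard_cost:,.0f}x the cost of just hiring lifeguards")  # 4,380x
```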
With charities the situation is trickier - if a charity is really saving lives at low cost then it would be great to donate to it (some amount; you probably don't want to ruin your life). But you're donating money, so it's harder to verify that's actually happening. And people have an obvious incentive (getting your money) to misrepresent the situation to you (so you should be more worried about whether your money is actually being used in the way they claim).
And people setting up a generic argument which, if accepted, would oblige you to potentially ruin your life (by giving away all your money) while potentially benefiting them (by directing money in their general direction) is extra suspicious.
I don't want to say that one should never give money to charity. I agree with what I think was the original premise of EA (find out which charities are effective, and exactly how effective, and try to use whatever money you want to spend charitably as effectively as possible). But it's really hard!
I think most of the criticisms of these extreme life/death hypotheticals as teaching tools or thought experiments are valid, but I'll add another one I think is pretty important.
There never seems to be any scope for local or medium-scale collective action. It's always you, alone, with the power of life/death, or else Angels of Heaven making grand population-wide agreements. For example:
What if in the cabin by the river of children scenario, you found three "roommates" to live there with you (presumably all doing laptop work-from-home etc.) and you all did six-hour shifts as lifeguards, saving all the children? And why does it take a "lobbyist" to possibly get Omelas to do something about the drowning children problem? Ever see "Frankenstein"? You could pick up a drowned kid and walk to City Hall with her body, that might get some attention.
And in reality, that is how things usually improve in human society. Some local group takes the initiative to start reducing harms and improving people's lives. Sometimes they grow and found Pennsylvania. Mostly they gain status and attention and can have their work helped by or taken up by governments (assuming any govt is not COMPLETELY corrupt.) Global-level co-ordination only happens in like The Silmarillion -- here on earth see previous about TOTAL corruption.
BR
PS --- The second-most cynical take here would be to get an EPA ruling classifying the stream of children as an illegal "discharge" into a public waterway, getting an injunction against the city for polluting the river with dead kids, which honestly at the rate of one/hr would be some VERY significant contamination indeed, even if you had some horrible mutant species of carrion-eating beavers downstream building their dams out of small human bones.
A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you do with them? This quickly becomes completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown.
> A more cynical take comes to mind -- after a week of tag-team lifeguarding, you will have 168 children. What do you do with them? This quickly becomes completely unmanageable. In fact after 24 hours all the kids would be so annoying you'd probably start letting the little bastards drown.
Even more cynical take: sell 'em to child sex/labour trafficking gangs. The parents obviously don't care, since they all continue to live in a city that allows children to fall into waterways and get swept downriver to drown unless a random stranger saves them. The megacity even more obviously doesn't care about what happens to its minor citizens. The problem is set up such that if you, personally, singlehandedly don't intervene the kids will drown. So clearly nobody is looking for them or dealing with them or trying to prevent the drowning. We don't even know if the bodies are collected for burial or if that too is left to whoever is downstream when the corpses wash ashore.
So who is going to miss one more (or 168 more) 'dead' kids? Profit!
That was my immediate thought, but my comment was already too long. Maybe the Clinton Foundation could send a van a couple times a day to scoop up this free resource. Or establish a Jonestown-style mini-nation someplace and train your infinite stream of children, who owe you their lives, to be an invincible army. So many possibilities.
Exactly! You now have this never-ending (it would seem) source of free labour literally floating down the river to you. For the megacity, this is only 999th on the list of "we're in really deep doo-doo now", so it could even be argued that you are giving the children a better life (how bad must life in the megacity be, if there are 998 *worse* things than drowning kids on the hour every hour all year round?) no matter where you send them or what you do with them.
Yes, an army of infinitely-replenishing children would be great, kind of like Cadmus and the Dragon's Teeth in Greek mythology. But for maximum Dark Lord chaos points, I think my own army of mutant carrion-eating beavers would slightly edge it out. Add in some weaponized raccoons and it's "Halo 4: The River Strikes Back"
Split the difference: hand the rescued kids over to your cadre of mad scientists (you *do* have a cadre of mad scientists, don't you?) as experimental material to help create the next generation of mutant carrion-eating beavers and weaponized raccoons! After all, the carrion for the beavers has to come from somewhere, right, and where better to ensure a steady supply than the spare parts from the lab experiments?
Train the kids to train the vulture-beavers to construct the weir to simplify the rescue process, then treat the overall situation's rank among that megacity's problems like a scoreboard for your raiders to climb.
https://viruscomix.com/page464.html
The observer will destroy reality, so no problem, right?
>People love trying to find holes in the drowning child thought experiment. [..] So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.
Alternative way of framing this article:
> People love trying to find holes in the drowning child thought experiment (DCTE) counter-arguments. So allow me to present even more contrived scenarios that are not the DCTE, and apply the DCTE counter-arguments to those instead and see how they fail.
My takeaway is that you're nerd-sniping yourself by deploying ever more sophisticated arguments against the minimally sophisticated "I'll know it when I see it" approach that most people take to life in general and the DCTE in particular.
My intuition goes towards: accidents vs. systemic issues.
In IT, we have a saying: "your lack of planning is not my emergency".
Similar vibes here. Why is there a drowning child in front of me? Is it an unfortunate accident, or the predictable and logical consequence of a really poor system? I feel absolutely no responsibility for the second. In this example:
> Every time a child falls into any of the megacity’s streams, lakes, or rivers, they get swept away and flow past your cabin; there’s a new drowning child every hour or so.
Not my problem. Fix. Your. Damn. System. Or don’t — at this point I don’t care.
The point of the hypothetical is that this isn't really a *source* of drowning children, it's just that all the children that would normally drown in that big city end up in one place.
Still not my problem: what if my cabin is situated such that before the children coming down river reach it, they all get swallowed up by a sinkhole? So I don't even have any drowning children to save, but they're still drowning. The city is the source of the drowning children, let them sort out why one child every hour falls into their damn rivers and lakes.
The absence in general of a Duty to Rescue stems from the principle that one shouldn't be obliged to put oneself at risk on behalf of a stranger to whom one owes no allegiance or duty of care - and the rescuer's risk may be quite different in kind from the stranger's predicament (assuming one didn't cause the latter).
Even with the example of the kid drowning in a puddle, who is to say there isn't a bare mains electrical cable under the water that would electrocute a rescuer as soon as they touched the water or the child?
There's also the snow shovelling example, in which if you public-spiritedly clear the snow from the sidewalk adjoining your dwelling (a sort of anticipatory rescue) and a passer-by slips on the patch you cleared, then they can sue you for creating the potential hazard, which they could not had they slipped on the uncleared snow!
Or you could pull someone from a crashed car that was in imminent danger of catching fire or being rammed by another vehicle, but in the process break their dislocated neck so they end up paralyzed for life, again risking a lawsuit.
I gotta be honest with you fam: all that such posts do is make me steel my heart and resolve to not rescue local drowning children either, in the interest of fairness. One man's modus ponens is another's modus tollens and all that.
What you're trying to do here is to erase the concept of supererogatory duty. It's inherently subjective and unquantifiable so every time you say "well, you don't have to do it, but objectively you should donate exactly 12.7% of your income to African orphans, but you don't have to do it," you're not fooling anyone, you just converted that opportunity for charity to ordinary duty.
So here's an alternative you have not even considered: I decide that I have a duty to rescue my own drowning children, I decide that I have a duty to rescue my neighbors' drowning children for reciprocal reasons (and rescuing any drowning child in sight is merely a heuristic in service of that goal), but rescuing African drowning children is entirely supererogatory, I might do it when and how I feel like it, but it's not my obligation that can be objectively quantified. This solves all your "river of death" problems without any mental gymnastics required.
In conclusion: https://knowyourmeme.com/photos/2996252-moral-circles-heatmap
It sounds like you are perhaps saying that everything is commensurable? Except like it’s some kind of gotcha?
https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/
Kind of, but not really.
What I'm getting at is, when someone proposes that from assumptions A, B, and C follows conclusion D, you can agree that it does logically follow, but disagree that D is factually true and instead reject some of the original assumptions.
So when someone proposes that I have a moral duty to save a drowning child in front of me, and that the life of a drowning child in Africa has an equivalent moral worth, I can disagree with their conclusion (that I must make myself miserable by donating all my money to malaria nets even though that still doesn't make a perceptible dent in African suffering) and declare that no, for my purposes the children are not fungible, and also that I don't have a duty to save the local child. What's going to happen if I don't, will Peter Singer put me into an ethical prison? Even the regular police, if I were to treat them as the source of morality, would leave me alone in most jurisdictions -- or in all of them, if I tell them that I don't swim very well.
Then someone might ask me, won't I feel terrible watching the child drown? Sure, *that's* why I'll try to save it, but I don't feel particularly terrible about knowing that thousands of children drown in Africa because I *don't see them*, and why would I try to rewire myself about that? Reaching https://en.wikipedia.org/wiki/Reflective_equilibrium goes both ways and nothing about the process suggests that the end result will be maximally altruistic. So I can choose to retain my normal human reactions to suffering that I can see and alleviate, but harden my heart against infinite suffering elsewhere.
Similarly, wouldn't I get ostracized by the people of my town for letting the child drown, because we have an understanding about saving each other's children? Sure, and that's another good reason to save the local child that doesn't generalize to saving African children because Africans won't help me with anything and my fellow townspeople won't be upset about me not helping Africans without expecting reciprocity.
Then Scott says, all right, but watch this, and adds a bunch of different epicycles, which he then invalidates with more convoluted thought experiments and replaces with further epicycles, but I still find the end result unsatisfactory.
The solution proposed here has a fatal flaw: Rawls' Veil of Ignorance doesn't actually exist. I understand that it would be very nice if it existed, it would let us ground utilitarian ethics pretty soundly, but unfortunately it's completely made up.
The solution in the post you linked, to donate 10% of your income to charity, is also kind of incomplete, because it still tries to make a utilitarian argument, but then suddenly forgets all its principles and says that it's OK to donate 10% because most people donate less, so you can just do that and sleep soundly. Why?
What is, if not outright missing (upon rereading both posts), then at least not properly articulated, is the distinction between ordinary duty and supererogatory duty, such as donating to charity. Ordinary duty - and I'm willing to walk back my objection and include saving a local drowning child here - you are obligated to fulfill. Anything above and beyond that you can do if you want, but it's not mandatory.
And that's the crucial part that allows you to have arbitrarily whimsical justifications, such as: really I'm just satisfying my desire to make the world better, so donating exactly 10% of my income scratches my itch; poor Africans are welcome. Or you can imagine that there's a God who will reward you with a place in heaven, or that you entered an esoteric compact before your angelic soul incorporated in a body, or whatever satisfies your desire to feel like a nice person without too many troublesome thorny edge cases.
There's something charming about an atheist solving ethical dilemmas by recommending we behave as if an angelic coalition is still in effect.
Good post. And I approve of thoughtfully engaging with the substantive details of all ideas. But throughout the post I couldn't help but constantly think "OK, but the main point is that the Copenhagen Interpretation of Ethics has nothing to recommend it as a prescriptive theory. That seems to be the bigger issue."
The person saving the children washing out of the mega city is obviously acting extremely immorally.
"this is only #999 on their list of causes of death"
By saving those children they are neglecting 998 higher priority interventions. For every child saved from drowning they are willfully killing a much higher number of children.
The drowning child saver is a monster by Scott's reckoning.
I feel you're neglecting neglectedness as a consideration, and tractability seems like a straightforward consideration to add.
I am very fond of Scott, but these sorts of thought experiments just feel meaningless to me. This is probably a function of different starting premises. I have been reading and reflecting on a lot of moral philosophy in the last years, and the place I've (not dogmatically) arrived at is some type of non-realist contractualism, which means questions of 'ethical' behavior are basically meaningless. There are contracts (formal and informal) one submits to when part of a society, and beyond them there are preferences, which people are unconstrained to change except if they want to. Morality is just a strategically useful evolutionary strategy (both natural and cultural) that allows individuals and the groups they belong to to prosper.
Tbh I find such discussions rather tiresome. Moral intuition evolved not to help us make better moral choices, but to improve our chances of reproduction. Thus the inherent moral framework built into every human is "be as selfish as possible as long as it does not reduce your social standing in your tribe of max. 50 people".
So either go and donate most of your money for mosquito nets for African children or admit that you are not trying to maximize morality in your decisions.
I can easily admit that I like to eat fast food even though I know it's not healthy, because it triggers evolved cravings and it's easier than making the right choices. Moral frameworks like the Copenhagen theory are the intellectual equivalent of saying "if you eat with friends, you only have to count the calories that you eat more than everyone else". It's bullshit and you know it. Stop rationalizing poor decisions and own them, if nothing else.
Actually by relocating to the drowning child cabin you are given a wondrous opportunity to be in the top .01% of life-savers historically and you should really be taking advantage of it, unless you are retiring to your study to do irreplaceable work on AI safety or malaria eradication.
Yeah I kept thinking about this. Perhaps the broader world of the hypothetical is extremely strange -- certainly our glimpse is -- but it would be absurd for anyone to be so sure of their work that the cabin isn't an amazing opportunity. Even the few (less than a hundred) people who have saved more lives could not have had this level of certainty in their impact. The real question is, how does the cabin not get bid up in price by people most willing to take the opportunity? Then it should be allocated to someone who would use it to the max and have low opportunity costs, I would think. You only need like ten sane/normal/good people in the entire world to get a fairly good outcome in that situation, assuming the context isn't saturated with even better opportunities.
I think you're all missing the obvious solution: drain the river. Kids can't drown if there's no river to drown in, now can they?
I think this is the pretty obvious problem with the whole post. It's an appeal to ethical intuitions, but ethical intuitions are formed by experience and interaction with the world as it exists. In a world without gravity, my horror at seeing a child falling off a cliff would be entirely inappropriate. So the extreme hypotheticals don't "isolate the variables," they just trigger the realization that "this world is different."
I strongly suspect that there is no such thing as a complete and internally consistent moral framework. The obsession that EA types have with trying to come up with a set of moral axioms that can be mapped to all situations is pointless.
Moral frameworks are an emergent property of society which make them effectively determined by consensus, weighted by status and proximity. The problem is that the individual judgements that coalesce into a consensus are not derived from some abstract fundamental set of principles isolated from reality, they're determined by countless factors that can't be formalized or predicted.
For instance...
I could walk past a drowning child and suffer reputational damage.
A priest could walk past in a deeply religious society and declare that the child is the devil and so deserves to drown.
The child could be drowning in a holy river that is not to be touched, so a passerby is praised for their virtue in ignoring the child and respecting the river gods.
An exceptionally charismatic individual could start a cult in ancient Rome around not saving drowning children because Neptune demands sacrifices. This cult outcompetes Christianity and becomes the foundation of all of western civilization. That passerby is not evil, he's just religious and very orthodox.
An even more charismatic individual could convince an entire nation to adopt a set of beliefs within which saving a drowning child is dysgenic, because a healthy child would know how to swim.
You can keep going on and on...
It's better to adopt the general consensus of the society within which you exist, or, if you insist on changing the status quo, play the status game to increase your and your group's influence on the consensus. Trying to come up with a logical framework is not it, because that's not what normal people are basing their judgements on.
ISTM this model is failing to capture all the variables involved. Why on earth /wouldn't/ we be obligated to save the hourly drowning child, forever?
We have a habit of excluding physical and mental health from these calculations. The wet suit and missed lunch don't matter, but dustspecks in the eye, forever, with no prospect of an end, add up.
Consider a model where people generate a finite amount of some resource each day. Let's call it "copium" for convenience. Self-maintenance costs some variable amount of the resource over time. This amount varies randomly, in accordance with some distribution. You can approximate some upper and lower bounds on how much copium you're likely to need to get through the day, but you can't know ahead of time. All decisions and actions you perform cost copium. If you incur a copium cost when you have no copium left, you take permanent damage. If you accumulate enough damage, you die.
This brings the difference between the one-off case and the hourly case into focus: the one-off scenario is worth the copium spend, but in the ongoing scenario you predictably die unless you can make it fit your copium budget.
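Here's a minimal sketch of that model; every number is my own illustrative assumption, chosen only to show the shape of the asymmetry:

```python
import random

# Minimal sketch of the copium model. Daily copium is fixed, self-maintenance
# costs a random amount, rescues cost copium, and spending past zero converts
# into permanent damage; enough damage kills you.
DAILY_COPIUM = 10.0
MAINT_LO, MAINT_HI = 3.0, 9.0   # random daily self-maintenance cost
RESCUE_COST = 4.0               # copium per child pulled out
LETHAL_DAMAGE = 50.0            # accumulated damage at which you die

def days_survived(rescues_per_day, horizon=365):
    damage = 0.0
    for day in range(1, horizon + 1):
        copium = DAILY_COPIUM - random.uniform(MAINT_LO, MAINT_HI)
        copium -= rescues_per_day * RESCUE_COST
        if copium < 0:                  # overdraft becomes permanent damage
            damage -= copium
        if damage >= LETHAL_DAMAGE:
            return day
    return horizon

random.seed(0)
print(days_survived(0))       # no rescues: survives the whole year
print(days_survived(1 / 30))  # roughly one rescue a month: also survives
print(days_survived(24))      # one rescue per hour: dead almost immediately
```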
The rule then becomes - help however many people you sustainably can, and no more than that unless you'd also be willing to sacrifice yourself for them in more immediate ways (the answer to "would you be willing to die to save hundreds of children?", for many people, isn't "no"!)
In the moment, though, when forced to actually decide, the difference between whether you act like a Singer or a Sociopathic Jerk is down to the amount of copium you have left for the day.
Another part of why the cabin in the woods (and saving lives through charity on the other side of the world) feels different from the other examples is that millions or billions of other people could act to prevent the deaths (even if they don't).
Whilst if a child is drowning in front of you, only you can stop them dying.
The other element that I would add is reciprocal moral obligations. We all have different sets of moral obligations to our direct family, extended family, friends, neighbours, town, country, humanity etc.
Whilst it might be great if everyone in the world treated everyone else like family, it would quickly fall apart to defection.
In most nice societies, you have a moral obligation to help someone whose life is in danger if you are one of the few people who can help and it is relatively simple to do so. This is a great thing and has, along with other social ties, taken hundreds (or thousands) of years to create. To prevent moral hazard (and also defection) it doesn’t really apply if someone has repeatedly got themselves into the situation - it is about extraordinary aid when something goes accidentally wrong.
This explains why in the cabin situation I feel morally mixed - the population of the megacity know this is happening and have clearly chosen to let it happen despite it being easily preventable. However I feel bad for the children (they haven’t made that decision) and at the time of their drowning I am the only one who could save them. But it wouldn’t be simple to save all of them.
This also explains why I don’t naturally feel much of a moral obligation to give to effective charities saving lives on the other side of the world. They are not in any of the communities I have varying degrees of moral obligation to (other than humanity as a whole). Furthermore those with much stronger moral obligations to those people are clearly failing them (although this varies a bit by country). There are also many others who could save them.
The big question is whether this notion of reciprocal moral obligations, to differing extents, to the different communities we are part of - which most of us who have been brought up in 'nice' circumstances feel - is logically correct. I think Scott would say they are all very well, but that we should fulfill our obligations to them and then focus on how we can do the most good for humanity as a whole, from a broadly utilitarian perspective. Clearly in a direct-impact sense this is correct, but thinking through secondary impacts I'm less sure.
Most directly and specifically, take charitable donations from wealthier people in western democracies: if people in a country feel like the successful aren’t giving back to them and the country, this undermines support for the capitalist policies that enable the wealth to be generated in the first place.
More broadly, I don’t really think you can just ‘fulfill’ your obligations to those other communities. Part of those obligations is that the more you have, the more you give back (e.g. a rich person donates to the school they attended; if you have more free time than your siblings, you are expected to help out your ageing grandparents more, etc). So choosing to help humanity as a whole (e.g. rich people switching their philanthropy to charities abroad rather than at home) is, in some sense, a form of defection from these moral obligations.
By defecting from these ties and norms you are causing damage to the social fabric (or ‘social trust’ in economic terms) that ultimately created that wealth. In most ‘not nice’ countries the only reciprocal moral obligations that are adhered to are those around the extended family. A key part of why rich countries are rich is that they created strong moral responsibilities to wider communities, particularly your town, your country and other institutions within your country. Rather than a government official being obligated to cut their cousin in, in these countries they are morally obligated not to.
Personally, I think this is part of the reason for Trump and the populist swing in recent years. ‘Elites’ increasingly have a morality focused on utilitarianism or helping those most sidelined/discriminated against, whilst ordinary people see morality more in terms of these communities, which from their perspective the elites are defecting from. For instance, in the past the rich people in a town facing issues might have worked together to sort them out, whilst now they are probably more likely to just leave. Or the kids of the rich and powerful would have had a decent chance of being in the military (the death rate of the British aristocracy in WW1 was incredibly high), so ordinary people were more likely to trust elites on foreign policy decisions.
These norms and obligations only work if everyone feels like everyone else feels them and mostly acts on them (rather than being for ‘suckers’), and messing with something that is such a key part of what makes societies stable, rich and ‘nice’ is very dangerous.
This is a huge and underrated driver of NIMBYism. People are willing to destroy housing affordability and massively reduce total prosperity if it means they are more insulated from drowning children.
It’s really about the number of children drowning. One, yes you can save. Many, you cannot.
Singer - the original author of the thought experiment - argues that the only moral solution is to give until we are ourselves close to impoverishment, but not quite there.
There are multiple drowning children though, not one. I imagine myself in a boat on the sea or a lake with the drowning children. I can rescue the children. However, I can myself drown if I capsize my boat by taking on too many.
I am also in danger of capsizing in future even if I take on less than full capacity; it’s not clear how many is safe, but anything near the limit becomes risky, as the boat is rickety and storms occasionally occur. People who have not maintained their boats have drowned.
All around me, though, are much bigger boats towering over mine. These boats either don’t help the drowning children, or take on a number of children which - while admittedly more than mine - is nowhere near their carrying capacity, and the large boats are in no danger of sinking in future storms either.
Also on the lake is the military, whom I fund with my taxes, actively drowning the children - just jumping in and drowning a few every so often. I can’t stop this. It’s for geopolitical reasons.
None of this means you shouldn’t help the drowning children, but I wouldn’t worry about relative morality here either. Rescue some, but not to the capacity of the boat; don’t put the boat in danger.
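A toy version of this reasoning (all figures invented; the comment gives only the qualitative shape - a hard capacity, unpredictable storms, and a rickety hull):

```python
import random

# Invented parameters -- the comment specifies no figures.
CAPACITY = 10        # nominal carrying capacity of the boat
STORM_CHANCE = 0.2   # probability a storm hits on the way back
STORM_LOSS = 3       # a storm effectively reduces capacity by this much

def sink_probability(children_taken, trials=100_000):
    """Estimate how often the boat sinks for a given load."""
    sinkings = 0
    for _ in range(trials):
        effective = CAPACITY
        if random.random() < STORM_CHANCE:
            effective -= STORM_LOSS
        effective -= random.randint(0, 2)   # rickety hull: unpredictable wear
        if children_taken > effective:
            sinkings += 1
    return sinkings / trials

for load in (5, 8, 10):
    print(f"take {load:2d} children -> sink probability ~{sink_probability(load):.2f}")
# roughly: 5 -> 0.00, 8 -> 0.20, 10 (full capacity) -> 0.73
```

The safe load sits well below nominal capacity because the downside is not losing a few passengers but losing everyone, including the rescuer.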
I deleted my old blog, but the essay is still around:
https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics
Sorry about the dead links though :-/
From the subreddit:
I think morality originally arose, and still functions for most people, to do two things:
a) To pressure friends and strangers around you into helping you and not harming you, and
b) To signal to friends and strangers around you that you're the type of person who'll help and not harm people around you, so that you're worth cultivating as a friend
This has naturally resulted in all sorts of incoherent prescriptions, because to best accomplish those goals, you'll want to say selflessness is an ultimate virtue. But the real goal of moral prescriptions isn't selfless altruism, it's to benefit yourself. And it works out that way because behaviors that aren't beneficial will die out and not spread.
But everything got confused when philosophers, priests, and other big thinkers got involved and took the incoherent moral prescriptions too literally, and tried to resolve all the contradictions in a consistent manner.
There's a reason why you help a kid you pass by drowning, and not a starving African child. It's because you'd want your neighbor to help your kid in such a situation, so you tell everyone saving local drowning kids is a necessity, and it's because you want to signal you're a good person who can be trusted in a coalition. The African kid's parent is likely in no position to ever help your kid, and there's such an endless number of African kids to help that the cost of pouring your resources into the cause will outweigh any benefits of good reputation you gain.
Our moral expectations are also based on what we can actually get away with expecting our friends to do. If my child falls into the river, I can expect my friend to save my child, because that's relatively low cost to my friend, high benefit to me. If my child falls into the river 12 times a day, it'll be harder to find a friend who thinks my loyalty is worth diving into the river 12 times a day. If I can't actually get a friend who meets my moral standards, then there's no point in having those moral standards.
Essentially, ethics makes sense when centered around a community, but we in the west don’t really have communities anymore. Hence the incoherent philosophy.
I've never really seen this version of ethical egoism that's like "it's Moral Mazes all the way down" espoused other than here. Although now that I think of it, Rawlsianism basically assumes that this is what would happen without deliberation behind the Veil of Ignorance, and nobody but maybe Mormons believes the deliberation actually happens. Nonetheless I don't think this is plausible on a human level, even if it probably is from a gene's-eye view, because sympathy and guilt are things. If you suffer for ignoring others' well-being, then others' well-being is at least sometimes more-than-instrumentally important to you.
I subscribe to this as an explanatory theory but not a prescriptive one. Sometimes you have to be better than the soulless, brainless and hopeless forces that made you, because you do have a soul, a brain and a hope. Sometimes you see that you're being puppeted and still tell yourself that's the best of all possible worlds.
The most important part of bravery as a virtue isn't that you have ridiculous amounts of it for situations that rarely happen, but that you have enough of it to face the parts of you that are imperfect and acknowledge that you are imperfect, so that fixes and changes can happen at all. And you can't argue someone into being brave. I don't know how else to explain why people flinch away from being better than what they were designed for.
Yes - and even more so. "Morality" is not a rule system, it is a mishmash of loose heuristics that evolved to help us cooperate in small, local groups because cooperating groups outcompete non-cooperating groups.
With this in mind, I think most seemingly paradoxical moral intuitions make sense. It is all about what someone who saw or heard about some or all of what you did or did not do might be able to infer about your motivations (all in the context of a group of 20-30 people with only eyes, ears, and a theory of mind as evaluation tools).
Contorted moral scenarios are engineered to exploit the incoherencies of our moral system heuristics just like optical illusions show the incoherence of our visual system heuristics. These inconsistencies persisted because they were not relevant in our evolutionary past. There were neither Penrose Triangles nor robotic surgeons out on the savanna.
Right, I don't think Scott or others of an EA persuasion would dispute this, or any of the similar statements made above.
The point is that we don't live in the savanna anymore, but we still live in networks of people that approximate the social structures we evolved with, and technology and culture put us in some kind of proximity to people who are distant from us, yet to whom we can't help applying our moral instincts.
Since our intuitions can't help but be incoherent, but we still want to live in a cooperating group (or to put it in the language of the comment you're responding to, we still want to signal to friends and strangers that we should be helped and not harmed), we have to build something coherent enough to achieve these aims, built out of our evolved moral intuitions.
That's necessarily gonna mean making tradeoffs between different moral intuitions, hence the convoluted thought experiments to figure out what exactly our moral intuitions are, and how we trade them off against each other.
From a prescriptivist standpoint, there won't come a time when it will *not* be more moral to save the next drowning baby-sutured-to-a-famous-violinist floating from the magical post-industrial bubble city filled with burning fertility clinics and infinite trolley switches or whatever the shit. The person who donates 11% of his wealth to mosquito nets is better than the person who donates 10%.
But I'm sorry, I can't do it. I'm flawed. I don't live for others as much as I could. I'm too attached to comfort. I (roughly) tithe but I could give more if I didn't pay for the Internet connection that I'm using to post this. I could be volunteering instead of posting.
Perhaps someday I'll grow in selflessness and I'll get to the point where I love radically, for the whole world. I think that's the call of Christianity in a fallen world. I just hope that until I get there, my sins of omission aren't considered too great.
You raise a good point: what if, in order to save the drowning child, they have to be plugged in to your circulatory system for the next nine months (falling into this river automatically gives them kidney disease as well as risk of drowning)?
Are we then permitted to refuse to have the drowning child attached? Cage match between Singer and Thomson!
The fact that Singer, who is fine with killing toddlers, is taken seriously for his "ethical intuitions" is also a tell.
I have been writing about aphantasia and hyperphantasia and what it might mean, for these thought experiments, if you actually *see* the drowning child or the terrible thing that needed intervention. Our reactions are not wholly philosophical. https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-the-sixth-sense
I feel bad admonishing Scott for not being universal enough when there's this much opposition in the comments to having even slightly more rational ethics. And I realise he has to take into account that you can't expect anyone to be fully rational or altruistic. But if you really take Rawls's veil seriously, the conclusion should obviously be world communism for all sentient beings.
If Earth was populated with perfectly rational, perfectly altruistic Rawlsians, they wouldn't just be donating 10% to bed nets; they'd also be spending something like 25% of world GDP building social housing for wild mice, etc.
>How much should they pay? Enough to pick the low-hanging fruit and make it so nobody is desperately poor, but not enough to make global capitalism collapse.
I feel like the 10% level of altruism Scott's proposing is way lower than could be justified by constraints on maintaining economic growth, and he's really considering psychological opposition to being more altruistic rather than anything theoretical here. The top rate of tax used to be 90% in a lot of places in the post-war period, and modern GDP per capita is about 20x subsistence level. The theoretically ideal Rawlsians could easily be spending 50%+ of GDP on charitable redistribution imo.
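A back-of-the-envelope check of that claim, taking the commenter's ~20x figure at face value (it is a rough estimate, not a sourced number):

```python
# If GDP per capita is ~20x subsistence, even heavy redistribution
# leaves the donor far above subsistence.
gdp_multiple_of_subsistence = 20   # the commenter's rough figure
for giving_rate in (0.10, 0.50):
    remaining = gdp_multiple_of_subsistence * (1 - giving_rate)
    print(f"give {giving_rate:.0%} -> live on {remaining:.0f}x subsistence")
# give 10% -> live on 18x subsistence; give 50% -> live on 10x subsistence
```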
>I think the angelic intelligences would also consider that rich people could defect on the deal after being born, and so try to make the yoke as light as possible.
Considering the possibility of defections seems to defeat the point of the thought experiment, since that's no longer behind the veil.