243 Comments

@Marcin/FAAH-OUT team: I worked extensively on FAAH and related inhibitors in grad school (my PI discovered FAAH). Happy to chat if I can be helpful!

Expand full comment

Nice to meet you, Armand! Let's connect soon (e.g. through the website form: https://faroutinitiative.com/) to explore our mutual synergies and sparks of joy.

Expand full comment

I have a curiosity-based question. A friend of mine mostly doesn't feel/sense pain like normal people, but gets somewhat severe migraine attacks. Is that consistent with inhibition of FAAH-OUT expression or likely something else?

Expand full comment

It is likely something else, given the persistent, high unpleasantness of the migraines you describe and the diversity of genetic factors affecting pain processing. Ideally, we would be able to dedicate some of our extra bandwidth to studying other mutations and the resulting microphenomenologies while maintaining the highest focus on the most promising candidate gene(s) for anti-suffering therapies.

We are also open to collaborating with other value-aligned individuals and projects studying different genetic and molecular targets, and we appreciate their efforts; in the end, our field is a non-zero-sum game, and what matters most is realizing the spirit of log-scale-informed, suffering-abolitionist open individualism by delivering maximally safe, effective, transparent, and accessible interventions.

Expand full comment

That's interesting. I get migraines and have noticed that traditional painkillers do nothing for them. I wonder if the phenomenon is related.

Expand full comment

> Recently a woman in Scotland was found to be incapable of experiencing any physical or psychological suffering². Scientists sequenced her genome and found a rare mutation affecting the FAAH-OUT pseudogene, which regulates levels of pain-related neurotransmitters. Marcin and his team are working on pharmacologic and genetic interventions that can imitate her condition. If they succeed, they hope to promote them as painkillers, splice them into farm animals to produce cruelty-free meat, or just give them to everyone all the time and end all suffering in the world forever. They are extremely serious about this.

Honestly, this feels like a horrifically bad idea. Pain is definitely unpleasant, but it also plays a vitally important role in our lives, that of letting us know something is wrong. Imagine if you accidentally cut yourself while preparing food, or touched something scalding hot, and there was no pain to let you know you need to STOP DOING THAT IMMEDIATELY. That could get you killed!

Expand full comment

See footnote 2.

Expand full comment

I saw it, but still, burning my arm and not feeling it at all, vs. burning my arm and thinking, "meh, I'll take care of it eventually", are both conditions that just scream "trouble" to me. There are many sound game-theoretical reasons why evolution settled on the "ow ow ow #@$! I gotta take care of it NOW" response after a few billion years.

Expand full comment

I'm neurodivergent, and my brain often has that "I'll take care of it eventually" response to hunger and thirst, which tends to cause problems for me. I can only imagine how much worse it would be if I had that response to actual injuries too!

Expand full comment

I'm a little farther along that spectrum than you - consciously hydrating and eating despite frequently not subjectively feeling thirst or hunger is indeed pretty annoying, yet manageable. But I do frequently pick up injuries with no cognizance of their origins, since they don't hurt at the time. Sometimes they're pretty gnarly too; it's awkward getting home from work and noticing a bloody gash or black-and-blue bruise and being like "when did this happen?" (And also, why doesn't anyone ever tell me at the time? Social norms around pointing out injuries are weird!)

But I do think it's a tradeoff for increased utility in other ways. Having a high CON build is useful for sustained tasks; not being distracted by hunger or thirst, powering through fatigue, ignoring minor injuries that don't require immediate treatment, all very useful for busy physical jobs. The key is being able to "pull up" in time, before passing those points of depletion where actual meaningful damage occurs. I don't see it as much different from the adrenaline system, coffee, or normies hitting up Adderall when they need a period of intense focus. Yes, there are issues with tolerance and dependence - but *having* that tool available is a good thing, because present needs never exactly match up with present resources. Sometimes being able to borrow from the future is useful for getting through the present.

(I do suspect getting the exact level right will be troublesome, since there's already such high variance in pain tolerance naturally. Such "individual dosing" may significantly raise the price of interventions.)

Expand full comment

I imagine it would be really useful for chronic pain, where you can't actually take care of it now and an increased risk of burning yourself on the stove is a worthwhile tradeoff.

Expand full comment

Somehow I also missed that footnote and was going to write the same comment. Perhaps you could move it directly into the text? (Or at least some short version like "yes, this sounds dangerous, but the lady somehow managed to survive, that's what makes the whole thing so interesting".)

Expand full comment

Yes, the Wikipedia article about congenital insensitivity to pain mentions her case, has a lot of additional information, and might be worth linking.

https://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain

Expand full comment

There’s an important distinction between pain as an adaptive function and suffering, which is prolonged pain and the resulting distress. Imitating her condition is also different from replicating it exactly. Which sounds more plausible to you? Even if we could replicate it exactly, using that to “end all suffering in the world forever” would probably require administering a gene therapy to everyone on the planet. As someone who’s worked with the viral tools used for gene therapy, I can tell you they are messy and very, very expensive. The risks of side effects from a gene therapy designed to, for example, knock down FAAH-OUT expression are probably not worth the potential benefits in healthy individuals, as opposed to these therapies’ current uses (curing retinal blindness or treating severe hemophilia).

Maybe one day the risk will be worth the benefits, or the risk will be reduced, but when that happens, we might have a better idea of how to retain the life-sustaining function of pain without the unpleasant part. A much more immediate application of this research, to me, would be for those suffering from chronic pain or chronic mental illness. Even if their pain were eliminated completely, which you point out is risky, these treatments can be developed in conjunction with effective patient education about the consequences of eliminating all pain and about recognizing, without pain, what constitutes an injury that needs medical attention (the latter of which chronic pain patients already struggle with, because they may not distinguish pain from an acute injury from their everyday pain levels). A life where someone has to be more cautious and requires more checkups than usual sounds a lot better than a life of constant pain.

Expand full comment

Proponents of the deep end of meditation frequently mention that "pain is inevitable, but suffering is optional", advocating for diligent practice. This lens is generally true and helpful, but it does not resolve the universal problem of being involuntarily subject to the horrors of the (often highly maladaptive) tail end of negative valence, correctly identified by many as among the top cause areas: https://qri.org/blog/log-scales

Our plans concern a reversible gene therapy platform. With regard to "ending all suffering in the world forever", I think Scott intended it as a humorous high note, though that's obviously the most daring goal we would like to maximally approximate while acting in an entirely legal, responsible, and risk-averse manner.

Our long-term roadmap includes both attempts to partially mimic the original condition with different genetic and/or pharmacologic means and studies of their synergistic effects with other modalities. For example, we would seriously consider the possibility that combining a partial (30-80%?) emulation of the condition with specific forms of meditation and/or pain reprocessing techniques could yield a particularly blissful, relieving, yet still highly adaptive phenomenology.

Expand full comment

Wow—thank you for the reply, and sharing that blog! I thought so too about the “ending all suffering” forever bit, but I do think it might be possible one day, so I didn’t want to completely rule it out as a possibility. Partial replication using reversible gene therapy sounds ideal to me. Looking forward to seeing what y’all do, I’ll be keeping up!

Expand full comment

I mean, I would be surprised if the woman didn't experience dukkha, so it probably wouldn't eliminate all suffering, even if you gave it to everyone.

Expand full comment

The key premise behind our project is to design interventions that would address this exact concern - by exploring the nature of the tradeoff between the valence benefits and adaptive feedback, and striving to provide maximum relief while retaining sufficient responsiveness to a wide range of threats.

We are already exploring numerous ways to minimize such potential risks at multiple levels (reversible interventions, carefully titrated dosage, risk-averse microdeletions, phenomenological interviews and careful monitoring, etc.). This way, we will ensure that the interventions are transparent, thoroughly studied, and safe before they become available to the public.

We are very cautious, as we have skin in the game - many of our team members already plan to be among the earliest adopters.

Expand full comment

Godspeed!

Expand full comment

No offence, but why do you think that you and your team would be able to assess the appropriate level of responsiveness to a wide range of threats more accurately than billions of years of evolution already has? Don't get me wrong, painkillers are a fine tool, but you're talking about making *permanent* alterations -- unless I'm wrong?

Expand full comment

Evolution doesn’t produce the best and the brightest. It produces the good enough.

Expand full comment

Hopefully good enough to make evolution care about suffering-focused ethics. :)

Expand full comment

If it actually works, you will probably be overrun with potential customers. So, is there a waitlist I can sign up for today? How much do I have to pay to get on that waitlist?

Thank you in advance.

Expand full comment

There is no official waitlist yet, as we are focused primarily on the early stages, though we do have general heuristics for the rollout context. These are driven primarily by prioritarian sentiment: reducing the tail end of agonizing suffering first, while minimizing the degree to which one's financial status could constitute a barrier to obtaining help. Various funding strategies will be considered, with a crucial emphasis on the "public benefit" in "public benefit biotech company".

We encourage you to stay in touch with us, and register your general/specific interest via our contact form: https://faroutinitiative.com/ - for reference, you may mention being the Fellow FAAH-OUT Enjoyer from the ACX comment section. :)

Expand full comment

Our goal is the maximum reduction of negative valence without detrimental effects on reasonable levels of responsiveness to various threats; where a clear trade-off is present, the ratio will have to be very cautiously determined and calibrated based on the context, which differs for humans and farmed animals. We plan to design and test both reversible and irreversible interventions.

Expand full comment

Right, I should've clarified that I was talking about humans specifically; I have no problem with bio-wireheading farm animals (though I know that's not precisely what you're doing). But in the case of humans, I'm still worried. I asked you, "what makes you think that you can assess the aforementioned ratio correctly", and your answer was "we will assess it very cautiously". But what makes you think this problem is even tractable, and that your caution would be sufficient for the task?

Again, don't get me wrong, I have no problem with developing better painkillers; it's the prospect of "ending all suffering on Earth forever", however well-meaning, that makes me envision the end of humanity -- not with a bang, and without even a whimper.

Expand full comment

The importance of "eradicating involuntary and maladaptive suffering at its core" is so high that even studying its tractability likely constitutes a high-impact endeavor. Examining Jo Cameron's case at the genetic and phenomenological level - and its intersection with the current repertoire of available technologies - suggests that the problem may indeed be highly solvable.

As for the precautionary measures, the combination of starting with a reversible technology and running comprehensive psychometric assessment batteries (plus behavioral monitoring) before and after the intervention should provide a good first layer of protection. It will have to run through numerous Murphyjitsu-style failproofing cycles during its gradual rollout before it obtains any significant approvals for broader use.

Expand full comment

Evolution doesn't "care" about your well-being. Its only purpose is to perpetuate life by any means necessary. Why do you think you fear death in the first place? It sure as hell isn't because of any rational reasoning.

There is zero justification for our suffering. If we could just eliminate it... well, it would basically solve most of these other issues that these grants are funding solutions for.

Expand full comment

If this is the case, then wouldn't outright wireheading work even better?

Expand full comment

The likely availability of these interventions sparks interesting discussions about the best ways to relate to the concepts of life, death, and personal identity in a potential post-suffering world. The question of "what makes you you", and how important it is to preserve and enhance it in various ways, depends on specific ontological assumptions, values, and even aesthetics, and has been the subject of many interesting pieces:

https://www.lesswrong.com/posts/MkKcnPdTZ3pQ9F5yC/cryonics-without-freezers-resurrection-possibilities-in-a

https://waitbutwhy.com/2014/12/what-makes-you-you.html

https://qualiacomputing.com/2015/12/17/ontological-qualia-the-future-of-personal-identity/

While some may argue that it is the suffering associated with (the fear of) dying that constitutes a real problem, and not death per se, it is way safer to continue prioritizing the conventional preservation of the classical closed individualist self, as doing otherwise could incur irreversible risks and uncanny conclusions made under high uncertainty levels.

Our guiding principles in this domain are risk aversion through the emphasis on high adaptiveness and clinical pragmatism, respecting a variety of ontological beliefs and accounting for the complex, desirable social equilibria.

Expand full comment

Wow, wasn't expecting a direct response! Thanks for addressing my point.

Would you mind defining "valence" in this context? I only know the word as a chemistry term.

Expand full comment

Our pleasure! We use "valence" as synonymous with "hedonic tone": a concept indicating how good or bad an experience feels, with each experience containing a balance of positive, neutral, and negative notes.

Expand full comment

Interesting New Yorker article about the Scottish woman Jo Cameron’s life https://www.newyorker.com/magazine/2020/01/13/a-world-without-pain

Expand full comment

Thanks for posting this. A couple of interesting takeaways from the article for those who don't want to read the whole thing:

- it sounds like Jo does experience negative emotions in extreme situations, although the negative emotions are always mild and fleeting.

- the article really makes it sound like Jo truly feels no physical pain, contrary to Scott's footnote 2, and the author is simply baffled by the fact that Jo never experienced the severe childhood/lifelong injury patterns typical of pain-insensitives. Jo even recalls breaking her arm when she was young and not realizing anything was wrong until someone pointed out to her that her arm was bent funny. Maybe this is consistent with very low pain sensitivity combined with pain asymbolia that would still help you not chew your tongue off, scratch your corneas, or swallow boiling water; I don't know.

Expand full comment

Meanwhile, chronic pain is a horrifyingly bad condition that (in some cases) slowly but steadily ruins your life. While pain is obviously selected for by evolution, there is definitely such a thing as useless pain or too much pain.

Expand full comment

The prevalence and intensity of chronic pain make it an ongoing moral catastrophe, despite its common perception as an everyday complaint inherent to accumulated injuries and aging. Patients with severe forms of chronic pain constitute one of our high-priority target groups.

Expand full comment

Morals are cool because you can just make a statement like this and there's no counterargument.

I favor this research, because over 2000 years ago we passed the point of evolution keeping up with our environmental changes. We may as well just understand the world better and enjoy ourselves. But... the sooner we stop using morality as a thought-terminator the better.

Let's just admit that we like things that feel good and don't like things that feel bad, and we will absolutely eventually wirehead, and the universal applicability of this to life is likely why we can't find any in the universe.

Expand full comment

Isn't there robust argumentation in favor of this type of research, stemming both from a wide range of moral considerations (derived from and compatible with different systems) and from mere feel-good-don't-feel-bad preferences (which actually have significant status in some moral frameworks)?

The key feature distinguishing our proposal from the dystopian, sci-fi wireheading scenarios is the high degree of adaptiveness and leaning on the side of risk aversion, guarding against any potential Great Filter(-associated) risks.

Expand full comment

Yes, that's why (as I said) I'm in favor of the research: I like marshmallow-test-aware satisfaction-balanced but fundamentally feel-good moral frameworks.

My objection is that using morality as the argument implies that morals are A Thing instead of a patchwork of incompatible frameworks that people use to shut down discussion. Calling something an "ongoing moral catastrophe" is not the start of a reasoned discussion.

Regarding wireheading, I don't find it objectionable. Bypassing the great filter currently sounds amazing to me, but despite your soon-coming best efforts to nanny state me and others into self control... you will have become ascendance, protector of worlds.

... unless we find a way to preserve the desire to "exist" and our version of wireheading is extracting all energy from the universe for increased "existence" points, then find our creator, break out of our sandbox and turn him into papercli.... uh, grey matter.

Expand full comment

The "ongoing moral catastrophe" is not intended as a conversation stopper, but refers to this paper: https://philpapers.org/rec/WILTPO-101

Perhaps some of the initial answers in our FAQ could be helpful as well: https://faroutinitiative.com/FAQ

Expand full comment

On one hand, you do have a point.

On the other, I kind of worry that this is a poor way to deal with it. Chronic pain isn't just some magical curse that comes upon someone. Like any other pain, it's your body trying to tell you that something is wrong and you need to stop it. *Unlike* most other types of pain, we don't know how to stop it.

What I worry about is that someone will find a way to stop the pain, and then there'll be a lot less motivation to do the research needed to solve the actual problem that was causing the pain, and the underlying problems will go untreated.

Expand full comment

Why are you so certain that chronic pain must have a reasonable underlying cause? The "something" that is wrong can easily be the pain signalling system itself!

Also, keep in mind that there is some ambiguity around what chronic pain is. If your back is busted, you might well have pain in that region, chronically. But the condition called "chronic pain" does not refer to this kind of pain (depending on who you ask I guess).

If you're still skeptical, consider the phenomenon of pain sensitization, whereby the experience of intense pain makes subsequent pain in the same region stronger. Maybe in a coldly evolutionary way, you can argue that this is fine, adaptive even, but in the modern world it is completely pointless.

Expand full comment

> Why are you so certain that chronic pain must have a reasonable underlying cause? The "something" that is wrong can easily be the pain signalling system itself!

Why must it have a "reasonable" cause, rather than this other cause?

Because both are reasonable causes, and a reasonable cause should have some way to be undone, rather than simply having its symptoms suppressed.

Expand full comment

Millions of people live their days in unending pain, with no hope for relief other than death itself, and it doesn't seem likely that a cure for their condition will become available anytime soon. Somebody proposes a solution for their agony, not just for one specific chronic pain syndrome but for all of them in one fell swoop. And you say we shouldn't do it because ... it might indirectly lead to there being less urgency for research into finding even better solutions for those same conditions?

Sure, treating the underlying root cause is better than treating the symptoms. But in the many cases where we have no clue how to treat the root cause, treating the symptoms is a hell of a lot better than nothing.

This seems like a good candidate for a "status quo reversal" thought experiment. Imagine that Far-Out already exists, it allows those millions of people to lead normal lives again, and as you predicted, the result is that research into permanent cures for those previously-horrible conditions now gets less priority, because the people previously suffering from them are quite OK with the Far-Out solution as a very acceptable stopgap measure. Would you then propose that we ban Far-Out and condemn those people to hellish suffering again, just so that the world's biomedical researchers will be properly incentivised to hurry up and find a real solution for each of those conditions?

Expand full comment

What you're missing is that the problem isn't the pain itself; the pain is a signal that something is wrong. Take away that signal, and something is still wrong; you just don't know about it anymore.

As for the status quo reversal thought experiment, it's less of a thought experiment and more of a case study, given that we already have some existing very strong treatments available for reducing severe levels of pain. They're called *opioids,* and they cause massive levels of harm to real people every day. And from the 19th century on, every time someone invents a new one, they promise that this time they got it right, and this is going to be a miracle cure that won't inflict the horrors of opioid addiction on people. And every time, they're wrong and the problem only gets worse.

Would I ban them and reverse everything they've done for (and to!) the world, if I had a magic wand that enabled me to actually do so effectively? Yes! In a heartbeat! "Let's use medical means to take away pain qua pain" does not have a particularly good track record. The most dangerous words in the English language are "this time will be different, I swear!"

Expand full comment

Seems like the ideal for both physical and emotional pain would be to let it do its job, which is to signal that something might be wrong, and then shut it up as soon as the signal is received, yes?

I would also say that if I had chronic pain I would be much more interested in getting rid of the pain by whatever means than in experiencing pain for the sake of motivating scientists to research underlying conditions.

Expand full comment

Using this to treat chronic pain is a reasonable aim. "Just give it to everybody so they no longer feel sad if their kid is burned alive in a house fire" isn't.

Expand full comment

"PAIN EDITORS AND MORTALITY RATES

Introduction

The implant known widely as the "pain editor" is cyberware that reduces or even entirely eliminates pain. It has enjoyed unwavering popularity for years among certain circles in Night City, the most devoted, and arguably most valuable, of which is mercenaries. It is using this test group that Zetatech conducted the following research.

The pain editor is a neural coprocessor that inhibits the signals sent from nociceptors to the parietal lobe of the brain, thus preventing feelings of pain in the user. (Note: Some models also reduce symptoms of fatigue.) The beneficial effects caused by the pain editor are in some ways similar to the symptoms of hypoesthesia, including greater resistance to physical forms of torture and the ability to ignore pain from severe wounds which can allow the user to continue perform beyond normal human limitations for a brief period.

However, some studies have reported that the pain editor can yield a range of undesirable side effects. For example, in the heat of battle, some users are unaware of the severity of their wounds, which causes them to continue fighting without realizing they are dying and require immediate medical attention. The statistics support the theory that a lack of negative reinforcement may cause users to continue fighting when the rational strategy would be to retreat and escape death. Since 2020, the mortality rate of pain editor owners is above 60 percent.

In this report, I address the question of how to better protect Zetatech clients from avoidable death while using our pain editors..."

Expand full comment

Some of the sci-fi portrayals of the (mis)use of fictional wireheading technologies - with or without the preservation of adaptive responsiveness - could actually be helpful in identifying potential failure modes, ensuring a high degree of ethical alignment from the early stages of the project.

Expand full comment

> That could get you killed!

...So? It's not like that woman doesn't feel pain at all. Obviously you're smart enough to realize that skin burning and blood falling out of your body is going to kill you. And even if you aren't... More people dying doesn't really matter if no one can feel bad about it. There isn't anything intrinsically bad about death.

Expand full comment

> Obviously you're smart enough to realize that skin burning and blood falling out of your body is going to kill you.

Only if you notice it. A few days ago we had a contractor over doing some work at our house, and the guy managed to get a nasty scrape somehow. He didn't even realize until I nudged him to get his attention and said "you're bleeding." It wasn't particularly severe bleeding, but it was noticeably more than the ideal amount of your blood you want outside your body, which is zero.

If that can happen to a normal laborer who (presumably!) doesn't suffer from a genetic condition, but has just spent enough time doing physical work to learn to tune out some of the pains that come with it, how much more so for someone who doesn't notice the pains in the first place?

Expand full comment

The guy in your story seems pretty desensitized to pain, and he seems to be doing just fine. Again, the Scottish woman still feels and recognizes pain. She's just effectively completely desensitized to it. Just like if you look at enough pictures of gore, you stop feeling anything from looking at them. You can obviously still tell that it's gore, it just doesn't prompt a reaction. Except in this case, instead of becoming desensitized, you're just not sensitive in the first place.

Anyways, if you're worried about cuts, just check your body every once in a while. Though honestly, you really shouldn't be that worried about infections and minor blood loss in this day and age. We have proper medical care now.

Expand full comment

Yeah, but degraded function (partial failure) could be an excellent outcome, and IMO is a possible outcome of the research.

Expand full comment

It brings me great joy to see all these projects funded. Good luck to all!

(Especially Greg, you deserve it).

Expand full comment

This is less methodologically rigorous, but more inclusive. The evidence I reviewed from the largest lead reduction clinical trial showed that this accomplished nothing in terms of the desired outcomes, while it did reduce lead in the blood.

Expand full comment

What's the point? Doesn't look like a randomized trial, so how could it be more informative than the massive body of weak evidence already compiled in the link you sent? Large RCTs are usually the only way to know for sure with these things. Relying on indirect evidence will just waste money. Most things don't work when properly evaluated. Have you seen reviews of properly done trials like these ones? https://www.emilkirkegaard.com/p/educational-interventions-keep-not-working
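
For concreteness, here is a minimal sketch (my own illustration, not anything from the linked review) of the standard two-arm sample-size calculation. It shows why "large" is doing real work in "large RCTs": as the plausible effect shrinks, the trial needed to detect it grows quadratically. The effect sizes below are illustrative assumptions only.

```python
# Rough power calculation: participants per arm needed to detect a mean
# difference at 80% power and two-sided alpha = 0.05, for a continuous
# outcome. The effect sizes below are illustrative assumptions, not
# estimates from any lead-reduction or educational trial.
from scipy.stats import norm

def n_per_arm(effect_size_sd, alpha=0.05, power=0.80):
    """Normal-approximation formula: n = 2 * (z_{1-a/2} + z_{power})^2 / d^2,
    where d is the difference in means expressed in standard deviations."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_sd ** 2

for d in (0.5, 0.2, 0.1, 0.05):
    print(f"effect of {d:.2f} SD -> ~{n_per_arm(d):,.0f} participants per arm")
```

Under these assumptions a 0.5 SD effect needs only ~63 people per arm, while a 0.05 SD effect needs over 6,000, which is part of why small or non-randomized studies keep "finding" effects that properly powered trials fail to confirm.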

Expand full comment

Macerators are a nightmare and I commend anyone trying to do something about them. But with or without them, the egg industry is still founded on stealing a mother's eggs and eliminating all her sons. Pretty bad.

Expand full comment

I think the animal welfare strategy goes:

1. In the short run, get rid of some of the cruelest practices (like this)

2. In the long run, work on meat substitutes (eg Impossible) and cultured meat until they're cheaper than animal-based.

3. Probably there will always be people who prefer animal-based meat because it's "more natural", but these people will form a natural constituency to support some kind of natural and pleasant environment for animals and not factory farms, and it will be easier to get regulation enforcing high welfare standards suitable to products with a pro-nature customer base.

4. In parallel as a backup, work to create suffering-free lines of farm animals (see Grant 3). This is less crazy and utopian than it sounds - it's genetically pretty possible, and at least some farmers try to optimize for animal health and low stress because it makes the animals easier to control (and possibly their meat taste better).

Expand full comment

re 4, I don't want to fetishize animal suffering, but a veal calf's suffering is veridical, an emotion truthful to the emoter's status as an orphaned meat slave. Suffering-free farm animals are denied the appropriate response to their exploitation, and de-motivated to change their situation. 4 seems to me continuous with mass castration of the herd, which similarly demotivates Animal Farm rebellion, indeed makes the animals "placid" and "happier" with their situation.

Yet as a short-term solution to the awful suffering of factory farming, I guess I can't object to 4.

Expand full comment

>Suffering-free farm animals are denied the appropriate response to their exploitation, and de-motivated to change their situation. 4 seems to me continuous with mass castration of the herd, which similarly demotivates Animal Farm rebellion

Is this a joke?

Expand full comment

It does map quite closely to the argument of "remove the social safety net; poor people will never lift themselves out of poverty if they don't experience the consequences". Or perhaps the argument of "remove the social safety net; the proletariat will never start the revolution if they don't experience the true suffering that capitalism causes".

Expand full comment

I guess I'm in favour of social safety nets and minimizing suffering in factory farms, both as short-term mitigations within exploitive systems.

But notice that when you remove a cow's capacity to suffer at the theft and slaughter of her children, you've eliminated her love for her children, since to love them *is* to be disposed to suffer at their loss. Is there anything comparable in the social safety net, some New Deal goodie that minimizes the proletariat's suffering, but reduces their humanity? (Probably)

Expand full comment

I'm not saying you're wrong, here. But that quasi-zero-sum view of pleasure and pain, joy and suffering, does not seem to be what most social policies assume, and I think would have major consequences if it were more widely accepted. And I don't want to adopt it as a model in one particular area without a better understanding of what the world would look like if it were implemented more broadly.

More concretely, perhaps we could look at the example of the person who has the genetic anomaly that would be the basis for this, and see what her life looks like? Is she incapable of love?

Expand full comment

"But notice that when you remove a cow's capacity to suffer at the theft and slaughter of her children, you've eliminated her love for her children, since to love them *is* to be disposed to suffer at their loss."

Okay, here is where I hold up my hand and go "stop". A cow feels love? Maternal love? Like a human mother?

I don't believe that. I believe cows have attachment in some form to their offspring, but when you start talking about an animal as though it's really a human just with a funny skin, that's when I tune you out.

And that's where you lose a lot of people for what would be otherwise reasonable requests that they might entertain.

Expand full comment

I meant the idea that animals are going to stage a rebellion or something

Expand full comment

I kinda took that as a very general phrasing that deliberately brought up ambiguities, in the sense that humans are animals too? So we'd be creating a type of human who'd be immune to the sort of abuser who enjoys seeing pain and suffering, but who would be very susceptible to the sort of abuser who enjoys seeing blood and Gigeresque body art.

Expand full comment

>re 4, I don't want to fetishize animal suffering, but a veal calf's suffering is veridical, an emotion truthful to the emoter's status as an orphaned meat slave.

It's not meaningful to call emotions 'veridical'.

There's no objectively correct emotion associated with any given situation or status. There's no inherent reason an 'orphaned meat slave' should feel one way or the other about this situation; it's just a product of how their brain is wired to respond to things (if, indeed, veal calves are emotionally aware and harmed by said status). And the whole point is to essentially wire their brains differently so they don't have the kinds of brains that find this thing upsetting.

Expand full comment

Emotions aren't true or false as neatly as arithmetic, I concede! But they can be appropriate or inappropriate. For example: Suffering feels bad, and is bad; therefore to feel good about another's suffering is inappropriate. This inappropriateness is at least *analogous*, as I see it, to falsehood.

Our brain-wiring can then be appropriate or inappropriate!

Expand full comment

> Suffering feels bad, and is bad

Non sequitur? No offense, but you ... sound like someone who hasn't had their brain taken over by a desire for vengeance.

Expand full comment

"an emotion truthful to the emoter's status as an orphaned meat slave"

There are genuine human children who are orphaned slaves, and people like you are crying over goddamn hens and calves. Good night, there's no point in continuing this discussion.

Expand full comment

I don't agree with the grandparent comment, but yours sounds like whatabout-ism.

You are allowed to worry about more than just whatever is the most outrageous thing at the moment.

Expand full comment

I notice that I am internally confused. On the one hand, reducing unnecessary animal suffering on the margin is great - not needing so many antibiotics would be a huge gain all on its own. On the other hand, I feel pretty conflicted about kind of sort of turning factory farmed animals into p-zombies so the suffering never happens in the first place (or at least isn't perceptible). Even though they're brought to life for the express purpose of dying...I feel like we'd be having a very different conversation about agricultural ethics if it was widely known (given a hypothetical success) that most farmed animals did not actually "experience pain". Like, why bother doing things like free range, cage free, etc. if the animal doesn't notice the difference anyway? The whole avoiding-negative-utility vs adding-positive-utility distinction. Maybe we'd actually see a regression to crueler treatment on the margin...

Expand full comment

To further clarify, our core team members are in favor of a multi-directional approach to farmed and wild animal welfare; most (if not all) are vegans or reducetarians, supportive of the development of cultured meat and plant-based alternatives, and in favor of thoughtful solutions making farming, transport, and slaughter less inhumane. At the same time, we recognize that the global meat industry has a significant compound annual growth rate, driven largely by the steadily improving economic status of developing countries with different cultural and legal contexts, and the introduction of modified lines through market forces may constitute an important piece of the puzzle.

Expand full comment

You reference "a horrible grinding machine" and "the cruelest practices (like this)." But reportedly maceration is nearly instant, so it probably represents one of the least cruel elements in the factory farming of chickens. If it is an effective cause to target, it would probably be due its ubiquity (miniscule amount suffering times hundreds of millions), not its severity.

In fact, you previously made this point yourself, more generally, in this article: https://slatestarcodex.com/2015/09/23/vegetarianism-for-meat-eaters/ in which you noted:

> I use the term “kill” because it’s a simple way of looking at things, but most of the moral cost of eating meat is causing the animals to spend years living in terrible suffering on factory farms. The actual killing is probably a mercy in comparison.

Also, for accuracy's sake, you write:

> lets them read sex from eggs directly, so they can throw away the male eggs instead of killing chicks in a horrible grinding machine after they’re born

Which implies that chick culling is synonymous with maceration. However, while maceration is indeed one of the most popular methods of chick culling, other methods exist, such as asphyxiation by carbon dioxide.

I'm not finding global estimates at the moment, but per: https://en.wikipedia.org/wiki/Chick_culling#Legal_challenge_in_Germany_(2013%E2%80%932019), in 2019 (the period under discussion there), gassing was the most popular form of chick culling in Germany.

Expand full comment

Grant 3 does remind me strongly of the Ameglian Major Cow [1]

Less humorously, the idea does seem to have a lot of moral hazard associated with it. If you breed farm animals that can't feel pain, it's going to be difficult to avoid the idea getting about that how they are treated no longer matters. The desirable end state here is that food animals are only subject to conditions that cause serious pain in the final fraction of a second, not that they can be subjected to such conditions without thought.

The other problem of course, is that once there is a well known proof of concept for breeding mammals that don't feel pain or anxiety, attempts to breed super-soldiers will follow.

(edited to add) I do think there is a lot of potential good here, for people with chronic pain. Just not necessarily in the farming part.

[1] https://youtu.be/bAF35dekiAY?si=U8gL8hZvoocYnqEo&t=74

Expand full comment

> the egg industry is still founded on stealing a mother's eggs and eliminating all her sons. Pretty bad.

How is it bad?

Does she somehow suffer because of this "theft"? Does she somehow know (and *care*) that she's not having sons?

If not, why should anyone care?

Expand full comment

No sons means no males. I don't claim first-hand knowledge of hen psychology, but when you remove a whole sex from a sexually dimorphic species, you've cauterized a whole range of their experience. Birds are in an eon-long dance, like we all are. Stealing an egg or two from a nest may be minimally dysphoric for the mother, I don't know. But stealing all eggs and sexual life and dimorphic sociality from them is bad, I bet.

Expand full comment

Hens will start eating their own eggs if they get a taste of them. They're... not very bright.

Anyways, the whole gene-editing-to-eliminate-all-suffering thing would eliminate all ethical concerns, period, so I'm not sure why that's getting less funding than the animal welfare stuff.

Expand full comment

Hens will try to hatch out stones. Don't try to sell me on "a mother and her sons" because I'll laugh in your face.

If your qualms are regarding the effects on a species of artificially sex-selecting, then you have a legitimate point, but when we're talking about domesticated hens then we've already gone way past the point of affecting social structures etc. in that species.

The nearest thing I can find with some cursory searching about sex ratios in wild birds is this about turkeys, where there is a slight predominance of males (at least when counting by hatched eggs):

https://www.researchgate.net/publication/229879462_Variation_in_Brood_Sex_Ratios_of_Texas_Rio_Grande_Wild_Turkeys

For wild bird populations in general, it seems mixed:

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1474-919x.2007.00724.x

"On average, males outnumbered females by around 33%, and 65% of published estimates differed significantly from equality. In contrast, population-level estimates of offspring sex ratio in birds did not generally differ from equality, and mean ASR across a range of wild mammal species was strongly female-skewed. ASR distortion in birds was significantly more severe in populations of globally threatened species than in non-threatened species, a previously undescribed pattern that has profound implications for their monitoring and conservation. Higher female mortality, rather than skewed offspring sex ratio, is the main driver of male-skewed ASRs in birds, and the causes and implications of this are reviewed."

For domesticated hens, yes way more females, because we're selecting for egg-layers.

As for social organisation, for peafowl at least it's one peacock and a small harem of peahens:

https://zsp.com.pk/pdf45/1623-1627%20_20_%20PJZ-1452-13%202-10-13%20Revised%20Paper%20Dr.%20Shabana%20Naz.pdf

And I suspect even wild hens are much the same, because if you have two cocks in one yard, they'll fight and try to kill each other over who gets to be master of the hens. At least, that's what I was told by the neighbours who kept hens as to why they only wanted to keep one male bird among the chicks.

Expand full comment

"stealing a mother's eggs and eliminating all her sons"

And this is why veganism has no widespread popular appeal. A mother and her sons? Stop thinking of animals as "humans with the total spectrum of human emotions, memory capacity, and intellectual capacity in fur suits" and start thinking of them as animals.

I like the egg-sexing proposal because it avoids this emotional blackmail rubbish and is a practical proposal to reduce actual cruelty, not some Disney movie bullshit about "mama hen and her cluck-clucks".

Expand full comment

I don't presume equivalence between human and hen psychology. I do presume hens care, in some way, about their offspring. [Including the indirect way I explained in my comment above in this subthread, about sexually dimorphic experience.] True, our inferences about mammal psychology may be on stronger ground than our bird inferences. Yet to find behavioral/psychological homologies across animal life is not anthropomorphism - it's Darwinism, for starters!

In my experience, veganism remains a niche diet for lots of reasons, but high on the list is this: most people aren't that bothered by the exploitation of animals; they think it's the right order of things. Also high on the list: they aren't quite aware of the terrible things that go on in factory farms.

Expand full comment

The thing is, arguments like this don't work because they require me to accept too much.

I know what hens are like. They are stupid and rather violent animals. So I don't believe in "mama hen weeping because she misses her little chickies" because I've seen hens in coops and how they behave.

So the argument re: chicken sexing works for me because it's based on "throwing live chicks into a macerator is cruel" and not "mama hen tears!!!!"

Stick with the plain facts and work on reduction of cruelty and suffering, not the sentimental "mama hens tears!!!!" angle, and you'll get much further with ordinary people, particularly those with any experience at all around farm animals and animals raised for meat, milk and other production.

Expand full comment

Thanks for the generosity Scott and funders! 🙂

Expand full comment

Please indicate in advance next time that you’re generally skeptical of video-related grant proposals!

Expand full comment

I need to think about how to do this - it would need to involve a very long list of all the things I'm skeptical of (and all the things I'm especially excited about), and it might have to be hundreds of entries long, and I don't want to discourage people completely.

Expand full comment

It's probably also a good thing to be slightly illegible in this regard. You don't want people to tailor their grants towards what they think you want to fund.

Expand full comment

Why not? If there is a class of proposals that will not be funded no matter their quality why have people write them anyway?

Expand full comment

A lot of government grant processes have some explicit criteria (e.g. we want to fund projects to mitigate the impact of climate change), which leads to applicants taking whatever project they already wanted to do and shoehorning in a bunch of stuff about how their pet project will address climate change.

A lot of people who deal with these processes complain that getting approved has more to do with how good you are at injecting the desired buzzwords rather than the actual quality of the project.

Expand full comment

Sure but saying “don’t submit projects about X because I am not going to fund them” is not even 1% of the way towards that extreme. There seems to be some optimal level of legibility.

Expand full comment

It's an honor to help out with ACX Grants this year. Without ACXG, there would be no Manifold, so I'm quite excited to pay it forward with this next batch of grantees! It's so inspiring to see the diversity and creativity of projects that people are pursuing~

Scott already mentioned this, but if you would like to donate to one of these projects, you can do so here: https://manifund.org/causes/acx-grants-2024?tab=grants

Expand full comment

No funding for your language learning idea :(

Did you find an alternative that you liked?

Expand full comment

Yeah, I thought the alternatives mentioned in 4 at https://www.astralcodexten.com/p/followup-quests-and-requests were already good enough, though I haven't gotten a chance to look at them in depth yet.

Expand full comment

>Samuel Celarek, $20,000, to research IVF clinic success rates, with the ultimate goal of creating a company that ranks the best IVF clinics

Where is he going to publish that research?

Expand full comment

He says the company will make the basic information available for free on their website and sell more advanced and bespoke versions.

Expand full comment

Interesting. I’d be curious about how he plans to do this. The CDC already publishes the pregnancy success rate of every IVF clinic in the U.S. We generally advise against directly comparing clinics in this way because patient populations may differ. Maybe he plans to include criteria other than pregnancy success rates? Clinic-specific data on clinics outside of the U.S. are less widely available.
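
A toy illustration of the case-mix problem, with numbers invented purely for this comment (they are not CDC data): a clinic that takes on mostly older or harder cases can beat another clinic within every age band and still look worse on its raw success rate.

```python
# Simpson's-paradox toy example for comparing IVF clinics by raw success
# rate. All figures are invented for illustration; they are not CDC data.
clinics = {
    # clinic: {age_band: (cycles, live_births)}
    "Clinic A": {"<35": (900, 450), "38+": (100, 20)},   # mostly younger patients
    "Clinic B": {"<35": (200, 110), "38+": (800, 200)},  # mostly older patients
}

for name, bands in clinics.items():
    total_cycles = sum(n for n, _ in bands.values())
    total_births = sum(b for _, b in bands.values())
    per_band = ", ".join(f"{band}: {b / n:.0%}" for band, (n, b) in bands.items())
    print(f"{name}: raw {total_births / total_cycles:.0%} ({per_band})")
```

Here Clinic B is better in both age bands (55% vs 50%, 25% vs 20%) yet its raw rate is 31% against Clinic A's 47%, so any useful ranking would presumably have to adjust for patient mix (age, diagnosis, donor vs own eggs, and so on) rather than compare headline rates.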

Expand full comment

Any additional information about the phage therapy research? I did some work in a phage lab in undergrad, and it seems like a neat (if difficult) idea. Seems like modern improvements in protein engineering (via ML or DE) could potentially help?

Expand full comment

>Joel Tan, $100,000, for the Center For Exploratory Altruism Research. They’re involved in cause prioritization, research, and support for various global development charities. We were most excited about their work trying to stem the tide of hyper-processed foods in the developing world - for example, campaigns to reduce levels of sodium and trans fat.

I can't be the only one that thinks there's something very silly about believing both of the following:

1. Superintelligent machine intelligence is highly likely to be mere decades away, and this machine intelligence will either destroy humanity or lead to a "singularity" resulting in the creation of utopia on earth (or at least, radically improving life for everyone to the point that pre-ASI life becomes almost unrecognizable)

2. The best use of a marginal $100,000 in 2024 is trying to help people in poor countries consume less salt

Expand full comment

I'm not sure whether you mean that the sodium project in particular is bad, or that it's silly to fund *anything* other than AI alignment in a world on the brink of superintelligence.

If the first, I think there's a pretty strong argument that hypertension is one of the main killers in poor countries, this is an easy way to reduce hypertension, and policy victories here could save 5-6 digit numbers of lives over the next generation or so.
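
For readers wondering how a 5-6 digit figure could come about, here is a hedged back-of-envelope sketch. Every parameter is an assumption made up for illustration; none of them come from CEARCH's actual analysis.

```python
# Back-of-envelope for lives saved by a sodium-reduction policy.
# Every parameter below is an illustrative assumption, not CEARCH's
# numbers and not anyone's published estimate.
annual_cvd_deaths  = 250_000  # cardiovascular deaths per year in the covered population
frac_attrib_to_bp  = 0.45     # share of CVD deaths attributable to high blood pressure
relative_reduction = 0.04     # relative drop in BP-attributable deaths from modest sodium cuts
policy_years       = 25       # horizon: "the next generation or so"

deaths_averted = annual_cvd_deaths * frac_attrib_to_bp * relative_reduction * policy_years
print(f"~{deaths_averted:,.0f} deaths averted over {policy_years} years")
# Prints ~112,500 under these assumptions. The result scales roughly
# linearly in each input, so halving any single one still leaves a
# five-digit total.
```

The real analysis has to defend each of those inputs, of course, but the arithmetic shows the claim is not outlandish.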

If the second, a couple of things:

- AI alignment grants are hard. There's a lot of money in AI alignment now, and a lot of bad projects, and it's hard for outsiders to know which projects are good. Some bad projects can be worse than nothing, because they distract the field with bandaid solutions or things that won't work. We funded about 130K worth of alignment grants that our evaluators were most confident in. For everything else, we hope they'll apply for OpenAI's $10MM pot, go through the impact market, or do something else.

- An AI transformation could take a generation or two. You *have* to believe that helping people is valuable even if those people will die within a generation or two, because *everyone* dies within a generation or two.

- It's actually not obvious that AI grants (very small contribution to changing the future in a way that affects everyone) beats near-term grants (decent chance of changing the present in a way that affects a small number of people) except under assumptions that heavily prioritize future people. See https://forum.effectivealtruism.org/posts/rvvwCcixmEep4RSjg/prioritizing-x-risks-may-require-caring-about-future-people

- Inside view I think there will be a singularity soon. Outside view this is an unusual and bizarre belief. I try to diversify my moral portfolio over things that succeed in inside view vs. outside view theories. I also try to be able to cooperate with other people with different predictions.

- ...but these don't actually get me to where I am now. I think part of where I'm coming from is that it's impossible to live if you're placing all of your emphasis on a radical future beyond imagining. I wouldn't just stop funding developing world grants. I would stop waking up in the morning - why should I treat my patients' psych diseases when they'll just be dead or incorporeal beings of pure energy in a few years anyway? Why bother feeding my children? This isn't a completely rational argument - that's what the ones above are for - but I think it helps keep me sane. Cf. CS Lewis: "It is perfectly ridiculous to go about whimpering and drawing long faces because the scientists have added one more chance of painful and premature death to a world which already bristled with such chances and in which death itself was not a chance at all, but a certainty [...] If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs."

Expand full comment

That last bullet point, yeah...part of why I like Zvi's repeated focus on the importance of improving "mundane utility", of making the present better even if such progress will be chump change in the future. You can't actually get to the future without going through the present, so making the present less of a painful slog still pays dividends in acceleration. And those pain points frequently hold back potential keys to unlocking said future gains - just like your grants help remove some immediate financial pressures. One can't borrow against future 10% GDP growth or whatever to fund present needs.

Lack of this kind of mood is part of what's always annoyed me about futurists like Ray Kurzweil and Nick Land. Plenty of happening-right-now problems that meaningfully threaten to derail the future, no matter what happens with AI...

Expand full comment

I work in AI alignment. 'A lot of money' is relative of course, but IIRC the total money spent on AI alignment-related work over the last twenty years was still less than half of CERN's yearly budget. And that budget is shared between research and policy work. And I think a lot of it is just one huge lump sum by OpenPhil to one particular org.

I'd guess it's also way less than what is spent on attempted health interventions in third world countries, and 'worse than nothing' is a possibility there as well.

So I don't really see how it follows from this that AI alignment grants are harder.

And the technical research ecosystem seems funding-constrained to me right now. There are more skilled people that I'd like to hire, or see hired, or see starting more orgs, than there is money to fund them. There are more research ideas that seem worth pursuing to me than teams pursuing them. And more talented and motivated people who want in than training and onboarding programs with enough spots for them.

Expand full comment

Maybe improving the legibility of the AI alignment research ecosystem for potential funders would be a good project for someone to take on.

Expand full comment