127 Comments

A common reaction many people (even in the prior discussion) seem to have when one raises the issue of IRBs is: I can imagine, or have heard of, really unethical research, so of course we need independent IRBs to pre-approve research.

But this seems like exactly the opposite of the attitude we take in other aspects of the world [1]. So what accounts for the intuition that research needs pre-approval, while in most other aspects of life we use liability and post-hoc prosecution (trials and lawsuits come after, not before)? Is this something specific to experiments?

I'd lean towards a system that holds institutions liable after the fact if they are determined to have allowed a clearly unethical study to go ahead, but this frequent reaction makes me wonder what I'm missing.

---

[1]: Doctors who prescribe recklessly and surgeons who perform risky surgeries are sued/disciplined/prosecuted after the fact. A company that offers a defectively harmful product can kill people, but (with a few exceptions) that's handled with lawsuits. And we specifically reject prior restraint on speech, relying instead on subsequent lawsuits (when the speech is tortious).


“At Facebook, code changes have low upside (Facebook will remain a hegemon regardless of whether it introduces a new feature today vs. in a year) and high downside (if you accidentally crash Facebook for an hour, it’s international news). Also, if the design of one button on the Facebook website causes people to use it 0.1% more, that’s probably still a difference of millions of hits - so it’s worth having strong opinions on button design.”

That’s exactly right. However, it’s also why Facebook will disappear someday.


Reading the parts about MTurk, I have to wonder: if requesters doing mass-rejects is such a real problem, why does Amazon report a raw hit rate rather than an adjusted hit rate that weights hits/non-hits by the requester's own approval rate? It seems like that would be a more useful metric.
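For concreteness, here is one way such a weighting could work - a toy sketch of my own, not anything Amazon actually computes: a rejection from a requester who rejects almost everyone is discounted, while a rejection from a normally lenient requester counts in full.

```python
def adjusted_approval_rate(assignments):
    """Toy adjusted metric: weight each of a worker's rejections by the
    rejecting requester's own overall approval rate, so mass-rejecters
    barely dent the worker's score.  `assignments` is a list of
    (approved, requester_approval_rate) pairs.  Illustrative only."""
    weighted_hits = 0.0
    total_weight = 0.0
    for approved, requester_rate in assignments:
        # Approvals always count fully; rejections are discounted by
        # how trigger-happy the rejecting requester is.
        weight = 1.0 if approved else requester_rate
        weighted_hits += weight if approved else 0.0
        total_weight += weight
    return weighted_hits / total_weight if total_weight else 1.0
```

Under this scheme, a worker with one approval and one rejection from a requester who rejects everyone keeps a 100% adjusted rate, while the same rejection from a normally lenient requester drops them to 50%.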


I want to say something, but I haven't yet come up with anything profound on the topic.

My assumptions:

* A certain level of external review is necessary for most research.

* Setting up an adversarial system for this review simply motivates the creation of more bullshit.

* You can't legislate competence, and various attempts to do so have only made things worse.

My only idea is a more "arbitration"-like system, where the IRB can say "your research is bad in any form and you must stop", but can't impose "you have to read aloud a six-page consent form" style requirements. But that has its own problems ...


I'm not sure if the last paragraph comment from Scott was about the tax prep people lobbying to protect their business, or about the equivalent IRB version. But in case anyone is not familiar with how the tax prep industry deliberately sabotages attempts to simplify the tax filing process, see e.g.:

- https://www.propublica.org/article/inside-turbotax-20-year-fight-to-stop-americans-from-filing-their-taxes-for-free

- https://www.nbcnews.com/business/taxes/turbotax-h-r-block-spend-millions-lobbying-us-keep-doing-n736386

I'd also be interested in learning if there is an equivalent lobby to prevent IRB streamlining.


On the question of whether absences cause, people's intuitions are probably influenced by which account of causation they're inclined to accept. If you have a kind of "energy transfer" model of causation in mind (if C causes E, then some energy has been transferred from C to E), then absences can't cause: absences of events can't transfer energy. But if you have a kind of counterfactual model of causation in mind (C causes E if and only if: had C not occurred, E would not have occurred), then absences can cause. The absence of the removal of the surgeon's clamp causes death because, had the clamp been removed, the patient would not have died. This is a vexed issue because our concept of causation seems to include both energy-transfer and counterfactual elements, so this marks an inconsistency in our concept of a cause.


Not directly related, but can I suggest that any time you do a "highlights from the comments on X" post, you link to it at the bottom of the original post? That way, if I show someone the original post, they have an easy place to go for further reading on the subject.


> There is no way for an average uneducated jury to distinguish between “doctor did their homework and got unlucky” and “doctor did an idiotic thing”.

So why have one? If the accused has a right to a trial by a jury of their peers, why do the lawyers stack the jury with uneducated people who know nothing about medicine? In a case like this, it is literally one of the first things the lawyers do during the voir dire process: eliminate any and all *actual peers,* leaving only the gullible and uninformed, who are easier for lawyers to manipulate.

Why is that not grounds for said lawyer being summarily ejected from the courtroom?


Regarding whether IRBs "cause" whatever deaths the studies would have ended up preventing but didn't . . .

If a skilled surgeon retires, and statistically more patients die because there's now one fewer surgeon and the average skill level of the remaining surgeons is a bit lower, I don't think most people would say that the surgeon "caused" those deaths by retiring — even if the surgeon feels a bit guilty about doing so. Furthermore, if the surgeon is retiring at the encouragement of his/her spouse so they can travel more or something, I don't think most people would feel comfortable saying that the spouse "caused" those deaths via this encouragement.

On the other hand, you've given some clear examples where someone certainly does "cause" deaths by preventing something that would have prevented those deaths.

I think the difference lies in *responsibility*. The surgeon is entitled to retire, and is not responsible for patients (s)he doesn't treat. The spouse, likewise. But someone who barges into an operating room and restrains people is not entitled to do so, and has a responsibility not to, so we have no qualms saying they "cause" the death.

Awkwardly, this means that whether we say that the IRB system "causes" deaths is not a fact that goes *into* deciding whether it's a good system, but rather, a terminological choice that *depends* in part on whether we think it's a good system. Someone who feels that IRBs are responsible only for protecting study participants, and are entitled to prevent or delay studies to that end, would reasonably feel uncomfortable saying that they "cause" whatever deaths would have otherwise been prevented.

(I suppose that a rationalist would reply that we just need to accept this discomfort and say that yes, the surgeon and spouse and IRBs all cause deaths?)


I just want to say that this post, and all the comments and commenters gathered here, make up some of the highest-quality and most informationally rich discussion I've read. Thanks to Scott and all the subject experts for their stories.

The greatest intellectual compliment I've ever read was at the 2000s group blog "Baseball Toaster"; it applies here:

"Thanks for once again giving me something to make something of myself with. I've learned so much in such a short time, yet I feel I've been here forever. If it means anything, I think you're a hero. Period."

BRetty

PS -- Ken Arneson's final Baseball Toaster post, "And So to Fade Away", remains one of the greatest pieces of writing, and sportswriting, I have ever read. A beautiful essay that is also a brilliant use of the web/blog/internet form and format. Canonical. (Comment #50 by user ChillWyll is what I referenced above.)

https://catfishstew.baseballtoaster.com/archives/1182040.html


"... you need some standards for when a study is safe, so that if people sue you, you can say “I was following the standards and everyone else agreed with me that this was good” and then the lawsuit will fail. Right now those standards are “complied with an IRB”."

This is not technically true. You can still be sued, despite a clean IRB review. Indeed, in large, multicenter trials sponsored by pharma companies, most centers require that the company indemnify the sites for anything study-related in addition to the standard IRB review. Pharma companies turn around and carry liability insurance in case someone files a claim against them for study-related actions taken by sites.

I think this is probably a good incentive structure (though also susceptible to being gamed by bad-faith actors, of course). I prefer a system where people are reasonably cautious because there's a cost to not being cautious, as opposed to an expansive set of rules aimed at keeping people cautious that actually just creates a bunch of make-work for an army of compliance officers who aren't actively engaged in promoting the scientific/medical/safety value of the study so much as just checking off boxes. Sometimes you still end up with people who are intellectually engaged with the process, but just as often you run into people so far removed from the purpose of the study that they advocate doing things that are actively harmful as justification for their jobs. You have to guard against these subtle and unintentional attacks on the integrity of a good scientific study.

I read a few years back that clinical research was the second most heavily regulated industry in the US. It shows. There's a point where one person working alone can do a study because it's not very complicated or because we're playing fast and loose with safety standards. I think bringing stringency standards up to the point where a team of people are working together to perfect the things people put into their bodies is a good thing. But at some point the team of people slogging through boring checklists grows so large that instead of capitalizing on the 'more minds are better' ethos we end up with drugs 'designed by committee', which is probably even worse than the lone researcher trying to do it all on their own.


In partial defense of JDK's argument on the difference between directly vs. indirectly causing a death:

My understanding is that the AMA has a strict standard on this. The difference between active euthanasia and 'passive' pulling of the plug lies in the question of whether the physician is the direct cause of the underlying condition. The term of art is that the 'cessation of extraordinary measures' to keep someone alive is not considered euthanasia. But-for the existence of the hospital, ventilator, etc. the patient would already be dead, so the system allows the people within it to decide when withholding that care is acceptable (the family is grieving, the patient had a DNR, etc.). If the patient would still be alive but for the existence of the medical intervention, that's euthanasia and is on a different side of a line, at least in certain circles of medical ethics.


> we can’t cut the IRB out entirely without some kind of profound reform of the very concept of lawsuits, and I don’t know what that reform would look like.

The loser pays the legal bills. There’s a Latin phrase for this so you know it’s legit.


> But playing devil’s advocate: at a startup, code changes usually have high upside (you need to build the product fast in order to survive) and low downside (if your site used by 100 people goes down for a few minutes it doesn’t matter very much). At Facebook, code changes have low upside (Facebook will remain a hegemon regardless of whether it introduces a new feature today vs. in a year) and high downside (if you accidentally crash Facebook for an hour, it’s international news). Also, if the design of one button on the Facebook website causes people to use it 0.1% more, that’s probably still a difference of millions of hits - so it’s worth having strong opinions on button design.

This is true, but there's still generally no reason to have such an extensive process for minor changes. If you did your experiment correctly from the get-go, there should be very little uncertainty about whether the button causes 0.1% more usage (at least for a company of Facebook's size). There should be one or two people responsible for making the decision, and only in rare cases should you involve multiple entirely different teams. Certainly not over something trivial like button design. Regardless of scale, making 100 decisions at 80% correct is better than making 20 decisions at 95% correct.
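The arithmetic behind that last claim is worth making explicit, because it also shows when it flips: the fast process yields roughly 80 correct calls versus 19 for the slow one, but once each mistake is costly enough, caution wins - which is arguably Facebook's situation. A toy model (the cost parameter is my own illustration, not from the comment):

```python
def net_value(n_decisions, accuracy, mistake_cost):
    """Net value of a decision process under a simple model:
    each correct decision is worth +1, each mistake costs
    `mistake_cost`.  Toy numbers for illustration only."""
    correct = n_decisions * accuracy
    wrong = n_decisions * (1 - accuracy)
    return correct - mistake_cost * wrong

# Cheap mistakes: the fast, slightly sloppy process wins (roughly 60 vs 18).
fast = net_value(100, 0.80, mistake_cost=1)
slow = net_value(20, 0.95, mistake_cost=1)

# Expensive mistakes (crashing Facebook): the careful process wins.
fast_risky = net_value(100, 0.80, mistake_cost=10)
slow_safe = net_value(20, 0.95, mistake_cost=10)
```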

This reminds me, though, since I work in this field but forgot to post on the original thread: Is it surprising that there's absolutely no regulation on the experiments that tech companies do? Given the level of paranoia applied by IRBs, I'm sure they could come up with some reason why it risks harm to show one person a slightly different shade of blue than another. (I'm not saying there should be--I'm just asking if it's surprising that there isn't).


" There is no way for an average uneducated jury to distinguish between “doctor did their homework and got unlucky” and “doctor did an idiotic thing”. Either way, the prosecution can find “expert witnesses” to testify, for money, that you were an idiot and should have known the study would fail."

If expert witnesses don't provide valuable expertise but just say what they are paid to say, isn't that a pretty fundamental problem in the legal system? And one not limited to medical cases: presumably it is also true for forensics and fraud cases.

Why aren't they neutral and paid for by the court? Do courts genuinely not care about objectivity?

I have seen lawyers (in the UK) joking about it, are expert witnesses a total farce?


> the prevailing legal-moral normative system (PLMNS)

This acronym is a tragically missed opportunity. 'Prevailing' can be inferred, just call it 'the' system, or 'our' system...

Our LeMoNS.

Apr 18, 2023·edited Apr 18, 2023

> In order to remove this risk, you need some standards for when a study is safe, so that if people sue you, you can say “I was following the standards and everyone else agreed with me that this was good” and then the lawsuit will fail. Right now those standards are “complied with an IRB”. This book is arguing that the IRB’s standards are too high, but we can’t cut the IRB out entirely without some kind of profound reform of the very concept of lawsuits, and I don’t know what that reform would look like.

A reform direction: any tort liability should be waivable in contract. (Ideally criminal liability too, but that's too far outside the Overton window.)


> I’m surprised by this, both because I thought tech had a reputation for “move fast and break things”, and because I would have expected the market to train this out of companies that don’t have to fear lawsuits.

I'm not sure if this has penetrated outside of tech but the general understanding is that one should work at Google if one wants to never get anything done. It's what led to the conspiracy theory that Google just hires people to keep the other companies from having them.


> Meadow Freckle writes:

>> Why can’t you sue an IRB for killing people for blocking research? You can clearly at least sometimes activist them into changing course. But their behavior seems sue-worthy in these examples, and completely irresponsible. We have negligence laws in other areas. Is there an airtight legal case that they’re beyond suing, or is it just that nobody’s tried?

> I don’t know, and this seems like an important question.

Well, at the very least you'd have major problems with standing. They haven't killed _you_. You can't sue over harm to a known third party, and you certainly can't sue over harm to a hypothetical third party.

Say you (you in particular, whoever you are) hire someone to paint your house. And he does. There's no shirking or malingering or anything. But he's terrible at his job, and it takes him a long time, and you end up paying about triple the ordinary cost of getting your house painted for a paint job that you're going to have to hire someone else to redo.

Now, the question isn't "can you sue him for doing a bad job?"

The question here is "can I, Michael Watts, someone who doesn't know either of the two of you and lives in another country, sue that guy for doing a bad job painting your house?"

And it's not really hard to understand why that's difficult.


> NIMBYs, whatever else you think about them, are resisting things they think would hurt them personally, whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

If we run with this, we'd analogize the IRBs to environmentalists instead of NIMBYs.


"The "solution" that google uses is to first define (by business committee) a non-zero number for "how much should this crash per unit time". This is common for contracts, but what is less common is that the people responsible for defending this number are expected to defend it from both sides: not just preventing crashing too often but also preventing crashing not often enough. If there are too few crashes, then that means there is too much safety and effort should be put on faster changes/releases, and that way the incentives are better."
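The two-sided defense that quote describes - what Google's published SRE practice calls an "error budget" - can be sketched as a simple check. The thresholds and wording here are my own illustration, not Google's actual tooling:

```python
def budget_verdict(observed_downtime_min, budget_min, tolerance=0.25):
    """Flag deviations from the agreed crash budget in *both*
    directions: too unreliable means slow down, but too reliable
    means you're over-investing in safety.  Illustrative thresholds."""
    ratio = observed_downtime_min / budget_min
    if ratio > 1 + tolerance:
        return "over budget: slow down releases"
    if ratio < 1 - tolerance:
        return "under budget: ship faster"
    return "on budget"
```

The point is the middle branch: a team that never crashes at all gets told to take more risk, exactly as the comment describes.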

There's a more extreme version of this.

Netflix runs a ChaosMonkey.

https://github.com/Netflix/chaosmonkey

The job of the ChaosMonkey is to crash services, kill processes and generally break stuff. Intentionally.

So now the infrastructure people and the developers are faced with a situation: they can no longer hope for everything to run perfectly all of the time. It's not a question of whether servers will crash but *when*, because the ChaosMonkey is doing its job.

So they need to build everything under the assumption that any server or service can fail at any time and everything needs to be built to survive the chaos.

It's a design philosophy that apparently works surprisingly well, possibly because it deprives the infrastructure people of the possibility of everything running smoothly all the time.
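The core mechanism can be caricatured in a few lines - this is only a sketch of the idea, not Netflix's actual implementation (the real Chaos Monkey terminates cloud instances through provider APIs on a schedule):

```python
import random

def chaos_step(services, kill_prob=0.05, rng=random):
    """One round of toy chaos engineering: each running service is
    independently killed with probability `kill_prob`, so everything
    must be designed to survive losing any of its dependencies.
    Returns the services still alive after this round."""
    return [s for s in services if rng.random() >= kill_prob]
```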


> Under utilitarianism, you'd probably still want some sort of oversight to eliminate pointless yet harmful experiments or reduce unnecessary harm, but it's not clear why subjects' consent would ever be a relevant concern; you might not want to tell them about the worst risks of a study, as this would upset them.

This is similar to the "would you murder a patient so that their organs can be used to save the life of five unrelated others" thought experiment.

The standard utilitarian reply is that if this was commonly done, people would stop going to the doctor because they don't want to risk being murdered, and that would overall result in lower utility.

(Maybe doctors can do it *in secret* as a conspiracy - but this has the same expected value, just higher variance. You may fool some patients, but the conspiracy has a chance of being uncovered, in which event people will stop trusting *any group* that might want to kill them for the greater good even if they don't seem to be doing it because they too might have a conspiracy.)
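The "same expected value, just higher variance" point can be checked directly with a toy lottery calculation; the cost numbers below are hypothetical, chosen only so the two means match:

```python
def lottery_stats(outcomes):
    """Mean and variance of a discrete lottery given as
    (probability, cost) pairs.  Toy illustration."""
    mean = sum(p * c for p, c in outcomes)
    var = sum(p * (c - mean) ** 2 for p, c in outcomes)
    return mean, var

# Openly murdering patients: a certain trust cost of 10 (hypothetical units).
open_policy = [(1.0, 10)]
# Secret conspiracy: no cost if never exposed (p = 0.9), a cost of 100
# if exposed (p = 0.1).  Same mean cost, far higher variance.
secret_policy = [(0.9, 0), (0.1, 100)]
```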


"I don’t know if some branch of the government has enough power to mandate everyone use IRBs regardless of their funding source."

Congress does in the US, as do most countries' legislatures. The Commerce Clause gives the US Congress a mandate to regulate interstate and international commerce. In practice, since Wickard v. Filburn, it can regulate intrastate economic activity as well.

No major drug company, or AI company, operates solely within the borders of one US state, so it can be regulated even with a relatively reasonable interpretation of the Commerce Clause.

After all, where does the FDA get its power to ban selling unapproved drugs (without any federal funding conditions or anything such) in the first place?


I do wonder if the gene therapy researcher Whitney mentions is underestimating what they could do without general regulatory insanity. I'm not sure whether you can attribute this to an IRB precisely, but a regulatory change is maybe the reason we don't have artificial hearts: https://www.newyorker.com/magazine/2021/03/08/how-to-build-an-artificial-heart

The key, enraging quote being: "In the early days of artificial-heart research, a team could implant a device in a dying person on an emergency basis—as a last-ditch effort to save his life—and see how it functioned. Ethicists were uneasy, but progress was swift. Today, such experimentation is prohibited: a heart’s design must be locked in place and approved before a clinical trial can begin; the trial may take years, and, if it reveals that the heart isn’t good enough, the process must start again."

I wonder if there could be much better/easier gene therapy research too but it just doesn't even occur to people to ask the question. To be fair it would be harder with gene therapy since the results are generally much noisier and the endpoints more complicated than "is the patient dead" so maybe not.


> whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

They literally benefit from having rules to enforce because they literally get paid for it.

I thought that maybe you omitted this angle because it's too obvious, but now I guess it's not: if you create an organization that can invent pointless rules for other people to follow and each new pointless rule guarantees continued employment for the employees of said organization and increases the number of required subordinates and therefore the salaries and status of senior employees, while repealing a pointless rule threatens downsizing and becoming unemployed, then this organization will keep inventing a lot of pointless rules.

It's not a weird accident, or a collective PTSD from the Nazi research practices, or that one Gnostic guy who formulated the original ethos.

This is what the incentives incentivize and this is what you'll keep getting, guaranteed, as long as the incentives remain the same. No action that doesn't meaningfully change the incentives can possibly change the outcomes. You can't socially pressure people into making their own jobs obsolete. No amount of articles calling IRBs out will result in IRBs voluntarily removing some of the pointless rules that guarantee their employment.


> I appreciate this correction - NIMBYs, whatever else you think about them, are resisting things they think would hurt them personally, whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

I think the analogy to the IRB here isn't the NIMBYs themselves; it's the people at city hall who run "public impact meetings" and insist on a 3-year, $300k permitting process to open an ice cream shop. The analog of NIMBYs would be the few people who do get hurt in the studies - a nonzero number, but the problem is the people who design the process, not the NIMBYs/subjects themselves.


“ you would expect competition to train people out of this, but also another situation where a hegemon might feel tempted to rest on its laurels!”

So the places you’d think you’d get competition in defense contracting are: on the government side, an actual big war that forces you out of peacetime priorities, or on the contractor side, a push for efficiency to either earn more profit or win more contracts.

The problem with the former is that neither the USA nor anyone that could plausibly threaten us existentially has had one of those big wars in well over half a century (this is of course a good thing generally, but a bad thing for encouraging utilitarian cost benefit assessments in defense developments). And our competitors have most of the same problems plus often worse actual corruption.

The problem with expecting the contractors to be competitive is that most of these inefficient, excessively risk averse, micromanaging policies are demanded by the customer (government agencies). They are literally written into the contracts, because the bosses of the agencies get yelled at by Congress whenever a defense contract goes sufficiently pear-shaped to make the news, so they need to “Do Something” to “Ensure Contractors are Following Best Practices to Protect the Taxpayer”.

Just like IRBs, they aren’t judged against some ideal cost-benefit analysis, but against the “did you screw up bad enough for the public to notice” standard. Boring stuff like moderate cost and schedule overruns and wasted dollars spent on pointless tests and reports can’t trump that risk aversion.

So contractors can’t compete on efficient processes because the processes are dictated by the customer and everybody has to bake the costs into their proposals. And perversely they get judged by how enthusiastically and thoroughly they implement these processes, so if they want to win future business, they shut up and get in line. The financial incentive isn’t really there either - true “cost plus” contracts are largely gone but in general the government still covers the actual development costs plus fixed incentive fees, or pays you based on your original estimate (including the mandated inefficiencies), so the incentive to cut costs is weak.

As an example, SpaceX charges the government (NASA, the Space Force, other agencies) something like 3x as much per launch as their advertised fees for commercial flights. These are literally the same rockets designed by the same people and built on the same production lines, but with the added burden of extra oversight, testing, validation, etc. (some of the cost is more legitimately unique to government launches, such as costs imposed by securing classified payloads and associated data).

I don’t want to minimize the fact that in some cases contractors really do screw up and waste a lot of money, but a lot of this is “oversight theater” that serves more as CYA for the contracting agency than anything that actually drives quality and efficiency.

There are a couple of other perverse ways competition actually hurts things:

1) To avoid wasting government money on sweetheart single-source subcontracts, we are mandated to "competitively bid" every part we order from somebody else. Even if we've got an equivalent part that works just fine and that we've been ordering for another program for years, it doesn't matter: you need to solicit bids from multiple suppliers, take the "best" bid by some objective criteria, and generate a mountain of paperwork that goes through many hands to get this approved. This takes forever and costs a lot of money. It might make sense for some major subcontracts, but often we're talking about commercially available catalog parts that cost four- or low-five-digit dollar amounts on a literal billion-dollar contract, and any savings are rapidly swamped by the overhead cost (but due diligence has been done, the agency can confidently report).

2) Sometimes awarding a big contract to only one contractor after an initial proposal fails, because something goes wrong after lots of money has been spent - it can be very hard to judge the success probability of a very complicated development based just on a proposal. The solution is to fund multiple teams farther into development, say through the Critical Design Review (CDR). Unfortunately, this sometimes means that dumb requirements that got baked into the proposal can't be fixed, because doing so would warp the competition. If we were single-sourced we could say "look, government, you want A, but it will double the cost, take twice as long, and we've done an analysis that shows that giving you 50% of A will cover 99% of your use cases, plus make the thing better at B and C". But in a competitive program, the government must say "nope, contract says A, do A".


I would disagree with Rbbb's comment a bit regarding the Nimby parallel. I would say they're both situations where regulations are used by one group to pursue their perceived self-interest in a way that creates a tragedy of the commons effect.


Commenting on "Notably, in most Anglo-Saxon legal systems, you can't consent to be caused physical injury."

Importantly though, you can consent to *risking* physical injury and death, even very severe injury and death. In fact (thinking of impressment and conscription) the government is even allowed to compel you to risk those things. That seems importantly relevant when thinking about medical research and consent problems generally.


If the main benefits of the IRB are accruing to MTurk workers, that doesn't justify the IRB. That just sounds like someone (taxpayers, presumably) funding Amazon's failure to moderate disputes in its own marketplace.


Hopefully a good explanation of JDK's comment:

I think JDK's point is that your comparison and complaint only works for a narrow set of overwhelmingly likely and precise counterfactuals. In the case of a protocol-optimization study that was delayed, yes, we know the consequences. A speculative study that never happened? No clue. In your own area of clinical interest, what is the range of possible outcomes of psychedelic therapy (Roughly status quo to hippie utopia?) and what is the size of the range you'd assign even a 2/3 probability to? An "easy" example I'm semi-seriously challenging you to forecast (I don't have money with which to bet you, but I'm sure you can find someone willing to put it on a prediction market): Similarly dosed intranasal racemic, s-, and r-ketamine monotherapies were all in as-yet unpublished Phase III trials, last year; forecast their respective effect sizes and response rates. You're already intimately familiar with the relatively large amount of existing research, and they're three variations on the same molecule! If you can't forecast *those* trial results with reasonable accuracy and precision, why think anyone can do the same for something more speculative or the downstream benefits of basic science?

If we can't reasonably quantify the benefit, a cost-benefit - and, therefore, consequentialist moral - analysis is "garbage in, garbage out."


The anecdotes mirror my experience working in military IT and dealing with cyber security departments which churn out mandates that are completely divorced from both the technical systems as well as the actual security principles that they're supposed to be enforcing. I wonder if this generalizes to any kind of organization which attempts to separate "implementation" from "accountability"... the "accountability" group tends to be overtaken by bean counters who are better at concern-trolling than doing their actual jobs.


From the jumpingjacksplash quote:

> Mengele's research has definitely saved more people than he killed

{{Citation needed}} very much.

From my understanding, Mengele's "research interests" were about as relevant to the thriving of humanity as you would expect the obsessions of a random serial killer to be. People with different eye colors, wtf? How does that help the Endsieg?

From his Wikipedia article, there might be cases where he investigated the spread of diseases with the help of Jewish doctors, but mostly his method of fighting diseases was to send all the inmates of the building to the gas chamber and then disinfect the building. Arguably Mengele was part of a structure which killed on the order of perhaps a million people in Auschwitz (no, I did not look up the deaths during his stay there, nor will I get involved in any discussion of numbers), most of them not involved in his "research", so for that statement to be true his research would have to have saved millions of people. I am very doubtful that his research saved even more people than he killed /with his research/.

Also, scientists and "scientists" working in totalitarian regimes often have some epistemic conflict of interest. Suppose a Nazi scientist had a genuine but unethical experiment trying to test the hypothesis of "Aryan" superiority, and the result came out negative. What do you suppose Himmler would do if he learned of their publication?

Not all atrocious experiments are also scientifically useless, but there seems to be a strong correlation between being an atrocious human being and being an atrocious experimenter.

I previously mentioned the Dachau freezing water experiments as something which might save human lives. Reading Wikipedia,[0] I might have to go back on that:

> The results of the Dachau freezing experiments have been used in some late 20th century research into the treatment of hypothermia; at least 45 publications had referenced the experiments as of 1984, though the majority of publications in the field did not cite the research.

> In a 1990 review of the Dachau experiments, Robert Berger concludes that the study has "all the ingredients of a scientific fraud" and that the data "cannot advance science or save human lives."

[0] https://en.wikipedia.org/wiki/Nazi_human_experimentation


> I thought tech had a reputation for “move fast and break things”, and because I would have expected the market to train this out of companies that don’t have to fear lawsuits.

Still reading through the post, but I'll just highlight that Tech often has two major flavors: "Startup Tech" (sometimes "Small Company") that is very prone to the Move Fast and Break Things lifestyle, and "Enterprise Tech" (sometimes "Big Company") where there is a lot more regulatory influence and litigation risk active. These are the places that tend to stay out of the limelight but have a lot more trend for the layers of approval and Dilbert-like dotted line bosses who can veto without cost to themselves.


Regarding harm/omission, that's a bad example for Christianity's standard positions. Not saying there isn't such a distinction in this framework, but there's also too much of "I was hungry and you didn't feed me" for that example to work. You definitely have a moral obligation to let the guy know there's a car coming.


I kinda mentioned this in the original book review comment section, but this quote:

“Jonas argued that all cancer studies should be banned because it’s impossible to consent when you’re desperate to survive”

makes me even more sure: this guy really sounds like a non-fictional member of the BETA-MEALR party:

https://slatestarcodex.com/2013/08/25/fake-consensualism/

Which opens with:

“Dear friend, have you considered banning health care?”

Looks like we’ve found someone who actually has considered it! (Not quite the same reasoning, but still uncanny)

