murphy's avatar

"The theory is that excessive compensation could get desperate people to agree to participate in your torture chamber"

Yep, this justification is one I always found suuuuper skeevy.

People's time has a fair market value, and even a (small) risk of harm has a fair market value.

Underpaying people for that is the easiest way to reach a situation where you're exploiting them. Worrying about over-paying is failing to see the forest for the trees.
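
To put a rough number on that, here's a minimal sketch of the standard way small risks get priced, using a value-of-statistical-life figure; both numbers below are illustrative assumptions, not anything from the post:

```python
# Back-of-the-envelope pricing of a small risk of harm via a
# value-of-statistical-life (VSL) figure. Both numbers are assumptions.
VSL = 10_000_000             # assumed value of a statistical life, in dollars
risk_of_death = 1 / 100_000  # assumed incremental risk the study imposes

risk_premium = VSL * risk_of_death
print(f"Fair-market premium for the risk alone: ${risk_premium:,.0f}")
# -> Fair-market premium for the risk alone: $100
```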

AntimemeticsDivisionDirector's avatar

Yeah, I can easily see the case being made that this is exactly backward.

If your compensation is too *low*, only the truly desperate with no other options will take the time out of their day to participate.

murphy's avatar

The justification I've heard is that it's moral to experiment on people who volunteer out of altruism or a sense of duty, etc., but it's not moral to experiment on people motivated by cash.

Someone had to get a lot of ethicists in one place to reach logic that twisted up on itself.

Peter Gerdes's avatar

A common reaction many people (even in the prior discussion) seem to have when one raises the issue of IRBs is: I can imagine, or have heard of, really unethical research, so of course we need independent IRBs to pre-approve research.

But this seems like exactly the opposite of the attitude we take in other aspects of the world [1]. So what accounts for the intuition that research needs pre-approval, while in most other aspects of life we use liability and post-hoc prosecution (trials and lawsuits come after, not before)? Is this something specific to experiments?

I'd lean towards a system that holds institutions liable after the fact if they are determined to have allowed a clearly unethical study to go ahead but this frequent reaction makes me wonder what I'm missing.

---

[1]: Doctors who prescribe recklessly and surgeons who perform risky surgery are sued/disciplined/prosecuted after the fact. A company that offers a defectively harmful product can kill people, but (with a few exceptions) that's handled with lawsuits. And we specifically reject prior restraint in speech and rely on subsequent lawsuits (when tortious).

Garrett's avatar

I suspect that part of the problem is some reverse of Pascal's Mugging. A researcher thinks about the potential millions of amortized lives they might save, all while torturing real people in the here-and-now. In that case, an IRB acts as a mandatory reviewer to make sure the researcher isn't going off the deep end.

In the best of cases, the IRB also makes sure that the most value is extracted from the least risk possible - e.g. ensuring adequate statistical power, so that any meaningful signal can be extracted without exposing people to unnecessary risk.
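
As a minimal sketch of what that power check amounts to in practice (the rates, alpha, and power target below are made-up illustrative values, not from any actual protocol):

```python
# Sketch: participants needed per arm to detect a difference between two event
# rates, using the standard two-proportion normal approximation.
from scipy.stats import norm

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-proportion comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return z ** 2 * variance / (p_control - p_treatment) ** 2

# e.g. hoping to cut a 20% complication rate to 15%:
print(round(n_per_arm(0.20, 0.15)))  # ~903 per arm; fewer than that and the
                                     # study exposes people to risk it can't pay back
```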

What makes this challenging is that the IRB seems to be set up to address the low-risk high-frequency or high-risk low-frequency issues, but instead seems to worry about the low-risk, low-frequency issue. They don't worry about the "study of decapitating orphans while cackling". Ideally, they'd either ask questions like "do you have insurance to cover any rare life-long injury?" or "maybe you should warn people that they are likely to have some swelling for a few days". But instead they seem to worry more about the "patient might be made sad that the trial medication didn't come in their favorite color" kind of thing.

thefance's avatar

Frankenstein. Pushing the boundaries of knowledge can be quite galvanizing.

Peter Gerdes's avatar

Was this just an excuse for a bad pun? ;-)

Nolan Eoghan (not a robot)'s avatar

“At Facebook, code changes have low upside (Facebook will remain a hegemon regardless of whether it introduces a new feature today vs. in a year) and high downside (if you accidentally crash Facebook for an hour, it’s international news). Also, if the design of one button on the Facebook website causes people to use it 0.1% more, that’s probably still a difference of millions of hits - so it’s worth having strong opinions on button design.”

That’s exactly right. However it’s also why Facebook will disappear some day.

Sniffnoy's avatar

Reading the parts about MTurk, I have to wonder why, if requestors doing mass-rejects is such a real problem, Amazon reports raw hit rate rather than an adjusted hit rate that weights hits/non-hits by the requestor's hit rate; it seems like that would be a more useful metric.
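
As a sketch of what such an adjustment could look like - the weighting scheme and field names here are assumptions for illustration, not Amazon's actual metric:

```python
# Sketch: an approval rate that discounts each rejection by how often the
# rejecting requester approves work overall, so a mass-rejecting requester
# barely dents a worker's score. Scheme and field names are assumptions.
def adjusted_approval_rate(tasks):
    """tasks: dicts like {"approved": bool, "requester_approval_rate": float}"""
    total = 0.0
    approved = 0.0
    for t in tasks:
        if t["approved"]:
            approved += 1.0
            total += 1.0
        else:
            # a rejection from a requester who approves 98% of work counts
            # almost fully; one from a mass-rejector counts for little
            total += t["requester_approval_rate"]
    return approved / total if total else 1.0

tasks = ([{"approved": True, "requester_approval_rate": 0.98}] * 95
         + [{"approved": False, "requester_approval_rate": 0.10}] * 5)
print(round(adjusted_approval_rate(tasks), 3))  # 0.995, vs. a raw rate of 0.95
```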

Doug S.'s avatar

Based on the comments, it appears to be because Amazon just doesn't care.

Roxolan's avatar

And I imagine Amazon is very much one of those large tech companies with a dozen internal committees who will veto any code change if a button's colour is 1% wrong. Might be stuck even if the MTurk team does want to make things better.

Alex Power's avatar

I want to say something, but I haven't yet come up with anything profound on the topic.

My assumptions:

* A certain level of external review is necessary for most research.

* Setting up an adversarial system for this review simply motivates the creation of more bullshit.

* You can't legislate competence, and various attempts to do so have only made things worse.

My only idea is a more "arbitration"-like system, where the IRB can say "your research is bad in any form and you must stop", but can't impose "you have to read aloud a six-page consent form" style requirements. But that has its own problems ...

Gres's avatar

If there were any group with authority that could publicly say, "These checks are doing more harm than good," it could set targets for improvement and threaten the IRBs with consequences for missing them. This could even just be a college rankings provider giving an "Ease of research" score - medium-size colleges care about those rankings, and they could develop a culture that could spread to other colleges.

None of the Above's avatar

To what extent is the problem here just a lot of little fiefdoms in which local powers in the IRB make a bunch of dumb or unreasonable demands? The way different scientists talk about their IRB experiences suggests that there's just a huge amount of variance here. Someone was put into a position of local power and uses it to lord it over other people, either because it's fun to lord it over people or because they're in over their head and don't quite know what else to do. (Like someone asked to review a technical document who picks some random thing to complain about, simply because he feels like he should say *something* or everyone will know he doesn't know enough to be reviewing this document.)

Theodric's avatar

“ Like someone asked to review a technical document who picks some random thing to complain about, simply because he feels like he should say *something* or everyone will know he doesn't know enough to be reviewing this document.”

These people are the bane of my existence - nitpickers trying to prove their value and smarts (often these are "independent reviewers" hired by the customer because the customer isn't technically proficient enough to review you directly). Sometimes you think they get paid by the comment / action item.

Gres's avatar

That’s probably a fair comparison in a lot of ways. Medicine isn’t the only industry with lots of pointless local regulations. If anything, it might be something of a test case for resisting those regulations. Medical researchers have a strong formal community, and universities are used to being ranked publicly against common criteria. Change in IRBs may be easier than change in much less-legible local tyranny elsewhere.

On the other hand, fiefdoms feel like more of a temporary problem than regulation creep. Regulation creep has a ratchet effect - once a given IRB starts mandating pens instead of pencils, it's really hard for them to stop. Decisions by petty tyrants which aren't safety regulations seem to have much less resistance to changing back after the tyrant leaves - if the boss decides everyone has to use blue pens because blue is the company colour, that decision probably won't last long after the boss leaves.

Ch Hi's avatar

The real problem is that there are no consequences to the members of the IRB if they make arbitrary restrictions.

There needs to be an appeal to a different body, one with the power to discipline the members of the IRB in some way. Things would still go off the tracks occasionally, but a lot less frequently.

Joshua's avatar

I'm not sure whether Scott's comment in the last paragraph was about the tax prep people lobbying to protect their business, or about the equivalent IRB version. But in case anyone is not familiar with how the tax prep industry deliberately sabotages attempts to simplify the tax filing process, see e.g.:

- https://www.propublica.org/article/inside-turbotax-20-year-fight-to-stop-americans-from-filing-their-taxes-for-free

- https://www.nbcnews.com/business/taxes/turbotax-h-r-block-spend-millions-lobbying-us-keep-doing-n736386

I'd also be interested in learning if there is an equivalent lobby to prevent IRB streamlining.

Matt Lutz's avatar

On the question of whether absences can cause, people's intuitions are probably influenced by which account of causation they're inclined to accept. If you have a kind of "energy transfer" model of causation in mind (if C causes E, then some energy has been transferred from C to E), then absences can't cause: absences of events can't transfer energy. But if you have a kind of counterfactual model of causation in mind (roughly: C causes E if and only if C and E both occurred and, had C not occurred, E would not have occurred), then absences can cause. The absence of the removal of the surgeon's clamp causes death because, had the clamp been removed, the patient would not have died. This is a vexed issue because our concept of causation seems to include both energy-transfer and counterfactual elements, so this marks an inconsistency in our concept of a cause.
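
As a sketch, the two clauses can be put side by side; the counterfactual condition below is the standard Lewis-style dependence formulation, not the comment author's own wording:

```latex
% Counterfactual (Lewis-style) dependence, where \Box\!\!\rightarrow is the
% counterfactual conditional "if ... had been the case, ... would have been":
\[
  C \text{ causes } E
  \;\iff\;
  (C \wedge E) \,\wedge\, \bigl(\neg C \;\Box\!\!\rightarrow\; \neg E\bigr)
\]
% The energy-transfer model adds the further requirement that some conserved
% quantity actually flow from C to E -- a clause an absence can never satisfy,
% which is why the two accounts split over cases like the unremoved clamp.
```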

jumpingjacksplash's avatar

I fully agree - I think it’s that (at least to me) “but-for” seems like an over-broad cop-out due to lack of a better definition, but energy transfer (I still think of it as “billiard balls”) can’t be made rigorous. There’s also a link between cause and fault that a lot of people have, which can’t be squared with energy transfer.

thefance's avatar

An agreement is an implicit contract. It's called a contract because it "constrains" your "actions". Contracts often bestow a right (AKA option, privilege, etc.) on a party. Which is necessarily mirrored by an obligation (AKA duty, expectation, etc.) on a counterparty. E.g. suppose I buy a book from you for $20. Our purchasing agreement dictates that you have a right to $20, which mirrors my obligation to pay you $20. Simultaneously, I have a right to the book, which mirrors your obligation to give me the book.

"Fault" (AKA blame) arises when someone doesn't fulfill their obligation. It's a purely ethical concept, rather than a physical concept. "Causation" is a red-herring imho.

Stalking Goat's avatar

Please don't spread folk etymologies. "Contract" comes via French from the Latin "contractus", which meant "to bring together" and which was itself formed from the Latin prefix "com-", meaning "with, together", and the Latin stem "trahere", meaning "to draw". The "com-" prefix, which becomes "con-" before certain consonants in Latin, is present in other words like "concentrate" and "concert". The stem "trahere" led to words like "abstract" and "attract".

thefance's avatar

you right. i made an assumption.

Eh's avatar

Perhaps of note, absences can be described as presences in certain contexts: https://physics.stackexchange.com/questions/108601/the-momentum-of-a-hole

Isaac King's avatar

Not directly related, but can I suggest that any time you do a "highlights from the comments on X" post, you link to it at the bottom of the original post? That way if I show someone the original post, they have an easy place to go for further reading on the subject.

Anteros's avatar

Excellent suggestion

o11o1's avatar

Agreed, anyone new to ACX would be very unlikely to guess that a comments followup post would exist.

José Vieira's avatar

I endorse this message

Bob Frank's avatar

> There is no way for an average uneducated jury to distinguish between “doctor did their homework and got unlucky” and “doctor did an idiotic thing”.

So why have one? If the accused has a right to a trial by a jury of their peers, why do the lawyers stack the jury with uneducated people who know nothing about medicine? In a case like this, this is literally one of the first things that the lawyers do during the voir dire process: eliminate any and all *actual peers,* leaving only the gullible and uninformed, who are easier for lawyers to manipulate.

Why is that not grounds for said lawyer being summarily ejected from the courtroom?

Carl Pham's avatar

As I understand it, lawyers have no interest in someone gullible and uninformed, because as much as he might be led around by one lawyer, he can equally well be led around by the other, and overall his response to evidence and argument will be highly unpredictable -- because he's gullible and uninformed -- which is a nightmare, since the lawyer can't adapt as the trial unfolds, can't formulate effective challenges or responses to what the other side says, et cetera.

But on the other hand, a jury that contains an expert in the field -- or any one person of unusual intelligence, direct experience of the area in contention, or a strong and domineering personality -- is equally a disaster, because what you'll get in the end is a judgment effectively from 1 person instead of 12, which defeats the purpose of a jury and makes the opinion harder to predict, in part because it's the whimsy of one person with whom no one will seriously argue. Remember "wisdom of the crowds?" Prediction markets? That.

More importantly, someone (or 12 someones) with direct experience or expert wisdom will probably make his decision in no small part because of his preexisting knowledge and views -- and this is the exact opposite of what you want in a fair trial. What you want in a trial is for 12 people to examine with fresh eyes the evidence in front of them -- and *only* that evidence -- and the arguments presented -- without having prejudices and strong pre-existing opinions of their own -- and make a judgment based only on what they see and hear in the courtroom. Expertise and direct experience of the field make that much harder. It completely vitiates all the careful apparatus we have built up that says what kind of evidence is and is not allowed, the rules for presenting it and laying a foundation, et cetera. Basically the judgment starts to be formed substantially from stuff that happened randomly *outside* the courtroom -- which is not the ideal. Who wants to be judged based on random childhood life experiences of the jury? This is also why in a criminal trial they will exclude both victims and perpetrators of the crime at issue, or close friends/relatives of same. They have a lot more direct info, are much better informed, of course -- but their opinion will not derive only from the evidence presented.

The ideal juryman is intelligent, but not much more so than average. Interested, but not so much that he can't judge objectively. Rational, so he can follow a sound argument, but not the kind of person who loves theories and is willing to follow them anywhere they go, regardless of common sense. Generally experienced in life, so he *has* common sense, but not specifically experienced in the issue under the microscope, so he can limit his judgment to the evidence presented in the court. In short, your average competent middle-aged adult, neither a clown nor a squish nor a zealot nor an egghead.

With respect to expertise, it's worth remembering the jury's job is to decide which expert to believe, not to construct their own opinion (which would be just another opinion anyway). They're expected to use all the myriad ways human beings have of deciding whether some other human being is telling the truth, aided by the sharp examining and cross-examining of lawyers -- who presumably *have* taken the time to grok as fully as possible the testimony of the competing experts. It's not clear that subject matter expertise is that helpful, especially since acquisition of it runs the significant risk of prejudgment (supra). You're supposed to decide who's telling the truth based on stuff like their demeanor, internal consistency, response to questions (both friendly and hostile, and both verbal and emotional). Generally we find that human beings are actually pretty good bullshit detectors, especially if someone helps them along by asking a bunch of pointed questions. And if you can't tell? You're expected to dismiss that expert. If you end up without any basis for a good judgment, you don't deliver one. You decline to convict (in a criminal case), or you decline to vote for the plaintiff (in a civil case). That leaves the status quo untouched, and the government (or plaintiffs) need to try again and do a better job.

Obviously it doesn't always work. But it seems to work in general much better than any other system anyone can devise, at least until we develop superintelligent AI bullshit detectors, or mind reading devices.

Martin Blank's avatar

Our jury system sucks. Jurors should be both better compensated, and more highly educated. I cannot imagine ever wanting a jury to decide something actually important to me. Seems like a bad sign.

Carl Pham's avatar

One is reminded of the famous quote about democracy. ("The worst system of government, except for any other that's been tried.") I mean, if you're willing to trust the awesome power and scope of government to the half-assed opinion of 120 million of your countrymen who are not only not paid to vote, they have to go to some mild trouble to do so -- and who aren't compelled to listen to any expert presentation of the issues, still less any close cross-examination of the candidates -- then the wonkiness of the jury system shouldn't be much more of a concern.

Anyway, that's why you should try to settle your disputes directly. Anyone who expects True Justice from a human-derived system is kind of an idiot. Justice is what you can look for from your god of choice at Judgment. The real purpose of the earthly justice system is merely to prevent *injustice* -- results so outrageous that people give up and turn to violence. In this it succeeds well enough, at least better than any other system anyone knows.

Martin Blank's avatar

I think I would prefer expert judges or arbitrators for most things unless I was actually trying to get a result that was against the facts.

As for democracy, meh.

10240's avatar

It's empirically pretty clear that democracy is the least bad, at least among systems of government that currently exist anywhere.

Is it clear that the American jury system is less bad than a system that relies on judges only, or uses juries in far fewer situations, as exists in most countries?

Michael Watts's avatar

> What you want in a trial is for 12 people to examine with fresh eyes the evidence in front of them -- and *only* that evidence

It is true that the American system specifically calls for this, but it's equally true that this predictably causes worse decisions.

John Wittle's avatar

It's not about optimizing decisions though, it's about the game theory and the incentives it creates. Everybody -- the cops and the politicians and the public and the journalists -- knows somebody's a criminal, but our rules don't allow him to be convicted because of procedural issues. I don't think *anyone* could possibly say this is a 'good outcome' in the truthseeking sense. Imagine a rationalist who would only agree that the obvious mafioso was a criminal once a court said so; they would be many decades behind everyone else!

But like... surely you see why this isn't 'worse'? The court is not interested in optimizing for truth, but truth plus lack of the sorts of things that caused the English Civil War and the Glorious Revolution.

Or maybe this is what you were saying and I misread your comment... but I'm not sure a court where Al Capone got arrested on day one, because his criminality was obvious and who cares about his civil rights, is making 'better' decisions...

Michael Watts's avatar

But the incentives we have are much worse. In your example, Al Capone is a famous crime boss and everybody knows it, but he can't be convicted of a crime because of procedural issues. This is true of the American system generally; getting convictions is, theoretically, almost impossible.

So the system has developed a few responses to its inability to convict:

1. Don't try. Instead, force people to confess to crimes, and then throw them in jail without a trial.

2. Impose extremely harsh sentences for extremely minor crimes, because we can convict on those crimes. It is routine under American law to sentence someone convicted on one charge for a crime he was just formally acquitted of on a different charge.

3. Define some crimes that nobody can help committing. You can convict anyone of those!

Then we end up back in the place where, if everyone knows that Al Capone is a criminal, he can be thrown in jail regardless of the formal evidence. But in the process of getting back to where we originally were, we set it up so that everyone else can also be thrown in jail on a whim. That's not an improvement. Back when popular knowledge of criminality was a requirement, Al Capone could be thrown in jail, and most other people couldn't be.

John Wittle's avatar

Do you actually believe the solution to the problem of over-policing, qualified immunity, and the violation of everyone's civil rights on a routine basis is to make it *easier* to arrest and convict people? Our innocent conviction rate is sorta already sky high.

Michael Watts's avatar

Our conviction rate is extremely low. See the development I identify as "1.".

There is a formal system governing how convictions can be obtained, and it says, more or less, that they can't be. This is unacceptable to everyone, so we get the same result extrajudicially.

The consequence of that is that none of the protections that the formal system claims to provide actually exist. Since that system is not in use, it is not providing any benefits.

And it won't be able to, until the threshold for getting a conviction is low enough that using the formal system becomes a possibility.

What relationship do you see between qualified immunity and the ease or difficulty of convicting people in court? I'm having trouble seeing an argument other than "qualified immunity is good for cops, and low conviction thresholds are notionally good for cops, so the right way to address qualified immunity is to raise conviction thresholds". But that wouldn't make any sense.

Carl Pham's avatar

No, it doesn't. The good decision is not one that is galaxy-brain brilliantly insightful, it's the one that causes most people to say "yeah, that seems not unreasonable." The purpose of the justice system is to forestall private vengeance -- to prevent violence being used as a means to settle disputes. That's why the most important criterion for success is that most people find it reasonable -- it represents what The People think is generally right. Reaching a decision that only Einstein could comprehend, and which may be right according to some ineffable deep philosophical theory you need a PhD from Harvard Divinity School to grok, but which most ordinary people find stupid or insane, is a recipe for loss of faith in the system, and return to might makes right.

Bob Frank's avatar

> What you want in a trial is for 12 people to examine with fresh eyes the evidence in front of them -- and *only* that evidence -- and the arguments presented -- without having prejudices

That's not what I want, because another word for "prejudice," when you're not using (*ahem*) prejudicial language to disparage it, is "relevant experience." It actually wasn't that long ago that writers would use the term prejudice as a positive word, not a negative one, referring to the value of experience.

Remember the scene in 12 Angry Men where the jurors look at the knife, and one of them says "the forensics guys say the victim was stabbed like *this,* but I'm familiar with this type of knife and you can't really hold it that way, so that can't be what happened"? That was probably the most persuasive point in establishing the defendant's innocence, and I know if I ever found myself on trial, I'd vastly prefer to have that guy on the jury than to not have him there!

Theodric's avatar

I thought the 12 Angry Men thing was more like “an experienced knife fighter would not hold the knife this way” - turning something that would normally go against the defendant (he’s a kid from the slums who has been involved in violence before) into a mark in favor of his innocence.

Bob Frank's avatar

Possibly. It's been a while since I saw it. But the impression I got was "if you tried to hold it this way, it wouldn't work/would end up cutting you".

Theodric's avatar

I looked it up and it’s kind of both. First, they determined that the defendant could only have caused the wound if he had been holding the knife in an overhand grip and stabbing down (because the victim was substantially taller than the defendant). But the knife was a switchblade, and you would normally hold it underhand if you knew what you were doing, in part because that’s how you have to hold it to open it (held overhand, the knife blade would spring open into your hand).

Carl Pham's avatar

Yes, of course, what we all want is to be judged by someone who has all of our experience, because we have (a naive) faith that this means he will totally understand our point of view. And when your soul is confronted with the god of your choice after it departs this vale o' tears, you can perhaps look forward to that.

More realistically, the second-best choice, and that to which you can actually aspire in this world, is that you have reasonably competent open-minded people without much in the way of preformed opinion, who can take the evidence presented to them and come to some reasonable conclusion. Not some deep Sherlock Holmes take, but just ordinary common sense. My observation is that relatively few criminal (or civil) trials hinge on some incredibly subtle point of evidence that no lawyer has thought to emphasize, explain carefully, buttress with additional evidence, et cetera. My impression is that cases that actually go to trial (and most do not) generally hinge on the credibility of conflicting witnesses. So there really isn't much in the way of useful preexisting knowledge anyway -- what you want is people who are tolerably decent judges of character, and who can spot bullshit reasonably well.

"12 Angry Men" was Hollywood fiction, designed for entertainment, and bears as much resemblance to reality as "Captain Blood" to actual piracy or "Buck Rogers" to the actual business of spaceflight. Of course they're going to have the desired outcome turn on some tiny little almost-overlooked fact and some wonderful coincidence of who was there: this is how you build dramatic tension, and movies that don't have that are boring. But a well-run and successful trial by jury is *supposed* to be boring. The evidence is supposed to be very clearly laid out, the arguments plain as day, accessible to anyone of ordinary experience and intelligence, and all the possibilities thrashed out in testimony. A trial that hinged on some lucky coincidence like a movie would be a disaster, evidence of gross incompetence on the part of the government, perhaps (for bringing a pathetic case), on the part of the defense perhaps (for overlooking an important factor the jury should consider), or perhaps both.

o11o1's avatar

So by this theory, the sort of jury that the lawyers want to see on a medical malpractice trial is a bunch of people with 4-year degrees or better in *fields other than medicine*, or possibly a medical specialty other than the one of interest.

Does that match the nature of actual juries that are selected in practice? (Where would one go to gather that data?)

Thor Odinson's avatar

"peer" in a jury context merely meant that nobility could only be tried by other nobility, not by commoners; having largely done away with nobility as a concept, the term has come to be largely meaningless - the jury is whoever lives close to the court and can't get out of jury duty, which rather adversely selects against anyone smart enough to have anything better to do with their time (and almost anything is a better use of time than serving in a jury, given the conditions and the pay).

Juries, much like democracy, are far more about making sure the populace feels involved in and responsible for outcomes (and thus accepts them) than about improving the odds of choosing the right outcome. Jury nullification is probably their most important function overall, but knowledge of its existence is treated as actively disqualifying.

Random Reader's avatar

I have only witnessed jury selection in a small county. The judge running the process was remarkably good at it.

One of the consequences of living in a small county is that up to half of your jury pool could probably recognize at least one party to the case. This was not considered disqualifying, as long as your relationship was something like, "I taught them in second grade 30 years ago and they were an ordinary student," or "I used to cut his wife's hair." They would happily take a doctor or a professor as a juror. You just had to be a warm body who wasn't obviously biased and wasn't going to go off the rails.

But the most impressive part of the performance is how the judge discouraged people from getting out of jury duty. He was aware that a lot of people in the courtroom knew each other vaguely. And so he very deliberately built up an awareness that this was a shared social duty and we all had to take our chances. I saw exactly one person escape jury duty by inventing a bullshit rationale. The judge let him go, but not before everyone in the room had concluded that guy was a weasel who'd happily defect on the social contract. And remember: half the room _knew_ each other. 5 of the 7 jurors previously knew at least one party to the case (but only in a passing fashion). And so the judge understood that there was a reputational cost for looking like an ass in front of people you'd be seeing for the next 20 years. So he used that shamelessly to keep his jury pool representative of the community. The only other guy who escaped jury duty was the town snowplow driver, who made the excellent point that if he didn't drive the snowplow, _nobody_ would make it to the courthouse.

If I ever have the misfortune to wind up in court, I'd be happy to see that judge presiding. He was obviously dedicated to justice, and he displayed that same kind of hyper-competence that I'd be impressed to discover in a startup CEO or a professor.

None of the Above's avatar

Even if you just selected a random jury, you would be unlikely to get a doctor or really anyone with much medical knowledge. And an all-doctor jury trying medical malpractice cases sounds like it would introduce a different but maybe equally bad bias, sort of like an all-cop jury trying police brutality cases.

Ian Argent's avatar

"Amendment VI

In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the state and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the assistance of counsel for his defense."

Nowhere does the word or concept of "peer" exist in the text of the US Constitutional Guarantee of a jury trial. The key concept is "unbiased".

Ran's avatar

Regarding whether IRBs "cause" whatever deaths the studies would have ended up preventing but didn't . . .

If a skilled surgeon retires, and statistically more patients die because there's now one fewer surgeon and the average skill level of the remaining surgeons is a bit lower, I don't think most people would say that the surgeon "caused" those deaths by retiring — even if the surgeon feels a bit guilty about doing so. Furthermore, if the surgeon is retiring at the encouragement of his/her spouse so they can travel more or something, I don't think most people would feel comfortable saying that the spouse "caused" those deaths via this encouragement.

On the other hand, you've given some clear examples where someone certainly does "cause" deaths by preventing something that would have prevented those deaths.

I think the difference lies in *responsibility*. The surgeon is entitled to retire, and is not responsible for patients (s)he doesn't treat. The spouse, likewise. But someone who barges into an operating room and restrains people is not entitled to do so, and has a responsibility not to do so, so we have no qualms saying they "cause" the death.

Awkwardly, this means that whether we say that the IRB system "causes" deaths is not a fact that goes *into* deciding whether it's a good system, but rather, a terminological choice that *depends* in part on whether we think it's a good system. Someone who feels that IRBs are responsible only for protecting study participants, and are entitled to prevent or delay studies to that end, would reasonably feel uncomfortable saying that they "cause" whatever deaths would have otherwise been prevented.

(I suppose that a rationalist would reply that we just need to accept this discomfort and say that yes, the surgeon and spouse and IRBs all cause deaths?)

Dweomite's avatar

One system I've seen is that the "null action" is privileged; that is, by default, you are always allowed to do nothing. You can bargain this right away--e.g. if I agree to wash your car for $10, I can't take your $10 and then do nothing, because that breaches the contract. But you *start out* with the right to do nothing, and if someone else is unhappy with you doing nothing, then they need to negotiate with you.

So in your retirement example, the surgeon is allowed to retire (even if that costs lives), unless he's already signed a contract saying he'll work for at least N years or something like that. If someone would prefer that he keep working and save more lives, they can offer to pay him enough money to change his mind.

I don't think this rule excuses the IRBs. For one, I don't think denying a study can reasonably count as "doing nothing". For another, the people on those IRBs have presumably agreed to do the job, or else they wouldn't have been given the power.

Ran's avatar

The "null action" explanation is appealing, and may be part of the answer, but I'm not sure it covers everything:

- What about the spouse encouraging the surgeon to retire? Or heck, an online commenter who reassures the surgeon that it's OK for him/her to retire because the "null action" is privileged? I don't think we'd say that they "cause" these deaths, either, even though they're taking real actions, not just doing nothing.

- And what if retiring does involve doing something, e.g. referring existing patients to other surgeons (even if those surgeons have a bit less experience than the one retiring)? Does that suddenly mean the doctor is "causing" deaths by retiring? I don't think it does, though maybe you'd disagree?

Dweomite's avatar

Starting with the second example:

I think I want to say that anything that would have happened if you'd done your null action is not "caused" by you, even if you didn't actually do your null action.

For example, suppose that if you do nothing, a meteor will hit earth and kill all life. But, instead of doing nothing, you eat lunch. This does not mean that you "caused" the meteor, because that was going to happen anyway. (Even if you could've stopped it, I still don't think this counts as "causing" it.)
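
A compact way to state that rule, as a sketch (the notation is mine, not the comment's): write E(x) for "outcome E obtains if the agent does x", with a the act actually performed and ∅ the null action.

```latex
% The privileged-null-action rule: whatever would have happened under the
% null action is not yours.
\[
  E(\varnothing) \;\Longrightarrow\; \text{the agent does not cause } E
\]
% i.e. the agent counts as a cause of E only when E actually obtains under the
% act performed, E(a), but would not have obtained under the null action.
% Meteor case: E(\varnothing) holds (the impact happens even if you do
% nothing), so eating lunch instead does not make you a cause of it.
```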

.

Regarding the first example: If we're really just talking about what the word "cause" means, and not about *culpability*, then I think I actually do want to say that there's causation there. Certainly, if the surgeon wasn't planning on retiring, and you talked him into it, then I at least want to say that you caused the surgeon to retire. And if I saw someone write "by persuading this surgeon to retire, you indirectly caused John Doe to die" I wouldn't feel that they were abusing the word "cause".

In terms of culpability, if you want me to describe the rule that I think most people actually use, it's that "talking" is a special category of action that is exempt from most rules. I'm not sure I want to defend that as a fundamental principle, though.

My current personal version of the golden rule is "honor the deals you would've made (if idealized versions of everyone had had the opportunity to make them)". I think persuading the surgeon to retire is *perhaps* justifiable by an argument along the lines of "if they'd had an opportunity to negotiate about it in advance, most people would've agreed to a deal that everyone should be mutually allowed to give people truthful information and honestly express their own preferences, even in situations where that influences some third party to take an action that disadvantages the deal-maker; therefore, you should act as if most people had actually made that deal."

But if you imagine a really extreme situation--say, replace the surgeon with Superman--then it starts to seem pretty dubious that people would've made that deal. I think it's fairly plausible that most people would've instead made a deal that anyone should sacrifice their personal happiness to keep Superman on the job, if it ever comes up.

10240's avatar

Even without considering talking in particular special, we should distinguish between coercing someone into not saving a life (which is what the IRBs may effectively do), and influencing someone to choose not to save a life. The former is worse than the latter.

Dweomite's avatar

The categories of "talking" and "influencing" seem pretty similar to me. I suspect neither word *precisely* captures the relevant domain.

Ali Afroz's avatar

Forgive me if I misunderstood how IRBs work, but I thought they just communicate true information about whether they think a study meets their standards, and based on that the government and other donors decide whether to fund your research with their funds, and journals decide whether they should let you use their private property to publish your findings. Not that I support IRBs, but I think if IRBs work like how I just described, then there is no coercion or extortion involved, just people conditioning voluntary assistance on IRB judgment. So if you only object to coercion, IRBs should be in the clear. Of course, most people also care about the consequences, not just the presence or absence of coercion, so they can still condemn IRBs without issue.

10240's avatar

Yes, I was imprecise; the IRBs aren't the ones to blame - the system that puts the IRB requirements in place and requires the IRBs to operate the way they do is.

The IRBs aren't engaging in coercion; the FDA is, when it prohibits selling drugs whose safety wasn't proved by IRB-approved trials.

The part about the government funding incentive is murkier; it's technically not a requirement, but as far as I understand how the system is set up, for all intents and purposes it is one. And remember that the government gets its funding by coercion; the tax money the government spends on medical research is money some of which private organizations could have spent on non-IRB-encumbered medical research.

Ali Afroz's avatar

I don’t think most people consider talking a special action for which you can’t be held culpable. After all, incitement to offence and criminal solicitation are both illegal and nobody seems to have a problem with this, not even most free speech absolutists.

Regarding your proposed rule, it depends a lot on what the starting point before idealized bargaining is. If I'm behind the veil of ignorance, then I most certainly wouldn't agree to your rule, because I'm much likelier to be one of the numerous people who die if the doctor retires than I am to be the doctor or his friends or family. And if it's our real-world positions, then if I were Superman's wife, I would most certainly reject your rule, because I care about my husband's happiness a lot more than I do about a lot of people, most of whom I don't even know. There is also the question of whether you should keep your end of a deal if the other parties aren't keeping up their end of the bargain.

10240's avatar

Even if you are behind a veil of ignorance, few people will become doctors, and do a good job, if that means society will prohibit them from retiring. That'll matter more than having a few good doctors work for a few more years.

Ali Afroz's avatar

Were I a person in the original position, I would agree to continue to work under conditions where it would not be in my self-interest to do so, because the potential expected benefit if I were a patient outweighs the expected cost of possibly being a doctor who has to work under those conditions. This is no different from Parfit's hitchhiker agreeing to pay the driver even when it would no longer be in his self-interest to pay by the time he actually has to.

Dweomite's avatar

I said talking seems to be exempt from "most" rules, not "all" rules.

Thor Odinson's avatar

I like this definition for "cause", though I agree with your later comment that "cause" and "culpability" do diverge in many situations.

I fully agree that actively preventing someone else from acting is an action, and thus IRBs are not in any way protected by the action/inaction distinction.

Ali Afroz's avatar

This system has some pretty crazy results. For example, if John, a young child, accidentally gets locked in a room through no fault of his parents, and the parents do nothing and ignore his cries for help, and in consequence John starves to death, then John's parents did not cause his death. After all, most children haven't negotiated for their parents' help, at least not when they are young, and John's parents might have agreed, when they decided to have a child, that they would consider all moral obligation to the child discharged by the huge favor of bringing him into existence.

Or alternatively, if you see Bob pouring poison into food, and he tells you that he intends to use it to murder Alice, a total stranger to you, and you have ample opportunity to warn Alice without any cost to yourself, and you don't warn her, resulting in her death, then most people would agree that you caused her death.

A much better standard would take how people normally act, or how they are normally expected to act, as the null action. So since most parents would have helped their child, and most people would have helped Alice, then those deaths were caused in the relevant sense.

Mind you, I think talking about this as a dispute over the definition of "cause" is a mistake. We aren't interested in the definition of a word whose meaning changes with context, or with the metaphysics of causation here, but rather with when someone should be blamed for the bad consequences of their actions. Consequentialism has an answer: blame them when blaming them has good consequences great enough to outweigh the cost of blaming them. Deontological theories, being a number of very different theories that are similar only insofar as they distill morality to a set of rules other than "cause good consequences", unsurprisingly have many different answers.

10240's avatar

*The starving child*: Bringing a child into existence is not, in itself, a huge favor if you then let him starve. To the contrary, bringing a child into existence whom you then neglect is a disfavor; that's why, if you bring him into existence, you have an obligation to care for him.

IMO a mistake that's common when people discuss utilitarianism is assuming that never having existed is equivalent to death. It isn't; I think that's the intuition of most of us: never having existed is neutral, but death is bad. That's why giving birth and then letting the child starve is bad (beyond the suffering during starvation). It's just non-obvious to translate this intuition into utilitarian language.

----

*The poisoning*: This is a classic case of inaction; you don't cause Alice's death if you don't warn her - Bob, and only Bob, does. And IMO if you don't warn her, that isn't bad; it's warning her that's good. Though not warning her is unvirtuous, and many people would say that it's wrong not to do something that has minimal cost to you but a huge benefit to her.

Dweomite's avatar

Our legal system disagrees with you about the poisoning example, and I think most people's moral intuitions would disagree with you too. We don't normally impose a duty to police others. Preventing a murder is supererogatory.

Similarly, if you're walking in the woods and you see a stranger drowning, most people would say that saving the stranger is good but not obligatory. I do think most people would give a different answer if it's your own child, but that implies that it's something special about being a parent, not a "what most people would do" rule. (I've changed your locked-room example because seeing a stranger in a locked room is made complicated by the possibility that they could be a criminal being detained lawfully.)

Making the rule be how people typically act, or how they're expected to act, implies that an action becomes OK as long as most people do it. This seems clearly inappropriate.

I think all of the above applies whether we're talking about causation or culpability. I've phrased it in terms of culpability because I think the analogous causation arguments are even stronger and would get even more widespread agreement.

10240's avatar

There are at least three things to be distinguished here:

(1) Directly causing a death.

(2) Not saving a life.

(3) Preventing someone else (who would be willing to do it) from saving a life.

IMO (3) is somewhere between (2) and (1) in badness; in some cases, it's effectively equivalent to (1).

Usually, the utilitarianism discussion focuses on (1) vs (2): in at least some situations, utilitarians tend to consider (1) and (2) equivalent, while others consider (1) much worse. But in the IRB discussion we're really talking about (3).

---

A further consideration is consent. IMO in many situations, killing someone is much worse than not saving a life, but not if the person being killed consented to it (or, typically, to the risk of it). If a study saves more lives than it kills in expectation, it's right to do it as long as the participants know about the risk and consent to it. So we have a fourth thing to consider:

(4) Killing someone who consents to it. Or exposing n people to a 1/n risk of death, which they consent to.

Sandro's avatar

The distinction everyone is grasping at is whether causation necessarily implies blame.

All of the agents in the scenarios described do, strictly speaking, cause the outcomes, because counterfactually the outcomes would not have happened had the agents made different choices.

However, while causation is necessary for blameworthiness, it is not sufficient. The discussion around the "null action" is an example of adding considerations beyond causation that factor into blame.

I personally do think the IRB carries some blame for deaths that are not averted, because the explicit duty of the medical profession is to save lives and improve health without doing harm. If the IRB is obstructing advancements in health without actually preventing harm by doing so, then that contradicts the whole purpose of that profession, and so the blame follows from the failure of those duties.

Edit: added missing word to clarify meaning

Ran's avatar

Well, right -- what I'm saying is that I don't think most people will feel comfortable saying that someone "caused deaths" unless they're comfortable blaming that someone for those deaths.

In the case of IRBs, I'm fine with blaming them (or at least the system they're a part of) for some of those deaths; but I can understand someone else feeling differently, and this manifesting as a disagreement over Scott's use of the word "cause".

Sandro's avatar

> but I can understand someone else feeling differently, and this manifesting as a disagreement over Scott's use of the word "cause".

Sure, but my point is that this disagreement over "cause" is itself leading to confusion, because it conflates cause and blame. It would be better to split the causal claim from the moral blame assignment, because the latter is the issue we should be talking about.

C is one of D's causes if NOT(C) implies NOT(D). Obviously D thus has multiple causes, but only some of them entail blame. There are multiple ways to assign blame, one of them being whether you've sworn moral duties to certain principles, like a doctor's duty of care, or whether you've opted to become a parent which carries moral duties to your children.
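
A toy sketch of that but-for test, with a made-up outcome function standing in for the surgery example (the events and the function are illustrative assumptions, not anything from the post):

```python
# But-for test: an event variable counts as a cause of the outcome if flipping
# it, holding everything else fixed, makes the outcome go away. The outcome
# function and the events below are illustrative assumptions.
def patient_dies(events):
    # dies only if the clamp is left in AND nobody notices in time
    return (not events["clamp_removed"]) and (not events["nurse_noticed"])

actual = {"clamp_removed": False, "nurse_noticed": False}

but_for_causes = []
for name in actual:
    counterfactual = dict(actual, **{name: not actual[name]})
    if patient_dies(actual) and not patient_dies(counterfactual):
        but_for_causes.append(name)

print(but_for_causes)  # ['clamp_removed', 'nurse_noticed'] -- the death has
                       # several but-for causes; blame attaches to only some
```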

BRetty's avatar

I just want to say that this post, and all the comments and commenters gathered here, is some of the highest-quality and most informationally rich discussion I've read. Thanks to Scott and all the subject experts for their stories.

The greatest intellectual compliment I've ever read was at the 2000s group blog "Baseball Toaster"; it applies here:

"Thanks for once again giving me something to make something of myself with. I've learned so much in such a short time, yet I feel I've been here forever. If it means anything, I think you're a hero. Period."

BRetty

PS -- Ken Arneson's final Baseball Toaster post, "And So to Fade Away", remains one of the greatest pieces of writing, and sportswriting, I have ever read. A beautiful essay that is also a brilliant use of the web/blog/internet form and format. Canonical. (Comment #50 by user ChillWyll is what I referenced above.)

https://catfishstew.baseballtoaster.com/archives/1182040.html

sclmlw's avatar

"... you need some standards for when a study is safe, so that if people sue you, you can say “I was following the standards and everyone else agreed with me that this was good” and then the lawsuit will fail. Right now those standards are “complied with an IRB”."

This is not technically true. You can still be sued, despite a clean IRB review. Indeed, in large, multicenter trials sponsored by pharma companies, most centers require that the company indemnify the sites for anything study-related in addition to the standard IRB review. Pharma companies turn around and carry liability insurance in case someone files a claim against them for study-related actions taken by sites.

I think this is probably a good incentive structure (though also susceptible to being gamed by bad-faith actors, of course). I prefer a system where people are reasonably cautious because there's a cost to not being cautious, as opposed to an expansive set of rules aimed at keeping people cautious, but that actually just creates a bunch of make-work for an army of compliance officers who aren't actively engaged in promoting the scientific/medical/safety value of the study so much as just checking off boxes. Sometimes you still end up with people who are intellectually engaged with the process, but just as often you run into people so far removed from the purpose of the study that they advocate doing things that are actively harmful as justification for their job. You have to be careful against these subtle and unintentional attacks against the integrity of a good scientific study.

I read a few years back that clinical research was the second most heavily regulated industry in the US. It shows. There's a point where one person working alone can do a study because it's not very complicated or because we're playing fast and loose with safety standards. I think bringing stringency standards up to the point where a team of people are working together to perfect the things people put into their bodies is a good thing. But at some point the team of people slogging through boring checklists grows so large that instead of capitalizing on the 'more minds are better' ethos we end up with drugs 'designed by committee', which is probably even worse than the lone researcher trying to do it all on their own.

sclmlw's avatar

In partial defense of JDK's argument on the difference between directly vs. indirectly causing a death:

My understanding is that the AMA has a strict standard on this. The difference between active euthanasia and 'passive' pulling of the plug lies in the question of whether the physician is the direct cause of the underlying condition. The term of art is that the 'cessation of extraordinary measures' to keep someone alive is not considered euthanasia. But-for the existence of the hospital, ventilator, etc. the patient would already be dead, so the system allows the people within it to decide when withholding that care is acceptable (the family is grieving, the patient had a DNR, etc.). If the patient would still be alive but for the existence of the medical intervention, that's euthanasia and is on a different side of a line, at least in certain circles of medical ethics.

TheGodfatherBaritone's avatar

> we can’t cut the IRB out entirely without some kind of profound reform of the very concept of lawsuits, and I don’t know what that reform would look like.

The loser pays the legal bills. There’s a Latin phrase for this so you know it’s legit.

Alex Zavoluk's avatar

I think I mentioned this in the original thread, but that's how almost every other developed country does it: https://en.wikipedia.org/wiki/English_rule_(attorney%27s_fees)

Alex Zavoluk's avatar

> But playing devil’s advocate: at a startup, code changes usually have high upside (you need to build the product fast in order to survive) and low downside (if your site used by 100 people goes down for a few minutes it doesn’t matter very much). At Facebook, code changes have low upside (Facebook will remain a hegemon regardless of whether it introduces a new feature today vs. in a year) and high downside (if you accidentally crash Facebook for an hour, it’s international news). Also, if the design of one button on the Facebook website causes people to use it 0.1% more, that’s probably still a difference of millions of hits - so it’s worth having strong opinions on button design.

This is true, but there's still generally no reason to have such an extensive process for minor changes. If you did your experiment correctly from the get-go, there should be very little uncertainty about whether the button causes 0.1% more usage (at least for a company Facebook's size). There should be one or two people responsible for making the decision, and only in rare cases do you involve multiple entirely different teams. Certainly not over something trivial like button design. Regardless of scale, making 100 decisions at 80% correct is better than making 20 decisions at 95% correct.
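
For a sense of scale, here's a minimal sketch of the smallest lift an A/B test can reliably detect at different traffic levels (the 5% baseline click-through rate is an assumed, illustrative figure):

```python
# Minimum detectable relative lift in a click-through rate at 80% power and
# alpha = 0.05, via the usual normal approximation. Baseline CTR is assumed.
from math import sqrt
from scipy.stats import norm

p = 0.05                              # assumed baseline click-through rate
z = norm.ppf(0.975) + norm.ppf(0.80)  # ~2.80

for n_per_arm in (1_000_000, 100_000_000, 300_000_000):
    mde = z * sqrt(2 * p * (1 - p) / n_per_arm)  # minimum detectable difference
    print(f"{n_per_arm:>11,} users per arm -> ~{mde / p:.2%} relative lift")
# ~1.73% at 1M, ~0.17% at 100M, ~0.10% at 300M: resolving a 0.1% button effect
# really does take Facebook-scale traffic -- which Facebook has.
```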

This reminds me, though, since I work in this field but forgot to post on the original thread: Is it surprising that there's absolutely no regulation on the experiments that tech companies do? Given the level of paranoia applied by IRBs, I'm sure they could come up with some reason why it risks harm to show one person a slightly different shade of blue than another. (I'm not saying there should be--I'm just asking if it's surprising that there isn't).

Expand full comment
Thor Odinson's avatar

As Scott pointed out in this post, all the regulation of medical research is backed by "else you don't get government funding". Since Facebook and Google and the rest aren't relying on government grants to keep their lights on, the government doesn't have that lever of power over them.

(p.s. I have seen Facebook get flak in the media for utterly routine A/B testing of stuff, so there are definitely elements of the public with the same sort of safetyism "all experimenting is super dangerous" brainworms about social media layouts)

Expand full comment
Alex Zavoluk's avatar

> (p.s. I have seen Facebook get flak in the media for utterly routine A/B testing of stuff, so there are definitely elements of the public with the same sort of safetyism "all experimenting is super dangerous" brainworms about social media layouts)

This is what I was thinking about. "Experimenting on humans! Like the Nazis!" seems like an easy way to rile up outrage and get some Senators looking to make a name for themselves on board.

Expand full comment
OffaSeptimus's avatar

" There is no way for an average uneducated jury to distinguish between “doctor did their homework and got unlucky” and “doctor did an idiotic thing”. Either way, the prosecution can find “expert witnesses” to testify, for money, that you were an idiot and should have known the study would fail."

If expert witnesses don't provide valuable expertise but just say what they are paid to say, isn't that a pretty fundamental problem in the legal system, and one that is not limited to medical cases? Presumably it is also true for forensics and fraud cases.

Why aren't they neutral and paid for by the court? Do courts genuinely not care about objectivity?

I have seen lawyers (in the UK) joking about it, are expert witnesses a total farce?

Expand full comment
Thor Odinson's avatar

How do you define "Neutral"? Employed by the government? In criminal trials, the government is one of the sides in the case, the prosecution.

I also note that the status quo of "find a witness to say what you want said" doesn't require deceptive witnesses. In anything where the answer isn't obvious, there will exist people honestly believing each of the plausible hypotheses; the attorney simply hires 3 or 4 different experts to make reports, then only calls as a witness the one that honestly believes the interpretation suited to his case.

Expand full comment
10240's avatar

> In criminal trials, the government is one of the sides in the case, the prosecution.

One arm of the government (the prosecution) is on one side, another arm (the judiciary) is neutral. In some cases, a third arm (the public defender) is on the other side. Ideally, the government "really" is neutral, but it has people act as if they are on one side or the other, because that's a better way to make sure that all relevant evidence is collected than if the same person was tasked with collecting the evidence for both sides. Of course it's debatable how well this works out in practice.

Expand full comment
OffaSeptimus's avatar

An expert witness has no value if it is easy to find an expert witness who will say the opposite.

I am debating their usefulness not their sincerity.

Expand full comment
cromulent's avatar

After watching some livestreamed trials, my opinion of expert witnesses has absolutely cratered. The one that had the biggest effect was the Johnny Depp trial; even if you don't care about the trial itself, you can watch the expert-witness portions to get an idea of what the system is like. Specifically, there were three psychiatrists/psychologists called, with varying degrees of badness, though there were a lot of expert witnesses in general. When I saw the first I was disappointed; by the time I saw the others, the first looked amazing in comparison for how much they limited the shadiness.

That's not to say all the expert witnesses were horrible, but the system is such that you can get away with a lot, and it's up to the individual experts how honest to be. The point is basically that both sides hire an expert to argue opposite positions, the jury decides who made the better argument, and the experts aren't expected to be unbiased.

Expand full comment
SelfDiagnosedWiseMan's avatar

> the prevailing legal-moral normative system (PLMNS)

This acronym is a tragically missed opportunity. 'Prevailing' can be inferred, just call it 'the' system, or 'our' system...

Our LeMoNS.

Expand full comment
10240's avatar

> In order to remove this risk, you need some standards for when a study is safe, so that if people sue you, you can say “I was following the standards and everyone else agreed with me that this was good” and then the lawsuit will fail. Right now those standards are “complied with an IRB”. This book is arguing that the IRB’s standards are too high, but we can’t cut the IRB out entirely without some kind of profound reform of the very concept of lawsuits, and I don’t know what that reform would look like.

A reform direction: any tort liability should be waivable in contract. (Ideally criminal liability too, but that's too far outside the Overton window.)

Expand full comment
JamesLeng's avatar

The victim can't excuse someone from criminal liability because that part isn't an offense against them personally, it's an offense against the larger state.

Expand full comment
10240's avatar

That's the legal fiction the current system assumes (for some crimes), and that's the notion I propose abolishing.

To clarify, the victim probably shouldn't be able to waive criminal liability *after the fact*. The reason for that is that punishment often needs to be severe in order to deter crime (taking into account that many kinds of criminals only have a small chance of getting caught). If the victim can waive prosecution after the criminal is caught, it may be in the interest of the victim to do so in exchange for a small payment; this could allow criminals to get away with a relatively minor cost, reducing deterrence. What I'm proposing is that the victim should be able to consent to a crime *beforehand*, in which case there should be no liability.

Expand full comment
Carl Pham's avatar

We don't do that, because we consider some actions an offence against all of us, not just the particular victim. We think they coarsen society and drag us all down, like (but worse than) bad manners or public speech that is racist or obscene.

That is, for example, why you can't sell yourself into slavery, or commission someone to rape or murder you for the masochistic pleasure, the insurance money, or to revenge yourself on your mother. We decide we don't want to live in a world in which slavery, rape, or murder is an accepted behavior, even if all parties consent. So it's an offence against the state, and the state prosecutes it whether the alleged victim chooses or no.

Expand full comment
10240's avatar

Again, this is what I disagree with. (With the exception of insurance fraud, which would breach a contract.)

Expand full comment
JamesLeng's avatar

Any policy that amounts to "abolish the Overton Window itself" isn't going to fall inside it any time soon.

Expand full comment
Davis Yoshida's avatar

> I’m surprised by this, both because I thought tech had a reputation for “move fast and break things”, and because I would have expected the market to train this out of companies that don’t have to fear lawsuits.

I'm not sure if this has penetrated outside of tech but the general understanding is that one should work at Google if one wants to never get anything done. It's what led to the conspiracy theory that Google just hires people to keep the other companies from having them.

Expand full comment
Michael Watts's avatar

> Meadow Freckle writes:

>> Why can’t you sue an IRB for killing people for blocking research? You can clearly at least sometimes activist them into changing course. But their behavior seems sue-worthy in these examples, and completely irresponsible. We have negligence laws in other areas. Is there an airtight legal case that they’re beyond suing, or is it just that nobody’s tried?

> I don’t know, and this seems like an important question.

Well, at the very least you'd have major problems with standing. They haven't killed _you_. You can't sue over harm to a known third party, and you certainly can't sue over harm to a hypothetical third party.

Say you (you in particular, whoever you are) hire someone to paint your house. And he does. There's no shirking or malingering or anything. But he's terrible at his job, and it takes him a long time, and you end up paying about triple the ordinary cost of getting your house painted for a paint job that you're going to have to hire someone else to redo.

Now, the question isn't "can you sue him for doing a bad job?"

The question here is "can I, Michael Watts, someone who doesn't know either of the two of you and lives in another country, sue that guy for doing a bad job painting your house?"

And it's not really hard to understand why that's difficult.

Expand full comment
Theodric's avatar

Wouldn’t the families of people who died because a treatment was delayed potentially have standing?

Expand full comment
Michael Watts's avatar

> NIMBYs, whatever else you think about them, are resisting things they think would hurt them personally, whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

If we run with this, we'd analogize the IRBs to environmentalists instead of NIMBYs.

Expand full comment
murphy's avatar

"The "solution" that google uses is to first define (by business commitee) a non-zero number of "how much should this crash per unit time". This is common, for contracts, but what is less common is that the people responsible for defending this number are expected to defend it from both sides, not just preventing crashing too often but also preventing crashing not often enough. If there are too few crashes, then that means there is too much safety and effort should be put on faster change/releases, and that way the incentives are better."

There's a more extreme version of this.

Netflix runs a ChaosMonkey.

https://github.com/Netflix/chaosmonkey

The job of the ChaosMonkey is to crash services, kill processes and generally break stuff. Intentionally.

So now the infrastructure people and the developers are faced with a situation where they can no longer hope for everything to run perfectly all of the time; it's not a question of whether servers will crash but *when*, because the ChaosMonkey is doing its job.

So they need to build everything under the assumption that any server or service can fail at any time and everything needs to be built to survive the chaos.

It's a design philosophy that apparently works surprisingly well, possibly because it deprives the infrastructure people of the hope that everything will run smoothly all the time.
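
(To make the idea concrete, here is a toy sketch of the principle rather than Netflix's actual Chaos Monkey, which is its own open-source service that terminates cloud instances; everything below is a made-up, minimal illustration.)

```python
# Toy "chaos monkey": kill a random local worker at random intervals so the
# rest of the system can never assume any single worker stays alive.
import random
import time
from multiprocessing import Process


def worker(worker_id: int) -> None:
    # Stand-in for a real service; in practice this would handle requests.
    while True:
        time.sleep(1)


def main() -> None:
    # Start a small pool of workers.
    pool = {i: Process(target=worker, args=(i,), daemon=True) for i in range(5)}
    for p in pool.values():
        p.start()

    while True:
        time.sleep(random.uniform(5, 15))   # strike at unpredictable times
        victim = random.choice(list(pool))
        pool[victim].terminate()            # the "monkey" kills a worker
        print(f"chaos: killed worker {victim}")
        # A supervisor restarts it; the design goal is that nothing
        # user-visible ever depends on one worker surviving.
        pool[victim] = Process(target=worker, args=(victim,), daemon=True)
        pool[victim].start()


if __name__ == "__main__":
    main()
```

The real system operates on cloud instances rather than local processes, but the design pressure is the same: once failure is guaranteed to happen, building for graceful degradation stops being optional.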

Expand full comment
Anya L's avatar

This doesn't solve the same problem; Google has similar things, and it happens automatically at sufficient scale. Infrastructure people are already sufficiently paranoid about things breaking; the problem is knowing when you've done *enough* risk mitigation and should instead remove queues and slow safety checks.

Expand full comment
Roxolan's avatar

> Under utilitarianism, you'd probably still want some sort of oversight to eliminate pointless yet harmful experiments or reduce unnecessary harm, but it's not clear why subjects' consent would ever be a relevant concern; you might not want to tell them about the worst risks of a study, as this would upset them.

This is similar to the "would you murder a patient so that their organs can be used to save the life of five unrelated others" thought experiment.

The standard utilitarian reply is that if this was commonly done, people would stop going to the doctor because they don't want to risk being murdered, and that would overall result in lower utility.

(Maybe doctors can do it *in secret* as a conspiracy - but this has the same expected value, just higher variance. You may fool some patients, but the conspiracy has a chance of being uncovered, in which event people will stop trusting *any group* that might want to kill them for the greater good even if they don't seem to be doing it because they too might have a conspiracy.)

Expand full comment
10240's avatar

"I don’t know if some branch of the government has enough power to mandate everyone use IRBs regardless of their funding source."

Congress does in the US, as do most other countries' legislatures. The Commerce Clause gives the US Congress the power to regulate interstate and international commerce. In practice, since Wickard v. Filburn, it can regulate intrastate economic activity as well.

No major drug company, or AI company, operates solely within the borders of one US state, so it can be regulated even with a relatively reasonable interpretation of the Commerce Clause.

After all, where does the FDA get its power to ban selling unapproved drugs (without any federal funding conditions or anything of the sort) in the first place?

Expand full comment
Jacob's avatar

I do wonder if the gene therapy researcher Whitney mentions is underestimating what they could do without general regulatory insanity. I'm not sure whether you can attribute this to an IRB precisely, but a regulatory change is maybe the reason we don't have artificial hearts: https://www.newyorker.com/magazine/2021/03/08/how-to-build-an-artificial-heart

The key, enraging quote being: "In the early days of artificial-heart research, a team could implant a device in a dying person on an emergency basis—as a last-ditch effort to save his life—and see how it functioned. Ethicists were uneasy, but progress was swift. Today, such experimentation is prohibited: a heart’s design must be locked in place and approved before a clinical trial can begin; the trial may take years, and, if it reveals that the heart isn’t good enough, the process must start again."

I wonder if there could be much better/easier gene therapy research too but it just doesn't even occur to people to ask the question. To be fair it would be harder with gene therapy since the results are generally much noisier and the endpoints more complicated than "is the patient dead" so maybe not.

Expand full comment
Jacob Steel's avatar

Is that about changes to regulatory attitudes, or is it about the existing artificial hearts getting better?

I think that the evidence you have to give that your experimental artificial heart is worth trying /should/ have to be a lot better if the alternative is "modern quite-good-already artificial heart" rather than "recently-invented crap artificial heart" or just "no artificial heart, they die".

Expand full comment
Jacob's avatar

Per the article, existing artificial hearts are still pretty bad and mostly exist as bridges to a transplant for people at death's door; there haven't been any major improvements for decades, and at least one serious startup gave up partly over regulatory BS. And the upside is hundreds of thousands of people saved per year if you could hit the moonshot of making artificial hearts as routine and reliable as, say, a hip replacement, but alas, making progress toward that goal would be Unethical.

Expand full comment
Trust Vectoring's avatar

> whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

They literally benefit from having rules to enforce because they literally get paid for it.

I thought that maybe you omitted this angle because it's too obvious, but now I guess it's not: if you create an organization that can invent pointless rules for other people to follow and each new pointless rule guarantees continued employment for the employees of said organization and increases the number of required subordinates and therefore the salaries and status of senior employees, while repealing a pointless rule threatens downsizing and becoming unemployed, then this organization will keep inventing a lot of pointless rules.

It's not a weird accident, or a collective PTSD from the Nazi research practices, or that one Gnostic guy who formulated the original ethos.

This is what the incentives incentivize and this is what you'll keep getting, guaranteed, as long as the incentives remain the same. No action that doesn't meaningfully change the incentives can possibly change the outcomes. You can't socially pressure people into making their own jobs obsolete. No amount of articles calling IRBs out will result in IRBs voluntarily removing some of the pointless rules that guarantee their employment.

Expand full comment
Shaked Koplewitz's avatar

> I appreciate this correction - NIMBYs, whatever else you think about them, are resisting things they think would hurt them personally, whereas IRBs are often pushing rules that nobody (including themselves) wants or benefits from them.

I think the analogy to the IRB here isn't the NIMBYs themselves, it's the people at city hall who run "public impact meetings" and insist on a 3-year, $300k permitting process to open an ice cream shop. The analog of NIMBYs would be the few people who do get hurt in the studies - not zero, but the problem is the people who design the process, not the NIMBYs/subjects themselves.

Expand full comment
John McDonnell's avatar

Yes and in particular many cities have "historical preservation boards" that have to come up with new protections to enact to justify their continued existence.

Expand full comment
Theodric's avatar

“ you would expect competition to train people out of this, but also another situation where a hegemon might feel tempted to rest on its laurels!”

So the places you’d think you’d get competition in defense contracting are: on the government side, an actual big war that forces you out of peacetime priorities, or on the contractor side, a push for efficiency to either earn more profit or win more contracts.

The problem with the former is that neither the USA nor anyone that could plausibly threaten us existentially has had one of those big wars in well over half a century (this is of course a good thing generally, but a bad thing for encouraging utilitarian cost benefit assessments in defense developments). And our competitors have most of the same problems plus often worse actual corruption.

The problem with expecting the contractors to be competitive is that most of these inefficient, excessively risk averse, micromanaging policies are demanded by the customer (government agencies). They are literally written into the contracts, because the bosses of the agencies get yelled at by Congress whenever a defense contract goes sufficiently pear-shaped to make the news, so they need to “Do Something” to “Ensure Contractors are Following Best Practices to Protect the Taxpayer”.

Just like IRBs, they aren’t judged against some ideal cost-benefit analysis, but against the “did you screw up bad enough for the public to notice” standard. Boring stuff like moderate cost and schedule overruns and wasted dollars spent on pointless tests and reports can’t trump that risk aversion.

So contractors can’t compete on efficient processes because the processes are dictated by the customer and everybody has to bake the costs into their proposals. And perversely they get judged by how enthusiastically and thoroughly they implement these processes, so if they want to win future business, they shut up and get in line. The financial incentive isn’t really there either - true “cost plus” contracts are largely gone but in general the government still covers the actual development costs plus fixed incentive fees, or pays you based on your original estimate (including the mandated inefficiencies), so the incentive to cut costs is weak.

As an example, SpaceX charges the government (NASA, the Space Force, other agencies) something like 3x as much per launch as their advertised fees for commercial flights. These are literally the same rockets designed by the same people and built on the same production lines, but with the added burden of extra oversight, testing, validation, etc. (some of the cost is more legitimately unique to government launches, such as costs imposed by securing classified payloads and associated data).

I don’t want to minimize the fact that in some cases contractors really do screw up and waste a lot of money, but a lot of this is “oversight theater” that serves more as CYA for the contracting agency than anything that actually drives quality and efficiency.

There are a couple of other perverse ways competition actually hurts things:

1) To avoid wasting government money on sweetheart single-source subcontracts, we are mandated to “competitively bid” every part we order from somebody else. Even if we’ve got an equivalent part that works just fine and that we’ve been ordering for another program for years, it doesn’t matter: you need to solicit bids from multiple suppliers, take the “best” bid by some objective criteria, and generate a mountain of paperwork that goes through many hands to get this approved. This takes forever and costs a lot of money. It might make sense for some major subcontracts, but often we’re talking commercially available catalog parts that cost four or low five figures on a literal billion-dollar contract, and any savings are rapidly swamped by the overhead cost (but due diligence has been done, the agency can confidently report).

2) Sometimes awarding a big contract to only one contractor based on an initial proposal fails, because something goes wrong after lots of money has been spent - it can be very hard to judge the success probability of a very complicated development from a proposal alone. The solution is to fund multiple teams farther into development, say through the Critical Design Review (CDR). Unfortunately, this sometimes means that dumb requirements that got baked into the proposal can’t be fixed, because doing so would warp the competition. If we were single-sourced we could say “look, government, you want A, but it will double the cost and take twice as long, and we’ve done an analysis that shows that giving you 50% of A will cover 99% of your use cases, plus make the thing better at B and C”. But in a competitive program, the government must say “nope, contract says A, do A”.

Expand full comment
Gordon Tremeshko's avatar

I would disagree with Rbbb's comment a bit regarding the NIMBY parallel. I would say they're both situations where regulations are used by one group to pursue their perceived self-interest in a way that creates a tragedy-of-the-commons effect.

Expand full comment
Paul Dueck's avatar

Commenting on "Notably, in most Anglo-Saxon legal systems, you can't consent to be caused physical injury."

Importantly though, you can consent to *risking* physical injury and death, even very severe injury and death. In fact (thinking of impressment and conscription) the government is even allowed to compel you to risk those things. That seems importantly relevant when thinking about medical research and consent problems generally.

Expand full comment
Jordan's avatar

If the main benefits of the IRB are accruing to MTurk workers, that doesn’t justify the IRB. That just sounds like someone (taxpayers, presumably) is funding Amazon's failure to moderate disputes in its own marketplace.

Expand full comment
Your name's avatar

Hopefully a good explanation of JDK's comment:

I think JDK's point is that your comparison and complaint only work for a narrow set of overwhelmingly likely and precise counterfactuals. In the case of a protocol-optimization study that was delayed, yes, we know the consequences. A speculative study that never happened? No clue. In your own area of clinical interest, what is the range of possible outcomes of psychedelic therapy (roughly status quo to hippie utopia?), and what is the size of the range you'd assign even a 2/3 probability to? An "easy" example I'm semi-seriously challenging you to forecast (I don't have money with which to bet you, but I'm sure you can find someone willing to put it on a prediction market): similarly dosed intranasal racemic, s-, and r-ketamine monotherapies were all in as-yet-unpublished Phase III trials last year; forecast their respective effect sizes and response rates. You're already intimately familiar with the relatively large amount of existing research, and they're three variations on the same molecule! If you can't forecast *those* trial results with reasonable accuracy and precision, why think anyone can do the same for something more speculative, or for the downstream benefits of basic science?

If we can't reasonably quantify the benefit, a cost-benefit - and, therefore, consequentialist moral - analysis is "garbage in, garbage out."

Expand full comment
Theodric's avatar

“A speculative study that never happened? No clue.”

I think you can still judge “what’s the probability this finds something that might significantly help people” vs. “what’s the likelihood this might significantly harm someone in the study”. The harm side should not be given infinite weight.

Expand full comment
Your name's avatar

Yes, *where possible* you should endeavor to make reasonable inferences, but JDK and Scott were discussing moral culpability in the deaths of people not saved by unconducted research and not all unconducted research is the same.

Expand full comment
Theodric's avatar

Part of the problem I have with JDK’s argument is that it doesn’t acknowledge that the *harms* of research are often equally speculative. Furthermore it’s unclear to me why we can say that “the IRB preventing this study caused deaths” is incoherent / inappropriate assignment of culpability, but then turn around and say “the IRB prevented harms by denying the research”. In neither case is the IRB *directly* culpable.

This is especially infuriating when the things the IRB denies are not “do dangerous experiments without consent” but “let the patient read the consent form themselves, instead of having it read aloud to them on the off chance that someone is illiterate but doesn’t want to tell you that” or “use this data we already have to answer a question”.

Expand full comment
Your name's avatar

"Part of the problem I have with JDK’s argument is that it doesn’t acknowledge that the *harms* of research are often equally speculative. Furthermore it’s unclear to me why we can say that “the IRB preventing this study caused deaths” is incoherent / inappropriate assignment of culpability, but then turn around and say “the IRB prevented harms by denying the research”. In neither case is the IRB *directly* culpable. "

Simple: Re-reading their comments, JDK does not say that the IRB prevented harms.

Expand full comment
Theodric's avatar

And Scott never said it was impossible for IRBs to prevent harms.

The whole argument is a non-sequitur. We can semantically argue about whether an IRB “causes” death or just is one necessary but insufficient link in the chain of a death, but ultimately it doesn’t matter for a cost benefit assessment.

I think you’re making a fundamentally different (better) argument: the supposed benefits are often too speculative to be quantitatively weighed. JDK’s argument was more like “IRBs can definitionally NEVER *cause* death by denying a study, so their cost should always be zero lives”.

I mean if we really want to deep-read JDK’s comments, they seem to be trying to derail the discussion into their personal hobby horse of iatrogenesis rather than actually assessing whether IRBs are in and of themselves net beneficial.

Expand full comment
Bill's avatar

The anecdotes mirror my experience working in military IT and dealing with cybersecurity departments, which churn out mandates that are completely divorced from both the technical systems and the actual security principles they're supposed to be enforcing. I wonder if this generalizes to any kind of organization that attempts to separate "implementation" from "accountability"... the "accountability" group tends to be overtaken by bean counters who are better at concern-trolling than at doing their actual jobs.

Expand full comment
quiet_NaN's avatar

From the jumpingjacksplash quote:

> Mengele's research has definitely saved more people than he killed

{{Citation needed}} very much.

From my understanding, Mengele's "research interests" were about as relevant to the thriving of humanity as you would expect the obsessions of a random serial killer to be. People with different eye colors, wtf? How does that help the Endsieg?

From his Wikipedia article, there might be cases where he investigated the spread of diseases with the help of Jewish doctors, but mostly his method of fighting diseases was to send all the inmates of the building to the gas chamber and then disinfect the building. Arguably Mengele was part of a structure which killed on the order of perhaps a million (no, I did not look up the deaths during his stay there, nor will I get involved in any discussion of numbers) people in Auschwitz (most of them not involved in his "research"), so for that statement to be true his research would have to have saved millions of people. I am very doubtful that his research saved even more people than he killed /with his research/.

Also, scientists and "scientists" working in totalitarian regimes often have some epistemic conflict of interest. Suppose a Nazi scientist had a genuine but unethical experiment trying to test the hypothesis of "Aryan" superiority, and the result came out negative. What do you suppose Himmler would do if he learned of their publication?

Not all atrocious experiments are also scientifically useless, but there seems to be a strong correlation between being an atrocious human being and being an atrocious experimenter.

I previously mentioned the Dachau freezing water experiments as something which might save human lives. Reading Wikipedia,[0] I might have to go back on that:

> The results of the Dachau freezing experiments have been used in some late 20th century research into the treatment of hypothermia; at least 45 publications had referenced the experiments as of 1984, though the majority of publications in the field did not cite the research.

> In a 1990 review of the Dachau experiments, Robert Berger concludes that the study has "all the ingredients of a scientific fraud" and that the data "cannot advance science or save human lives."

[0] https://en.wikipedia.org/wiki/Nazi_human_experimentation

Expand full comment
o11o1's avatar

> I thought tech had a reputation for “move fast and break things”, and because I would have expected the market to train this out of companies that don’t have to fear lawsuits.

Still reading through the post, but I'll just highlight that Tech often has two major flavors: "Startup Tech" (sometimes "Small Company"), which is very prone to the Move Fast and Break Things lifestyle, and "Enterprise Tech" (sometimes "Big Company"), where there is a lot more regulatory influence and litigation risk. The latter are the places that tend to stay out of the limelight but have far more of a tendency toward layers of approval and Dilbert-like dotted-line bosses who can veto without cost to themselves.

Expand full comment
José Vieira's avatar

Regarding harm/omission, that's a bad example for Christianity's standard positions. Not saying there isn't such a distinction in this framework, but there's also too much of "I was hungry and you didn't feed me" for that example to work. You definitely have a moral obligation to let the guy know there's a car coming.

Expand full comment
Mark Y's avatar

I kinda mentioned this in the original book review comment section, but this quote:

“Jonas argued that all cancer studies should be banned because it’s impossible to consent when you’re desperate to survive”

makes me even more sure: this guy really sounds like a non-fictional member of the BETA-MEALR party:

https://slatestarcodex.com/2013/08/25/fake-consensualism/

Which opens with:

“Dear friend, have you considered banning health care?”

Looks like we’ve found someone who actually has considered it! (Not quite the same reasoning, but still uncanny)

Expand full comment