What really bothers me about IRBs is that there doesn't seem to be any attempt to check whether they are actually effective, or to take the moral harms of not doing experiments seriously. Letting people die because they aren't getting a better treatment is deeply immoral, but it isn't taken seriously.
The justifications people like to bring up, like Tuskegee, were awful, but it's just not clear that IRBs would have prevented them. Often many people knew and just shrugged. I'm sure you could have gotten an IRB in Nazi Germany to approve Mengele's experiments.
I'd much prefer just having a committee of other tenured profs at the institution do a brief consideration with all the detailed forms and considerations saved for the few studies which raise serious ethical questions.
People have joked about applying NEPA review to AI capabilities research, but I wonder if some kind of IRB model might have legs (as part of a larger package of capabilities-slowing policy). It’s embedded in research bureaucracies, we sort of know how to subject institutions to it, and so on.
I can think of seven obvious reasons this wouldn’t work, but at this point I’m getting doomery enough that I feel like we may just have to throw every snowball we have at the train on the off chance one has stopping power.
“In one 2009 study, meant to simulate human contact, he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.”
Is the swab from the first person’s mouth going into the second person’s mouth or just on their skin? If it’s the former, which is what the description makes it seem like, I don’t think it’s right to say the risk is equivalent to kissing another person’s hand - more like the risk of French kissing them.
This problem must be at the heart of Eroom's Law. The opportunity cost of expensive and slow drug development seems like an obvious place to start, but it's a political red button.
>IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this.
Or the Motion Picture Association, which assigns ratings to movies. This has massive consequences for a film's financial success - most theaters won't play movies with an NC-17 rating, for example - so studios aggressively cut films to ensure a good rating.
This is why you almost never see genitals (for example) in a Hollywood movie. You've got a weird, perverse system where studios aren't truly making movies for an audience, they're making them for the MPA: if the MPA objects, the film is dead in the water.
Almost every industry has its own version of this. In the world of comics, you had the Comics Code Authority. It was the same deal. Unless your comic met with the moral approval of the CCA (example rules: "(6) In every instance good shall triumph over evil and the criminal punished for his misdeeds." [...] "(3) All characters shall be depicted in dress reasonably acceptable to society.") you basically couldn't get distribution on newsstands. This led to the death of countless horror and crime comics in the 50s (Bill Gaines' EC Comics being the most famous casualty), and the firm establishment of superhero comics.
This isn't government regulation, exactly - it's industries regulating themselves (but it's complicated, because the CCA was implemented out of fears that harsh government censorship might be coming unless the comics trade cleaned its own house first, so to speak). It has similar gatekeeping effects, though.
>Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
It sounds almost like a game theory problem. In real life, there are more options than just co-operate/defect. There's often also "refuse to play", which means you cannot win and cannot lose.
If a new drug or treatment or whatever works, no glory will come to the IRB that approved the experiment. But if it kills a bunch of people, they'll probably get in trouble. Regulators have a strong incentive to push the "do nothing" button.
I think there's a typo here:
> Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous
As written the "they" in the second clause is referring to the "Patients with a short consent form," such that longer is better, which is the opposite of what the rest of the paragraph is suggesting.
It’s situations like these which would be avoided in a country with an absolute monarchy, where the monarch’s one job is to “unstick” the system when it is doing insanely dysfunctional things because of a combination of unintended consequences and empty skulls. Ideally such a monarch would have a weekly tv show where he used his limited-unlimited powers to fix things (limited in that he can’t overrule the legislature but unlimited in that all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved by his plenary power to fire or imprison any individual whose egregiousness he can make an example of in order to address public opinion). Bureaucrats who reject basic common sense about costing 10,000 lives to save 10 can be named and shamed in ways which will rapidly improve the quality of the others.
>Whitney points out that the doctors who gave the combination didn’t need to jump through any hoops to give it, and the doctors who refused it didn’t need to jump through any hoops to refuse it. But the doctors who wanted to study which doctors were right sure had to jump through a lot of hoops.
The incentives are different between "a doctor may do X or Y if he thinks it's best for the patient" and "a doctor may do X or Y based on considerations other than what's best for the patient", even if he's only going to be doing one of the things that one of the first two doctors would do. Don't ignore incentives.
Slight nitpick: the cost of getting rid of IRBs isn't the handful of deaths a decade we see currently. It's whatever the difference in deaths would be with existing IRBs vs how many would die without them (or with their limited form). And of course any costs related to something something perfect patient consent.
You said Whitney found no evidence™ of increased deaths from before 1998, but if anything that strengthens the case against the increased strictness.
I am sure there are good reasons why it is impossible, but in my dreams (the ones where I have a pony) some bored lawyer figures out how to file a class-action suit against the ISIS-2 IRB on behalf of everyone who died of a heart attack during the 6 month delay. Once IRBs are trapped in a Morton's fork where they get sued no matter what decision they make, they will have to come up with some other criteria to base their decisions on (though I am cynical enough to expect whatever they come up with to be even worse).
A very strong essay, well-written as always, and addressing clearly and cogently a point both very important and not obvious to most people. Well done!
I think the fundamental problem is that you cannot separate the ability to make a decision from the ability to make a *wrong* decision. However, our society--pushed by the regulator/lawyer/journalist/administrator axis you discuss--tries to use detailed written rules to prevent wrong decisions from being made. But, because of the decision/wrong decision inseparability thing, the consequences are that nobody has the ability to make a decision.
This is ultimately a political question. It's not wrong, precisely, or right either. It's a question of value tradeoffs. Any constraint you put on a course of action is necessarily something that you value more than the action, but this isn't something people like to admit or hear voiced aloud. If you say, "We want to make sure that no infrastructure project will drive a species to extinction", then you are saying that's more important than building infrastructure. Which can be a defensible decision! But if you keep adding stuff--we need to make sure we're not burdening certain races, we need to make sure we're getting input from each neighborhood nearby, etc.--you can eventually end up overconstraining the problem, where there turns out to be no viable path forward for a project. This is often a consequence of the detailed rules to prevent wrong decisions.
But because we can't admit that we're valuing things more than building stuff (or doing medical research, I guess?), we as a society just end up sitting and stewing about how we seemingly can't do anything anymore. We need to either: 1) admit we're fine with crumbling infrastructure, so long as we don't have any environmental, social, etc., impacts; or 2) decide which of those are less important and streamline the rules, admitting that sometimes the people who are thus able to make a decision are going to screw it up and do stuff we ultimately won't like.
I kept expecting this post, like so many others on apparently-unrelated subjects, to loop around and become an AI Safety post. But since it didn't, I guess I'll drop the needle myself.
Can we consider AI Safety types to be part of the lawyer-administrator-journalist-academic-regulator axis that stops things from happening due to unquantifiable risks?
After reading through and getting to the proposed suggestions/your meta thoughts, I was surprised that you didn't mention the option of some sort of blanket liability shield to protect institutions much in the same way that the current consent forms are designed to do, so they don't need to be as obsessed with lengthy bureaucracy to shield themselves. Is this just not realistic?
"Lasagna and Epstein?"
And no endnote about the nominative determinism in a guy named Epstein studying consent?
Missed opportunity, Scott.
Some back of the envelope math: the high end of your estimate for IRB-induced deaths (100k Americans/year) would imply that ~2.9% of the 3.5M annual deaths are attributable to delayed studies.
This seems high to me. I wonder if the most egregious examples like ISIS-2 are throwing your estimate off. The lower end of 0.29% seems more reasonable.
Still a massive number of people though. Really enjoyed this, and hope it nudges us toward change.
(edited for clarity)
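The back-of-the-envelope figures above can be sanity-checked in a couple of lines (the death totals are the commenter's own assumptions, not established data; the 10k low end is inferred from the 0.29% figure):

```python
# Sanity check of the back-of-the-envelope estimate above.
annual_us_deaths = 3_500_000   # assumed total US deaths per year
irb_deaths_high = 100_000      # high end of IRB-induced deaths per year
irb_deaths_low = 10_000        # low end implied by the 0.29% figure

high_share = irb_deaths_high / annual_us_deaths
low_share = irb_deaths_low / annual_us_deaths

print(f"high end: {high_share:.1%}")  # high end: 2.9%
print(f"low end: {low_share:.2%}")    # low end: 0.29%
```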
If I were in Scott's situation I would have avoided any involvement with the IRB in the first place, and instead secretly recorded the data and published the stats anonymously on the internet. Nobody can sue you for that because nobody has any damages. In the unlikely event that the hospital found out they could fire you, but "firing doctor for publishing statistics" is a bad look.
I haven't finished reading but felt compelled to comment on this:
"the stricter IRB system in place since the '90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing lifesaving studies."
No. It does NOT "cause" deaths. We can't go down this weird path of imprecision about what "causing" means.
I've been reading Ivan Illich's "Medical Nemesis" recently. Claiming that IRBs which stop research ostensibly CAUSE death strikes me as cultural iatrogenesis masquerading as a cure for clinical iatrogenesis.
Excellent essay. Since you’re interested in distinguished families, you might note that Ezekiel Emanuel is the brother of Rahm and Ari.
What good historical examples are there of systems this broken being radically redesigned or recreated, with great results afterwards? As far as I know, most of them occur during large power shifts or violent revolutions, but I'd love to see what types of strategies worked in the past aside from those.
> "A few ethicists (including star bioethicist Ezekiel Emanuel) are starting to criticize the current system; maybe this could become some kind of trend."
Nir Eyal's guest essay 'Utilitarianism and Research Ethics' is a nice contribution in this vein:
(AFAIK, Eyal is not himself a utilitarian, but he here sets out a number of potential commonsense reforms for research ethics to better serve human interests. He's the inaugural Henry Rutgers Professor of Bioethics at Rutgers University, and founded and directs Rutgers’s Center for Population-Level Bioethics.)
>the Willowbrook Hepatitis Experiment, where researchers gave mentally defective children hepatitis on purpose
I ran across this study while reviewing historical human challenge trials as a consultant for 1 Day Sooner. Having not previously known about it, I found it quite shocking. They certainly learned a lot about hepatitis, but they definitely mistreated the kids.
I have to tell my consent form story. I was asked to join an ongoing, IRB approved study in order to get control samples of normal skin to compare to samples of melanomas that had already been collected by the primary investigator. The samples were to be 3mm in diameter taken from the edge of an open incision made at the time of another surgery (e.g. I make an incision to fix your hernia and before I close the incision I take a 3mm wide ellipse of extra skin at the edge). There is literally zero extra risk. You could not have told after the closure where the skin was taken. The consent form was 6 pages long (the consent form for the operation itself that could actually have risk was 1 page and included a consent for blood products). I had to read every page to the patient out loud (the IRB was worried that the patients might not be literate and I wasn’t allowed to ask them because that would risk harm by embarrassing them). They had to initial every page and sign at the end.
I attempted to enroll three patients. Every one stopped me at the first page and said they would be happy to sign but they refused to listen to the other 5 pages of boilerplate. The only actual risk of the study seemed to be pissing off the subjects with the consent process itself. I quit after my first clinic day.
I never did any prospective research again as chart review and database dredging was much simpler.
You will also find that many institutions are dodging the IRB by labeling small clinical trials as “Performance Improvement Projects” which are mandated by CMS and the ACGME. They are publishing the crappy results in throwaway or in-house journals to get “pubs” on resident CVs.
The broader question of "why did all this administrative bullshit massively explode in the 90s and cripple society" seems underexplored, presumably because the administrators won't let you explore it.
Wasn't the whole point of the 90s the triumph of neoliberalism and deregulation? How come a less regulated economy goes hand in hand with a more regulated research sector?
I’m surprised doctors have only wished death upon the IRB. The thousands of lives lost due to study delays vs. the lives of a few IRB administrators sounds like a very easy version of the trolley problem.
How does this work in other countries? This review describes a peculiar sequence of events in the US. Not every country had a Tuskegee, a Hans Jonas, an asthma experiment death, or a lack of bigger issues to worry about. Yet the US can't be that much of an outlier either, otherwise this would be a story of how all medical research has left the US. (At least fundamental/early stage research and post-approval research; drug companies may need to do some trials in the US to get approved in its lucrative market.)
Did other countries independently take a similar path? Did they copy the US? Did they have much stricter laws on human experiments to begin with?
>It’s not as bad as it sounds
Only in hindsight, because they got lucky. They didn't know that the children would be asymptomatic--the experiment was to prove it. What if they had been wrong?
For a longer look at the legal issues:
Yuval Noah Harari, in something of a throwaway line, points out that the UN Universal Declaration of Human Rights is essentially the closest thing we have to a world constitution, and the most fundamental of these rights is the right to life.
I raise this because I'd love to see how far a lawsuit against this IRB nonsense could go based on Human Rights and general wokeness; the essential argument being that this "oversight", far from being costless and principled, is in fact denying the human rights of uncountable numbers of people, both alive and not yet born, by preventing, for no good reason, the accumulation of useful medical knowledge...
A couple of weeks ago I suggested (partly in jest) to some physicians who think about these kinds of issues that maybe a study where the participants are doctors' families, parents, and grandparents would be useful to put some skin in the game.
That however would not really be a "randomized" study and would introduce confounding problems.
you could have just told us to watch Finding Nemo lol
I've noticed the bureaucrat hegemony as well on a smaller scale. I believe it's the sign of a mature, post-peak society. Everything is pretty much built out, so risk/reward favors rent-seeking petty overlords.
It's a sign that the low-hanging fruit has mostly been picked. "People who want to get stuff done, go somewhere else!" I mean, your examples are pretty piddly in the grand scheme. 100 years ago the study would have been identifying a deadly disease, and there would be no IRB (or grants or funding for that matter). Today it's settings on a ventilator? Small potatoes, even if the surface area is large.
The two historical examples that come to mind are late imperial China and (post?) Soviet Russia. Not a good omen?
Why can’t you sue an IRB for killing people for blocking research? You can clearly at least sometimes activist them into changing course. But their behavior seems sue-worthy in these examples, and completely irresponsible. We have negligence laws in other areas. Is there an airtight legal case that they’re beyond suing, or is it just that nobody’s tried?
“He was uncertain that people could ever truly consent to studies; there was too much they didn’t understand, and you could never prove the consent wasn’t forced.”
Makes me think that we’ve finally found a real live member of the BETA-MEALR party:
"some kind of lawyer-administrator-journalist-academic-regulator axis"
Yup! Which is why my view of proposed AI regulation is (as I wrote in the last open thread, in https://astralcodexten.substack.com/p/open-thread-271/comment/14409338 ):
Oh shit. For an example of recent regulatory action see https://astralcodexten.substack.com/p/the-government-is-making-telemedicine - and this is a _favorable_ case. The government has a century of experience in regulating medicine.
It isn't wholly impossible for the government to eventually settle on a fairly sane set of regulations. Traffic regulations work reasonably sanely. But this isn't the way to bet, particularly in a new area of technology.
Yes, AI is potentially dangerous. While I think Eliezer Yudkowsky probably overestimates the risks, I personally guess that humanity probably has less than 50/50 odds of surviving it. Nonetheless, I would rather take my chances with whatever OpenAI and its peers come up with rather than see an analog to the telemedicine regulation fiasco - failing to solve the problem it purports to address, and making the overall situation pointlessly worse - in AI.
The claim in this essay (“Repeal Title IX,” https://www.firstthings.com/article/2023/01/repeal-title-ix) is that the Title IX bureaucracy followed the exact same crackdown -> defensive bureaucratisation path as Scott describes below for IRBs. (I see more upside for Title IX than the author, but I find her overall analysis of the defensive dynamics compelling.)
“The surviving institutions were traumatized. They resolved to never again do anything even slightly wrong, not commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didn’t trust IRB members - the eminent doctors and clergymen doing this as a part time job - to follow all of the regulations, sub-regulations, implications of regulations, and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves, had no particular interest in research, and their entire career track had been created ex nihilo to make sure nobody got sued.”
It's almost as if our current practices of science impede the practice of science.
This is yet another example of how attempts to reduce risk in fact serve to reduce variance, and this turns out to be net negative. True of a broad range of policies justified in the name of safety.
Most of the comments diving into the details of IRBs are missing the point. The dynamics on display here have little to do with IRBs in particular. They're common to most institutions set up to prevent risk by requiring prior permission.
Ugh, what is wrong with our species? Maybe we are ungovernable. This fungus of dysfunctionality sprouts in any crevice, and there are always, always crevices.
Here’s some deeper context, beyond IRBs specifically:
1. Loss aversion
Medicine: First DO no harm.
Law: You may step over a drunk in the gutter with impunity. But you come to his aid at your peril.
People are more upset at losing a $20 bill than they would be to learn that they had walked past a $100 bill.
2. Bambi's mom and Stalin
We can watch, with complete indifference, vast tracts of wild lands disappear, but we shed tears over the cute orphaned fawn. "One death is a tragedy. A million deaths is a statistic."
3. Everything happens for a reason
No bad outcome without a bad actor. Someone is to blame, and must be punished. To carry out this principle, new duties continually are being created by statute or judicial decision.
4. Living in the moment
Just as we find deferred gratification difficult, and discount future positive outcomes, we also underweight future negative outcomes.
5. Rules based order
Life is chaotic, full of lurking dangers, and must be constrained, so we have rules. But life is also full of paradox, randomness, and edge cases that must be handled on a case-by-case basis until some underlying regulatory principle can be discerned. But the grown hall monitors who administer the rules know that exercising judgment is far more likely to get them canned than not. They will only do that when the new rule is almost in place. For a sweetener they usually ask for an expanded scope of authority and a new cadre of courtiers to support the enhanced dignity of the bureau.
6. Petty tyrannies
Hall monitors grow up but never lose the taste for pettifogging displays of social dominance.
How do you imagine regulating AI is going to play out?
I don't understand why we need regulation of research ethics at all. As long as the researchers aren't doing something that is illegal for a "civilian" to do on their own, why do we need an IRB to monitor them? All of the examples of bad research you give here are very clearly torts, and any competent lawyer could easily extract massive settlements from those doctors and their institutions nowadays. Fear of lawsuits on its own could deter the majority of truly harmful research.
If you're giving someone an experimental medicine or otherwise doing research on an activity that is potentially harmful, all that should be required is a full explanation of the risks, what the doctors know and don't know about the drug, etc. Consent by a competent adult is all that any reasonable system of research should require (and I'd add that consent should only be necessary where there's a risk of a meaningful harm--if it wouldn't be illegal for you to give people surveys as a civilian, then you shouldn't need to get consent to collect them as a researcher, for example).
Lasagna and Epstein is a legendary research duo.
Also when you brought up NIMBYism, I imagined the IRB wearing joker makeup and saying "You see I'm not a monster, I'm just ahead of the curve"
I wish some lawyer would use that study showing that short consent forms work better to argue that anyone demanding a long consent form is negligently endangering the subjects.
There's an obvious solution. Create a meta-regulatory-agency which requires other regulatory agencies to fill out enormous amounts of paperwork proving that their regulations won't cause any harm.
I work at a big tech company and this is depressingly relatable (minus the AIDS, smallpox and death).
Any time something goes wrong with a launch the obvious response is to add extra process that would have prevented that particular issue. And there is no incentive to remove processes. Things go wrong in obvious, legible, and individually high impact ways. Whereas the extra 5% productivity hit from the new process is diffuse, hard to measure, and easy to ignore.
I've been trying to launch a very simple feature for months and months, and there are dozens of de facto approvers who can block the whole thing over some trivial issue like the wording of some text or the colour of a button. And these people have no incentive to move quickly.
I haven’t finished the article, and I presume someone did, but did no one think about just taking out some extra insurance? This kind of thing is what insurance (and creative kinds of insurance) were meant to handle. Even with grandstanding congressmen, you can say your alleged victims were fully compensated.
I hate to be that guy... no, actually, I love being that guy. You really think that this is down to connectedness?
There's a book that comes out in 1957 that predicts _exactly_ this kind of insanity throttling progress with consequent disasters. And, yes, though it focused more on railways and steel makers, it also predicted the effect on medical progress.
That book was called "Atlas Shrugged".
Please start paying attention.
Here's a better reason for the vetocracy: Court Overreach. The reason lawyers have as much power as they do is because decision making has been forcibly taken from policymakers and more efficient bureaucracies by overzealous judges with no limits on their power.
IRBs have to deal with lawyers, our approval process for environmental protection goes through courts rather than the EPA, and every single new apartment building has to win or beat a lawsuit to get built.
This is because entrenched interests want to wrest policymaking away from legislators and give it to courts.
Another example of the bureaucratic calculation, bureaucrats are exposed to all the downsides of a positive decision but get none of the upsides, so they are incredibly cautious about making a decision. If you want a different system, change the incentives for the decision makers.
The idea of a dispassionate safety reviewer sounds great until you start to realize it will be populated by agents who have their own interests to consider.
One thing a student who did an exchange year in America pointed out to me is that, unlike in Europe, speed limits are kind of meaningless. The speed limit on the interstate may be 70. Nobody goes 70. They all drive somewhere between 75 and 90, and whether that's judged to be breaking the rules or not is ambiguous and situational. It depends who you are, who the traffic cop is, and so on. If you grow up in the US, this is second nature, but if you come into the system from the outside it seems utterly mysterious.
The American system has a lot of rules on paper, and functioning in it requires you to know which rules are real, which rules you can break or ignore with impunity, and which rules to subvert by entangling the interests of the rule-enforcers with your own and co-opting them.
This is not efficient but it does ensure that real power is wielded by those who understand the system best, which tends to be those closest to it.
The Gordian knot of all this bureaucratic nonsense could be decisively cut if in any lawsuit for damages the compensation amount was strictly limited, according to a set of guidelines and, based on the latter, decided by the judge rather than the jury.
Of course the jury would still be the sole arbiters of whether there was a liability. I'm not suggesting they would have no role and a panel of judges would go into a huddle and decide the whole case, although that would be even simpler and cheaper!
The snag is I think in the US judge-determined awards would be contrary to some constitutional principle, dating from a former age when life was simpler and maybe jurors were more pragmatic about how much compensation to award, instead of the lottery jackpot win level of awards which often seems to prevail today.
Isn't this just part of the greater liability law issue the US seems to have (seen from the outside)? It looks like the incentives for liability claims are so huge that a significant part of the legal profession is dealing with either pressing these claims or defending against them. This also seems to be a significant part of the extreme US health spending.
To be frank I have not thought about it much, but my assumption is that LLM's (large language models), AI's - are going to revolutionise medical research.
At first their prognostications will be challenged and resisted, but as time goes on and they start to get a track record of getting it right, we will come to rely on them more and more.
AI's are going to change literally everything, and in ways that are completely unpredictable.
>It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.
This is consistent with my experience of government administration more generally. The people implementing rules tend to be very aware of the problems and irrationalities of the system, but they're following instructions and priorities from senior management, and ultimately politicians, for whom the primary incentive is to avoid embarrassing scandals, not maximise outcomes.
Politicians are to some extent just following their rational self interest, as the negative press and public opinion from one death from a trial is far greater than a thousand from inaction.
I'm in the UK so consent is a bit less onerous here, and yet, I've attempted to participate in 3 covid challenge trials, trials for which there are not exactly truckloads of willing participants, and yet I keep getting denied because I'm a carrier for Factor V Leiden. I don't even have the condition, I'm just a carrier, and yet because it raises my risk of clotting ever so slightly, I keep getting denied. This AFTER I've had covid twice, once before being vaccinated, and without any sign of blood clotting. Bureaucracy gone awry.
I’m talking about a King with limited powers, much less than a President would have. There are plenty of constitutional monarchies where the King’s power is nontrivial but which have legislatures and Prime Ministers for ordinary governance.
On the general theory of risk avoidance in every form dominating politics, I think there's more to it. The risk of gun violence is famously ignored by politics, for instance. The risk of injury to pedestrians and cyclists by cars is pretty famously ignored. COVID risk certainly wasn't uniformly assessed.
So it isn't that society can't tolerate risks or make ROI-based decisions in many situations. I suspect the issue is the specific kind of legal risk medical professionals and institutions face: the tail risk imposed is huge, and the system does a poor job of assessing the ambiguity of day-to-day decisions. Doctors famously believe this is onerous.
But compare that with policing. Society famously tolerates quite a bit of risky behavior from police, and accepts that officers face ambiguous situations and need to be given latitude to act; and while the tail risk there has increased substantially, I'm not sure how it compares to the situation in medicine.
Maybe a National Medical Research Association joined by millions of voters suffering from conditions that would benefit from medical research, and fanatically protective of medical research prerogatives, could change the landscape. National Health Council? Maybe the big orgs like AHA and ACS need to advocate less for funding and more for red tape removal.
"I don’t know exactly who to blame things on"
I do. America is the victim of its own success. It's gotten so rich, so comfortable, so risk-averse that spending a billion to maybe save a handful of lives no longer seems patently absurd. There are no reality checks anymore. Like you quote, "At a time when clerks and farm boys were being drafted and shipped to the Pacific, infecting the mentally ill with malaria was generally seen as asking no greater sacrifice of them than of everyone else." Whereas these days what little of warfare remains is mostly done by unmanned drones.
I think you've unintentionally conflated two distinct points: first, that IRBs are wildly inefficient and often pointless within the prevailing legal-moral normative system (PLMNS); second, that IRBs are at odds with utilitarianism.
Law in Anglo-Saxon countries, and most people's opinions, draw a huge distinction between harming someone and not helping them. If I cut you with a knife, causing a small amount of blood loss and maybe a small scar, that's a serious crime, because I have an obligation not to harm you. If I see a car hurtling towards you, and you'd have time to escape if you noticed it, but I don't shout a warning (even if I stay silent because I don't like you), that's completely fine, because I have no obligation to help you. This is the answer you'd get from both Christianity and Liberalism (in the old-fashioned/European sense of the term, cf. American Right-Libertarianism). Notably, in most Anglo-Saxon legal systems, you can't consent to be caused physical injury.
Under PLMNS, researchers should always ask people if they consent to using their personal data in studies which are purely comparing data and don't change how someone will be treated. For anything that affects what medical treatment someone will or won't receive, you'd at least have to give them a full account of how their treatment would be different and what the risks of that are. If there's a real risk of killing someone, or permanently disabling them, you probably shouldn't be allowed to do the study even if all the participants give their informed consent. This isn't quite Hans Jonas' position, but it cashes out pretty similarly.
That isn't to say the current IRB system works fine for PLMNS purposes; obviously there's a focus on matters that are simply irrelevant to anything anyone could be rationally concerned with. But if, for example, they were putting people on a different ventilator setting than they otherwise would, and that risked killing the patient, then that probably shouldn't be allowed; the fact that it might lead to the future survival of other, unconnected people isn't a relevant consideration, and nor is "the same number of people end up on each ventilator setting, who cares which ones it is" because under PLMNS individuals aren't fungible.
Under utilitarianism, you'd probably still want some sort of oversight to eliminate pointless yet harmful experiments or reduce unnecessary harm, but it's not clear why subjects' consent would ever be a relevant concern; you might not want to tell them about the worst risks of a study, as this would upset them. The threshold would be really low, because any advance in medical science could potentially last for centuries and save vastly more people than the study would ever involve. The problem is, as is always the case for utilitarianism, this binds you to some pretty nasty stuff; I can't work out whether the Tuskegee experiment's findings have saved any lives, but Mengele's research has definitely saved more people than he killed, and I'd be surprised if that didn't apply to Unit 731 as well. The utilitarian IRB would presumably sign off on those. More interestingly, it might have to object to a study where everyone gives informed consent but the risk of serious harm to subjects is pretty high, and insist that it be done on people whose quality of life will be less affected if it goes wrong (or whose lower expected utility in the longer term makes their deaths less bad) such as prisoners or the disabled.
The starting point to any ideal system has to be setting out what it's trying to achieve. Granted, if you wanted reform in the utilitarian direction, you probably wouldn't advocate a fully utilitarian system due to the tendency of the general public to recoil in horror.
> Academic ethicists wrote lots of papers about how no amount of supposed benefit could ever justify a single research-related death.
Ah, the perennial "trolley problem", gifted to us by academic virtue ethicists.
Or at least a peculiar variant of the problem, in which the trolley was headed to run over a few hapless people, until the ethicists virtuously pulled the lever that makes it run over thousands.
The massive increase in administration is killing health care in many ways. A small example: in 1985 I moved to Canada to run a rural practice including a 29 bed hospital with one administrator. We never ran out of beds, the ER was open 24/7 as was the lab and x-ray. These days that 'hospital' has 8 beds, 15 administrators and ER and x-ray during business hours on weekdays. No lab. The 8 beds are permanently filled, acute cases get transferred over the mountain to a regional hospital 50 miles away and most of the time those who would have to present to the ER also have to drive over the mountain.
Administration and management have become like ivy growing on an oak tree, thriving as the tree is killed. More nutrition goes to the ivy just as more money flows into admin than patient care. This is new. No one has seen or dealt with this before. How do we reverse it? I cannot imagine a government firing all the administrators and replacing them with one hard-working real person rather than a bureaucrat, but that is what it will take and if we don't do it one day there will be no room for any patients in that little hospital, and an exponentially rising number of admins will just have meetings with each other all day long for no purpose whatsoever. Thirty years ago John Ralston Saul foresaw this in his wonderful book Voltaire's Bastards, but we have done nothing, learned nothing. We are facing not so much institutional inertia, but institutional entropic death.
I was going along nodding my head in general agreement til I got to the part where you said this just like NIMBYism.
This is the near opposite of NIMBYism. When people (to cite recent examples in my neighborhood) rise up to protest building houses on unused land, they do it because they are more or less directly “injured”.
A person who prefers trees instead of town houses across the street is completely different from some institution that wants a dense thicket of regulations to prevent being sued. There is no connection.
This reminds me a lot of a concept in software engineering I read about in the Google Site Reliability Engineering book: error budgets, a way to resolve the conflict of interest between progress and safety.
Normally, you have devs, who want to improve a product, add new features, and iterate quickly. But change introduces risk: things crash more often, new bugs are found. So you have a different group whose job it is to make sure things never crash. These incentives conflict, and so you get constant fighting between the second group, who try to add new checklists, change-management processes, and internal regulations to make releases safer, and the first group, who try to skip or circumvent these so they can make things. The equilibrium ends up being decided by whoever has more local political power.
The "solution" that Google uses is to first define (by business committee) a non-zero number for "how much should this crash per unit time". This is common for contracts, but what is less common is that the people responsible for this number are expected to defend it from both sides: not just preventing crashing too often, but also preventing crashing too rarely. If there are too few crashes, that means there is too much safety, and effort should be put into faster change/releases. That way the incentives are better aligned.
I don't know how directly applicable this is to the legal system, and of course this is the ideal theory, real implementation has a dozen warts involved, but it seemed like a relevant line of thought.
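The mechanism described above can be sketched in a few lines. This is an illustrative sketch, not Google's actual tooling; the SLO figure, the 30-day window, and the release-policy thresholds are all made-up assumptions:

```python
# Sketch of an SRE-style error budget check (illustrative only).
# An availability SLO of 99.9% over a 30-day window implies a budget
# of 0.1% "bad" time that the service is *allowed* to spend.

SLO = 0.999                      # availability target agreed by the business
WINDOW_MINUTES = 30 * 24 * 60    # 30-day rolling window

def error_budget_status(downtime_minutes: float) -> str:
    """Decide release policy from how much of the budget is spent."""
    budget = (1 - SLO) * WINDOW_MINUTES   # total allowed downtime: 43.2 min
    spent = downtime_minutes / budget     # fraction of budget consumed
    if spent >= 1.0:
        # Budget exhausted: too risky, stop shipping and focus on stability.
        return "freeze releases"
    if spent < 0.5:
        # Budget going unused: too much safety, ship faster.
        return "ship faster"
    return "normal cadence"

print(error_budget_status(50))   # over budget
print(error_budget_status(10))   # well under budget
```

The key design point is the symmetry: the same number that lets the reliability side say "stop" also obliges them to say "go faster" when the budget is underspent, so neither side can ratchet the rules in only one direction.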
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous"
I think you mean LESS likely to miss cases?
Thanks for writing this; I've watched people I know suffer through IRBs...
>Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good
So, which one are you, a journalist or an academic?
Okay, that's a bit snarky, but I'm genuinely wondering why there isn't equal incentive for a journalist or academic to do what you're doing here. "Obstructive bureaucrats are literally killing people" is a perfect "if it bleeds it leads" headline!
This blog post made me so unbearably angry. It's like a Kafka story except worse because millions of people continue to die as a result of cruelly circular bureaucracy. I don't have anything constructive to add, just the wordless frustration of the truly horrified. IRBs must die.
Another data point in a giant pile of data points for my theory that liability concerns rule everything around us. How did we get here, and how do we escape?
Fascinating article! I feel for the American researchers!
It seems to me a very interesting case of a general problem where whether we find something acceptable or not depends a lot on the distribution of costs and benefits, and maybe less on the average cost or benefit. We tend to care a lot if the costs or benefits are high, at least for some people, and much less if the individual costs or benefits are low for all people, even if their sum is very high. It is not at all obvious to me what the general solution to this problem should be (although in this case there is no doubt that the current IRB process should be changed!)
I understand why lawyers and journalists might contribute to the problem. But if academic ethicists are dedicated to rooting out new forms of injustice, how come so few have noticed the injustice of good not done that you've laid out in this post?
In my experience, allowing an appeal process to an alternative decider is always net beneficial, even if the person taking the appeal is very likely to approve the previous decision. It is another point of review, which serves two major purposes. The first is that egregious cases can still be identified and shut down, which I think even a non-expert dean would be willing to do in very extreme cases. Secondly, it puts the earlier review levels on notice that someone may be looking at their work. Even if they eventually get their stance approved through the appeal, it shines light on their bad behavior and reduces their prestige in the relevant areas.
Let me mention my mental journey through this post, as it points out an important aspect:
> how they insist on consent processes that appear designed to help the institution dodge liability or litigation
When I read this early on, I said to myself, of course, the people running the institution have a fiduciary responsibility to avoid having it sued. So at root the problem is our litigious society. But further down I read:
> the IRB’s over-riding goal is clear: to avoid the enormous risk to the institution of being found in noncompliance by OHRP.
This is different: it's not really the litigiousness problem, as nobody is actually worried about subjects suing the researchers. (And as other posters have mentioned, current malpractice insurance can probably handle that.) The risk is that the OHRP will declare them noncompliant and cut off their federal research funding. And it seems that essentially all medical research is done, if not by, then at least in, facilities that get so much money from the US federal government that they have to do what it says.
So we're in a situation of "private law" where everybody is financially dependent on one bureaucracy and "he who has the gold makes the rules".
Unrelated to the content, this is the first time I've spotted use of the double "the," and I think it's specifically because it was followed by the repeated As in American Academy of Arts.
> I don’t know exactly who to blame things on, but my working hypothesis is some kind of lawyer-adminstrator-journalist-academic-regulator axis. Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone). The institutions hire administrators to create policies that will help avoid lawsuits, and the administrators codify maximally strict rules meant to protect the institution in the worst-case scenario. Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good, and cast suspicion on anyone who tries to add benefit-getting to the calculation. Finally, there are calls for regulators to step in - always on the side of ratcheting up severity.
Read up on Jonathan Haidt's research on Moral Foundations Theory. What you're describing flows directly from the intense left-wing bias in academia.
In a nutshell, there are five virtues that seem to be inherent points on the human moral compass, found across all different cultures. Care/prevention of harm, fairness, loyalty, respect for authority, and respect for sanctity. Liberals tend to focus strongly on the first two, while conservatives are more likely to weight all five more or less evenly. There's also a very strong bias towards immediate-term thinking among liberals, while conservatives are more likely to look at the big picture and take a long-term perspective.
When you have a system like modern-day academia that's actively hostile to conservative thought, you end up with an echo chamber devoid of the conservative virtues, a place where all they have is a hammer (short-term harm prevention) and so every little thing starts to look like a form of harm that must be prevented. And ironically, all this hyperfocus on harm-prevention ends up causing much greater harm over the long term, but short-term harm prevention inhibits any attempts to do anything about it.
> IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this. The San Francisco Chronicle recently reported it takes 87 permits, two to three years, and $500,000 to get permission to build houses in SF; developers have to face their own “IRB” of NIMBYs, concerned with risks of their own. Teachers complain that instead of helping students, they’re forced to conform to more and more weird regulations, paperwork, and federal mandates. Infrastructure fails to materialize, unable to escape Environmental Review Hell. Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
See also: COVID response. There's precious little in the way of evidence that lockdowns and masking actually saved any lives, but along the way to saving those few-if-any lives we created long-term effects that killed a lot of people and damaged millions more.
If IRB reform *is* possible, what can an individual do to make it more likely?
I'm hoping for a better option than "write your congressman," but it is a top-down problem. Grassroots approaches (like those applied to electoral or zoning reform) are a bad idea. Even at the state level, getting North Carolina to preempt federal regulations for its universities seems... risky.
I'm not saying this rates as a New EA Cause Area, but I don't want to leave this $1.6B bill lying on the ground.
>"Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone)."
I wonder if Congress re-writing institutional mandates to make them at least consider benefits instead of just risks would cause (at least the threat of) the parenthetical lawsuits against inaction. The courts don't seem like the best place to handle this cost-benefit analysis but this seems to me like the least intractable path forward. Would this help create action, or would it only increase the core problem of everything being done for the sake of lawsuit protection?
Why do we need special rules for medicine?
The law has rules about what dangerous activities people are allowed to consent to, for example in the context of dangerous sports or dangerous jobs. Criminal and civil trials in this context seem to be a fairly functional system. If doctors do bad things, they can stand in the accused box in court and get charged with assault or murder, with the same standards applied as are applied to everyone else. If there need to be exceptions, they should be exceptions of the form "doctors have special permission to do X".
There is no problem you can't solve with another level of indirection. So the obvious solution: Regulate the regulators. Make the regulators prove that a regulation they make or enforce is not killing more people than it saves.
Add IRBs to the list of "reasons why the US should adopt loser pays for suits like every other developed country."
There’s some trolley logic at work here - we are okay with hundreds of theoretical people inadvertently dying but we can’t handle even a few dying from direct action. The whole situation reminds me of the medical system’s risk-compliance-legal axis who all trained at the school of “no” and subscribe to the maxim that thing-doing is what gets us in trouble, so the best thing to do is nothing.
Clinical researcher here. I wanted to comment on this suggestion:
- Let each institution run their IRB with limited federal interference. Big institutions doing dangerous studies can enforce more regulations; small institutions doing simpler ones can be more permissive. The government only has to step in when some institution seems to be failing really badly.
This is kind of already how it goes. Smaller clinical sites tend to use what we call "central IRBs", which are essentially IRBs for hire. They can pick and choose which IRB best suits their needs. These include IRBs like Advarra and WIRB. Meanwhile, most clinicians at larger academic institutions have to use what we call a "local IRB", which is the institution-specific board that everything has to go through no matter what. In some cases, they can outsource the use of a 'central' IRB, but they still have to justify that decision to their institutional IRB, which still includes a lengthy review process (and the potential the IRB says "no").
What's the difference between a central and a local IRB? At least 2x the startup time, but often longer (from 3 months to 6+ months). Partly, this is because a smaller research site can decide to switch from WIRB to Advarra if their review times are too long, so central IRBs have an incentive to not be needlessly obstructive. While a central IRB might meet weekly or sometimes even more than once a week, with local IRBs you're lucky if they meet more than once a month. Did you miss your submission deadline? Better luck next month. You were supposed to get it in 2 weeks before the board meeting.
But this isn't the end of the difference between smaller clinics and those associated with large institutions. At many academic centers, before you can submit to the IRB you have to get through the committee phase. Sometimes you're lucky and you only have one committee, or maybe you can submit to them all simultaneously. More often, you have to run the gauntlet of sequential committee reviews, with each one taking 2-5 weeks plus comments and responses. There's a committee to review the scientific benefit of the study (which the IRB will also review), one to review the safety (again, also the IRB's job), and one to review the statistics (IRB will opine here as well).
In my experience, central IRBs tend to not just have a much faster turn-around time, they also tend to ask fewer questions. Often, those questions are already answered in the protocol, demonstrating that the IRB didn't understand what they were supposed to be reviewing. I don't remember ever going back to change the protocol because of an IRB suggestion.
Maybe you could argue that local IRBs are still better for other reasons? I'm not convinced this is the case. We brought in a site through a local IRB on a liver study. It took an extra six months past when most other sites had started (including other local IRB sites - obviously a much more stringent IRB!). Did that translate to better patient safety?
Nope, the opposite happened. One of the provisions of the protocol was that patients would get periodic LFT labs done (liver function tests) to make sure there was no drug-induced liver injury. In cases of elevated LFTs, patients were supposed to come back into the site for a confirmation within 48 hours of receiving the lab results. We were very strict about this, given the nature of the experimental treatment. The treatment period went on for 2 years, so there's a concern that a long-term treatment might result in long-term damage if you're not careful.
This site, with its local IRB, enrolled a few patients onto our study. At one point, I visited the site to check on them and discovered the PI hadn't been reviewing the lab results in a timely manner. Sometimes he'd wait a month or more after a patient's results came in to assess the labs. Obviously they couldn't follow the protocol and get confirmatory LFT draws in time. Someone with a liver injury could continue accumulating damage to this vital organ without any intervention, simply because the PI wasn't paying attention to the study. I was concerned, but these studies can sometimes be complicated so I communicated the concern - and the reason it was important - to the PI. The PI agreed he'd messed up and committed to do better.
When I came back, six months later, I discovered things had gotten worse, not better. There were multiple instances of patients with elevated LFTs, including one instance of a critical lab value. NONE of the labs had been reviewed by anyone at the site since I visited last. They hadn't even pulled the reports from the lab. There was nobody at the wheel, but patients kept getting the drug so the site could keep getting paid.
Since it's not our job to report this kind of thing to the IRB, we told them to do it. We do review what they report, though, so we made sure they told the whole story to the IRB. These were major, safety-related protocol violations. They did the reporting. The PI blamed the whole fiasco on one of his low-paid research coordinators - one who hadn't actually been working on the study at the time, but the IRB didn't ask for details, so the PI could pretty much claim whatever and get away with it. The PI then said he'd let that guy go, so problem solved. The chutzpah of that excuse was that it's not the coordinator's job to review lab reports; it's the PI's job. This would be like claiming the reason you removed the wrong kidney is that you were relying on one of the nurses to do the actual resection and she did it wrong. The obvious question should have been WTF was the nurse doing operating on the patient!?! Isn't that your job? Why weren't you doing your job?
What was the IRB's response to this gross negligence that put patient safety in danger? They ACKNOWLEDGED RECEIPT of the protocol violation and that was the end of it. They didn't censure the PI, or ask further questions, or anything. If 'strict IRBs' were truly organized in the interest of patient safety, that PI would not be conducting any more research. We certainly put him on our list of investigators to NEVER use again. But the IRB ignored the whole thing.
I'm not convinced that this is a 'tradeoff' between spending a bunch of money to stall research versus saving patients' lives through more stringent review. I think that the vetocracy isn't about safety, so much as the illusion of safety.
To what extent is this a purely U.S. phenomenon? While I'm sure researchers everywhere gripe about these things, I don't typically see these utter horror stories elsewhere.
And shouldn't researchers just move their research operations if the U.S. climate (only) is crippling?
Or it could indicate that self-regulation isn't all that onerous.
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous (eg a person with a penicillin allergy in a study giving penicillin)"
Typo? Which group was (eg) more likely to give penicillin to people with a penicillin allergy?
I refer to it as "Cover your ass, not your buddies".
I ran into it just last week; we're prototyping a new machine at our farm that uses high calcium hydrated lime to break down organic matter, corrosive stuff and it's blowing back in our faces, so I wanted to know what sort of protective measures we should be using.
So I called poison control, they had no advice, but told me to call OH&S, so I did. OH&S had no immediate advice but offered me a consultation appointment. Sure.
Appointment swings around and they start asking about our overall health and safety policy. I tell them there isn't one, we don't have time for that.
They tell me that we really need one in case someone gets hurt and they try to sue.
I tell them that we don't have Worker's Compensation for our guys, so if something happens, we want them to sue us, and we want to lose, so that the injured employee can get a payout from our liability insurance.
They proceed to tell me that it's not my problem, and that we should have a CYA safety policy that no one ever reads so that if something happens, we don't lose the lawsuit.
I reiterate that we need to lose that lawsuit or a dude who loses a leg would be left with nothing. They again, say, well, that's not really your problem...
I point out their moral bankruptcy, and try to refocus the conversation on the lime dust.
They tell me they have no idea how to handle it safely, they just know how to protect the company from legal liability.
Is this just me, or does the trajectory of IRBs mirror the rise of woke? And perhaps also the timing?
On the “lawyer-adminstrator-journalist-academic-regulator axis,” don’t forget that lobbyists are mostly lawyers too. That means that they think like lawyers. So when something bad happens, their reaction is more law. When that law goes too far, their reaction is ... more law. That doesn’t make sense to non-lawyers, but it does to lawyers. Obviously, you just need to have the law make more finely grained distinctions, in order to do the good of the original law without the bad. So let’s add some exceptions and defenses and limits. So the law now goes from one question or decision to many questions or decisions. And that means you need specialists to figure it out. Hence the modern compliance department, which is an awful lot like the commissars that Soviet governments embedded in every organization -- there to make sure that you do the government’s bidding. In detail.
Right as the IRBs are radically reformed to be less paranoid and harm/liability-obsessive, we'll radically reform police departments to be more paranoid and harm/liability-obsessive.
I'm a scientist who does medical research at several top tier institutions. I only do research, and every month or so one of my projects is submitted to an IRB somewhere. I do clinical trials and observational studies, as well as a lot of health system trials (e.g., where we are randomizing doctors or hospitals, not patients). I have a few observations, some of which aren't consistent with what Scott reports here.
1. I've never had an IRB nix a study or require non-trivial modifications to a study. This may be because my colleagues and I are always thinking about consent when we design a study, or it may be because top tier institutions have more effective IRBs. These institutions receive vast amounts of funding for doing research, which may incentivize a more efficient and flexible IRB.
2. I have done some small studies on the order of Scott's questionnaire investigation. For these, and even some larger studies, we start by asking the IRB for a waiver of consent - we make the case that there are no risks, etc., and so no consent is needed. We have always received the waiver. Searching PubMed turns up many such trials - here's a patient randomized trial of antibiotics where the IRB waived the requirement for patient consent: https://pubmed.ncbi.nlm.nih.gov/36898748/ I am wondering if the author discusses such studies where IRBs waive patient consent.
3. There are people working on the problem of how terrible patient consent forms can be. There are guidelines, standards, even measures. And of course research into what sort of patient consent form is maximally useful to patients (which is determined by asking patients). I helped develop a measure of informed consent for elective surgery (not the same thing as a trial, but same problem with consent forms) that is being considered for use in determining payment to providers.
4. Every year or so I have to take a test to be/stay certified for doing human subjects research. Interestingly, all the materials and questions indicate that the idea of patient consent emerged from the Nuremberg Trials and what was discovered there about the malfeasance of Nazi scientists. I'm surprised to hear the (more plausible) sequence of events Scott reports from the book.
5. Technology, especially internet + smartphones, is beginning to change the underlying paradigm of how some research is done. There are organizations which enroll what are essentially 'subscribers' who are connected via app and who can elect to participate in what is called 'distributed' research. Maybe you have diabetes, so you sign up; you get all the latest tips on managing diabetes, and if someone wants to do a study of a new diabetes drug you get an alert with an option to participate. There is still informed consent, but it is standardized and simplified, and all your data are ready and waiting to be uploaded when you agree. Obviously, there are some concerns here about patient data, but there are many people who *want* to be in trials, and this supports those people. These kinds of registries are in a sense standardizing the entire process, which will make it easier/harder for IRBs.
While this book sounds very interesting, and like one I will read, it also maybe obscures the vast number of studies that are greenlighted every day without any real IRB objections or concerns.
I guess I equate "being governed" with having a government. I certainly don't think everything can be achieved by government. I don't like the *idea* of being governed, though in practice I mostly have little problem with living with the law. There are not many illegal things I want to do, and those I really wanted to do, such as smoking weed back when it was illegal, I have found easy to get away with. I think our government did a lousy job with covid, but I personally was not greatly inconvenienced -- were you? I read up on the science, and navigated the info about risks and personal safety successfully -- still have not had covid, despite going in to work through the entire pandemic. So overall, I have had an easy time with being governed. But whenever I read something like Scott's post here, or really anything about how we can do a better job of organizing life so that there is more fairness and less misery I am filled with rage and hopelessness. Even Scott's article made me have fantasies of slapping the faces of IRB members. Consequently I am not well-read or well-informed about government, the constitution, politics, or any other related matters. Regarding this topic I am resigned to being part of the problem not part of the solution.
"How would you feel if your doctor suggested - not as part of a research study - that he pick the treatment you get by flipping a coin" if I knew that the doctor really genuinely didnt know which option were better then i would prefer for him to flip a coin rather than dither
> Also, I find it hard to imagine a dean would ever do this
Plausibly, the IRB only has incentives pointing towards caution, but the Dean has incentives pointing in both directions. Having a successful and famous study or invention come out of their institution brings fame and clout and investment, and sometimes direct monetary gain depending on how IP rights are handled in the researcher's contract with the institution.
If you want to join the army in Canada, you have to say you've never been sick a day in your life, and you've never had a single injury. You say you're allergic to grass, you broke your leg in high school, sometimes you feel really sad... any of these will disqualify you.
I don't know how it got to be that way, it doesn't make sense, the army isn't in a position to be especially picky... somehow I think it's related to whatever causes this IRB situation though.
Hi Scott, I am curious if your questions in the bipolar study included anything that might be considered “questions on self-harm.” These sorts of questions might raise the risk assessment from low to moderate and require that you include precursor warnings about the risks of your questionnaire. I’m genuinely trying to make a best-case argument for the hindrances you faced, so anything that you might see as a potential “red flag” to your reviewers would be helpful. Thanks!
Note: Although I am but an entomologist and have almost no regulatory bodies for my actions, my fiancé often designs the interfaces that researchers use for submitting IRB forms at a university. We’re trying to speculate what happened in your case, so anything you think might be important would be tremendously helpful. Thank you!!!
My big annoyance regarding this area as someone who was close friends with a medical ethics professor at university (and still is 20 years later), is just the incredibly low quality of reasoning among the “leading lights”. You have people like Leon Kass, who was on the President’s Bioethics Advisory Council or whatever, who by their writings didn’t appear to be able to think themselves out of a wet paper bag.
Now, I doubt these grey eminences were actually that stupid, but they clearly had political and religious commitments that were preventing them from thinking remotely clearly about the topics they were put in charge of. So disappointing. I remember being told this is the top contemporary thinking in this area and just finding the arguments hot garbage.
Thank you, Scott, for this careful and thought-provoking essay.
Since so many people wonder, the study by Lynn Epstein and Louis Lasagna showed that people who read the short consent form were better both at comprehending the experiment and at realizing that the study drug might be dangerous to them.
Much of this fascinating conversation on ACX is on the theoretical side, and there’s a reason for that. IRBs are ever on the lookout for proposed research that would be unethical—that is why they exist. But there is no national database of proposed experiments to show how many were turned down because they would be abusive. In fact, I know of no individual IRB that even attempts to keep track of this. There are IRBs that are proud they turned down this or that specific protocol, but those decisions are made in private, so neither other IRBs nor the public can ever see if they were right. Some IRBs pride themselves on improving the science of the protocols they review, but I know of no IRB that has ever permitted outside review to see if its suggestions actually helped. Ditto for a dozen other aspects of IRB review that could be measured, but are not. It’s a largely data-free zone.
I got an interesting email yesterday from a friend who read my book. She is part of a major enterprise that helps develop new gene therapies. From her point of view, IRBs aren’t really a problem at all. Her enterprise has standard ways of doing business that the IRBs they work with accept. She sees this work with and around big pharma as providing the relatively predictable breakthroughs that will lead to major life-enhancing treatments down the road. This is a world of big money and Big Science, and it’s all about the billions. A new drug costs $2.6 billion to develop; the FDA employs 17,000 people and has a budget of $3.3 billion; the companies involved measure their value and profits in the billions.
The scientists I am speaking for in "From Oversight to Overkill" are lucky when they can cobble together a budget in the millions, and much of the work they do, like Scott’s frustrating project, is entirely unfunded. They are dealing with OHRP, an agency with a budget of $9 million that employs 30 people. Unlike big pharma with its standardized routines, they are trying new approaches that raise new regulatory questions. And because OHRP operates on such a small scale, its actions are rarely newsworthy even when they make no sense at all. This includes decisions that suppress the little unfunded projects that people just starting out attempt.
Of course, the smaller budgets of the scientists in my book don’t mean that their findings will be trivial. It has always been true that when myriad scientists work to better understand human health and disease, each in their own way, the vast majority will make, at most, tiny steps, and a very few will be on the track of something transformative. A system that makes their work more difficult means that we, the public who struggle with disease and death in our daily lives, are the ones who suffer.
Do I understand correctly that the ISIS-2 horror story was well before 1998, and still within the supposed "golden age"?
Between the horrors of NIMBYism, the virtuous corpse mountain left behind by IRBs, and the laughable insanity of NEPA and CEQA, perhaps the purest expression of the underlying concept is the Precautionary Principle. So far as I can tell, someone made the following chart in all earnestness. It is a fully-generic excuse for inaction.
When you think your options range between a worst-case of "Life continues" for inaction and a worst-case of "Extreme Catastrophe" for action, well, here we are. Too bad life literally didn't continue for the people getting subpar medical treatment.
The comical inefficiency of IRBs doesn't seem to be a controversial point. Why didn't you ignore it and simply conduct and publish your survey research anonymously? Maybe you judged that your study wasn't important enough to overcome your risk aversion. Why didn't the authors of ISIS-2 conduct and publish their study anonymously, if the interventions as such were not against regulation?
I have my own theory of everything: the median age is 38. Perhaps it's unfair to call Gary Ellis a coward who's responsible for thousands of deaths and unquantifiable unnecessary suffering. Perhaps he's just a regular old guy in a society of increasingly older guys who lack the knees or back to stand up for anything.
I can't wait for the rationalistic explanations in ten years of why things continue to go increasingly wrong for a country in which the average resident is 42 years old and obese. Maybe you think we only need to find the right balance of incentives in a carefully engineered system; if so, you're in good company. I believe it was Kant who famously said - about government, but surely we could apply his wisdom to institutions of science and medicine -
"The problem of organizing a state, however hard it may seem, can be solved even for a race of exhausted, sclerotic, amorphous blobs"
Early on, you said, “The public went along, placated by the breakneck pace of medical advances and a sense that we were all in it together.”
That last part — the sense that we were all in it together — speaks volumes. To my mind its loss explains most of what has gone wrong with the world today. But how did we lose that?
We are criminally ignoring a "more connected populace" and the IQ needed to process that data flow. It's no wonder people resort to regression, stasis, or revolution as copes. It's not like IQ or coping mechanisms were better in the past. We need to wrestle with restrictions to transparency, and prioritize defaults, or we will be left behind by those who started the game with fewer rules.
Reminds me of https://randsinrepose.com/archives/the-worry-police/
>Greg Koski said that “a complete redesign of the approach, a disruptive transformation, is necessary and long overdue", which becomes more impressive if you know that Dr. Koski is the former head of the OHRP, ie the leading IRB administrator in the country.
I've heard of many similar cases of the former head of <org> calling for major reforms, but if they didn't have the will or the political capital to do it while they were there, it seems unlikely the next guy will either (even if they agree).
To the extent that the problem is that Hospitals and their IRBs are overly incentivized to avoid harm more so than they are incentivized to cause good, might this be a good opportunity for something like impact certificates?
Like, if there were a pool of 500 million dollars to be given each year to whichever set of hospitals did the most good with their studies (more money given to those who did more beneficial studies), would that put some pressure in the other direction?
My brother, who has some experience in this area, had this to say: “For the most part I feel you just have to know how to build relationships with your irb people and what words will trigger them. I never submit a protocol without first talking with my irb person, and thus usually don’t hit these types of bottlenecks. The assumption should be that unless you’re super clear in your explanation, they’ll be risk averse and put forward roadblocks. Because that’s their job.”
So from his telling, there’s a certain amount of glad-handing needed to get research past IRBs. This is probably not great for scientific research (it doesn’t seem fair that because you aren’t up for getting coffee with your IRB person you can’t do your research), but it does mean that scientists aren’t quite as helpless as the book presents them.
> Hans Jonas: “progress is an optional goal.”
I think this is the most morally deranged thing I have read a philosopher stating in a long time.
In my mind, technological progress is often the prerequisite for social progress. From a modern perspective, most iron age civilisations look rather terrible. Slavery, war, starvation. Good luck finding a society at that tech level that would agree to avoid "the violation of the rights of even the tiniest minority, because these undermine the moral basis on which society's existence".
If you don't have the tech to prevent frequent deaths during childbirth, then in addition to death being bad in itself, you will end up with a population in which a significant number of males can't find partners. The traditional solution for getting rid of excess males is warfare.
If you don't have contraception tech, your population will be reduced by disease and starvation instead.
If your farming tech sucks, most of your population will spend their lives doing back-breaking labor in the fields and have their surplus extracted under duress to support a tiny sliver of elite.
If running a household is a full-time job at your tech level, good luck achieving gender equality.
That is not to say that all technological progress is obviously good. Sometimes, it might not be worth the cost in alignment shift (like the freezing water experiments in Dachau), and sometimes we might judge that a particular tech will have overwhelmingly negative consequences (like figuring out the best way random citizens can produce neurotoxins in their kitchen).
And of course, you can always plead that while past progress was absolutely necessary (lest you be called an apologist for slavery, war and starvation), the present tech level (which allows for ideas such as human rights) is absolutely sufficient and any future tech is strictly optional. Of course, statistically speaking, it would be extremely unlikely that you just happen to live at exactly that point.
"Increasingly many areas of modern American life are like this."
Yep, America is frustratingly anti-tree-climbing. I am a happier person, in better physical shape, when I can climb trees. That was fine at home, where I had a backyard with trees, but here American bureaucratic idiocy gets in the way. You see, if I climb a tree on someone else's property, fall out, and get hurt, I could sue them. As a result, city parks make sure to cut off any branches that would make a tree climbable, lest someone hurt themselves on it.
Harsh as it sounds, we need to hold people responsible for their own mistakes. Only then can we be free to take what risks we judge worthwhile.
Administrators, bureaucrats, and managers are “minimaxers”. They seek to minimize the maximum bad outcome.
... and the ethics review panel continued to demand pen not pencil.
Meanwhile, at the Wuhan Institute of Virology ...