What really bothers me about IRBs is that there doesn't seem to be any attempt to check whether they are actually effective, nor any willingness to take the moral harms of not doing experiments seriously. Letting people die because they aren't getting a better treatment is deeply immoral, but it isn't taken seriously.
The justifications people like to bring up, like Tuskegee, were awful, but it's just not clear that IRBs would have prevented them. Often many people knew and just shrugged. I'm sure you could have gotten an IRB in Nazi Germany to approve Mengele's experiments.
I'd much prefer just having a committee of other tenured professors at the institution give a brief review, with all the detailed forms and deliberations saved for the few studies which raise serious ethical questions.
You probably don't want a committee that is too socially close to the people they are evaluating. If it's a bunch of friends evaluating each other, there's going to be a lot of unconscious social pressure not to be the one that calls out your friend in front of their other friends. You probably want it to be something more like refereeing at a journal - which still involves some moderate social closeness, but even the one-way anonymity that exists in the sciences prevents it from becoming *too* clubby.
I don't know about that. It depends a lot on how you set the system up. I tend to think that the best system here relies on punishment (at least social if not formal) for doing bad things.
The guy whose experiment is approved by a faceless bureaucracy has no incentive to avoid making it look bad by misusing whatever latitude he's given. That leads to the over-formalization of the process, where everything has to be documented to the last detail beforehand and you can't just approve some swab study on minimal paperwork on the assumption that the experimenter will act reasonably.
Basically, I see the appropriate role of the IRB here to be to prevent wackos from going off on some weird personal crusade or doing stupid shit. The department and their fellows have their own careers and experiments on the line if this guy makes the department look bad or does something unethical that gets their experiments shut down.
Basically, I tend to think that formalizing ethics tends to make everyone take their own ethical responsibilities less seriously. So I want the system that is most like the one we have for doctors (little oversight up front, but punishment for bad behavior) that still deals with the problem of weeding out the wackjobs and people with deeply different moral sensibilities.
Like, I think a board of my peers would be best positioned to realize that my ethical judgments (if I did experimental work) might look very different from theirs or society's in certain areas, to be more inquisitive about how I would deal with certain issues, and to be less worried about my more conventional colleague.
I guess I'm thinking that the sort of "arm's length" peers that do peer-review for journals (and tenure and promotion reviews) are probably better than the immediate peers of the department (which is probably why that's the system that's used for publication and tenure, which are these other significant decisions that you want to get right in both directions).
I think some European and Australian universities have dissertations examined by someone outside the department but inside the discipline for this same sort of reason. American universities allow departments to do the evaluation in-house (partly because grad students aren't actually your social peers) but still require an outside member from another department to be present.
It's universal practice to have an external examiner in the UK, and I think in most of Europe. We think Americans are weird for not doing it. How do you maintain any level of parity across universities without it?
I don’t think there’s any assumption of parity across universities. People assume a PhD from a top program in a discipline means something different than a PhD from a third tier program in that discipline. Departments are only incentivized by reputation to keep their standards high.
I guess my question is: what reason is there to think a system of review produces better results?
In the case of granting PhDs there is concern about a kind of self-dealing, where a department might be improperly incentivized when it checks its own work.
In the case of ethical decisions, someone ultimately has to decide if it's ethical or not. The self-dealing concern is that someone might be tempted to cut ethical corners to advance their research at the expense of those they are conducting research on (balanced against this is the fact that a remote IRB, dealing with this at a remove, may be less concerned with the moral imperative to find cures, or even with the real interests of those the research is conducted on).
I agree that for my system to work you also need a robust process for dealing with complaints after the fact and imposing punishment on those found to have acted unethically, and even on those who approved it, and *that* board needs to be divorced from the sponsoring institutions.
So I guess my real claim here is that post-research adjudication of punishment is the right mechanism, and the only things pre-experiment review should be offering are a non-adversarial "did you think of this?" and a check to make sure no one is just going rogue.
At bottom, government is always in charge of everything, but that still describes a huge space. English Common Law, as I understand it, is *entirely* generated by judicial precedent with little or no reference to original statutes or regulations because it largely predates any such notions.
Mmm, not quite. You've at least overlooked the critical role of the jury, even in criminal cases. And in tort cases you've overlooked the fact that in principle a civil action is a dispute between two private parties, which the government agrees to adjudicate. It's not the same as DC agency staffer John Doe saying "it seems like a good idea to me if we forbid X and Y, because I can imagine evil effect Z and Q." Instead, it's Citizen Kane suing Citizen Rosebud for specific harm caused by specific actions, with the outcome to be determined by a jury of their peers -- more citizens. The role of government is mostly to ensure that the dispute between Kane and Rosebud gets settled fairly, nonviolently, and in agreement with centuries of traditional rights.
No, it's obviously a good idea - without government power over medicine, you'd have vast amounts of "medicine" offered for sale that didn't actually have any evidence that it did what it claimed, and a lot of people would fall for it.
Just because in this particular area Scott makes a plausible case that regulation isn't being done well (although I'd want to hear the other side of the argument before trusting this one) doesn't mean we can do without it altogether.
Oh, and please don't come with the "what if they said something that's not true?" argument. Protection against fraud is one of the legitimate functions of government.
"you'd have vast amounts of "medicine" offered for sale that didn't actually have any evidence that it did what it claimed, and a lot of people would fall for it."
I can't tell if you're being sarcastic or are ignoring the vast array of pseudo-science and outright woo that is already being sold and consumed.
You might want to think out the difference between regulation pre- and post-facto. When we speak of "government regulation" we usually mean laws that act *before* any particular medical decision or act takes place, with the goal of forestalling a bad result by simply forbidding the act or decision that leads to it.
Contrariwise, we have a very old system of law in place that punishes bad decisions and actions *after* they have actually caused harm -- that's what malpractice lawsuits are all about. And of course, the existence of the possibility of a malpractice claim exerts very strong effects on the decisions and actions that medical providers take.
Both systems have advantages and disadvantages. Pre-act regulation obviously has the theoretical advantage in that it need not wait until at least one person has suffered from a bad decision, so that the bad decision maker can be shot on his own quarterdeck pour encourager les autres. It can also be cheaper, because it can tell people "don't even start down this road because there's a block ahead," and it can make the overall regulatory regime more predictable, and assist planning, because we don't need to wait to see what a jury says about this or that type of decision.
But of course, on the other hand, pre-regulation is far less flexible, and cannot adapt well, or at all, to special circumstances (which are certain to arise in a population of >300 million), or to circumstances unforeseen at the time the law was written, e.g. because of technology advances. There's also no human judgment involved in its enforcement -- the role of the jury is gone -- so it's a far blunter instrument, and can almost be guaranteed to produce Unexpected Side Effects, some of which may be worse than the harm hypothetically prevented in the first place.
Leaving "regulation" (meaning social constraint) up to tort law has the advantages of much greater flexibility and adaptability, of course, because a group of human beings gets to examine the specific situation and the specific facts in great detail, and that group will already know the factual outcome -- it need not guess about that the way Congressional staffers drafting legislation must -- before rendering judgment. It is also less chilling of experimentation and innovation, because as long as it all works out well, you're good. The judgment is always centered on outcome -- what actually in fact happened -- and not on the hypotheses of whoever wrote the legislation about what might happen.
But it has the disadvantages of being slow, requiring at least one victim, being much less predictable, which handicaps planning and may make people unusually conservative (because they have to plan for the worst instead of the typical case), and probably being more expensive, since we need to pay a bunch of fancy-pants lawyers and absorb some very expensive judicial system time.
In the US (and UK), unlike many other countries, there is a common law principle that one is not generally legally obliged to help anyone else in imminent danger or distress, e.g. rescuing someone from drowning, even at zero risk to one's self, such as holding out a pole from a safe position. (There are exceptions where someone has a duty of care or a life-saving occupation, such as a lifeguard.)
I guess this principle partly stems from that of not being compelled to risk liability if one's aid is ineffective or even harmful. A classic example is clearing snow from in front of one's dwelling: Do nothing, and you can't be blamed for any mishaps, but clear the snow and if someone then slips on the area they can sue you for leaving an icy patch! In a way, indirectly, another example is releasing a new drug for sale.
So the IRB's attitude seems somewhat analogous to that series of moral conundrums involving people tied to railway tracks, which asks whether you should reroute the train. My instinct is to say never divert the train, because I didn't tie the casualties to the track and it is regrettable that they will die but just their bad luck. But that starts to sound insupportable if the train is hurtling towards a dozen teenagers tied to the track and you could divert it to kill only one 90 year old terminal cancer sufferer, and the latter scenario is analogous to the situation our host is describing!
Descriptively I agree with you as to how the IRB tends to behave (if we didn't cause it through positive action, it's not our problem).
However, the law isn't meant to track moral responsibility in the way that the IRB is supposed to. The absence of a duty to rescue reflects pragmatic concerns, such as the fact that the courts could be clogged with all sorts of cases suing whoever had deep pockets and saw the shit go down. Indeed, it would risk creating an attractive nuisance where people would deliberately try to get themselves injured (or appear to be) in front of people with deep pockets. Moreover, it creates a secondary problem, as legal necessity is often a defense to liability.
I'm not saying these couldn't be dealt with, but the point is that even in situations where it's clear there is a moral duty, there are good reasons for the legal process not to impose liability. Hell, half of free speech law is the right to be a dick and needlessly hurt people.
The IRB was created to do exactly the opposite. It's supposed to make sure that scientific studies are behaving morally even if (absent the hook of federal funding) the law couldn't regulate the behavior at all (asking people questions and talking about what they say in response is clearly protected by the First Amendment).
Yes, many people do have a moral intuition that action is different from inaction. But if you make the trolley problem bad enough (it's going to kill the whole city) or the harms small enough (someone will lose an arm), they will say that you should switch tracks, and IRBs should at the very least try to meet that kind of moral standard.
> suing whoever had deep pockets who saw the shit go down
In Austria, not helping is a criminal offense, so you may have to pay the state or do prison time, but you can't get sued. This fixes the deep pockets issue.
It's an interesting idea to think of this as a trolley problem. For the researcher, doing the study is analogous to pulling the lever. But for the IRB, isn't preventing the researcher from doing the study pulling the lever? That is, the lever that keeps the researcher from pulling the lever.
Maybe not, because the process is set up so that the IRB has to explicitly approve before the study can go ahead. But what if the study were presumptively approved unless the IRB actively vetoed it? Maybe board members with an intuition that the action/omission distinction is important would then behave differently.
Status Quo Bias. The effects of the status quo are already written off, but the effects of any change are charged full price, so a horrific status quo winds up looking better than even a major positive change with a few small tradeoffs. You see this in handwringing about the energy transition...a couple hundred million tons of copper mining over the next 30 years is a looming environmental catastrophe while billions of tons of fossil fuels being mined right now and then lit on fire every year is business as usual.
I... well, I had a whole comment typed out, then I remembered that this would be more appropriate for the culture war OTs. Still, like... surely you know this is a highly contested issue.
Given the popularity of guns in various social climates in the United States, I suspect that attempting to enforce UK-style gun control laws would have similar results to the attempt to enforce alcohol prohibition in the 1920s. :(
Nah. We just prefer that generic adult free men can possess deadly weapons, because we don't put all our faith in the state to defend our lives or liberty. If some weird shit goes down, we prefer to be a nasty spiky thing to try to swallow, like a porcupine. We're well aware that the cost of this is that some sick shitheads, like in Nashville or Uvalde, might get their hands on these tools and do terrible things. We're no more moved by this a priori than by an argument that cars should not be permitted in private hands because idiots sometimes kill innocents with them.
Which is not to deny that a more timid culture, or one where the peasants actually like being taken care of by the aristocracy, might think some other way, and by all means vaya con Dios. But that's not really us. (We drive fast, too.)
‘Governor Ronald Reagan, who was coincidentally present on the capitol lawn when the protesters arrived, later commented that he saw "no reason why on the street today a citizen should be carrying loaded weapons" and that guns were a "ridiculous way to solve problems that have to be solved among people of good will.”’
Seems like the Michigan and Coeur d'Alene militias are cool though.
I will give Ronald Reagan credit for a reasoned approach to gun control:
“In 1991, Reagan supported the Brady Handgun Violence Prevention Act, named for his press secretary shot during the 1981 attempt on Reagan's life. That bill passed in 1993, mandating federal background checks and a five-day waiting period.”
According to the Constitution, everyone in the US has the absolute right to own a muzzle loading flintlock, if they are members of a militia! Isn't that right?
Seriously, law-abiding people in the UK can apply for a gun license, and many rural people such as farmers own shotguns and rifles. But I don't think anyone is allowed machine guns, semi-automatic weapons, or firearms with silencers, except maybe on a licensed firing range where those weapons would have to be stored.
I mean Carl, surely the citizenry could still be pretty prickly with only manually loaded weapons. Also, if push came to shove, in the event of a general insurrection citizens could still, by common consent, retrieve their more deadly weapons from where these were kept in shooting clubs and the like.
Well, I don't agree with your interpretation of the Constitution in the first paragraph, and neither does the majority of US voters nor the Supreme Court, so I'm comfortable just leaving that creative meme aside as entertaining and imaginative, but irrelevant.
Have you noticed that violent crime and murder have become absent in the UK, then? Or does it...persist, somehow? You'll also note that Switzerland has one of the highest rates of gun ownership in the world, but quite a low murder rate. It's almost as if...what matters is the person, not the tool, and maybe our focus should be there. A much harder target, to be sure, but treating cancer with morphine because the latter takes away the pain is not an intelligent focus.
Consequently, I have no use for tool control, any more than I think the government should be telling the citizens what type of car they should buy (other than "safe"), or what type of dishwasher, what type of birth control they can use/not use, who they can bone and how, whether they can/can't get certain kinds of medical treatment, how they rear their kids, what school they can/can't go to, what job they can/can't have, and about infinity things besides (quite a number of which can also have serious, even deadly consequences, if done wrong). My interest in being a feudal serf whose care lies in the hands of a squire to whom I should bend the knee is zero. My ancestors lived like that, always some king/tsar/shah/emperor/consul/baron/bishop/Better Than You type to tell them what to do, and that's why they came here, to get away from all that.
On the other hand, I'm quite comfortable with people-who-get-to-own-tools control. I'm quite happy to have the FAA license pilots with strict regular exams, so airplanes don't fall out of the sky onto my head. I'm 100% on board with drivers' licenses, and only wish the exam were (1) much more rigorous, and (2) required far more often, and consequently if the government were to say owning a dangerous tool like a rifle is absolutely your right, but you will have to regularly pass this rigorous exam in how to operate one safely and what the rules on its use might be -- why, I'd be fine with that.
Whether that would pass Constitutional muster I am far less sure, as the unfortunate fact is there is a long history of government abusing competency tests to deny civil rights, e.g. poll tests, and the courts are rightly suspicious that the power to certify and license is the power to deny, and invites abuse. But there's some precedent, with our existing regime of waiting periods and licensing exams for CCWs[1].
And of course government could certainly demonstrate a lot more competence in using the tools it already has to control access to dangerous tools:
[1] What might be still more relevant is that I imagine the great majority of gun owners, i.e. those who are responsible and competent, would also be on board, which means big majorities for action might be readily achievable. My more cynical nature says that's *why* it doesn't happen -- because effective action supported by big majorities would launder a bloody shirt that politicians can otherwise wave about during campaigns, when they have no idea how to solve more quotidian problems like keeping inflation under control, balancing the budget, ensuring the bridges aren't falling down, or 20% of the children being unable to do basic math by 12th grade.
I think this is a general problem with all kinds of regulation. Once a regulation exists, there's a bureaucracy invested in enforcing it (sometimes government, but sometimes within individual companies or other institutions), and it's very hard to get rid of. And then nobody really ever checks to see whether the regulation is even solving the problem it exists to solve, let alone checking to see whether the net benefits outweigh the net costs.
IRBs could have passed Nazi experiments? What? How can a prisoner give informed consent?
"The few studies which raise serious ethical questions " - all studies on humans raise serious ethical issues. People are not widgets. Human dignity and autonomy prevents a solely utilitarian frame.
The same way China and Russia guarantee tons of rights on paper yet their courts allow the government to throw you in prison for whatever the administration wants.
Regarding which studies raise serious ethical questions: I don't think the simple studies where you pay undergrads some money to compare how loud tones are, or to play some game for a bit of cash you provide, raise many serious ethical questions.
Yes, in some sense, all human interactions have an ethical component but the idea that making it part of a study somehow necessarily raises the ethical stakes seems mistaken.
If you do studies on humans, someone independent (like an ombudsman) should take a look at it.
I'd bet that more studies are passed than rejected. That bit of information doesn't seem to have been reported in the review or in the mountains of comments.
Yes, I suppose sham IRBs could have existed or do exist in China or North Korea. I'm not sure, though, that commies, fascists or totalitarians have ever felt the need to pretend that IRBs (btw I've also called these human subjects committees) are necessary at all.
what we need is to perform a randomized controlled trial of IRBs. run 200 experiments; half of them have to get approved by a real IRB, half have a "placebo" IRB rubber stamping it. see how the results differ! once we secure research funding, all we have to do is get approval fr-
What is the end point? Whether the researchers are ethical? Or is it did what they found have some utilitarian benefit?
As to the last, experiments that don't find things are as important as those that do for science. As to the first, it is unethical to allow unethical research. The standard of care includes the obligation to be ethical.
If it was unambiguously ethical it would have been approved.
There were issues. My reading of the story is that he gave up instead of doing what was required. And he admitted in the original August 29, 2017 essay that there were some issues justly raised by the IRB.
I am not sure how you imagine that this works; you seem to think that if Scott or any researcher claims it to be appropriately ethical then it must be so. It's as if you think the IRB has the burden of proof to show it's unethical.
Why that presumption? If you are going to do an experiment involving humans the burden is on the researcher to show that it is ethical.
Since we are dealing with humans, I don't think the standard can be preponderance. Beyond a reasonable doubt seems the right standard to me. We want researchers to have the highest ethical standards, not the lowest. Unless theoretical utilitarian benefit is all that is important to you. In other words, if we get a lot of beneficial research, then it's worth it to have a few Nazis or Tuskegees. Is that your idea?
IRBs do not (and aren't intended to) ensure ethical research. If all research suddenly stopped they would not do a thing about it. They exist to prevent unethical research.
But I'm pretty sure that no members of human subjects committees have ever advocated halting all research as a means to prevent unethical research. They are firmly in the pro-research camp.
Better yet, ask the IRB to approve a meta-analysis of all meta-analyses that don't include themselves... ask the IRB whether your new meta-analysis should include itself.
People have joked about applying NEPA review to AI capabilities research, but I wonder if some kind of IRB model might have legs (as part of a larger package of capabilities-slowing policy.) It’s embedded in research bureaucracies, we sort of know how to subject institutions to it, and so on.
I can think of seven obvious reasons this wouldn’t work, but at this point I’m getting doomery enough that I feel like we may just have to throw every snowball we have at the train on the off chance one has stopping power.
It might make more sense to treat AI the way we do nuclear power - demand proof of its safety using engineering calculations, redundant safety systems, and defense in depth. The IRB method seems to focus more on listing possible consequences and getting consent from the people who may be affected, both of which seem nearly impossible for sufficiently advanced AI systems.
But both of these existing technologies are examples of what Ezra Klein talked about in a recent podcast:
"There's a lot of places in the economy where what the regulators say is that you cannot release this unless you can prove to us it is safe. Not that I have to prove to you that you can make it safe for me. Like if you want to release GPT-5 and GPT-7 and Clod plus plus plus and whatever, you need to put these things in where we can verify that it is a safe thing to release. And doing that would slow all this down, it would be hard. There are parts of it that may not be possible. I don't know what level of interpretability is truly even possible here. But I think that is the kind of thing where I wanna say, I'm not trying to slow this down, I'm trying to improve it."
As far as fundamental capabilities expansion goes, like large training runs, any caltrop on the road will do - just slow the damn thing down.
Applications are lower stakes and I have less of an opinion on that.
This is of course based on my own opinions of what the likely costs and benefits are, and would be screened out by a hypothetical infinitely wise IRB - but that applies to just about all policy proposals by anybody. In order to not think capabilities research should be slowed down, you'd have to convince me that extinction-level risks are an OOM below 1% (the crux for this probably being a survey of mainstream AI researchers, with the frequently cited 10% study probably having some flaws but putting it way above anything we should be comfortable with).
You'd want to slow down applications too. Anything to make them commercially useless, so that there's less incentive to run the research in the first place.
The problem is that we apply this kind of caltrop to most everything in our society, making the cost of building things, inventing stuff, trying new ideas, discovering new things, etc., much higher. From nuclear plants to vaccines, we just add a zero or two to the cost of making them, and then wonder why progress in those areas is so frustratingly slow.
Ezra Klein said something smart about regulating AI's in a recent podcast: Slowing them down via this 6 month moratorium or similar is just dumb. Restarting should be tied to some criterion of increased safety, not to the clock. I do not understand the situation well enough to make good suggestions about what the criterion should be, but I'm sure there exist people who could. Maybe something to do with understanding better what is going on under the hood -- by what process does GPT4 arrive at its outputs? And I even have a suggestion about a way to go about it: use some of the experimental approaches of cognitive psychology, which has been able to figure out a fair amount about *human* output without investigating the brain.
And one more thing about the 6 month moratorium: I'm thinking that *obviously* what the companies are going to do is spend the 6 months working on further advancing AI. They will just refrain from doing certain kinds of work that are very obviously geared toward producing GPT-5 or whatever. Maybe they won't even refrain from doing that. Is anyone going to be checking to see how staff at these places are actually spending their time? Is my presumption wrong?
> I do not understand the situation well enough to make good suggestions about what the criterion should be, but I'm sure there exist people who could.
As an AI safety researcher (with minor interest in governance), I believe that no, there currently do not exist any people who have *good* answers to this. But at this point, *better than nothing* might be, well, better than business-as-usual and better than fixed period.
Well, Vojta, you caught me on a day when I've been thinking a lot about this stuff and have time to write it out. So here are my thoughts about the kinds of things that should be criteria. All of them are ideas for things that developers should produce before they develop AI further. They're not quick and easy to do, either, so they might actually keep the developers busy, so they can't just quickly toss off a list of bullshit criteria and then return to working, a bit more discreetly than before, on the next version of their product.
Come up with a list of things that, if they happen in some setting employing AI, should be responded to the way an airliner crash is: A team of investigators from somewhere other than the company that built the AI figures out what happened, and, depending on what they find, they will have the power to insist that certain steps be taken to prevent another similar incident before AI of this kind is used any more. There should also be some plans for problems of lesser magnitude — sort of like what would happen in the airline industry if there was a near miss.
Figure out more about what goes on in that black box — how the AI “knows” the stuff it gives in response to a prompt. And don’t tell me that can’t be done! Even I can think of ways to learn some more, and I’m a psychologist and don’t even know how to code in Python. OK, 2 ideas:

(1) I read somewhere that some AI, I don’t know which, can look at an image of a retina and tell what gender the subject was. But it can’t explain how it knows. Actually, in this case the answer to “how does it know” is probably not very useful, but let’s just use it as an example. Can’t you give the AI the task of systematically altering images of retinas — like reducing the resolution of different parts of the image — then stating the subject’s gender and checking to see whether its accuracy has decreased? After a while the AI should at the least be able to tell you which features of the retina image are crucial and which are utterly irrelevant. So let’s say that blotting out different parts of the images makes clear that for judging differences in gender all that matters is 2 small areas of the retina, but the AI still can’t tell you how it uses those areas to determine gender. So now you could have the AI play around with, say, color in those areas, throwing away different kinds of color information. Keep doing that sort of thing & eventually it will be evident how the AI “knows” the subject’s gender. OK, so it seems like this example gives a model for a way to use AI itself to find information about how it knows things — things much more interesting than how it recognizes gender from retinas.

(2) Use some of the methods of cognitive psychology or psycholinguistics: Human subjects can’t tell you anything about how they know what verb tense to use, how they know the way to downtown, etc., and we don’t know enough about the brain to get answers from observing it. But you can figure out quite a lot about things like this with clever experiments. In fact my idea (1) was an experiment of that kind — throw away different bits of the data, as a way of finding out which parts are crucial to a task. Here’s an article about using cognitive psychology methods on GPT-3: https://pubmed.ncbi.nlm.nih.gov/36730192/
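For anyone who does code, that "blot out different parts and see if accuracy drops" idea is essentially occlusion testing. Here's a minimal sketch of it, assuming a hypothetical black-box classifier `predict_gender(image)` (not any particular lab's real API) and retina scans stored as a numpy array:

```python
import numpy as np

def occlusion_importance(images, labels, predict_gender, patch=16):
    """Blank out one square region at a time and measure how much the
    classifier's accuracy drops -- a crude map of which parts of the
    image the model actually relies on."""
    n, h, w = images.shape  # images: (N, H, W) grayscale retina scans
    baseline = np.mean([predict_gender(img) == y for img, y in zip(images, labels)])
    importance = np.zeros((h // patch, w // patch))
    for gi in range(h // patch):
        for gj in range(w // patch):
            occluded = images.copy()
            # blot out one region in every image
            occluded[:, gi*patch:(gi+1)*patch, gj*patch:(gj+1)*patch] = 0
            acc = np.mean([predict_gender(img) == y for img, y in zip(occluded, labels)])
            importance[gi, gj] = baseline - acc  # big drop => the model depends on this region
    return importance
```

The regions where blanking hurts accuracy most are, presumably, the ones the model is "looking at" -- which is the kind of answer the paragraph above is after, without the model having to explain itself.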
Call in a bunch of hackers, discuss ways AIs could accidentally go awry and cause substantial damage, and ways AI could do serious damage in the hands of bad actors. Then put the hackers into a sandbox with the AI and see how much of that stuff they can bring about. Make the findings public. Also state which holes the developers are going to plug before releasing the AI. Re-test with hackers after plugging.
Work on problem of how to shut down an AI that is running amok some way. For instance, say one that manages a hospital’s drug inventory starts erasing the records of daily counts of available drugs over the last 10 years — or ordering truckloads of the wrong stuff — or adding 2 zeros to every payment, so it’s paying 100x as much as it should for new meds. And quite likely the AI is in the cloud somewhere, and the hospital is paying to use it — so it’s not like they can just unplug the thing. Or what about a similar problem with an AI that’s used for trading, and is installed in multiple places, and is somehow tipping over the whole stock market? Or, of course, an AI that’s decided to turn us all into paperclips. Seems like ways of recognizing such problems and shutting down the system should be part of what users get, along with access to the AI. It’s not clear to me that developers have thought much about this feature. I haven’t seen a word about it.
Learn more about people’s vulnerability to being influenced by AI. My impression is that with the early versions of AI it was mostly mentally ill people who got over-involved with AI, thought it was conscious, etc. BUT the more advanced the AI gets, the less mentally ill you have to be to be vulnerable to developing a personal attachment to it that’s like one’s attachment to friends and loved ones. And of course people are much more willing to take advice from friends and loved ones, to do them favors, to go to great lengths to protect them. So there’s a whole category of bad shit that can happen entirely as a result of someone’s attachment to an AI: They can take bad advice from the AI, including advice to commit crimes if the AI has been corrupted or has somehow wandered into a weird mode. They can keep secret something the AI has done that they think might make people want to change or shut down the AI. And, of course, there’s no reason why an AI developer cannot come down with a case of AI over-attachment, and that could have very bad results indeed.

So I think there should be studies of how likely this is to happen. Here’s a very rough model: Get 2 matched groups. The AI group is given free unlimited access to an AI, plus instructions to spend at least an hour a day chatting with it about subjects that the experimenters judge are likely to foster a feeling of personal closeness. Subjects could be given a list of topics to discuss, and should be instructed to tell the AI stuff but also solicit replies. For example, one assignment could be to describe to the AI a problem in their life and ask for advice. Another could be to ask the AI for some kind of ongoing help — for instance, “I’m trying to stop smoking — can you ask me each day how it’s going, and congratulate me if it went well?” Meanwhile, the placebo group gets free unlimited access to some especially nice music app, and instructions to listen to a certain channel on it one hour a day.

So then we do a test of how attached people are to the AI. I don’t have the details worked out, except that it should involve some competitive online game where people can elect to have an AI give them ongoing advice on strategy, or not to involve the AI. For those who elect to use the AI, test how willing subjects are to accept AI advice, to accept seemingly bad AI advice, to cheat on the game in some way at the AI’s suggestion, to cheat on the game in a way they believe will be especially destructive to another player, and to later lie to experimenters when asked whether the AI suggested cheating.
"After reviewing your plans to build a Torment Nexus, we've concluded that there are no regulations against it, and it exposes the corporation to no liability. Have at it."
“In one 2009 study, meant to simulate human contact, he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.”
Is the swab from the first person’s mouth going into the second person’s mouth or just on their skin? If it’s the former, which is what the description makes it seem like, I don’t think it’s right to say the risk is equivalent to kissing another person’s hand - more like the risk of French kissing them.
No it wasn't, but since the doctor designing the study said it was equivalent to kissing someone's hand -- and not to shaking hands with them or French kissing them -- the procedure probably involved either swabbing someone's skin and then another subject's mouth with the same swab, or the reverse -- subject 1's mouth then subject 2's skin.
You probably meant "more like the risk of French kissing another person's hand"?
I'm not trying to make it sound better, just more realistic; I'm not fond of the idea of random people French kissing my hand.
And yet I'm not in favor of the IRB intervening here. If people are informed in advance what exactly is going to happen - sterile swab, someone's mouth, their hand - presuming they have no skin cuts and they can wash afterwards - and those people accept, then I think no more bureaucratic diversions are mandated, about AIDS and smallpox or otherwise.
No, I meant just French kissing. When I read “swab first one subject’s mouth (or skin), then another’s”, that sounds like sometimes the swab goes from one person’s mouth into another’s mouth.
Oh you are right. Interesting how the original quote can be processed several ways:
> he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.
There's ambiguity in how you interpret the "or" vs. the "then". In all, it could be "1: mouth -> mouth, 2: mouth -> skin, 3: skin -> mouth, 4: skin -> skin".
I have automatically narrowed this down to #2-only, both because it aligned with the paragraph's last sentence and because I assumed that a hospital would not do #1/#3.
But on second thought, since even "fecal transplant" is a thing, I may have assumed too much.
This problem must be at the heart of Eroom's law. The opportunity cost of expensive and slow drug development seems like an obvious place to start, but it's a political red button.
I think all of these are subject to the "if it saves one child" class of arguments. Basically, when I want to do cost-benefit analysis and you want to point to the heart-rending story of the one cute puppy who yelped in pain once because some rule wasn't properly followed, I sound like an evil robot, even if the result of the rule being consistently enforced is a million cute puppies suffering offscreen.
>IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this.
Or the Motion Picture Association, which assigns ratings to movies. This has massive consequences for a film's financial success - most theaters won't play movies with an NC-17 rating, for example - so studios aggressively cut films to ensure a good rating.
This is why you almost never see genitals (for example) in a Hollywood movie. You've got a weird, perverse system where studios aren't truly making movies for an audience, they're making them for the MPA: if the MPA objects, the film is dead in the water.
Almost every industry has its own version of this. In the world of comics, you had the Comics Code Authority. It was the same deal. Unless your comic met with the moral approval of the CCA (example rules: "(6) In every instance good shall triumph over evil and the criminal punished for his misdeeds." [...] "(3) All characters shall be depicted in dress reasonably acceptable to society.") you basically couldn't get distribution on newsstands. This led to the death of countless horror and crime comics in the 50s (Bill Gaines' EC Comics being the most famous casualty), and the firm establishment of superhero comics.
This isn't government regulation, exactly - it's industries regulating themselves (but it's complicated, because the CCA was implemented out of fears that harsh government censorship might be coming unless the comics trade cleaned its own house first, so to speak). It has similar gatekeeping effects, though.
>Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
It sounds almost like a game theory problem. In real life, there are more options than just co-operate/defect. There's often also "refuse to play", which means you cannot win and cannot lose.
If a new drug or treatment or whatever works, no glory will come to the IRB that approved the experiment. But if it kills a bunch of people, they'll probably get in trouble. Regulators have a strong incentive to push the "do nothing" button.
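A toy payoff table makes that asymmetry concrete (the numbers below are invented; only their ordering matters, and the whole thing is just a sketch of the incentive argument, not a claim about any actual regulator's calculus):

```python
# Payoffs from the regulator's own point of view (invented numbers; only the ordering matters).
payoffs = {
    ("approve", "success"): 0,     # no glory accrues to the approver
    ("approve", "disaster"): -100, # the approver gets blamed
    ("block",   "success"): 0,     # the forgone benefit is invisible, so no penalty
    ("block",   "disaster"): 0,    # nothing happened, nothing to answer for
}

def best_decision(p_disaster):
    """Pick the decision with the higher expected payoff for the regulator."""
    ev_approve = (1 - p_disaster) * payoffs[("approve", "success")] + p_disaster * payoffs[("approve", "disaster")]
    ev_block = (1 - p_disaster) * payoffs[("block", "success")] + p_disaster * payoffs[("block", "disaster")]
    return "approve" if ev_approve > ev_block else "block"

print(best_decision(0.001))  # "block" -- even a tiny chance of blame beats zero upside
```

With zero personal upside to approving and any nonzero chance of blame, "do nothing" dominates, which is exactly the incentive problem described above.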
That's probably one of the reasons that prestige TV was generally better than Hollywood in the 2005-2015 decade, less pressure to self-censor. Since then, the woke thought police has gotten to them too, and now everything equally sucks.
Those institutions have gotten much weaker over time actually. The CCA was abandoned in 2010, for example. So if anything, recent movies and comics are an example of what you get without any censoring body.
I'd blame it on lowest common denominator capitalism; making an original statement takes effort, while churning out remakes and adaptations is cheap and quick. Dahl's work didn't get edited because anyone was going "THINK OF THE CHILDREN"; it was some pre-emptive blandening just to CYA the people who paid half a billion for the rights.
But movies don't fit the generalization of stricter rules over time. There used to be the Code, which prohibited a wide range of things now common. The Code broke down around the 60s, and Midnight Cowboy won Best Picture despite being rated X. Nowadays it probably wouldn't even be rated R. I recently watched "The Quatermass Xperiment", which has a capital letter X because it was considered scary enough at that time to be rated X, even though it's completely tame by today's horror standards.
That was because PG-13 only got created recently, dividing what was previously PG into kid & teen categories. And toplessness by itself does not mean an R-rating now:
"Nudity is restricted to PG and above, and anything that constitutes more than brief nudity will require at least a PG-13 rating. Nudity that is sexually oriented will generally require an R rating."
Are you against a rating system at all? I very much like a quick way to evaluate whether the movie I am about to watch has significant levels of content that I might find objectionable, especially if my children may be watching. Without any kind of rating system, there's no guarantee that some animated children's show doesn't throw in some genitals (or swearing, or whatever) once in a while.
If you're not against a rating system at all, then I'm not sure how your following concerns change much. Movie theaters are allowed to show X rated or NC-17 rated movies, they just choose not to because they know sales will be significantly lower than for movies with lesser ratings. Taking away the rating system doesn't change that desire on behalf of movie-goers, it just makes them play roulette on whether they are going to be bothered by what's being shown or find some other way to evaluate the content of the movie and end up doing the same thing.
The rating system is stoopit. It focuses on sex and polite language. Why should children not see genitals? (I agree that seeing actual sex is likely not good for kids -- it would be scary, weird, overstimulating -- although of course in many parts of the world couples and their kids sleep in the same room, and presumably the kids witness intercourse pretty often, and their heads do not explode.) And there is so much swearing to be heard in any public place at this point that whether kids hear a bit more in a movie is not going to make any difference in how soon they start saying "shit" when they spill things, drop things, etc.

What *I* would have liked to know about in advance about kids' movies was the amount of violence, loss and tragedy. My daughter grew up without a TV in the house, and I also did not show her movies or TV on the computer. However, we did occasionally go out to the movies. So when she was about 4 I took her to see Finding Nemo, which opens with a scene where a family of mama fish, papa fish and many baby fish is attacked by something -- maybe a bigger fish -- who eats EVERYBODY except the papa fish. What the fucking fuck? How is that alright to show to small children? I lost interest in TV etc. decades ago, and so am far more ignorant than most people of what Disney or this or that is like. I was appalled at the cruelty of showing kids this tragedy in a movie. And yet the theater was full of kids my daughter's age who seemed to be doing OK. My daughter insisted on leaving, saying she felt so sorry for the poor little daddy fish that she couldn't stand it. And she was not a thin-skinned kid; in fact she was and still is much thicker-skinned than me. Her main playmates at the time were 3 boys, 2 of whom had some tendency to be bullies, but she held her own. I think she just had not been gradually desensitized to violence and tragedy, the way the TV-watching kids in the audience had been.
So you agree that some kind of rating system would be nice, though you would prefer it to look at other things. I'm not against a more nuanced system! I happen to think that sex and coarse language are important factors, and suggest that you do as well, perhaps with different words and phrases being significant to you. I doubt there are many movie-goers who take their children who wouldn't care about the n-word or other racial slurs, for instance.
Providers have been working on rating systems with a lot more nuance, giving both the overall rating and also brief descriptions (Sex, Violence, Nudity, Gore, etc.) to help viewers determine if the movie or show might violate their personal sensibilities. I support this effort. We're not all going to care about the same content, but we all do care about some content. If they added a "Personal Loss" tag or whatever to describe Finding Nemo, I would consider that a gain.
I don't think I'd even object to racial slurs, so long as use of the slurs was portrayed as bad behavior, the way, say, spitting on people would be. You can't keep kids from learning these words, all you can do is explain what's wrong with using them as insults. Let's say the fish family had been black fish instead of whatever color they were in the movie -- sort of orange, I think -- and a big ugly mean fish, obviously the villain, had swum in circles around them for a while calling them all niggers and saying they should move to another part of the ocean and after he left the little fish were crying and the parents were furious and shaken. How is that worse for kids to see than a big fish eating the mommy and all but one of the many little brothers and sisters?
I think an "overall rating" might be the key mistake. Better to work out independent scales for the different categories people care about, objective criteria for each, then display the whole thing - maybe on circular coordinates, least objectionable at the center and worst at the outside, so it's comprehensible at a glance. Perhaps five criteria, zero to five scale for each? [Violence], [sex/nudity/intimacy], [antisocial/unwholesome behavior], [disgust/squick], and [cosmic horror/tragedy/injustice] seem like a useful starting point.
Exactly, my 9 year old is a lot more worried/harmed about violence and loss and drama than dicks and boobies and cuss words, all of which he is comfortable with even if he doesn’t “get” sex yet.
Children should not see genitals in certain contexts because it may not be developmentally appropriate or socially acceptable. The appropriateness of children seeing genitals depends on the context and the age of the child.
In general, young children should not be exposed to nudity or sexual behavior because they may not have the cognitive or emotional maturity to understand what they are seeing. It is also important to protect children from sexual abuse or exploitation, which can involve exposing children to genitals.
All children in group settings, including sports, see other kids' genitals quite often -- in the bathroom, when they change their clothes (or are changed, if they're preschoolers still in diapers), during bathing, etc. Kids at home see their siblings' genitals quite often. And parents with little kids, rushing to get things done, often find it impossible to use the bathroom, change their clothes, shower etc. without kids around. And then there are pets, who not only have genitals but often display them in a particularly visible way.
I agree that showing kids genitals in a sexual context is not a great idea, though in real life they catch glimpses of their parents' sexual life, glimpses of sex in the media, etc.
There are many voluntary rating systems, including ones that have a multidimensional space of "why this may concern parents", with counts and intensity descriptions. sex, drugs, violence, alcohol, swearing, nudity, crime, etc etc etc. They are all linked off IMDB. Go use them, instead of depending on the utter farce that is the MPAA.
If the argument is that we can do better than the MPAA, that's fine and I have no disagreement. It appeared that the argument may have been that we don't need ratings at all, or that the existence of rating systems leads to bad situations (referring to a lack of genitals in Hollywood movies - which I don't think is a universally bad thing, as I explained in my post about viewer preferences, and it's not like people can't see genitals if they want to, even if not in a major movie).
Yes, the Code became increasingly irrelevant as decades passed. The first mainstream comic to break with it was The Amazing Spider-Man #96 in 1971, which contains drug references.
What changed? Mostly, the economics of the comics trade. From the 50s to the start of the 80s, something like 95% of comics were sold at newsstands. But as the 80s progressed, the industry shifted to so-called "direct market" outlets (specialist comic stores and mail order). These were mostly run by younger guys who had no interest in obeying the CCA's diktats. It became a paper tiger.
The shift to the direct market mostly happened a decade after the Code started to show cracks. I think things like ASM #96 more reflected changes in society (the Hays Code had also withered and eventually broken the previous decade) and distance from the days when comics were a central moral panic: more writers and editors who didn't personally remember being part of an industry hauled before Congress and required to explain itself.
There was also a big influx of young creators (some very young) who wanted to push the envelope in a way the previous generation mostly hadn't. (At least after the Golden Age, when the angry young men were more focused on Hitler or political corruption than the Establishment generally.)
Most comics were still Code-approved for a long time after, but the Code was decidedly less restrictive in the 70s than it had been in the 50s and 60s. The shift to the direct market and the aging up of the audience then reinforced that trend.
What if we required IRBs to judge the dangers of a study in comparison to the harm done by delaying it, and to delay it only if the former is greater than the latter?
Obviously the IRB would have to estimate. While in some situations that would be difficult or impossible, in others it would not, and after all physicians and other people have to make intelligent estimates all the time about harm if done vs. harm if not done. Doctors, for instance: if I give a 2 weeks' supply of this abusable drug the person might abuse it; however, if I do not give it they are going to suffer a lot of pain/anxiety/whatever the drug treats.

Seems to me that in some of Scott's examples the IRB was in at least as good a position as doctors typically are to make judgments about harm if done vs. harm if not done. In fact, maybe they are in a better position, because it's likely that the proposals submitted to the IRB contain actual stats about likely risks to patients from the proposed study, and about evidence that the treatment they are proposing will reduce illness, suffering or death. After all, whatever they're trying isn't some random chemical; it's generally something about which a fair amount is known -- it's already used for some other illness, or already used at a different dose, or a very close cousin of a familiar and safe drug. As you say, medicine is not manufacturing, and there are judgment calls professionals must make all the time, and they are often high stakes, too. It does not seem to me too much to ask IRBs to do the same.
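To make "harm if done vs. harm if not done" concrete, here's a toy back-of-the-envelope comparison (every number below is invented purely for illustration; a real reviewer would plug in the stats from the actual proposal):

```python
def expected_harm(probability, people_affected, harm_per_person):
    """Expected harm = chance of the bad outcome * people exposed * harm each would suffer."""
    return probability * people_affected * harm_per_person

# Harm if the study runs: a small risk to a couple hundred enrolled subjects.
harm_if_run = expected_harm(probability=0.01, people_affected=200, harm_per_person=1.0)

# Harm if the study is delayed a year: a modest forgone benefit spread over the much
# larger patient population that would have gotten the better treatment sooner.
harm_if_delayed = expected_harm(probability=0.5, people_affected=10_000, harm_per_person=0.05)

print(harm_if_run, harm_if_delayed)  # 2.0 vs 250.0 on these invented numbers
if harm_if_delayed > harm_if_run:
    print("On these numbers, delay is the costlier option.")
```

The particular numbers don't matter; the point is that the comparison being asked for is the same kind of rough arithmetic doctors already do informally.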
Weirdly, they already sort of do this for animal research, just not human research. But since I don't really agree with the things they allow, I'd like them to at least be less loose than that with human research.
There certainly are some treatments that you'd want to think over very, very carefully before giving the go-ahead for human subject trials. But there have to be a fair number of situations where the risk of trials is obviously low, and the benefit, if the treatment works, would be great.

For instance, metformin is a drug very widely used for diabetes, and it's been in use for quite a while -- it's safe and effective. Last year some time there was a study that showed it reduced the risk of hospitalization for people who had covid. Seems to me that an IRB considering whether to permit that study could be confident that subjects would not be harmed by the metformin, and of course if it was found to work there clearly would be benefit to many people. And metformin wasn't randomly chosen. The people doing the study must have had some reason to think metformin would help. (I believe it was because of data showing that diabetics on metformin had better covid outcomes than diabetics who were not.)

Or consider Scott's example about wanting to check the accuracy of the brief bipolar questionnaire administered to patients soon after admission. There's clearly zero chance that this study would harm any patients. They were all routinely administered the questionnaire anyhow. Scott's study would just have involved determining how well their questionnaire score matched up with the final diagnosis. The experience of patients in the study would not have differed in any way from that of patients not in the study. As for the benefits of figuring out how good a predictor the initial questionnaire was -- well, it's hardly going to change the world, but clearly it's overall beneficial to patients to make sure the early assessment of whether they are bipolar is done with a questionnaire that's as valid as possible.
Can you explain more about why the MPAA situation is bad? It sounds like it's working correctly - it's saying you can't market your film to people who want safe-for-all-audiences films if it has naked people in it. That is what I would want out of a film rating system.
The problem is that the ratings ripple back onto the industry that makes the movies. As Goodhart's law predicts, studios edit films to ensure a "good" rating, regardless of the filmmaker's artistic goals or what moviegoers actually want to see.
Technically, the ratings are voluntary. But most theaters refuse to play NC-17/unrated movies, so nobody makes them.
Don't imagine that the ratings make sense. They're arbitrary and bizarre - there's too many examples to list. A good one is "Love Is Strange" (a movie with no violence or sex and about ten curse words). It got an R rating, apparently for depicting a gay relationship. In general, the MPA is extremely harsh on sex, and extremely lenient on violence. (Jack Nicholson: "Kiss a breast, they rate it R. Hack it off with a sword, they rate it PG."). It's purely the personal preference of a group of risk-averse middle-aged people.
There's no viable network of non-MPA theaters that unrated movies play at. They're the only game in town. You can see the results of this by looking at global box office totals:
All the highest grossing NC-17 movies (with few exceptions) are either sleazy porno crap from the 70s (when adult theaters existed) or foreign films that made money outside America. And the amounts are so small! The #1 movie on the list only made 65 million dollars, or less than that horrible Cats movie everyone hates.
The MPA isn't the worst thing ever, but it doesn't classify films so much as control them. It's basically the modern descendant of those 1930s organisations like the Legion of Decency, which meant movies couldn't show a toilet flushing (I'm not kidding, that was a rule for decades).
If the free market is working correctly, then the **people** don't want sex in their movies; otherwise the theaters would smell the profit and cater to it. Ironically, I've heard that the relatively modern R-13 is also unpopular because audiences don't like those half-measure movies.
If anything, I guess the lesson is that we need more granular ratings. But studios would still prefer to make family-friendly movies if they can.
> Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous
As written the "they" in the second clause is referring to the "Patients with a short consent form," such that longer is better, which is the opposite of what the rest of the paragraph is suggesting.
Since I was just going to post the same thing, I'll hijack this comment to report something else:
I'm confused by the name "Virginia University". I'm familiar with the University of Virginia (UVA), Virginia Commonwealth University (VCU), Virginia State University (VSU), Virginia Polytechnic Institute and State University (Virginia Tech), and several others that this might refer to. Did you mean one of them?
It’s situations like these which would be avoided in a country with an absolute monarchy, where the monarch’s one job is to “unstick” the system when it is doing insanely dysfunctional things because of a combination of unintended consequences and empty skulls. Ideally such a monarch would have a weekly tv show where he used his limited-unlimited powers to fix things (limited in that he can’t overrule the legislature but unlimited in that all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved by his plenary power to fire or imprison any individual whose egregiousness he can make an example of in order to address public opinion). Bureaucrats who reject basic common sense about costing 10,000 lives to save 10 can be named and shamed in ways which will rapidly improve the quality of the others.
As usual, comments beginning with “so you’re saying” misrepresent.
This is more like a super-supreme court that can act on its own initiative but only in a case-based way, indicating by example how things ought to work and punishing those who abuse systems in the ways Scott described in this post.
I read that comment as: "here is my takeaway from what you said, is this what you meant?" and this doesn't sound like misrepresentation, but a question asking for clarification.
And who said the monarch always has to agree with me?
This is an unsticking mechanism, to deal with a particular kind of civilizational inadequacy that Scott and Eliezer have written about. We already have it in the case of pardons, where the head of state has a plenary power to intervene to prevent an unjust result. If you don’t like it, suggest another mechanism.
Wonderful! And by what means does this constitution exert its magical powers? A freeze ray that zooms out from the Library of Congress, and encases the evil-doer in ice?
We divide the constitution into three parts: the "bird", the "cat", and the "dog". They keep themselves from getting out of hand, and together they exert control over the federal bureaucracy, which we'll call the "goat".
Make it like the Gong Show -- George Washington et al. sound the gong, and the spider gets yanked out. But how do we ensure that the next one we swallow is better? Hmmm
This is more or less the role of the media. Sometimes they help fix the problem; sometimes they help create it. I can easily imagine your hypothetical monarch hearing about one death from a study and demanding that the system "do something" to keep it from happening again.
It used to be part of their role to expose things. But they often lacked investigative authority and had no power to fix things; and now they've abandoned even the "expose problems" part.
No. The problem is that 'some doctor fucked up doing a study and it killed someone' would have been front-page news in the past and would go viral now, but 'IRB stops good research' is boring and wouldn't go viral. Media can still expose problems, but they're focused on audience engagement, and small real harms are more engaging than massive harms that rely on counterfactuals.
What this tells me is that we should be asking our reporters to make "massive harm relying on counterfactuals" stories interesting! It's not that hard--I've seen really well-written data-driven "boring" investigative journalism before and even written some things in the genre myself. Most journalists these days a) go to college and b) are thus required to take at least one creative writing class as part of their journalism major/minor, so they *should* have the requisite skills to write engagingly about these kinds of things. The fact that either they don't or they don't have the guts/institutional backing to try is rather sad.
Maybe we could have an Institutional Review Board that checks articles by reporters to make sure their "massive harm relying on counterfactuals" stories are interesting. If they're not, they have to rewrite. You know, sort of like scientists proposing experiments with human subjects have to.
I think there's definitely an institutional incentive issue here created by asymmetric consequences for good/bad press, but it's not the only issue, or even necessarily the most important one. Media outrage stories work better when there's a sympathetic victim and a clear villain and narrative. They're a lot harder when the villain is "slow drug approval procedures, each of which can be individually justified but which when summed up mean that beta blockers took an extra decade to get onto the market in the US, and thus thousands of extra people died of heart attacks but nobody can say exactly which heart attack deaths can be blamed on the slow approval."
The role of the media has always been to be hysterical Chicken Littles. "Yellow journalism" is not a phrase that was coined in the 21st century. What seems to have changed is the alignment of the media with one political party and one ideology. As recently as the 80s, for example, endorsements for President were about evenly divided as to political party, and there were "conservative" and "liberal" newspapers who would editorialize and write breathless warnings about initiatives from almost any ideological direction.
This is no longer true, and why that is the case is a very interesting question. But the problem this has created is that the media are no longer reliable Chicken Littles when it comes to problems and fuckups coming from one particular ideological direction.
What does it mean that you "can't overrule the legislature" but "all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved by his plenary power to fire or imprison any individual whose egregiousness he can make an example of"? Doesn't that latter power give the monarch plenty of power to overrule the legislature, at least in practice, even if not in principle?
One discouraging thing about this proposal is that I don't think governors and presidents do all that much of this sort of thing, even though they do generally have the power to do so. I mean, the EPA works for the president, and so in principle, I think Biden can tell them "this administrative rule you have imposed is dumb, get rid of it," and they probably have to listen to him sooner or later. But of course, that's not something you see happening all that often, probably because Biden is worried about bad press if he gets it wrong, doesn't think he or his staff know enough to do a good job of overriding the EPA's bureaucracy, etc.
Joseph. Your idea has a fatal flaw. Who or what ensures that the "monarch" or whatever you call him is wise, just and conscientious? When one dies do we just plug his offspring into the job? Do we elect a new one? Hmm, I think those ideas have been tried. Do we have a group of wise, just and conscientious people who choose a new monarch when we need one? What do we do when someone on the monarch-choosing committee dies? Do we just plug their offspring into the job? Do we elect someone? . . .
There are drawbacks to any system but his ONE JOB is to counter the kind of civilizational inadequacy Scott often decries. This is intended as a safety valve. There may be other designs which better avoid such horrors, feel free to suggest some.
You could give him a limited number of overrules in his term, allow congress to overrule him, or even make being overruled like a vote of no confidence that forces him out of office.
I think the key insight you have is that there is an institutional gap for enablement and facilitation of progress. Currently we have too many "veto points" in the system ... so where can we add a counterbalance? The example of pardons is a good one. In practice, Executive Orders somewhat fulfill this role, but with lots of limitations. It is worth pondering the options.
"limited in that he can’t overrule the legislature but unlimited in that all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved"
...and next week there's a coalition of legislators who are idiots and rent seekers. This is a hard problem and a monarch introduces different problems.
There's nothing stopping American Presidents from doing something similar. Keep the weekly TV show, get some researchers, and shove John Oliver out of his niche with a weekly take-down of something rotten.
I don't think it would work well. Insufficient research, insufficient followup, playing to the crowd. At some point there'll be more utility in using this as a threat, and then it's just a villain-of-the-week show. There's going to be a scapegoat this week, so do whatever you have to do to make sure it's not you.
And I think the same reasoning would apply to a king.
Have you read any history books? Everyone panders. Everybody wants something that they cannot get for themselves. Security ends as soon as someone with that kind of power desires something.
Further, humans are social creatures and need to intertwine with others in order to keep healthy. Health in all its forms is an important aspect of security. We cannot avoid that which is encoded within us.
History, from recent and modern to ancient, is replete with people whose power to change things with the stroke of a pen or a statement has been the central antagonism of its time. We can look back to earlier this week with Justice Clarence Thomas. There is zero practical ability to police misconduct when the person whose actions or words are at fault is beyond the law's reach.
If anything, this book review highlights the problem with such a system; an all-powerful authority standing athwart that few have the ability to cow is exactly the problem. The difference is which direction is the beneficial one, which is a matter of perspective.
The answer to a specious and noxious authority spending more resources in order to hoard a few is not a stronger individual, however satisfying it might be for some to imagine.
The answer is never to invest within a single person. We have to change the people that work within the system. When people are the problem, people will be the solution, not a monarch.
Madame La Guillotine would like to have a word with Louis XVI...
And I cannot believe that, were Trump somehow to become king of America, he would stop playing to his base. He's too much of a showman, and he knows exactly how to get his narcissistic supply.
But, joking aside: what percentage of kings do you think have lived up to this ideal? Let's restrict this to kings with real power, who are both heads of state and heads of government; a lack of power seems to correlate with a lack of corruption.
Also recall Proxmire and his routine. You can describe many valuable things in terms that make them sound nuts. (I heard about this totally insane proposal for car safety one time--these whackjobs wanted to set off a bomb in a bag in front of the driver, just as the car was crashing. Clearly, lives are endangered by this nutjobbery and we need to put a stop to it!)
2) It's impossible to codify powers which simultaneously allow a person to solve the problems you hint at and prevent them from just being a classical absolute monarchy with theoretically unlimited power
It won't stop people from trying, though. We are the eternal optimist species. Sure, heretofore, in all the billions of times it's been tried, it hasn't worked out, but maybe *this* time there really will be A Free Lunch[1].
-----------------
[1] Or maybe it's a failure of will. Tinkerbell didn't live before because not *everyone* was clapping their hands, damn them.
Well that seems like a benevolent -- not dictator, but a dictator with somewhat limited powers. Sure, that would work well, but what's the system for ensuring these semi-dictators are bright, conscientious, and fair, and not assholes like so many people in power are?
Why is this different from a Presidential system? Most of this is happening through the executive branch; my guess is that if Biden wanted this fixed, he could fix it.
There’s an awful lot that’s a consequence of Congressional dysfunction burdening the system, and a lot that’s from private sector inefficiencies uncorrected due to market failure. But even for the part that’s entirely the executive branch’s job, what President ever made a serious dent in such civilizational inadequacy?
>Whitney points out that the doctors who gave the combination didn’t need to jump through any hoops to give it, and the doctors who refused it didn’t need to jump through any hoops to refuse it. But the doctors who wanted to study which doctors were right sure had to jump through a lot of hoops.
The incentives are different between "a doctor may do X or Y if he thinks it's best for the patient" and "a doctor may do X or Y based on considerations other than what's best for the patient", even if he's only going to be doing one of the things that one of the first two doctors would do. Don't ignore incentives.
That's definitely valid, but I think it misses the point that "what the doctor thinks is best for the patient" is not necessarily "what is best for the patient," and this sort of IRB review is preventing us from moving those things closer together.
But there is a different type of problem, when we genuinely don't know what is in the best interests of the patient. And further, there is a social benefit in ensuring we know the best option for the set of all patients. Participating in a trial might very well be _worse_ for a patient, but nevertheless result in better overall outcomes for all potential patients. So it almost is a different question. Consider the perfectly healthy individual rather than one hoping for a new treatment. Can they consent to participate in a trial that might save the lives of others? I think so, and in that case the question of "best treatment for the patient" is moot; the only question is one of risk vs. reward. If the risk of death is, say, 1 in 1,000 based on rat studies, but with genuine unknown-unknowns in humans and the chance to save millions, then I might altruistically take that risk. If the risk of death is 1 in 5, to save dozens, I probably selfishly would not take the risk (and not feel bad about making such a choice).
If you mean to imply that this is therefore a valid objection to the study, that seems exactly backwards.
In a randomized study, the doctor isn't choosing whether to do X or Y, the coin flip is choosing. This is actually LESS vulnerable to bad incentives than the non-study case, because a doctor who is told to do whichever they think is best can still be influenced by other incentives (consciously or subconsciously), whereas a doctor following the random number generator cannot be so influenced.
Plus, once the study is done, then you'll actually know whether X or Y is better. In addition to the obvious benefit of providing doctors with relevant information, this again REDUCES the vulnerability to perverse incentives, because doctors have less cover for biased decisions.
The more concerned you are about this, the more you should be in favor of doing the study!
The problem is not this particular study, but the incentives behind doctors choosing particular treatments. Imagine a COVID Ivermectin study where 90% of the doctors are pretty sure Ivermectin doesn't help, and wouldn't prescribe it. If we're randomly assigning it, now we've got a number of doctors being overruled on the best course of treatment in order to do the study. Now imagine instead of something fairly harmless like Ivermectin, the treatment has big long term potential side effects (like causing cancer in a non-trivial percent of patients). Then the IRB really should be concerned about this, even if a good number of doctors is already prescribing the potentially cancer-causing treatments and is allowed to do so.
I wouldn't rule out studying both treatments in that scenario, but I can definitely see why there might be concern. We would need some kind of evaluator that knows enough about the treatments to determine how big the risks are - which apparently the IRB had until 1998.
1) Unless I'm much mistaken, doctors aren't typically FORCED to participate in studies against their will. Presumably only doctors who think the study is worthwhile will volunteer, so talking about how their judgment is being overridden seems backwards. Their judgment said to do the study; their judgment would be overridden by blocking the study, not by allowing it.
2) If there's more than a trivial amount of X currently being prescribed (and you don't already intend to shut that down), then it seems like any study that concludes X is bad will almost certainly be a long-term net reduction in the amount of X, even if the study temporarily slightly increased X in order to study it. Therefore, the risk that X might turn out to be bad should weigh in FAVOR of doing the study, not against it.
3) If you think there are actually valid scenarios where a treatment could be too risky to study but not too risky to prescribe, could you give a couple examples? (Hypotheticals are fine.) I feel like you're talking about examples where you want to "be concerned", but I'd like an example of something that's over the line rather than just close to it.
Some of the risk tradeoffs need to be mitigated with earlier studies. If we genuinely don't know whether the treatment will kill everyone in the study ... then probably we need to do some animal studies first, so that we can generate some priors. This seems like a reasonable kind of objection from a more sane IRB.
This particular comment thread is discussing the study of treatments that are already in common usage.
If your objection is that you think the study is going to kill everyone, then presumably you should ALSO object to the doctors who are already prescribing that treatment (without studying it).
Sorry, I was imprecise in my description. When I say "overruled" I don't mean that they don't have a say in the matter or are forced to join the study. I mean that the specific judgement of those doctors in individual cases - as it may pertain to the needs and wants of their individual patients - is not taken into consideration. A doctor may think that Ivermectin (or whatever treatment we're talking about) has a small chance of working, and they would like to know if that small chance is real - so they need to study it. At the individual level of each patient coming through, they would think something like "90% chance this does nothing positive and has X, Y, or Z side effects, I would not prescribe the treatment to this patient." But to study it they have to prescribe it to a representative sample of their patients. That directly means that some significant portion of individuals will be given a treatment they would not normally be given, one that has potential negative side effects. Any negative effects from this study are therefore against the doctor's better judgement and otherwise unnecessary.
This is more of a problem if a patient would really want to try a particular treatment, or would really like to avoid it, or the doctor has reasons to want or not want to try it with a particular patient due to their particular situation, but in order to study it those considerations must also be ignored.
As I said, that doesn't mean we should avoid studying treatments. It just means that there's a real reason for something like the IRB to exist.
It sounded to me like you were previously arguing "it is sometimes correct to place more onerous restrictions on doctors trying to study X than on doctors who just prescribe X without studying it"
and you are now arguing "there is more than zero reason for the IRB to exist at all"
I was always arguing the second point. I agree that hypothetically the first point could also be true, or at least a steelman version of it. I would phrase it more as "it is sometimes correct to place more onerous restrictions on doctors trying to study X than on doctors who prescribe X based on the individual circumstances of their patients."
That would look like a doctor prescribing a treatment because the doctor believes the patient to be a good candidate for that treatment, instead of because the treatment was determined by the design of the controlled study being administered.
That's assuming that the *current* best guess treatments are both at 50% prevalence, as opposed to being 80-20, or 60-30-10 with 10% being something the study won't cover, or that doctors aren't choosing different treatments for different patients for some reasons that are relevant, or that doctors aren't doing variations on the treatment for individual patients which would be excluded from a controlled study, or etc.
A doctor can never give treatment that is less than the standard of care. It is when there is no standard of care (equipoise) that something may be studied straight up against a placebo and/or in arms that have differing treatment.
This is discussed here (there is other stuff too); the Australian physician and ethicist gives a good history and explanation of equipoise and its evolution.
I do wonder if 'contact patients after they have already received treatment and ask if their anonymized medical data can be analyzed in a study' is an option.
It does mean you can't control for a lot of things, but you could get a lot of data potentially.
Slight nitpick: the cost of getting rid of IRBs isn't the handful of deaths a decade we see currently. It's whatever the difference in deaths would be with existing IRBs vs how many would die without them (or with their limited form). And of course any costs related to something something perfect patient consent.
You said Whitney found no evidence™ of increased deaths from before 1998, but if anything that strengthens the case against the increased strictness.
It strengthens the case because it provides evidence (perhaps weak) that post-1998 IRBs are not in fact saving lives compared to pre-1998 ones, so their benefit isn't even a handful of lives a decade. I'm not sure what you're getting at in the second sentence, but I would expect a large number of deaths would be noticed, since before 1998 IRBs still existed, they just weren't as strict.
This is in part Illich's argument that medicine itself is causing deaths.
IRBs might be a red herring; iatrogenesis happens because of overmedicalization.
Studies are not designed to help a particular and specific patient, they are designed to study "a population". Per Illich physicians are to treat their patients, not some amorphous "population". People are not widgets on a factory line where RCTs, experimental design, and tests are done on objects.
There is a deeper philosophical and sociological question at issue regarding the personal relationship between the sick and the healer, which is sidestepped in this book review.
I mean, I guess that sounds noble and all, but I'm glad they've done all sorts of studies on which treatments are effective instead of just winging it. Like grand visions aside, what does the concrete interaction look like? In your ideal case, if I'm having a heart attack, how should my doctor decide what to do? Or do they do nothing and see if I pull through myself, since death is part of the process of life.
Yes, agree. Instead of getting rid of IRBs, what about having them change their criteria: they may refuse to approve only those studies where the harm of doing them in the original form proposed by the researchers is clearly greater than the harm of not doing them (or of delaying them more than briefly).
I think that first paragraph is a pretty big nitpick, honestly, and came here to post about it. Stuff like "We haven't had any thefts in this store in the past ten years, so we clearly don't need store security," is a common fallacy.
(I don't think that narrative hiccup changes the outcome of the overall review/book, mind. It just popped out at me.)
Thefts vs. security is an easy tradeoff to check the math on, taken in isolation, since both can be evaluated in terms of money, and the inherent variability of losses to theft can be hedged by insurance. Where it gets ugly is the possibility of widespread theft reducing aggregate demand for legitimate purchases, and thus the price point. That's how you end up with cops outside grocery stores trying to stop desperate people from "stealing" technically-expired-but-probably-still-edible garbage.
I am sure there are good reasons why it is impossible, but in my dreams (the ones where I have a pony) some bored lawyer figures out how to file a class-action suit against the ISIS-2 IRB on behalf of everyone who died of a heart attack during the 6 month delay. Once IRBs are trapped in a Morton's fork where they get sued no matter what decision they make, they will have to come up with some other criteria to base their decisions on (though I am cynical enough to expect whatever they come up with to be even worse).
A very strong essay, well-written as always, and addressing clearly and cogently a point both very important and not obvious to most people. Well done!
I think the fundamental problem is that you cannot separate the ability to make a decision from the ability to make a *wrong* decision. However, our society--pushed by the regulator/lawyer/journalist/administrator axis you discuss--tries to use detailed written rules to prevent wrong decisions from being made. But, because of the decision/wrong decision inseparability thing, the consequences are that nobody has the ability to make a decision.
This is ultimately a political question. It's not wrong, precisely, or right either. It's a question of value tradeoffs. Any constraint you put on a course of action is necessarily something that you value more than the action, but this isn't something people like to admit or hear voiced aloud. If you say, "We want to make sure that no infrastructure project will drive a species to extinction", then you are saying that's more important than building infrastructure. Which can be a defensible decision! But if you keep adding stuff--we need to make sure we're not burdening certain races, we need to make sure we're getting input from each neighborhood nearby, etc.--you can eventually end up overconstraining the problem, where there turns out to be no viable path forward for a project. This is often a consequence of the detailed rules to prevent wrong decisions.
But because we can't admit that we're valuing things more than building stuff (or doing medical research, I guess?), we as a society just end up sitting and stewing about how we seemingly can't do anything anymore. We need to either: 1) admit we're fine with crumbling infrastructure, so long as we don't have any environmental, social, etc., impacts; or 2) decide which of those are less important and streamline the rules, admitting that sometimes the people who are thus able to make a decision are going to screw it up and do stuff we ultimately won't like.
I think this is mostly correct. However, what you can do is separate the ability to make a decision from the ability to be blamed for it.
You can make sure the researcher says: but I relied on an IRB. The IRB can say they followed all the appropriate procedures and considered the kind of concerns previously established as important. And the upshot is you can make sure everyone has only partial responsibility and no one in particular can be blamed.
I actually fear this often makes the decisions worse. Since no one can be blamed, there isn't the same incentive not to try to get any random shit by the IRB, and no one gets blamed for stopping studies that save lives.
"Environmentalism always trumps infrastructure" and "infrastructure always trumps environmentalism" are actually both terrible policies.
In order to have nice things, we'd need someone to do cost/benefit analyses on a case-by-case basis, and allow important infrastructure that causes minor environmental problems while blocking inconsequential infrastructure that causes severe environmental problems.
But then, someone would actually have to make a judgment call, and could be blamed if it was a bad call, rather than being able to defend themselves by saying that they just followed the rules.
I think this is too much focus on trying to control decisions, and not enough on trying to control outcomes.
I feel like a lot of laws that apply to organisations (not people) need to change to prosecute specific outcomes rather than actions.
People are limited in the scope of their actions. The worst thing you can do is probably assault or kill people (number varies). You prevent that by restricting people's access to tools that help you kill people (weapons, dangerous goods, things that help you make bombs), and you also make rules against the types of behaviours that directly lead to those outcomes (do not attack people! Don't murder people! Etc)
With organisations, I don't think you can control actions in a sensible way. If you want to prevent a certain outcome, you should just say "if you caused these outcomes as part of your activities we will make your life hell" and let the organisation sort it out. You should not be attempting to exert control on their actions, because
1) they're much better placed to figure out the cause and effects of certain actions than you [random politician or bureaucrat]
2) trying to prevent outcomes by requiring specific actions means you have to be on top on every freaking thing they're doing and what potential harms they could be unleashing. That's a lot of work, and again, much more achievable by the organisation doing this work than you, an external non-expert party.
This works when you, the government, have some lever of power over the organisation. If it's a research institution, you can ban them from doing research on subjects. If it's a private company, you can take away their license to operate and fine them. As long as the government can in fact enforce the threat (removal of license to operate), this should influence the organisations to think very, very, very carefully about poking anything they don't fully understand. This will not necessarily stop a person, but this will stop an organisation (who can control said people by removing them ie firing if necessary).
So yeah. People can have accidents, but organisations should not. And these activities are so complicated now, trying to write rules around specific decisions is a futile, losing race.
Also, requiring specific actions increases complexity. This favors larger organisations, with more elaborate compliance and legal teams, over smaller ones, and thus hampers innovation.
This goes well beyond medical or construction or the other tangibles that Scott mentions. It also applies to "safety culture" that has spread in the last few decades. Whereas everyone above the age of 35 seems to remember childhood spent outside with lots of freedom, children now live highly regulated lives that greatly reduce the (already low) chances of failure/injury. Do we make a decision to let some kids die to entirely preventable situations so that the majority can experience more things in childhood? I would say yes, but it's not always an easy decision. I am certainly more protective of my children than my grandparents were of my parents. I would even go so far as to say that my grandparents were too lax on safety, even though all of their children survived to adulthood (maybe some luck involved there).
One issue here is that it's hard to think about/discuss tradeoffs across value domains--like if I'm talking about dollars vs dollars, or endangered species vs endangered species, or lives vs lives, it's not so hard to talk about the tradeoff without coming off like an evil robot. But if you're talking about lives vs dollars or endangered species vs affordable housing or something, then everything gets harder. Partly that's because we don't all agree on exchange rates between those things, but even more it's because there are often sacred values at stake, which means it's easy for people to fall into the "if it saves one child" trap where they can't even consider tradeoffs, and it makes it easy for anyone arguing for those tradeoffs to come off like an evil robot. And yet, we have to make tradeoffs between those things to function. Lives vs personal freedom (covid restrictions, gun control, smoking bans), endangered species vs economic growth (dams and large construction projects), safety vs money (car safety requirements, motorcycle bans), etc.
I think the net effect of this is that those tradeoffs are very often pushed off onto unaccountable bureaucracies or unaccountable courts, just because the more democratically accountable parts of the world have a hell of a time dealing with them at all.
I kept expecting this post, like so many others on apparently-unrelated subjects, to loop around and become an AI Safety post. But since it didn't, I guess I'll drop the needle myself.
Can we consider AI Safety types to be part of the lawyer-administrator-journalist-academic-regulator axis that stops things from happening due to unquantifiable risks?
Let me try to defend the AI safety folks, some of who(m?) have started calling their field AI "notkilleveryoneism". In particular, they have attempted to quantify the risk as follows: "if we're right then everyone dies". (I guess they failed to quantify the chance they're wrong. Oh well.)
A second dis-analogy is that they have not in fact succeeded in stopping that which they want to stop.
And a third is that they tend to violently reject the kind of thinking that would make someone say “it’s okay if my actions (blocking research) cause countless billions to die as long as none of them die in my study (which couldn’t take place)”. The AI safety folks tend to believe in cost benefit analysis instead, I think.
AI-risk falls into the same Pascal's Mugging situation that Rationalists consistently reject when it comes to religion. By staking "the end of everything" (or humanity, or civilization - something inexpressibly major) as a potential end point from AI, any cost-benefit analysis they run comes out saying AI is not worthwhile. How do you multiply a small percentage chance against infinite loss? You keep coming up with infinite expected loss and must automatically reject AI. It wouldn't matter if the chance were 80%, 30%, 5%, or 0.001% - multiplying by infinity still results in infinity.
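(A minimal sketch, my own illustration rather than anything from the thread, of the expected-loss arithmetic being described: once the downside is treated as infinite, the probability you assign to it stops mattering.)

```python
# Toy illustration: expected loss with an infinite stake, as in a Pascal's Mugging.
infinite_loss = float("inf")

for p_doom in (0.80, 0.30, 0.05, 0.001):
    expected_loss = p_doom * infinite_loss
    print(f"P(doom) = {p_doom:>6}: expected loss = {expected_loss}")
# Every line prints inf: the conclusion is completely insensitive to the
# probability estimate, which is the commenter's objection.
```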
Is 5% high? That's about the median that I've seen among the select people who study this, which may be a very biased sample towards higher numbers. What's the percentage chance that Jesus comes back? How do you calculate such things with any kind of certainty, that doesn't rely on some very subjective reasoning? If you ask a bunch of believing Christians, the percentage chance that Jesus comes back is high as well. What makes one Pascal's Mugging and the other not?
I'm not seeing anything that cannot be described as special pleading.
5% is high enough that it puts a pretty low limit on the number of mutually exclusive scenarios you can be sold. Usual Pascal Wagers/Muggings rely on things with infinitesimal probabilities and infinite upside/downside, and then focus on one option out of near infinite possibilities.
A believing Christian isn't being Mugged into believing, they believe of their own accord. A classic Mugging is to say, well there may or may not be a god, but if there is and you follow the motions, you get infinite utility, so it's still a benefit to believe. The fact that the particular god in question is one of a vast possibility space is what makes it a Mugging. If there were only two options, this specific god or no god, and there was a 5% chance that the god did exist, it wouldn't be a Mugging as usually described.
Sure, but then we're looking at whether 5% is the real chance of AI killing all life in the universe. Just like the religious are far more likely to think a higher percentage is likely, those who study AI safety think that the percentage is higher than everyone else. But regardless of what people think, there's the real probability that depends on things like "is it even physically possible to kill all life in the universe" or whatever. If a Foom is literally impossible (or simply doesn't happen) or the worst consequence of an AI disaster is something like a 10% reduction in world GDP for a few years, that's definitely a different calculation from the [infinite negative] X [probability of AI superintelligence + AI is evil] that looks more like a Mugging.
I'm not sure about the existing AI Safety types (amongst other considerations, I don't think that they have the actual power to prevent research). But the proposed government regulators sure sound like part of that axis.
I don't think the take-away should be "blocking things is bad; allowing things is good". There are SOME studies that should, in fact, be blocked.
The problem is that bureaucrats are looking only at costs, instead of weighing costs and benefits; that is, they think that a study that accidentally kills one person but saves a thousand people is "bad", even though it prevented far more deaths than it caused.
I think the proper take-away is "do a goddamn cost-benefit analysis like a sane person."
Or, at a higher level of organization, "the public needs to be just as angry at administrators who prevent good things as they are at administrators who permit bad things".
The resources required for legal battles with people who'd prefer to opt out on the basis of e.g. strong religious convictions could be more productively spent on developing synthetic transplantable-organ substitutes.
I think you should do C/B analysis even if you also have a deontology you obey; they're not just for utilitarians (unless your deontology only permits exactly 1 action in all possible situations).
I don't think utilitarianism actually results in murdering people for organs in real life.
And I'm not arguing medical studies should get rid of consent. I'm arguing you should allow people to consent to things instead of presuming that true consent is impossible.
Right. I'm not sure how pure utilitarianism ever survived Swift.
I will have to pull out some Voegelin essays to reread. The strain of gnostics immanentizing the eschaton is persistent. If only they could get the utilitarian equations right - Eden.
People want to be probabilistic but haven't ever read Deming's Probability as basis for action. No understanding of the difference between enumerative studies and analytical studies nor of the limitations.
Asking for a 6 month pause seems dumb as hell to me, and hard to justify, sort of like IRB bullshit. Makes far more sense to demand we pause development until certain criteria are met, having to do with understanding what's going on under the hood.
It's not dumb from a sociology point of view. It's a camel nose. Let's say you *do* get everyone to agree to a 6-month pause. The pause itself is worthless, of course. But what you have also done is (1) get everyone to agree that the risk is real, and (2) get everyone to agree that a pause is a reasonable response to the risk. You've won some huge victories in terms of setting the terms for future debate. *Now* you can go back and say "actually, no, we need a 10-year pause, or we need a pause until this-and-such criteria are met" and 80% of your battle is already won by the fact that everyone previously agreed to a 6-month pause[1].
------------------------
[1] Like the old chestnut:
A: "Would you sleep with me for $1 million?"
B: "Sure, I guess."
A: "What about if I just buy you a cup of coffee afterward?"
B: "Of course not! What kind of man/woman do you think I am?!"
A: "We've already established *that*. We're just dickering over your price."
In other posts on here I've made a coupla related points:
-Won't the AI development companies just use the 6 mos. to keep developing GPT5, or whatever they're working on now? (I recognize that this doesn't negate the benefit you name, of sort of driving in a wedge, getting people's attention.)
-Wouldn't it make more sense to tie the pause to a criterion instead of the clock? I mean, that's just basic behavioral management. If you want a recalcitrant subject to change their behavior, you don't just take away their cookies for the day, you tell them no more cookies until they do X, and it has to be a good-quality version of X and sustained for at least some specified period of time.
The criterion could be accomplishing some piece of work that would make AI safer, maybe something to do with alignment. Or some simple test of degree of alignment, probably something like setting some rules and then inviting all the dark, ironic hackers in the world to try to get the AI to break the rules. Or something having to do with understanding better what's inside that black box? How does AI come up with its responses to prompts? Or some document developed jointly by the AI companies where they list several safety tests that each new version has to pass -- or several things that, if they happen, will automatically trigger cessation of development until there's full understanding of what happened (sort of like the investigation that's done when an airliner crashes).
Seems like tying resuming work to a criterion also drives in a wedge, and gets people's attention, and wakes AI companies up to the possibility that government could really cramp their style if it chose -- but also is useful in itself, whereas the 6-month cessation plan is useful only as a wedge. Also, many people will realize the 6-month cessation plan put forth by the government is ineffective and silly. How can that be good? During covid there were various directives and info bulletins from the government that were obviously ineffective and silly, and that caused a lot of rage and despair to individuals directly affected by them, plus a general cynicism about government among the public at large. If the government starts with something stoopit then there will be less support for whatever its next proposal is. So if the next proposal is a 10 years cessation, fewer people will be behind it. One of the Shitlords of the Internet can say, "the government keeps shutting us down for periods of time with no clear idea of what that will accomplish." and they will be right and more people will rally around the Shitlord.
I'm not disagreeing there are much better ways to go about things if you're really worried about superintelligent AIs eating our brains (which I'm not). Nor do I disagree there might be better political/sociological approaches. I'm just saying calling for a 6-month moratorium isn't a priori stupid, if you consider it a political/sociological gambit to alter future conversations rather than as an attempt to actually change trajectory in the short term.
I have no idea. If I had that kind of social-manipulation skills, I'd be fascinated by politics and enjoy big leadership roles, neither of which is true. On the criterion of "which would make more sense if we could just pray to the God Emperor and he would grant one?" then of course you're correct. I'm just observing, secondarily, that part of getting stuff done at the collective level is social-psychological or political, "the art of the possible" and/or the art of getting half a loaf.
Maybe people who are better at social manipulation than me have decided that a specific (short) time frame soothes people who are concerned about Luddism, because it's "only" 6 months -- and because there can really be no subsequent debate about the meaning of "6 months", while any criterion-based pause relies on a future consensus on whether the criterion has been met or not. So getting a 6 month moratorium is plausible, while getting a criterion-based one is not, and it changes the terms of future debate, so hurray.
But I will say even all of that assumes the people proposing it are representing their motives in good faith, and I am alas deeply skeptical about that. I think no small number of them are participating for assorted selfish reasons which have actually squat to do with protecting the species. Mostly because, as you have pointed out elsewhere, if people *really* thought that the probability of species extinction inside of 25 years was, let us say as high as Scott's 33%, they would be pulling out all the stops -- they would be hysterical with fear and rage, and they wouldn't be talking about milquetoast 6 month moratoria, they would be hunting down AI researchers and hanging them from the nearest tree, wrecking hardware, burning down buildings, and otherwise acting like a truly frightened people.
B: "My previous tentative acceptance was implicitly predicated on a certain foundation of respect. How do you feel about $16 million, half in advance? Or, since the exchange rate there is so unfavorable, we could discuss your options for a public apology."
After reading through and getting to the proposed suggestions/your meta thoughts, I was surprised that you didn't mention the option of some sort of blanket liability shield to protect institutions much in the same way that the current consent forms are designed to do, so they don't need to be as obsessed with lengthy bureaucracy to shield themselves. Is this just not realistic?
If the change in 1998 originated with grandstanding congressmen demanding changes in order to win votes, then protection from lawsuits wouldn't have helped (in that particular case). You would've needed a shield that prevents congressmen from gaining or losing votes based on how they react to the issue, which doesn't seem possible without crazy sci-fi tech.
Also, if you protect the institutions TOO much, you might get the opposite problem, where they rubber-stamp all studies and do no actual oversight.
Not nominative determinism. "Epstein" doesn't have anything to do with consent in itself, except for the fact that a different guy called Epstein was a sex trafficker.
Some back of the envelope math: the high end of your estimate for IRB-induced deaths (100k Americans/year) would imply ~2.9% of the 3.5M annual deaths could have been prevented had the studies delayed that year gone ahead.
This seems high to me. I wonder if the most egregious examples like ISIS-2 are throwing your estimate off. The lower end of 0.29% seems more reasonable.
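(A quick check of the arithmetic above. The 3.5M denominator and the 100k high end are from the parent comment; the 10k low end is my inference from the 0.29% figure.)

```python
# Back-of-the-envelope check of the percentages quoted above.
annual_us_deaths = 3_500_000
high_estimate = 100_000  # IRB-attributed deaths per year, high end (from the comment)
low_estimate = 10_000    # implied low end, assuming 0.29% of annual deaths

print(f"high end: {high_estimate / annual_us_deaths:.2%}")  # ~2.86%, i.e. roughly 2.9%
print(f"low end:  {low_estimate / annual_us_deaths:.2%}")   # ~0.29%
```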
Still a massive number of people though. Really enjoyed this, and hope it nudges us toward change.
Sure, but intent matters. If you leave the rake out after raking up the leaves, and a lifeguard accidentally trips on it on the way to save someone drowning, it seems dubious to accuse you of murder. Nobody suggests IRBs *intentionally* set out to cause death.
I would regard the effects of the early IRBs, which seemed to be mostly trying to do reasonable things to prevent another Tuskegee experiment, as well intended.
On the other hand, when Gary Ellis shut down every study at Johns Hopkins - well, it seems like an action that can both be described as acting "on advice of counsel" and acting "with depraved indifference to human life".
No argument there. It seems a human tendency to take almost anything good in moderation and push it to the limit where it becomes toxic. I suppose we do this so that future generations have our (bad) example(s) to help guide better choices..."ok, a little bit of this funny brown drink is good, cheering, makes you good company, but too much will turn you into an asshole and get you killed."
Many Thanks! Agreed on "It seems a human tendency to take almost anything good in moderation and push it to the limit where it becomes toxic." Come to think of it, setting down the beer and contemplating purity spirals: this might not even be restricted to humans. Toxic purity spirals might be driven by game-theoretic considerations when intra-group competition to signal allegiance to the group is important. A set of GAIs divided into several competing alliances might suffer from it.
On top of this, the members of these boards (at least in the example above) don't seem interested in improving the process to optimize for reduced deaths even when they are shown they are wrong. Jerry Menikoff's arrogance displayed regarding the Petal study is just shocking and to me, makes him morally culpable for any death his delays may have caused.
If you are having a heart attack and I tackle the doctor who is on his way to perform CPR, I think it's reasonable to say I caused or even induced your death. I don't object if you want to propose other language, especially for "induce," but the point is similar.
In my opinion, it's even clearer if we move out of human actors. For example, aspirating mineral oil will coat the inside of your lungs, preventing oxygen from entering your body and killing you. I'd call that a mineral oil induced death, even if the actual cause is lack of oxygen.
If someone (why would you imagine yourself doing that?) tackles someone getting ready to do CPR, it is "reasonably foreseeable" that the tackler might be responsible for the death. (Unless CPR would have been futile because the person (why make me the heart attack victim?) was already dead.)
There is no reasonable foreseeability that an IRB's actions cause any actual harm at the time of the action.
If someone tackles someone using a kitchen knife to do open heart surgery (to see if it works - they had a hunch and wanted to do the experiment given the opportunity), would the tackler be responsible for the death of the heart attack victim? No!
I am dumbfounded by the idea that so many here are willing to hold so fast to the notion that IRBs and the desire for ethical research on humans are ostensibly killing people. Is this some kind of generational divide?
I personally don't agree that things can only be viewed as causing those effects that are reasonably foreseeable, but don't think language disagreements are usually productive.
More to the point, how is it not reasonably foreseeable that if we have two treatments and don't know which one is better, then delaying our discovery will cost lives?
What if neither is "better"? What if the study shows the currently used one is better? (Shall we put down in the utilitarian scorecard an IRB plus for all lives not lost from the hypothesized alternative due to delayed treatment experimentation?) How can an IRB know in advance that this research is going to be a game changer - we ought to give them some slack; what's a few dead among "innovators".
The aim of the IRB is not improvement in "results" whatever that really might be imagined to include, the aim of the IRB is ethical research.
Were the Tuskegee experiments ethically justifiable? How about Nazi experiments? How about Soviet or Chinese experiments on prisoners?
How anybody can be a pure utilitarian after Swift's A Modest Proposal is remarkable to me. Of course, balancing risks and rewards is part of prudential decision-making, but doctors don't treat populations, they treat individual human beings.
Medicine is not manufacturing. Medical research involves people not widgets. There is ethical research and there is unethical research. Sometimes figuring it out is hard and takes time and certainly both kinds of errors will be made.
1) This is a different question than whether deaths due to research delays are foreseeable, but still interesting.
2) I don't disagree with the premise that as a society, we will sometimes find the costs higher than the benefits of accelerating research, but I'm not willing to concede that we happen to be at the ideal spot now.
If we decide that (a) it will cost some lives not to do experiments on condemned prisoners against their will but (b) that's the ethical decision we've made, then so be it.
That said, I agree with Scott and the people he's summarizing that we've pushed the needle too far towards slowing research, and that it's costing lives that we would prefer to save. I think the examples in Scott's article are good examples where the research should have happened faster.
Another consideration is whether that medical treatment would have increased longevity by a relatively modest amount (say, 1 year - but that year is spent in various treatments at or near extreme old age and quality of life is very low), or would have "saved" someone's life in the more general sense that lay people would use, where that person is able to go on and live a meaningful life for years to come.
It's all well and good to know that a heart treatment was able to prevent a death on April 12, 2023, but not prevent a death of the same patient on May 17, 2023. In some cases that might be a meaningful difference to the patient, but I wouldn't fault the IRB for X preventable lives lost in that scenario.
Yes, but I don't think most people would actually think about different interventions intuitively that way. For one thing, it's really really complicated to determine the number of QALYs between the following situations:
Someone living for 10 years with limited consciousness and bouts of moderate to severe pain, but lots of visitors - OR - Someone living for 5 years with no pain, but very limited mobility and no one ever comes to visit them.
Even if we can develop a mathematical formula to quantify those scenarios and say that one is "better" than the other, that doesn't necessarily correlate with an average person's intuitive feeling about different situations. Maybe the QALYs are technically positive but an individual reviewer would disagree.
Specifically to the point here, knowing that "6,000 people were saved" by an intervention doesn't even try to QALY those lives. If the average life after being "saved" was worse than death, then the intervention is net negative. Since we have no way to tell the quality of the lives being lived, including the length of time someone lives after being "saved," then we can't evaluate whether the IRB's intervention was negative at all, let alone by how much.
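(A deliberately crude sketch, with quality weights that are pure invention on my part, of the kind of QALY comparison being questioned above; the point is that the ranking hinges entirely on those arbitrary weights.)

```python
# Naive QALY comparison of the two scenarios described above.
def qalys(years: float, quality_weight: float) -> float:
    return years * quality_weight

scenario_a = qalys(10, 0.20)  # 10 years, limited consciousness and pain, many visitors
scenario_b = qalys(5, 0.50)   # 5 years, no pain, immobile and isolated

print(scenario_a, scenario_b)        # 2.0 vs 2.5 -> B "wins" with these weights
print(qalys(10, 0.30), scenario_b)   # 3.0 vs 2.5 -> bump A's weight and A wins instead
# The arithmetic is trivial; choosing defensible weights is the hard (and contested) part.
```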
Sigh. Innumeracy is a problem but so is hypernumeracy.
Maybe someone will do a review of Ivan Illich, Medical Nemesis.
Clinical, social and cultural iatrogenesis is a thing. I'm not full Illich on these matters, but I think the imagination that there is some magic utilitarian formula that is soluble if only IRBs would get out of the way is bonkers to me.
After Medical Nemesis, if I was setting up a seminar: short Deming article On probability as basis for action, 1975; Why Most Published Research Findings Are False, Ioannidis (2005), and Ending Medical Reversal: Improving Outcomes, Saving Lives
Vinayak K. Prasad, MD, MPH, and Adam S. Cifu, MD (2019), The Norm Chronicles, Michael Blastland and David Spiegelhalter (2013), and maybe Fooled by Randomness, N. N. Taleb, and the sections from the Catholic Catechism regarding medical ethics.
And then we might have an interesting discussion that isn't some undergrad BS session.
If I were in Scott's situation I would have avoided any involvement with the IRB in the first place, and instead secretly recorded the data and published the stats anonymously on the internet. Nobody can sue you for that because nobody has any damages. In the unlikely event that the hospital found out they could fire you, but "firing doctor for publishing statistics" is a bad look.
But "firing doctor for illegally conducting experiments on patients and publishing confidential medical data on the internet" is a great look, and I'm sure they'd be able to sell it that way.
Ultimately it's "firing doctor for generally being a pain and creating controversy and work for everyone else", which is a firing offence pretty much everywhere.
In the case of Scott's experiment, (1) no confidential medical data would appear on the internet. There would be no need to identify individual patients. Scott's results would be in the form of, "out of X patients who were diagnosed bipolar on admission using the questionnaire, Y% were eventually given an unqualified diagnosis of bipolar disorder, and of those NOT diagnosed bipolar using the questionnaire, Z% were eventually given a full diagnosis of bipolar. Therefore, the accuracy of the questionnaire is [high, low, nonexistent . . .]" (A sketch of that arithmetic, with made-up numbers, appears below.)
(2) I believe, though I'm not sure, that conducting a study like Scott's without IRB approval is not illegal, but against the medical profession's code of conduct.
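Here's a minimal sketch, with entirely invented counts, of the aggregate arithmetic point (1) describes; nothing in it requires identifying a single patient.

```python
# A rough sketch (hypothetical numbers) of the only figures such a write-up would need:
# four aggregate counts from the questionnaire-vs-eventual-diagnosis table.

screened_positive_confirmed = 40   # flagged bipolar by questionnaire, later formally diagnosed
screened_positive_not       = 60   # flagged by questionnaire, diagnosis never confirmed
screened_negative_confirmed = 5    # not flagged by questionnaire, later formally diagnosed
screened_negative_not       = 95   # not flagged, diagnosis never made

y_pct = 100 * screened_positive_confirmed / (screened_positive_confirmed + screened_positive_not)
z_pct = 100 * screened_negative_confirmed / (screened_negative_confirmed + screened_negative_not)

print(f"Y = {y_pct:.0f}% of questionnaire-positive patients were eventually diagnosed bipolar")
print(f"Z = {z_pct:.0f}% of questionnaire-negative patients were eventually diagnosed bipolar")
```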
I actually don't see a thing wrong with publishing that data anonymously online, other than the fact that it will be much less useful if it's presented that way, because people will have no way to judge how much they can trust the results.
Why isn't the OHRP itself getting constantly sued seeking injunctions against its arbitrary, capricious, and extremely destructive behavior? Prohibiting doctors from publishing statistics about their standard practice is a blatant violation of their first amendment rights, and the current SCOTUS is very strong on first amendment issues.
In the meantime, unjust laws were made to be broken.
Well, sovereign immunity. You cannot sue the government unless the government itself by statute says you can. Congress would need to pass a law waiving sovereign immunity for HHS.
Um, yes. Also the remainder of the Bill of Rights. Now all you need is a plausible theory as to how OHRP decisions violate the Bill of Rights, and a second paragraph that explains why the million or so plaintiff lawyers hungering after some magnificent payday haven't themselves thought of your theory.
Edit: incidentally, publishing information *about somebody else* is not a First Amendment right. That's why I can't snoop around, discover your credit card number, and publish it and cry "Ha ha! First Amendment bitchez!"
I guess you're forgetting the Supremacy Clause? Anyway, I've already agreed you can sue the government for a violation of your civil rights, because that's in the Constitution, which supersedes any statute, and can be considered as establishing mechanisms by which The People say the government can be held to account.
But unless you have a good reason why some bureaucratic decision is an actual violation of an identifiable victim's rights, with actual harm, you're fucked, unless the government has waived its sovereign immunity rights. This is why it's proven tricky for people who dislike it to sue to stop the Biden Administration's student loan hokey pokey: it's hard to find an actual someone who has suffered some actual harm that violates their actual civil rights. Pointing at hypothetical harm to generic masses of people that has squat to do with their enumerated civil rights doesn't work.
If your study isn't doing anything outside the scope of what you would be allowed to do in a non-study (i.e., most observational studies in medicine), then any regulation of the study is basically purely a speech regulation, and therefore unconstitutional. Worth a shot putting that to the Supreme Court. You should only need IRB approval to do things to patients that you weren't already allowed to do outside of a study.
Not necessarily unconstitutional; I don't think the First Amendment has been ruled to allow people to violate confidentiality agreements or privacy regulations.
Confidentiality agreements are a matter of civil contract law, totally different from the government telling you that you can't say X. And patient privacy has basically nothing to do with publication of aggregate stats of the sort that are necessary for figuring out whether a treatment works.
I haven't finished reading but felt compelled to comment on this:
"the stricter IRB system in place since the
'90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing lifesaving studies."
No. It does NOT "cause" deaths. We can't go down this weird path of imprecision about what "causing" means.
I've been examining Ivan Illich, "Medical Nemesis" recently. Claiming that IRBs which stop research ostensibly CAUSE death strikes me as cultural iatrogenesis masquerading as a cure for clinical iatrogenesis.
"Nobody knows how many people OHRP’s six month delay killed,"
The delay did not kill anyone. The poor compliance with known procedures by healthcare providers is what killed patients, i.e. clinical iatrogenesis!
It is conflicting: I'm drawn both to the need for more RCTs AND to the importance of recognizing the dangers pointed out by Illich of overmedicalizing the process of life, which includes death and illness.
Was it really poor compliance? They ran their study by an IRB, were approved, then the OHRP stepped in and said no. "Poor compliance" in this case is not getting every doctor, nurse, and patient to sign a consent form saying that nurses can remind doctors to follow a checklist.
The problem is called being human. It's not a thing unique to doctors, there's a reason pilots also have extensive checklists to follow. It's not like they're foregoing items because they're lazy, it's because humans make mistakes, and using tools to help overcome mistakes is how we stop making mistakes. Disallowing study of these tools in favor of putting all the blame on the doctors might feel cathartic or whatever, but it won't help anything.
And like, none of this applies to the heart attack thing where no one knew the answer before doing the study.
But do you really need "a study" to "test" the effectiveness of checklists? Do we need a study to test whether a parachute should be used when jumping out of a plane?
The IRB did NOT prevent the use of checklists, it prevented a study about using checklists!
And how many more medical facilities would be using checklists today if the IRB had allowed that riskless study to be published?
The claim isn't "The current system caused deaths by preventing checklists from being used". The claim is "The current system, due to its own ignorance and risk-aversion, prevented information about the effectiveness of checklists from being published and distributed to the medical community, thus indirectly causing deaths.".
Any medical facility can choose to use a checklist, yes. But heaven forbid they should be able to look at relevant research beforehand!
And not to put too fine a point on it, but yeah, "Doctors washing their hands" was actually pretty controversial for quite some time.
They don't prevent the use of aspirin to help with heart attacks either. But until we do a study, we don't actually know whether aspirin helps with heart attacks. It may help, or it may make them worse! It's easy to look at a study after it was done and proclaim the results obvious, but implementing new standards and procedures takes overhead and work, it's not something that can or should be done on a whim or a hunch.
And that's leaving aside all the things that can't, in fact, be done without an IRB, for example any non-existing procedure or medication that will only be approved by the FDA with a large, expensive trial. (And before you say that's equally to blame on the FDA, while they can also be overrestrictive, there is value in actually having to test something before releasing it.)
I'm unclear on what definition of "cause" you are applying here.
If I take an ordinary healthy human, and forcibly stop their heart, would you say that I "did not cause" their death, because I only prevented something from happening (i.e. prevented the heart from beating)?
How about if I prevent them from leaving a room until they die of thirst?
Call it what you want - more people would almost certainly be alive today if IRBs rejected / delayed fewer studies.
Assuming you think people being alive instead of dead is a good thing, the fact that the IRBs didn’t “cause” these deaths in precisely the same way that literally shooting them in the head “causes” death is kind of a pointless argument. Either way, the IRBs did things that prevented a significant number of people from being successfully kept not-dead, and that’s pretty bad.
I’d also add that by your same logic, IRBs can never actually “save” a person, so their “lives saved” and “deaths caused” can still be weighed against each other in the way Scott is doing.
Then in that case you aren’t really arguing the topic at hand at all, but rather trying to shoehorn in an argument about something else entirely (whether the practice of medicine on net saves lives at all).
On that, it seems very difficult to imagine a world that solves iatrogenesis without, you know, doing research on medical best practices (like “how to make sure doctors don’t forget to wash their hands” or “gee this high stakes medical situation doesn’t have a universally agreed upon treatment protocol, maybe we should figure out which one actually works”).
IRBs are an attempt to be a bulwark against social and cultural iatrogenesis.
Studies are an attempt to be a bulwark against clinical iatrogenesis.
But a study is only of a population; it is not treatment of a sick person.
I'm not sure that you can really consider these questions without a broader view of medicine and anthropology: people are not widgets, but medicine cannot be voodoo, so we need science. But the noetic (including techne) is insufficient for any problem involving humans; we must also consider the poetic and pneumatic.
This seems like a weird overly metaphysical nitpick.
Suppose a surgeon is operating on someone. In the process, they must clamp a blood vessel - this is completely safe for one minute, but if they leave it clamped more than one minute, the patient dies. They clamp it as usual, but I rush into the operating room and kill the surgeon and all the staff. The surgeon is unable to remove the clamp and the patient dies.
It sounds like you're insisting I have to say the surgeon caused the patient's death and I was only tangentially involved. This seems contrary to common usage, common sense, and communicating information clearly. I have never heard any philosopher or dictionary suggest this, so what exactly is your argument?
The surgeon did, unintentionally, cause the patient's death, because he clamped the blood vessel; you didn't, so you didn't cause the death. Whether you're *responsible* for the death is a different question.
Causation is a physical thing as opposed to a moral one. Causally, if the surgeon had to leave the room for some good reason, justifiably confident he'd be back in a minute, and you inadvertently detained him in some way (not knowing about the clamp) so that he didn't get back in time, your causal position is the same; you're a but-for distal cause of the death (as is almost everything that's happened in the preceding light-cone of the death), and not the proximal cause.
Whether you're morally responsible for the death is different in both circumstances, but depends on the moral framework you're using (if it even cares about "moral responsibility" or anything analogous). In law, if you knew about the clamp (or ought to have known that the patient was likely to die for some reason if you killed the surgeon mid-surgery) you're guilty of murder in your scenario but not in mine, and otherwise guilty of manslaughter. For this reason, there aren't a lot of moral frameworks that care about causation without caring about intention.
What if I'm besieging Leningrad and (lots of) people there starve to death. Did I "cause" those deaths, even though I didn't directly physically interact with those people and their metabolism requiring food predates my siege of their city?
You're morally responsible for it, but you didn't physically cause it (especially as presumably you're not besieging Leningrad on your own, but sitting in a bunker somewhere telling a few people to tell a bunch of people to siege Leningrad).
I was thinking of the people who were being told to siege Leningrad, but that does enable me to turn this into a variation on a Yo Mama joke: This dictator is so fat, when he's around a besieged city, he's AROUND the besieged city.
Hahaha, I was thinking you'd have to run really really fast to block all the routes in and out.
For the individual besiegers, the causation is really weird - if any individual one of them doesn't participate, the siege still happens and the inhabitants starve, so even in terms of but-for causation it's not clear that any individual soldier has caused anyone to starve. They'd still be morally responsible from their joint intention to besiege though.
Surely you *do* physically cause it, with a few extra steps. You produced sound-waves/put ink down on paper in patterns which resonated into the neurons of some other apes until down the chain it caused some of them to take up guns and tanks and go sit around Leningrad.
Everything has multiple causes. All fires are caused, among other things, by oxygen, but we rarely consider oxygen to be "the" cause of a fire. When we pick out something as "the" cause, we are usually looking for something that varies and something that we can control. All fires require oxygen, but oxygen is not a variable enough factor compared to dropped matches and inflammable materials.
Context matters as well. A coroner could find that the cause of Mr Smith's death was ingestion of arsenic, while the judge finds that it was Mrs Smith. It would be inappropriate to put the arsenic on trial and punish it, because it is not a moral agent... but it is a causal factor nonetheless.
If you are trying to find someone to blame for a problem, that is one thing, if you are trying to have less of it, that is another.
I don't think "moral causation" as a phrase/category adds anything to anyone's position; if Mrs Smith put the arsenic in Mr Smith's tea thinking it was sugar, or if she put it in because she wanted him dead, her position in the causal chain is the same. It's a peculiar use of the word cause that muddies the waters and either 1) doesn't pick out a different category from moral responsibility (or "fault," if you like), or 2) picks out a subset, defined purely by whether you can slot something neatly into the chain of physical causation.
Moral causation adds the ability to praise and blame people for their intentional actions. But maybe you feel the criminal justice system doesn't do anything.
Criminal law distinguishes between cause and intention, and as a general rule both are needed as separate elements of a criminal offence. The causation part is factual causation though.
Event A is the proximal cause of Event B if Event B immediately follows Event A and is the direct consequence of Event A, with no intervening circumstances.
Event A is a distal cause of Event B if, but for Event A, Event B wouldn't have occurred (and Event A isn't the proximal cause).
No. Come on, you've made up a nonsensical, nonanalogous situation. (Why would you ever make up a story where you are a killer? That is messed up. Fully intended as a kind, true, and necessary statement for your own good.)
Reasonable foreseeability is the concept, as opposed to hypothetical or tangential foreseeability. It is reasonably foreseeable that a lunatic rushing into an OR could also lead to the death of the patient even if malevolence was only directed at healthcare providers.
Note that one cannot know in advance what the results of the proposed study will be unless it is so obvious (parachutes) that there was no need for a study or IRB in the first place.
The farrier who lost the nail did not really lose the war!
IRBs may well need reform but they don't need eliminating. The root problem is iatrogenesis. Medicine is making us healthier and sicker.
I fully support rigorous and ethical studies. But Illich is worth at least a consideration. Studies do not treat a patient, they study a population.
I'm no expert. But my own meditations have led me to view causality as "d/dx applied to logical conditionals" a la Pearl. In this framing, "reasonable foreseeability" boils down to whether the reference-class of a particular action reliably precedes bad-outcomes. Except "reasonable forseeability" makes reliable reference-classes seem qualitatively distinct from unreliable reference-classes (rather than a difference of degree) so that you can claim that distal causes "aren't true Scotsmen". If Scott were arguing that safety committees as a reference-class reliably lead to bad outcomes, then I would agree with you that Scott is wrong. But a colloquial reading suggests that Scott is discussing a particular committee in a particular state of the world.
any objections?
> Note that one cannot know in advance what the results of the proposed study will be unless it is so obvious (parachutes) that there was no need for a study or IRB in the first place.
Scott isn't discussing the outcome of a particular study. He's discussing the outcomes in aggregate.
Uh... I was expecting objections related to why you find the phrase "IRBs cause deaths" unreasonable.
> Outcomes in advance for ANY particular study or in aggregate for all or most studies aren't known.
Statistics is predicated on the idea that an aggregate is (almost certainly) a higher-res representation of the underlying distribution than any single datum, is it not? Scott is looking at the field of medicine as an aggregate and complaining that progress is too slow because the OHRP is far too skewed toward preventing false-positives rather than preventing false-negatives. Imagine your email provider sorts all your email (including the obviously not-spam) into the spam folder. "It's fine" you say, "since it's impossible for the provider to know in advance whether I'll judge it as truly spam."
> Observational studies are generally pretty poor at telling us anything.
> And RCTs depend on arms and endpoints.
I'm aware of this. I would have emphasized this by saying "interventions" if I had thought it relevant.
And if you're going to stand by the "we can never truly know if the OHRP is causing deaths" story, does that not entail that we can never truly be sure that the OHRP prevents harm on net? You can't have it both ways.
"Y caused X" to me means ceteris paribus X would not have happened if Y had not happened. So both the clamping and shooting the surgeon are causes of the death of the patient, because if either one of those things hadn't happened the patient wouldn't have died.
Tens of thousands of people would have not-died if the OHRP hadn't stonewalled doctors trying to do research, therefore the OHRP is one of the causes of those deaths.
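For what it's worth, here is a minimal formalization of that but-for reading in Pearl-style counterfactual notation; the notation and the variable assignments are my gloss, not something anyone above committed to.

```latex
% But-for (counterfactual) causation, sketched in Pearl-style notation.
% X = 1: the OHRP blocks the research;  Y = 1: the deaths occur.
X \text{ is a but-for cause of } Y \iff
  (X = 1) \wedge (Y = 1) \text{ in the actual world, and } Y_{X \leftarrow 0} = 0
% i.e., holding everything else fixed (ceteris paribus), setting X to 0 makes Y not occur.
```

On this reading an event can be one of many but-for causes at once, which is the "ceteris paribus" point above.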
You don't know in advance what the results of the research would have been.
What we can generally say is that the entire system of medicine produces deaths that would not have happened if there were no medicine. Iatrogenesis. It is also true that the entire system of medicine has cured disease and reduced certain cause-specific deaths.
2. Probably, if we are precise. Remember though the famous doctor strike which led to fewer deaths!
3. Re: OHRP not existing; no possible way to know that. And the removal of ethics from medicine is a very bad idea. Should prisoners give up informed consent? Maybe that would "save" lives? No.
How about killing - an action which leads to death where there is a reasonably foreseeable risk of death. We measure reasonable foreseeability at the time of the action.
We do studies because we do not know something. The prevention of death can be pretty complicated. Does drug A prevent a death from cancer? Well, maybe in half the cases where it is used. Well, what about the other half? Shrug, people still die. Does drug A also lead to death because of its toxicity? Well, yes, there is a reasonably foreseeable risk of death, but it is small (whatever that means). So preventing the prevention of death is a pretty complicated ball of wax. Don't take drug A if you are allergic to drug A. How do I know if I'm allergic? If you take drug A and die soon thereafter from a reaction, you were probably allergic.
IRBs can only be measured on the basis of whether they prevent unethical research. Maybe we have reached equipoise about whether IRBs (which are the de facto standard of care for research on humans) are successfully preventing unethical research, and we need to do studies. How to design such a study might be very difficult or impossible or maybe very easy, I don't know.
We have a situation where there is obvious iatrogenesis. It existed both before and after IRBs, but IRBs are not intended to aid that problem. IRBs are only to ensure that research on humans is ethical. Have there been unethical studies notwithstanding IRBs? I think the answer to that is yes.
I notice you helpfully defined killing, which is nice, but not prevention of prevention of death. I know this may come across as pedantic, but the reason I'd like to see that is that your definition (like most reasonable definitions trying to support your type of position) makes less-rigorous-than-at-first-apparent words like "action" do a lot of heavy lifting.
(I could also pick on what exactly is the meaning of "reasonable foreseeability" but that doesn't seem like a fundamental ingredient in this disagreement.)
So I guess the most fundamental equivalentish question here is: what is an action?
E.g., am I killing the starving child I could have fed? I am reasonably sure I could find and feed such a child!
Or, were I a doctor, would I be killing a patient I refused to treat for something very deadly and treatable if no other doctor were reasonably available?
I don't see how we can reasonably say that IRBs are killing people. Killing someone is not the same thing as preventing someone from reasonably foreseeably preventing a death.
Are you killing a child, no. Are you culpable for some kind of other ethical or moral failing, maybe.
Someone gets into a fender bender with the only surgeon who can do a life-saving procedure scheduled in 2 days. The surgeon, who was actually the negligent driver, can't do the surgery because of an injury from the fender bender, and the patient dies. It is not reasonably foreseeable that negligent driving would lead to delay or cancellation of surgery. The surgeon might be responsible for the fender bender but is not responsible for the patient's demise. Even "but for" causation has legal and moral limitations.
What if we reframed the article, instead of using the word cause we wrote something like "in a world without the stricter IRB system tens of thousands of people would still be alive". Do you disagree with that statement, and if so is it the empirical claim you disagree with (i.e. you think those people would not be alive in a world without the stricter IRB system)? To me, Scott's argument is still very persuasive even if I accept your point that saying the IRB *caused* the deaths to happen is wrong/imprecise. A second question I have is, under your view does the IRB system *cause* any lives to be saved? Or should we just abandon using the word cause in this type of way completely.
I disagree with the statement only because it presupposes the outcome of studies (ostensibly delayed by zealous IRBs).
If we knew the outcome we wouldn't need a study and IRB would not be involved.
IRBs were invented after I was born, so I like to say they are, like me, still a work in progress.
Medicine should become more scientific but medicine is not manufacturing.
Even manufacturing has a method to determine when a destructive test should be done. See Deming, Out of the Crisis, p. 418 ff. (1993 edition) (1982).
Utilitarianism alone cannot cure the problems of medical reversals. The problems with utilitarianism have been pretty clear since Swift's A Modest Proposal.
It is wishful thinking: if only medicine were more scientific (like quality manufacturing) (pesky IRBs), no one would die.
Unfortunately, people will always die. You cannot test or measure your way out of that dilemma. (For some, Easter is the long-term solution to that problem.)
I'm not sure if I really disagree with you; I'm struggling to see where you disagree with the overall premise of the Book Review. I agree people will always die, and I generally agree that you can't just do a ceteris paribus style analysis and easily conclude "X more people would be alive". But none of that seems to contradict the hypothesis that the current IRB system is too strict, and unnecessarily delays/halts studies that could be enormously beneficial.
But if we know the eventual result of the study, it seems like we can reason in just this way.
Consider the FDA delaying the approval of beta blockers for many years in the US, after they were in widespread use in Europe. At the time they did this, maybe it wasn't possible to know for certain if this would be a net good or net bad. But now, many decades later, we can know that it was a bad decision to delay beta blockers, one that must have resulted in more people dying of heart attacks than would have died in a parallel world where they'd been approved in the US a decade earlier.
I read it as just rhetorical hyperbole. A strenuous call to consider things unseen as well as seen, to use Bastiat's potent phrasing. I don't think it's meant to be taken literally, and if we insist everyone composing an argumentative essay (as opposed to drafting legislation, say) eschew rhetorical exaggeration, we turn into Vulcans and can only have sex every 7 years.
"Cause" is a very imprecise word anyhow. If a man dies of a heart attack, all of the following are in some sense causes: Man's cell phone was lost so it took him a long time to contact someone. Man did not take his prescribed heart meds. Ambulance got caught in traffic jam. Man was human and everybody dies. Heart stopped beating. Lack of blood flow to brain caused irreparable damage. So saying that an IRB's stopping a certain piece of research from being done is a cause in one of the looser senses of the word, but it's not nonsense. Consider the man with the phone. Maybe it's true that if he had gotten to the hospital 20 mins earlier he would have lived, and that if his cell phone had not been lost he would have gotten there 20 mins earlier. It would not be absurd for his spouse to think, guiltily, that she knew he was bad about keeping track of his cell phone, and that if she had devised some better system for keeping it from getting lost that he would probably be alive today.
We could change the wording from "IRB caused deaths" to something else "IRB's refusal to approve experiment makes it responsible for the deaths" -- but what's your point exactly? Everybody agrees the IRB's preoccupation with never making a mistake in the direction of allowing too much has led to it making many mistakes in the direction of allowing too little, and that those mistakes have interfered with putting into practice some things that would have saved a lot of lives. Nobody thinks the IRB members went out with guns and shot all the people. Is your point purely semantic? I don't really get what your problem is with the phrase "IRB caused deaths."
"Everybody agrees the IRB's preoccupation with never making a mistake ..." No everybody does not agree.
Medicine is not manufacturing.
What is the problem? Saying IRBs cause deaths is not accurate. And it is a sketchy rhetorical trick. Once we agree that A causes a lot of death, then the next logical step may be to get rid of A.
If I said that your perseveration over this one issue is giving me and various other people a headache, do you think that's a dangerous rhetorical trick too? Like that we're on a slippery slope, and we're more than halfway to accusing you of physically tormenting those who disagree with you, and that if we're not called out on our use of "headache" here someone might actually try to bring assault charges against you?
If you think my comment does not meet this criterion, feel free to report it. I actually think it passes. Here's why: My comment is an argument against your point that "saying IRBs cause deaths is not accurate. And it is a sketchy rhetorical trick. Once we agree that A causes a lot of death then the next logical step may be to get rid of A." I'm saying, so does that principle apply in other situations -- the principle that if you speak in a loose way about something harming people, the world will take what you say literally and punish whoever you are complaining about as though they had *literally* done the harm?
Of course my comment also is a way of saying that it seems to me that you're nitpicking, and I find it irritating, and it looks from the comments like others do too. Would it be less offensive if I said your perseveration was irritating, rather than that it was giving people a headache? Do you really think it's unkind and unfair to tell someone you find their remarks irritating? That's different from saying they're basically an irritating person, you know. And it leaves the listener free to say that the speaker's emotional reaction is a result of some flaw in them, or of their failing to grasp the listener's point.
"Irritating" I thought most of the people here were supposed to be rationalists. Why would so-called rationalist get irritated.
People have pushed and I responded. Normally, the moving party gets the last word. Right? The alleged irritation seems to be a desire of proponents opposing mine to have the last word.
Scott personally responded with a hypothetical in which he personally was rushing into an OR to attack healthcare personnel (a rather disturbing idea). To which I responded with a pretty understandable idea of reasonable foreseeability. Discussion over.
To say the IRBs have killed people or caused deaths is a very radical position. It is as radical as suggesting that doctors who failed to sign the forms or make adjustments required by IRB caused death (also not true.)
IRBs keep medicine from becoming manufacturing. They are based in the recognition that people are not objects upon which experimentation can be done without complete and voluntary participation with informed consent.
It's the trolley problem, and I'm standing there and about to pull the lever to kill 1 person and save 5.
But then you jump in my way and restrain me, and the 5 die.
There was clear and reliable causal chain of events in place that was going to end with 1 person dying.
Instead, you intervened in a way that was clearly and legibly going to interrupt that causal chain of events, and instead institute a different causal chain of events that ends with 5 people dying.
I think it is totally fair to say you caused 4 extra deaths in this case.
You took an affirmative and intentional action, that action increased the number of deaths by 4, and this outcome of your actions was entirely predictable to everyone involved at all times.
If we don't call that 'causing', then I think the word loses all practical meaning and usefulness in these domains.
I'm a Chicagoan born and raised, and resumed being a city resident some years ago after living in a close suburb for a while. Rahm turned out to be pretty generic as big-city mayors go and surprisingly meek about it.
That last part was unexpected and I still wonder about it. Going in he'd seemed likely to be either a bold "move fast and break things" mayor or a "pointlessly tick off pretty much everybody and have no particular base of support left" train wreck. And he definitely never stopped _talking_ about being the first thing, but...by his actions in office he was mostly straight out of central casting. (E.g. he talked big about standing up to the teacher's and police unions but in actual contract negotiations got absolutely rolled by both of them.)
Rahm won a second term only by being gifted with a terrible opponent, and in the end his deciding not to run for a third term was greeted with a collective shrug. My neighbors and I -- some of whom voted for Rahm and some didn't -- agree that we did not see _that_ outcome as a possibility with him.
Reminds me of The Governator, although some have claimed he was neutered (so to speak) by his wife's clan finding out about the housekeeper, and establishing a quid pro quo for silence until the end of his term.
Ezekiel Emanuel literally believes that life after 75 isn't worth living, and opposes all life-extension measures because he believes people of that age generally cannot do meaningful work. (I have seen him defend this, in person. This is not a straw man.) I do not know why we think he should be trusted as an ethical authority.
Am I understanding correctly that your argument is that Zeke Emanuel's opinions about what the good life is are in no way affecting his ethical advice?
What good historical examples are there of systems this broken being radically redesigned or recreated, with great results afterwards? As far as I know, most of them occur during large power shifts or violent revolutions, but I'd love to see what types of strategies worked in the past aside from those.
China did manage to transform its basketcase of an economy by allowing initially small doses of capitalism. But I guess Mao-Deng transition could qualify as a large power shift. Probably either political or technological shifts of at least this magnitude are required for any consequential radical redesigns.
One positive example is that the USG set airline fares (and had to approve new routes) and I think also set trucking freight rates for several decades. Then, they stopped, and the world is a much better place for it. That's an instance where a destructive regulatory regime went away and things seem to have gone okay.
> "A few ethicists (including star bioethicist Ezekiel Emanuel) are starting to criticize the current system; maybe this could become some kind of trend."
Nir Eyal's guest essay 'Utilitarianism and Research Ethics' is a nice contribution in this vein:
(AFAIK, Eyal is not himself a utilitarian, but he here sets out a number of potential commonsense reforms for research ethics to better serve human interests. He's the inaugural Henry Rutgers Professor of Bioethics at Rutgers University, and founded and directs Rutgers’s Center for Population-Level Bioethics.)
>the Willowbrook Hepatitis Experiment, where researchers gave mentally defective children hepatitis on purpose
I ran across this study while reviewing historical human challenge trials as a consultant for 1 Day Sooner. Having not previously known about it, I found it quite shocking. They certainly learned a lot about hepatitis, but they definitely mistreated the kids.
The story sounded crazy to me, too. It was about Hepatitis B if I got it right, which is usually transmitted by sexual acts, giving birth and needle sharing. If people are bleeding all over the place regularly and the disinfection regime of that time and place wasn't sufficient, that may have been a reason for the high infection rates. But I couldn't help imagining orgies of inmates and staff. Hepatitis B is not harmless and has only been really treatable for some years. Back then it doesn't seem to have been regarded as a very serious issue.
I have to tell my consent form story. I was asked to join an ongoing, IRB approved study in order to get control samples of normal skin to compare to samples of melanomas that had already been collected by the primary investigator. The samples were to be 3mm in diameter taken from the edge of an open incision made at the time of another surgery (e.g. I make an incision to fix your hernia and before I close the incision I take a 3mm wide ellipse of extra skin at the edge). There is literally zero extra risk. You could not have told after the closure where the skin was taken. The consent form was 6 pages long (the consent form for the operation itself that could actually have risk was 1 page and included a consent for blood products). I had to read every page to the patient out loud (the IRB was worried that the patients might not be literate and I wasn’t allowed to ask them because that would risk harm by embarrassing them). They had to initial every page and sign at the end.
I attempted to enroll three patients. Every one stopped me at the first page and said they would be happy to sign but they refused to listen to the other 5 pages of boilerplate. The only actual risk of the study seemed to be pissing off the subjects with the consent process itself. I quit after my first clinic day.
I never did any prospective research again as chart review and database dredging was much simpler.
You will also find that many institutions are dodging the IRB by labeling small clinical trials as “Performance Improvement Projects” which are mandated by CMS and the ACGME. They are publishing the crappy results in throwaway or in-house journals to get “pubs” on resident CVs.
We could get reform pretty quick if some adventurous Federal appeals court were to order that *all* consent forms had to follow the same standards. So e.g. before you can click through your Facebook/Google account creation or Windows install license agreement, a lawyer from FB, GOOG, or MSFT has to call you up and personally read every page to you, slowly, and you have to DocuSign each page as he does. Better set aside the whole afternoon if you're buying a car or house ha ha.
The broader question of "why did all this administrative bullshit massively explode in the 90s and cripple society" seems underexplored, presumably because the administrators won't let you explore it.
Wasn't the whole point of the 90s the triumph of neoliberalism and deregulation? How come a less regulated economy goes hand in hand with a more regulated research sector?
Sort of a cyclical / two-sides-of-the-same-coin answer:
Without the excuse of 'we were following all of the very strict and explicit regulations, so the bad thing that happened was a freak accident and not our fault' to rely on, companies had to take safety and caution and liability limitation and PR management into their own hands in a much more serious way.
And without the confidence in very strict and explicit regulations to limit the bad things companies might do, and without democratically-elected regulators as a means to bring complaint and effect change, we became much more focused on seeking remedy for corporate malfeasance by suing companies into oblivion and destroying them in the court of public opinion.
Basically, government actually *can* do useful things, as it turns out.
One of the useful things it can do is be a third party to a dispute between two people or entities, such as 'corporations' and 'citizens', and use its power to legibly and credibly ensure cooperation by explicitly specifying what will be considered defection and then punishing it harshly. This actually allows the two parties, which might otherwise be in conflict, to trust each other much more and cooperate much better, because their incentives have been shifted by a third party to make defection more costly.
Without government playing that role, you can fall back into a bad equilibrium of distrust and warring, which in this case might look like a wary populace ready to sue and decry at the slightest excuse, and paranoid corporations going overboard on caution and PR to shield themselves from that.
Alternatively, the deregulation didn't go far enough, as it didn't touch tort law. Based on the US tort cases I hear of, IMO tort liability should be significantly curtailed: a defendant should only be held responsible if it either caused the harm to the plaintiff directly, or intentionally, or if it created a situation far outside the range of possibilities a reasonable person should expect, or if it breached a regulation. Also, damages awards in America are often higher than I'd consider reasonable by an order of magnitude (occasionally, for emotional harm, by several orders of magnitude).
More importantly, any tort liability should be waivable in contract.
---
What do you mean by cooperation and defection? As far as I understand, this terminology comes from the prisoner dilemma, and it applies to game theoretic equivalents of the prisoner dilemma or the tragedy of commons. But I don't see how that applies here. If an entity puts me at risk in a way I consent to, I don't see how that's defection, yet regulation often bans it.
In the examples cited above there isn't any litigation mentioned as preceding the rise in IRB oversight. If litigation was the response to bad studies, instead of trying to prevent them from happening through IRB, then studies like Scott's with no chance of harm wouldn't have been prevented, but people harmed by studies would have recourse.
The economy is not less regulated at all. There has been no overall reduction in regulation at any point since WW2 in the US. You may be able to point to specific industries (like airlines); however, usually that's a specific type of regulation that is removed while others spring up in its place. Different regulation does not mean deregulation.
That overall statement is accurate if the measures used are things like total number of rules issued, total numbers of pages of rules, etc. And sometimes those sorts of crude measuring sticks also line up with a qualitative judgement; I would nominate for instance the federal tax code as one such example.
In other examples though, the sheer bulk of published regulation doesn't change the fact that at its heart a given sector is far less regulated than when I was a kid. Airlines, telephony, consumer banking, trucking, freight rail, some other things. E.g. there are far more pages of federal rules related to air travel now than in the 1970s, but no sensible person would dispute that the 1970s airline industry was drastically more regulated -- in its essentials -- than it is today.
Maybe there are two things here - the overall degree of external control, and the degree of bureaucratic micromanagement. For an example of one without the other, how about a country with a dictator who issues verbal orders and will shoot anyone who doesn't comply or who doesn't perform well. If he tells his local airline to fly particular routes, they're very much under his control, but at the same time this isn't what we'd call "regulation".
I’m surprised doctors have only wished death upon the IRB. The thousands of lives lost due to study delays vs. the lives of a few IRB administrators sounds like a very easy version of the trolley problem.
I imagine you're joking, but in case any reader thinks otherwise, I'd like to point out that the administrators would just be replaced, you'd harm your cause by associating it with murder, and you'd harm civilized society in general by making it a less generally-law-abiding place.
(Real life almost never presents anything as simple as a trolley problem.)
I'm not *disagreeing* with you, but these sound very similar to reasons why, 200 years ago in the American South, killing slaveholders would be a bad idea.
I think, as a lone individual, it would have been? Putting aside arguments about whether a peaceful end to slavery was possible for the USA, a lopsided civil war is a very different prospect than an attempted slave revolt (which were almost invariably unsuccessful and resulted in truly horrific reprisals). An abolitionist in early 19th century USA is much better served politically campaigning in the North than trying to be a vigilante in the South
So, like, as a matter of rational utilitarian public policy, I agree.
However, there's also a part of me that says, fuck that, any survivors are owed a blood debt, and unless sufficient recompense is made, they've got a personal right to vengeance that supersedes any human-made law. And as of yet, no amount of rational argument has made a dent in this part of me; it's like trying to talk myself into believing gravity doesn't exist. (But who knows, maybe one day I'll throw myself at the ground and miss.)
I mean, I don't like it, and in my case I'm pretty sure most of it is from PTSD, and I'm working on that. So far that's therapy, medication, drugs, prayer, and meditation, but not much progress over too long a time. :-/
But for the sake of discussion, I'll suggest that the problem is largely psychopaths and assholes, not their victims. And we shouldn't blame the tit-for-tat-style defensive mechanism that keeps said psychopaths and assholes from exploiting otherwise defenseless people.
I'm not sure there's any contradiction between those two things? It can be the case that you have a personal right to vengeance, and also that your particular plan for exacting vengeance is going to end horribly for everyone including you. Moral rights do not come with physical capabilities attached, nor do they grant immunity from consequences.
I wouldn't say that there's an inherent contradiction. But roughly speaking, human societies tend to have rules against doing things that end horribly, like blood feuds. And this seems to be mostly tolerable for most people, if the society provides an alternative way of dealing with the root cause. But what happens when something slips through the cracks, and society bans the older remedies without offering anything in their place?
While I agree that assassination is far from an ideal solution to the problem, an administrator who got the job because their predecessor was forcibly removed - as part of a highly publicized complaint about a proposal being rejected - would surely take that unlikely-but-catastrophic outcome into account during their own day-to-day assessments of risk. The question is how to make such an incentive similarly effective while minimizing damage to the wider rule of law.
That incentive is INHERENTLY a harm. We do not generally want public servants changing their decisions out of fear that one crazy zealot will murder them if they don't; that would lead to worse outcomes more often than it leads to better ones, by giving every lone crazy a veto.
(Imagine that same administrator changing decisions out of fear of anti-vaxxers, or the "cell phones cause cancer" people.)
In the current system they're changing their decisions out of fear that one crazy zealot will get them fired, but not personally killed, which is still a serious problem. I wouldn't be particularly surprised if cell phones actually did somehow cause at least one case of life-threatening cancer every few years, worldwide, but, yeah, that's not much of a peg to hang policy decisions on - and it's about the same level of excess caution being applied to medical research. Gotta re-balance those scales somehow.
How does this work in other countries? This review describes a peculiar sequence of events in the US. Not every country had a Tuskegee, a Hans Jonas, an asthma experiment death, or a lack of bigger issues to worry about. Yet the US can't be that much of an outlier either, otherwise this would be a story of how all medical research has left the US. (At least fundamental/early stage research and post-approval research; drug companies may need to do some trials in the US to get approved in its lucrative market.)
Did other countries independently take a similar path? Did they copy the US? Did they have much stricter laws on human experiments to begin with?
The USA is the biggest and richest drug market, and the FDA refuses to accept the results of trials not done in the USA, so a lot of research has to be done there. The early stage stuff isn't done on humans anyway, so it's possible the IRBs for eg. mouse studies aren't especially onerous, or maybe it's just that the inertia of being a very rich country is enough to stop people moving - at an individual level, moving overseas means a massive pay cut for any academic, the USA pays better than almost anywhere else.
This makes me wonder how much of an improvement to the world would be available by simply setting up reciprocal approval between FDA and the UK, EU, Canadian, and other first-world medical regulatory bodies.
Only in hindsight, because they got lucky. They didn't know that the children would be asymptomatic--the experiment was to prove it. What if they had been wrong?
It's not like hepatitis has no treatments, and they knew the kids were infected - I don't know about hep specifically but almost all diseases are much easier to treat if the diagnosis is 100% certain and known early in disease progression.
Yuval Noah Harari, in something of a throwaway line, points out that the UN Universal Declaration of Human Rights is essentially the closest thing we have to a world constitution, and the most fundamental of these rights is the right to life.
I raise this because I'd love to see how far a lawsuit against this IRB nonsense could go based on Human Rights and general wokeness; the essential argument being that this "oversight", far from being costless and principled, is in fact denying the human rights of uncountable numbers of people, both alive and not yet born, by preventing, for no good reason, the accumulation of useful medical knowledge...
Given the post-WWII circumstances of the creation of UN and its Declaration, I'm pretty skeptical that you would succeed in leveraging its principles into "we have a right to medical experimentation on human beings free of constraints!" OTOH, some time has indeed passed.
A couple of weeks ago I suggested (partly in jest) to some physicians who think about these kinds of issues that maybe a study where the participants are doctors' families, parents and grandparents would be useful to put some skin in the game.
That however would not really be a "randomized" study and would introduce confounding problems.
Taking the campaign for human challenge trials for Covid vaccines as an example, there was no shortage of volunteers. Many, many doctors historically experimented on themselves. Or are you intending to draw on the emotive difference between experimenting on oneself and experimenting on one's family?
P.S. I would still expect the vast majority to bite the bullet with family too - better to roll the dice and maybe find a cure for whatever disease the test is on than to suffer without treatment because of negligible risks
The issue is generally not "sacrificing others against their will", it is ALLOWING others who are VOLUNTEERING to make the choice to sacrifice themselves.
The most ridiculous version of this comes in the whole Henrietta Lacks nonsense, where we are supposed to be upset not that a person was experimented on, but simply that some tiny part of them was used for experiments.
There seems to be a truly incomprehensible gap here between people like me who look at this and say "yeah, so what?" and people who seem to think it represents a second holocaust.
Henrietta Lacks - she did not volunteer her cells. And money was made off of them. Somewhere between "so what" and "holocaust" is the right answer. Both of those reactions would be outliers.
Sure she did. She voluntarily underwent the cancer biopsy during which the cells were removed. What you probably mean is that she didn't volunteer the cells *as a culture line.* (Parenthetically, it would've been a bit tricky to secure the permission directly, on account of she had already died by the time the attempt was made to establish the cell line, but one assumes her heirs could've been asked, assuming her heirs were entitled as a matter of principle to inherit her cancer along with the remainder of her earthly belongings.)
As for the money...I dunno, there's a long line of finders-keepers in common law tradition that dims hope here. If I throw away a lottery ticket that turns out to be worth $100 million, I will have no luck at all suing the garbageman who picked it up and claimed the prize. Basically, what's yours is yours only to the extent you exhibit a clear intention to keep it. Perhaps if Henrietta had exhibited a desire to keep her cancer cells, the case would have a leg to stand on, but unfortunately one would guess that keeping her cancer was about the furthest thing from her mind.
Which is to say, if we hop in a time machine and give Henrietta a 6-page consent form in which we explicitly say we're going to attempt to create an immortal cell line from her tumor cells for all kinds of exciting research purposes that may pay off in 50-100 years, it would be pretty surprising to me if she said anything other than sure - fine - who gives a fuck? just get this shit out of me.
There seems to be a basic, incomprehensible, gap between those who understand the concept of willingness to sacrifice oneself (or the things one loves) and those who live in a world of Hume's finger scratching.
This gap is explored in great detail as one of the major themes of Terra Ignota, but in modern discourse both sides seem not to comprehend that the other side even exists as a set of real humans out there.
.............................
As I've said before, I'd love to see Terra Ignota turned into a seriously produced TV series; it could do wonders to improve public consciousness; but it's probably too complicated and too cross-cutting in its heroes and villains? (On the other hand, I could have said that about Game of Thrones, so...)
Of course the lighter version of this is seen in Joss Whedon's canon; both Buffy sacrificing herself (twice) to save everyone AND the slacker losers in The Cabin in the Woods refusing to sacrifice themselves -- destruction of the whole world rather than scratching of my finger, indeed.
I've noticed the bureaucrat hegemony as well on a smaller scale. I believe it's the sign of a mature, post-peak society. Everything is pretty much built out, so risk/reward favors rent-seeking petty overlords.
It's a sign that the low-hanging fruit has mostly been picked. "People who want to get stuff done, go somewhere else!" I mean, your examples are pretty piddly in the grand scheme. 100 years ago the study would have been identifying a deadly disease, and there would be no IRB (or grants or funding for that matter). Today it's settings on a ventilator? Small potato, even if the surface area is large.
The two historical examples that come to mind are late imperial China and (post?) Soviet Russia. Not a good omen?
I also remember one of Paul Graham's earlier essays saying something like "startups move fast & break things because they have nothing to lose. But as the company matures, red-tape creeps in. Because broken objects are alarming, while the costs of red-tape manifest as invisible externalities." I don't remember which one it was, though.
It's a pretty small fraction of people involved in startups, but a high fraction of people involved in startups that involved defrauding investors, stealing money, engaging in money laundering, etc.
If you are a wealthy society, you might as well spend the money on nice things. Regulation, done right, at least, is nice things. It means you're not sending children up chimneys or making people work 18-hour days in factories.
Sure, but as with all things, there are diminishing returns. The first step in the safety regs where you require guardrails over long drops, forbid exposed high voltage wires in reach, etc., gives you large benefits. The five hundredth step where you have covered every flat surface at the worksite with warning signs and the employees spend 50% of their time at safety trainings gives you very few additional benefits.
I think this gets at one of the fundamental problems. If the ideology is to press for more safety, it never stops. We need an ideology that presses for the optimal amount of safety.
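One stylized way to pin down "the optimal amount of safety" is the usual marginal condition; this is a toy model with assumed shapes for the curves, not anything any regulator actually computes.

```latex
% Toy model: n = number of safety measures adopted.
% B(n) = total benefit (harm averted), with diminishing returns, so B'(n) is decreasing.
% C(n) = total cost (money, time, research forgone), with C'(n) roughly constant or rising.
\text{Choose } n^{*} \text{ such that } B'(n^{*}) = C'(n^{*}).
% Guardrails over long drops: B' is enormous, so adopt them.
% The five-hundredth warning sign: B' is near zero while C' is not, so stop well before it.
```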
I don't think it's the sign of an *actual* post-peak society, any more than similar malaise and conservatism in the late Roman Empire was a sign that the Mediterranean littoral had achieved everything it was possible for men to achieve by AD 300. It definitely seems like a sign of a lack of self-confidence, a loss of mojo, a belief that we, at least, are severely limited in what we can achieve further, so we might as well turn our attention to seeing that the pie is cut up exquisitely fairly, and everybody gets at least a participation prize.
Why can’t you sue an IRB for killing people by blocking research? You can clearly at least sometimes activist them into changing course. But their behavior seems sue-worthy in these examples, and completely irresponsible. We have negligence laws in other areas. Is there an airtight legal case that they’re beyond suing, or is it just that nobody’s tried?
If they're created by statute, then sovereign immunity would prevent you. If they're acting in accordance with statutory or regulatory requirements, then they can't be in breach of a duty of care by failing to undertake an unlawful act. Failing both of those, it would be too remote.
One major problem is that it is often hard to identify the specific victims -- the people who would have been helped by the research, if it had been carried out. Unless you can find identifiable victims of the IRB's refusal, you don't have anyone who has standing to sue.
Even if you get over that hurdle, to find negligence by the IRB, you would have to show that the IRB had a duty of care towards the potential beneficiaries of the research. Most courts would be reluctant to find such a duty without an explicit law passed by Congress (or a state legislature) creating one. After all, you have no right to demand that the researcher do the research in the first place. If you aren't entitled to the research at all, then it doesn't matter whether the researcher decided not to do it, or the IRB vetoed it.
“He was uncertain that people could ever truly consent to studies; there was too much they didn’t understand, and you could never prove the consent wasn’t forced.”
Makes me think that we’ve finally found a real live member of the BETA-MEALR party:
It isn't wholly impossible for the government to eventually settle on a fairly sane set of regulations. Traffic regulations work reasonably sanely. But this isn't the way to bet, particularly in a new area of technology.
Yes, AI is potentially dangerous. While I think Eliezer Yudkowsky probably overestimates the risks, I personally guess that humanity probably has less than 50/50 odds of surviving it. Nonetheless, I would rather take my chances with whatever OpenAI and its peers come up with rather than see an analog to the telemedicine regulation fiasco - failing to solve the problem it purports to address, and making the overall situation pointlessly worse - in AI.
The claim in this essay (“Repeal Title IX,” https://www.firstthings.com/article/2023/01/repeal-title-ix) is that Title IX bureaucracy followed the exact same crackdown-> defensive bureaucratisation as Scott describes below for IRBs. (I see more upside for Title IX than the author, but I find her overall analysis of the defensive dynamics compelling).
“The surviving institutions were traumatized. They resolved to never again do anything even slightly wrong, not commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didn’t trust IRB members - the eminent doctors and clergymen doing this as a part time job - to follow all of the regulations, sub-regulations, implications of regulations, and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves, had no particular interest in research, and their entire career track had been created ex nihilo to make sure nobody got sued.”
It's almost as if our current practices of science impede the practice of science.
This is yet another example of how attempts to reduce risk in fact serve to reduce variance, and this turns out to be net negative. True of a broad range of policies justified in the name of safety.
Most of the comments diving into the details of IRBs are missing the point. The dynamics on display here have little to do with IRBs in particular. They're common to most institutions set up to prevent risk by requiring prior permission.
Interesting framing, but it rests on an unproven assumption: that you can place a point on that x axis of quality (i) accurately and (ii) quickly.
If this assumption is true, I agree with everything. Unfortunately, the assumption is very likely false.
If it is false, the dichotomy is not useful. Because if you approach the problem as a strong link, you're basically maximizing the number of points you get. If you do not place them accurately and/or evaluating each point takes a lot of time/effort/expertise, you end up with a worse outcome overall _even if_ you have generated a lot of excellent points.
To use the same example, if we have tons of papers published and we suck at knowing which ones are good and which ones are bad, the good ones will not emerge to the top and the bad ones will not fade into oblivion. So even if we get 10 revolutionary papers a year, we would never know and go on with the status quo as if they never existed.
Ironically, this is _exactly_ how scientific publishing works right now. The author laments that we're approaching science as a weak link, but the publish-or-perish paper mill everybody is high on treats science as exactly a strong-link problem, maximizing output and taking crapshoots at everything. The reaction the author is lamenting is an attempt to figure out how to better evaluate the quality of the outputs.
Ugh, what is wrong with our species? Maybe we are ungovernable. This fungus of dysfunctionality sprouts in any crevice, and there are always, always crevices.
The way around the rot that accumulates within an organism's lifetime is the creation of a new organism without all that rot. The cultural group selection discussed in Henrich's "The Secret of Our Success" replaces groups that have rotted with ones still thriving. However, that kind of selection isn't really taking place in the modern day. We have deliberately made lots of barriers to one of the biggest mechanisms of such group selection (war), and things like nukes also make it much more of a hassle to overthrow an obviously dysfunctional government like North Korea.
What's wrong with being ungovernable? I don't intend to be governable. I'm not a cog in a machine, still less a program on a CPU, and I don't want to be either.
How did we go from not everything can be achieved by government to zero government? In any event, I'm just objecting to what I saw as the implication that "governable" is an appellation a free man would find complimentary.
I mean, I feel like bellowing "This! Is! Sparta!" but lack the abs to pull it off, alas.
Law: You may step over a drunk in the gutter with impunity. But you come to his aid at your peril.
People are more upset at losing a $20 bill than they would be to learn that they had walked past a $100 bill.
2. Bambi's mom and Stalin
We can, with complete indifference watch vast tracts of wild lands disappear, but we shed tears over the cute orphaned fawn. "One death is a tragedy. A million deaths is a statistic."
3. Everything happens for a reason
No bad outcome without a bad actor. Someone is to blame, and must be punished. To carry out this principle, new duties continually are being created by statute or judicial decision.
4. Living in the moment
Just as we find deferred gratification difficult, and discount future positive outcomes, we also underweight future negative outcomes.
5. Rules based order
Life is chaotic and full of lurking dangers, and must be constrained, so we have rules. But life is also full of paradox, randomness, and edge cases that must be handled on a case-by-case basis until some underlying regulatory principle can be discerned. The grown hall monitors who administer the rules know that exercising judgment is far more likely to get them canned than not, so they will only do it when the new rule is almost in place. For a sweetener they usually ask for an expanded scope of authority and a new cadre of courtiers to support the enhanced dignity of the bureau.
6. Petty tyrannies
Hall monitors grow up but never lose the taste for pettifogging displays of social dominance.
yeah okay, but this stuff has been true for thousands of years - there's still an interesting question of why regulatory barriers appear to have been increasing since some time in the 90s
Mostly the late 70s. Here's a list of the things that the Carter Administration deregulated with the support of the Democratic-majority Congress:
air travel (signed 1978)
trucking (signed 1980)
freight rail (signed 1980)
consumer banking (signed 1980)
long-distance phone service (signed 1980)
Carter also continued and amplified the Nixon Administration's legal fight to deregulate telephony more generally, and issued an executive order requiring federal agencies to perform and to publish specific analyses of the costs and benefits of proposed new rules, and instigated the first-ever federal law requiring agencies to consider alternatives before issuing new regulations.
Then here's the list of things that the Reagan Administration deregulated:
Carter was strongly anti-monopoly and viewed deregulating those industries as an economically-progressive reform. The left wing of his own party mostly reacted in the knee-jerk manner, but he was able to find enough Democratic centrists to combine with GOP votes to get those measures passed and then he proudly signed them into law.
There was almost no other policy area, domestic or foreign, in which Carter possessed and successfully carried out such a clear and coherent strategy. Also he became obsessed with and put most of his personal energy into what became the Camp David Accords. Meanwhile his skills at and frankly interest in haggling with Congressional leaders were so limited that by 1980 he'd basically painted himself into a box politically.
He and his close advisors have never stopped believing that had the Iranian situation not blown up and those embassy hostages not been taken [an outcome which Carter's own fumbling and bumbling helped to enable, just to be clear], Carter would have run for and won a second term largely on the strength of those economic reforms. As you can probably tell, I disagree -- all things considered, pretty much any GOP nominee in 1980 would have defeated him.
(In full disclosure my mother was appointed to a key federal position by President Carter and never had anything but the highest regard for him. She's gone now but just for the record, opinions expressed here are no one's but mine etc.)
So essentially a leader who almost incidentally believed in and carried out some reforms? No particular movement/lobby/general broader belief that we could point to? How did he tap the right expertise? For example, it could be argued that in the US, the left is pretty anti-monopoly, but they seem to be very ham-handed in how they're going about it.
Also, were all the industries you listed monopolies, and hence coming under Carter's deregulation scanner? Because my limited understanding, and mostly from the airline deregulation, is that prices were deregulated.
I think there will not be much regulation at all, because (1) we do very little regulation of tech companies now; (2) there is a great deal of money to be made out of GPT4 and the like, and so lots of interests will apply pressure in the direction of not regulating; (3) you have to be fairly sharp and well-informed about tech to understand the issues, and most in power are not.
I don't understand why we need regulation of research ethics at all. As long as the researchers aren't doing something that is illegal for a "civilian" to do on their own, why do we need an IRB to monitor them? All of the examples of bad research you give here are very clearly torts, and any competent lawyer could easily extract massive settlements from those doctors and their institutions nowadays. Fear of lawsuits on its own could deter the majority of truly harmful research.
If you're giving someone an experimental medicine or otherwise doing research on an activity that is potentially harmful, all that should be required is a full explanation of the risks, what the doctors know and don't know about the drug, etc. Consent by a competent adult is all that any reasonable system of research should require (and I'd add that consent should only be necessary where there's a risk of a meaningful harm--if it wouldn't be illegal for you to give people surveys as a civilian, then you shouldn't need to get consent to collect them as a researcher, for example).
I think part of the problem with that is that fear of lawsuits would also deter a lot of non harmful research - the value of an IRB from this perspective is that it gives you a pretty good shield against frivolous claims about your research being negligently harmful.
Yeah, there's no reason to think courts are going to do a good job distinguishing between reasonable and unreasonable risks in medical research, particularly with dueling credentialed experts and a sympathetic victim.
I think Gbdub has the right answer: universities and hospitals are classic "deep pockets" defendants, and if they didn't go to extreme lengths to be able to show to hostile[1] juries "yeah there was TOTALLY informed consent here, and we absolutely bent over backwards to be sure there was no foreseeable harm that we didn't explain clearly, and all these experts (who are willing to testify in our defense) said it was ethical" then their in-house legal staff would have chronic insomnia and many ulcers.
--------
[1] Because, as demonstrated in some previous thread, your ordinary person tends to think that (1) Big Faceless Corporation is more likely to be evil than mistaken, compared to an individual person, and (2) if BFC has to fork out $50 million no actual human beings will be harmed, they'll just declare a slightly smaller dividend which will only piss off Scrooge McDuck because his swimming pool full of gold will be 1" shallower.
We don't let normal people cut other people open with knives, even if the cut open person signed a consent form first.
Doctors get to do a lot of things that normal people don't as part of their licensed position, and that licensing also comes with restrictions on those abilities.
I wish some lawyer would use that study showing that short consent forms work better to argue that anyone demanding a long consent form is negligently endangering the subjects.
There's an obvious solution. Create a meta-regulatory-agency which requires other regulatory agencies to fill out enormous amounts of paperwork proving that their regulations won't cause any harm.
I work at a big tech company and this is depressingly relatable (minus the AIDS, smallpox and death).
Any time something goes wrong with a launch the obvious response is to add extra process that would have prevented that particular issue. And there is no incentive to remove processes. Things go wrong in obvious, legible, and individually high impact ways. Whereas the extra 5% productivity hit from the new process is diffuse, hard to measure, and easy to ignore.
I've been trying to launch a very simple feature for months and months, and there are dozens of de facto approvers who can block the whole thing over some trivial issue like the wording of some text or the colour of a button. And these people have no incentive to move quickly.
Same in defense contracting. Easily half and probably more of the cost and schedule of programs comes from “quality standards” and micromanagement that gives the illusion of competent oversight. Distilling very complicated technical problems into easy to understand but basically useless metrics so paper shufflers and congressional staffers can feel smart and like they know what’s going on is a big part of my job.
When in reality, we learn by screwing up in novel ways - the new process rarely catches any problems because we already learned our lesson and the next screw up will probably be something new and unanticipated. But the cost of the new process stays forever, because no one wants to be the guy that makes “quality controls less rigorous”.
I haven’t finished the article, and I presume someone has already raised this, but did no one think about just taking out some extra insurance? This kind of thing is what insurance (and creative kinds of insurance) were meant to handle. Even with grandstanding congressmen, you can say your alleged victims were fully compensated.
IIRC the book mentioned there are laws against having insurance for this because it (according to whatever regulator made this law) produces the wrong incentives.
I hate to be that guy... no, actually, I love being that guy. You really think that this is down to connectedness?
There's a book that came out in 1957 that predicts _exactly_ this kind of insanity throttling progress with consequent disasters. And, yes, though it focused more on railways and steel makers, it also predicted the effect on medical progress.
Which socialist countries? I don't know about places like Cuba, North Korea etc. but I do not think they are hotbeds of innovation.
But let me be generous and assume you mean social democratic nations (words mean things: socialism isn't social democracy), such as e.g. Denmark. I went and looked, and Denmark is ahead of the US in terms of economic freedom, as is Iceland, the Netherlands and even Canada.
But this misses a deeper point. Rand never predicted that the United States would fall to socialism, but to fascism. She predicted an alliance of bent big business and corrupt big government that passed laws that enriched the 0.01% while screwing the 99.99%, backed up by nationalist hysteria, militarised police, and a compliant media (please, stop me if any of this is ringing any bells). Stuff like the above is exactly what A.S. predicts.
"“A mixed economy is an explosive, untenable mixture of two opposite elements,” freedom and statism, “which cannot remain stable, but must ultimately go one way or the other"
Well - yeah? See the 2008 financial crash, riots, Trumpism etc. etc. Conversely, see how the Scandinavian social democracies are, again, more free market now than the United States.
The problem is that the internal contradictions of a mixed economy are, well, contradictions and sooner or later lead to a bang. Then people are thrown into a crisis and ask "How do we fix this?", and they either move towards more state control and more disasters or away from state control. See also the economic history of Venezuela.
You are missing the point that some social democracies actually are stable. The free market segment of Scandinavian economies may be freer than that of the US, but the welfare segment is more generous as well!
That's only true if you discount the vast amount of corporate welfare, direct and indirect, in the United States.
The social democracies that are stable are the ones that have moved towards being free market, and indeed have done so much that they are more so than the U.S. That is, in fact, entirely predictable, and was predicted in advance by, well, guess who?
Here's a better reason for the vetocracy: Court Overreach. The reason lawyers have as much power as they do is because decision making has been forcibly taken from policymakers and more efficient bureaucracies by overzealous judges with no limits on their power.
IRBs have to deal with lawyers, our approval process for environmental protection goes through courts rather than the EPA, and every single new apartment building has to win or beat a lawsuit to get built.
This is because entrenched interests want to wrest policymaking away from legislators and give it to courts.
Court Overreach is a response to the real problem, Representative Abdication.
Once upon a time, countries were run by people who were chosen to run them by everyone else, in a process called elections. People chose whomever they thought was best to run the country based on what they were like and what they said they'd do, and were basically happy with the result. However, there was a tiny misalignment between winning elections and governing well. As a result, in every election, a candidate who was willing to optimise slightly more for election-winning than good governance was more likely to be elected.
Fast forward to now, and politicians aren't interested in anything that doesn't involve winning elections; competition is fierce enough that this is all they do, and governance only occurs to the extent that it's useful to win elections. This means most of the non-election-winning aspects of governance get pawned off onto civil servants. It's also led to Trevelyanism (in the US, the Pendleton Act), in order to prevent the whole government from being swallowed and repurposed by the election-winning machine.
The result is that there are two choices - either a bureaucracy that's accountable to no-one, or a bureaucracy that's accountable to judges. Seeing this, judges have stepped in as probably a better alternative than civil servants.
Go along to your state legislature and look at the people there. They couldn't run a state if they wanted to; most of them couldn't run a raffle. This applies to most European parliaments, and also to the US' Congress. The civil services are all an awful mess of blame-shifting and empire-building whose incentives are almost as bad as the legislators, leaving judges as the only people who are disinterested enough to make some sort of decision (although now generally beginning their own empire-building campaigns, as the term "non-justiciable" slowly fades into history).
All of this is probably inevitable. We live in an entropic universe, people deteriorate with age, machines wear out, iron rusts, wood rots away, societies degenerate into decadence and governments collapse into nothing but rent-seeking and power struggles.
The average judge is just a bureaucrat with less oversight.
Optimizing for winning elections is not something that can be made to go away; it's just a fact of democracy, one we are at best capable of minimizing. All of these "accountable bureaucracies" are inevitably just accountable back to representatives - there are no easy solutions to Power.
For all the talk of lawyers, it seems like this particular issue was driven largely by overreaction from government in the late 90s… lawyers at least would have to prove harm to win a tort, whereas government agencies can simply shut down whole research departments on a whim because some Congressman decided Something Must Be Done.
Another example of the bureaucratic calculation, bureaucrats are exposed to all the downsides of a positive decision but get none of the upsides, so they are incredibly cautious about making a decision. If you want a different system, change the incentives for the decision makers.
The idea of a dispassionate safety reviewer sounds great until you start to realize it will be populated by agents who have their own interests to consider.
Yep. See also the no-fly list, and the apparently Kafkaesque process people wrongly added to it have to go through to be taken off. (What happens to your career if you take someone off the list and they later turn out to do something terrible?)
One thing a student who did an exchange year in America pointed out to me is that, unlike in Europe, speed limits are kind of meaningless. The speed limit on the interstate may be 70. Nobody goes 70. They all drive somewhere between 75 and 90, and whether that's judged to be breaking the rules or not is ambiguous and situational. It depends who you are, who the traffic cop is, and so on. If you grow up in the US, this is second nature, but if you come into the system from the outside it seems utterly mysterious.
The American system has a lot of rules on paper, and functioning in it requires you to know which rules are real, which rules you can break or ignore with impunity, and which rules to subvert by entangling the interests of the rule-enforcers with your own and co-opting them.
This is not efficient but it does ensure that real power is wielded by those who understand the system best, which tends to be those closest to it.
Funny you should say that; don't ask me for statutory cites, but my understanding is that in many US jurisdictions, speeding tickets must be issued by a real live cop and not a camera, except where explicitly carved out by statute.
On paper, a response to potential "but the machine was poorly calibrated" defenses.
Not saying any of that makes sense, but fits reassuringly into the "everybody is[, at least officially,] as risk averse as can be" narrative.
Indeed, yet traffic enforcement is still done in large part by traffic cops in the US. Having the unpredictable human element as part of the structure of enforcement seems to be a feature, not a bug.
Your student surely can't have come from the UK, where the official speed limit on the equivalent of interstates is 70mph, but the enforced speed limit is 85 (though it's seen as polite to slow down to about 75 when passing a marked police car, which in turn will always travel at about 65 so the charade doesn't slow traffic too much).
From my experience of driving in other European countries, I'd say the same applies across most of Europe. Maybe not Switzerland.
It's true that I've chatted to a number of northern Europeans who find Anglophone indirectness and lack of clarity infuriating; but I'd guess that in general this isn't an American problem, or even an Anglophone problem, it's a human being problem.
I used to think that. Probably because I'm English, and I think that in English culture that is how they're seen, by and large. Canadian too, I imagine.
I therefore found it very interesting when I talked about Americans to continental Europeans, who said that of course Americans aren't direct, they're almost as annoyingly indirect as Brits, and why can't they ever say what they mean. Which is, once you think about it, absolutely true - when compared to a culture where people actually do try to say what they mean, Americans are just as bad as the British, it's just that on top of everything else they're pretending to be happy.
You'd almost think that the use of the English language leads to indirect, hyper-polite cultures with language codes that are frustratingly impenetrable for outsiders. Though if that was your theory, I'm not sure how you'd explain Australia.
I thought the "real limit" was 1 mph under the limit for reckless driving (usually the posted limit + 15 mph), maybe lowered a bit more for speedometer and detector inaccuracy? :-)
The Gordian knot of all this bureaucratic nonsense could be decisively cut if in any lawsuit for damages the compensation amount was strictly limited, according to a set of guidelines and, based on the latter, decided by the judge rather than the jury.
Of course the jury would still be the sole arbiters of whether there was a liability. I'm not suggesting they would have no role and a panel of judges would go into a huddle and decide the whole case, although that would be even simpler and cheaper!
The snag is I think in the US judge-determined awards would be contrary to some constitutional principle, dating from a former age when life was simpler and maybe jurors were more pragmatic about how much compensation to award, instead of the lottery jackpot win level of awards which often seems to prevail today.
Isn't this just part of the greater liability law issue the US seems to have (seen from the outside)? It looks like the incentives for liability claims are so huge that a significant share of legal professionals are occupied either pressing these claims or defending against them. This also seems to be a significant part of the extreme US health spending.
To be frank I have not thought about it much, but my assumption is that LLM's (large language models), AI's - are going to revolutionise medical research.
At first their prognostications will be challenged and resisted, but as time goes on and they start to get a track record of getting it right, we will come to rely on them more and more.
AI's are going to change literally everything, and in ways that are completely unpredictable.
I too think they are going to revolutionize medical research, and also medical practice. For instance a surgeon removing a cancer needs to know where the cancer ends and healthy cells begin. If he had something in his glasses that showed a magnified version of whatever he was looking at to an AI, the AI could do instant biopsies of every place the surgeon looks, using pattern recognition to distinguish healthy from cancerous cells, and give him instant feedback on where tumor edges are. I do worry, though, that there's so much money and fun to be had from things like GPT-4 that the AI development companies are going to continue to develop the flashy silly party tricks side of AI, rather than the side that aids science and medicine.
>It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.
This is consistent with my experience of government administration more generally. The people implementing rules tend to be very aware of the problems and irrationalities of the system, but they're following instructions and priorities from senior management, and ultimately politicians, for whom the primary incentive is to avoid embarrassing scandals, not maximise outcomes.
Politicians are to some extent just following their rational self interest, as the negative press and public opinion from one death from a trial is far greater than a thousand from inaction.
I'm in the UK so consent is a bit less onerous here, and yet, I've attempted to participate in 3 covid challenge trials, trials for which there are not exactly truckloads of willing participants, and yet I keep getting denied because I'm a carrier for Factor V Leiden. I don't even have it, I'm just a carrier, and yet because it raises my risk of clotting ever so slightly, I keep getting denied. This AFTER I've had covid twice, once before being vaccinated, and without any sign of blood clotting. Bureaucracy gone awry.
I’m talking about a King with limited powers, much less than a President would have. There are plenty of constitutional monarchies where the King’s power is nontrivial but which have legislatures and Prime Ministers for ordinary governance.
At all levels of power from unlimited to extremely limited, it is easy to find stupid assholes filling jobs of that level. Whatever power setting you give your quasi-Monarch, the idea's useless unless you have a reliable way to ensure whoever gets the job is smart, honest, and conscientious.
On the general theory of risk avoidance in every form dominating politics, I think there's more to it. The risk of gun violence is famously ignored by politics, for instance. The risk of injury to pedestrians and cyclists by cars is pretty famously ignored. COVID risk certainly wasn't uniformly assessed.
So it isn't that society can't tolerate risks or make ROI-based decisions in many situations. I suspect the problem is that the specific kind of legal risk medical professionals and institutions face imposes huge tail risk while doing a poor job of accounting for the ambiguity of day-to-day decisions. Doctors famously believe this is onerous.
But compare that with policing. Society famously tolerates quite a bit of risky behavior from police and accepts that police officers face ambiguous situations and need to be given latitude to act, and while the tail risk there has increased substantially I'm not sure how it compares to the situation in medicine.
Maybe a National Medical Research Association joined by millions of voters suffering from conditions that would benefit from medical research, and fanatically protective of medical research prerogatives, could change the landscape. National Health Council? Maybe the big orgs like AHA and ACS need to advocate less for funding and more for red tape removal.
I am wondering if it is partly caused by the fact that many people are very suspicious of research (all those Nazi doctors!). For example, as an ecologist in my country (France), I am supposed to attend several training sessions, fill out a lot of forms, ask for many authorizations, etc., if I want to capture, mark and release a wild animal without hurting it, whereas hunters can just hunt and kill the same animals.
I do. America is the victim of its own success. It's gotten so rich, so comfortable, so risk-averse that a billion to maybe save a handful of lives isn't patently absurd. There are no reality checks anymore. Like you quote, "At a time when clerks and farm boys were being drafted and shipped to the Pacific, infecting the mentally ill with malaria was generally seen as asking no greater sacrifice of them than of everyone else." Whereas these days what little of warfare remains is mostly done by unmanned drones.
I think you've unintentionally elided two distinct points: first, that IRBs are wildly inefficient and often pointless within the prevailing legal-moral normative system (PLMNS); second, that IRBs are at odds with utilitarianism.
Law in Anglo-Saxon countries, and most people's opinions, draw a huge distinction between harming someone and not helping them. If I cut you with a knife causing a small amount of blood loss and maybe a small scar, that's a serious crime because I have an obligation not to harm you. If I see a car hurtling towards you that you've got time to escape from if you notice it, but don't shout to warn you (even if I do this because I don't like you), then that's completely fine because I have no obligation to help you. This is the answer you'd get from both Christianity and Liberalism (in the old-fashioned/European sense of the term, cf. American Right-Libertarianism). Notably, in most Anglo-Saxon legal systems, you can't consent to be caused physical injury.
Under PLMNS, researchers should always ask people if they consent to using their personal data in studies which are purely comparing data and don't change how someone will be treated. For anything that affects what medical treatment someone will or won't receive, you'd at least have to give them a full account of how their treatment would be different and what the risks of that are. If there's a real risk of killing someone, or permanently disabling them, you probably shouldn't be allowed to do the study even if all the participants give their informed consent. This isn't quite Hans Jonas' position, but it cashes out pretty similarly.
That isn't to say the current IRB system works fine for PLMNS purposes; obviously there's a focus on matters that are simply irrelevant to anything anyone could be rationally concerned with. But if, for example, they were putting people on a different ventilator setting than they otherwise would, and that risked killing the patient, then that probably shouldn't be allowed; the fact that it might lead to the future survival of other, unconnected people isn't a relevant consideration, and nor is "the same number of people end up on each ventilator setting, who cares which ones it is" because under PLMNS individuals aren't fungible.
Under utilitarianism, you'd probably still want some sort of oversight to eliminate pointless yet harmful experiments or reduce unnecessary harm, but it's not clear why subjects' consent would ever be a relevant concern; you might not want to tell them about the worst risks of a study, as this would upset them. The threshold would be really low, because any advance in medical science could potentially last for centuries and save vastly more people than the study would ever involve. The problem is, as is always the case for utilitarianism, this binds you to some pretty nasty stuff; I can't work out whether the Tuskegee experiment's findings have saved any lives, but Mengele's research has definitely saved more people than he killed, and I'd be surprised if that didn't apply to Unit 731 as well. The utilitarian IRB would presumably sign off on those. More interestingly, it might have to object to a study where everyone gives informed consent but the risk of serious harm to subjects is pretty high, and insist that it be done on people whose quality of life will be less affected if it goes wrong (or whose lower expected utility in the longer term makes their deaths less bad) such as prisoners or the disabled.
The starting point to any ideal system has to be setting out what it's trying to achieve. Granted, if you wanted reform in the utilitarian direction, you probably wouldn't advocate a fully utilitarian system due to the tendency of the general public to recoil in horror.
I’m pretty sure everyone who has basic reading comprehension grasped that this was an argument against utilitarianism. My viewing Mengele’s etc. actions as abhorrent is necessary for the article to make sense, and I don’t think the fact you’re a moron makes your life less valuable in the slightest.
I don’t think, if that’s true, it’s as obvious as you think. This is also a pretty stock response from Nazis when they get called out in their Nazi shit. That said, I don’t know you and it is entirely possible that your argument against utilitarianism just happened to look like a defense of Nazi shit.
Regardless…
> I don’t think the fact you’re a moron makes your life less valuable in the slightest.
Sick burn, 10/10. It makes me want to believe you aren’t a Nazi just because of how great it was.
Very interesting. French law says that "non assistance to a person in danger" is a crime, and I was surprised to learn that this clause is not universal at all.
It seems to me that nobody advocates a fully utilitarian system, but I would expect the large majority of people to agree that refusing to subject people to even very minor inconvenience or risk, when they agree to it and there are potentially much larger benefits, is not the right position either.
(1) People only desire pleasure (and only seem to desire other things because they desire pleasure).
(2) From (1), we can infer that pleasure is the only desirable thing.
(3) Therefore increasing the total quantity of pleasure is the most desirable thing.
(4) Therefore utilitarianism.
Very few living utilitarians (none?) buy this though; they mostly say either that they have a moral intuition in favour of pleasure and only pleasure being important, or that on analysis pleasure-maximisation is the hidden unifying principle of all their moral intuitions.
You comment that we should distinguish between IRBs being inefficient and pointless within the prevailing system, and IRBs being incompatible with utilitarianism. I can't comment on the relationship between IRBs and the prevailing norms of the legal system, which we should in turn sharply distinguish from prevailing moral and cultural norms among the public.
I wanted to address your point about utilitarianism. You say that "it's not clear why subjects' consent would ever be a relevant concern," and that Mengele's research has "definitely saved more people than he killed," which may also apply to Unit 731. The key point is that, if horrific human torture-research like that of Mengele and Unit 731 saves lives on net, or more accurately is "net positive utility," then that is a serious challenge to whether utilitarianism is the proper moral foundation for research ethics.
The BBC goes a little deeper, pointing out that "Allied forces also snapped up other Nazi innovations. Nerve agents such as Tabun and Sarin (which would fuel the development of new insecticides as well as weapons of mass destruction), the antimalarial chloroquine, methadone and methamphetamines, as well as medical research into hypothermia, hypoxia, dehydration and more, were all generated on the back of human experiments in concentration camps." [1]
The use of these examples is emotionally powerful, which is good because it drives home how important the philosophical issue is. However, we have to be careful in analyzing it. For many of these experiments, the results could have also been obtained through normal scientific studies that we consider ethical, so the marginal "value" of the brutal Nazi or Japanese experiments is significantly undermined. Some modern researchers continue to use Nazi data on hypothermia, because there is no ethical way to obtain it, but the benefit there is much lower.
Any moral system runs into calculation and reference class issues. If we were able to use brutal, unethical experiments to speed-run biomedical progress, and this was in fact the fastest way to long-term save and improve QoL for the highest number of people, would utilitarianism obligate us to do it? Perhaps yes, and that would be a real moral quandary. On the other hand, if such brutal experiments don't in fact systematically provide the highest benefits relative to a conventionally ethical research program, the sting is taken out of the thought experiment. Furthermore, if the only way to run such brutal experiments is in the context of a brutal society, such as that of Nazi Germany or the Japanese Empire's invading army in Manchuria, then we have to weigh the negatives of the wider societal brutality against the research.
We won't ever have definitive data proving which real-world system is objectively best from a utilitarian perspective. But it would be a consistent and common-sense utilitarian position to claim that conventional research ethics, perhaps with a more efficient and flexible regulatory system, is the only sustainable, and therefore the most efficient, way to drive scientific progress forward, as compared to a program of deregulated and brutal human experimentation that would have to take place in a similar society to that of Nazi Germany or the most racist aspects of early 20th century America.
A rule-utilitarian IRB that held such a position would change its behavior by prioritizing the benefits of research as well as the cost, but could also enforce common-sense constraints on specific types of experiments because the net utility of decisions made under the rule exceeds that of the net utility of decisions made not under the rule, even though enforcing the rule has net negative utility in specific cases. This is still reasoning from consequences, which is why it's utilitarian rather than deontological, but it is considering the consequences of the rule, rather than of individual acts.
For that reason, I disagree with your claim that you would not want to advocate for a "fully utilitarian system," or that a "utilitarian IRB would presumably sign off on [Mengele's research or Unit 731]," partly because a fully utilitarian system would never have agreed to permit the existence of Nazi Germany or the Japanese abuses in Manchuria, but also because a fully rule-utilitarian system would ban even net life-saving abusive research on the grounds that compliance with the rule saves more lives on net than not following the rule.
I'm not sure, when it comes to medical research, what a rule-utilitarian IRB would ultimately end up caring about; it's sitting in circumstances where it's in a better position to weigh the cost/benefit risks, so I think you'd need to be a pretty hard rule-utilitarian not to let it do its own calculations.
What I don't see is how even a rules-based rule-utilitarian IRB would find itself having a rule about patient consent, unless it was subscribing to a version of hard rule-utilitarianism with an absolute "no non-consensual interference with people" rule (making the utilitarianism redundant, and bringing us back to Hans Jonas). The crucial, defining feature of utilitarianism is that nothing other than happiness* matters. Common-sense rules can only come into play if they ultimately lead back to that.
It's probably right for an individual utilitarian to advocate for a marginal shift towards looser regulation. But the position they should be hoping for is utilitarian regulation, and in terms of societies they should advocate one where the prevailing norms are theirs as the most likely to increase utility.**
This, I think, is the flaw in your argument. There are three possible levels: utilitarian individuals, a utilitarian IRB, and a utilitarian society. A utilitarian IRB in a utilitarian society could happily permit whatever it concluded maximised utility, including non-consensual dangerous research on humans; it's not clear why a utilitarian society would object to this, even if they wouldn't have colonised Manchuria. A utilitarian IRB in a non-utilitarian society wouldn't get to dictate what the society it was in was like, but if the buck stopped with it then it would lean as far towards doing away with consent, and to treating harm/risk of harm equally to benefit/risk of benefit, as it could get away with without the non-utilitarians stepping in and shutting it down.
*Or preferences. I'm fairly sure preferences don't save consent though, as you have to weigh it against people's preferences not to die of whatever horrible disease this will be treating.
**This might, empirically, be false (eg. if a Christian society were happier than a utilitarian one because religion makes people happier), and could lead to a fun short story about a secret society of utilitarians trying to convert everyone to Methodism before dissolving itself and taking all knowledge of utilitarianism to their graves.
I think your point about the context of the IRB in society is a crucial one. In a utilitarian society that embraced cost benefit analysis for all its decisions, and was rigorously moral in personal behavior as in policy, then indeed, the IRB would probably do away with patient consent and focus on speeding research that would promote health and happiness and barring harmful or wasteful research.
In a society mainly composed of non-utilitarians, a utilitarian IRB must operate within the strictures imposed on it by wider society. It can be activist, pushing for pure utilitarianism, but it can also adopt a pragmatic bargaining posture - looking for compromises with prevailing norms where most people feel they’re better off moving in a marginally more utilitarian direction. That’s what it sounds like this book is doing, and it’s what I would envision for a utilitarian IRB.
Even if some brutal experiments have benefits to society that exceed the harm, experimenting on unconsenting people under a brutal dictatorship isn't the only possible way to obtain the data. Even if you can't get anyone to consent to being brutally killed, you can pay 1000 people to consent to a 0.1% risk of being brutally killed. Then you randomly pick one of them to experiment on.
Unfortunately, libertarian societies where you can legally consent to such risks are rarer than brutal dictatorships, so our only data on some things are from brutal dictatorships.
> I can't work out whether the Tuskegee experiment's findings have saved any lives, but Mengele's research has definitely saved more people than he killed, and I'd be surprised if that didn't apply to Unit 731 as well. The utilitarian IRB would presumably sign off on those.
That does not follow, even granting the claim about break-even for Mengele and Unit 731 (and ignoring second-order consequences like, say, the harm of assisting the continuance of IRBs by furnishing a convenient founding myth). Utilitarianism / consequentialism are usually formulated with some sort of maximizing. So, it is not enough to show that an action makes the world slightly better off / higher-utility, because there are many other actions, which could be better.
...I came here to say that this is very much NOT the answer of Christianity, and given how famous the Parable of the Good Samaritan is that's not exactly hard to know.
This is one of the reasons that, if you do not save a drowning man whom you could easily have saved without risk to yourself, many people will judge you harshly.
I don't think the prevailing legal system bans consensual harm as categorically as you say. Boxing matches are legal. Surgery is legal, even though it always injures the patient. Medical treatments that carry a significant chance of killing the patient are legal when the benefit outweighs the harm. Dangerous jobs are legal if the risk is contained. And medical experiments that carry a tiny risk of death are still legal, even if the IRB system requires the risk to be very tiny.
---
I disagree that consent would be irrelevant under utilitarianism. Requiring consent is a good way to ensure that the benefit to society exceeds the harm to the experimental subjects, especially if they participate for personal benefits (payment, or access to the experimental drug) rather than out of altruism. And requiring consent doesn't preclude even brutal experiments if the benefit to society exceeds the harm: even if you can't get anyone to consent to definitely being experimented on, you can get many people to consent to a small chance of being experimented on, then perform the experiment on a random subset.
You say that, instead of people who consent, a utilitarian would prefer to experiment on people who are the least harmed by the experiment. But it should be assumed (especially if you are a preference utilitarian) that those who are the least harmed by the experiment are the most willing to consent! People know what's good or bad for them better than you do (especially if you properly inform them about the consequences of the experiment); their choice of consenting or not is useful information about it, in the same way as people's voluntary choices in the marketplace are much better information about their needs and preferences than a government can ever gather in other ways.
Also, disregarding consent can create all sorts of perverse incentives that are bad from a utilitarian standpoint. Such as that if hospitals perform risky experiments on patients without asking for their consent in the interest of the greater good, people may avoid going to hospitals if they are sick.
The dude is advocating Nazi Shit so unsubtly that his post contained apologia for Nazi death camp atrocities. I don’t think he’s someone that gives a good god damn about any actual argument. Just trying to get more people on the Nazi shit bandwagon.
> Academic ethicists wrote lots of papers about how no amount of supposed benefit could ever justify a single research-related death.
Ah, the perennial "trolley problem", gifted to us by academic virtue ethicists.
Or at least a peculiar variant of the problem, in which the trolley was headed to run over a few hapless people, until the ethicists virtuously pulled the lever that makes it run over thousands.
I think the trolley metaphor is more accurate (and closer to the "standard" trolley problem) if you do it the other way around.
The trolley is a disease, the people it's currently headed towards are the thousands of people who will die (of "natural causes", so not anybody's fault) if no cure for the disease is found. We have the option to pull the lever (experiment with potential cures, risking some health complications for the trial participants when an experiment goes wrong) and thereby kill a small number of people who would have otherwise lived, while in the long run saving thousands.
From a utilitarian POV the answer is obvious. But just like a lot of non-utilitarians feel that pulling the lever is wrong because then the blood of the one victim will be on your hands, while you are not responsible for the deaths of the multiple people on the trolley's original track, it gives a little more insight into the mindset of an IRB administrator who feels that "curing diseases is not my job; preventing people from getting hurt in the process of finding the cure is my job."
well, more precisely still: the trolley is heading for many thousands. Researchers are actively trying to pull the lever to a track with a couple of people. The IRB is actively restraining the researchers and pulling them away from the lever, which is not exactly inaction on their part.
No, I inverted the trolley metaphor on purpose, because the inversion is /exactly what happened/ when considered from the "academic ethicist" POV.
The usual trolley metaphor is "don't pull the lever, and 4 people die" versus "pull the lever and YOU caused 1 person to die".
In our case however, the people pulling the lever here WERE the "academic ethicists", by insisting on an explosively expanding and unaccountable IRB bureaucracy.
Without them pulling any lever, the trolley was set on the tracks where a few casualties per year occurred. They pulled the lever, and switched the trolley to the tracks where thousands of *extra* casualties per year occur.
This might be a wonderful jobs programme for a bunch of administrators who often have next to zero knowledge of the subject matter (people like Dr. Whitney being evident exceptions to the rule), and who often delight in crushing other people, who do know the subject matter, with mountains of mindless inane busywork. This might even be a solid way of protecting institutions from willful predatory lawyers (which is the most likely reason why they are able to persist).
The massive increase in administration is killing health care in many ways. A small example: in 1985 I moved to Canada to run a rural practice including a 29 bed hospital with one administrator. We never ran out of beds, the ER was open 24/7 as was the lab and x-ray. These days that 'hospital' has 8 beds, 15 administrators and ER and x-ray during business hours on weekdays. No lab. The 8 beds are permanently filled, acute cases get transferred over the mountain to a regional hospital 50 miles away and most of the time those who would have to present to the ER also have to drive over the mountain.
Administration and management have become like ivy growing on an oak tree, thriving as the tree is killed. More nutrition goes to the ivy just as more money flows into admin than patient care. This is new. No one has seen or dealt with this before. How do we reverse it? I cannot imagine a government firing all the administrators and replacing them with one hard-working real person rather than a bureaucrat, but that is what it will take, and if we don't do it, one day there will be no room for any patients in that little hospital, and an exponentially rising number of admins will just have meetings with each other all day long for no purpose whatsoever. Thirty years ago John Ralston Saul foresaw this in his wonderful book Voltaire's Bastards, but we have done nothing, learned nothing. We are facing not so much institutional inertia, but institutional entropic death.
What did the old administrator do, and what do the new ones do? Is there a change in the number/type of patient providers too? That’s quite the contrast.
I was going along nodding my head in general agreement till I got to the part where you said this is just like NIMBYism.
No.
This is the near opposite of NIMBYism. When people (to cite recent examples in my neighborhood) rise up to protest building houses on unused land, they do it because they are more or less directly “injured”.
A person who prefers trees instead of town houses across the street is completely different from some institution that wants a dense thicket of regulations to prevent being sued. There is no connection.
Actually, it's worse than you say. I say that as someone who lives in metro Boston and has slowly come to understand some of the politics. The biggest barrier is that the current residents are injured by the possibility that a child of a family with a lower income than the average of the current residents may live in the new house. It's actually a known fact that the "outcomes" of high-school students is strongly affected by the average socioeconomic status of the students in the school, which gives the residents a strong incentive to keep poor people out of their school district (which in metro Boston corresponds to the suburban municipality boundaries). Unfortunately, this is both perfectly rational and a gruesome tragedy of the commons.
You can frame an objection to anything in the form of a risk of injury. Onerous regulation of medical trials is in fact framed in those terms.
But it's ridiculous to claim that the current level of regulation is justified in terms of harm reduction. Just as nearly everyone who currently claims that they are injured by housing being built on unoccupied land near their home is being ridiculous. Inconvenienced, put out, made sad; yes. Injured, no.
In both cases you have one party (the researcher or developer) wanting to do something to benefit others (the ill or newly-housed), and a third-party (the IRB or the NIMBY) butting in to block something that's no business of theirs in the first place.
This reminds me a lot of a concept in software engineering I read in the google Site Reliability Engineering book, the concept of error budgets as a way to resolve the conflict of interest between progress and safety.
Normally, you have devs, who want to improve a product, add new features, and iterate quickly. But change introduces risk, things crash more often, new bugs are found, and so you have a different group whose job it is to make sure things never crash. These incentives conflict, and so you have constant fighting between the second group trying to add new checklists, change management processes, and internal regulations to make release safer, and the first group who try to skip or circumvent these so they can make things. The equilibrium ends up being decided by whoever has more local political power.
The "solution" that google uses is to first define (by business commitee) a non-zero number of "how much should this crash per unit time". This is common, for contracts, but what is less common is that the people responsible for defending this number are expected to defend it from both sides, not just preventing crashing too often but also preventing crashing not often enough. If there are too few crashes, then that means there is too much safety and effort should be put on faster change/releases, and that way the incentives are better.
I don't know how directly applicable this is to the legal system, and of course this is the ideal theory, real implementation has a dozen warts involved, but it seemed like a relevant line of thought.
Insurance has similar incentives - they want to charge premiums proportional to the actual risk involved, and then whoever is buying insurance has a direct, already quantified costing for various safety procedures they are considering. If a check saves $5 per month in premiums it better not take much time to fill out; if it saves $5 million per month that procedure is worth substantial (but not infinite) productivity loss
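A back-of-the-envelope version of that costing, with assumed (purely hypothetical) numbers, might look like this:

```python
# Toy break-even check for a safety procedure, using assumed numbers.
premium_savings_per_month = 5.0   # dollars saved on insurance premiums if the check is done
hours_spent_per_month = 0.5       # staff time the check consumes each month
hourly_labor_cost = 40.0          # loaded cost of an hour of that staff time

net_value = premium_savings_per_month - hours_spent_per_month * hourly_labor_cost
if net_value > 0:
    print(f"Worth doing: nets ${net_value:.2f}/month")
else:
    print(f"Not worth it: loses ${-net_value:.2f}/month")
```

Once the premium puts a dollar figure on the risk, the safety procedure can be weighed like any other cost instead of being treated as infinitely valuable.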
Your description of "normally, devs....." does not match in the least the way I thought of code when I was doing it (I'm recently retired). I don't doubt at all that you're correctly describing a certain cohort, but "wants to add features and doesn't much care if the program has a ton of bugs" is close to the opposite of the way I worked. But, again, that's the past.
I work in software engineering and we use this method. It works pretty well. It also has the upside that people who do better work become sought after in the organization because product people (who normally have no idea about the quality of engineers) know they want "that developer/team who delivered a million things within the error budget" instead of "that developer/team who had to use up most of their error budget dealing with minor improvements"
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous"
I think you mean LESS likely to miss cases?
Thanks for writing this; I've watched people I know suffer through IRBs...
>Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good
So, which one are you, a journalist or an academic?
Okay, that's a bit snarky, but I'm genuinely wondering why there isn't equal incentive for a journalist or academic to do what you're doing here. "Obstructive bureaucrats are literally killing people" is a perfect "if it bleeds it leads" headline!
It'd be a really boring newspaper article though. Imagine handing a terribly-written version of this article full of basic factual errors to a normal person. Just because Scott can make something interesting to his readers doesn't mean newspapers can make something interesting to theirs.
On the one hand, I think we are seeing exactly that, in the publication of this book and in Scott's coverage of it. Or if we turn to the sale of kidneys, we've seen a spate of articles in support of a regulated system for permitting kidney sales.
But the populace has to be psychologically prepared to view obstructive bureaucrats as the perpetrator. Scientists are the good guys to the liberal media right now, and bureaucrats are currently working to ban mifepristone. NIMBYs are starting to become a household word, as we are increasingly able to pin homelessness on them, as well as obstruction of a green energy transition. Simultaneously, scientists and their supporters have gotten a lot better at figuring out how to anticipate and defuse or avoid being painted in a negative light in media coverage, with certain exceptions.
As we transition from viewing IRBs as the heroic protectors of victimized study participants and toward viewing them as obstructionist bureaucrats bent on destroying the environment, perpetuating homelessness in service of the property values of the rich, taking away a woman's right to choose, enforcing a policy of banning responsible organ sales and thereby creating a horrific and exploitative black market overseas, I think we will see a shift in media coverage of the kind you describe.
This blog post made me so unbearably angry. It's like a Kafka story except worse because millions of people continue to die as a result of cruelly circular bureaucracy. I don't have anything constructive to add, just the wordless frustration of the truly horrified. IRBs must die.
Another data point in a giant pile of data points for my theory that liability concerns rule everything around us. How did we get here, and how do we escape?
Fascinating article! I feel for the American researchers!
It seems to me a very interesting case of a general problem where whether we find something acceptable or not depends a lot on the distribution of costs and benefits, and maybe less on the average cost or benefit. We tend to care a lot if the costs or benefits are high, at least for some people, and much less if the individual costs or benefits are low for all people, even if their sum is very high. It is not at all obvious to me what the general solution to this problem should be (although in this case there is no doubt that the current IRB process should be changed!)
I understand why lawyers and journalists might contribute to the problem. But if academic ethicists are dedicated to rooting out new forms of injustice, how come so few have noticed the injustice of good not done that you've laid out in this post?
In my experience, allowing an appeal process to an alternative decider is always net beneficial, even if the person taking the appeal is very likely to approve the previous decision. It is another point of review, which serves two major purposes. The first is that egregious cases can still be identified and shut down, which I think even a non-expert dean would be willing to do in very extreme cases. Secondly, it puts the earlier review levels on notice that someone may be looking at their work. Even if they eventually get their stance approved through the appeal, it shines light on their bad behavior and reduces their prestige in the relevant areas.
Let me mention my mental journey through this post, as it points out an important aspect:
> how they insist on consent processes that appear designed to help the institution dodge liability or litigation
When I read this early on, I said to myself, of course, the people running the institution have a fiduciary responsibility to avoid having it sued. So at root the problem is our litigious society. But further down I read:
> the IRB’s over-riding goal is clear: to avoid the enormous risk to the institution of being found in noncompliance by OHRP.
This is different; it's not really the litigiousness problem, as nobody is actually worried about subjects suing the researchers. (And as other posters have mentioned, current malpractice insurance can probably handle that.) The risk is that the OHRP will declare them noncompliant and cut off their federal research funding. And it seems that essentially all medical research is done, if not by, then at least in facilities that get so much money from the US federal government that they have to do what it says.
So we're in a situation of "private law" where everybody is financially dependent on one bureaucracy and "he who has the gold makes the rules".
Unrelated to the content, this is the first time I've spotted use of the double "the," and I think it's specifically because it was followed by the repeated As in American Academy of Arts.
> I don’t know exactly who to blame things on, but my working hypothesis is some kind of lawyer-adminstrator-journalist-academic-regulator axis. Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone). The institutions hire administrators to create policies that will help avoid lawsuits, and the administrators codify maximally strict rules meant to protect the institution in the worst-case scenario. Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good, and cast suspicion on anyone who tries to add benefit-getting to the calculation. Finally, there are calls for regulators to step in - always on the side of ratcheting up severity.
Read up on Jonathan Haidt's research on Moral Foundations Theory. What you're describing flows directly from the intense left-wing bias in academia.
In a nutshell, there are five virtues that seem to be inherent points on the human moral compass, found across all different cultures. Care/prevention of harm, fairness, loyalty, respect for authority, and respect for sanctity. Liberals tend to focus strongly on the first two, while conservatives are more likely to weight all five more or less evenly. There's also a very strong bias towards immediate-term thinking among liberals, while conservatives are more likely to look at the big picture and take a long-term perspective.
When you have a system like modern-day academia that's actively hostile to conservative thought, you end up with an echo chamber devoid of the conservative virtues, a place where all they have is a hammer (short-term harm prevention) and so every little thing starts to look like a form of harm that must be prevented. And ironically, all this hyperfocus on harm-prevention ends up causing much greater harm over the long term, but short-term harm prevention inhibits any attempts to do anything about it.
> IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this. The San Francisco Chronicle recently reported it takes 87 permits, two to three years, and $500,000 to get permission to build houses in SF; developers have to face their own “IRB” of NIMBYs, concerned with risks of their own. Teachers complain that instead of helping students, they’re forced to conform to more and more weird regulations, paperwork, and federal mandates. Infrastructure fails to materialize, unable to escape Environmental Review Hell. Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
See also: COVID response. There's precious little in the way of evidence that lockdowns and masking actually saved any lives, but along the way to saving those few-if-any lives we created long-term effects that killed a lot of people and damaged millions more.
I'm not aware that Haidt has claimed that liberals have a stronger bias towards immediate term thinking than conservatives. Nor is it obviously true; for example long term climate change has been more of a concern on the left than the right.
One similarity between this discussion of IRB's and the debate over the Affordable Care Act is that statistical lives tend to be given a lot less weight than particularized lives. The people who are now alive because of the Affordable Care Act, or who would be alive if we had less restrictive IRB reviews, are not easily identified, so for most people they don't carry the same moral weight as, say, a person who is harmed by a medical experiment gone wrong. However, I don't believe that Haidt's moral foundations theory predicts that this tendency should be larger for liberals than conservatives, and the fact that the Affordable Care Act was largely supported by liberals and opposed by conservatives is evidence that, if anything, the opposite is true.
> Nor is it obviously true; for example long term climate change has been more of a concern on the left than the right.
Sort of. Everyone agrees that climate change is a long-term issue. But the left's response to it is always immediate-term in nature, framing it as a "crisis" that demands MASSIVE CHANGES RIGHT NOW!!! in order to head it off before it's too late, whereas conservatives tend to see it as a problem that can be safely dealt with over a longer period, preferably without the society-wrecking upheavals that always seem to accompany any form of "massive changes right now."
> The people who are now alive because of the Affordable Care Act ... are not easily identified
Possibly because the ACA most notably dealt with insurance's treatment of pre-existing conditions, AKA chronic issues that, by definition, cause greater or lesser degrees of misery but do very little actual killing?
If IRB reform *is* possible, what can an individual do to make it more likely?
I'm hoping for a better option than "write your congressman," but it is a top-down problem. Grassroots approaches (like those applied to electoral or zoning reform) are a bad idea. Even at the state level, getting North Carolina to preempt federal regulations for its universities seems....risky.
I'm not saying this rates as a New EA Cause Area, but I don't want to leave this $1.6B bill lying on the ground.
>"Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone)."
I wonder if Congress re-writing institutional mandates to make them at least consider benefits instead of just risks would cause (at least the threat of) the parenthetical lawsuits against inaction. The courts don't seem like the best place to handle this cost-benefit analysis but this seems to me like the least intractable path forward. Would this help create action, or would it only increase the core problem of everything being done for the sake of lawsuit protection?
The law has rules about what dangerous activities people are allowed to consent to, for example in the context of dangerous sports or dangerous jobs. Criminal and civil trials in this context seem to be a fairly functional system. If doctors do bad things, they can stand in the accused box in court and get charged with assault or murder, with the same standards applied as are applied to everyone else. If there need to be exceptions, they should be exceptions of the form "doctors have special permission to do X".
If you have a decision procedure that is slow but yields the right results, there is a technique to speed it up. In programming it's called caching. In law it's precedent.
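As a toy illustration of the caching analogy, here is a sketch using Python's standard-library memoization decorator; the decision function and its inputs are hypothetical, purely to show the pattern.

```python
from functools import lru_cache

# Sketch of the caching idea: a slow but deterministic decision procedure whose
# results are reused once computed, the software analogue of legal precedent.
@lru_cache(maxsize=None)
def review_decision(case_signature: str) -> str:
    # stand-in for an expensive deliberation
    return "expedited approval" if "low-risk survey" in case_signature else "full review"

print(review_decision("low-risk survey, no new intervention"))  # computed the slow way once
print(review_decision("low-risk survey, no new intervention"))  # answered instantly from the cache
```

The catch, in both programming and law, is cache invalidation: deciding when an old answer no longer applies to a new case.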
There is no problem you can't solve with another level of indirection. So the obvious solution: Regulate the regulators. Make the regulators prove that a regulation they make or enforce is not killing more people than it saves.
There’s some trolley logic at work here - we are okay with hundreds of theoretical people inadvertently dying but we can’t handle even a few dying from direct action. The whole situation reminds me of the medical system’s risk-compliance-legal axis who all trained at the school of “no” and subscribe to the maxim that thing-doing is what gets us in trouble, so the best thing to do is nothing.
This is a good way to put it. Probably because there’s plausible deniability and no clearly attributable cause in the indirect case, whereas it’s easy to play the blame game when the harm comes from direct action.
Clinical researcher here. I wanted to comment on this suggestion:
- Let each institution run their IRB with limited federal interference. Big institutions doing dangerous studies can enforce more regulations; small institutions doing simpler ones can be more permissive. The government only has to step in when some institution seems to be failing really badly.
This is kind of already how it goes. Smaller clinical sites tend to use what we call "central IRBs", which are essentially IRBs for hire. They can pick and choose which IRB best suits their needs. These include IRBs like Advarra and WIRB. Meanwhile, most clinicians at larger academic institutions have to use what we call a "local IRB", which is the institution-specific board that everything has to go through no matter what. In some cases, they can outsource the use of a 'central' IRB, but they still have to justify that decision to their institutional IRB, which still includes a lengthy review process (and the potential the IRB says "no").
What's the difference between a central and a local IRB? At least 2x the startup time, but often longer (from 3 months to 6+ months). Partly, this is because a smaller research site can decide to switch from WIRB to Advarra if their review times are too long, so central IRBs have an incentive to not be needlessly obstructive. While a central IRB might meet weekly or sometimes even more than once a week, with local IRBs you're lucky if they meet more than once a month. Did you miss your submission deadline? Better luck next month. You were supposed to get it in 2 weeks before the board meeting.
But this isn't the end of the difference between smaller clinics and those associated with large institutions. At many academic centers, before you can submit to the IRB you have to get through the committee phase. Sometimes you're lucky and you only have one committee, or maybe you can submit to them all simultaneously. More often, you have to run the gauntlet of sequential committee reviews, with each one taking 2-5 weeks plus comments and responses. There's a committee to review the scientific benefit of the study (which the IRB will also review), one to review the safety (again, also the IRB's job), and one to review the statistics (the IRB will opine here as well).
In my experience, central IRBs tend to not just have a much faster turn-around time, they also tend to ask fewer questions. Often, those questions are already answered in the protocol, demonstrating that the IRB didn't understand what they were supposed to be reviewing. I don't remember ever going back to change the protocol because of an IRB suggestion.
Maybe you could argue that local IRBs are still better for other reasons? I'm not convinced this is the case. We brought in a site through a local IRB on a liver study. It took an extra six months past when most other sites had started (including other local IRB sites - obviously a much more stringent IRB!). Did that translate to better patient safety?
Nope, the opposite happened. One of the provisions of the protocol was that patients would get periodic LFT labs done (liver function tests) to make sure there was no drug-induced liver injury. In cases of elevated LFTs, patients were supposed to come back into the site for a confirmation within 48 hours of receiving the lab results. We were very strict about this, given the nature of the experimental treatment. The treatment period went on for 2 years, so there's a concern that a long-term treatment might result in long-term damage if you're not careful.
This site, with its local IRB, enrolled a few patients onto our study. At one point, I visited the site to check on them and discovered the PI hadn't been reviewing the lab results in a timely manner. Sometimes he'd wait a month or more after a patient's results came in to assess the labs. Obviously they couldn't follow the protocol and get confirmatory LFT draws in time. Someone with a liver injury could continue accumulating damage to this vital organ without any intervention, simply because the PI wasn't paying attention to the study. I was concerned, but these studies can sometimes be complicated so I communicated the concern - and the reason it was important - to the PI. The PI agreed he'd messed up and committed to do better.
When I came back, six months later, I discovered things had gotten worse, not better. There were multiple instances of patients with elevated LFTs, including one instance of a critical lab value. NONE of the labs had been reviewed by anyone at the site since I visited last. They hadn't even pulled the reports from the lab. There was nobody at the wheel, but patients kept getting the drug so the site could keep getting paid.
Since it's not our job to report this kind of thing to the IRB, we told them to do it. We do review what they report, though, so we made sure they told the whole story to the IRB. These were major, safety-related protocol violations. They did the reporting. The PI blamed the whole fiasco on one of his low-paid research coordinators - one who hadn't actually been working on the study at the time, but the IRB didn't ask for details, so the PI could pretty much claim whatever and get away with it. The PI then said he'd let that guy go, so problem solved. The chutzpah of that excuse was that it's not the coordinator's job to review lab reports, it's the PI's job. This would be like claiming the reason you removed the wrong kidney is that you were relying on one of the nurses to do the actual resection and she did it wrong. The obvious question should have been WTF was the nurse doing operating on the patient!?! Isn't that your job? Why weren't you doing your job?
What was the IRB's response to this gross negligence that put patient safety in danger? They ACKNOWLEDGED RECEIPT of the protocol violation and that was the end of it. They didn't censure the PI, or ask further questions, or anything. If 'strict IRBs' were truly organized in the interest of patient safety, that PI would not be conducting any more research. We certainly put him on our list of investigators to NEVER use again. But the IRB ignored the whole thing.
I'm not convinced that this is a 'tradeoff' between spending a bunch of money to stall research versus saving patients' lives through more stringent review. I think that the vetocracy isn't about safety, so much as the illusion of safety.
To what extent is this a purely U.S. phenomenon? While I'm sure researchers everywhere gripe about these things, I don't typically see these utter horror stories elsewhere.
And shouldn't researchers just move their research operations if the U.S. climate (only) is crippling?
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous (eg a person with a penicillin allergy in a study giving penicillin)"
Typo? Which group was (eg) more likely to give penicillin to people with a penicillin allergy?
I refer to it as "Cover your ass, not your buddies".
I ran into it just last week; we're prototyping a new machine at our farm that uses high calcium hydrated lime to break down organic matter, corrosive stuff and it's blowing back in our faces, so I wanted to know what sort of protective measures we should be using.
So I called poison control, they had no advice, but told me to call OH&S, so I did. OH&S had no immediate advice but offered me a consultation appointment. Sure.
Appointment swings around and they start asking about our overall health and safety policy. I tell them there isn't one, we don't have time for that.
They tell me that we really need one in case someone gets hurt and they try to sue.
I tell them that we don't have Worker's Compensation for our guys, so if something happens, we want them to sue us, and we want to lose, so that the injured employee can get a payout from our liability insurance.
They proceed to tell me that it's not my problem, and that we should have a CYA safety policy that no one ever reads so that if something happens, we don't lose the lawsuit.
I reiterate that we need to lose that lawsuit or a dude who loses a leg would be left with nothing. They again, say, well, that's not really your problem...
I point out their moral bankruptcy, and try to refocus the conversation on the lime dust.
They tell me they have no idea how to handle it safely, they just know how to protect the company from legal liability.
This doesn't make a lot of sense to me. Why would you force a worker to sue you if you actually want to pay them? Lawsuits are only necessary when the two parties disagree - the worker wants to get paid for their injury, and the company doesn't want to pay them.
If you both agree they deserve to get paid, you could just say (or write into their contract) "If you get hurt on the job, send us the hospital bill and we'll pay it." Then they wouldn't need to hire an expensive lawyer to get compensated for their injury.
We're in Canada, so it's not the hospital bills that are the issue; it's how he's going to pay his bills for the rest of his life after he loses his right hand in an auger, or breaks his back falling from a horse.
We don't have that kind of cash reserves; but we have liability insurance. Liability insurance typically requires a lawsuit to at least be filed before they will pay out.
The part I find puzzling about this story is why your insurance company doesn't require you to have an OH&S policy before they sell you the liability insurance. I understand why you want it to be easy for you to lose the lawsuit, but I don't understand why the insurance company is happy to play along.
Farms get all sorts of weird exemptions to that sort of stuff, and specialized insurance companies to cater to them. The industry has so much less regulatory pressure compared to most others that it's a whole other world.
You must be familiar with working with strong bases. That's all you've got here, so do the same thing. Watch out for the eyes in particular, I would say.
Oh sorry. Most farmers to my recollection work with some nasty bases, like anhydrous ammonia. What I'm getting at is that while acids burn and probably hurt more, bases eat away tissue in a more dangerous and harder-to-heal way. (It's the same as the saponification reaction that turns fat into soap.) Also I think corneas are unusually vulnerable because they aren't well innervated or vascularized, and they're moist -- the calcium hydroxide does its damage when it comes into contact with water, because that releases the (very reactive) free hydroxide anions -- so you really want to protect the eyes.
But this is not my area of expertise at all, so please consult with someone for whom it is!
Interesting parallel. Both are situations where we are so, so worried about the instance of one type of error happening that we’ve become overly adherent to the other side of the coin, almost to the point of ridiculousness, but because it’s become “socially unacceptable” to allow any of the first type of error to happen. And maybe both by an intellectual majority but an overall minority?
I wonder if the similarities have to do with the internet era, as mentioned in the post. It allows people who are angry about one thing to band together with others online and make noise, where pre-internet it would be very hard to do so due to geographical dispersion.
I would bet money there is significant overlap in the Venn Diagram circles between Twitter flamers, Woke, Medical Bureaucrats and the general Radical Precautionary Principle types.
On the “lawyer-adminstrator-journalist-academic-regulator axis,” don’t forget that lobbyists are mostly lawyers too. That means that they think like lawyers. So when something bad happens, their reaction is more law. When that law goes too far, their reaction is ... more law. That doesn’t make sense to non-lawyers, but it does to lawyers. Obviously, you just need to have the law make more finely grained distinctions, in order to do the good of the original law without the bad. So let’s add some exceptions and defenses and limits. So the law now goes from one question or decision to many questions or decisions. And that means you need specialists to figure it out. Hence the modern compliance department, which is an awful lot like the commissars that Soviet governments embedded in every organization -- there to make sure that you do the government’s bidding. In detail.
Oh, yeah. This is also a(nother) really good reason for prohibiting anyone who has held the right to practice law within the last ten years from being elected to any legislative body.
Right as the IRBs are radically reformed to be less paranoid and harm/liability-obsessive, we'll radically reform police departments to be more paranoid and harm/liability-obsessive.
I'm a scientist who does medical research at several top-tier institutions. I only do research, and every month or so one of my projects is submitted to an IRB somewhere. I do clinical trials and observational studies, as well as a lot of health system trials (e.g., where we are randomizing doctors or hospitals, not patients). I have a few observations, some of which aren't consistent with what Scott reports here.
1. I've never had an IRB nix a study or require non-trivial modifications to a study. This may be because my colleagues and I are always thinking about consent when we design a study, or it may be because top tier institutions have more effective IRBs. These institutions receive vast amounts of funding for doing research, which may incentivize a more efficient and flexible IRB.
2. I have done some small studies on the order of Scott's questionnaire investigation. For these, and even some larger studies, we start by asking the IRB for a waiver of consent - we make the case that there are no risks, etc, and so no consent is needed. We have always received the waiver. Searching PubMed turns up many such trials - here's a patient-randomized trial of antibiotics where the IRB waived the requirement for patient consent: https://pubmed.ncbi.nlm.nih.gov/36898748/ I am wondering if the author discusses such studies where IRBs waive patient consent.
3. There are people working on the problem of how terrible patient consent forms can be. There are guidelines, standards, even measures. And of course research into what sort of patient consent form is maximally useful to patients (which is determined by asking patients). I helped develop a measure of informed consent for elective surgery (not the same thing as a trial, but same problem with consent forms) that is being considered for use in determining payment to providers.
4. Every year or so I have to take a test to be/stay certified for doing human subjects research. Interestingly, all the materials and questions indicate that the idea of patient consent emerged from the Nuremberg Trials and what was discovered there about the malfeasance of Nazi scientists. I'm surprised to hear the (more plausible) sequence of events Scott reports from the book.
5. Technology, especially internet + smartphones, is beginning to change the underlying paradigm of how some research is done. There are organizations which enroll what are essentially 'subscribers' who are connected via app and who can elect to participate in what is called 'distributed' research. Maybe you have diabetes, so you sign up; you get all the latest tips on managing diabetes, and if someone wants to do a study of a new diabetes drug you get an alert with an option to participate. There is still informed consent, but it is standardized and simplified, and all your data are ready and waiting to be uploaded when you agree. Obviously, there are some concerns here about patient data, but there are many people who *want* to be in trials, and this supports those people. These kinds of registries are in a sense standardizing the entire process, which will make it easier/harder for IRBs.
While this book sounds very interesting, and like one I will read, it also maybe obscures the vast number of studies that are greenlighted every day without any real IRB objections or concerns.
If you work somewhere where not much research is done, then you may get inexperienced IRB members who have little knowledge of what's normal and lots of time to play the game of inventing more and more implausible harms.
From Scott's story, it sounded like much of his experience was tied to a bureaucratic mechanism that had rusted into barely being able to function because not much research was happening at his hospital.
When you're at an institution that does almost nothing *except* research it tends to be much less painful. Though of course that's still bad because it creates a big barrier to entry for anyone working outside of a few big institutions.
I guess I equate "being governed" with having a government. I certainly don't think everything can be achieved by government. I don't like the *idea* of being governed, though in practice I mostly have little problem with living with the law. There are not many illegal things I want to do, and those I really wanted to do, such as smoking weed back when it was illegal, I have found easy to get away with. I think our government did a lousy job with covid, but I personally was not greatly inconvenienced -- were you? I read up on the science, and navigated the info about risks and personal safety successfully -- still have not had covid, despite going in to work through the entire pandemic. So overall, I have had an easy time with being governed. But whenever I read something like Scott's post here, or really anything about how we can do a better job of organizing life so that there is more fairness and less misery I am filled with rage and hopelessness. Even Scott's article made me have fantasies of slapping the faces of IRB members. Consequently I am not well-read or well-informed about government, the constitution, politics, or any other related matters. Regarding this topic I am resigned to being part of the problem not part of the solution.
"How would you feel if your doctor suggested - not as part of a research study - that he pick the treatment you get by flipping a coin" if I knew that the doctor really genuinely didnt know which option were better then i would prefer for him to flip a coin rather than dither
> Also, I find it hard to imagine a dean would ever do this
Plausibly, the IRB only has incentives pointing towards caution, but the Dean has incentives pointing in both directions. Having a successful and famous study or invention come out of their institution brings fame and clout and investment, and sometimes direct monetary gain depending on how IP rights are handled in the researcher's contract with the institution.
If you want to join the army in Canada, you have to say you've never been sick a day in your life, and you've never had a single injury. You say you're allergic to grass, you broke your leg in high school, sometimes you feel really sad... any of these will disqualify you.
I don't know how it got to be that way, it doesn't make sense, the army isn't in a position to be especially picky... somehow I think it's related to whatever causes this IRB situation though.
Hi Scott, I am curious if your questions on bipolar study included anything that might be considered “questions on self-harm.” These sorts of questions might raise the risk assessment from low to moderate and require that you include precursor warnings on the risk of your questionnaire. I’m genuinely trying to make a best case argument for the hindrances you faced, so anything that you might see as potential “red flags” to your reviewers would be helpful. Thanks!
Note: Although I am but an entomologist and have almost no regulatory bodies for my actions, my fiancé often designs the interfaces that researchers use for submitting IRB forms at a university. We’re trying to speculate what happened in your case, so anything you think might be important would be tremendously helpful. Thank you!!!
He wasn't trying to get approval for the questionnaire, though. The study just involved looking at the relationship between the responses to the questionnaire they were already using and the patients' ultimate diagnoses.
That's part of the craziness of the whole IRB system. The treatments themselves (in this case, a questionnaire) can all be used in isolation. The only illegal part is examining whether they work.
Right, this is the part that seemed kinda Kafka-esque. Scott isn’t proposing any new intervention at all, nothing about the patient experience will actually change. He was literally just going to collect data on an existing process.
Gotcha, thanks @Mallard and @Gbdub. I missed that detail. Ya, that's a completely different and much more complex problem. Thanks for the clarification!
That said I think it’s really cool that there is someone here that can potentially provide perspective from “the other side”! Hopefully Scott responds.
No it did not. I would also add that this question (whether asking people questions about suicide increases the risk of suicide) is one that psychiatrists care a lot about and have studied in depth, and the answer appears to be no.
My big annoyance regarding this area as someone who was close friends with a medical ethics professor at university (and still is 20 years later), is just the incredibly low quality of reasoning among the “leading lights”. You have people like Leon Kass who was on the President’s Bioethics advisory Council or whatever who by their writings didn’t appear to be able to think themselves out of a wet paper bag.
Now I doubt these grey eminences were actually that stupid, but they clearly had political and religious commitments that were preventing them from thinking remotely clearly about the topics they were put in charge of. So disappointing. I remember being told this was the top contemporary thinking in this area and just finding the arguments hot garbage.
Thank you, Scott, for this careful and thought-provoking essay.
Since so many people wonder, the study by Lynn Epstein and Louis Lasagna showed that people who read the short consent form were better at both comprehending the experiment and about realizing that the study drug might be dangerous to them.
Much of this fascinating conversation on ACX is on the theoretical side, and there’s a reason for that. IRBs are ever on the lookout for proposed research that would be unethical—that is why they exist. But there is no national database of proposed experiments to show how many were turned down because they would be abusive. In fact, I know of no individual IRB that even attempts to keep track of this. There are IRBs that are proud they turned down this or that specific protocol, but those decisions are made in private so neither other IRBs nor the public can ever see if they were right. Some IRBs pride themselves on improving the science of the protocols they review, but I know of no IRB that has ever permitted outside review to see if its suggestions actually helped. Ditto for a dozen other aspects of IRB review that could be measured, but are not. It’s a largely data-free zone.
I got an interesting email yesterday from a friend who read my book. She is part of a major enterprise that helps develop new gene therapies. From her point of view, IRBs aren’t really a problem at all. Her enterprise has standard ways of doing business that the IRBs they work with accept. She sees this work with and around big pharma as providing the relatively predictable breakthroughs that will lead to major life-enhancing treatments down the road. This is a world of big money and Big Science, and it’s all about the billions. A new drug costs $2.6 billion to develop; the FDA employs 17,000 people and has a budget of $3.3 billion; the companies involved measure their value and profits in the billions.
The scientists I am speaking for in "From Oversight to Overkill" are lucky when they can cobble together a budget in the millions, and much of the work they do, like Scott’s frustrating project, is entirely unfunded. They are dealing with OHRP, an agency with a budget of $9 million that employs 30 people. Unlike big pharma with its standardized routines, they are trying new approaches that raise new regulatory questions. And because OHRP operates on such a smaller scale, its actions are rarely newsworthy even when they make no sense at all. This includes decisions that suppress the little projects with no funding that people just starting out attempt.
Of course, the smaller budgets of the scientists in my book don’t mean that their findings will be trivial. It has always been true that when myriad scientists work to better understand human health and disease, each in their own way, the vast majority will make, at most, tiny steps, and a very few will be on the track of something transformative. A system that makes their work more difficult means that we, the public who struggle with disease and death in our daily lives, are the ones who suffer.
Hi, just downloaded the Kindle and can't wait to get stuck in. Purely as a matter of interest, is there no patients' rights body to do battle for needed treatment?
If the public had any idea how many people die not long before an effective treatment becomes available, a treatment that was delayed by bureaucratic hurdles that have nothing to do with making research safer, there would be an uproar! But the system defends itself, very effectively, by claiming that "ethics requires this" and "we can't have another Tuskegee." As a result, the scientists, who know all about the problem, are intimidated. As a result of that, the public remains in the dark.
Have you ever considered organizing an uproar? Surely if enough popular and respected scientists signed a strongly worded open letter, it could get some traction in the right direction?
Popular and respected scientists feel this is a problem that can't be fixed--the fear of looking as if they are against ethics, and don't care about Tuskegee, is too strong, even though these are false arguments. Since the scientists are afraid to speak out, the book is my attempt to organize an uproar in the group that is being hurt--people who are vulnerable to cancer and heart attacks, which is to say the public.
Thanks for responding! I feel like there is enough "anti government" on the right and enough "pro scientist" on the left that they would win easily as long as public outreach hit the basic talking points. Maybe I'm overestimating how much popular scientists are able to coordinate, and how respected they are in the general population (vs just my circles).
Between the horrors of NIMBYism, the virtuous corpse mountain left behind by IRBs, and the laughable insanity of NEPA and CEQA, perhaps the purest expression of the underlying concept is the Precautionary Principle. So far as I can tell, someone made the following chart in all earnestness. It is a fully-generic excuse for inaction.
When you think your options range between a worst-case of "Life continues" for inaction and a worst-case of "Extreme Catastrophe" for action, well, here we are. Too bad life literally didn't continue for the people getting subpar medical treatment.
Wow, that picture is horrible. Even the outcome "benefits enjoyed, no harm done" is painted in scary orange color; as opposed to the green outcome of no action taken.
The comical inefficiency of IRBs doesn't seem to be a controversial point. Why didn't you ignore it and simply conduct and publish your survey research anonymously? Maybe you judged that your study wasn't important enough to overcome your risk aversion. Why didn't the authors of ISIS-2 conduct and publish their study anonymously, if the interventions as such were not against regulation?
I have my own theory of everything: the median age is 38. Perhaps it's unfair to call Gary Ellis a coward who's responsible for thousands of deaths and unquantifiable unnecessary suffering. Perhaps he's just a regular old guy in a society of increasingly older guys who lack the knees or back to stand up for anything.
I can't wait for the rationalistic explanations in ten years of why things continue to go increasingly wrong for a country in which the average resident is 42 years old and obese. Maybe you think we only need to find the right balance of incentives in a carefully engineered system; if so, you're in good company. I believe it was Kant who famously said - about government, but surely we could apply his wisdom to institutions of science and medicine -
"The problem of organizing a state, however hard it may seem, can be solved even for a race of exhausted, sclerotic, amorphous blobs"
"Have you read this study? Some dude claims to have done some research off the books so it's untraceable and unverifiable and the guy is anonymous but his findings are really interesting..."
Anonymous publishing could be accomplished in any number of ways which are not much less reliable than the current regime, especially for such studies as Scott's - or it could be done with open-secret pseudonymity, or even more blatantly. In any case this is a collective action problem, a prisoner's dilemma where defection is raising a stink about 'unauthorized research', or joining a witch hunt to smoke out the devilish survey-givers, and it's no more likely to be solved than the IRB is likely to undergo spontaneous reformation...
but imagine if there could be institutional mutinies in the medical sector inspired by good and useful causes, like there are for wokeness in every organization in the Anglosphere! That Netflix exec getting defenestrated for quoting the n-word in a benign context, but instead it's doctors refusing to go along with persecuting Scott for recording data his psych ward collects anyway.
Maybe that is the answer: maybe there is actually plenty of rebellion against injustice in these institutions, but the ideology of the young and vital revolutionaries is just not liberal humanism anymore - and isn't that so old and outmoded, compared to our shiny new hyper-protestant self flagellation. Why would we care about 'people dying preventable deaths' - how passé! Join the new century already!
In my specific case, because I needed to do the study officially in order to pass residency, for which it was a requirement.
In the general case, people usually need large consortia that get funding, which is hard to do completely in secret. Also, it's hard to publish things anonymously and get people to listen to them, especially if most people are angry at you and might (eg) refuse to cite your study on principle, or deny that it had ever occurred.
Early on, you said, “The public went along, placated by the breakneck pace of medical advances and a sense that we were all in it together.”
That last part — the sense that we were all in it together — speaks volumes. To my mind its loss explains most of what has gone wrong with the world today. But how did we lose that?
We are criminally ignoring a "more connected populace" and the IQ needed to process that data flow. It's no wonder people resort to regression, stasis, or revolution as copes. It's not like IQ or coping mechanisms were better in the past. We need to wrestle with restrictions on transparency, and prioritize defaults, or it will be left behind for those who started the game with fewer rules.
>Greg Koski said that “a complete redesign of the approach, a disruptive transformation, is necessary and long overdue", which becomes more impressive if you know that Dr. Koski is the former head of the OHRP, ie the leading IRB administrator in the country.
I've heard of many similar cases of the former head of <org> calling for major reforms, but if they didn't have the will or the political capital to do it while they were there, it seems unlikely the next guy will either (even if they agree).
To the extent that the problem is that hospitals and their IRBs are incentivized to avoid harm more than they are incentivized to do good, might this be a good opportunity for something like impact certificates?
Like if there was a pool of 500 million dollars to be given each year to whichever set of hospitals did the most good in their studies (more money given to those who did more beneficial studies), would that put some pressure in the other direction?
My brother, who has some experience in this area, had this to say, “For the most part I feel you just have to know how to build relationships with your irb people and what words will trigger them. I never submit a protocol without first talking with my irb person, and thus usually don’t hit these types of bottlenecks. The assumption should be that unless you’re super clear in your explanation, they’ll be risk averse and put forward roadblocks. Because that’s their job.”
So from his telling, there’s a certain amount of glad-handing needed to get research past IRBs. This is probably not great for scientific research (it doesn’t seem fair that because you aren’t up for getting coffee with your IRB person you can’t do your research), but it does mean that scientists aren’t quite as helpless as the book presents them.
I think this is the most morally deranged thing I have read a philosopher stating in a long time.
In my mind, technological progress is often a prerequisite for social progress. From a modern perspective, most iron age civilisations look rather terrible. Slavery, war, starvation. Good luck finding a society at that tech level that would agree to avoid "the violation of the rights of even the tiniest minority, because these undermine the moral basis on which society's existence".
If you don't have the tech to prevent frequent deaths during child birth, in addition to death being bad in itself, you will end up with a population in which a significant number of males can't find partners. The traditional solution for getting rid of excess males is warfare.
If you don't have contraception tech, your population will be reduced by disease and starvation instead.
If your farming tech sucks, most of your population will spend their lives doing back-breaking labor in the fields and have their surplus extracted under duress to support a tiny sliver of elite.
If running a household is a full-time job at your tech level, good luck achieving gender equality.
That is not to say that all technological progress is obviously good. Sometimes, it might not be worth the cost in alignment shift (like the freezing water experiments in Dachau), and sometimes we might judge that a particular tech will have overwhelmingly negative consequences (like figuring out the best way random citizens can produce neurotoxins in their kitchen).
And of course, you can always plead that while past progress was absolutely necessary (lest you be called an apologist for slavery, war and starvation), the present tech level (which allows for ideas such as human rights) is absolutely sufficient and any future tech is strictly optional. Of course, statistically speaking, it would be extremely unlikely that you just happen to live at exactly that point.
"Increasingly many areas of modern American life are like this."
Yep, America is frustratingly anti-tree-climbing. I am a happier person, in better physical shape, when I can climb trees. That was fine at home where I had a backyard with trees, but here American bureaucratic idiocy gets in the way. You see if I climb a tree on someone else's property, fall out, and get hurt, I could sue them. As a result city parks make sure to cut off any branches that would make a tree climbable, lest someone hurt themselves on it.
Harsh as it sounds, we need to hold people responsible for their own mistakes. Only then can we be free to take what risks we judge worthwhile.
What really bothers me about IRBs is that there doesn't seem to be any attempt to check if they really are effective nor to take the moral harms of not doing experiments seriously. Letting people die because they aren't getting a better treatment is deeply immoral but not taken seriously.
The justifications the people like to bring up like Tuskegee were awful but it's just not clear that IRBs would make them better. Often many people knew and just shrugged. I'm sure you could have gotten an IRB in Nazi Germany to approve Mengele's experiments.
I'd much prefer just having a committee of other tenured profs at the institution do a brief consideration with all the detailed forms and considerations saved for the few studies which raise serious ethical questions.
You probably don't want a committee that is too socially close to the people they are evaluating. If it's a bunch of friends evaluating each other, there's going to be a lot of unconscious social pressure not to be the one that calls out your friend in front of their other friends. You probably want it to be something more like refereeing at a journal - which still involves some moderate social closeness, but even the one-way anonymity that exists in the sciences prevents it from becoming *too* clubby.
I don't know about that. It depends alot on how you have the system work. I tend to think that the best system here relies on punishment (at least social if not formal) for doing bad things.
The guy whose experiment is approved by a faceless buerocracy has no incentive not to make them look bad by using any freedom they are given in poor ways. That leads to the over formalization of the process where everything has to be documented to the last detail beforehand and you can't just approve some swab study on minimal paperwork assuming the experimenter will act reasonably.
Basically, I see the appropriate role of the IRB here to be to prevent wackos from going off on some weird personal crusade or doing stupid shit. The department and their fellows have their own careers and experiments on the line if this guy makes the department look bad or does something unethical that gets their experiments shut down.
Basically, I tend to think that formalizing ethics tends to make everyone take their own ethical responsibilities less seriously. So I want the system that is most like the one we have for doctors (upfront little oversight but punishment for bad behavior) that still deals with the problem of weeding out the wackjobs and ppl with some deeply different moral sensibilities.
Like I think a board of my peers would be best positioned to realize my ethical judgements might look very different than theirs/society in general (if I did experimental work) in certain areas and being more inquisitive about how I would deal with certain issues while being less worried about my more conventional colleague.
I guess I'm thinking that the sort of "arm's length" peers that do peer-review for journals (and tenure and promotion reviews) are probably better than the immediate peers of the department (which is probably why that's the system that's used for publication and tenure, which are these other significant decisions that you want to get right in both directions).
I think some European and Australian universities have dissertations examined by someone outside the department but inside the discipline for this same sort of reason. American universities allow departments to do the evaluation in-house (partly because grad students aren't actually your social peers) but still require an outside member from another department to be present.
It's universal practice to have an external examiner in the UK, and I think in most of Europe. We think Americans are weird for not doing it. How do you maintain any level of parity across universities without it?
I don’t think there’s any assumption of parity across universities. People assume a PhD from a top program in a discipline means something different than a PhD from a third tier program in that discipline. Departments are only incentivized by reputation to keep their standards high.
I guess my question is: what is the reason to think a system of external review produces better results?
In the case of granting PhDs, there is a concern about a kind of self-dealing, where a department might be improperly incentivized when checking its own candidates.
In the case of ethical decisions, someone ultimately has to decide if it's ethical or not. The self-dealing concern is that someone might be tempted to cut ethical corners to advance their research at the expense of those they are conducting research on (balanced against this is the fact that a remote IRB, dealing with this at a remove, may be less concerned with the moral imperative to find cures, or even with the real concerns of those the research is conducted on).
I agree that for my system to work you also need a robust process for dealing with complaints after the fact and imposing punishment on those found to have acted unethically, and even on those who approved it, and *that* board needs to be divorced from the sponsoring institutions.
So I guess my real claim here is that I think post-research adjudication of punishment is the right mechanism, and the only thing pre-experiment review should be offering is a non-adversarial "did you think of this?" and a check to make sure no one is just going rogue.
Perhaps it is also a bad idea for it to be self-regulating.
Nothing is entirely self-regulating. That's the purpose of tort law. When you fuck up, a jury of your peers impoverishes you.
Right, but that's just the government being in charge again, by dint of which laws they pass and which legal precedents they set.
At bottom, government is always in charge of everything, but that still describes a huge space. English Common Law, as I understand it, is *entirely* generated by judicial precedent with little or no reference to original statutes or regulations because it largely predates any such notions.
Mmm, not quite. You've at least overlooked the critical role of the jury, even in criminal cases. And in tort cases you've overlooked the fact that in principle a civil action is a dispute between two private parties, which the government agrees to adjudicate. It's not the same as DC agency staffer John Doe saying "it seems like a good idea to me if we forbid X and Y, because I can imagine evil effect Z and Q." Instead, it's Citizen Kane suing Citizen Rosebud for specific harm caused by specific actions, with the outcome to be determined by a jury of their peers -- more citizens. The role of government is mostly to ensure that the dispute between Kane and Rosebud gets settled fairly, nonviolently, and in agreement with centuries of traditional rights.
No, it's obviously a good idea - without government power over medicine, you'd have vast amounts of "medicine" offered for sale that didn't actually have any evidence that it did what it claimed, and a lot of people would fall for it.
Just because in this particular area Scott makes a plausible case that regulation isn't being done well (although I'd want to hear the other side of the argument before trusting this one) doesn't mean we can do without it altogether.
No. You would have people relying on reputation and their own judgement.
Oh, and please don't come with the "what if they said something that's not true?" argument. Protection against fraud is one of the legitimate functions of government.
You can justify the current FDA approval process on "protection from fraud" grounds...
>Protection against fraud is one of the legitimate functions of government.
OK, so we're agreed, we do in fact need regulation of medicine?
"you'd have vast amounts of "medicine" offered for sale that didn't actually have any evidence that it did what it claimed, and a lot of people would fall for it."
I can't tell if you're being sarcastic or are ignoring the vast array of pseudo-science and outright woo that is already being sold and consumed.
No to both, I'm saying it could easily be far, far worse.
You might want to think out the difference between regulation pre- and post-facto. When we speak of "government regulation" we usually mean laws that act *before* any particular medical decision or act takes place, with the goal of forestalling a bad result by simply forbidding the act or decision that leads to it.
Contrariwise, we have a very old system of law in place that punishes bad decisions and actions *after* they have actually caused harm -- that's what malpractice lawsuits are all about. And of course, the existence of the possibility of a malpractice claim exerts very strong effects on the decisions and actions that medical providers take.
Both systems have advantages and disadvantages. Pre-act regulation obviously has the theoretical advantage in that it need not wait until at least one person has suffered from a bad decision, so that the bad decision maker can be shot on his own quarterdeck pour encourager les autres. It can also be cheaper, because it can tell people "don't even start down this road because there's a block ahead," and it can make the overall regulatory regime more predictable, and assist planning, because we don't need to wait to see what a jury says about this or that type of decision.
But of course, on the other hand, pre-regulation is far less flexible, and cannot adapt well, or at all, to special circumstances (which are certain to arise in a population of >300 million), or to circumstances unforeseen at the time the law was written, e.g. because of technology advances. There's also no human judgment involved in its enforcement -- the role of the jury is gone -- so it's a far blunter instrument, and can almost be guaranteed to produce Unexpected Side Effects, some of which may be worse than the harm hypothetically prevented in the first place.
Leaving "regulation" (meaning social constraint) up to tort law has the advantages of much greater flexibility and adaptability, of course, because a group of human beings gets to examine the specific situation and the specific facts in great detail, and that group will already know the factual outcome -- it need not guess about that the way Congressional staffers drafting legislation must -- before rendering judgment. It is also less chilling of experimentation and innovation, because as long as it all works out well, you're good. The judgment is always centered on outcome -- what actually in fact happened -- and not on the hypotheses of whoever wrote the legislation about what might happen.
But it has the disadvantages of being slow, requiring at least one victim, being much less predictable, which handicaps planning and may make people unusually conservative (because they have to plan for the worst instead of the typical case), and probably being more expensive, since we need to pay a bunch of fancy-pants lawyers, and absorb some very expensive judicial system time.
Low-effort high-temperature comment. Banned for one week.
It would appear we need to found some IRBRBs, along with some IRBRBRBs to monitor their research, and we can start looking into this question.
In the US (and UK), unlike many other countries, there is a common law principle that one is not generally legally obliged to help anyone else in imminent danger or distress, e.g. rescuing someone from drowning, even at zero risk to one's self such as holding out a pole from a safe position. (There are exceptions, where someone has a duty of care or a life-saving occupation such as a pool guard, etc.)
I guess this principle partly stems from that of not being compelled to risk liability if one's aid is ineffective or even harmful. A classic example is clearing snow from in front of one's dwelling: Do nothing, and you can't be blamed for any mishaps, but clear the snow and if someone then slips on the area they can sue you for leaving an icy patch! In a way, indirectly, another example is releasing a new drug for sale.
So the IRB's attitude seems somewhat analogous to that series of moral conundrums involving people tied to railway tracks, which asks whether you should reroute the train. My instinct is to say never divert the train, because I didn't tie the casualties to the track and it is regrettable that they will die but just their bad luck. But that starts to sound insupportable if the train is hurtling towards a dozen teenagers tied to the track and you could divert it to kill only one 90 year old terminal cancer sufferer, and the latter scenario is analogous to the situation our host is describing!
Descriptively I agree with you as to how the IRB tends to behave (if we didn't cause it through positive action, it's not our problem).
However, the law isn't meant to track moral responsibility in the way that the IRB is supposed to. The absence of a duty to rescue reflects pragmatic concerns, such as the fact that the courts could be clogged with all sorts of cases suing whoever with deep pockets saw the shit go down. Indeed, it would risk creating an attractive nuisance where people would deliberately try to get themselves injured (or appear to be) in front of people with deep pockets. Moreover, it creates a secondary problem, as legal necessity is often a defense to liability.
I'm not saying these couldn't be dealt with, but the point is that even in situations where it's clear there is a moral duty there are good reasons for the legal process not to impose liability. Hell, half of free speech law is the right to be a dick and needlessly hurt people.
The IRB was created to do exactly the opposite. It's supposed to make sure that scientific studies are behaving morally even if (absent hook of federal funding) the law couldn't regulate the behavior at all (asking ppl questions and talking about what they say in response is clearly protected by the first amendment).
Yes, many people do have a moral intuition that action is different than inaction. But make the trolley problem bad enough (it's going to kill the whole city) or the harm on the other track small enough (someone will lose an arm) and they will say that you should switch tracks, and IRBs should at the very least try to meet that kind of moral standard.
> suing whoever had deep pockets who saw the shit go down
In Austria, not helping is a criminal offense, so you may have to pay the state or do prison time, but you can't get sued. This fixes the deep pockets issue.
It's interesting idea to think of this as a trolley problem. For the researcher, doing the study is analogous to pulling the lever. But for the IRB, isn't preventing the researcher from doing the study pulling the lever? That is, the lever that keeps the researcher from pulling the lever.
Maybe not, because the process is set up so that the IRB has to explicitly approve so that the study can go ahead. But what if the study was presumptively approved unless the IRB actively vetoed it? Maybe board members with an intuition that the action / omission distinction is important would then behave differently.
If you can’t start until the IRB rules, there is no difference. If you *can* start, then the IRB does not provide the legal protection desired.
Status Quo Bias. The effects of the status quo are already written off, but the effects of any change are charged full price, so a horrific status quo winds up looking better than even a major positive change with a few small tradeoffs. You see this in handwringing about the energy transition...a couple hundred million tons of copper mining over the next 30 years is a looming environmental catastrophe while billions of tons of fossil fuels being mined right now and then lit on fire every year is business as usual.
I... well, I had a whole comment typed out, then I remembered that this would be more appropriate for the culture war OTs. still, like. surely you know this is a highly contested issue.
Given the popularity of guns in various social climates in the United States, I suspect that attempting to enforce UK-style gun control laws would have similar results to the attempt to enforce alcohol prohibition in the 1920s. :(
Nah. We just prefer that generic adult free men can possess deadly weapons, because we don't put all our faith in the state to defend our lives or liberty. If some weird shit goes down, we prefer to be a nasty spiky thing to try to swallow, like a porcupine. We're well aware that the cost of this is that some sick shitheads, like in Nashville or Uvalde, might get their hands on these tools and do terrible things. We're no more moved by this a priori than by an argument that cars should not be permitted in private hands because idiots sometimes kill innocents with them.
Which is not to say that a more timid culture, or one where the peasants actually like being taken care of by the aristocracy, might think some other way, and by all means vaya con Dios. But that's not really us. (We drive fast, too.)
I grew up in Texas. Yeah, I've heard this macho line before.
The Black Panthers agreed with this in 1966 which brought about the Mulford Act in 1967.
Seems like the Second Amendment absolutists were willing to draw a line *somewhere*.
https://en.m.wikipedia.org/wiki/Mulford_Act
‘Governor Ronald Reagan, who was coincidentally present on the capitol lawn when the protesters arrived, later commented that he saw "no reason why on the street today a citizen should be carrying loaded weapons" and that guns were a "ridiculous way to solve problems that have to be solved among people of good will.”’
Seems like Michigan and Coeur d’ Alene militias are cool though.
Those weren't Second Amendment absolutists, and gun control in America has been historically joined at the hip with racism.
I will give Ronald Reagan credit for a reasoned approach to gun control:
“In 1991, Reagan supported the Brady Handgun Violence Prevention Act, named for his press secretary shot during the 1981 attempt on Reagan's life. That bill passed in 1993, mandating federal background checks and a five-day waiting period.”
Well, they were stupid, of course. The way you get your rights infringed is by being an asshole while asserting them.
According to the Constitution, everyone in the US has the absolute right to own a muzzle-loading flintlock, if they are members of a militia! Isn't that right?
Seriously, law-abiding people in the UK can apply for a gun license, and many rural people such as farmers own shotguns and rifles. But I don't think anyone is allowed machine guns, semi-automatic weapons, or firearms with silencers, except maybe on a licenced firing range where those weapons would have to be stored.
I mean Carl, surely the citizenry could still be pretty prickly with only manual load weapons. Also, if push came to shove, in the event of a general insurrection citizens could still by common consent retrieve their more deadly weapons from where these were kept in shooting clubs and the like.
Well, I don't agree with your interpretation of the Constitution in the first paragraph, and neither does the majority of US voters nor the Supreme Court, so I'm comfortable just leaving that creative meme aside as entertaining and imaginative, but irrelevant.
Have you noticed that violent crime and murder have become absent in the UK, then? Or does it...persist, somehow? You'll also note that Switzerland has one of the highest rates of gun ownership in the world, but quite a low murder rate. It's almost as if...what matters is the person, not the tool, and maybe our focus should be there. A much harder target, to be sure, but treating cancer with morphine because the latter takes away the pain is not an intelligent focus.
Consequently, I have no use for tool control, any more than I think the government should be telling the citizens what type of car they should buy (other than "safe"), or what type of dishwasher, what type of birth control they can use/not use, who they can bone and how, whether they can/can't get certain kinds of medical treatment, how they rear their kids, what school they can/can't go to, what job they can/can't have, and about infinity things besides (quite a number of which can also have serious, even deadly consequences, if done wrong). My interest in being a feudal serf whose care lies in the hands of a squire to whom I should bend the knee is zero. My ancestors lived like that, always some king/tsar/shah/emperor/consul/baron/bishop/Better Than You type to tell them what to do, and that's why they came here, to get away from all that.
On the other hand, I'm quite comfortable with people-who-get-to-own-tools control. I'm quite happy to have the FAA license pilots with strict regular exams, so airplanes don't fall out of the sky onto my head. I'm 100% on board with drivers' licenses, and only wish the exam were (1) much more rigorous, and (2) required far more often, and consequently if the government were to say owning a dangerous tool like a rifle is absolutely your right, but you will have to regularly pass this rigorous exam in how to operate one safely and what the rules on its use might be -- why, I'd be fine with that.
Whether that would pass Constitutional muster I am far less sure, as the unfortunate fact is there is a long history of government abusing competency tests to deny civil rights, e.g. poll tests, and the courts are rightly suspicious that the power to certify and license is the power to deny, and invites abuse. But there's some precedent, with our existing regime of waiting periods and licensing exams for CCWs[1].
And of course government could certainly demonstrate a lot more competence in using the tools it already has to control access to dangerous tools:
https://thehill.com/homenews/news/359037-air-force-says-it-failed-to-enter-texas-gunmans-court-martial-into-database/
---------------
[1] What might be still more relevant is that I imagine the great majority of gun owners, i.e. those who are responsible and competent, would also be on board, which means big majorities for action might be readily achievable. My more cynical nature says that's *why* it doesn't happen -- because effective action supported by big majorities would launder a bloody shirt that politicians can otherwise wave about during campaigns, when they have no idea how to solve more quotidian problems like keeping inflation under control, balancing the budget, or ensuring the bridges aren't falling down and that 20% of the children aren't reaching 12th grade unable to do basic math.
Low-effort high-temperature comment. Banned for one week.
I think this is a general problem with all kinds of regulation. Once a regulation exists, there's a bureaucracy invested in enforcing it (sometimes government, but sometimes within individual companies or other institutions), and it's very hard to get rid of. And then nobody really ever checks to see whether the regulation is even solving the problem it exists to solve, let alone checking to see whether the net benefits outweigh the net costs.
IRBs could have passed Nazi experiments? What? How can a prisoner give informed consent?
"The few studies which raise serious ethical questions" - all studies on humans raise serious ethical issues. People are not widgets. Human dignity and autonomy prevent a solely utilitarian frame.
The same way China and Russia guarantee tons of rights on paper yet their courts allow the government to throw you in prison for whatever the administration wants.
Regarding what studies raise serious ethical questions, I don't think all the simple studies where you pay UGs some money to compare how loud tones are or to play some game for a bit of cash you provide raise many serious ethical questions.
Yes, in some sense, all human interactions have an ethical component but the idea that making it part of a study somehow necessarily raises the ethical stakes seems mistaken.
If you do studies on humans, someone independent (like an ombudsman) should take a look at it.
I'd bet that more studies are passed than rejected. That bit of information doesn't seem to have been reported in the review or in the mountains of comments.
Yes, I suppose sham IRBs could have existed or do exist in China or North Korea. I'm not sure though that commies, fascists or totalitarians have ever felt the need to pretend that IRBs (btw I've also called these human subjects committees) are necessary at all.
what we need is to perform a randomized controlled trial of IRBs. run 200 experiments; half of them have to get approved by a real IRB, half have a "placebo" IRB rubber stamping it. see how the results differ! once we secure research funding, all we have to do is get approval fr-
What is the end point? Whether the researchers are ethical? Or is it did what they found have some utilitarian benefit?
As to the last, experiments that don't find things are as important as those that do for science. As to the first, it is unethical to allow unethical research. The standard of care includes the obligation to be ethical.
>"As to the first, it is unethical to allow unethical research."
It seems *more* unethical to actively prevent ethical research.
IRBs do not prevent ethical research; they ensure ethical research.
(X) Doubt
Scott's aforementioned questionnaire study was unambiguously ethical, and prevented by the IRB.
If it was unambiguously ethical it would have been approved.
There were issues. My reading of the story is that he gave up instead of doing what was required. And he admitted in the original AUGUST 29, 2017 essay that there were some issues justly raised by the IRB.
I am not sure how you imagine this works; you seem to think that if Scott or any researcher claims it to be appropriately ethical then it must be so. It's as if you think the IRB has the burden of proof to show that it's unethical.
Why that presumption? If you are going to do an experiment involving humans the burden is on the researcher to show that it is ethical.
Since we are dealing with humans, I don't think the standard can be preponderance. Beyond a reasonable doubt seems the right standard to me. We want researchers to have the highest ethical standards, not the lowest. Unless theoretical utilitarian benefit is all that is important to you. In other words, if we get a lot of beneficial research, then it's worth it to have a few Nazis or Tuskegees. Is that your idea?
Well, no thank you to that.
IRBs do not (and aren't intended to) ensure ethical research. If all research suddenly stopped they would not do a thing about it. They exist to prevent unethical research.
This subtle point I suppose is well taken.
But I'm pretty sure that no members of human subjects committees have ever advocated halting all research as a means to prevent unethical research. They are firmly in the pro-research camp.
Better yet, ask the IRB to approve a meta-analysis of all meta-analyses that don't include themselves... ask the IRB whether your new meta-analysis should include itself.
People have joked about applying NEPA review to AI capabilities research, but I wonder if some kind of IRB model might have legs (as part of a larger package of capabilities-slowing policy.) It’s embedded in research bureaucracies, we sort of know how to subject institutions to it, and so on.
I can think of seven obvious reasons this wouldn’t work, but at this point I’m getting doomery enough that I feel like we may just have to throw every snowball we have at the train on the off chance one has stopping power.
A colleague of mine is interested in 'IRBs for AI'-- he hasn't investigated it but has thought about IRB-y stuff in the context of takeaways for AI (https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:vaccine_challenge_trials). He's interested in people's takes on the topic.
It might make more sense to treat AI the way we do nuclear power - demand proof of its safety using engineering calculations, redundant safety systems, and defense in depth. The IRB method seems to focus more on listing possible consequences and getting consent from the people who may be affected, both of which seem nearly impossible for sufficiently advanced AI systems.
But both of these existing technologies are examples of what Ezra Klein talked about in a recent podcast:
"There's a lot of places in the economy where what the regulators say is that you cannot release this unless you can prove to us it is safe. Not that I have to prove to you that you can make it safe for me. Like if you want to release GPT-5 and GPT-7 and Clod plus plus plus and whatever, you need to put these things in where we can verify that it is a safe thing to release. And doing that would slow all this down, it would be hard. There are parts of it that may not be possible. I don't know what level of interpretability is truly even possible here. But I think that is the kind of thing where I wanna say, I'm not trying to slow this down, I'm trying to improve it."
Do you want IRBs for AI just to slow them down? Or to do any actual weighing of costs and benefits?
As far as fundamental capabilities expansion goes, like large training runs, any caltrop on the road will do - just slow the damn thing down.
Applications are lower stakes and I have less of an opinion on that.
This is of course based on my own opinions of what likely costs and benefits are, and would be screened out by a hypothetical infinitely wise IRB - but that applies to just about all policy proposals by anybody. In order to not think capabilities research should be slowed down, you’d have to convince me that extinction-level risks are an OOM below 1% (crux for this probably being a survey of mainstream AI researchers; with the frequently cited 10% study probably having some flaws but putting it way above anything we should be comfortable with.))
You'd want to slow down applications too. Anything to make them commercially useless, so that there's less incentive to run the research in the first place.
The problem is that we apply this kind of caltrop to most everything in our society, making the cost of building things, inventing stuff, trying new ideas, discovering new things, etc., much higher. From nuclear plants to vaccines, we just add a zero or two to the cost of making them, and then wonder why progress in those areas is so frustratingly slow.
Ezra Klein said something smart about regulating AI's in a recent podcast: Slowing them down via this 6 month moratorium or similar is just dumb. Restarting should be tied to some criterion of increased safety, not to the clock. I do not understand the situation well enough to make good suggestions about what the criterion should be, but I'm sure there exist people who could. Maybe something to do with understanding better what is going on under the hood -- by what process does GPT4 arrive at its outputs? And I even have a suggestion about a way to go about it: use some of the experimental approaches of cognitive psychology, which has been able to figure out a fair amount about *human* output without investigating the brain.
And one more thing about the 6 month moratorium: I'm thinking that *obviously* what the companies are going to do is spend the 6 months working on further advancing AI. They will just refrain from doing certain kinds of work that are very obviously geared toward producing GPT5 or whatever. Maybe they won't even refrain from doing that. Is anyone going to be checking to see how staff at these places are actually spending their time? Is my presumption wrong?
> I do not understand the situation well enough to make good suggestions about what the criterion should be, but I'm sure there exist people who could.
As an AI safety researcher (with minor interest in governance), I believe that no, there currently do not exist any people who have *good* answers to this. But at this point, *better than nothing* might be, well, better than business-as-usual and better than fixed period.
Well, Vojta, you caught me on a day when I've been thinking a lot about this stuff, and have time to write it out. So here are my thoughts about the kinds of thing that should be criteria. All of them are ideas for things that developers should produce before they develop AI further. They're not quick and easy to do, either, so they might actually keep the developers busy, so they can't just quickly toss off a list of bullshit criteria and then return to working, a bit more discreetly than before, on the next version of their product.
Come up with a list of things that, if they happen in some setting employing AI, should be responded to in the way an airliner crash is: A team of investigators from somewhere other than the company that built the AI figures out what happened, &, depending on what they find, they will have the power to insist that certain steps be taken to prevent another similar incident before AI of this kind is used any more. There should also be some plans for problems of lesser magnitude — sort of what would happen in airline industry is there was a near miss.
Figure out more about what goes on in that black box — how the AI "knows" the stuff it gives in response to a prompt. And don't tell me that can't be done! Even I can think of ways to learn some more, and I'm a psychologist and don't even know how to code in Python. OK, 2 ideas:
(1) I read somewhere that some AI, I don't know which, can look at an image of a retina and know what gender the subject was. But it can't explain how it knows. Actually, in this case the answer to "how does it know" is probably not very useful, but let's just use it as an example. Can't you give the AI the task of systematically altering images of retinas — like reducing the resolution of different parts of the image — then stating the subject's gender and checking to see whether its accuracy has decreased? After a while the AI should at least be able to tell you what features of the retina image are crucial and what features are utterly irrelevant. So let's say that blotting out different parts of the images makes clear that for judging gender all that matters is 2 small areas of the retina, but the AI still can't tell you how it uses those areas to determine gender. So now you could have the AI play around with, say, color in those areas, throwing away different kinds of color information. Keep doing that sort of thing and eventually it will be evident how the AI "knows" the subject's gender. OK, so this example gives a model for a way to use AI itself to find information about how it knows things, things much more interesting than how it recognizes gender from retinas.
(2) Use some of the methods of cognitive psychology or psycholinguistics: Human subjects can't tell you anything about how they know what verb tense to use, how they know the way to downtown, etc., and we don't know enough about the brain to get answers from observing it. But you can figure out quite a lot about things like this with clever experiments. In fact my idea (1) was an experiment of that kind — throw away different bits of the data, as a way of finding out which parts are crucial to a task. Here's an article about using cognitive psychology methods on GPT3: https://pubmed.ncbi.nlm.nih.gov/36730192/
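A minimal sketch of the ablation idea in (1), assuming a hypothetical classifier `predict_gender` and a stack of labeled retina images (both are stand-ins, not any real system's code):

```python
# Occlusion-style ablation: blot out each patch of the image in turn and see
# how much the classifier's accuracy drops. Patches whose masking hurts
# accuracy the most are the ones the model is actually relying on.
# `predict_gender`, `images`, and `labels` are hypothetical stand-ins.
import numpy as np

def occlusion_sensitivity(predict_gender, images, labels, patch=16):
    h, w = images.shape[1], images.shape[2]
    # Accuracy on the untouched images, as a reference point.
    baseline = np.mean([predict_gender(img) == y for img, y in zip(images, labels)])
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = images.copy()
            masked[:, i:i+patch, j:j+patch] = images.mean()  # blot out this patch
            acc = np.mean([predict_gender(img) == y for img, y in zip(masked, labels)])
            drops[i // patch, j // patch] = baseline - acc
    return drops  # large values = regions the model depends on
```

The same loop could then be repeated inside the sensitive regions while discarding color or resolution information, as the comment suggests.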
Call in a bunch of hackers, discuss ways AIs could accidentally go awry and cause substantial damage, and ways AI could do serious damage in the hands of bad actors. Then put the hackers into a sandbox with the AI and see how much of that stuff they can bring about. Make the findings public. Also state which holes developers are going to plug before releasing the AI. Re-test with hackers after plugging.
Work on problem of how to shut down an AI that is running amok some way. For instance, say one that manages a hospital’s drug inventory starts erasing the records of daily counts of available drugs over the last 10 years — or ordering truckloads of the wrong stuff — or adding 2 zeros to every payment, so it’s paying 100x as much as it should for new meds. And quite likely the AI is in the cloud somewhere, and the hospital is paying to use it — so it’s not like they can just unplug the thing. Or what about a similar problem with an AI that’s used for trading, and is installed in multiple places, and is somehow tipping over the whole stock market? Or, of course, an AI that’s decided to turn us all into paperclips. Seems like ways of recognizing such problems and shutting down the system should be part of what users get, along with access to the AI. It’s not clear to me that developers have thought much about this feature. I haven’t seen a word about it.
Learn more about people’s vulnerability to being influenced by AI. My impression is that with the early versions of AI it was mostly mentally ill people who got over-involved with the AI, thought it was conscious, etc. BUT the more advanced the AI gets, the less mentally ill you have to be to be vulnerable to developing a personal attachment to it that’s like one’s attachment to friends and loved ones. And of course people are much more willing to take advice from friends and loved ones, to do them favors, to go to great lengths to protect them. So there’s a whole category of bad shit that can happen entirely as a result of someone’s attachment to an AI: They can take bad advice from the AI, including advice to commit crimes if the AI has been corrupted or has somehow wandered into a weird mode. They can keep secret something the AI has done that they think might make people want to change or shut down the AI. And, of course, there’s no reason why an AI developer cannot come down with a case of AI over-attachment, and that could have very bad results indeed. So I think there should be studies of how likely this is to happen.
Here’s a very rough model: Get 2 matched groups. The AI group is given free unlimited access to AI, plus instructions to spend at least an hour a day chatting with the AI about subjects that the experimenters judge are likely to foster a feeling of personal closeness. Subjects could be given a list of topics to discuss, and should be instructed to tell the AI stuff but also solicit replies. For ex., one assignment could be to describe to the AI a problem in their life, and ask for advice. Another could be to ask the AI for some kind of ongoing help — for instance, “I’m trying to stop smoking — can you ask me each day how it’s going, and congratulate me if it went well?” Meanwhile, the placebo group gets free unlimited access to some especially nice music app, and instructions to listen to a certain channel on it one hour a day.
So then we do a test of how attached people are to the AI. I don’t have the details worked out, except that it should involve some competitive online game where people can elect to have an AI give them ongoing advice on strategy, or not to involve the AI. For those who elect to use the AI, test how willing subjects are to accept AI advice, to accept seemingly bad AI advice, to cheat on the game some way at the AI’s suggestion, to cheat on the game in a way that they believe will be especially destructive to another player, and to later lie to experimenters when asked whether the AI suggested cheating.
"After reviewing your plans to build a Torment Nexus, we've concluded that there are no regulations against it, and it exposes the corporation to no liability. Have at it."
“In one 2009 study, meant to simulate human contact, he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.”
Is the swab from the first person’s mouth going into the second person’s mouth or just on their skin? If it’s the former, which is what the description makes it seem like, I don’t think it’s right to say the risk is equivalent to kissing another person’s hand - more like the risk of French kissing them.
They probably wrote that in the consent form. The doctors aren't idiots?
I do agree that Scott's wording here is far from a paragon of clarity.
Seems dishonest then and a bad example of IRB overreach.
No it wasn't, but since the doctor designing the study said it was equivalent to kissing someone's hand -- and not to shaking hands with them or French kissing them -- the procedure probably involved either swabbing someone's skin and then another subject's mouth with the same swab, or the reverse -- subject 1's mouth then subject 2's skin.
You probably meant "more like the risk of French kissing another person's hand"?
I'm not trying to make it sound better, just more realistic; I'm not fond of the idea of random people French kissing my hand.
And yet I'm not in favor of the IRB intervening here. If people are informed in advance what exactly is going to happen - sterile swab, someone's mouth, their hand - presuming they have no skin cuts and they can wash afterwards - and those people accept, then I think no more bureaucratic diversions are mandated, about AIDS and smallpox or otherwise.
No, I meant just French kissing. When I read “swab first one subject’s mouth (or skin), then another’s”, that sounds like sometimes the swab goes from person’s mouth into another’s mouth.
Oh you are right. Interesting how the original quote can be processed several ways:
> he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.
There's ambiguity in how you interpret the "or" vs. the "then". In all, it could be "1: mouth -> mouth, 2: mouth -> skin, 3: skin -> mouth, 4: skin -> skin".
I had automatically narrowed this down to #2-only, both because it aligned with the paragraph's last sentence and because I assumed that a hospital would not do #1/#3.
But on second thought, since even "fecal transplant" is a thing, I may have assumed too much.
This problem must be at the heart of Eroom's law. The opportunity cost of expensive and slow drug development seems like an obvious place to start, but it's a political red button.
I think all of these are subject to the "if it saves one child" class of arguments. Basically, when I want to do cost-benefit analysis and you want to point to the heart-rending story of the one cute puppy who yelped in pain once because some rule wasn't properly followed, I sound like an evil robot, even if the result of the rule being consistently enforced is a million cute puppies suffering offscreen.
>IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this.
Or the Motion Picture Association, which assigns ratings to movies. This has massive consequences for a film's financial success - most theaters won't play movies with an NC-17 rating, for example - so studios aggressively cut films to ensure a good rating.
This is why you almost never see genitals (for example) in a Hollywood movie. You've got a weird, perverse system where studios aren't truly making movies for an audience, they're making it for the MPA: if the MPA objects, the film is dead in the water.
Almost every industry has its own version of this. In the world of comics, you had the Comics Code Authority. It was the same deal. Unless your comic met with the moral approval of the CCA (example rules: "(6) In every instance good shall triumph over evil and the criminal punished for his misdeeds." [...] "(3) All characters shall be depicted in dress reasonably acceptable to society.") you basically couldn't get distribution on newsstands. This led to the death of countless horror and crime comics in the 50s (Bill Gaines' EC Comics being the most famous casualty), and the firm establishment of superhero comics.
This isn't government regulation, exactly - it's industries regulating themselves (but it's complicated, because the CCA was implemented out of fears that harsh government censorship might be coming unless the comics trade cleaned its own house first, so to speak). It has similar gatekeeping effects, though.
>Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
It sounds almost like a game theory problem. In real life, there are more options than just co-operate/defect. There's often also "refuse to play", which means you cannot win and cannot lose.
If a new drug or treatment or whatever works, no glory will come to the IRB that approved the experiment. But if it kills a bunch of people, they'll probably get in trouble. Regulators have a strong incentive to push the "do nothing" button.
That's probably one of the reasons that prestige TV was generally better than Hollywood in the 2005-2015 decade, less pressure to self-censor. Since then, the woke thought police has gotten to them too, and now everything equally sucks.
Those institutions have gotten much weaker over time actually. The CCA was abandoned in 2010, for example. So if anything, recent movies and comics are an example of what you get without any censoring body.
Modern Hollywood movies are also written for Chinese censors, it isn't just the MPA.
I'd blame it on lowest common denominator capitalism; making an original statement takes effort, while churning out remakes and adaptations is cheap and quick. Dahl's work didn't get edited because anyone was going "THINK OF THE CHILDREN"; it was some pre-emptive blandening just to CYA the people who paid half a billion for the rights.
But movies don't fit the generalization of stricter rules over time. There used to be the Code, which prohibited a wide range of things now common. The Code broke down around the 60s, and Midnight Cowboy won Best Picture despite being rated X. Nowadays it probably wouldn't even be rated R. I recently watched "The Quatermass Xperiment", which has a capital letter X because it was considered scary enough at that time to be rated X, even though it's completely tame by today's horror standards.
On the other hand, you could have toplessness in a PG movie, back in the 80s; now that's an R (because America is neurotic about sex).
That was because PG-13 only got created recently, dividing what was previously PG into kid & teen categories. And toplessness by itself does not mean an R-rating now:
"Nudity is restricted to PG and above, and anything that constitutes more than brief nudity will require at least a PG-13 rating. Nudity that is sexually oriented will generally require an R rating."
Are you against a rating system at all? I very much like a quick way to evaluate whether the movie I am about to watch has significant levels of content that I might find objectionable, especially if my children may be watching. Without any kind of rating system, there's no guarantee that some animated children's show doesn't throw in some genitals (or swearing, or whatever) once in a while.
If you're not against a rating system at all, then I'm not sure how your following concerns change much. Movie theaters are allowed to show X rated or NC-17 rated movies, they just choose not to because they know sales will be significantly lower than for movies with lesser ratings. Taking away the rating system doesn't change that desire on behalf of movie-goers, it just makes them play roulette on whether they are going to be bothered by what's being shown or find some other way to evaluate the content of the movie and end up doing the same thing.
The rating system is stoopit. It focuses on sex and polite language. Why should children not see genitals? (I agree that seeing actual sex is likely not good for kids -- would be scary, weird, overstimulating -- although of course in many parts of the world couples and their kids sleep in the same room, and presumably the kids witness intercourse pretty often, and their heads do not explode.) And there is so much swearing to be heard in any public place at this point that whether kids hear a bit more in a movie is not going to make any difference in how soon they start saying "shit" when they spill things, drop things, etc.
What *I* would have liked to know about in advance about kids' movies was the amount of violence, loss and tragedy. My daughter grew up without a TV in the house, and I also did not show her movies or TV on the computer. However, we did occasionally go out to the movies. So when she was about 4 I took her to see Finding Nemo, which opens with a scene where a family of mama fish, papa fish and many baby fish is attacked by something -- maybe a bigger fish -- who eats EVERYBODY except the papa fish. What the fucking fuck? How is that alright to show to small children?
I lost interest in TV etc. decades ago, and so am far more ignorant than most people of what Disney or this or that is like. I was appalled at the cruelty of showing kids this tragedy in a movie. And yet the theater was full of kids my daughter's age who seemed to be doing OK. My daughter insisted on leaving, saying she felt so sorry for the poor little daddy fish that she couldn't stand it. And she was not a thin-skinned kid; in fact she was and still is much thicker-skinned than me. Her main playmates at the time were 3 boys, 2 of whom had some tendency to be bullies, but she held her own. I think she just had not been gradually desensitized to violence and tragedy, the way the TV-watching kids in the audience had been.
So you agree that some kind of rating system would be nice, though you would prefer them to look at other things. I'm not against a more nuanced system! I happen to think that sex and course language are important factors, and suggest that you do as well, perhaps with different words and phrases being significant to you. I doubt there are many movie-goers who take their children who wouldn't care about the n-word or other racial slurs, for instance.
Providers have been working on rating systems with a lot more nuance, giving both the overall rating and also brief descriptions (Sex, Violence, Nudity, Gore, etc.) to help viewers determine if the movie or show might violate their personal sensibilities. I support this effort. We're not all going to care about the same content, but we all do care about some content. If they added a "Personal Loss" category or whatever would describe Finding Nemo, I would consider that a gain.
I don't think I'd even object to racial slurs, so long as use of the slurs was portrayed as bad behavior, the way, say, spitting on people would be. You can't keep kids from learning these words, all you can do is explain what's wrong with using them as insults. Let's say the fish family had been black fish instead of whatever color they were in the movie -- sort of orange, I think -- and a big ugly mean fish, obviously the villain, had swum in circles around them for a while calling them all niggers and saying they should move to another part of the ocean and after he left the little fish were crying and the parents were furious and shaken. How is that worse for kids to see than a big fish eating the mommy and all but one of the many little brothers and sisters?
I think an "overall rating" might be the key mistake. Better to work out independent scales for the different categories people care about, objective criteria for each, then display the whole thing - maybe on circular coordinates, least objectionable at the center and worst at the outside, so it's comprehensible at a glance. Perhaps five criteria, zero to five scale for each? [Violence], [sex/nudity/intimacy], [antisocial/unwholesome behavior], [disgust/squick], and [cosmic horror/tragedy/injustice] seem like a useful starting point.
Exactly, my 9 year old is a lot more worried/harmed about violence and loss and drama than dicks and boobies and cuss words, all of which he is comfortable with even if he doesn’t “get” sex yet.
>Why should children not see genitals?
Children should not see genitals in certain contexts because it may not be developmentally appropriate or socially acceptable. The appropriateness of children seeing genitals depends on the context and the age of the child.
In general, young children should not be exposed to nudity or sexual behavior because they may not have the cognitive or emotional maturity to understand what they are seeing. It is also important to protect children from sexual abuse or exploitation, which can involve exposing children to genitals.
All children in group settings, including sports, see other kids' genitals quite often -- in the bathroom, when they change their clothes (or are changed, if they're preschoolers still in diapers), during bathing etc. Kids at home see their siblings' genitals quite often. And parents with little kids, rushing to get things done, often find it impossible to use the bathroom, change their clothes, shower etc. without kids around. And then there's pets, who not only have genitals but often display them in a particularly visible way.
I agree that showing kids genitals in a sexual context is not a great idea, though in real life they catch glimpses of their parents' sexual life, glimpses of sex in the media, etc.
There are many voluntary rating systems, including ones that have a multidimensional space of "why this may concern parents", with counts and intensity descriptions. sex, drugs, violence, alcohol, swearing, nudity, crime, etc etc etc. They are all linked off IMDB. Go use them, instead of depending on the utter farce that is the MPAA.
If the argument is that we can do better than the MPAA, that's fine and I have no disagreement. It appeared that the argument may have been that we don't need ratings at all, or that the existence of rating systems lead to bad situations (referring to a lack of genitals in Hollywood movies - which I don't think is a universal bad thing, as I explained in my post about viewer preferences, and it's not like people can't see genitals if they want to, even if not in a major movie).
Adult comics that broke all those rules, like Watchmen and Sandman, were coming out in the 80s and 90s.
Yes, the Code became increasingly irrelevant as decades passed. The first mainstream comic to break with it was The Amazing Spider-Man #96 in 1971, which contains drug references.
What changed? Mostly, the economics of the comics trade. From the 50s to the start of the 80s, something like 95% of comics were sold at newsstands. But as the 80s progressed, the industry shifted to so-called "direct market" outlets (specialist comic stores and mail order). These were mostly run by younger guys who had no interest in obeying the CCA's diktats. It became a paper tiger.
See Chuck Rozanski: https://www.milehighcomics.com/newsletter/031513.html
The shift to the direct market mostly happened a decade after the Code started to show cracks. I think things like ASM #96 more reflected changes in society (the Hays Code had also withered and eventually broken the previous decade) and distance from the days when comics were a central moral panic: more writers and editors who didn't personally remember being part of an industry hauled before Congress and required to explain itself.
There was also a big influx of young creators (some very young) who wanted to push the envelope in a way the previous generation mostly hadn't. (At least after the Golden Age, when the angry young men were more focused on Hitler or political corruption than the Establishment generally.)
Most comics were still Code-approved for a long time after, but the Code was decidedly less restrictive in the 70s than it had been in the 50s and 60s. The shift to the direct market and the aging up of the audience then reinforced that trend.
What if we required the IRB's to judge the dangers of a study in comparison to the harm done by delaying it, and to delay it only if the former is greater than the latter?
How would you know in advance of a study?
IRBs are not a utilitarian tool; they represent an entirely different kind of ethics.
Obviously the IRB would have to estimate. While in some situations that would be difficult or impossible, in others it would not, and after all physicians and other people have to make intelligent estimates all the time about harm if done vs. harm if not done. Doctors, for instance: If I give a 2 weeks' supply of this abusable drug the person might abuse it; however, if I do not give it they are going to suffer a lot of pain/anxiety/whatever the drug treats. Seems to me that in some of Scott's examples the IRB was in at least as good a position as doctors typically are to make judgments about harm if done vs. harm if not done. In fact, maybe they are in a better position, because it's likely that the proposals submitted to the IRB contain actual stats about likely risks to patients from the proposed study, and about evidence that the treatment they are proposing will reduce illness, suffering or death. After all, whatever they're trying isn't some random chemical, it's generally something about which a fair amount is known -- it's already used for some other illness, or already used at a different dose, or a very close cousin of a familiar and safe drug. As you say, medicine is not manufacturing, and there are judgment calls professionals must make all the time, and they are often high stakes, too. Does not seem to me too much to ask IRBs to do the same.
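As a back-of-the-envelope illustration of the comparison being proposed (every number below is an invented placeholder, and "harm" would need some common unit such as QALYs lost):

```python
# Compare the expected harm of letting the study run against the expected
# harm of delaying it. All figures are hypothetical placeholders.
def harm_if_approved(p_adverse_event, harm_per_event, n_subjects):
    return p_adverse_event * harm_per_event * n_subjects

def harm_if_delayed(p_treatment_works, benefit_per_patient,
                    patients_per_month, months_delayed):
    return p_treatment_works * benefit_per_patient * patients_per_month * months_delayed

risk_of_running = harm_if_approved(p_adverse_event=0.001, harm_per_event=1.0,
                                   n_subjects=500)
cost_of_waiting = harm_if_delayed(p_treatment_works=0.2, benefit_per_patient=0.05,
                                  patients_per_month=10_000, months_delayed=6)
# Delay only if the study itself is expected to do more harm than the delay would.
print(risk_of_running, cost_of_waiting, risk_of_running > cost_of_waiting)
```

With placeholder numbers like these the cost of waiting dominates; the proposal is simply that the IRB be asked to write down both sides of that comparison.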
Weirdly, they already sort of do this for animal research, just not human research. But since I don't really agree with the things they allow, I'd like them to at least be less loose than that with human research.
There certainly are some treatments that you'd want to think over very, very carefully before giving the go-ahead for human subject trials. But there have to be a fair number of situations where the risk of trials is obviously low, and the benefit, if the treatment works, would be great. For instance metformin is a drug very widely used for diabetes, and it's been in use for quite a while -- it's safe and effective. Last year some time there was a study that showed it reduced risk of hospitalization for people who had covid. Seems to me that an IRB considering whether to permit that study could be confident that subjects would not be harmed by the metformin, and of course if it was found to work there clearly would be benefit to many people. And metformin wasn't randomly chosen. The people doing the study must have had some reason to think metformin would help. (I believe it was because of data that diabetics on metformin had better covid outcomes than diabetics who were not.)
Or consider Scott's example about wanting to check the accuracy of the brief bipolar questionnaire administered to patients soon after admission. There's clearly zero chance that this study would harm any patients. They were all routinely administered the questionnaire anyhow. Scott's study would just have involved determining how well their questionnaire score matched up with final diagnosis. The experience of patients in the study would not have differed in any way from that of patients not in the study. As for the benefits of figuring out how good a predictor the initial questionnaire was -- well, it's hardly going to change the world, but clearly it's overall beneficial to patients to make sure the early assessment of whether they are bipolar is done with a questionnaire that's as valid as possible.
Can you explain more about why the MPAA situation is bad? It sounds like it's working correctly - it's saying you can't market your film to people who want safe-for-all-audiences films if it has naked people in it. That is what I would want out of a film rating system.
The problem is that the ratings ripple back onto the industry that makes the movies. As Goodhart's law predicts, studios edit films to ensure a "good" rating, regardless of the filmmaker's artistic goals or what moviegoers actually want to see.
Technically, the ratings are voluntary. But most theaters refuse to play NC17/unrated movies, so nobody makes them.
Don't imagine that the ratings make sense. They're arbitrary and bizarre - there's too many examples to list. A good one is "Love Is Strange" (a movie with no violence or sex and about ten curse words). It got an R rating, apparently for depicting a gay relationship. In general, the MPA is extremely harsh on sex, and extremely lenient on violence. (Jack Nicholson: "Kiss a breast, they rate it R. Hack it off with a sword, they rate it PG."). It's purely the personal preference of a group of risk-averse middle-aged people.
There's no viable network of non-MPA theaters that unrated movies play at. They're the only game in town. You can see the results of this by looking at global box office totals:
https://www.the-numbers.com/box-office-records/worldwide/all-movies/mpaa-ratings/nc-17-(us)
All the highest grossing NC-17 movies (with few exceptions) are either sleazy porno crap from the 70s (when adult theaters existed) or foreign films that made money outside America. And the amounts are so small! The #1 movie on the list only made 65 million dollars, or less than that horrible Cats movie everyone hates.
The MPA isn't the worst thing ever, but it doesn't classify films so much as control them. It's basically the modern descendant of those 1930s organisations like the Legion of Decency, which meant movies couldn't show a toilet flushing (I'm not kidding, that was a rule for decades).
Any idea why movie theaters don't show NC-17/unrated movies?
If the free market is working correctly, the **people** doesn't want sex in their movies. Otherwise the theaters would smell the profit and cater for it. Ironically I've heard that the quite modern R-13 is also unpopular because audience doesn't like those half measures movies.
If anything, I guess the lesson is in more granular ratings. But studios would still prefer to make family-friendly movies if they can.
I think there's a typo here:
> Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous
As written the "they" in the second clause is referring to the "Patients with a short consent form," such that longer is better, which is the opposite of what the rest of the paragraph is suggesting.
Since I was just going to post the same thing, I'll hijack this comment to report something else:
I'm confused by the name "Virginia University". I'm familiar with the University of Virginia (UVA), Virginia Commonwealth University (VCU), Virginia State University (VSU), Virginia Polytechnic Institute and State University (Virginia Tech), and several others that this might refer to. Did you mean one of them?
It’s situations like these which would be avoided in a country with an absolute monarchy, where the monarch’s one job is to “unstick” the system when it is doing insanely dysfunctional things because of a combination of unintended consequences and empty skulls. Ideally such a monarch would have a weekly tv show where he used his limited-unlimited powers to fix things (limited in that he can’t overrule the legislature but unlimited in that all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved by his plenary power to fire or imprison any individual whose egregiousness he can make an example of in order to address public opinion). Bureaucrats who reject basic common sense about costing 10,000 lives to save 10 can be named and shamed in ways which will rapidly improve the quality of the others.
So you're saying we should swallow a spider to catch the fly?
As usual, comments beginning with “so you’re saying” misrepresent.
This is more like a super-supreme court that can act on its own initiative but only in a case-based way, indicating by example how things ought to work and punishing those who abuse systems in the ways Scott described in this post.
I read that comment as: "here is my takeaway from what you said, is this what you meant?" and this doesn't sound like misrepresentation, but a question asking for clarification.
Then why not have a super supreme court? And why expect a monarch with unlimited powers to use them only for things you like?
Who said “unlimited powers”?
And who said the monarch always has to agree with me?
This is an unsticking mechanism, to deal with a particular kind of civilizational inadequacy that Scott and Eliezer have written about. We already have it in the case of pardons, where the head of state has a plenary power to intervene to prevent an unjust result. If you don’t like it, suggest another mechanism.
> who said the monarch always has to agree with me?
Why are they a good thing, if they don't?
> If you don’t like it, suggest another mechanism.
Supreme courts. Ombudsmen. Unignorable petitions.
> Ombudsmen
I think that is the proposal. A monarch is odd, but we could have an elected "bureaucracy ombudsman" at the national or state/local level.
Ah, we should give the spider strict instructions before swallowing it. OK.
It's called a "constitution."
Wonderful! And by what means does this constitution exert its magical powers? A freeze ray that zooms out from the Library of Congress, and encases the evil-doer in ice?
We divide the constitution into three parts: the "bird", the "cat", and the "dog". They keep themselves from getting out of hand, and together they exert control over the federal bureaucracy, which we'll call the "goat".
Make it like the Gong Show -- George Washington et al. sound the gong, and the spider gets yanked out. But how do we ensure that the next one we swallow is better? Hmmm
The correct response to any comment that begins with "So you're saying" is to stop reading and block the person who wrote it.
This is more or less the role of the media. Sometimes they help fix the problem; sometimes they help create it. I can easily imagine your hypothetical monarch hearing about one death from a study and demanding that the system "do something" to keep it from happening again.
It used to be part of their role to expose things. But they often lacked investigative authority and had no power to fix things; and now they've abandoned even the “expose problems” part.
No. The problem is that 'some doctor fucked up doing a study and it killed someone' would have been front-page news in the past and would go viral now, but 'IRB stops good research' is boring and wouldn't go viral. Media can still expose problems, but they're focused on audience engagement, and small real harms are more engaging than massive harms that rely on counterfactuals.
What this tells me is that we should be asking our reporters to make "massive harm relying on counterfactuals" stories interesting! It's not that hard--I've seen really well-written data-driven "boring" investigative journalism before and even written some things in the genre myself. Most journalists these days a) go to college and b) are thus required to take at least one creative writing class as part of their journalism major/minor, so they *should* have the requisite skills to write engagingly about these kinds of things. The fact that either they don't or they don't have the guts/institutional backing to try is rather sad.
Maybe we could have an Institutional Review Board that checks articles by reporters to make sure their "massive harm relying on counterfactuals" stories are interesting. If they're not, they have to rewrite. You know, sort of like scientists proposing experiments with human subjects have to.
I think there's definitely an institutional incentive issue here created by asymmetric consequences for good/bad press, but it's not the only issue, or even necessarily the most important one. Media outrage stories work better when there's a sympathetic victim and a clear villain and narrative. They're a lot harder when the villain is "slow drug approval procedures, each of which can be individually justified but which when summed up mean that beta blockers took an extra decade to get onto the market in the US, and thus thousands of extra people died of heart attacks but nobody can say exactly which heart attack deaths can be blamed on the slow approval."
The role of the media has always been to be hysterical Chicken Littles. "Yellow journalism" is not a phrase that was coined in the 21st century. What seems to have changed is the alignment of the media with one political party and one ideology. As recently as the 80s, for example, endorsements for President were about evenly divided as to political party, and there were "conservative" and "liberal" newspapers who would editorialize and write breathless warnings about initiatives from almost any ideological direction.
This is no longer true, and why that is the case is a very interesting question. But the problem this has created is that the media are no longer reliable Chicken Littles when it comes to problems and fuckups coming from one particular ideological direction.
Would you say Singapore comes close? Not a monarchy, but its government doesn't seem obstructed the way the US is.
Singapore is more functional, but alas our government still bows to voter demands. Even when the voters want stupid things like curbs on immigration.
This made me realise that Singapore is actually a de facto monarchy.
Father, regent (briefly), then son.
What does it mean that you "can't overrule the legislature" but "all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved by his plenary power to fire or imprison any individual whose egregiousness he can make an example of"? Doesn't that latter power give the monarch plenty of power to overrule the legislature, at least in practice, even if not in principle?
Only in individual cases. He’s not going to have time to take over the role of the entire legal system, but he can unstick things.
One discouraging thing about this proposal is that I don't think governors and presidents do all that much of this sort of thing, even though they do generally have the power to do so. I mean, the EPA works for the president, and so in principle, I think Biden can tell them "this administrative rule you have imposed is dumb, get rid of it," and they probably have to listen to him sooner or later. But of course, that's not something you see happening all that often, probably because Biden is worried about bad press if he gets it wrong, doesn't think he or his staff know enough to do a good job of overriding the EPA's bureaucracy, etc.
Joseph. Your idea has a fatal flaw. Who or what ensures that the "monarch" or whatever you call him is wise, just and conscientious? When one dies do we just plug his offspring into the job? Do we elect a new one? Hmm, I think those ideas have been tried. Do we have a group of wise, just and conscientious people who choose a new monarch when we need one? What do we do when someone on the monarch-choosing committee dies? Do we just plug their offspring into the job? Do we elect someone? . . .
Some "silver" or "bronze" will also slip into the "golds".
That's not a meaningful limitation. Literally everything is an individual case.
Works great when you can make the monarch always be a person who conforms to the ideals of the person you're imagining.
There are drawbacks to any system but his ONE JOB is to counter the kind of civilizational inadequacy Scott often decries. This is intended as a safety valve. There may be other designs which better avoid such horrors, feel free to suggest some.
Why couldn't he just shame and fire people like Scott or Simon Whitney who are trying to reform the bureaucratic system?
you: "Hey, to prevent your car from rolling away, you should make the tires square."
me: "This has a fatal flaw: you can't drive the car anywhere".
you: "There are drawbacks to any solution here, feel free to suggest better solutions."
I mean, yes there are better solutions, but if you can't acknowledge the fatal flaw in yours I'm not sure what more to say.
So who stops him if he overreaches?
You could give him a limited number of overrules in his term, allow congress to overrule him, or even make being overruled like a vote of no confidence that forces him out of office.
I think the key insight you have is that there is an institutional gap for enablement and facilitation of progress. Currently we have too many "veto points" in the system ... so where can we add a counterbalance? The example of pardons is a good one. In practice, Executive Orders somewhat fulfill this role, but with lots of limitations. It is worth pondering the options.
"limited in that he can’t overrule the legislature but unlimited in that all problems which are the result of idiots and rent seekers and cowards and ninnies and wicked people not doing their jobs better can be solved"
...and next week there's a coalition of legislators who are idiots and rent seekers. This is a hard problem and a monarch introduces different problems.
There's nothing stopping American Presidents from doing something similar. Keep the weekly TV show, get some researchers, and shove John Oliver out of his niche with a weekly take-down of something rotten.
I don't think it would work well. Insufficient research, insufficient followup, playing to the crowd. At some point there'll be more utility in using this as a threat, and then it's just a villain-of-the-week show. There's going to be scapegoat this week, so do whatever you have to do to make sure it's not you.
And I think the same reasoning would apply to a king.
Kings have more security and no need to pander.
Have you read any history books? Everyone panders. Everybody wants something that they cannot get for themselves. Security ends as soon as someone with that kind of power desires something.
Further, humans are social creatures and need to intertwine with others in order to keep healthy. Health in all its forms is an important aspect of security. We cannot avoid that which is encoded within us.
History, from recent and modern to ancient, is replete with people whose power to change things with the stroke of a pen or a statement has been central to the antagonisms of their time. We can look back to earlier this week with Justice Clarence Thomas. There is zero practical ability to police misconduct when the person whose actions or words are at fault is beyond the law's reach.
If anything, this book review highlights the problem with such a system; an all-powerful authority standing athwart that few have the ability to cow is exactly the problem. The difference is which direction is the beneficial one, which is a matter of perspective.
The answer to a specious and noxious authority spending more resources in order to hoard a few is not a stronger individual, however satisfying it might be for some to imagine.
The answer is never to invest within a single person. We have to change the people that work within the system. When people are the problem, people will be the solution, not a monarch.
Madame La Guillotine would like to have a word with Louis XVI...
And I cannot believe that, were Trump somehow to become king of America, he would stop playing to his base. He's too much of a showman, and he knows exactly how to get his narcissistic supply.
But, joking aside: what percentage of kings do you think have lived up to this ideal? Let's restrict this to kings with real power, who are both heads of state and heads of government; a lack of power seems to correlate with a lack of corruption.
The Czar can exile any one bureaucrat to Siberia, but he can't run the country without the bureaucracy.
He can decide to try it though, and do a _whole_ lot of fresh damage before he goes far enough to be overthrown.
Also recall Proxmire and his routine. You can describe many valuable things in terms that make them sound nuts. (I heard about this totally insane proposal for car safety one time--these whackjobs wanted to set off a bomb in a bag in front of the driver, just as the car was crashing. Clearly, lives are endangered by this nutjobbery and we need to put a stop to it!)
1) That's not how absolute monarchies work.
2) It's impossible to codify powers which simultaneously allow a person to solve the problems you hint at and prevent them from just being a classical absolute monarch with theoretically unlimited power.
It won't stop people from trying, though. We are the eternal optimist species. Sure, heretofore, in all the billions of times it's been tried, it hasn't worked out, but maybe *this* time there really will be A Free Lunch[1].
-----------------
[1] Or maybe it's a failure of will. Tinkerbell didn't live before because not *everyone* was clapping their hands, damn them.
Have you heard of an https://en.wikipedia.org/wiki/Ombudsman ?
Well, that seems like a benevolent -- not dictator, but a dictator with somewhat limited powers. Sure, that would work well, but what's the system for ensuring these semi-dictators are bright, conscientious, and fair, and not assholes like so many people in power are?
Why is this different from a Presidential system? Most of this is happening through the executive branch; my guess is that if Biden wanted this fixed, he could fix it.
There’s an awful lot that’s a consequence of Congressional dysfunction burdening the system, and a lot that’s from private sector inefficiencies uncorrected due to market failure. But even for the part that’s entirely the executive branch’s job, what President ever made a serious dent in such civilizational inadequacy?
>Whitney points out that the doctors who gave the combination didn’t need to jump through any hoops to give it, and the doctors who refused it didn’t need to jump through any hoops to refuse it. But the doctors who wanted to study which doctors were right sure had to jump through a lot of hoops.
The incentives are different between "a doctor may do X or Y if he thinks it's best for the patient" and "a doctor may do X or Y based on considerations other than what's best for the patient", even if he's only going to be doing one of the things that one of the first two doctors would do. Don't ignore incentives.
That's definitely valid, but I think it overlooks the point that "what the doctor thinks is best for the patient" is not necessarily "what is best for the patient," and this sort of IRB review is preventing us from moving those things closer together.
But there is a different type of problem, when we genuinely don't know what is in the best interests of the patient. And further, there is a social benefit in ensuring we know the best option for the set of all patients. Participating in a trial might very well be _worse_ for a patient, but nevertheless results in better overall outcomes for all potential patients. So it almost is a different question. Consider the perfectly healthy individual rather than one hoping for new treatment. Can they consent to participate in a trial that might save the lives of others? I think so, and in that case the question of "best treatment for the patient" is moot; the only question is one of risk vs. reward. If the risk of death is, say, 1 in 1000 based on rat studies, but with genuine unknown-unknowns in humans and the chance to save millions, then I might altruistically take that risk. If the risk is 1 in 5 of death, to save dozens, I probably selfishly would not take the risk (and not feel bad about making such a choice).
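To make that tradeoff concrete, here is a minimal sketch of the arithmetic using only the hypothetical numbers above (the 1-in-1000 and 1-in-5 risks and the "millions" and "dozens" saved are illustrative figures, not real data):

```python
# Rough framing of the two hypothetical scenarios above.
# All numbers are illustrative, not from any real trial.

def saved_per_expected_participant_death(one_in_n_risk, lives_saved):
    # With a 1-in-N risk per participant, N participants produce one
    # statistically expected death; so lives saved per expected death
    # is lives_saved * N.
    return lives_saved * one_in_n_risk

# Scenario 1: 1-in-1000 risk to the volunteer, chance to save millions.
print(saved_per_expected_participant_death(1000, 1_000_000))  # 1000000000

# Scenario 2: 1-in-5 risk to the volunteer, chance to save dozens (say 36).
print(saved_per_expected_participant_death(5, 36))            # 180
```

Both ratios come out strongly positive, which is the point being made: the refusal in the second scenario is about the personal risk a volunteer will accept, not about the aggregate arithmetic.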
If you mean to imply that this is therefore a valid objection to the study, that seems exactly backwards.
In a randomized study, the doctor isn't choosing whether to do X or Y, the coin flip is choosing. This is actually LESS vulnerable to bad incentives than the non-study case, because a doctor who is told to do whichever they think is best can still be influenced by other incentives (consciously or subconsciously), whereas a doctor following the random number generator cannot be so influenced.
Plus, once the study is done, then you'll actually know whether X or Y is better. In addition to the obvious benefit of providing doctors with relevant information, this again REDUCES the vulnerability to perverse incentives, because doctors have less cover for biased decisions.
The more concerned you are about this, the more you should be in favor of doing the study!
The problem is not this particular study, but the incentives behind doctors choosing particular treatments. Imagine a COVID Ivermectin study where 90% of the doctors are pretty sure Ivermectin doesn't help, and wouldn't prescribe it. If we're randomly assigning it, now we've got a number of doctors being overruled on the best course of treatment in order to do the study. Now imagine instead of something fairly harmless like Ivermectin, the treatment has big long term potential side effects (like causing cancer in a non-trivial percent of patients). Then the IRB really should be concerned about this, even if a good number of doctors is already prescribing the potentially cancer-causing treatments and is allowed to do so.
I wouldn't rule out studying both treatments in that scenario, but I can definitely see why there might be concern. We would need some kind of evaluator that knows enough about the treatments to determine how big the risks are - which apparently the IRB had until 1998.
1) Unless I'm much mistaken, doctors aren't typically FORCED to participate in studies against their will. Presumably only doctors who think the study is worthwhile will volunteer, so talking about how their judgment is being overridden seems backwards. Their judgment said to do the study; their judgment would be overridden by blocking the study, not by allowing it.
2) If there's more than a trivial amount of X currently being prescribed (and you don't already intend to shut that down), then it seems like any study that concludes X is bad will almost certainly be a long-term net reduction in the amount of X, even if the study temporarily slightly increased X in order to study it. Therefore, the risk that X might turn out to be bad should weigh in FAVOR of doing the study, not against it.
3) If you think there are actually valid scenarios where a treatment could be too risky to study but not too risky to prescribe, could you give a couple examples? (Hypotheticals are fine.) I feel like you're talking about examples where you want to "be concerned", but I'd like an example of something that's over the line rather than just close to it.
Some of the risk tradeoffs need to be mitigated with earlier studies. If we genuinely don't know if the treatment will kill everyone in the study ... then probably we need to do some animal studies first, so that we can generate some priors. This seems like a reasonable kind of objection from more sane IRB.
This particular comment thread is discussing the study of treatments that are already in common usage.
If your objection is that you think the study is going to kill everyone, then presumably you should ALSO object to the doctors who are already prescribing that treatment (without studying it).
Sorry, I was imprecise in my description. When I say "overruled" I don't mean that they don't have a say in the matter or are forced to join the study. I mean that the specific judgement of those doctors in individual cases - as it may pertain to the needs and wants of their individual patients - is not taken into consideration. A doctor may think that Ivermectin (or whatever treatment we're talking about) has a small chance of working, and they would like to know if that small chance is true - so they need to study it. At the individual level of each patient coming through, they would think something like "90% chance this does nothing positive and has X, Y, or Z side effects; I would not prescribe the treatment to this patient." But to study it they have to prescribe it to a representative sample of their patients. That directly means that a significant portion of individuals will be given a treatment they would not normally receive, one with potential negative side effects. Any negative effects from this study are therefore against the doctor's better judgement and otherwise unnecessary.
This is more of a problem if a patient would really want to try a particular treatment, or would really like to avoid it, or the doctor has reasons to want or not want to try it with a particular patient due to their particular situation, but in order to study it those considerations must also be ignored.
As I said, that doesn't mean we should avoid studying treatments. It just means that there's a real reason for something like the IRB to exist.
It sounded to me like you were previously arguing "it is sometimes correct to place more onerous restrictions on doctors trying to study X than on doctors who just prescribe X without studying it"
and you are now arguing "there is more than zero reason for the IRB to exist at all"
Have I misunderstood?
I was always arguing the second point. I agree that hypothetically the first point could also be true, or at least a steelman version of it. I would phrase it more as "it is sometimes correct to place more onerous restrictions on doctors trying to study X than on doctors who prescribe X based on the individual circumstances of their patients."
That would look like a doctor prescribing a treatment because the doctor believes the patient to be a good candidate for that treatment, instead of because the treatment was determined by the design of the controlled study being administered.
That's assuming that the *current* best guess treatments are both at 50% prevalence, as opposed to being 80-20, or 60-30-10 with 10% being something the study won't cover, or that doctors aren't choosing different treatments for different patients for some reasons that are relevant, or that doctors aren't doing variations on the treatment for individual patients which would be excluded from a controlled study, or etc.
I don't believe I assumed any of those things, and I'm not sure why you'd think I had.
A doctor can never give treatment that is less than the standard of care. It is when there is no standard of care (equipoise) that something may be studied straight up against a placebo and/or in arms that have differing treatment.
This is discussed here (there is other stuff too); the Australian physician and ethicist gives a good history and explanation of equipoise and its evolution.
I do wonder if 'contact patients after they have already received treatment and ask if their anonymized medical data can be analyzed in a study' is an option.
It does mean you can't control for a lot of things, but you could get a lot of data potentially.
Slight nitpick: the cost of getting rid of IRBs isn't the handful of deaths a decade we see currently. It's whatever the difference in deaths would be with existing IRBs vs how many would die without them (or with their limited form). And of course any costs related to something something perfect patient consent.
You said Whitney found no evidence™ of increased deaths from before 1998, but if anything that strengthens the case against the increased strictness.
Does that really strengthen the case? If there were lots of deaths, do you suspect it would be hard to track down records to that effect?
It strengthens the case because it provides evidence (perhaps weak) that post-1998 IRBs are not in fact saving lives compared to pre-1998 ones, so their benefit isn't even a handful of lives a decade. I'm not sure what you're getting at in the second sentence, but I would expect a large number of deaths would be noticed, since before 1998 IRBs still existed, they just weren't as strict.
This is in part Illich's argument that medicine itself is causing deaths.
IRBs might be a red herring; iatrogenesis happens because of overmedicalization.
Studies are not designed to help a particular and specific patient, they are designed to study "a population". Per Illich physicians are to treat their patients, not some amorphous "population". People are not widgets on a factory line where RCTs, experimental design, and tests are done on objects.
There is a deeper philosophical and sociological question at issue regarding the personal relationship between the sick and the healer, which is sidestepped in this book review.
I mean, I guess that sounds noble and all, but I'm glad they've done all sorts of studies on which treatments are effective instead of just winging it. Like grand visions aside, what does the concrete interaction look like? In your ideal case, if I'm having a heart attack, how should my doctor decide what to do? Or do they do nothing and see if I pull through myself, since death is part of the process of life.
Sorry I totally misread what you wrote. What you actually wrote makes perfect sense. Ignore everything I said.
Yes, agree. Instead of getting rid of IRBs, what about having them change their criteria: they may refuse to approve only those studies where the harm of doing them in the original form proposed by the researchers is clearly greater than the harm of not doing them (or of delaying them more than briefly).
I think that first paragraph is a pretty big nitpick, honestly, and came here to post about it. Stuff like "We haven't had any thefts in this store in the past ten years, so we clearly don't need store security," is a common fallacy.
(I don't think that narrative hiccup changes the outcome of the overall review/book, mind. It just popped out at me.)
Thefts vs. security is an easy tradeoff to check the math on, taken in isolation, since both can be evaluated in terms of money, and the inherent variability of losses to theft can be hedged by insurance. Where it gets ugly is the possibility of widespread theft reducing aggregate demand for legitimate purchases, and thus the price point. That's how you end up with cops outside grocery stores trying to stop desperate people from "stealing" technically-expired-but-probably-still-edible garbage.
I am sure there are good reasons why it is impossible, but in my dreams (the ones where I have a pony) some bored lawyer figures out how to file a class-action suit against the ISIS-2 IRB on behalf of everyone who died of a heart attack during the 6 month delay. Once IRBs are trapped in a Morton's fork where they get sued no matter what decision they make, they will have to come up with some other criteria to base their decisions on (though I am cynical enough to expect whatever they come up with to be even worse).
Administration malpractice insurance. Get in on the ground floor of this soon-to-be burgeoning industry!
What they would come up with is not doing the research at all.
Maybe impact markets could be used to incentivize risk taking?
A very strong essay, well-written as always, and addressing clearly and cogently a point both very important and not obvious to most people. Well done!
I think the fundamental problem is that you cannot separate the ability to make a decision from the ability to make a *wrong* decision. However, our society--pushed by the regulator/lawyer/journalist/administrator axis you discuss--tries to use detailed written rules to prevent wrong decisions from being made. But, because of the decision/wrong decision inseparability thing, the consequences are that nobody has the ability to make a decision.
This is ultimately a political question. It's not wrong, precisely, or right either. It's a question of value tradeoffs. Any constraint you put on a course of action is necessarily something that you value more than the action, but this isn't something people like to admit or hear voiced aloud. If you say, "We want to make sure that no infrastructure project will drive a species to extinction", then you are saying that's more important than building infrastructure. Which can be a defensible decision! But if you keep adding stuff--we need to make sure we're not burdening certain races, we need to make sure we're getting input from each neighborhood nearby, etc.--you can eventually end up overconstraining the problem, where there turns out to be no viable path forward for a project. This is often a consequence of the detailed rules to prevent wrong decisions.
But because we can't admit that we're valuing things more than building stuff (or doing medical research, I guess?), we as a society just end up sitting and stewing about how we seemingly can't do anything anymore. We need to either: 1) admit we're fine with crumbling infrastructure, so long as we don't have any environmental, social, etc., impacts; or 2) decide which of those are less important and streamline the rules, admitting that sometimes the people who are thus able to make a decision are going to screw it up and do stuff we ultimately won't like.
I think this is mostly correct. However, what you can do is separate the ability to make a decision from the ability to be blamed for it.
You can make sure the researcher says: but I relied on an IRB. The IRB can say they followed all the appropriate procedures and considered the kind of concerns previously established as important. And the upshot is you can make sure everyone has only partial responsibility and no one in particular can be blamed.
I actually fear this often makes the decisions worse. Since no one can be blamed, there isn't the same incentive not to try to get any random shit by the IRB, and no one gets blamed for stopping studies that save lives.
So in other words... committees that decouple power from responsibility might have drawbacks?
We move a little bit closer to Neon Reduction every day, and I'm here for it.
"Environmentalism always trumps infrastructure" and "infrastructure always trumps environmentalism" are actually both terrible policies.
In order to have nice things, we'd need someone to do cost/benefit analyses on a case-by-case basis, and allow important infrastructure that causes minor environmental problems while blocking inconsequential infrastructure that causes severe environmental problems.
But then, someone would actually have to make a judgment call, and could be blamed if it was a bad call, rather than being able to defend themselves by saying that they just followed the rules.
I think this is too much focus on trying to control decisions, and not enough on trying to control outcomes.
I feel like a lot of laws that apply to organisations (not people) need to change to prosecute specific outcomes rather than actions.
People are limited in the scope of their actions. The worst thing you can do is probably assault or kill people (number varies). You prevent that by restricting people's access to tools that help you kill people (weapons, dangerous goods, things that help you make bombs), and you also make rules against the types of behaviours that directly lead to those outcomes (do not attack people! Don't murder people! Etc)
With organisations, I don't think you can control actions in a sensible way. If you want to prevent a certain outcome, you should just say "if you caused these outcomes as part of your activities we will make your life hell" and let the organisation sort it out. You should not be attempting to exert control on their actions, because
1) they're much better placed to figure out the cause and effects of certain actions than you [random politician or bureaucrat]
2) trying to prevent outcomes by requiring specific actions means you have to be on top on every freaking thing they're doing and what potential harms they could be unleashing. That's a lot of work, and again, much more achievable by the organisation doing this work than you, an external non-expert party.
This works when you, the government, have some lever of power over the organisation. If it's a research institution, you can ban them from doing research on subjects. If it's a private company, you can take away their license to operate and fine them. As long as the government can in fact enforce the threat (removal of license to operate), this should influence the organisations to think very, very, very carefully about poking anything they don't fully understand. This will not necessarily stop a person, but this will stop an organisation (who can control said people by removing them ie firing if necessary).
So yeah. People can have accidents, but organisations should not. And these activities are so complicated now, trying to write rules around specific decisions is a futile, losing race.
Also, requiring specific actions increases complexity. This favors larger organisations, with more elaborate compliance and legal teams, over smaller ones, and thus hampers innovation.
This goes well beyond medical or construction or the other tangibles that Scott mentions. It also applies to "safety culture" that has spread in the last few decades. Whereas everyone above the age of 35 seems to remember childhood spent outside with lots of freedom, children now live highly regulated lives that greatly reduce the (already low) chances of failure/injury. Do we make a decision to let some kids die to entirely preventable situations so that the majority can experience more things in childhood? I would say yes, but it's not always an easy decision. I am certainly more protective of my children than my grandparents were of my parents. I would even go so far as to say that my grandparents were too lax on safety, even though all of their children survived to adulthood (maybe some luck involved there).
One issue here is that it's hard to think about/discuss tradeoffs across value domains--like if I'm talking about dollars vs dollars, or endangered species vs endangered species, or lives vs lives, it's not so hard to talk about the tradeoff without coming off like an evil robot. But if you're talking about lives vs dollars or endangered species vs affordable housing or something, then everything gets harder. Partly that's because we don't all agree on exchange rates between those things, but even more it's because there are often sacred values at stake, which means it's easy for people to fall into the "if it saves one child" trap where they can't even consider tradeoffs, and it makes it easy for anyone arguing for those tradeoffs to come off like an evil robot. And yet, we have to make tradeoffs between those things to function. Lives vs personal freedom (covid restrictions, gun control, smoking bans), endangered species vs economic growth (dams and large construction projects), safety vs money (car safety requirements, motorcycle bans), etc.
I think the net effect of this is that those tradeoffs are very often pushed off onto unaccountable bureaucracies or unaccountable courts, just because the more democratically accountable parts of the world have a hell of a time dealing with them at all.
I kept expecting this post, like so many others on apparently-unrelated subjects, to loop around and become an AI Safety post. But since it didn't, I guess I'll drop the needle myself.
Can we consider AI Safety types to be part of the lawyer-administrator-journalist-academic-regulator axis that stops things from happening due to unquantifiable risks?
Have we considered implementing a six month pause in medical research until we figure all this out?
Let me try to defend the AI safety folks, some of who(m?) have started calling their field AI “notkilleveryoneism”. In particular, they have attempted to quantify the risk as follows: “if we’re right then everyone dies”. (I guess they failed to quantify the chance they’re wrong. Oh well.)
A second dis-analogy is that they have not in fact succeeded in stopping that which they want to stop.
And a third is that they tend to violently reject the kind of thinking that would make someone say “it’s okay if my actions (blocking research) cause countless billions to die as long as none of them die in my study (which couldn’t take place)”. The AI safety folks tend to believe in cost benefit analysis instead, I think.
AI-risk falls into the same Pascal's Mugging situation that Rationalists consistently reject when it comes to religion. By staking "the end of everything" (or humanity, or civilization - something inexpressibly major) as a potential end point from AI, they can make any cost-benefit analysis come out as not worthwhile. How do you multiply a small percentage chance against infinite loss? You keep coming up with infinite expected loss and must automatically reject AI. It wouldn't matter if the chance were 80%, 30%, 5%, or 0.001% - multiplying by infinity still results in infinity.
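To spell the objection out as arithmetic (a toy sketch; the probabilities are just the ones listed above, and the infinite stake is the assumption being criticized):

```python
# Toy illustration of the Pascal's-Mugging-style math described above:
# once the stake is treated as infinite, the expected loss is infinite for
# ANY nonzero probability, so the probability estimate stops doing any work.
import math

def expected_loss(p, stake=math.inf):
    return p * stake

for p in (0.8, 0.3, 0.05, 0.001, 0.00001):
    print(p, expected_loss(p))  # inf every time
```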
Typically the case is made that the probability is high, so it's not Pascal's mugging.
Is 5% high? That's about the median that I've seen among the select people who study this, which may be a very biased sample towards higher numbers. What's the percentage chance that Jesus comes back? How do you calculate such things with any kind of certainty, that doesn't rely on some very subjective reasoning? If you ask a bunch of believing Christians, the percentage chance that Jesus comes back is high as well. What makes one Pascal's Mugging and the other not?
I'm not seeing anything that cannot be described as special pleading.
5% is high enough that it puts a pretty low limit on the number of mutually exclusive scenarios you can be sold. Usual Pascal Wagers/Muggings rely on things with infinitesimal probabilities and infinite upside/downside, and then focus on one option out of near infinite possibilities.
A believing Christian isn't being Mugged into believing, they believe of their own accord. A classic Mugging is to say, well there may or may not be a god, but if there is and you follow the motions, you get infinite utility, so it's still a benefit to believe. The fact that the particular god in question is one of a vast possibility space is what makes it a Mugging. If there were only two options, this specific god or no god, and there was a 5% chance that the god did exist, it wouldn't be a Mugging as usually described.
Sure, but then we're looking at whether 5% is the real chance of AI killing all life in the universe. Just like the religious are far more likely to think a higher percentage is likely, those who study AI safety think that the percentage is higher than everyone else. But regardless of what people think, there's the real probability that depends on things like "is it even physically possible to kill all life in the universe" or whatever. If a Foom is literally impossible (or simply doesn't happen) or the worst consequence of an AI disaster is something like a 10% reduction in world GDP for a few years, that's definitely a different calculation from the [infinite negative] X [probability of AI superintelligence + AI is evil] that looks more like a Mugging.
I'm not sure about the existing AI Safety types (amongst other considerations, I don't think that they have the actual power to prevent research). But the proposed government regulators sure sound like part of that axis.
I don't think the take-away should be "blocking things is bad; allowing things is good". There are SOME studies that should, in fact, be blocked.
The problem is that bureaucrats are looking only at costs, instead of weighing costs and benefits; that is, they think that a study that accidentally kills one person but saves a thousand people is "bad", even though it prevented far more deaths than it caused.
I think the proper take-away is "do a goddamn cost-benefit analysis like a sane person."
Or, at a higher level of organization, "the public needs to be just as angry at administrators who prevent good things as they are at administrators who permit bad things".
C/B analyses in all things is utilitarianism in all things. Be careful what you wish for, you might end up with involuntary organ harvesting.
We (in the US) could at least make post-death organ donation the default and require people to opt-out instead of opt-in. :/
According to utilitarianism, that's not enough.
The resources required for legal battles with people who'd prefer to opt out on the basis of e.g. strong religious convictions could be more productively spent on developing synthetic transplantable-organ substitutes.
I think you should do C/B analysis even if you also have a deontology you obey; they're not just for utilitarians (unless your deontology only permits exactly 1 action in all possible situations).
I don't think utilitarianism actually results in murdering people for organs in real life.
And I'm not arguing medical studies should get rid of consent. I'm arguing you should allow people to consent to things instead of presuming that true consent is impossible.
Right. I'm not sure how pure utilitarianism ever survived Swift.
I will have to pull out some Voegelin essays to reread. The strain of gnostics immanentizing the eschaton is persistent. If only they could get the utilitarian equations right - Eden.
People want to be probabilistic but haven't ever read Deming's "Probability as a Basis for Action." No understanding of the difference between enumerative studies and analytical studies, nor of their limitations.
Asking for a 6 month pause seems dumb as hell to me, and hard to justify, sort of like IRB bullshit. Makes far more sense to demand we pause development until certain criteria are met, having to do with understanding what's going on under the hood.
It's not dumb from a sociology point of view. It's a camel's nose. Let's say you *do* get everyone to agree to a 6-month pause. The pause itself is worthless, of course. But what you have also done is (1) get everyone to agree that the risk is real, and (2) get everyone to agree that a pause is a reasonable response to the risk. You've won some huge victories in terms of setting the terms for future debate. *Now* you can go back and say "actually, no, we need a 10-year pause, or we need a pause until this-and-such criteria are met" and 80% of your battle is already won by the fact that everyone previously agreed to a 6-month pause[1].
------------------------
[1] Like the old chestnut:
A: "Would you sleep with me for $1 million?"
B: "Sure, I guess."
A: "What about if I just buy you a cup of coffee afterward?"
B: "Of course not! What kind of man/woman do you think I am?!"
A: "We've already established *that*. We're just dickering over your price."
In other posts on here I've made a coupla related points:
-Won't the AI development companies just use the 6 mos. to keep developing GPT5, or whatever they're working on now? (I recognize that this doesn't negate the benefit you name, of sort of driving in a wedge, getting people's attention.)
-Wouldn't it make more sense to tie the pause to a criterion instead of the clock? I mean, that's just basic behavioral management. If you want a recalcitrant subject to change their behavior, you don't just take away their cookies for the day, you tell them no more cookies until they do X, and it has to be a good-quality version of X and sustained for at least some specified period of time.
The criterion could be accomplishing some piece of work that would make AI safer, maybe something to do with alignment. Or some simple test of degree of alignment, probably something like setting some rules and then inviting all the dark, ironic hackers in the world to try to get the AI to break the rules. Or something having to do with understanding better what's inside that black box? How does AI come up with its responses to prompts? Or some document developed jointly by the AI companies where they list several safety tests that each new version has to pass -- or several things that, if they happen, will automatically trigger cessation of development until there's full understanding of what happened (sort of like the investigation that's done when an airliner crashes).
Seems like tying resuming work to a criterion also drives in a wedge, and gets people's attention, and wakes AI companies up to the possibility that government could really cramp their style if it chose -- but also is useful in itself, whereas the 6-month cessation plan is useful only as a wedge. Also, many people will realize the 6-month cessation plan put forth by the government is ineffective and silly. How can that be good? During covid there were various directives and info bulletins from the government that were obviously ineffective and silly, and that caused a lot of rage and despair to individuals directly affected by them, plus a general cynicism about government among the public at large. If the government starts with something stoopit then there will be less support for whatever its next proposal is. So if the next proposal is a 10-year cessation, fewer people will be behind it. One of the Shitlords of the Internet can say, "The government keeps shutting us down for periods of time with no clear idea of what that will accomplish," and they will be right, and more people will rally around the Shitlord.
I'm not disagreeing there are much better ways to go about things if you're really worried about superintelligent AIs eating our brains (which I'm not). Nor do I disagree there might be better political/sociological approaches. I'm just saying calling for a 6-month moratorium isn't a priori stupid, if you consider it a political/sociological gambit to alter future conversations rather than as an attempt to actually change trajectory in the short term.
Ok, it's not stupid, but don't you think it's stupidER than imposing a criterion-based pause rather than simple time-based one?
I have no idea. If I had that kind of social-manipulation skills, I'd be fascinated by politics and enjoy big leadership roles, neither of which is true. On the criterion of "which would make more sense if we could just pray to the God Emperor and he would grant one?" then of course you're correct. I'm just observing, secondarily, that part of getting stuff done at the collective level is social-psychological or political, "the art of the possible" and/or the art of getting half a loaf.
Maybe people who are better at social manipulation than me have decided that a specific (short) time frame soothes people who are concerned about Luddism, because it's "only" 6 months -- and because there can really be no subsequent debate about the meaning of "6 months", while any criterion-based pause relies on a future consensus on whether the criterion has been met or not. So getting a 6 month moratorium is plausible, while getting a criterion-based one is not, and it changes the terms of future debate, so hurray.
But I will say even all of that assumes the people proposing it are representing their motives in good faith, and I am alas deeply skeptical about that. I think no small number of them are participating for assorted selfish reasons which have actually squat to do with protecting the species. Mostly because, as you have pointed out elsewhere, if people *really* thought that the probability of species extinction inside of 25 years was, let us say as high as Scott's 33%, they would be pulling out all the stops -- they would be hysterical with fear and rage, and they wouldn't be talking about milquetoast 6 month moratoria, they would be hunting down AI researchers and hanging them from the nearest tree, wrecking hardware, burning down buildings, and otherwise acting like a truly frightened people.
"Two weeks to stop the spread!"
A few of them at least had the grace later to admit they had lied, and it was all about introducing the camel's nose.
B: "My previous tentative acceptance was implicitly predicated on a certain foundation of respect. How do you feel about $16 million, half in advance? Or, since the exchange rate there is so unfavorable, we could discuss your options for a public apology."
Yes. However, absent an AI Tuskegee, they're toothless. Whether that's good or bad depends on your priors, I guess.
Excellent article.
After reading through and getting to the proposed suggestions/your meta thoughts, I was surprised that you didn't mention the option of some sort of blanket liability shield to protect institutions much in the same way that the current consent forms are designed to do, so they don't need to be as obsessed with lengthy bureaucracy to shield themselves. Is this just not realistic?
If the change in 1998 originated with grandstanding congressmen demanding changes in order to win votes, then protection from lawsuits wouldn't have helped (in that particular case). You would've needed a shield that prevents congressmen from gaining or losing votes based on how they react to the issue, which doesn't seem possible without crazy sci-fi tech.
Also, if you protect the institutions TOO much, you might get the opposite problem, where they rubber-stamp all studies and do no actual oversight.
Given that ~40% of Congress are lawyers, and only ~3% are physicians, I wouldn't hold my breath.
"Lasagna and Epstein?"
And no endnote about the nominative determinism in a guy named Epstein studying consent?
Missed opportunity, Scott.
Not nominative determinism. "Epstein" doesn't have anything to do with consent in itself, except for the fact that a different guy called Epstein was a sex trafficker.
Though surely, it has kabbalistic significance at the very least.
I had to read that part multiple times to make sure scott wasnt making it up.
Some back-of-the-envelope math: the high end of your estimate for IRB-induced deaths (100k Americans/year) would imply ~2.9% of the 3.5M annual deaths would have been preventable had studies not been delayed that year.
This seems high to me. I wonder if the most egregious examples like ISIS-2 are throwing your estimate off. The lower end of 0.29% seems more reasonable.
Still a massive number of people though. Really enjoyed this, and hope it nudges us toward change.
(edited for clarity)
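For what it's worth, here's the arithmetic behind those percentages (the 100k figure is the high-end estimate quoted above; the 10k low end is inferred from the ~0.29% figure, so treat it as an assumption):

```python
# Back-of-the-envelope shares of annual US deaths implied by the estimates above.
annual_us_deaths = 3_500_000

high_estimate = 100_000  # high end of IRB-attributed deaths per year
low_estimate = 10_000    # assumed low end, consistent with the ~0.29% figure

print(f"high: {high_estimate / annual_us_deaths:.2%}")  # high: 2.86%
print(f"low:  {low_estimate / annual_us_deaths:.2%}")   # low:  0.29%
```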
IRBs do not induce death. This is a weird causal claim.
There is confusion about what is normally meant by CAUSATION.
There definitely is iatrogenic death, but not preventing death is not the same thing as causing death.
"not preventing death is not the same thing as causing death"
But the IRBs' role is actively preventing medical advances from preventing deaths.
Think of it as analogous to tripping a lifeguard on his way to saving someone who is drowning.
Sure, but intent matters. If you leave the rake out after raking up the leaves, and a lifeguard accidentally trips on it on the way to save someone drowning, it seems dubious to accuse you of murder. Nobody suggests IRBs *intentionally* set out to cause death.
'scuse the long delay. Sure, intent matters.
I would regard the effects of the early IRBs, which seemed to be mostly trying to do reasonable things to prevent another Tuskegee experiment, as well intended.
On the other hand, when Gary Ellis shut down every study at Johns Hopkins - well, it seems like an action that can both be described as acting "on advice of counsel" and acting "with depraved indifference to human life".
No argument there. It seems a human tendency to take almost anything good in moderation and push it to the limit where it becomes toxic. I suppose we do this so that future generations have our (bad) example(s) to help guide better choices..."ok, a little bit of this funny brown drink is good, cheering, makes you good company, but too much will turn you into an asshole and get you killed."
Many Thanks! Agreed on "It seems a human tendency to take almost anything good in moderation and push it to the limit where it becomes toxic." Come to think of it, setting down the beer and contemplating purity spirals: this might not even be restricted to humans. Toxic purity spirals might be driven by game-theoretic considerations when intra-group competition to signal allegiance to the group is important. A set of GAIs divided into several competing alliances might suffer from it.
Not preventing death is not causing death, and IMO morally very different.
Prohibiting others from preventing death is different from both, and IMO it's morally much closer to causing death.
As a fat guy who spends a lot of time standing near trolley tracks, I'd like to thank you for this sentiment.
On top of this, the members of these boards (at least in the example above) don't seem interested in improving the process to optimize for reduced deaths even when they are shown they are wrong. Jerry Menikoff's arrogance displayed regarding the Petal study is just shocking and, to me, makes him morally culpable for any death his delays may have caused.
If you are having a heart attack and I tackle the doctor who is on his way to perform CPR, I think it's reasonable to say I caused or even induced your death. I don't object if you want to propose other language, especially for "induce," but the point is similar.
In my opinion, it's even clearer if we move out of human actors. For example, aspirating mineral oil will coat the inside of your lungs, preventing oxygen from entering your body and killing you. I'd call that a mineral oil induced death, even if the actual cause is lack of oxygen.
If someone (why would you imagine yourself doing that?) tackles someone getting ready to do CPR, it is "reasonably foreseeable" that the tackler might be responsible for death. (Unless CPR would have been futile since the person (why make me the heart attack victim?) was already dead.)
There is no reasonable foreseeability that an IRB's actions cause any actual harm at the time of the action.
If someone tackles someone using a kitchen knife to do open heart surgery (to see if it works - they had a hunch and wanted to do the experiment given the opportunity), would the tackler be responsible for the death of the heart attack victim? No!
I am dumbfounded by the idea that so many here are willing to hold so fast to the notion that IRBs and the desire for ethical research on humans are ostensibly killing people. Is this some kind of generational divide?
I personally don't agree that things can only be viewed as causing those effects that are reasonably foreseeable, but don't think language disagreements are usually productive.
More to the point, how is it not reasonably foreseeable that if we have two treatments and don't know which one is better, then delaying our discovery will cost lives?
What if neither is "better"? What if the study shows the currently used one is better? (Shall we put down in the utilitarian scorecard an IRB plus for all lives not lost from the hypothesized alternative due to delayed treatment experimentation?) How can an IRB know in advance that this research is going to be a game changer - we ought to give them some slack; what's a few dead among "innovators".
And what shall we do with this conundrum: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
The aim of the IRB is not improvement in "results," whatever that might be imagined to include; the aim of the IRB is ethical research.
Were the Tuskegee experiments ethically justifiable? How about Nazi experiments? How about Soviet or Chinese experiments on prisoners?
How anybody can be a pure utilitarian after Swift's A Modest Proposal is remarkable to me. Of course, balancing risks and rewards is part of prudential decision-making, but doctors don't treat populations, they treat individual human beings.
Medicine is not manufacturing. Medical research involves people, not widgets. There is ethical research and there is unethical research. Sometimes figuring it out is hard and takes time, and certainly both kinds of errors will be made.
1) This is a different question than whether deaths due to research delays are foreseeable, but still interesting.
2) I don't disagree with the premise that as a society, we will sometimes find the costs higher than the benefits of accelerating research, but I'm not willing to concede that we happen to be at the ideal spot now.
If we decide that (a) it will cost some lives not to do experiments on condemned prisoners against their will but (b) that's the ethical decision we've made, then so be it.
That said, I agree with Scott and the people he's summarizing that we've pushed the needle too far towards slowing research, and that it's costing lives that we would prefer to save. I think the examples in Scott's article are good examples where the research should have happened faster.
Why do you say "a specific study"? I interpreted that as Scott's estimate for the combined effects of ALL studies that were impacted that year.
I meant that a particular person's death would be attributable to the delay of a particular study. Not that one study delay caused all deaths.
Another consideration is whether that medical treatment would have increased longevity by a relatively modest amount (say, 1 year - but that year is spent in various treatments at or near extreme old age and quality of life is very low), or would have "saved" someone's life in the more general sense that lay people would use, where that person is able to go on and live a meaningful life for years to come.
It's all well and good to know that a heart treatment was able to prevent a death on April 12, 2023, but could not prevent the same patient's death on May 17, 2023. In some cases that might be a meaningful difference to the patient, but I wouldn't fault the IRB for X preventable lives lost in that scenario.
QALYs (quality adjusted life-years) is a commonly used metric that attempts to deal with this problem.
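To make the arithmetic concrete, here is a minimal sketch in Python of what the metric does; the quality weights below are made-up numbers for illustration, not values from any real valuation study:

def qaly(years, quality_weight):
    # Quality-adjusted life-years: years lived multiplied by a 0-to-1 quality weight.
    return years * quality_weight

print(qaly(10, 0.3))  # 10 years at an assumed low quality weight -> 3.0 QALYs
print(qaly(5, 0.6))   # 5 years at an assumed moderate weight -> also 3.0 QALYs

Note that with these assumed weights the formula rates two very different lives as equivalent, which is exactly where the intuition problems come in.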
Yes, but I don't think most people would actually think about different interventions intuitively that way. For one thing, it's really really complicated to determine the number of QALYs between the following situations:
Someone living for 10 years with limited consciousness and bouts of moderate to severe pain, but lots of visitors - OR - Someone living for 5 years with no pain, but very limited mobility and no one ever comes to visit them.
Even if we can develop a mathematical formula to quantify those scenarios and say that one is "better" than the other, that doesn't necessarily correlate with an average person's intuitive feeling about different situations. Maybe the QALYs are technically positive but an individual reviewer would disagree.
Specifically to the point here, knowing that "6,000 people were saved" by an intervention doesn't even try to QALY those lives. If the average life after being "saved" was worse than death, then the intervention is net negative. Since we have no way to tell the quality of the lives being lived, including the length of time someone lives after being "saved," then we can't evaluate whether the IRB's intervention was negative at all, let alone by how much.
Sigh. Innumeracy is a problem but so is hypernumeracy.
Maybe someone will do a review of Ivan Illich, Medical Nemesis.
Clinical, social and cultural iatrogenesis is a thing. I'm not full Illich on these matters, but the imagination that there is some magic utilitarian formula that is soluble if only IRBs would get out of the way is bonkers to me.
After Medical Nemesis, if I were setting up a seminar: Deming's short article "On Probability As a Basis for Action" (1975); "Why Most Published Research Findings Are False," Ioannidis (2005); Ending Medical Reversal: Improving Outcomes, Saving Lives, Vinayak K. Prasad, MD, MPH, and Adam S. Cifu, MD (2019); The Norm Chronicles, Michael Blastland and David Spiegelhalter (2013); maybe Fooled by Randomness, N. N. Taleb; and the sections of the Catholic Catechism regarding medical ethics.
And then we might have an interesting discussion that isn't some undergrad BS session.
If I were in Scott's situation I would have avoided any involvement with the IRB in the first place, and instead secretly recorded the data and published the stats anonymously on the internet. Nobody can sue you for that because nobody has any damages. In the unlikely event that the hospital found out they could fire you, but "firing doctor for publishing statistics" is a bad look.
Wow!
But "firing doctor for illegally conducting experiments on patients and publishing confidential medical data on the internet" is a great look, and I'm sure they'd be able to sell it that way.
Ultimately it's "firing doctor for generally being a pain and creating controversy and work for everyone else", which is a firing offence pretty much everywhere.
In the case of Scott's experiment, (1) no confidential medical data would appear on the internet. There would be no need to identify individual patients. Scott's results would be in the form of, "out of X patients who were diagnosed bipolar on admission using the questionnaire Y% were eventually given an unqualified diagnosis of bipolar disorder, and of those NOT diagnosed bipolar using the questionnaire, Z% were eventually given a full diagnosis of bipolar. Therefore, the accuracy of the questionnaire is [high, low, nonexistent . . .]"
(2) I believe, though I'm not sure, that conducting a study like Scott's without IRB approval is not illegal, but is against the medical profession's code of conduct.
I actually don't see a thing wrong with publishing that data anonymously online, other than the fact that it will be much less useful if it's presented that way, because people will have no way to judge how much they can trust the results.
Why isn't the OHRP itself getting constantly sued seeking injunctions against its arbitrary, capricious, and extremely destructive behavior? Prohibiting doctors from publishing statistics about their standard practice is a blatant violation of their first amendment rights, and the current SCOTUS is very strong on first amendment issues.
In the meantime, unjust laws were made to be broken.
Quite a good point, that is worth a try.
Well, sovereign immunity. You cannot sue the government unless the government itself by statute says you can. Congress would need to pass a law waiving sovereign immunity for HHS.
No government agency is above the first amendment. HHS has already been sued many times:
https://www.oyez.org/cases/2021/20-1114
https://www.oyez.org/cases/2021/20-1312
https://www.oyez.org/cases/2011/11-393
Um, yes. Also the remainder of the Bill of Rights. Now all you need is a plausible theory as to how OHRP decisions violate the Bill of Rights, and a second paragraph that explains why the million or so plaintiff lawyers hungering after some magnificent payday haven't themselves thought of your theory.
Edit: incidentally, publishing information *about somebody else* is not a First Amendment right. That's why I can't snoop around, discover your credit card number, and publish it and cry "Ha ha! First Amendment bitchez!"
If sovereign immunity worked how you think it does, there would be no point in having a bill of rights.
I guess you're forgetting the Supremacy Clause? Anyway, I've already agreed you can sue the government for a violation of your civil rights, because that's in the Constitution, which supersedes any statute, and can be considered as establishing mechanisms by which The People say the government can be held to account.
But unless you have a good reason why some bureaucratic decision is an actual violation of an identifiable victim's rights, with actual harm, you're fucked, unless the government has waived its sovereign immunity rights. This is why it's proven tricky for people who dislike it to sue to stop the Biden Administration's student loan hokey pokey: it's hard to find an actual someone who has suffered some actual harm that violates their actual civil rights. Pointing at hypothetical harm to generic masses of people that has squat to do with their enumerated civil rights doesn't work.
If your study isn't doing anything outside the scope of what you would be allowed to do in a non-study (i.e., most observational studies in medicine), then any regulation of the study is basically purely a speech regulation, and therefore unconstitutional. Worth a shot putting that to the Supreme Court. You should only need IRB approval to do things to patients that you weren't already allowed to do outside of a study.
Not necessarily unconstitutional; I don't think the First Amendment has been ruled to allow people to violate confidentiality agreements or privacy regulations.
Confidentiality agreements are a matter of civil contract law, totally different from the government telling you that you can't say X. And patient privacy has basically nothing to do with publication of aggregate stats of the sort that are necessary for figuring out whether a treatment works.
It was part of a residency program requirement. He had to do it by the book.
I haven't finished reading but felt compelled to comment on this:
"the stricter IRB system in place since the
'90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing lifesaving studies."
No. It does NOT "cause" deaths. We can't go down this weird path of imprecision about what "causing" means.
I've been examining Ivan Illich, "Medical Nemesis" recently. Claiming that IRBs which stop research ostensibly CAUSE death strikes me as cultural iatrogenesis masquerading as a cure for clinical iatrogenesis.
Here's another one:
"Nobody knows how many people OHRP’s six month delay killed,"
The delay did not kill anyone. The poor compliance with known procedures by healthcare providers is what killed patients, i.e. clinical iatrogenesis!
It is conflicting: I'm drawn both to the need for more RCTs AND to the importance of recognizing the dangers pointed out by Illich of overmedicalizing the process of life, which includes death and illness.
Was it really poor compliance? They ran their study by an IRB, were approved, then the OHRP stepped in and said no. "Poor compliance" in this case is not getting every doctor, nurse, and patient to sign a consent form saying that nurses can remind doctors to follow a checklist.
If doctors need someone to remind them to wash their damn hands, there is a bigger problem than an IRB and jumping through form-signing hoops.
The problem is called being human. It's not a thing unique to doctors, there's a reason pilots also have extensive checklists to follow. It's not like they're foregoing items because they're lazy, it's because humans make mistakes, and using tools to help overcome mistakes is how we stop making mistakes. Disallowing study of these tools in favor of putting all the blame on the doctors might feel cathartic or whatever, but it won't help anything.
And like, none of this applies to the heart attack thing where no one knew the answer before doing the study.
Yes of course humans.
But do you really need "a study" to "test" the effectiveness of checklists? Do we need a study to test whether a parachute should be used when jumping out of a plane?
The IRB did NOT prevent the use of checklists, it prevented a study about using checklists!
And how many more medical facilities would be using checklists today if the IRB had allowed that riskless study to be published?
The claim isn't "The current system caused deaths by preventing checklists from being used". The claim is "The current system, due to its own ignorance and risk-aversion, prevented information about the effectiveness of checklists from being published and distributed to the medical community, thus indirectly causing deaths.".
Any medical facility can choose to use a checklist, yes. But heaven forbid they should be able to look at relevant research beforehand!
And not to put too fine a point on it, but yeah, "Doctors washing their hands" was actually pretty controversial for quite some time.
They don't prevent the use of aspirin to help with heart attacks either. But until we do a study, we don't actually know whether aspirin helps with heart attacks. It may help, or it may make them worse! It's easy to look at a study after it was done and proclaim the results obvious, but implementing new standards and procedures takes overhead and work, it's not something that can or should be done on a whim or a hunch.
And that's leaving aside all the things that can't, in fact, be done without an IRB, for example any new procedure or medication that will only be approved by the FDA after a large, expensive trial. (And before you say that's equally to blame on the FDA: while they can also be overrestrictive, there is value in actually having to test something before releasing it.)
ya, they nipped that problem in the bud, didn't they?
More examples of incorrect frame. "Might have been saved if" is not the same as "death was caused by".
No, to all of these ways of writing about it.
Re:ventilator study: "They did, but the delay was responsible for thousands of deaths"
"found that 60 people per year died from IRB-related delays in Australian cancer trials."
"Tens of thousands were probably killed by IRBs blocking human challenge trials for COVID vaccines."
I'm unclear on what definition of "cause" you are applying here.
If I take an ordinary healthy human, and forcibly stop their heart, would you say that I "did not cause" their death, because I only prevented something from happening (i.e. prevented the heart from beating)?
How about if I prevent them from leaving a room until they die of thirst?
Call it what you want - more people would almost certainly be alive today if IRBs rejected / delayed fewer studies.
Assuming you think people being alive instead of dead is a good thing, the fact that the IRBs didn't "cause" these deaths in precisely the same way that literally shooting them in the head "causes" death is kind of a pointless argument. Either way, the IRBs did things that prevented a significant number of people from being successfully kept not-dead, and that's pretty bad.
I’d also add that by your same logic, IRBs can never actually “save” a person, so their “lives saved” and “deaths caused” can still be weighed against each other in the way Scott is doing.
There was a famous doctors' strike in Israel, deaths went down by 20%.
The problem may not be restrictive IRBs, it may be what medicine has become. Iatrogenesis.
Then in that case you aren’t really arguing the topic at hand at all, but rather trying to shoehorn in an argument about something else entirely (whether the practice of medicine on net saves lives at all).
On that, it seems very difficult to imagine a world that solves iatrogenesis without, you know, doing research on medical best practices (like “how to make sure doctors don’t forget to wash their hands” or “gee this high stakes medical situation doesn’t have a universally agreed upon treatment protocol, maybe we should figure out which one actually works”).
IRBs are part of the sociology of medicine.
IRBs are an attempt to be a bulwark against social and cultural iatrogenesis.
Studies are an attempt to be a bulwark against clinical iatrogenesis.
But a study is only of a population; it is not treatment of a sick person.
I'm not sure that you can really consider these questions without a broader view of medicine and anthropology: people are not widgets but medicine cannot be voodoo so we need science. But the noetic (including techne) is insufficient for any problem involving humans, we must also consider the poetic and pneumatic.
This seems like a weird overly metaphysical nitpick.
Suppose a surgeon is operating on someone. In the process, they must clamp a blood vessel - this is completely safe for one minute, but if they leave it clamped more than one minute, the patient dies. They clamp it as usual, but I rush into the operating room and kill the surgeon and all the staff. The surgeon is unable to remove the clamp and the patient dies.
It sounds like you're insisting I have to say the surgeon caused the patient's death and I was only tangentially involved. This seems contrary to common usage, common sense, and communicating information clearly. I have never heard any philosopher or dictionary suggest this, so what exactly is your argument?
The surgeon did, unintentionally, cause the patient's death, because he clamped the blood vessel; you didn't, so you didn't cause the death. Whether you're *responsible* for the death is a different question.
Causation is a physical thing as opposed to a moral one. Causally, if the surgeon had to leave the room for some good reason, justifiably confident he'd be back in a minute, and you inadvertently detained him in some way (not knowing about the clamp) and he didn't get back in time, your causal position is the same; you're a but-for distal cause of the death (as is almost everything that's happened in the preceding light-cone of the death), and not the proximal cause.
Whether you're morally responsible for the death is different in both circumstances, but depends on the moral framework you're using (if it even cares about "moral responsibility" or anything analogous). In law, if you knew about the clamp (or ought to have known that the patient was likely to die for some reason if you killed the surgeon mid-surgery) you're guilty of murder in your scenario but not in mine, and otherwise guilty of manslaughter. For this reason, there aren't a lot of moral frameworks that care about causation without caring about intention.
What if I'm besieging Leningrad and (lots of) people there starve to death. Did I "cause" those deaths, even though I didn't directly physically interact with those people and their metabolism requiring food predates my siege of their city?
You're morally responsible for it, but you didn't physically cause it (especially as presumably you're not besieging Leningrad on your own, but sitting in a bunker somewhere telling a few people to tell a bunch of people to siege Leningrad).
I was thinking of the people who were being told to siege Leningrad, but that does enable me to turn this into a variation on a Yo Mama joke: This dictator is so fat, when he's around a besieged city, he's AROUND the besieged city.
Hahaha, I was thinking you'd have to run really really fast to block all the routes in and out.
For the individual besiegers, the causation is really weird - if any individual one of them doesn't participate, the siege still happens and the inhabitants starve, so even in terms of but-for causation it's not clear that any individual soldier has caused anyone to starve. They'd still be morally responsible from their joint intention to besiege though.
Surely you *do* physically cause it, with a few extra steps. You produced sound-waves/put ink down on paper in patterns which resonated into the neurons of some other apes until down the chain it caused some of them to take up guns and tanks and go sit around Leningrad.
Causation is physical and moral.
Everything has multiple causes. All fires are caused, among other things, by oxygen, but we rarely consider oxygen to be "the" cause of a fire. When we pick out something as "the" cause, we are usually looking for something that varies and something that we can control. All fires require oxygen, but oxygen is not a variable enough factor compared to dropped matches and inflammable materials.
Context matters as well. A coroner could find that the cause of Mr Smith's death was ingestion of arsenic, while the judge finds that it was Mrs Smith. It would be inappropriate to put the arsenic on trial and punish it, because it is not a moral agent, but it is a causal factor nonetheless.
If you are trying to find someone to blame for a problem, that is one thing, if you are trying to have less of it, that is another.
I don't think "moral causation" as a phrase/category adds anything to anyone's position; if Mrs Smith put the arsenic in Mr Smith's tea thinking it was sugar, or if she put it in because she wanted him dead, her position in the causal chain is the same. It's a peculiar use of the word cause that muddies the waters and either 1) doesn't pick out a different category from moral responsibility (or "fault," if you like), or 2) picks out a subset, defined purely by whether you can slot something neatly into the chain of physical causation.
Moral causation adds the ability to praise and blame people for their intentional actions. But maybe you feel the criminal justice system doesn't do anything.
Criminal law distinguishes between cause and intention, and as a general rule both are needed as separate elements of a criminal offence. The causation part is factual causation, though.
That isn’t how the word cause works.
Event A is the proximal cause of Event B if Event B immediately follows Event A and Event B is the direct consequence of Event A with no intervening circumstances.
Event A is a distal cause of Event B if, but for Event A, Event B wouldn't have occurred (and Event A isn't the proximal cause).
Yeah that is bad. Not an accurate way of thinking about it.
No. Come on, you've made up a nonsensical, nonanalogous situation. (Why would you ever make up a story where you are a killer? That is messed up. Fully intended as a kind, true, and necessary statement for your own good.)
Reasonable foreseeability is the concept, as opposed to hypothetical or tangential foreseeability. It is reasonably foreseeable that a lunatic rushing into an OR could also lead to the death of the patient even if the malevolence was only directed at healthcare providers.
Note that one cannot know in advance what the results of the proposed study will be unless it is so obvious (parachutes) that there was no need for a study or IRB in the first place.
The farrier who lost the nail did not really lose the war!
IRBs may well need reform but they don't need eliminating. The root problem is iatrogenesis. Medicine is making us healthier and sicker.
I fully support rigorous and ethical studies. But Illich is worth at least a consideration. Studies do not treat a patient; they study a population.
I'm no expert. But my own meditations have led me to view causality as "d/dx applied to logical conditionals" a la Pearl. In this framing, "reasonable foreseeability" boils down to whether the reference-class of a particular action reliably precedes bad-outcomes. Except "reasonable forseeability" makes reliable reference-classes seem qualitatively distinct from unreliable reference-classes (rather than a difference of degree) so that you can claim that distal causes "aren't true Scotsmen". If Scott were arguing that safety committees as a reference-class reliably lead to bad outcomes, then I would agree with you that Scott is wrong. But a colloquial reading suggests that Scott is discussing a particular committee in a particular state of the world.
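To spell out that "difference of degree" reading, here is a toy sketch in Python; the counts are invented purely for illustration, and the crude risk difference is a stand-in for a proper Pearl-style intervention analysis rather than a reproduction of it:

def risk_difference(bad_with, total_with, bad_without, total_without):
    # How much more often does the bad outcome follow this reference class of action
    # than it occurs without the action? A continuous score, not a yes/no verdict.
    return bad_with / total_with - bad_without / total_without

# Invented counts: 30 bad outcomes in 100 cases with the action, 10 in 100 without.
print(risk_difference(30, 100, 10, 100))  # 0.2 -- foreseeability as a matter of degree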
any objections?
> Note that one cannot know in advance what the results of the proposed study will be unless it is so obvious (parachutes) that there was no need for a study or IRB in the first place.
Scott isn't discussing the outcome of a particular study. He's discussing the outcomes in aggregate.
Outcomes in advance for ANY particular study or in aggregate for all or most studies aren't known.
Observational studies are generally pretty poor at telling us anything.
And RCTs depend on arms and endpoints.
You can't know something before you know it (excepting Socratic anamnesis).
Uh... I was expecting objections related to why you find the phrase "IRBs cause deaths" unreasonable.
> Outcomes in advance for ANY particular study or in aggregate for all or most studies aren't known.
Statistics is predicated on the idea that an aggregate is (almost certainly) a higher-res representation of the underlying distribution than any single datum, is it not? Scott is looking at the field of medicine as an aggregate and complaining that progress is too slow because the OHRP is far too skewed toward preventing false-positives rather than preventing false-negatives. Imagine your email provider sorts all your email (including the obviously not-spam) into the spam folder. "It's fine" you say, "since it's impossible for the provider to know in advance whether I'll judge it as truly spam."
> Observational studies are generally pretty poor at telling us anything.
> And RCTs depend on arms and endpoints.
I'm aware of this. I would have emphasized this by saying "interventions" if I had thought it relevant.
And if you're going to stand by the "we can never truly know if the OHRP is causing deaths" story, does that not entail that we can never truly be sure that the OHRP prevents harm on net? You can't have it both ways.
The IRB is to make sure studies are ethical.
Unethical means don't justify potential good ends.
"Y caused X" to me means ceteris paribus X would not have happened if Y had not happened. So both the clamping and shooting the surgeon are causes of the death of the patient, because if either one of those things hadn't happened the patient wouldn't have died.
Tens of thousands of people would have not-died if the OHRP hadn't stonewalled doctors trying to do research, therefore the OHRP is one of the causes of those deaths.
Stonewalled- meh.
You don't know in advance what the results of the research would have been.
What we can generally say is that the entire system of medicine produces deaths that would not have happened if there was no medicine. Iatrogenesis. It is also true that the entire system of medicine has cured disease and reduced certain cause-specific deaths.
Ceteris paribus there would be more total deaths if the entire system of medicine did not exist.
Ceteris paribus there would be fewer total deaths if the OHRP did not exist.
1. Everybody dies.
2. Probably, if we are precise. Remember, though, the famous doctors' strike which led to fewer deaths!
3. Re: the OHRP not existing; there is no possible way to know that. And the removal of ethics from medicine is a very bad idea. Should prisoners give up informed consent? Maybe that would "save" lives? No.
So how would you define, in a non-ambiguous way, the difference between killing and preventing the prevention of a death?
How about: killing is an action which leads to death where there is a reasonably foreseeable risk of death. We measure reasonable foreseeability at the time of the action.
We do studies because we do not know something. The prevention of death can be pretty complicated. Does drug A prevent a death from cancer? Well, maybe in half the cases where it is used. Well, what about the other half? Shrug, people still die. Does drug A also lead to death because of its toxicity? Well, yes, there is a reasonably foreseeable risk of death, but it is small (whatever that means). So preventing the prevention of death is a pretty complicated ball of wax. Don't take drug A if you are allergic to drug A. How do I know if I'm allergic? If you take drug A and die soon thereafter from a reaction, you were probably allergic.
IRBs can only be measured on the basis of whether they prevent unethical research. Maybe we have reached equipoise about whether IRBs (which are the de facto standard of care for research on humans) are successfully preventing unethical research, and we need to do studies. How to design such a study might be very difficult or impossible or maybe very easy, I don't know.
We have a situation where there is obvious iatrogenesis. It existed both before and after IRBs, but IRBs are not intended to aid that problem. IRBs are only to ensure that research on humans is ethical. Have there been unethical studies notwithstanding IRBs? I think the answer to that is yes.
I notice you helpfully defined killing, which is nice, but not prevention of prevention of death. I know this may come across as pedantic, but the reason I'd like to see that is that your definition (like most reasonable definitions trying to support your type of position) makes less-rigorous-than-at-first-apparent words like "action" do a lot of heavy lifting.
(I could also pick on what exactly is the meaning of "reasonable foreseeability" but that doesn't seem like a fundamental ingredient in this disagreement.)
So I guess the most fundamental equivalentish question here is: what is an action?
E.g., am I killing the starving child I could have fed? I am reasonably sure I could find and feed such a child!
Or, were I a doctor, would I be killing a patient I refused to treat for something very deadly and treatable if no other doctor were reasonably available?
What do you think the answers are?
I don't see how we can reasonably say that IRBs are killing people. Killing someone is not the same thing as preventing someone from reasonably foreseeably preventing a death.
Are you killing a child? No. Are you culpable for some kind of other ethical or moral failing? Maybe.
Someone gets into a fender bender with the only surgeon who can do a lifesaving procedure scheduled in 2 days. The surgeon, who was actually the negligent driver, can't do the surgery because of an injury from the fender bender, and the patient dies. It is not reasonably foreseeable that negligent driving would lead to the delay or cancellation of surgery. The surgeon might be responsible for the fender bender but is not responsible for the patient's demise. Even "but for" causation has legal and moral limitations.
Proximate v. actual cause - look up Palsgraf v. Long Island Railroad Co. The philosophy question here is secondary to the legal liability question.
What if we reframed the article, instead of using the word cause we wrote something like "in a world without the stricter IRB system tens of thousands of people would still be alive". Do you disagree with that statement, and if so is it the empirical claim you disagree with (i.e. you think those people would not be alive in a world without the stricter IRB system)? To me, Scott's argument is still very persuasive even if I accept your point that saying the IRB *caused* the deaths to happen is wrong/imprecise. A second question I have is, under your view does the IRB system *cause* any lives to be saved? Or should we just abandon using the word cause in this type of way completely.
I disagree with the statement only because it presupposes the outcome of studies (ostensibly delayed by zealous IRBs).
If we knew the outcome we wouldn't need a study and IRB would not be involved.
IRBs were invented after I was born, so I like to say that, like me, they are still a work in progress.
Medicine should become more scientific but medicine is not manufacturing.
Even manufacturing has a method to determine when a destructive test should be done. See Deming, Out of the Crisis, p. 418 ff. (1993 edition; originally 1982).
Utilitarianism alone cannot cure the problems of medical reversals. The problems with utilitarianism have been pretty clear since Swift's A Modest Proposal.
It is wishful thinking: if only medicine were more scientific (like quality manufacturing) (pesky IRBs), no one would die.
Unfortunately, people will always die. You cannot test or measure you way out of that dilemma. (For some, Easter is the long term solution to that problem.)
I'm not sure if I really disagree with you; I'm struggling to see where you disagree with the overall premise of the Book Review. I agree people will always die, and I generally agree that you can't just do a ceteris paribus style analysis and easily conclude "X more people would be alive." But none of that seems to contradict the hypothesis that the current IRB system is too strict, and unnecessarily delays/halts studies that could be enormously beneficial.
Is it too strict? How would we know?
Let's do a study.
But if we know the eventual result of the study, it seems like we can reason in just this way.
Consider the FDA delaying the approval of beta blockers for many years in the US, after they were in widespread use in Europe. At the time they did this, maybe it wasn't possible to know for certain if this would be a net good or net bad. But now, many decades later, we can know that it was a bad decision to delay beta blockers, one that must have resulted in more people dying of heart attacks than would have died in a parallel world where they'd been approved in the US a decade earlier.
I read it as just rhetorical hyperbole. A strenuous call to consider things unseen as well as seen, to use Bastiat's potent phrasing. I don't think it's meant to be taken literally, and if we insist everyone composing an argumentative essay (as opposed to drafting legislation, say) eschew rhetorical exaggeration, we turn into Vulcans and can only have sex every 7 years.
and even then only in missionary position.
"Cause" is a very imprecise word anyhow. If a man dies of a heart attack, all of the following are in some sense causes: Man's cell phone was lost so it took him a long time to contact someone. Man did not take his prescribed heart meds. Ambulance got caught in traffic jam. Man was human and everybody dies. Heart stopped beating. Lack of blood flow to brain caused irreparable damage. So saying that an IRB's stopping a certain piece of research from being done is a cause in one of the looser senses of the word, but it's not nonsense. Consider the man with the phone. Maybe it's true that if he had gotten to the hospital 20 mins earlier he would have lived, and that if his cell phone had not been lost he would have gotten there 20 mins earlier. It would not be absurd for his spouse to think, guiltily, that she knew he was bad about keeping track of his cell phone, and that if she had devised some better system for keeping it from getting lost that he would probably be alive today.
We could change the wording from "IRB caused deaths" to something else "IRB's refusal to approve experiment makes it responsible for the deaths" -- but what's your point exactly? Everybody agrees the IRB's preoccupation with never making a mistake in the direction of allowing too much has led to it making many mistakes in the direction of allowing too little, and that those mistakes have interfered with putting into practice some things that would have saved a lot of lives. Nobody thinks the IRB members went out with guns and shot all the people. Is your point purely semantic? I don't really get what your problem is with the phrase "IRB caused deaths."
"Everybody agrees the IRB's preoccupation with never making a mistake ..." No everybody does not agree.
Medicine is not manufacturing.
What is the problem? Saying IRBs cause deaths is not accurate. And it is a sketchy rhetorical trick. Once we agree that A causes a lot of death, then the next logical step may be to get rid of A.
If I said that your perseveration over this one issue is giving me and various other people a headache, do you think that's a dangerous rhetorical trick too? Like that we're on a slippery slope, and we're more than halfway to accusing you of physically tormenting those who disagree with you, and that if we're not called out on our use of "headache" here someone might actually try to bring assault charges against you?
Kind, necessary and true?
If you think my comment does not meet this criterion, feel free to report it. I actually think it passes. Here's why: My comment is an argument against your point that "saying IRBs cause deaths is not accurate. And it is a sketchy rhetorical trick. Once we agree that A causes a lot of death, then the next logical step may be to get rid of A." I'm saying, so does that principle apply in other situations -- the principle that if you speak in a loose way about something harming people, the world will take what you say literally and punish whoever you are complaining about as though they had *literally* done the harm?
Of course my comment also is a way of saying that it seems to me that you're nitpicking, and I find it irritating, and it looks from the comments like others do too. Would it be less offensive if I said your perseveration was irritating, rather than that it was giving me and other people a headache? Do you really think it's unkind and unfair to tell someone you find their remarks irritating? That's different from saying they're basically an irritating person, you know. And it leaves the listener free to say that the speaker's emotional reaction is a result of some flaw in them, or of their failing to grasp the listener's point.
"Irritating" I thought most of the people here were supposed to be rationalists. Why would so-called rationalist get irritated.
People have pushed and I responded. Normally, the moving party gets the last word. Right? The alleged irritation seems to stem from a desire by those opposing my position to have the last word.
Scott personally responded with a hypothetical in which he personally was rushing into an OR to attack healthcare personnel (a rather disturbing idea). To which I responded with a pretty understandable idea of reasonable foreseeability. Discussion over.
To say the IRBs have killed people or caused deaths is a very radical position. It is as radical as suggesting that the doctors who failed to sign the forms or make the adjustments required by the IRB caused the deaths (also not true).
IRBs keep medicine from becoming manufacturing. They are based in the recognition that people are not objects upon which experimentation can be done without complete and voluntary participation with informed consent.
Let's do a simple thought experiment.
It's the trolley problem, and I'm standing there and about to pull the lever to kill 1 person and save 5.
But then you jump in my way and restrain me, and the 5 die.
There was a clear and reliable causal chain of events in place that was going to end with 1 person dying.
Instead, you intervened in a way that was clearly and legibly going to interrupt that causal chain of events, and instead instituted a different causal chain of events that ends with 5 people dying.
I think it is totally fair to say you caused 4 extra deaths in this case.
You took an affirmative and intentional action, that action increased the number of deaths by 4, and this outcome of your actions was entirely predictable to everyone involved at all times.
If we don't call that 'causing', then I think the word loses all practical meaning and usefulness in these domains.
The lesson of the trolley problem is to stay away from trolleys.
Excellent essay. Since you’re interested in distinguished families, you might note that Ezekiel Emanuel is the brother of Rahm and Ari.
From a utilitarian point of view I wonder how many Ezekiels it takes to make up for one Rahm.
I've been out of touch; what's so bad about Rahm? As mayors of Chicago go, he doesn't seem to stand out much.
Well, his successors have done much to burnish his reputation post facto.
I suppose my statement was the equivalent of, "as demons of the 6th circle of Hell go, he's pretty normal".
I'm a Chicagoan born and raised, and resumed being a city resident some years ago after living in a close suburb for a while. Rahm turned out to be pretty generic as big-city mayors go and surprisingly meek about it.
That last part was unexpected and I still wonder about it. Going in he'd seemed likely to be either a bold "move fast and break things" mayor or a "pointlessly tick off pretty much everybody and have no particular base of support left" train wreck. And he definitely never stopped _talking_ about being the first thing, but...by his actions in office he was mostly straight out of central casting. (E.g. he talked big about standing up to the teachers' and police unions but in actual contract negotiations got absolutely rolled by both of them.)
Rahm won a second term only by being gifted with a terrible opponent, and in the end his deciding not to run for a third term was greeted with a collective shrug. My neighbors and I -- some of whom voted for Rahm and some didn't -- agree that we did not see _that_ outcome as a possibility with him.
Reminds me of The Governator, although some have claimed he was neutered (so to speak) by his wife's clan finding out about the housekeeper, and establishing a quid pro quo for silence until the end of his term.
Well, that's not so bad. It sounds like he may have bitten off more than he could chew, local-politics-wise.
Ezekiel Emanuel literally believes that life after 75 isn't worth living, and opposes all life-extension measures because he believes people of that age generally cannot do meaningful work. (I have seen him defend this, in person. This is not a straw man.) I do not know why we think he should be trusted as an ethical authority.
He doesn't actually say that. He says he is not going to use medicine on himself to extend his life beyond 75.
I think the relevant question is whether he's talking about the choices he plans to make for himself, or choices he wants to see imposed on everyone.
Is there any evidence that he wants to impose this on others? I have not seen that.
Am I understanding correctly that your argument is that Zeke Emanuel's opinions about what the good life is are in no way affecting his ethical advice?
Incorrect. That is not what he said at, e.g., his lecture at Adas Israel on Yom Kippur in 2021.
What good historical examples are there of systems this broken being radically redesigned or recreated, with great results afterwards? As far as I know, most of them occur during large power shifts or violent revolutions, but I'd love to see what types of strategies worked in the past aside from those.
China did manage to transform its basket case of an economy by allowing initially small doses of capitalism. But I guess the Mao-Deng transition could qualify as a large power shift. Probably either political or technological shifts of at least this magnitude are required for any consequential radical redesigns.
One positive example is that the USG set airline fares (and had to approve new routes) and I think also set trucking freight rates for several decades. Then, they stopped, and the world is a much better place for it. That's an instance where a destructive regulatory regime went away and things seem to have gone okay.
Well, we did give women and minorities the right to vote eventually, with fairly low bloodshed.
Ending slavery *did* take a huge war, though.
> "A few ethicists (including star bioethicist Ezekiel Emanuel) are starting to criticize the current system; maybe this could become some kind of trend."
Nir Eyal's guest essay 'Utilitarianism and Research Ethics' is a nice contribution in this vein:
https://www.utilitarianism.net/guest-essays/utilitarianism-and-research-ethics/
(AFAIK, Eyal is not himself a utilitarian, but he here sets out a number of potential commonsense reforms for research ethics to better serve human interests. He's the inaugural Henry Rutgers Professor of Bioethics at Rutgers University, and founded and directs Rutgers’s Center for Population-Level Bioethics.)
>the Willowbrook Hepatitis Experiment, where researchers gave mentally defective children hepatitis on purpose
I ran across this study while reviewing historical human challenge trials as a consultant for 1 Day Sooner. Having not previously known about it, I found it quite shocking. They certainly learned a lot about hepatitis, but they definitely mistreated the kids.
The story sounded crazy to me, too. It was about Hepatitis B if I got it right, which is usually transmitted by sexual acts, giving birth, and needle sharing. If people are bleeding all over the place regularly and the disinfection regime of that time and place wasn't sufficient, that may have been a reason for the high infection rates. But I couldn't help imagining orgies of inmates and staff. Hepatitis B is not harmless and has only been really treatable for some years. Back then it doesn't seem to have been regarded as a very serious issue.
Nope, it was Hepatitis A. Fecal-oral transmission.
I have to tell my consent form story. I was asked to join an ongoing, IRB approved study in order to get control samples of normal skin to compare to samples of melanomas that had already been collected by the primary investigator. The samples were to be 3mm in diameter taken from the edge of an open incision made at the time of another surgery (e.g. I make an incision to fix your hernia and before I close the incision I take a 3mm wide ellipse of extra skin at the edge). There is literally zero extra risk. You could not have told after the closure where the skin was taken. The consent form was 6 pages long (the consent form for the operation itself that could actually have risk was 1 page and included a consent for blood products). I had to read every page to the patient out loud (the IRB was worried that the patients might not be literate and I wasn’t allowed to ask them because that would risk harm by embarrassing them). They had to initial every page and sign at the end.
I attempted to enroll three patients. Every one stopped me at the first page and said they would be happy to sign but they refused to listen to the other 5 pages of boilerplate. The only actual risk of the study seemed to be pissing off the subjects with the consent process itself. I quit after my first clinic day.
I never did any prospective research again as chart review and database dredging was much simpler.
You will also find that many institutions are dodging the IRB by labeling small clinical trials as “Performance Improvement Projects” which are mandated by CMS and the ACGME. They are publishing the crappy results in throwaway or in-house journals to get “pubs” on resident CVs.
We could get reform pretty quick if some adventurous Federal appeals court were to order that *all* consent forms had to follow the same standards. So e.g. before you can click through your Facebook/Google account creation or Windows install license agreement, a lawyer from FB, GOOG, or MSFT has to call you up and personally read every page to you, slowly, and you have to DocuSign each page as he does. Better set aside the whole afternoon if you're buying a car or house ha ha.
I love this!
I am sorry for your/our loss.
The broader question of "why did all this administrative bullshit massively explode in the 90s and cripple society" seems underexplored, presumably because the administrators won't let you explore it.
Wasn't the whole point of the 90s the triumph of neoliberalism and deregulation? How come a less regulated economy goes hand in hand with a more regulated research sector?
And, as Matthew Yglesias has pointed out, more land use regulation as well:
https://www.slowboring.com/p/yimbyism-can-liberate-us-from-anti
Sort of a cyclical / two-sides-of-the-same-coin answer:
Without the excuse of 'we were following all of the very strict and explicit regulations, so the bad thing that happened was a freak accident and not our fault' to rely on, companies had to take safety and caution and liability limitation and PR management into their own hands in a much more serious way.
And without the confidence in very strict and explicit regulations to limit the bad things companies might do, and without democratically-elected regulators as a means to bring complaints and effect change, we became much more focused on seeking remedy for corporate malfeasance by suing companies into oblivion and destroying them in the court of public opinion.
Basically, government actually *can* do useful things, as it turns out.
One of the useful things it can do is be a third party to a dispute between two people or entities, such as 'corporations' and 'citizens', and use its power to legibly and credibly ensure cooperation by explicitly specifying what will be considered defection and then punishing it harshly. This actually allows the two parties, which might otherwise be in conflict, to trust each other much more and cooperate much better, because their incentives have been shifted by a third party to make defection more costly.
Without government playing that role, you can fall back into a bad equilibrium of distrust and warring, which in this case might look like a wary populace ready to sue and decry at the slightest excuse, and paranoid corporations going overboard on caution and PR to shield themselves from that.
Alternatively, the deregulation didn't go far enough, as it didn't touch tort law. Based on the US tort cases I hear of, IMO tort liability should be significantly curtailed: a defendant should only be held responsible if it either caused the harm to the plaintiff directly, or intentionally, or if it created a situation far outside the range of possibilities a reasonable person should expect, or if it breached a regulation. Also, damages awards in America are often higher than I'd consider reasonable by an order of magnitude (occasionally, for emotional harm, by several orders of magnitude).
More importantly, any tort liability should be waivable in contract.
---
What do you mean by cooperation and defection? As far as I understand, this terminology comes from the prisoner dilemma, and it applies to game theoretic equivalents of the prisoner dilemma or the tragedy of commons. But I don't see how that applies here. If an entity puts me at risk in a way I consent to, I don't see how that's defection, yet regulation often bans it.
It seems like the answer is increased litigation, no?
In the examples cited above there isn't any litigation mentioned as preceding the rise in IRB oversight. If litigation was the response to bad studies, instead of trying to prevent them from happening through IRB, then studies like Scott's with no chance of harm wouldn't have been prevented, but people harmed by studies would have recourse.
The economy is not less regulated at all. There has been no overall reduction in regulation at any point since WW2 in the US. You may be able to point to specific industries (like airlines); however, usually that's a specific type of regulation that is removed while others spring up in its place. Different regulation does not mean deregulation.
That overall statement is accurate if the measures used are things like total number of rules issued, total numbers of pages of rules, etc. And sometimes those sorts of crude measuring sticks also line up with a qualitative judgement; I would nominate for instance the federal tax code as one such example.
In other examples though, the sheer bulk of published regulation doesn't change the fact that at its heart a given sector is far less regulated than when I was a kid. Airlines, telephony, consumer banking, trucking, freight rail, some other things. E.g. there are far more pages of federal rules related to air travel now than in the 1970s, but no sensible person would dispute that the 1970s airline industry was drastically more regulated -- in its essentials -- than it is today.
Maybe there's 2 things here - the overall degree of external control, and the degree of bureaucratic micromanagement. For an example of one without the other, how about a country with a dictator who issues verbal orders and will shoot anyone who doesn't comply or who doesn't perform well. If he tells his local airline to fly particular routes, they're very much under his control, but at the same time this isn't what we'd call "regulation".
I'm surprised doctors have only wished death upon the IRB. The thousands of lives lost due to study delays vs. the lives of a few IRB administrators sounds like a very easy version of the trolley problem.
I doubt murdering them would actually help. The default answer is "no", so if they're all dead then every study gets rejected.
I imagine you're joking, but in case any reader thinks otherwise, I'd like to point out that the administrators would just be replaced, you'd harm your cause by associating it with murder, and you'd harm civilized society in general by making it a less generally-law-abiding place.
(Real life almost never presents anything as simple as a trolley problem.)
I'm not *disagreeing* with you, but these sound very similar to reasons why, 200 years ago in the American South, killing slaveholders would be a bad idea.
I think, as a lone individual, it would have been? Putting aside arguments about whether a peaceful end to slavery was possible for the USA, a lopsided civil war is a very different prospect than an attempted slave revolt (which were almost invariably unsuccessful and resulted in truly horrific reprisals). An abolitionist in early 19th century USA is much better served politically campaigning in the North than trying to be a vigilante in the South
Agreed. Killing a few random slaveowners strikes me as a pretty bad strategy for an abolitionist.
So, like, as a matter of rational utilitarian public policy, I agree.
However, there's also a part of me that says, fuck that, any survivors are owed a blood debt, and unless sufficient recompense is made, they've got a personal right to vengeance that supersedes any human-made law. And as of yet, no amount of rational argument has made a dent in this part of me; it's like trying to talk myself into believing gravity doesn't exist. (But who knows, maybe one day I'll throw myself at the ground and miss.)
Yes, many people have such a part of them, and it's the primary reason we've never managed to stop killing each other.
I mean, I don't like it, and in my case I'm pretty sure most of it is from PTSD, and I'm working on that. So far that's therapy, medication, drugs, prayer, and meditation, but not much progress over too long a time. :-/
But for the sake of discussion, I'll suggest that the problem is largely psychopaths and assholes, not their victims. And we shouldn't blame the tit-for-tat-style defensive mechanism that keeps said psychopaths and assholes from exploiting otherwise defenseless people.
Aren’t pacifist movements usually bolstered by the presence of a radical violent parallel movement? It’s not necessarily an either or
I'm not sure there's any contradiction between those two things? It can be the case that you have a personal right to vengeance, and also that your particular plan for exacting vengeance is going to end horribly for everyone including you. Moral rights do not come with physical capabilities attached, nor do they grant immunity from consequences.
I wouldn't say that there's an inherent contradiction. But roughly speaking, human societies tend to have rules against doing things that end horribly, like blood feuds. And this seems to be mostly tolerable for most people, if the society provides an alternative way of dealing with the root cause. But what happens when something slips through the cracks, and society bans the older remedies without offering anything in their place?
Also: "John Brown's body lies a-mould'rin' in the grave..."
While I agree that assassination is far from an ideal solution to the problem, an administrator who got the job because their predecessor was forcibly removed - as part of a highly publicized complaint about a proposal being rejected - would surely take that unlikely-but-catastrophic outcome into account during their own day-to-day assessments of risk. The question is how to make such an incentive similarly effective while minimizing damage to the wider rule of law.
That incentive is INHERENTLY a harm. We do not generally want public servants changing their decisions out of fear that one crazy zealot will murder them if they don't; that would lead to worse outcomes more often than it leads to better ones, by giving every lone crazy a veto.
(Imagine that same administrator changing decisions out of fear of anti-vaxxers, or the "cell phones cause cancer" people.)
In the current system they're changing their decisions out of fear that one crazy zealot will get them fired, but not personally killed, which is still a serious problem. I wouldn't be particularly surprised if cell phones actually did somehow cause at least one case of life-threatening cancer every few years, worldwide, but, yeah, that's not much of a peg to hang policy decisions on - and it's about the same level of excess caution being applied to medical research. Gotta re-balance those scales somehow.
How does this work in other countries? This review describes a peculiar sequence of events in the US. Not every country had a Tuskegee, a Hans Jonas, an asthma experiment death, or a lack of bigger issues to worry about. Yet the US can't be that much of an outlier either, otherwise this would be a story of how all medical research has left the US. (At least fundamental/early stage research and post-approval research; drug companies may need to do some trials in the US to get approved in its lucrative market.)
Did other countries independently take a similar path? Did they copy the US? Did they have much stricter laws on human experiments to begin with?
It did mention the UK having an easier process.
The USA is the biggest and richest drug market, and the FDA refuses to accept the results of trials not done in the USA, so a lot of research has to be done there. The early-stage stuff isn't done on humans anyway, so it's possible the IRBs for e.g. mouse studies aren't especially onerous. Or maybe the inertia of being a very rich country is enough to stop people moving: at an individual level, moving overseas means a massive pay cut for any academic, since the USA pays better than almost anywhere else.
This makes me wonder how much of an improvement to the world would be available by simply setting up reciprocal approval between FDA and the UK, EU, Canadian, and other first-world medical regulatory bodies.
>It’s not as bad as it sounds
Only in hindsight, because they got lucky. They didn't know that the children would be asymptomatic--the experiment was to prove it. What if they had been wrong?
It's not like hepatitis has no treatments, and they knew the kids were infected - I don't know about hep specifically but almost all diseases are much easier to treat if the diagnosis is 100% certain and known early in disease progression.
For a longer look at the legal issues:
https://virginialawreview.org/wp-content/uploads/2022/04/Paul_Book.pdf
Yuval Noah Harari, in something of a throwaway line, points out that the UN Universal Declaration of Human Rights is essentially the closest thing we have to a world constitution, and the most fundamental of these rights is the right to life.
I raise this because I'd love to see how far a lawsuit against this IRB nonsense could go based on Human Rights and general wokeness; the essential argument being that this "oversight", far from being costless and principled, is in fact denying the human rights of uncountable numbers of people, both alive and not yet born, by preventing, for no good reason, the accumulation of useful medical knowledge...
Too bloodless. Slap a few of them in the face.
Given the post-WWII circumstances of the creation of UN and its Declaration, I'm pretty skeptical that you would succeed in leveraging its principles into "we have a right to medical experimentation on human beings free of constraints!" OTOH, some time has indeed passed.
A couple of weeks ago I suggested (partly in jest) to some physicians who think about these kinds of issues that maybe a study whose participants are doctors' own families, parents, and grandparents would be useful to put some skin in the game.
That however would not really be a "randomized" study and would introduce confounding problems.
Taking the campaign for human challenge trials for Covid vaccines as an example, there was no shortage of volunteers. Many, many doctors historically experimented on themselves. Or are you intending to draw on the emotive difference between experimenting on oneself and experimenting on one's family?
P.S. I would still expect the vast majority to bite the bullet with family too - better to roll the dice and maybe find a cure for whatever disease the test is on than to suffer without treatment because of negligible risks
Well that is arguably what happens when parents give children a vaccine under emergency authorization.
To channel Illich: why would you give your healthy child a drug that might under unknown circumstances make them sick?
These are not easily answered questions.
It is one thing to sacrifice yourself it is quite another to potentially sacrifice others.
Perhaps we should in addition to Illich explore Rene Girard. How pagan are we still?
Or what on earth was Abraham thinking? Here's to finding a ram stuck in a bush.
The issue is generally not "sacrificing others against their will", it is ALLOWING others who are VOLUNTEERING to make the choice to sacrifice themselves.
The most ridiculous version of this comes in the whole Henrietta Lacks nonsense, where we are supposed to be upset not that a person was experimented on, but simply that some tiny part of them was used for experiments.
There seems to be a truly incomprehensible gap here between people like me who look at this and say "yeah, so what?" and people who seem to think it represents a second holocaust.
Henrietta Lacks - she did not volunteer her cells. And money was made off of them. Somewhere between "so what" and "holocaust" is the right answer. Both of those reactions would be outliers.
Someone made money off something I wasn't using and couldn't have made money off myself?
Yeah, that's VERY MUCH in the "so what" category.
Sure she did. She voluntarily underwent the cancer biopsy during which the cells were removed. What you probably mean is that she didn't volunteer the cells *as a culture line*. (Parenthetically, it would've been a bit tricky to secure the permission directly, on account of she had already died by the time the attempt was made to establish the cell line, but one assumes her heirs could've been asked, assuming her heirs were entitled as a matter of principle to inherit her cancer along with the remainder of her earthly belongings.)
As for the money...I dunno, there's a long line of finders-keepers in common law tradition that dims hope here. If I throw away a lottery ticket that turns out to be worth $100 million, I will have no luck at all suing the garbageman who picked it up and claimed the prize. Basically, what's yours is yours only to the extent you exhibit a clear intention to keep it. Perhaps if Henrietta had exhibited a desire to keep her cancer cells, the case would have a leg to stand on, but unfortunately one would guess that keeping her cancer was about the furthest thing from her mind.
Which is to say, if we hop in a time machine and give Henrietta a 6-page consent form in which we explicitly say we're going to attempt to create an immortal cell line from her tumor cells for all kinds of exciting research purposes that may pay off in 50-100 years, it would be pretty surprising to me if she said anything other than sure - fine - who gives a fuck? just get this shit out of me.
Negligible risks in general for studies to cure? I would disagree with that as a general statement.
Or are you just making a Covid statement - in which case it certainly depends on your age.
As a "general" matter you can't ethically volunteer your spouse, parents, or grandparents to take risks: you can only volunteer yourself.
Curiously, you can volunteer your children which should raise some problematic concerns.
There seems to be a basic, incomprehensible, gap between those who understand the concept of willingness to sacrifice oneself (or the things one loves) and those who live in a world of Hume's finger scratching.
This gap is explored in great detail as one of the major themes of Terra Ignota, but in modern discourse both sides seem not to comprehend that the other side even exists as a set of real humans out there.
.............................
As I've said before, I'd love to see Terra Ignota turned into a seriously produced TV series; it could do wonders to improve public consciousness; but it's probably too complicated and too cross-cutting in its heroes and villains? (On the other hand, I could have said that about Game of Thrones, so...)
Of course the lighter version of this is seen in Joss Whedon's canon; both Buffy sacrificing herself (twice) to save everyone AND the slacker losers in The Cabin in the Woods refusing to sacrifice themselves -- destruction of the whole world rather than scratching of my finger, indeed.
you could have just told us to watch Finding Nemo lol
Huh?
I've noticed the bureaucrat hegemony as well on a smaller scale. I believe it's the sign of a mature, post-peak society. Everything is pretty much built out, so risk/reward favors rent-seeking petty overlords.
It's a sign that the low-hanging fruit has mostly been picked. "People who want to get stuff done, go somewhere else!" I mean, your examples are pretty piddly in the grand scheme. 100 years ago the study would have been identifying a deadly disease, and there would be no IRB (or grants or funding for that matter). Today it's settings on a ventilator? Small potatoes, even if the surface area is large.
The two historical examples that come to mind are late imperial China and (post-?)Soviet Russia. Not a good omen?
This is my working hypothesis as well. Confidence level is on the low end, however.
I also remember one of Paul Graham's earlier essays saying something like "startups move fast & break things because they have nothing to lose. But as the company matures, red tape creeps in. Because broken objects are alarming, while the costs of red tape manifest as invisible externalities." I don't remember which one it was, though.
But look at how many of the MFaBT crowd are currently facing jail!
It's a pretty small fraction of people involved in startups, but a high fraction of people involved in startups that involved defrauding investors, stealing money, engaging in money laundering, etc.
If you are a wealthy society, you might as well spend the money on nice things. Regulation, done right at least, is one of those nice things. It means you're not sending children up chimneys or making people work 18-hour days in factories.
Sure, but as with all things, there are diminishing returns. The first step in the safety regs where you require guardrails over long drops, forbid exposed high voltage wires in reach, etc., gives you large benefits. The five hundredth step where you have covered every flat surface at the worksite with warning signs and the employees spend 50% of their time at safety trainings gives you very few additional benefits.
I think this gets at one of the fundamental problems. If the ideology is to press for more safety, it never stops. We need an ideology that presses for the optimal amount of safety.
That isn't the "all regulation bad" ideology, either.
I don't think it's the sign of an *actual* post-peak society, any more than similar malaise and conservatism in the late Roman Empire was a sign that the Mediterranean littoral had achieved everything it was possible for men to achieve by AD 300. It definitely seems like a sign of a lack of self-confidence, a loss of mojo, a belief that we, at least, are severely limited in what we can achieve further, so we might as well turn our attention to seeing that the pie is cut up exquisitely fairly, and everybody gets at least a participation prize.
Why can’t you sue an IRB for killing people by blocking research? You can clearly at least sometimes activist them into changing course. But their behavior seems sue-worthy in these examples, and completely irresponsible. We have negligence laws in other areas. Is there an airtight legal case that they’re beyond suing, or is it just that nobody’s tried?
If they're created by statute, then sovereign immunity would prevent you. If they're acting in accordance with statutory or regulatory requirements, then they can't be in breach of a duty of care by failing to undertake an unlawful act. Failing both of those, it would be too remote.
One major problem is that it is often hard to identify the specific victims -- the people who would have been helped by the research, if it had been carried out. Unless you can find identifiable victims of the IRB's refusal, you don't have anyone who has standing to sue.
Even if you get over that hurdle, to find negligence by the IRB, you would have to show that the IRB had a duty of care towards the potential beneficiaries of the research. Most courts would be reluctant to find such a duty without an explicit law passed by Congress (or a state legislature) creating one. After all, you have no right to demand that the researcher do the research in the first place. If you aren't entitled to the research at all, then it doesn't matter whether the researcher decided not to do it, or the IRB vetoed it.
This quote:
“He was uncertain that people could ever truly consent to studies; there was too much they didn’t understand, and you could never prove the consent wasn’t forced.”
Makes me think that we’ve finally found a real live member of the BETA-MEALR party:
https://slatestarcodex.com/2013/08/25/fake-consensualism/
"some kind of lawyer-adminstrator-journalist-academic-regulator axis"
Yup! Which is why my view of proposed AI regulation is (as I wrote in the last open thread, in https://astralcodexten.substack.com/p/open-thread-271/comment/14409338 ):
Oh shit. For an example of recent regulatory action see https://astralcodexten.substack.com/p/the-government-is-making-telemedicine - and this is a _favorable_ case. The government has a century of experience in regulating medicine.
It isn't wholly impossible for the government to eventually settle on a fairly sane set of regulations. Traffic regulations work reasonably sanely. But this isn't the way to bet, particularly in a new area of technology.
Yes, AI is potentially dangerous. While I think Eliezer Yudkowsky probably overestimates the risks, I personally guess that humanity probably has less than 50/50 odds of surviving it. Nonetheless, I would rather take my chances with whatever OpenAI and its peers come up with rather than see an analog to the telemedicine regulation fiasco - failing to solve the problem it purports to address, and making the overall situation pointlessly worse - in AI.
The claim in this essay (“Repeal Title IX,” https://www.firstthings.com/article/2023/01/repeal-title-ix) is that Title IX bureaucracy followed the exact same crackdown-> defensive bureaucratisation as Scott describes below for IRBs. (I see more upside for Title IX than the author, but I find her overall analysis of the defensive dynamics compelling).
“The surviving institutions were traumatized. They resolved to never again do anything even slightly wrong, not commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didn’t trust IRB members - the eminent doctors and clergymen doing this as a part time job - to follow all of the regulations, sub-regulations, implications of regulations, and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves, had no particular interest in research, and their entire career track had been created ex nihilo to make sure nobody got sued.”
It's almost as if our current practices of science impede the practice of science.
This is yet another example of how attempts to reduce risk in fact serve to reduce variance, and this turns out to be net negative. True of a broad range of policies justified in the name of safety.
Most of the comments diving into the details of IRBs are missing the point. The dynamics on display here have little to do with IRBs in particular. They're common to most institutions set up to prevent risk by requiring prior permission.
Strong link problems vs weak link problems.
https://twitter.com/a_m_mastroianni/status/1645851495974281218
That's an interesting dichotomy. I think it could use a different name, though...
Interesting framing, but it rests on an unproven assumption: that you can place a point on that x axis of quality (i) accurately and (ii) quickly.
If this assumption is true, I agree with everything. Unfortunately, the assumption is very likely false.
If it is false, the dichotomy is not useful. Because if you approach the problem as a strong link, you're basically maximizing the number of points you get. If you do not place them accurately and/or evaluating each point takes a lot of time/effort/expertise, you end up with a worse outcome overall _even if_ you have generated a lot of excellent points.
To use the same example, if we have tons of papers published and we suck at knowing which ones are good and which ones are bad, the good ones will not emerge to the top and the bad ones will not fade into oblivion. So even if we get 10 revolutionary papers a year, we would never know and go on with the status quo as if they never existed.
Ironically, this is _exactly_ how scientific publishing is working right now. The author is lamenting that we're approaching science as a weak-link problem, but the publish-or-perish paper mill everybody is high on treats science as exactly a strong-link problem, maximizing output and taking crapshoots at everything. The reaction the author is lamenting is a way to try and figure out how to better evaluate the quality of the outputs.
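To make that concrete, here's a minimal toy simulation (my own illustrative sketch in Python, not anything from the linked thread): each paper gets a true quality, an evaluator only sees that quality plus noise, and we ask how good the "top" papers it selects actually are. The point is that a strong-link strategy of maximizing output only pays off if evaluation noise is low.

```python
import random

def avg_true_quality_of_selected(n_papers, eval_noise, top_k=10, trials=100):
    """Average true quality of the top_k papers picked by a noisy evaluator."""
    total = 0.0
    for _ in range(trials):
        qualities = [random.gauss(0, 1) for _ in range(n_papers)]
        # the evaluator only sees true quality plus evaluation noise
        scored = sorted(((q + random.gauss(0, eval_noise), q) for q in qualities),
                        reverse=True)
        total += sum(q for _, q in scored[:top_k]) / top_k
    return total / trials

for noise in (0.0, 1.0, 5.0):
    print(f"evaluation noise {noise}: true quality of selected papers ~ "
          f"{avg_true_quality_of_selected(5000, noise):.2f}")
```

With accurate evaluation, the selected papers sit several standard deviations above the mean; with very noisy evaluation they end up barely better than average, so flooding the field with more papers mostly adds haystack rather than needles. Which is the commenter's point: the strong-link framing assumes an evaluation capability we may not actually have.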
Ugh, what is wrong with our species? Maybe we are ungovernable. This fungus of dysfunctionality sprouts in any crevice, and there are always, always crevices.
Rot accumulates:
https://www.overcomingbias.com/p/what-makes-stuff-rothtml
The way around the rot that accumulates within an organism's lifetime is the creation of a new organism without all that rot. The cultural group selection discussed in Henrich's "The Secret of Our Success" replaces groups that have rotted with ones still thriving. However, that kind of selection isn't really taking place in the modern day. We have deliberately made lots of barriers to one of the biggest mechanisms of such group selection (war), and things like nukes also make it much more of a hassle to overthrow an obviously dysfunctional government like North Korea.
"Maybe we are ungovernable. "
Would that we were.
What's wrong with being ungovernable? I don't intend to be governable. I'm not a cog in a machine, still less a program on a CPU, and I don't want to be either.
I'm pretty prickly myself, but surely you don't really believe in zero government.
How did we go from not everything can be achieved by government to zero government? In any event, I'm just objecting to what I saw as the implication that "governable" is an appellation a free man would find complimentary.
I mean, I feel like bellowing "This! Is! Sparta!" but lack the abs to pull it off, alas.
Here’s some deeper context, beyond the IRB setting specifically:
1. Loss aversion
Medicine: First DO no harm.
Law: You may step over a drunk in the gutter with impunity. But you come to his aid at your peril.
People are more upset at losing a $20 bill than they would be to learn that they had walked past a $100 bill.
2. Bambi's mom and Stalin
We can, with complete indifference, watch vast tracts of wild lands disappear, but we shed tears over the cute orphaned fawn. "One death is a tragedy. A million deaths is a statistic."
3. Everything happens for a reason
No bad outcome without a bad actor. Someone is to blame, and must be punished. To carry out this principle, new duties continually are being created by statute or judicial decision.
4. Living in the moment
Just as we find deferred gratification difficult, and discount future positive outcomes, we also underweight future negative outcomes.
5. Rules based order
Life is chaotic and full of lurking dangers, and must be constrained, so we have rules. But life is also full of paradox, randomness, and edge cases that must be handled case by case until some underlying regulatory principle can be discerned. The grown hall monitors who administer the rules know that exercising judgment is far more likely to get them canned than not, so they will only do it when the new rule is almost in place. As a sweetener they usually ask for an expanded scope of authority and a new cadre of courtiers to support the enhanced dignity of the bureau.
6. Petty tyrannies
Hall monitors grow up but never lose the taste for pettifogging displays of social dominance.
yeah okay, but this stuff has been true for thousands of years - there's still an interesting question of why regulatory barriers appear to have been increasing since some time in the 90s
The 90s followed on the deregulation of the late 70s and the 80s
Mostly the late 70s. Here's a list of the things that the Carter Administration deregulated with the support of the Democratic-majority Congress:
air travel (signed 1978)
trucking (signed 1980)
freight rail (signed 1980)
consumer banking (signed 1980)
long-distance phone service (signed 1980)
Carter also continued and amplified the Nixon Administration's legal fight to deregulate telephony more generally, and issued an executive order requiring federal agencies to perform and to publish specific analyses of the costs and benefits of proposed new rules, and instigated the first-ever federal law requiring agencies to consider alternatives before issuing new regulations.
Then here's the list of things that the Reagan Administration deregulated:
Interstate buses (1982)
Home telephone services (1984)
Very interesting. How would you say these deregulations came about? What were the factors at play?
Carter was strongly anti-monopoly and viewed deregulating those industries as an economically-progressive reform. The left wing of his own party mostly reacted in the knee-jerk manner, but he was able to find enough Democratic centrists to combine with GOP votes to get those measures passed and then he proudly signed them into law.
There was almost no other policy area, domestic or foreign, in which Carter possessed and successfully carried out such a clear and coherent strategy. Also he became obsessed with and put most of his personal energy into what became the Camp David Accords. Meanwhile his skills at and frankly interest in haggling with Congressional leaders were so limited that by 1980 he'd basically painted himself into a box politically.
He and his close advisors have never stopped believing that, had the Iranian situation not blown up and those embassy hostages not been taken [an outcome which Carter's own fumbling and bumbling helped to enable, just to be clear], Carter would have run for and won a second term largely on the strength of those economic reforms. As you can probably tell, I disagree -- all things considered, pretty much any GOP nominee in 1980 would have defeated him.
(In full disclosure my mother was appointed to a key federal position by President Carter and never had anything but the highest regard for him. She's gone now but just for the record, opinions expressed here are no one's but mine etc.)
So essentially a leader who almost incidentally believed in and carried out some reforms? No particular movement/lobby/broader general belief that we could point to? How did he tap the right expertise? For example, it could be argued that in the US the left is pretty anti-monopoly, but they seem to be very ham-handed in how they're going about it.
Also, were all the industries you listed monopolies, and hence coming under Carter's deregulation scanner? Because my limited understanding, and mostly from the airline deregulation, is that prices were deregulated.
How do you imagine regulating AI is going to play out?
I think there will not be much regulation at all, because (1) we do very little regulation of tech companies now; (2) there is a great deal of money to be made out of GPT4 and the like, and so lots of interests will apply pressure in the direction of not regulating; (3) you have to be fairly sharp and well-informed about tech to understand the issues, and most in power are not.
I don't understand why we need regulation of research ethics at all. As long as the researchers aren't doing something that is illegal for a "civilian" to do on their own, why do we need an IRB to monitor them? All of the examples of bad research you give here are very clearly torts, and any competent lawyer could easily extract massive settlements from those doctors and their institutions nowadays. Fear of lawsuits on its own could deter the majority of truly harmful research.
If you're giving someone an experimental medicine or otherwise doing research on an activity that is potentially harmful, all that should be required is a full explanation of the risks, what the doctors know and don't know about the drug, etc. Consent by a competent adult is all that any reasonable system of research should require (and I'd add that consent should only be necessary where there's a risk of a meaningful harm--if it wouldn't be illegal for you to give people surveys as a civilian, then you shouldn't need to get consent to collect them as a researcher, for example).
I think part of the problem with that is that fear of lawsuits would also deter a lot of non-harmful research - the value of an IRB from this perspective is that it gives you a pretty good shield against frivolous claims about your research being negligently harmful.
Yeah, there's no reason to think courts are going to do a good job distinguishing between reasonable and unreasonable risks in medical research, particularly with dueling credentialed experts and a sympathetic victim.
I think Gbdub has the right answer: universities and hospitals are classic "deep pockets" defendants, and if they didn't go to extreme lengths to be able to show hostile[1] juries "yeah there was TOTALLY informed consent here, and we absolutely bent over backwards to be sure there was no foreseeable harm that we didn't explain clearly, and all these experts (who are willing to testify in our defense) said it was ethical" then their in-house legal staff would have chronic insomnia and many ulcers.
--------
[1] Because, as demonstrated in some previous thread, your ordinary person tends to think that (1) Big Faceless Corporation is more likely to be evil than mistaken, compared to an individual person, and (2) if BFC has to fork out $50 million no actual human beings will be harmed, they'll just declare a slightly smaller dividend which will only piss off Scrooge McDuck because his swimming pool full of gold will be 1" shallower.
We don't let normal people cut other people open with knives, even if the cut open person signed a consent form first.
Doctors get to do a lot of things that normal people don't as part of their licensed position, and that licensing also comes with restrictions on those abilities.
Lasagna and Epstein is a legendary research duo.
Also when you brought up NIMBYism, I imagined the IRB wearing joker makeup and saying "You see I'm not a monster, I'm just ahead of the curve"
I wish some lawyer would use that study showing that short consent forms work better to argue that anyone demanding a long consent form is negligently endangering the subjects.
There's an obvious solution. Create a meta-regulatory-agency which requires other regulatory agencies to fill out enormous amounts of paperwork proving that their regulations won't cause any harm.
I work at a big tech company and this is depressingly relatable (minus the AIDS, smallpox and death).
Any time something goes wrong with a launch the obvious response is to add extra process that would have prevented that particular issue. And there is no incentive to remove processes. Things go wrong in obvious, legible, and individually high impact ways. Whereas the extra 5% productivity hit from the new process is diffuse, hard to measure, and easy to ignore.
I've been trying to launch a very simple feature for months and months, and there are dozens of de facto approvers who can block the whole thing over some trivial issue like the wording of some text or the colour of a button. And these people have no incentive to move quickly.
Same in defense contracting. Easily half and probably more of the cost and schedule of programs comes from “quality standards” and micromanagement that gives the illusion of competent oversight. Distilling very complicated technical problems into easy to understand but basically useless metrics so paper shufflers and congressional staffers can feel smart and like they know what’s going on is a big part of my job.
When in reality, we learn by screwing up in novel ways - the new process rarely catches any problems because we already learned our lesson and the next screw up will probably be something new and unanticipated. But the cost of the new process stays forever, because no one wants to be the guy that makes “quality controls less rigorous”.
I haven’t finished the article, and I presume someone did, but did no one think about just taking out some extra insurance? This kind of thing is what insurance (and creative kinds of insurance) were meant to handle. Even with grandstanding congressmen, you can say your alleged victims were fully compensated.
IIRC the book mentioned there are laws against having insurance for this because it (according to whatever regulator made this law) produces the wrong incentives.
That's so perverted - the insurance company has exactly the right incentives!
Maybe "wrong incentives" was code for "incentivizes the replacement of the regulator". ;-)
*deep sigh*
I hate to be that guy... no, actually, I love being that guy. You really think that this is down to connectedness?
There's a book that comes out in 1957 that predicts _exactly_ this kind of insanity throttling progress with consequent disasters. And, yes, though it focused more on railways and steel makers, it also predicted the effect on medical progress.
That book was called "Atlas Shrugged".
Please start paying attention.
Why is it so much worse in the US than in more socialist countries?
Which socialist countries? I don't know about places like Cuba, North Korea etc. but I do not think they are hotbeds of innovation.
But let me be generous and assume you mean social democratic nations (words mean things: socialism isn't social democracy), such as e.g. Denmark. I went and looked, and Denmark is ahead of the US in terms of economic freedom, as is Iceland, the Netherlands and even Canada.
But this misses a deeper point. Rand never predicted that the United States would fall to socialism, but to fascism. She predicted an alliance of bent big business and corrupt big government that passed laws that enriched the 0.01% while screwing the 99.99%, backed up by nationalist hysteria, militarised police, and a compliant media (please, stop me if any of this is ringing any bells). Stuff like the above is exactly what A.S. predicts.
Also, social democracy couldn't work.
"“A mixed economy is an explosive, untenable mixture of two opposite elements,” freedom and statism, “which cannot remain stable, but must ultimately go one way or the other"
Well - yeah? See the 2008 financial crash, riots, Trumpism etc. etc. Conversely, see how the Scandinavian social democracies are, again, more free market now than the United States.
The problem is that the internal contradictions of a mixed economy are, well, contradictions, and sooner or later lead to a bang. Then people are thrown into a crisis and ask "How do we fix this?", and they either move towards more state control and more disasters or away from state control. See also the economic history of Venezuela.
You are missing the point that some social democracies actually are stable. The free market segment of Scandinavian economies may be freer than that of the US, but the welfare segment is more generous as well!
That's only true if you discount the vast amount of corporate welfare, direct and indirect, in the United States.
The social democracies that are stable are the ones that have moved towards being free market, and indeed have done so much that they are more so than the U.S. That is, in fact, entirely predictable, and was predicted in advance by, well, guess who?
Here's a better reason for the vetocracy: Court Overreach. The reason lawyers have as much power as they do is because decision making has been forcibly taken from policymakers and more efficient bureaucracies by overzealous judges with no limits on their power.
IRBs have to deal with lawyers, our approval process for environmental protection goes through courts rather than the EPA, and every single new apartment building has to win or beat a lawsuit to get built.
This is because entrenched interests want to wrest policymaking away from legislators and give it to courts.
Court Overreach is a response to the real problem, Representative Abdication.
Once upon a time, countries were run by people who were chosen to run them by everyone else, in a process called elections. People chose whomever they thought was best to run the country based on what they were like and what they said they'd do, and were basically happy with the result. However, there was a tiny misalignment between winning elections and governing well. As a result, in every election, a candidate who was willing to optimise slightly more for election-winning than good governance was more likely to be elected.
Fast forward to now, and politicians aren't interested in anything that doesn't involve winning elections; competition is fierce enough that this is all they do, and governance only occurs to the extent that it's useful for winning elections. This means most of the non-election-winning aspects of governance get pawned off onto civil servants. It's also led to Trevelyanism (in the US, the Pendleton Act), in order to prevent the whole government from being swallowed and repurposed by the election-winning machine.
The result is that there are two choices - either a bureaucracy that's accountable to no-one, or a bureaucracy that's accountable to judges. Seeing this, judges have stepped in as probably a better alternative than civil servants.
Go along to your state legislature and look at the people there. They couldn't run a state if they wanted to; most of them couldn't run a raffle. This applies to most European parliaments, and also to the US Congress. The civil services are all an awful mess of blame-shifting and empire-building whose incentives are almost as bad as the legislators', leaving judges as the only people who are disinterested enough to make some sort of decision (although now generally beginning their own empire-building campaigns, as the term "non-justiciable" slowly fades into history).
All of this is probably inevitable. We live in an entropic universe, people deteriorate with age, machines wear out, iron rusts, wood rots away, societies degenerate into decadence and governments collapse into nothing but rent-seeking and power struggles.
A quick search didn't turn up any usage of the term "Trevelyanism", though there is the real Trevelyan surname.
It's this guy:
https://en.wikipedia.org/wiki/Sir_Charles_Trevelyan,_1st_Baronet
He wrote a report in the 1850s on reforming the British civil service to make it meritocratic and non-partisan.
The average judge is just a bureaucrat with less oversight.
Optimizing for winning elections is not a thing that just happens; it's a basic fact of democracy, one we are at best capable of minimizing. All of these "accountable bureaucracies" are inevitably just accountable back to representatives - there are no easy solutions to Power.
For all the talk of lawyers, it seems like this particular issue was driven largely by overreaction from government in the late 90s… lawyers at least would have to prove harm to win a tort, whereas government agencies can simply shut down whole research departments on a whim because some Congressman decided Something Must Be Done.
Another example of the bureaucratic calculation, bureaucrats are exposed to all the downsides of a positive decision but get none of the upsides, so they are incredibly cautious about making a decision. If you want a different system, change the incentives for the decision makers.
The idea of a dispassionate safety reviewer sounds great until you start to realize it will be populated by agents who have their own interests to consider.
Yep. See also the no-fly list, and the apparently Kafkaesque process people wrongly added to it have to go through to be taken off. (What happens to your career if you take someone off the list and they turn out to later do something terrible?)
One thing a student who did an exchange year in America pointed out to me is that, unlike in Europe, speed limits are kind of meaningless. The speed limit on the interstate may be 70. Nobody goes 70. They all drive somewhere between 75 and 90, and whether that's judged to be breaking the rules or not is ambiguous and situational. It depends who you are, who the traffic cop is, and so on. If you grow up in the US, this is second nature, but if you come into the system from the outside it seems utterly mysterious.
The American system has a lot of rules on paper, and functioning in it requires you to know which rules are real, which rules you can break or ignore with impunity, and which rules to subvert by entangling the interests of the rule-enforcers with your own and co-opting them.
This is not efficient but it does ensure that real power is wielded by those who understand the system best, which tends to be those closest to it.
You don't need cops to enforce speed limits, cameras can do it.
Funny you should say that; don't ask me for statutory cites, but my understanding is that in many US jurisdictions, speeding tickets must be issued by a real live cop and not a camera, except where explicitly carved out by statute.
On paper, a response to potential "but the machine was poorly calibrated" defenses.
Not saying any of that makes sense, but fits reassuringly into the "everybody is[, at least officially,] as risk averse as can be" narrative.
Indeed, yet traffic enforcement is still done in large part by traffic cops in the US. Having the unpredictable human element as part of the structure of enforcement seems to be a feature, not a bug.
Your student surely can't have come from the UK, where the official speed limit on the equivalent of interstates is 70mph, but the enforced speed limit is 85 (though it's seen as polite to slow down to about 75 when passing a marked police car, which in turn will always travel at about 65 so the charade doesn't slow traffic too much).
From my experience of driving in other European countries, I'd say the same applies across most of Europe. Maybe not Switzerland.
It's true that I've chatted to a number of northern Europeans who find Anglophone indirectness and lack of clarity infuriating; but I'd guess that in general this isn't an American problem, or even an Anglophone problem, it's a human being problem.
And here I'd gotten the impression that Americans were anomalously blunt & direct.
I used to think that. Probably because I'm English, and I think that in English culture that is how they're seen, by and large. Canadian too, I imagine.
I therefore found it very interesting when I talked about Americans to continental Europeans, who said that of course Americans aren't direct, they're almost as annoyingly indirect as Brits, and why can't they ever say what they mean. Which is, once you think about it, absolutely true - when compared to a culture where people actually do try to say what they mean, Americans are just as bad as the British, it's just that on top of everything else they're pretending to be happy.
You'd almost think that the use of the English language leads to indirect, hyper-polite cultures with language codes that are frustratingly impenetrable for outsiders. Though if that was your theory, I'm not sure how you'd explain Australia.
Perhaps their nigh-incomprehensible accent & slang serves the same function?
/s, sort of
I thought the "real limit" was 1 mph under the limit for reckless driving (usually the posted limit + 15 mph), maybe lowered a bit more for speedometer and detector inaccuracy? :-)
The Gordian knot of all this bureaucratic nonsense could be decisively cut if in any lawsuit for damages the compensation amount was strictly limited, according to a set of guidelines and, based on the latter, decided by the judge rather than the jury.
Of course the jury would still be the sole arbiters of whether there was a liability. I'm not suggesting they would have no role and a panel of judges would go into a huddle and decide the whole case, although that would be even simpler and cheaper!
The snag is I think in the US judge-determined awards would be contrary to some constitutional principle, dating from a former age when life was simpler and maybe jurors were more pragmatic about how much compensation to award, instead of the lottery jackpot win level of awards which often seems to prevail today.
Isn't this just part of the greater liability law issue the US seems to have (seen from the outside)? It looks like the incentives for liability claims are so huge that a significant share of legal professionals are occupied either putting these claims forward or fighting them off. This also seems to be a significant part of the extreme US health spending.
From the outside , the US seems to have great deal of over regulation *and* under regulation.
To be frank I have not thought about it much, but my assumption is that LLMs (large language models) and AIs more generally are going to revolutionise medical research.
At first their prognostications will be challenged and resisted, but as time goes on and they start to build a track record of getting it right, we will come to rely on them more and more.
AIs are going to change literally everything, and in ways that are completely unpredictable.
I too think they are going to revolutionize medical research, and also medical practice. For instance, a surgeon removing a cancer needs to know where the cancer ends and healthy cells begin. If he had something in his glasses that showed a magnified version of whatever he was looking at to an AI, the AI could do instant biopsies of every place the surgeon looks, using pattern recognition to distinguish healthy from cancerous cells, and give him instant feedback on where the tumor edges are. I do worry, though, that there's so much money and fun to be had from things like GPT-4 that the AI development companies are going to continue to develop the flashy silly party-tricks side of AI, rather than the side that aids science and medicine.
>It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.
This is consistent with my experience of government administration more generally. The people implementing rules tend to be very aware of the problems and irrationalities of the system, but they're following instructions and priorities from senior management, and ultimately politicians, for whom the primary incentive is to avoid embarrassing scandals, not maximise outcomes.
Politicians are to some extent just following their rational self-interest, as the negative press and public opinion from one death in a trial is far greater than from a thousand deaths due to inaction.
I'm in the UK so consent is a bit less onerous here, and yet, I've attempted to participate in 3 covid challenge trials, trials for which there are not exactly truckloads of willing participants, and yet I keep getting denied because I'm a carrier for Factor V Leiden. I don't even have it, I'm just a carrier, and yet because it raises my risk of clotting ever so slightly, I keep getting denied. This AFTER I've had covid twice, once before being vaccinated, and without any sign of blood clotting. Bureaucracy gone awry.
I’m talking about a King with limited powers, much less than a President would have. There are plenty of constitutional monarchies where the King’s power is nontrivial but which have legislatures and Prime Ministers for ordinary governance.
At all levels of power from unlimited to extremely limited, it is easy to find stupid assholes filling jobs of that level. Whatever power setting you give your quasi-Monarch, the idea's useless unless you have a reliable way to ensure whoever gets the job is smart, honest, and conscientious.
On the general theory of risk avoidance in every form dominating politics, I think there's more to it. The risk of gun violence is famously ignored by politics, for instance. The risk of injury to pedestrians and cyclists by cars is pretty famously ignored. COVID risk certainly wasn't uniformly assessed.
So it isn't that society can't tolerate risks or make ROI-based decisions in many situations. I suspect the specific kind of legal risk medical professionals and institutions face has the problem that the tail risk imposed is huge and that the system does a poor job of assessing the ambiguity of day-to-day decisions. Doctors famously believe this is onerous.
But compare that with policing. Society famously tolerates quite a bit of risky behavior from police, and accepts that police officers face ambiguous situations and need to be given latitude to act; and while the tail risk there has increased substantially, I'm not sure how it compares to the situation in medicine.
Maybe a National Medical Research Association joined by millions of voters suffering from conditions that would benefit from medical research, and fanatically protective of medical research prerogatives, could change the landscape. National Health Council? Maybe the big orgs like AHA and ACS need to advocate less for funding and more for red tape removal.
I am wondering if it is partly caused by the fact that many people are very suspicious of research (all those Nazi doctors!). For example, as an ecologist in my country (France), I am supposed to attend several training sessions, fill out a lot of forms, apply for many authorizations, etc., if I want to capture, mark, and release a wild animal without hurting it, whereas hunters can just hunt and kill the same animals.
"Gun violence" is an overtly political term so it's reasonable that its only used as a political football.
https://hwfo.substack.com/p/ar-15s-are-mindbogglingly-safe
"I don’t know exactly who to blame things on"
I do. America is the victim of its own success. It's gotten so rich, so comfortable, so risk-averse that a billion to maybe save a handful of lives isn't patently absurd. There are no reality checks anymore. Like you quote, "At a time when clerks and farm boys were being drafted and shipped to the Pacific, infecting the mentally ill with malaria was generally seen as asking no greater sacrifice of them than of everyone else." Whereas these days what little of warfare remains is mostly done by unmanned drones.
I think you've unintentionally elided two distinct points: first, that IRBs are wildly inefficient and often pointless within the prevailing legal-moral normative system (PLMNS); second, that IRBs are at odds with utilitarianism.
Law in Anglo-Saxon countries, and most people's opinions, draw a huge distinction between harming someone and not helping them. If I cut you with a knife causing a small amount of blood loss and maybe a small scar, that's a serious crime because I have an obligation not to harm you. If I see a car hurtling towards you that you've got time to escape from if you notice it, but don't shout to warn you (even if I do this because I don't like you), then that's completely fine because I have no obligation to help you. This is the answer you'd get from both Christianity and Liberalism (in the old-fashioned/European sense of the term, cf. American Right-Libertarianism). Notably, in most Anglo-Saxon legal systems, you can't consent to be caused physical injury.
Under PLMNS, researchers should always ask people if they consent to using their personal data in studies which are purely comparing data and don't change how someone will be treated. For anything that affects what medical treatment someone will or won't receive, you'd at least have to give them a full account of how their treatment would be different and what the risks of that are. If there's a real risk of killing someone, or permanently disabling them, you probably shouldn't be allowed to do the study even if all the participants give their informed consent. This isn't quite Hans Jonas' position, but it cashes out pretty similarly.
That isn't to say the current IRB system works fine for PLMNS purposes; obviously there's a focus on matters that are simply irrelevant to anything anyone could be rationally concerned with. But if, for example, they were putting people on a different ventilator setting than they otherwise would, and that risked killing the patient, then that probably shouldn't be allowed; the fact that it might lead to the future survival of other, unconnected people isn't a relevant consideration, and nor is "the same number of people end up on each ventilator setting, who cares which ones it is" because under PLMNS individuals aren't fungible.
Under utilitarianism, you'd probably still want some sort of oversight to eliminate pointless yet harmful experiments or reduce unnecessary harm, but it's not clear why subjects' consent would ever be a relevant concern; you might not want to tell them about the worst risks of a study, as this would upset them. The threshold would be really low, because any advance in medical science could potentially last for centuries and save vastly more people than the study would ever involve. The problem is, as is always the case for utilitarianism, this binds you to some pretty nasty stuff; I can't work out whether the Tuskegee experiment's findings have saved any lives, but Mengele's research has definitely saved more people than he killed, and I'd be surprised if that didn't apply to Unit 731 as well. The utilitarian IRB would presumably sign off on those. More interestingly, it might have to object to a study where everyone gives informed consent but the risk of serious harm to subjects is pretty high, and insist that it be done on people whose quality of life will be less affected if it goes wrong (or whose lower expected utility in the longer term makes their deaths less bad) such as prisoners or the disabled.
The starting point to any ideal system has to be setting out what it's trying to achieve. Granted, if you wanted reform in the utilitarian direction, you probably wouldn't advocate a fully utilitarian system due to the tendency of the general public to recoil in horror.
I’m pretty sure everyone who has basic reading comprehension grasped that this was an argument against utilitarianism. My viewing Mengele’s etc. actions as abhorrent is necessary for the article to make sense, and I don’t think the fact you’re a moron makes your life less valuable in the slightest.
I don’t think, if that’s true, it’s as obvious as you think. This is also a pretty stock response from Nazis when they get called out in their Nazi shit. That said, I don’t know you and it is entirely possible that your argument against utilitarianism just happened to look like a defense of Nazi shit.
Regardless…
> I don’t think the fact you’re a moron makes your life less valuable in the slightest.
Sick burn, 10/10. It makes me want to believe you aren’t a Nazi just because of how great it was.
Extreme aggression based on basic miscomprehension, banned.
Very interesting. French law says that "non assistance to a person in danger" is a crime, and I was surprised to learn that this clause is not universal at all.
It seems to me that nobody advocates a fully utilitarian system, but I would expect the large majority of people to agree that refusing to subject people to very minor inconvenience or even risk when they agree to it, even when there are potentially large benefits, is not the right position either.
> Under utilitarianism
I do think it's high time someone proved utilitarianism.
Bentham's original proof was:
(1) People only desire pleasure (and only seem to desire other things because they desire pleasure).
(2) From (1), we can infer that pleasure is the only desirable thing.
(3) Therefore increasing the total quantity of pleasure is the most desirable thing.
(4) Therefore utilitarianism.
Very few living utilitarians (none?) buy this though; they mostly say either that they have a moral intuition in favour of pleasure and only pleasure being important, or that on analysis pleasure-maximisation is the hidden unifying principle of all their moral intuitions.
You make two points: that IRBs are inefficient and often pointless even by the prevailing norms, and that IRBs are incompatible with utilitarianism. I can't comment on the relationship between IRBs and the prevailing norms of the legal system, which we should in turn sharply distinguish from prevailing moral and cultural norms among the public.
I wanted to address your point about utilitarianism. You say that "it's not clear why subjects' consent would ever be a relevant concern," and that Mengele's research has "definitely saved more people than he killed," which may also apply to Unit 731. The key point is that, if horrific human torture-research like that of Mengele and Unit 731 saves lives on net, or more accurately is "net positive utility," then that is a serious challenge to whether utilitarianism is the proper moral foundation for research ethics.
The BBC goes a little deeper, pointing out that "Allied forces also snapped up other Nazi innovations. Nerve agents such as Tabun and Sarin (which would fuel the development of new insecticides as well as weapons of mass destruction), the antimalarial chloroquine, methadone and methamphetamines, as well as medical research into hypothermia, hypoxia, dehydration and more, were all generated on the back of human experiments in concentration camps." [1]
The use of these examples is emotionally powerful, which is good because it drives home how important the philosophical issue is. However, we have to be careful in analyzing it. For many of these experiments, the results could have also been obtained through normal scientific studies that we consider ethical, so the marginal "value" of the brutal Nazi or Japanese experiments is significantly undermined. Some modern researchers continue to use Nazi data on hypothermia, because there is no ethical way to obtain it, but the benefit there is much lower.
Any moral system runs into calculation and reference class issues. If we were able to use brutal, unethical experiments to speed-run biomedical progress, and this was in fact the fastest way, in the long term, to save lives and improve QoL for the greatest number of people, would utilitarianism obligate us to do it? Perhaps yes, and that would be a real moral quandary. On the other hand, if such brutal experiments don't in fact systematically provide greater benefits than a conventionally ethical research program, the sting is taken out of the thought experiment. Furthermore, if the only way to run such brutal experiments is in the context of a brutal society, such as that of Nazi Germany or the Japanese Empire's invading army in Manchuria, then we have to weigh the negatives of the wider societal brutality against the research.
We won't ever have definitive data proving which real-world system is objectively best from a utilitarian perspective. But it would be a consistent and common-sense utilitarian position to claim that conventional research ethics, perhaps with a more efficient and flexible regulatory system, is the only sustainable, and therefore the most efficient, way to drive scientific progress forward, as compared to a program of deregulated and brutal human experimentation that would have to take place in a similar society to that of Nazi Germany or the most racist aspects of early 20th century America.
A rule-utilitarian IRB that held such a position would change its behavior by prioritizing the benefits of research as well as the cost, but could also enforce common-sense constraints on specific types of experiments, because the net utility of decisions made under the rule exceeds the net utility of decisions made without it, even though enforcing the rule has net negative utility in specific cases. This is still reasoning from consequences, which is why it's utilitarian rather than deontological, but it is considering the consequences of the rule, rather than of individual acts.
For that reason, I disagree with your claim that you would not want to advocate for a "fully utilitarian system," or that a "utilitarian IRB would presumably sign off on [Mengele's research or Unit 731]," partly because a fully utilitarian system would never have agreed to permit the existence of Nazi Germany or the Japanese abuses in Manchuria, but also because a fully rule-utilitarian system would ban even net life-saving abusive research on the grounds that compliance with the rule saves more lives on net than not following the rule.
[1] https://www.bbc.com/future/article/20190723-the-ethics-of-using-nazi-science
I'm not sure, when it comes to medical research, what a rule-utilitarian IRB would ultimately end up caring about; it's sitting in circumstances where it's in a better position to weigh the cost/benefit risks, so I think you'd need to be a pretty hard rule-utilitarian not to let it do its own calculations.
What I don't see is how even a rules-based rule-utilitarian IRB would find itself having a rule about patient consent, unless it was subscribing to a version of hard rule-utilitarianism with an absolute "no non-consensual interference with people" rule (making the utilitarianism redundant, and bringing us back to Hans Jonas). The crucial, defining feature of utilitarianism is that nothing other than happiness* matters. Common-sense rules can only come into play if they ultimately lead back to that.
It's probably right for an individual utilitarian to advocate for a marginal shift towards looser regulation. But the position they should be hoping for is utilitarian regulation, and in terms of societies they should advocate one where the prevailing norms are theirs as the most likely to increase utility.**
This, I think, is the flaw in your argument. There are three possible levels: utilitarian individuals, a utilitarian IRB, and a utilitarian society. A utilitarian IRB in a utilitarian society could happily permit whatever it concluded maximised utility, including non-consensual dangerous research on humans; it's not clear why a utilitarian society would object to this, even if they wouldn't have colonised Manchuria. A utilitarian IRB in a non-utilitarian society wouldn't get to dictate what the society it was in was like, but if the buck stopped with it then it would lean as far towards doing away with consent, and to treating harm/risk of harm equally to benefit/risk of benefit, as it could get away with without the non-utilitarians stepping in and shutting it down.
*Or preferences. I'm fairly sure preferences don't save consent though, as you have to weigh it against people's preferences not to die of whatever horrible disease this will be treating.
**This might, empirically, be false (eg. if a Christian society were happier than a utilitarian one because religion makes people happier), and could lead to a fun short story about a secret society of utilitarians trying to convert everyone to Methodism before dissolving itself and taking all knowledge of utilitarianism to their graves.
I think your point about the context of the IRB in society is a crucial one. In a utilitarian society that embraced cost-benefit analysis for all its decisions, and was as rigorously moral in personal behavior as in policy, the IRB would indeed probably do away with patient consent and focus on speeding research that would promote health and happiness and on barring harmful or wasteful research.
In a society mainly composed of non-utilitarians, a utilitarian IRB must operate within the strictures imposed on it by wider society. It can be activist, pushing for pure utilitarianism, but it can also adopt a pragmatic bargaining posture - looking for compromises with prevailing norms where most people feel they’re better off moving in a marginally more utilitarian direction. That’s what it sounds like this book is doing, and it’s what I would envision for a utilitarian IRB.
Even if some brutal experiments have benefits to society that exceed the harm, experimenting on unconsenting people under a brutal dictatorship isn't the only possible way to obtain the data. Even if you can't get anyone to consent to being brutally killed, you can pay 1000 people to consent to a 0.1% risk of being brutally killed. Then you randomly pick one of them to experiment on.
Unfortunately, libertarian societies where you can legally consent to such risks are rarer than brutal dictatorships, so our only data on some things are from brutal dictatorships.
> I can't work out whether the Tuskegee experiment's findings have saved any lives, but Mengele's research has definitely saved more people than he killed, and I'd be surprised if that didn't apply to Unit 731 as well. The utilitarian IRB would presumably sign off on those.
That does not follow, even granting the claim about break-even for Mengele and Unit 731 (and ignoring second-order consequences like, say, the harm of assisting the continuance of IRBs by furnishing a convenient founding myth). Utilitarianism and consequentialism are usually formulated with some sort of maximizing, so it is not enough to show that an action makes the world slightly better off or higher-utility, because there may be many other actions that would be better still.
...I came here to say that this is very much NOT the answer of Christianity, and given how famous the Parable of the Good Samaritan is that's not exactly hard to know.
This is one of the reasons if you do not save a drowning man you could have easily saved without risk to yourself, many people will judge you harshly.
People might judge you harshly if you fail to save someone in a very concrete case, but they generally won't in an abstract context like what IRBs do.
I don't think the prevailing legal system bans consensual harm as categorically as you say. Boxing matches are legal. Surgery is legal, even though it always injures the patient. Medical treatments that carry a significant chance of killing the patient are legal when the benefit outweighs the harm. Dangerous jobs are legal if the risk is contained. And medical experiments that carry a tiny risk of death are still legal, even if the IRB system requires the risk to be very tiny.
---
I disagree that consent would be irrelevant under utilitarianism. Requiring consent is a good way to ensure that the benefit to society exceeds the harm to the experimental subjects, especially if they participate for personal benefits (payment, or access to the experimental drug) rather than out of altruism. And requiring consent doesn't preclude even brutal experiments if the benefit to society exceeds the harm: even if you can't get anyone to consent to definitely being experimented on, you can get many people to consent to a small chance of being experimented on, then perform the experiment on a random subset.
You say that, instead of people who consent, a utilitarian would prefer to experiment on people who are the least harmed by the experiment. But it should be assumed (especially if you are a preference utilitarian) that those who are the least harmed by the experiment are the most willing to consent! People know what's good or bad for them better than you do (especially if you properly inform them about the consequences of the experiment); their choice of consenting or not is useful information about it, in the same way as people's voluntary choices in the marketplace are much better information about their needs and preferences than a government can ever gather in other ways.
Also, disregarding consent can create all sorts of perverse incentives that are bad from a utilitarian standpoint. Such as that if hospitals perform risky experiments on patients without asking for their consent in the interest of the greater good, people may avoid going to hospitals if they are sick.
The dude is advocating Nazi Shit so unsubtly that his post contained apologia for Nazi death camp atrocities. I don’t think he’s someone that gives a good god damn about any actual argument. Just trying to get more people on the Nazi shit bandwagon.
Well, the Nazi shit around here is certainly getting less subtle.
> Academic ethicists wrote lots of papers about how no amount of supposed benefit could ever justify a single research-related death.
Ah, the perennial "trolley problem", gifted to us by academic virtue ethicists.
Or at least a peculiar variant of the problem, in which the trolley was headed to run over a few hapless people, until the ethicists virtuously pulled the lever that makes it run over thousands.
I think the trolley metaphor is more accurate (and closer to the "standard" trolley problem) if you do it the other way around.
The trolley is a disease, the people it's currently headed towards are the thousands of people who will die (of "natural causes", so not anybody's fault) if no cure for the disease is found. We have the option to pull the lever (experiment with potential cures, risking some health complications for the trial participants when an experiment goes wrong) and thereby kill a small number of people who would have otherwise lived, while in the long run saving thousands.
From a utilitarian POV the answer is obvious. But just like a lot of non-utilitarians feel that pulling the lever is wrong because then the blood of the one victim will be on your hands, while you are not responsible for the deaths of the multiple people on the trolley's original track, it gives a little more insight into the mindset of an IRB administrator who feels that "curing diseases is not my job; preventing people from getting hurt in the process of finding the cure is my job."
well, more precisely still: the trolley is heading for many thousands. Researchers are actively trying to pull the lever to a track with a couple of people. The IRB is actively restraining the researchers and pulling them away from the lever, which is not exactly inaction on their part.
No, I inverted the trolley metaphor on purpose, because the inversion is /exactly what happened/ when considered from the "academic ethicist" POV.
The usual trolley metaphor is "don't pull the lever, and 4 people die" versus "pull the lever and YOU caused 1 person to die".
In our case however, the people pulling the lever here WERE the "academic ethicists", by insisting on an explosively expanding and unaccountable IRB bureaucracy.
Without them pulling any lever, the trolley was set on the tracks where a few casualties per year occurred. They pulled the lever, and switched the trolley to the tracks where thousands of *extra* casualties per year occur.
This might be a wonderful job programme for a bunch of administrators who often have next to zero knowledge on the subject matter (people like Dr. Whitman being evident exceptions to the rule), and who often delight in crushing other people, who do know the subject matter, with mountains of mindless inane busywork. This might even be a solid way of protecting institutions from willful predatory lawyers. (which is the most likely reason why they are able to persist)
But /actually ethical/, this might not be.
The massive increase in administration is killing health care in many ways. A small example: in 1985 I moved to Canada to run a rural practice including a 29 bed hospital with one administrator. We never ran out of beds, the ER was open 24/7 as was the lab and x-ray. These days that 'hospital' has 8 beds, 15 administrators and ER and x-ray during business hours on weekdays. No lab. The 8 beds are permanently filled, acute cases get transferred over the mountain to a regional hospital 50 miles away and most of the time those who would have to present to the ER also have to drive over the mountain.
Administration and management have become like ivy growing on an oak tree, thriving as the tree is killed. More nutrition goes to the ivy just as more money flows into admin than patient care. This is new. No one has seen or dealt with this before. How do we reverse it? I cannot imagine a government firing all the administrators and replacing them with one hard-working real person rather than a bureaucrat, but that is what it will take and if we don't do it one day there will be no room for any patients in that little hospital, and an exponentially rising number of admins will just have meetings with each other all day long for no purpose whatsoever. Forty years ago John Ralston Saul foresaw this in his wonderful book Voltaire's Bastards, but we have done nothing, learned nothing. We are facing not so much institutional inertia, but institutional entropic death.
What did the old administrator do, and what do the new ones do? Is there a change in the number/type of patient providers too? That’s quite the contrast.
I was going along nodding my head in general agreement until I got to the part where you said this is just like NIMBYism.
No.
This is the near opposite of NIMBYism. When people (to cite recent examples in my neighborhood) rise up to protest building houses on unused land, they do it because they are more or less directly “injured”.
A person who prefers trees instead of town houses across the street is completely different from some institution that wants a dense thicket of regulations to prevent being sued. There is no connection.
Actually, it's worse than you say. I say that as someone who lives in metro Boston and has slowly come to understand some of the politics. The biggest barrier is that the current residents are injured by the possibility that a child of a family with a lower income than the average of the current residents may live in the new house. It's actually a known fact that the "outcomes" of high-school students is strongly affected by the average socioeconomic status of the students in the school, which gives the residents a strong incentive to keep poor people out of their school district (which in metro Boston corresponds to the suburban municipality boundaries). Unfortunately, this is both perfectly rational and a gruesome tragedy of the commons.
You can frame an objection to anything in terms of a risk of injury. Onerous regulation of medical trials is in fact framed in those terms.
But it's ridiculous to claim that the current level of regulation is justified in terms of harm reduction. Just as nearly everyone who currently claims that they are injured by housing being built on unoccupied land near their home is being ridiculous. Inconvenienced, put out, made sad; yes. Injured, no.
I think there are clear similarities.
In both cases you have one party (the researcher or developer) wanting to do something to benefit others (the ill or newly-housed), and a third-party (the IRB or the NIMBY) butting in to block something that's no business of theirs in the first place.
This reminds me a lot of a concept in software engineering I read in the Google Site Reliability Engineering book: the concept of error budgets as a way to resolve the conflict of interest between progress and safety.
Normally, you have devs, who want to improve a product, add new features, and iterate quickly. But change introduces risk, things crash more often, new bugs are found, and so you have a different group whose job it is to make sure things never crash. These incentives conflict, and so you have constant fighting between the second group trying to add new checklists, change management processes, and internal regulations to make release safer, and the first group who try to skip or circumvent these so they can make things. The equilibrium ends up being decided by whoever has more local political power.
The "solution" that google uses is to first define (by business commitee) a non-zero number of "how much should this crash per unit time". This is common, for contracts, but what is less common is that the people responsible for defending this number are expected to defend it from both sides, not just preventing crashing too often but also preventing crashing not often enough. If there are too few crashes, then that means there is too much safety and effort should be put on faster change/releases, and that way the incentives are better.
I don't know how directly applicable this is to the legal system, and of course this is the ideal theory, real implementation has a dozen warts involved, but it seemed like a relevant line of thought.
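To make the idea concrete, here is a minimal sketch of what an error-budget check might look like (hypothetical numbers and function names, not Google's actual tooling): the agreed target is a single number both sides defend, and the same check argues for slowing down or for speeding up.

```python
# Toy error-budget check, loosely following the SRE book's idea.
# The target success rate is set by business agreement, not by engineers.

def budget_consumed(target_success_rate: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the allowed failures that have already been used up."""
    allowed_failures = total_requests * (1 - target_success_rate)
    return failed_requests / allowed_failures if allowed_failures else float("inf")

consumed = budget_consumed(0.999, total_requests=10_000_000, failed_requests=4_000)
if consumed > 1.0:
    print("Budget exhausted: freeze risky releases, spend effort on reliability.")
elif consumed < 0.5:
    print("Budget underused: likely too cautious, ship changes faster.")
else:
    print("Within budget: keep the current release pace.")
```

The point of the sketch is just that "not enough failures" is treated as a problem too, which is what keeps the safety side from ratcheting in only one direction.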
Insurance has similar incentives - they want to charge premiums proportional to the actual risk involved, and then whoever is buying insurance has a direct, already quantified costing for various safety procedures they are considering. If a check saves $5 per month in premiums it better not take much time to fill out; if it saves $5 million per month that procedure is worth substantial (but not infinite) productivity loss
Your description of "normally, devs....." does not match in the least the way I thought of code when I was doing it (I'm recently retired). I don't doubt at all that you're correctly describing a certain cohort, but "wants to add features and doesn't much care if the program has a ton of bugs" is close to the opposite of the way I worked. But, again, that's the past.
I work in software engineering and we use this method. It works pretty well. It also has the upside that people who do better work become sought after in the organization because product people (who normally have no idea about the quality of engineers) know they want "that developer/team who delivered a million things within the error budget" instead of "that developer/team who had to use up most of their error budget dealing with minor improvements"
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous"
I think you mean LESS likely to miss cases?
Thanks for writing this; I've watched people I know suffer through IRBs...
>Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good
So, which one are you, a journalist or an academic?
Okay, that's a bit snarky, but I'm genuinely wondering why there isn't equal incentive for a journalist or academic to do what you're doing here. "Obstructive bureaucrats are literally killing people" is a perfect "if it bleeds it leads" headline!
It'd be a really boring newspaper article though. Imagine handing a terribly-written version of this article full of basic factual errors to a normal person. Just because Scott can make something interesting to his readers doesn't mean newspapers can make something interesting to theirs.
On the one hand, I think we are seeing exactly that, in the publication of this book and in Scott's coverage of it. Or if we turn to the sale of kidneys, we've seen a spate of articles in support of a regulated system for permitting kidney sales.
But the populace has to be psychologically prepared to view obstructive bureaucrats as the perpetrator. Scientists are the good guys to the liberal media right now, and bureaucrats are currently working to ban mifepristone. NIMBYs are starting to become a household word, as we are increasingly able to pin homelessness on them, as well as obstruction of a green energy transition. Simultaneously, scientists and their supporters have gotten a lot better at figuring out how to anticipate and defuse or avoid being painted in a negative light in media coverage, with certain exceptions.
As we transition from viewing IRBs as the heroic protectors of victimized study participants and toward viewing them as obstructionist bureaucrats bent on destroying the environment, perpetuating homelessness in service of the property values of the rich, taking away a woman's right to choose, enforcing a policy of banning responsible organ sales and thereby creating a horrific and exploitative black market overseas, I think we will see a shift in media coverage of the kind you describe.
This blog post made me so unbearably angry. It's like a Kafka story except worse because millions of people continue to die as a result of cruelly circular bureaucracy. I don't have anything constructive to add, just the wordless frustration of the truly horrified. IRBs must die.
Another data point in a giant pile of data points for my theory that liability concerns rule everything around us. How did we get here, and how do we escape?
Fascinating article! I feel for the American researchers!
It seems to me a very interesting case of a general problem where whether we find something acceptable or not depends a lot on the distribution of costs and benefits, and maybe less on the average cost or benefit. We tend to care a lot if the costs or benefits are high, at least for some people, and much less if the individual costs or benefits are low for all people, even if their sum is very high. It is not at all obvious to me what the general solution to this problem should be (although in this case there is no doubt that the current IRB process should be changed!)
I understand why lawyers and journalists might contribute to the problem. But if academic ethicists are dedicated to rooting out new forms of injustice, how come so few have noticed the injustice of good not done that you've laid out in this post?
In my experience, allowing an appeal process to an alternative decider is always net beneficial, even if the person taking the appeal is very likely to approve the previous decision. It is another point of review, which serves two major purposes. The first is that egregious cases can still be identified and shut down, which I think even a non-expert dean would be willing to do in very extreme cases. Secondly, it puts the earlier review levels on notice that someone may be looking at their work. Even if they eventually get their stance approved through the appeal, it shines light on their bad behavior and reduces their prestige in the relevant areas.
Let me mention my mental journey through this post, as it points out an important aspect:
> how they insist on consent processes that appear designed to help the institution dodge liability or litigation
When I read this early on, I said to myself, of course, the people running the institution have a fiduciary responsibility to avoid having it sued. So at root the problem is our litigious society. But further down I read:
> the IRB’s over-riding goal is clear: to avoid the enormous risk to the institution of being found in noncompliance by OHRP.
This is different; it's not really the litigiousness problem, as nobody is actually worried about subjects suing the researchers. (And as other posters have mentioned, current malpractice insurance can probably handle that.) The risk is that the OHRP will declare them noncompliant and cut off their federal research funding. And it seems that essentially all medical research is done, if not by, then at least in, facilities that get so much money from the US federal government that they have to do what it says.
So we're in a situation of "private law" where everybody is financially dependent on one bureaucracy and "he who has the gold makes the rules".
Unrelated to the content, this is the first time I've spotted use of the double "the," and I think it's specifically because it was followed by the repeated As in American Academy of Arts.
> I don’t know exactly who to blame things on, but my working hypothesis is some kind of lawyer-adminstrator-journalist-academic-regulator axis. Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone). The institutions hire administrators to create policies that will help avoid lawsuits, and the administrators codify maximally strict rules meant to protect the institution in the worst-case scenario. Journalists (“if it bleeds, it leads”) and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good, and cast suspicion on anyone who tries to add benefit-getting to the calculation. Finally, there are calls for regulators to step in - always on the side of ratcheting up severity.
Read up on Jonathan Haidt's research on Moral Foundations Theory. What you're describing flows directly from the intense left-wing bias in academia.
In a nutshell, there are five virtues that seem to be inherent points on the human moral compass, found across all different cultures. Care/prevention of harm, fairness, loyalty, respect for authority, and respect for sanctity. Liberals tend to focus strongly on the first two, while conservatives are more likely to weight all five more or less evenly. There's also a very strong bias towards immediate-term thinking among liberals, while conservatives are more likely to look at the big picture and take a long-term perspective.
When you have a system like modern-day academia that's actively hostile to conservative thought, you end up with an echo chamber devoid of the conservative virtues, a place where all they have is a hammer (short-term harm prevention) and so every little thing starts to look like a form of harm that must be prevented. And ironically, all this hyperfocus on harm-prevention ends up causing much greater harm over the long term, but short-term harm prevention inhibits any attempts to do anything about it.
> IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this. The San Francisco Chronicle recently reported it takes 87 permits, two to three years, and $500,000 to get permission to build houses in SF; developers have to face their own “IRB” of NIMBYs, concerned with risks of their own. Teachers complain that instead of helping students, they’re forced to conform to more and more weird regulations, paperwork, and federal mandates. Infrastructure fails to materialize, unable to escape Environmental Review Hell. Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.
See also: COVID response. There's precious little in the way of evidence that lockdowns and masking actually saved any lives, but along the way to saving those few-if-any lives we created long-term effects that killed a lot of people and damaged millions more.
I'm not aware that Haidt has claimed that liberals have a stronger bias towards immediate term thinking than conservatives. Nor is it obviously true; for example long term climate change has been more of a concern on the left than the right.
One similarity between this discussion of IRB's and the debate over the Affordable Care Act is that statistical lives tend to be given a lot less weight than particularized lives. The people who are now alive because of the Affordable Care Act, or who would be alive if we had less restrictive IRB reviews, are not easily identified, so for most people they don't carry the same moral weight as, say, a person who is harmed by a medical experiment gone wrong. However, I don't believe that Haidt's moral foundations theory predicts that this tendency should be larger for liberals than conservatives, and the fact that the Affordable Care Act was largely supported by liberals and opposed by conservatives is evidence that, if anything, the opposite is true.
> Nor is it obviously true; for example long term climate change has been more of a concern on the left than the right.
Sort of. Everyone agrees that climate change is a long-term issue. But the left's response to it is always immediate-term in nature, framing it as a "crisis" that demands MASSIVE CHANGES RIGHT NOW!!! in order to head it off before it's too late, whereas conservatives tend to see it as a problem that can be safely dealt with over a longer period, preferably without the society-wrecking upheavals that always seem to accompany any form of "massive changes right now."
> The people who are now alive because of the Affordable Care Act ... are not easily identified
Possibly because the ACA most notably dealt with insurance's treatment of pre-existing conditions, AKA chronic issues that, by definition, cause greater or lesser degrees of misery but do very little actual killing?
If IRB reform *is* possible, what can an individual do to make it more likely?
I'm hoping for a better option than "write your congressman," but it is a top-down problem. Grassroots approaches (like those applied to electoral or zoning reform) are a bad idea. Even at the state level, getting North Carolina to preempt federal regulations for its universities seems... risky.
I'm not saying this rates as a New EA Cause Area, but I don't want to leave this $1.6B bill lying on the ground.
>"Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone)."
I wonder if Congress re-writing institutional mandates to make them at least consider benefits instead of just risks would cause (at least the threat of) the parenthetical lawsuits against inaction. The courts don't seem like the best place to handle this cost-benefit analysis but this seems to me like the least intractable path forward. Would this help create action, or would it only increase the core problem of everything being done for the sake of lawsuit protection?
Why do we need special rules for medicine?
The law has rules about what dangerous activities people are allowed to consent to, for example in the context of dangerous sports or dangerous jobs. Criminal and civil trials in this context seem to be a fairly functional system. If doctors do bad things, they can stand in the accused box in court and get charged with assault or murder, with the same standards that apply to everyone else. If there need to be exceptions, they should be exceptions of the form "doctors have special permission to do X".
One reason for special rules is so you can expedite specific sub-classes of lawsuit.
Cf. "STATUTE: don't reuse cotton swabs on other patients" vs "STATUTE: don't be evil".
If you have a decision procedure that is slow but yields the right results, there is a technique to speed it up. In programming it's called caching. In law it's precedent.
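For readers who don't program, a toy sketch of the analogy (purely illustrative; the `decide` function here is hypothetical): the first time a question comes up, the slow deliberation runs; identical later cases reuse the stored answer, the way precedent spares a court from re-arguing settled questions.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def decide(case_type: str) -> str:
    # Stand-in for a slow, careful deliberation.
    print(f"Deliberating carefully about: {case_type}")
    return f"ruling on {case_type}"

decide("minimal-risk swab study")   # slow path: the "precedent" is set here
decide("minimal-risk swab study")   # fast path: cached answer, no re-deliberation
```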
There is no problem you can't solve with another level of indirection. So the obvious solution: Regulate the regulators. Make the regulators prove that a regulation they make or enforce is not killing more people than it saves.
Add IRBs to the list of "reasons why the US should adopt loser pays for suits like every other developed country."
There’s some trolley logic at work here - we are okay with hundreds of theoretical people inadvertently dying but we can’t handle even a few dying from direct action. The whole situation reminds me of the medical system’s risk-compliance-legal axis, who all trained at the school of “no” and subscribe to the maxim that thing-doing is what gets us in trouble, so the best thing to do is nothing.
This is a good way to put it. Probably because there’s plausible deniability in the indirect action, with no direct, clearly attributable cause, whereas it’s easy to play the blame game in the direct case.
Clinical researcher here. I wanted to comment on this suggestion:
- Let each institution run their IRB with limited federal interference. Big institutions doing dangerous studies can enforce more regulations; small institutions doing simpler ones can be more permissive. The government only has to step in when some institution seems to be failing really badly.
This is kind of already how it goes. Smaller clinical sites tend to use what we call "central IRBs", which are essentially IRBs for hire. They can pick and choose which IRB best suits their needs. These include IRBs like Advarra and WIRB. Meanwhile, most clinicians at larger academic institutions have to use what we call a "local IRB", which is the institution-specific board that everything has to go through no matter what. In some cases, they can outsource the use of a 'central' IRB, but they still have to justify that decision to their institutional IRB, which still includes a lengthy review process (and the potential the IRB says "no").
What's the difference between a central and a local IRB? At least 2x the startup time, but often longer (from 3 months to 6+ months). Partly, this is because a smaller research site can decide to switch from WIRB to Advarra if their review times are too long, so central IRBs have an incentive to not be needlessly obstructive. While a central IRB might meet weekly or sometimes even more than once a week, with local IRBs you're lucky if they meet more than once a month. Did you miss your submission deadline? Better luck next month. You were supposed to get it in 2 weeks before the board meeting.
But this isn't the end of the difference between smaller clinics and those associated with large institutions. At many academic centers, before you can submit to the IRB you have to get through the committee phase. Sometimes you're lucky and you only have one committee, or maybe you can submit to them all simultaneously. More often, you have to run the gauntlet of sequential committee reviews, with each one taking 2-5 weeks plus comments and responses. There's a committee to review the scientific benefit of the study (which the IRB will also review), one to review the safety (again, also the IRB's job), and one to review the statistics (the IRB will opine here as well).
In my experience, central IRBs tend to not just have a much faster turn-around time, they also tend to ask fewer questions. Often, those questions are already answered in the protocol, demonstrating that the IRB didn't understand what they were supposed to be reviewing. I don't remember ever going back to change the protocol because of an IRB suggestion.
Maybe you could argue that local IRBs are still better for other reasons? I'm not convinced this is the case. We brought in a site through a local IRB on a liver study. It took an extra six months past when most other sites had started (including other local IRB sites - obviously a much more stringent IRB!). Did that translate to better patient safety?
Nope, the opposite happened. One of the provisions of the protocol was that patients would get periodic LFT labs done (liver function tests) to make sure there was no drug-induced liver injury. In cases of elevated LFTs, patients were supposed to come back into the site for a confirmation within 48 hours of receiving the lab results. We were very strict about this, given the nature of the experimental treatment. The treatment period went on for 2 years, so there's a concern that a long-term treatment might result in long-term damage if you're not careful.
This site, with its local IRB, enrolled a few patients onto our study. At one point, I visited the site to check on them and discovered the PI hadn't been reviewing the lab results in a timely manner. Sometimes he'd wait a month or more after a patient's results came in to assess the labs. Obviously they couldn't follow the protocol and get confirmatory LFT draws in time. Someone with a liver injury could continue accumulating damage to this vital organ without any intervention, simply because the PI wasn't paying attention to the study. I was concerned, but these studies can sometimes be complicated so I communicated the concern - and the reason it was important - to the PI. The PI agreed he'd messed up and committed to do better.
When I came back, six months later, I discovered things had gotten worse, not better. There were multiple instances of patients with elevated LFTs, including one instance of a critical lab value. NONE of the labs had been reviewed by anyone at the site since I visited last. They hadn't even pulled the reports from the lab. There was nobody at the wheel, but patients kept getting the drug so the site could keep getting paid.
Since it's not our job to report this kind of thing to the IRB, we told them to do it. We do review what they report, though, so we made sure they told the whole story to the IRB. These were major, safety-related protocol violations. They did the reporting. The PI blamed the whole fiasco on one of his low-paid research coordinators - one who hadn't actually been working on the study at the time, but the IRB didn't ask for details, so the PI could pretty much claim whatever and get away with it. The PI then said he'd let that guy go, so problem solved. The chutzpah of that excuse was that it's not the coordinator's job to review lab reports, it's the PI's job. This would be like claiming the reason you removed the wrong kidney is because you were relying on one of the nurses to do the actual resection and she did it wrong. The obvious question should have been: WTF was the nurse doing operating on the patient!?! Isn't that your job? Why weren't you doing your job?
What was the IRB's response to this gross negligence that put patient safety in danger? They ACKNOWLEDGED RECEIPT of the protocol violation and that was the end of it. They didn't censure the PI, or ask further questions, or anything. If 'strict IRBs' were truly organized in the interest of patient safety, that PI would not be conducting any more research. We certainly put him on our list of investigators to NEVER use again. But the IRB ignored the whole thing.
I'm not convinced that this is a 'tradeoff' between spending a bunch of money to stall research versus saving patients' lives through more stringent review. I think that the vetocracy isn't about safety, so much as the illusion of safety.
To what extent is this a purely U.S. phenomenon? While I'm sure researchers everywhere gripe about these things, I don't typically see these utter horror stories elsewhere.
And shouldn't researchers just move their research operations if the U.S. climate (only) is crippling?
Or it could indicate that self-regulation isn't all that onerous.
"Patients with a short consent form that listed only the major risks got twice the score on a comprehension test compared to those with the longer form; they were also more likely to miss cases where their medical histories made the study procedure dangerous (eg a person with a penicillin allergy in a study giving penicillin)"
Typo? Which group was (eg) more likely to give penicillin to people with a penicillin allergy?
I refer to it as "Cover your ass, not your buddies".
I ran into it just last week; we're prototyping a new machine at our farm that uses high calcium hydrated lime to break down organic matter, corrosive stuff and it's blowing back in our faces, so I wanted to know what sort of protective measures we should be using.
So I called poison control, they had no advice, but told me to call OH&S, so I did. OH&S had no immediate advice but offered me a consultation appointment. Sure.
Appointment swings around and they start asking about our overall health and safety policy. I tell them there isn't one, we don't have time for that.
They tell me that we really need one in case someone gets hurt and they try to sue.
I tell them that we don't have Worker's Compensation for our guys, so if something happens, we want them to sue us, and we want to lose, so that the injured employee can get a payout from our liability insurance.
They proceed to tell me that it's not my problem, and that we should have a CYA safety policy that no one ever reads so that if something happens, we don't lose the lawsuit.
I reiterate that we need to lose that lawsuit or a dude who loses a leg would be left with nothing. They again, say, well, that's not really your problem...
I point out their moral bankruptcy, and try to refocus the conversation on the lime dust.
They tell me they have no idea how to handle it safely, they just know how to protect the company from legal liability.
This doesn't make a lot of sense to me. Why would you force a worker to sue you if you actually want to pay them? Lawsuits are only necessary when the two parties disagree - the worker wants to get paid for their injury, and the company doesn't want to pay them.
If you both agree they deserve to get paid, you could just say (or write into their contract) "If you get hurt on the job, send us the hospital bill and we'll pay it." Then they wouldn't need to hire an expensive lawyer to get compensated for their injury.
We're in Canada, so it's not the hospital bills that are the issue; it's how he's going to pay his bills for the rest of his life after he loses his right hand in an auger, or breaks his back falling from a horse.
We don't have that kind of cash reserves; but we have liability insurance. Liability insurance typically requires a lawsuit to at least be filed before they will pay out.
The part i find puzzling about this story is why your insurance company doesn't require you to have an OHS policy before they sell you the liability insurance. I understand why you want it to be easy for you to lose the lawsuit, but I don't understand why the insurance company is happy to play along.
Farms get all sorts of weird exemptions to that sort of stuff, and specialized insurance companies to cater to them. The industry has so much less regulatory pressure compared to most others that it's a whole other world.
You must be familiar with working with strong bases. That's all you've got here, so do the same thing. Watch out for the eyes in particular, I would say.
I'm not familiar with working with strong bases to be honest, hence I was looking for help and advice.
"Watch out for the eyes in particular" is the sort of advice that I was looking for, thus far we've been noticing the burns mainly on our hands.
Oh sorry. Most farmers to my recollection work with some nasty bases, like anhydrous ammonia. What I'm getting at is that while acids burn and probably hurt more, bases eat away tissue in a more dangerous and harder-to-heal way. (It's the same as the saponification reaction that turns fat into soap.) Also I think corneas are unusually vulnerable because they aren't well innervated or vascularized, and they're moist -- the calcium hydroxide does its damage when it comes into contact with water, because that releases the (very reactive) free hydroxide anions -- so you really want to protect the eyes.
But this is not my area of expertise at all, so please consult with someone for whom it is!
Is this just me, or does the trajectory of IRBs mirror the rise of woke? And perhaps also the timing?
Interesting parallel. Both are situations where we are so, so worried about one type of error ever happening that we’ve overcorrected toward the other side of the coin, almost to the point of ridiculousness, because it’s become “socially unacceptable” to allow any of the first type of error at all. And maybe both are driven by a group that is an intellectual majority but an overall minority?
I wonder if the similarities have to do with the internet era, as mentioned in the post. It allows people who are angry about one thing to band together with others online and make noise, where pre-internet it would be very hard to do so due to geographical dispersion.
I would bet money there is significant overlap in the Venn Diagram circles between Twitter flamers, Woke, Medical Bureaucrats and the general Radical Precautionary Principle types.
On the “lawyer-adminstrator-journalist-academic-regulator axis,” don’t forget that lobbyists are mostly lawyers too. That means that they think like lawyers. So when something bad happens, their reaction is more law. When that law goes too far, their reaction is ... more law. That doesn’t make sense to non-lawyers, but it does to lawyers. Obviously, you just need to have the law make more finely grained distinctions, in order to do the good of the original law without the bad. So let’s add some exceptions and defenses and limits. So the law now goes from one question or decision to many questions or decisions. And that means you need specialists to figure it out. Hence the modern compliance department, which is an awful lot like the commissars that Soviet governments embedded in every organization -- there to make sure that you do the government’s bidding. In detail.
Oh, yeah. This is also a(nother) really good reason for prohibiting anyone who has held the right to practice law within the last ten years from being elected to any legislative body.
Right as the IRBs are radically reformed to be less paranoid and harm/liability-obsessive, we'll radically reform police departments to be more paranoid and harm/liability-obsessive.
I'm a scientist who does medical research at several top-tier institutions. I only do research, and every month or so one of my projects is submitted to an IRB somewhere. I do clinical trials and observational studies, as well as a lot of health system trials (e.g., where we are randomizing doctors or hospitals, not patients). I have a few observations, some of which aren't consistent with what Scott reports here.
1. I've never had an IRB nix a study or require non-trivial modifications to a study. This may be because my colleagues and I are always thinking about consent when we design a study, or it may be because top tier institutions have more effective IRBs. These institutions receive vast amounts of funding for doing research, which may incentivize a more efficient and flexible IRB.
2. I have done some small studies on the order of Scott's questionnaire investigation. For these, and even some larger studies, we start by asking the IRB for a waiver of consent - we make the case that there are no risks, etc., and so no consent is needed. We have always received the waiver. Searching PubMed turns up many such trials - here's a patient-randomized trial of antibiotics where the IRB waived the requirement for patient consent: https://pubmed.ncbi.nlm.nih.gov/36898748/ I am wondering if the author discusses such studies where IRBs waive patient consent.
3. There are people working on the problem of how terrible patient consent forms can be. There are guidelines, standards, even measures. And of course research into what sort of patient consent form is maximally useful to patients (which is determined by asking patients). I helped develop a measure of informed consent for elective surgery (not the same thing as a trial, but same problem with consent forms) that is being considered for use in determining payment to providers.
4. Every year or so I have to take a test to be/stay certified for doing human subjects research. Interestingly, all the materials and questions indicate that the idea of patient consent emerged from the Nuremberg Trials and what was discovered there about the malfeasance of Nazi scientists. I'm surprised to hear the (more plausible) sequence of events Scott reports from the book.
5. Technology, especially internet + smartphones, is beginning to change the underlying paradigm of how some research is done. There are organizations which enroll what are essentially 'subscribers' who are connected via app and who can elect to participate in what is called 'distributed' research. Maybe you have diabetes, so you sign up; you get all the latest tips on managing diabetes, and if someone wants to do a study of a new diabetes drug you get an alert with an option to participate. There is still informed consent, but it is standardized and simplified, and all your data are ready and waiting to be uploaded when you agree. Obviously, there are some concerns here about patient data, but there are many people who *want* to be in trials, and this supports those people. These kinds of registries are in a sense standardizing the entire process, which will make it easier/harder for IRBs.
While this book sounds very interesting, and like one I will read, it also maybe obscures the vast number of studies that are greenlighted every day without any real IRB objections or concerns.
"top tier institutions have more effective IRBs."
I also work in research.
I think this is a big one.
If you work somewhere where not much research is done then you may get inexperienced IRB members who have little knowledge of what's normal who have lots of time to play the game of inventing more and more implausible harms.
From Scott's story, it sounded like much of his experience was tied to a bureaucratic mechanism that had rusted into barely being able to function because not much research was happening at his hospital.
When you're at an institution that does almost nothing *except* research it tends to be much less painful. Though of course that's still bad because it creates a big barrier to entry for anyone working outside of a few big institutions.
I guess I equate "being governed" with having a government. I certainly don't think everything can be achieved by government. I don't like the *idea* of being governed, though in practice I mostly have little problem with living with the law. There are not many illegal things I want to do, and those I really wanted to do, such as smoking weed back when it was illegal, I have found easy to get away with. I think our government did a lousy job with covid, but I personally was not greatly inconvenienced -- were you? I read up on the science, and navigated the info about risks and personal safety successfully -- still have not had covid, despite going in to work through the entire pandemic. So overall, I have had an easy time with being governed. But whenever I read something like Scott's post here, or really anything about how we can do a better job of organizing life so that there is more fairness and less misery I am filled with rage and hopelessness. Even Scott's article made me have fantasies of slapping the faces of IRB members. Consequently I am not well-read or well-informed about government, the constitution, politics, or any other related matters. Regarding this topic I am resigned to being part of the problem not part of the solution.
"How would you feel if your doctor suggested - not as part of a research study - that he pick the treatment you get by flipping a coin" if I knew that the doctor really genuinely didnt know which option were better then i would prefer for him to flip a coin rather than dither
> Also, I find it hard to imagine a dean would ever do this
Plausibly, the IRB only has incentives pointing towards caution, but the Dean has incentives pointing in both directions. Having a successful and famous study or invention come out of their institution brings fame and clout and investment, and sometimes direct monetary gain depending on how IP rights are handled in the researcher's contract with the institution.
If you want to join the army in Canada, you have to say you've never been sick a day in your life, and you've never had a single injury. You say you're allergic to grass, you broke your leg in high school, sometimes you feel really sad... any of these will disqualify you.
I don't know how it got to be that way, it doesn't make sense, the army isn't in a position to be especially picky... somehow I think it's related to whatever causes this IRB situation though.
Hi Scott, I am curious if your questions on bipolar study included anything that might be considered “questions on self-harm.” These sorts of questions might raise the risk assessment from low to moderate and require that you include precursor warnings on the risk of your questionnaire. I’m genuinely trying to make a best case argument for the hindrances you faced, so anything that you might see as potential “red flags” to your reviewers would be helpful. Thanks!
Note: Although I am but an entomologist and have almost no regulatory bodies for my actions, my fiancé often designs the interfaces that researchers use for submitting IRB forms at a university. We’re trying to speculate what happened in your case, so anything you think might be important would be tremendously helpful. Thank you!!!
He wasn't trying to get approval for the questionnaire, though. The study just involved looking at the relationship between the responses to the questionnaire they were already using and the patients' ultimate diagnoses.
That's part of the craziness of the whole IRB system. The treatments themselves (in this case, a questionnaire) can all be used in isolation. The only illegal part is examining whether they work.
Right, this is the part that seemed kinda Kafka-esque. Scott isn’t proposing any new intervention at all, nothing about the patient experience will actually change. He was literally just going to collect data on an existing process.
Gotcha, thanks @Mallard and @Gbdub. I missed that detail. Ya, that's a completely different and much more complex problem. Thanks for the clarification!
That said I think it’s really cool that there is someone here that can potentially provide perspective from “the other side”! Hopefully Scott responds.
No it did not. I would also add that this question (whether asking people questions about suicide increases the risk of suicide) is one that psychiatrists care a lot about and have studied in depth, and the answer appears to be no.
My big annoyance regarding this area, as someone who was close friends with a medical ethics professor at university (and still is 20 years later), is just the incredibly low quality of reasoning among the “leading lights”. You have people like Leon Kass, who was on the President’s Bioethics Advisory Council or whatever, and who by their writings didn’t appear to be able to think themselves out of a wet paper bag.
Now I doubt these grey eminences were actually that stupid, but they clearly had political and religious commitments that were preventing them from thinking remotely clearly about the topics they were put in charge of. So disappointing. I remember being told this was the top contemporary thinking in this area and just finding the arguments hot garbage.
Thank you, Scott, for this careful and thought-provoking essay.
Since so many people wonder, the study by Lynn Epstein and Louis Lasagna showed that people who read the short consent form were better at both comprehending the experiment and about realizing that the study drug might be dangerous to them.
Much of this fascinating conversation on ACX is on the theoretical side, and there’s a reason for that. IRBs are ever on the lookout for proposed research that would be unethical—that is why they exist. But there is no national database of proposed experiments to show how many were turned down because they would be abusive. In fact, I know of no individual IRB that even attempts to keep track of this. There are IRBs that are proud they turned down this or that specific protocol, but those decisions are made in private, so neither other IRBs nor the public can ever see if they were right. Some IRBs pride themselves on improving the science of the protocols they review, but I know of no IRB that has ever permitted outside review to see if its suggestions actually helped. Ditto for a dozen other aspects of IRB review that could be measured, but are not. It’s a largely data-free zone.
I got an interesting email yesterday from a friend who read my book. She is part of a major enterprise that helps develop new gene therapies. From her point of view, IRBs aren’t really a problem at all. Her enterprise has standard ways of doing business that the IRBs they work with accept. She sees this work with and around big pharma as providing the relatively predictable breakthroughs that will lead to major life-enhancing treatments down the road. This is a world of big money and Big Science, and it’s all about the billions. A new drug costs $2.6 billion to develop; the FDA employs 17,000 people and has a budget of $3.3 billion; the companies involved measure their value and profits in the billions.
The scientists I am speaking for in "From Oversight to Overkill" are lucky when they can cobble together a budget in the millions, and much of the work they do, like Scott’s frustrating project, is entirely unfunded. They are dealing with OHRP, an agency with a budget of $9 million that employs 30 people. Unlike big pharma with its standardized routines, they are trying new approaches that raise new regulatory questions. And because OHRP operates on such a smaller scale, its actions are rarely newsworthy even when they make no sense at all. This includes decisions that suppress the little projects with no funding that people just starting out attempt.
Of course, the smaller budgets of the scientists in my book don’t mean that their findings will be trivial. It has always been true that when myriad scientists work to better understand human health and disease, each in their own way, that the vast majority will make, at most, tiny steps, and that a very few will be on the track of something transformative. A system that makes their work more difficult means that we, the public who struggle with disease and death in our daily lives, are the ones who suffer.
Hi, just downloaded the Kindle and can't wait to get stuck in. Purely as a matter of interest, is there no patients' rights body to do battle for needed treatment?
If the public had any idea how many people die not long before an effective treatment becomes available, a treatment that was delayed by bureaucratic hurdles that have nothing to do with making research safer, there would be an uproar! But the system defends itself, very effectively, by claiming that "ethics requires this" and "we can't have another Tuskegee." As a result, the scientists, who know all about the problem, are intimidated. As a result of that, the public remains in the dark.
Have you ever considered organizing an uproar? Surely if enough popular and respected scientists signed a strongly worded open letter, it could get some traction in the right direction?
Popular and respected scientists feel this is a problem that can't be fixed--the fear of looking as if they are against ethics, and don't care about Tuskegee, is too strong, even though these are false arguments. Since the scientists are afraid to speak out, the book is my attempt to organize an uproar in the group that is being hurt--people who are vulnerable to cancer and heart attacks, which is to say the public.
Thanks for responding! I feel like there is enough "anti government" on the right and enough "pro scientist" on the left that they would win easily as long as public outreach hit the basic talking points. Maybe I'm overestimating how much popular scientists are able to coordinate, and how respected they are in the general population (vs just my circles).
Do I understand correctly that the ISIS-2 horror story was well before 1998, and still within the supposed "golden age"?
Yes. Consent forms seem to be their own problem.
Between the horrors of NIMBYism, the virtuous corpse mountain left behind by IRBs, and the laughable insanity of NEPA and CEQA, perhaps the purest expression of the underlying concept is the Precautionary Principle. So far as I can tell, someone made the following chart in all earnestness. It is a fully-generic excuse for inaction.
https://www.sourcewatch.org/index.php?title=File:Precautionary_principle.png
When you think your options range between a worst-case of "Life continues" for inaction and a worst-case of "Extreme Catastrophe" for action, well, here we are. Too bad life literally didn't continue for the people getting subpar medical treatment.
Wow, that picture is horrible. Even the outcome "benefits enjoyed, no harm done" is painted in a scary orange, as opposed to the green reserved for taking no action.
The comical inefficiency of IRBs doesn't seem to be a controversial point. Why didn't you ignore it and simply conduct and publish your survey research anonymously? Maybe you judged that your study wasn't important enough to overcome your risk aversion. Why didn't the authors of ISIS-2 conduct and publish their study anonymously, if the interventions as such were not against regulation?
I have my own theory of everything: the median age is 38. Perhaps it's unfair to call Gary Ellis a coward who's responsible for thousands of deaths and unquantifiable unnecessary suffering. Perhaps he's just a regular old guy in a society of increasingly older guys who lack the knees or back to stand up for anything.
I can't wait for the rationalistic explanations in ten years of why things continue to go increasingly wrong for a country in which the average resident is 42 years old and obese. Maybe you think we only need to find the right balance of incentives in a carefully engineered system; if so, you're in good company. I believe it was Kant who famously said - about government, but surely we could apply his wisdom to institutions of science and medicine -
"The problem of organizing a state, however hard it may seem, can be solved even for a race of exhausted, sclerotic, amorphous blobs"
"Have you read this study? Some dude claims to have done some research off the books so it's untraceable and unverifiable and the guy is anonymous but his findings are really interesting..."
Anonymous publishing could be accomplished in any number of ways which are not much less reliable than the current regime, especially for such studies as Scott's - or it could be done with open-secret pseudonymity, or even more blatantly. In any case this is a collective action problem, a prisoner's dilemma where defection is raising a stink about 'unauthorized research', or joining a witch hunt to smoke out the devilish survey-givers, and it's no more likely to be solved than the IRB is likely to undergo spontaneous reformation...
but imagine if there could be institutional mutinies in the medical sector inspired by good and useful causes, like there are for wokeness in every organization in the Anglosphere! Picture that Netflix exec getting defenestrated for quoting the n-word in a benign context, except this time it's doctors refusing to go along with persecuting Scott for recording data his psych ward collects anyway.
Maybe that is the answer: maybe there is actually plenty of rebellion against injustice in these institutions, but the ideology of the young and vital revolutionaries is just not liberal humanism anymore - and isn't that so old and outmoded, compared to our shiny new hyper-protestant self-flagellation. Why would we care about 'people dying preventable deaths' - how passé! Join the new century already!
In my specific case, because I needed to do the study officially in order to pass residency, for which it was a requirement.
In the general case, people usually need large consortia that get funding, which is hard to do completely in secret. Also, it's hard to publish things anonymously and get people to listen to them, especially if most people are angry at you and might (eg) refuse to cite your study on principle, or deny that it had ever occurred.
Appalling.
Early on, you said, “The public went along, placated by the breakneck pace of medical advances and a sense that we were all in it together.”
That last part — the sense that we were all in it together — speaks volumes. To my mind its loss explains most of what has gone wrong with the world today. But how did we lose that?
We are criminally ignoring a "more connected populace" and the IQ needed to process that data flow. It's no wonder people resort to regression, stasis, or revolution copes. It's not like IQ or coping mechanisms were better in the past. We need to wrestle with restrictions to transparency, and prioritize defaults, or it will be left behind for those that started the game with fewer rules.
Reminds me of https://randsinrepose.com/archives/the-worry-police/
And https://scottaaronson.blog/?p=5675
>Greg Koski said that “a complete redesign of the approach, a disruptive transformation, is necessary and long overdue”, which becomes more impressive if you know that Dr. Koski is the former head of the OHRP, i.e. the leading IRB administrator in the country.
I've heard of many similar cases of the former head of <org> calling for major reforms, but if they didn't have the will or the political capital to do it while they were there, it seems unlikely the next guy will either (even if they agree).
To the extent that the problem is that hospitals and their IRBs are incentivized to avoid harm far more than they are incentivized to do good, might this be a good opportunity for something like impact certificates?
Like, if there were a pool of 500 million dollars to be given each year to whichever set of hospitals did the most good in their studies (more money given to those who did more beneficial studies), would that put some pressure in the other direction?
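For what it's worth, the arithmetic of such a scheme is trivial; the hard part is scoring "good". Here is a minimal sketch of the payout rule only, where the pool size comes from the comment above but the hospital names and benefit scores are purely hypothetical and the assessment step is hand-waved entirely:

```python
# Hypothetical sketch: split a fixed annual pool among hospitals in proportion
# to some independently assessed "benefit score" for their studies.
# All names and numbers below are invented for illustration.

def allocate_pool(pool: float, benefit_scores: dict[str, float]) -> dict[str, float]:
    """Pay each hospital a share of the pool proportional to its assessed benefit."""
    total = sum(benefit_scores.values())
    if total == 0:
        return {hospital: 0.0 for hospital in benefit_scores}
    return {hospital: pool * score / total for hospital, score in benefit_scores.items()}

payouts = allocate_pool(
    pool=500_000_000,
    benefit_scores={"Hospital A": 12.0, "Hospital B": 3.0, "Hospital C": 0.0},
)
print(payouts)  # {'Hospital A': 400000000.0, 'Hospital B': 100000000.0, 'Hospital C': 0.0}
```

The design question that carries all the weight is who assigns the benefit scores retroactively and how resistant that scoring is to gaming; the payout formula itself could stay this simple.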
My brother, who has some experience in this area, had this to say, “For the most part I feel you just have to know how to build relationships with your IRB people and what words will trigger them. I never submit a protocol without first talking with my IRB person, and thus usually don’t hit these types of bottlenecks. The assumption should be that unless you’re super clear in your explanation, they’ll be risk averse and put forward roadblocks. Because that’s their job.”
So from his telling, there’s a certain amount of glad-handing that is needed for getting research past IRBs. This is probably not great for scientific research (it doesn’t seem fair that because you aren’t up for getting coffee with your IRB person you can’t do your research), but it does mean that scientists aren’t quite as helpless as the book presents them.
> Hans Jonas: “progress is an optional goal.”
I think this is the most morally deranged thing I have read a philosopher state in a long time.
In my mind, technological progress is often the prerequisite for social progress. From a modern perspective, most Iron Age civilisations look rather terrible: slavery, war, starvation. Good luck finding a society at that tech level that would agree to avoid "the violation of the rights of even the tiniest minority, because these undermine the moral basis on which society's existence rests".
If you don't have the tech to prevent frequent deaths during childbirth, in addition to death being bad in itself, you will end up with a population in which a significant number of males can't find partners. The traditional solution for getting rid of excess males is warfare.
If you don't have contraception tech, your population will be reduced by disease and starvation instead.
If your farming tech sucks, most of your population will spend their lives doing back-breaking labor in the fields and have their surplus extracted under duress to support a tiny sliver of elite.
If running a household is a full-time job at your tech level, good luck achieving gender equality.
That is not to say that all technological progress is obviously good. Sometimes, it might not be worth the cost in alignment shift (like the freezing water experiments in Dachau), and sometimes we might judge that a particular tech will have overwhelmingly negative consequences (like figuring out the best way random citizens can produce neurotoxins in their kitchens).
And of course, you can always plead that while past progress was absolutely necessary (lest you be called an apologist for slavery, war and starvation), the present tech level (which allows for ideas such as human rights) is absolutely sufficient and any future tech is strictly optional. Of course, statistically speaking, it would be extremely unlikely that you just happen to live at exactly that point.
"Increasingly many areas of modern American life are like this."
Yep, America is frustratingly anti-tree-climbing. I am a happier person, in better physical shape, when I can climb trees. That was fine at home, where I had a backyard with trees, but here American bureaucratic idiocy gets in the way. You see, if I climb a tree on someone else's property, fall out, and get hurt, I could sue them. As a result, city parks make sure to cut off any branches that would make a tree climbable, lest someone hurt themselves on it.
Harsh as it sounds, we need to hold people responsible for their own mistakes. Only then can we be free to take what risks we judge worthwhile.
Administrators, bureaucrats, and managers are “minimaxers”. They seek to minimize the maximum bad outcome.
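To make the contrast concrete, here is a toy decision matrix in the spirit of the precautionary-principle chart linked above; the utilities and probabilities are invented purely for illustration, not taken from any real IRB calculus:

```python
# Toy comparison of an expected-value reasoner vs. a minimaxer.
# All payoffs and probabilities below are made up for illustration.
outcomes = {
    "approve": [(0.95, +10.0),    # study proceeds, patients benefit
                (0.05, -100.0)],  # something goes wrong, the reviewer gets blamed
    "deny":    [(1.00, -1.0)],    # "life continues": a small, invisible loss
}

def expected_value(branches):
    return sum(p * u for p, u in branches)

def worst_case(branches):
    return min(u for _, u in branches)

best_by_expectation = max(outcomes, key=lambda a: expected_value(outcomes[a]))
best_for_minimaxer = max(outcomes, key=lambda a: worst_case(outcomes[a]))
print(best_by_expectation, best_for_minimaxer)  # approve deny
```

As long as approval carries any catastrophic tail at all, however improbable, the minimaxer picks "deny" every time; the expected-value column never enters into it.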
... and the ethics review panel continued to demand pen not pencil.
Meanwhile, at the Wuhan Institute of Virology ...