My comment: this is a good essay, but I'm confused that the only take on Alzheimer's I ever hear is "everyone important subscribes to the amyloid hypothesis, but this is an example of bad science and cracks are starting to show in the facade". If everyone important subscribes to it, why won't anyone defend it? Are they too embarrassed, and just waiting to collect a few more funding checks before quietly retiring? Or is this the thing where everyone wants to read sweeping theories about how The Arrogant Experts Are Wrong About Everything and nobody wants to hear those experts patiently explain the boring facts?
If any ACX readers are arrogant experts willing to stand up for amyloid, please send me an email and pitch me on a post.
My guess is that the issue here is the one Kuhn identifies, that a paradigm never goes away until you have an alternative. Say what you will about the amyloid plaque hypothesis, at least it’s a hypothesis. It has lots of problems, but it also has some scraps of evidence for it. If no one else has *any* hypothesis, then this one will naturally dominate the literature.
Sort of like how late 19th century astronomy had all sorts of hypotheses about why the planet Vulcan, which was supposed to explain the anomalous perihelion shift of Mercury, was so hard to observe, but no one saying “gravity is wrong”.
There is at least one serious alternative hypothesis, tau proteins. I doubt that this would fare much better under the level of scrutiny applied to the amyloid hypothesis here, but I am not sure that it is so much inferior either. (But I am happy to be corrected by experts.)
Actually, there are a couple of alternatives. If you want a list, I can recommend the ending of this interview here, starting from the paragraph
"Karl Herrup: Well, you’re getting very close to that question. Because the question that I hate the most is, “Okay, wise guy, if it’s not amyloid, what is it?”"
No one doubts that tau proteins are involved in the progression of Alzheimer's - almost everyone with characteristic amyloid levels also has elevated tau levels, the best biomarker for Alzheimer's in plasma is pTau-217, etc. But in my amateurish understanding, elevated and accumulating tau is not specific - it's a self-reinforcing symptom of dying neurons, and the big question is, what makes them die in the first place? In CTE, it's punches to the head. In Parkinson's, it's accumulation of alpha-synuclein. In Alzheimer's, it's [fill in the blank with something plausible].
So, what are the odds that - given all the abnormalities with amyloids that are consistently observed in AD patients, but not in patients with other tauopathies - amyloids are not causally involved in the disease, and instead the trigger is something completely different?
One nitpick: "all the abnormalities with amyloids that are consistently observed in AD patients"
I also only have an amateurish understanding, but I thought that this was exactly the main criticism of the amyloid hypothesis: that it is not consistent. Plaques are missing completely in some AD patients (though Gemini says that it's only in 12%), and there are very many patients with severe plaques who don't develop Alzheimer's. I am not saying that tau abnormalities are better (I don't know), but I don't think amyloid abnormalities are consistently present in Alzheimer's.
It's not my area of interest except as a field of research whose leading lights have been accused of fraud (and, yes, many of them are very likely guilty of fraud). But last time I checked, there are something like nine alternative theories to the amyloid plaque hypothesis. I don't have an opinion on any of them, but the fact that there are so many alternative theories suggests that the APH paradigm is not as dominant as Scott believes it to be. (?)
My understanding is that the NIH/NIA now supports a "multiple pathologies" approach, of which amyloids play one part. And the Alzheimer's Association encourages combination therapy development for the major hypothetical contenders to the APH (e.g., anti-amyloid + anti-tau + anti-inflammatory).
The APH proponents no longer seem to dominate the discourse the way that they once did. So, they're beyond the denial and anger phase, and they're in the negotiating phase ("APs might still play a role, please keep funding us.")
Ok -- I'll take the bait on this. Not to critique the core point of this review (that we should take results from genetic animal systems with great caution). But to partially defend the relevance of amyloid as a drug target in AD.*
1. There's a word not found in this review: Leqembi. That's the *second* FDA-approved anti-amyloid antibody. It’s not a cure. Indeed, it’s modestly effective and has significant side effects. But it’s proof of principle. We give patients cancer drugs that aren’t curative, but extend life. That's similar here. Patients who took the drug got maybe 3-6 months of ‘cognition extension.’ Not great, but it’s a start. If I could paste the graph from the approval documents, I would -- but you can find it here:
2. The Aβ*56 fraud (which is bad!) has basically nothing to do with why the amyloid hypothesis was adopted. You can find a fairly expert discussion in the comments here (https://www.alzforum.org/news/community-news/sylvain-lesne-who-found-av56-accused-image-manipulation). The fraud also isn't what made big pharmas invest (and they are still investing) in amyloid-directed therapies. Lesne first published in 2006. At that point Lilly was already *in the clinic* with a gamma secretase inhibitor. People didn't back these programs because of Lesne and Aβ*56 but because...
3. The human genetic support for the role of amyloid in AD is profound! I won't reiterate it all here, but just google up a review paper and you can see why everyone was so excited. (In brief, Down syndrome patients and families with early-onset Alzheimer’s all had mutations in the amyloid pathway.) The book "How Not to Study a Disease" by severe amyloid skeptic Karl Herrup has a fair and illuminating discussion of the scientific history.
So in short -- there was groupthink around the centrality of amyloid in AD** BUT, it's also the case that after many failed drugs, there's some emerging evidence that anti-amyloid therapies may help in humans. And there is hope that improved products may further enhance efficacy. (e.g., Roche's brain-shuttle enabled anti-amyloid antibody https://www.roche.com/media/releases/med-cor-2025-04-03).
*A bit of a cheat. The strong version of the Amyloid hypothesis is that Amyloid is the cause of everything. Here I defend a weaker thesis -- that there is reason to think amyloid is relevant to AD pathology and that anti-amyloid therapy has the potential to provide meaningful benefit.
**It is not uncommon to find an overemphasis on using familial/genetic forms of a disease as a model for the sporadic form. I have heard ALS drug developers make the same point about SOD1, and the pointlessness of the SOD1 mouse.
It’s pieces like this, and comments like this, that are why I can’t quit ACX. I’ve wanted this for ages and everything is either written down for dummies or would involve too much investment in fluency acquisition for mere curiosity (vs work, where I do it every so often for a new business pitch—I’m a biopharma brand strategist). Thank you, seriously.
This gets to my problems with this review. We're hearing one side of an argument in a scientific field with which most of us are unfamiliar. We don't know what the state of the discussion is, so it's difficult to read this essay in context.
Are there serious flaws in an important 30-year-old paper? How do these compare to the flaws in other, similar papers on different subjects? How much of this is reading back later discoveries and developments?
My general rule is that I don't read scientific papers in unfamiliar fields. Journals are meant to be conversations between active participants in cutting-edge research. Much of Ph.D. training is learning the background to participate in these conversations. Popular science writing, if done honestly and well, can summarize the current state of these discussions. But these writers need to be careful to include all sides, and not just rely on a single paper.
> We're hearing one side of an argument in a scientific field with which most of us are unfamiliar. We don't know what the state of the discussion is, so it's difficult to read this essay in context.
And so many people have opinions that may or may not be based on actual research into the literature. Frequently, it's hard to know unless they're able to reference studies that support their arguments.
Unlike you, my general rule is that I read (or try to read) scientific papers in unfamiliar fields that interest me. For me, decoding the terms and the arguments is the best way to understand a new subject. Popular science reporting is a dank sewer of unsupported claims. Most science reporters don't understand the underlying science they're reporting on, so the opinions of their sources skew their reportage. I've never seen a science reporter ask their source a hard question, let alone take a critical look at the data in a paper or study. Popular science articles written by experts can be somewhat better if you are willing to accept that they may be giving you a sincerely biased opinion. But much of popular science writing doesn't pass the "gee whiz" sniff test. I'm thinking of the articles that New Scientist and Quanta Magazine like to publish.
That's a very academic take. I can't remember the last paper I have read in a field I was trained in or participate in. Usually I don't even care why the author did the work as I am trying to solve a different problem and a given paper just happens to be the closest thing anyone has published.
Figures and experimental. Everything else is just commentary.
Those 3-6 months of "cognition extension" are often quoted in the media, because it's easy to understand, but as far as I understand, they are more of an assumption and not (yet) supported by data. See here:
I don't find the commentary in the linked thread terribly compelling. For sure, it's an extrapolation -- from the absolute change vs. placebo to a comparison between the curves. What was shown as the primary endpoint is a ~27% lower decline in clinical dementia rating vs. baseline (1.21 vs. 1.66). Ok, you may ask, is 0.45 really meaningful? One way is to look at the rating scale. Another is to ask "over what time period would an AD patient lose 0.45 on the CDR?" And the answer in this trial population was 3-6 months.
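The arithmetic behind these numbers can be sketched in a few lines. The CDR figures (1.21 vs. 1.66) are the ones quoted in this thread; the conversion to a "months of decline avoided" equivalent assumes roughly linear decline over an 18-month trial, which is a simplification rather than how the trial itself reported it.

```python
# Sketch of the relative-decline arithmetic discussed above.
# Numbers are the ones cited in the comment; the months-equivalent
# conversion assumes approximately linear decline over 18 months.

treated_decline = 1.21   # mean CDR worsening on drug
placebo_decline = 1.66   # mean CDR worsening on placebo

absolute_diff = placebo_decline - treated_decline      # 0.45 points
relative_slowing = absolute_diff / placebo_decline     # ~27%

# If placebo patients lose 1.66 points over 18 months, losing an
# extra 0.45 points takes them roughly this long:
months_equivalent = absolute_diff / (placebo_decline / 18)  # ~4.9 months

print(f"absolute difference: {absolute_diff:.2f} points")
print(f"relative slowing:    {relative_slowing:.0%}")
print(f"months equivalent:   {months_equivalent:.1f}")
```

Under that linearity assumption the 0.45-point difference works out to just under five months, consistent with the "3-6 months" range quoted above.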
Some points:
1. Agree it's a modest/marginal impact
2. Also agree it would have been better to have a "time to definitive event" endpoint like progression-free survival or overall survival in cancer. My understanding is that the AD field lacks any such definitive event definitions (beyond death).
3. This is, of course, benefit at the time point observed in the study. It will be important to see to what degree benefit persists (or doesn't) over time.
Multiple drugs that target amyloid plaque buildup are actively being prescribed to treat Alzheimer's disease. The science is compelling and clear that amyloid plaque removal does something, at least.
Is it unfortunate that these drugs don't do more? Yes. But to say the amyloid hypothesis is "wrong" is to ignore reality. We've already commercialized the hypothesis.
Bottom line: if the average reader of this comment were diagnosed with Alzheimer's disease tomorrow, their physician would probably prescribe a drug that removes amyloid plaque.
Peruse this paper if you don't believe me. Donanemab, an anti amyloid-beta mAb, hardly slows down cognitive decline in patients. Sure, it's better than placebo, but barely, and certainly less than you'd expect if it were indeed the primary driver of the disease.
Consider this study looking at discontinuation of Infliximab, an anti TNF-alpha mAb. Given the theory that TNF-alpha plays a large role in Crohn's disease inflammation, does an mAb that blocks its activity do anything? Well, check out figure 2A and notice that while none of the people receiving continuing Infliximab treatment had a symptomatic relapse, HALF OF THE PEOPLE without the treatment did!
So the drugs reduce cognitive decline by 1/3, and amyloid explains at least 1/3 of cognitive decline. That seems not too bad honestly, certainly not as bad as the harshest critics make it out to be.
It's probably better than 1/3 since it takes the drugs almost a year to completely clear amyloid from the brain. It would be interesting to see that trial continued for another year. If you just look at cognitive decline after amyloid has been cleared then you can pretty precisely quantify the contribution from amyloid.
Infliximab is almost instant, so the analogy is limited.
I think it's important to be careful about what "subscribing to the amyloid hypothesis" really means. The evidence is very strong that amyloids play a significant role in the disease - genetic variants like APOE4 that influence amyloid accumulation are strong risk factors for AD; you see characteristic biomarker levels (in particular the Ab42/Ab40 ratio in cerebrospinal fluid) in AD patients that you don't get in other neurodegenerative diseases; you find the characteristic plaques in deceased patients, etc. That part is sound as far as I can tell, and pointing at a misguided study or two doesn't change that.
The question of what exactly the amyloids DO, and whether the buildup of the plaques is the whole story or just a part, is not so clear, and my impression is that researchers are open to the possibility that there's more going on. The new hot topic seems to be CAA: Cerebral Amyloid Angiopathy - it turns out that amyloids also accumulate in blood vessels, weakening the walls and leading to micro-bleeding. (Apparently, this got more attention when it was found that brain bleeding is a not-so-rare side effect of lecanemab.)
My impression is that the people doing bad science aren't reading ACX, or if they are, they're too self aware to brag about doing bad science.
Out of the publishing scientists I know, "it's kinda bullshit but maybe there's something in it" is the average level of confidence for their own papers.
Well, I'm not a basic scientist, so I don't know if I can get down in the weeds of mechanistic explanations. But one newer supportive piece of evidence for the amyloid hypothesis is the clinical trials for lecanemab and donanemab.
"Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and most of the time, confirmed their hypothesis.
Real lab work rarely follows such a clear path. Biological research is filled with what Medawar describes lovingly as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did."
Sort of OT, but judicial reasoning is similar. On paper, the trial judge is supposed to determine the facts and then apply the law to those facts.
As a practical matter, in the real world, first the judge makes up their mind, then seeks factual and legal determinations that support the preordained conclusion.
The difference is that the judge has a lot more power than the scientist, at least unless the scientist totally makes shit up.
Sometimes the judge makes up their mind, but then the opinion "won't write" (facts and/or law don't support the snap judgement) and the judge changes their mind. Happens more than you would think.
> As a practical matter, in the real world, first the judge makes up their mind, then seeks factual and legal determinations that support the preordained conclusion.
Yes, the essay titled *The Myth of the Rule of Law* by John Hasnas makes this exact same point.
Finster's Second Law readeth thusly: "There is no such thing as law. There is only context."
The longer form version goes something like this: "Laws are for little people. Policy is for people who matter, because policy determines when the law applies, how and to whom."
"...messing about false starts..." Great line. But when you're working in a field where you don't control the environment or experiment at all? My graduate work was as an analytical geochemist. When I was working on core samples from the Deep Sea Drilling Project - you didn't have any kind of control except in what you chose to, or what you had the capability to, analyze for. Talk about holding your breath until the end where you had to figure out what justifiable conclusions you could draw. So you can end up with and present two types of conclusions - the ones the data fully supports, and the ones that you wish you had enough data to fully support. At that point, all you can be is honest. If you're very unlucky, or didn't fully consider the kind of data you might obtain, you could end up having to backtrack mightily or start entirely over. A dissertation is expected to produce a positive conclusion of some sort. In a paper published by an established researcher, you can get away with less certainty, which does not mean you should be less forthcoming about how well those conclusions are supported.
In the fields I've worked in, the general approach is to test for about 100 different conclusions, about 3 of which you'd like to prove, and over 50 of which you're pretty certain are right. Then you have 3 "fantastic" papers, and 50 "solid" papers. Of course, you're probably not going to get the statistical power to disprove those 50...
The drawback to all this is that you can get 10 different researchers testing for those 3 conclusions, never realizing that other people are doing the /exact same research/ and getting null-results (did not pass statistical significance).
This is my field (amyloid, GluT4, insulin signalling, metabolism and cognition) and I very much enjoyed the piece; I'm happy to say that I have been vocally pointing out flaws in the strong amyloid (and especially, in plaques-as-causative) hypotheses for quite a few years now :).
To Scott's Q below: at least part of the issue is that many, many billions of pharma dollars were spent on developing treatments (primarily antibodies) aimed at reducing amyloid load. So there is and was a huge sunk cost. There was also, at least for a while, a gate-keeping role of editors at major Alzheimer's journals and funding bodies.
Medawar has a good discussion of induction in the old sense, and why it is problematic - there is no logic of scientific discovery (even if we relax criteria so that the evidence doesn’t have to *prove* the conclusion but just make it the *right* conclusion). But he and Platt then go on to endorse similarly too-systematic accounts of how science *should* work, whether it’s Popper’s idea that there is *no* confirmation and just attempted falsification, or the idea that some kind of “strong” test can account for all the alternatives.
A lot of people in the internet rationalist community build the same kind of flawed assumptions into their bayesianism, as though we could figure out what is the *correct* degree of belief to have in every hypothesis.
But there’s not. All Bayesianism gives us is an internal criterion of how to make our own decision making consistent; if we violate this, we run the risk of being necessarily self-defeating. But some people who violate these principles happen to get lucky, and there are many different ways to satisfy the Bayesian rules, some of which will be lucky and some of which will be unlucky in any given domain of investigation.
Many of the problems of the paper discussed in this post are the kind of understandable flaw that any given paper will have - someone with one set of priors will interpret things in light of those. But the bigger problem is in the field that followed it, that didn’t happen to include enough irrational contrariness to force these tests that, in retrospect, seem like clear ones to have worried about.
Nice to see someone on this site quoting Medawar, who was, of course, a Popperian. I've tried in the past to get Scott--and also his mentor, Eliezer Yudkowsky--to take a proper look at Popper's theory of knowledge, including his critique of induction (including the Bayesian one they subscribe to). Sadly, they insist on dismissing Popper, not--as far as I can tell--on the basis of anything he actually said, but on the basis of a strawman version that appears in the secondary literature. Sigh.
If it’s not too much trouble, do you have pointers to where people like Scott have dismissed Popperian views?
I’m not a philosopher of science but I do have a passing/growing interest in the topic and I’d be curious what people in the rationalist community think of popper. Naively, I would’ve thought they’d like falsificationism. usually the critiques I read of Popperian view come from a perspective rooted in Lakatos or Feyerabend.
Scott dismissed Popper in an email message years ago, and I didn't save it. I don't know whether he's written about Popper elsewhere. My discussion with Yudkowsky took place via X and is also lost. (To me, anyway. I'm not a very sophisticated user.) As I recall, however, he referred me to something he'd written at his Less Wrong website. Regarding Lakatos and Feyerabend, I don't agree with them, but at least they deal with what Popper actually wrote. Most of his critics, however, say something like, "No scientific theory can be definitively falsified, therefore Popper is wrong." That's a bad take for two reasons. First, Popper never claimed theories can be falsified in practice; he just suggested that one can distinguish between the empirical sciences and other kinds of enquiry by noting that the former works with statements that have empirical content, i.e., statements that have implications that can be tested experientially, and that can therefore be falsified in principle. The second and more important reason it's a bad take is that falsifiability isn't an important part of Popper's theory of knowledge. As I explain in the paper for which I provided a link, the fundamental elements of Popper's theory are fallibilism, critical rationalism, and the distinction between objective and subjective knowledge.
In "Highlights From The Comments On Tegmark's Mathematical Universe", Scott wrote...
> People need to stop using Popper as a crutch, and genuinely think about how knowledge works.
Then he went on to "falsify" falsifiability with examples that, had he actually read Popper, he would have known Popper had already answered.
I recall other instances in comment threads where Scott dismissed Popper, but I admit I can't find them at the moment (so maybe my memory is incorrect).
My dad, who was working on his doctorate in the history of science before he passed away, corresponded with Popper. He had some letters from Popper in his files. Somehow, they got lost. I wish I had them now. My dad raised me to be a Popperian, though!
Thanks for the link. I'm impressed that you found that old piece. As you'll see if you read it, I actually met Popper once. It's not, however, the piece for which I provided a link in a previous message. That was this one: https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1. Lots of overlap, but the one I just provided includes my take on Popper's theory of knowledge.
P.S. I'm new to this. There's a box that says, "Also share Notes." What does that mean, and should I do it?
In your article, footnote 12 says, "I will, however, refer the reader to Ω." What does Ω refer to? (The footnoted sentence is: "I will not attempt to defend Popper’s theory here.")
"Also share to Notes" will post your comment to Substack's Notes feature, which was their attempt at a Twitter-style timeline that doesn't seem to be doing super well. See Notes at https://substack.com/home
I don't know whether Popper ever wrote about or even knew about the Bayesian approach to epistemology espoused by Scott and the rest of the neorationalists. He died in 1994. However, he wrote extensively about probability and explained why his critique of induction didn't just mean statements can't be shown to be true; it also means they can't be shown to be probable. He says so at various places in The Logic of Scientific Discovery, and there's an extensive discussion in the Postscript. My copies are at work, and I'm at home right now. I'll try to remember to send you more specific citations next week.
P.S. I just asked a friend who's also a Popperian, and he said, "He writes about probability in A World of Propensities, in the appendices to LSD, in the first volume of the Postscript Realism & the Aim of Science, to a lesser extent in the other 2 volumes, he has an interesting discussion in the appendices to World of Parmenides, Miller has an essay in the Cambridge Companion to Popper on his contributions to probability theory. There's lots of other places where he writes about it but those are the most lengthy treatments."
I'd have thought probability was always a matter of degree in some sense, but I'm just a provincial lawyer and not very numerate. You'll have to read Popper yourself, or ask someone more knowledgeable.
Popper says that no matter what degrees of probability you're talking about, you can never show that anything has that degree of probability. This (in many important cases at least) is true. But, in a sense, it misses the point of contemporary Bayesian epistemology. The point is about reasoning from premises (your priors and statistical model assumptions) to a conclusion (a probability or posterior distribution). In most cases these premises have not been shown to be true, and hence the conclusion hasn't either. They're just your best guess. What Bayesian reasoning does show is that your conclusion follows from your premises.
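The premises-to-conclusion point can be made concrete with a toy update: Bayes' theorem only guarantees that the posterior follows from the prior and the likelihoods, so two people with different (unproven) priors get different, equally internally-consistent answers from the same evidence. All numbers below are invented for illustration.

```python
# Toy Bayesian update: the conclusion follows from the premises,
# but the premises (priors, likelihoods) are themselves just guesses.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Same evidence (a test that fires 90% of the time if H is true,
# 10% of the time if false), but different priors:
optimist = posterior(0.50, 0.9, 0.1)   # prior 50% -> posterior 90%
skeptic  = posterior(0.01, 0.9, 0.1)   # prior 1%  -> posterior ~8%

print(f"optimist: {optimist:.3f}")
print(f"skeptic:  {skeptic:.3f}")
```

Both updates are valid Bayesian reasoning; neither prior has been "shown" correct, which is exactly the sense in which the conclusion is only as good as the premises.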
A fair point that comes out of Popper is that a hypothesis cannot be called 'scientifically supported' on the grounds that there's some universally correct method for assigning probabilities - or even Bayes Factors - to it.
While this is clearly true (how would you assign probability to solipsism?), it's not useful.
Consider, there are multiple interpretations of quantum theory that all match the equations. On what basis could you decide that one is more probable than another? (And that's without considering the possibility that the experiments might have been wrong, misinterpreted, or falsified.)
But to be useful, one needs to set one's priors to match one's remembered personal history. Your memory might be false, but that's not a useful belief (except in narrow circumstances, like "where did I leave that envelope?", which can be resolved by additional experiment).
Maybe I should. Someone needs to keep Popper's ideas alive. However, I'm far from being the ideal person to do it. I'm old and tired, and--as I pointed out somewhere else in this thread--I'm just a provincial lawyer, not a philosopher or a logician.
Most of the other Astral Codex folks are amateurs, too, as are the LW peeps when it comes to the philosophy of science and philosophy of reasoning. ;-)
Generally speaking, knowledge is more accessible when it is "distilled". As I see it, if I tried to fully understand Popper...
...if I started by reading his early books, and commenting on things that don't make sense to me, or where I find alternative explanations superior, you could easily dismiss me by saying something like "of course, that's early Popper, the ideas are not fully developed yet, you actually need to read some of his later books".
...if I started by reading his later books, either they wouldn't make sense to me, or it would seem like they do, I would write an objection, and someone would respond by saying "nope, you got that completely wrong, you are obviously not familiar with his ideas expressed in his earlier books".
...so the proper way would be to collect everything that Popper ever wrote and start studying it chronologically -- the problem is, I have a job and a family, this would take me years, and I am not really willing to bet them on a hypothesis. Even if Popper is a very important guy, sorry, he is still only one of many important guys, and I can't dedicate a year of my life to studying each one of them.
...or I could read a review written by someone who is a fan of Popper, but then the obvious complaint would be that the review was wrong, and I should read a different review, or preferably the original texts.
The only way out I can see is a review written by someone who has already read all of Popper, is not Elliot Temple, and can write an ELI5 version at least of the most basic claims *and* the most obvious responses to them.
(Which is not a small task, I get it. Thomas Aquinas was declared a saint for accomplishing something similar for Christianity.)
This might be possible to simplify by not trying to defend all of Popper, but some of his isolated claims. That is, if some claims make sense in isolation from the rest.
By "ELI5" I don't mean literally a five-year-old, not necessarily even a smart twelve-year-old, but the explanation should be relatively self-contained, without references to many other sources (which themselves cannot be understood without following their own references, and ultimately by spending a lifetime studying the field, most of which is basically learning people's opinions on stuff). Ideally, the explanation should not start from the position "Popper was right about everything, and if you don't believe it, you are low-status and we should laugh at you", but by providing simple examples, showing different responses to them, and explaining why some are right and some are wrong... but I guess this part is obvious.
Maybe you are not the right person to do this, but until someone does, the situation will not improve.
(I wonder whether AI could help here. For starters, we could feed it all Popper's texts, and then ask whether a situation similar to something has appeared somewhere in the texts.)
I don't know who Elliot Temple is (should I?), but otherwise I agree. The kind of guide you describe would be helpful for any philosopher who produced a large body of work, but it would be especially helpful for Popper because he wrote about so many different topics, because his works appeared in English out of order, and because he, himself, never bothered to provide anything like an overview or guide. In the paper for which I provided a link I offer an overview--not of Popper's entire body of work--but of his theory of knowledge alone. That's still a tall order, and I don't make any claims for its accuracy. Still, if you're interested, you can read it at pp. 15-21 in https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1.
Popper says that most people see science as using some kind of induction (observing individual data points, and generalizing). Instead, Popper sees the scientific method as consisting of two separate steps: (1) making a hypothesis, and (2) examining the hypothesis; and he claims that induction has no place in either of them.
Generating a hypothesis is an inherently irrational / creative / intuitive process. Popper does not spend much time talking about this step.
Induction cannot prove a hypothesis, because no matter how much evidence we have in favor of the hypothesis, it may turn out to be false; there is no point where "many" implies "all". Also, arguments in favor of induction are in general circular (e.g. "because we have used induction successfully in the past"). Also, we don't even really have solid evidence, only some unreliable sensory inputs. Shortly: we can never be sure about anything. That doesn't mean that we should stop looking for the truth; it only means we can never be sure that we have arrived at it.
Popper's critical rationalism takes advantage of an asymmetry: contradiction is a more powerful logical relation than support. If a piece of evidence contradicts the hypothesis, one of them must be wrong. (It could also be the evidence; e.g. I might be hallucinating or wrong about something.) Now we can create and test new hypotheses. And this is how scientific knowledge grows.
...does this pass the "ideological Turing test", i.e. does it sound like a fair summary written by someone who is impressed by Popper's philosophy?
David Stove was very much down on Popper, even calling him one of the irrationalists (since, per Stove, Popper doubts the possibility of knowledge). One can only falsify or disprove and never confirm, so knowledge doesn't actually advance.
One doesn't hear much about Stove now, but he was a pretty fine writer.
The LW-style rationalism with its "Bayes is the ultimate answer" is probably a misguided reaction to the even more misguided anti-rationality mainstream's pomo-infused position of "indigenous ways of knowing are no less valid than the colonialist science of dead white males". Of course, some cogent points could in principle be extracted from pomo's obscurantist word salad diatribes, but few survive the encounter and maintain their sanity.
I'm pretty sure that Yudkowsky has read more than one book over his life, so the message from that particular one hitting the spot so well for an entire community probably had some additional preconditions.
Not sure what "pomo" refers to, but there has certainly been a rising tide of irrationalism in the West ever since the beginning of the Cold War. I mostly blame the Frankfurt School and their French counterparts. Critical theory, structuralism, post-modernism, etc., were already well established in the legal academy when I was at law school in 1990. At first, I assumed it was a well-intended but misguided response to the problem of induction and the other knowledge problems identified by Popper and Hayek. When I tried to interest the “crits” in Popper’s alternative response, however, it became clear that, despite the name they had chosen for their movement (critical legal theory), they were completely uninterested in rational discussion, critical or otherwise. They were basically Bolsheviks using continental philosophy to silence their opponents and make themselves feel sophisticated. Seems as though I've been arguing with Bolsheviks my entire adult life. https://www.carolinajournal.com/opinion/what-really-divides-us-as-americans/
Yeah, by "pomo" I mean "post-modernism", "post-structuralism", etc. Ironically, Marx himself predated this, and Marxism was supposed to be a rational, testable theory. But, as we all know, true communism has never been tried, and probably never will.
True communism (not Marxism) has been tried and works well given a few constraints:
1) it has to be a small enough group that the people all know each other.
2) The people in the group have to trust the good intentions of the other members of the group.
3) There needs to be a strong, charismatic leader.
Possibly there are a few other conditions. I'm not sure. Most successful groups that I know of were religious, but not all. OTOH, all did have a "deviant from external society" belief pattern. (E.g. one group that I know of was researching ritual magic. Didn't get any validatable results, but there were several "successes" that weren't repeatable, and many others that were subjective in nature.)
But other than the 10 points in the Communist Manifesto, Marx never gave a detailed blueprint for how Communism was supposed to work. The Bolsheviks had a blank slate to work with, and they called the result Communism. Whether Marx would have been satisfied with Lenin’s formulation is an open question.
Communism predates Marx. E.g., it was (in principle) common in many early Christian communities. The Oneida community in the US was non-Marxist communist. Etc.
The communities tended to fail when they got too large, or when they lost their charismatic leader. Loss of trust was not a usual failure mode, because the charismatic leader dealt with the problem in one way or another. (See also "shunning".)
Could you say something about the charge of irrationalism leveled at Popper himself by the Australian philosopher David Stove in Popper and After: Four Modern Irrationalists? Stove casts the Frankfurt school and the French counterparts as inheritors of Popper rather than opponents.
In Popper, a theory is never confirmed but only disproved. So how does actual knowledge advance (except in the sense of "not this, not this": a growing collection of falsified theories)?
I tried to read Stove's book but gave up after a while because of his abrasive style. Why are so many philosophers so obnoxious? Popper was pretty abrasive too in his personal life, but--with the possible exception of Hegel--I don't think he ever insulted his intellectual opponents in his books. Regarding your question, here's what I say in the paper for which I've been providing a link:
"On the basis of the preceding analysis, Popper concludes that we can never be certain we have found correct answers to our questions about the world. To refer to this insight about the human condition he uses the term 'fallibilism,' which he defines as, 'The view, or the acceptance of the fact, that we may err, and that the quest for certainty (or even the quest for high probability) is a mistaken quest.' Fallibilism, it must be emphasized, is not the same as skepticism or irrationalism, and the quest for certainty is not the same as the quest for truth. As Popper says, '[Fallibilism] does not imply that the quest for truth is mistaken. On the contrary, the idea of error implies that of truth as the standard of which we may fall short. It implies that, though we may seek for truth, and though we may even find truth (as I believe we do in very many cases), we can never be quite certain we have found it. There is always the possibility of error.'
"As I have said, fallibilism is not skepticism, and the problems Popper identifies in his critique of induction do not lead him to reject rationalism altogether. But they do lead him to reject a dispositive rationalism that attempts to use reason to answer questions directly, correctly, and once and for all. In its place he offers an alternative that he calls 'critical rationalism.' Critical rationalism takes advantage of an asymmetry among logical relations that makes contradiction a more powerful tool for rational enquiry than consistency or even entailment. To illustrate: from 'A entails B' it does not necessarily follow that either or A or B is true; however, from 'A contradicts B' it does necessarily follow that (at least) one of them is false. This asymmetry means that, while a statement like 'All swans are white' cannot be rationally justified, such a statement can be rationally criticized, and that, according to Popper, is what the scientific method is all about. It is not a method for establishing truth; it is a method for discovering and eliminating error."
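(A small aside of my own, not from the paper: the asymmetry Popper leans on here is just a fact of propositional logic, and you can check it mechanically by brute force over all truth assignments. A quick sketch in Python:)

```python
from itertools import product

# All four truth assignments for a pair of statements A and B.
assignments = list(product([True, False], repeat=2))

# "A entails B" (A -> B) can hold even when both A and B are false,
# so knowing the entailment holds tells us nothing about which is true.
entails_but_both_false = any(
    (not a or b) and not a and not b  # A->B true, yet A and B both false
    for a, b in assignments
)

# "A contradicts B" means they cannot both hold: in every assignment
# consistent with the contradiction, at least one of A, B is false.
contradiction_forces_falsehood = all(
    (not a) or (not b)
    for a, b in assignments
    if not (a and b)  # assignments where the contradiction claim holds
)

print(entails_but_both_false)          # True
print(contradiction_forces_falsehood)  # True
```

Which is exactly the point: entailment and consistency leave the truth of both statements open, while contradiction guarantees an error somewhere, so criticism has logical teeth that justification lacks.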
Thanks, a great reply. My question is: given that our knowledge has increased in the past 300 years, from Newton to Einstein, how is this advance in knowledge represented in Popper's theory?
If all we have is provisional theories that have not been falsified yet, then what is our knowledge and how do we say that we know more than we knew 300 years ago?
What would change about the analysis of the paper without an agenda? The author laid out a list of the evidence they'd expect to see in support of the original paper's claims--were those standards unreasonable, or did the author of the essay not fairly evaluate them?
If neither of those things are true, then the author's agenda could very well be "demonstrate proper paper-reading technique using a real example".
Use a REAL example. Pull Seneff's paper on COVID-19 mRNA vaccines-- "where the potential issues are with this massive "nearly untested" intervention." Given that her paper got a decent look-see from the OWS staff, I'd say it's a decent "How do you start analyzing whether this intervention works or not."
If you don't think her paper's good enough (or think she's a crank in general), pull a different one. This was a big, splashy intervention. Surely someone else sat down and started saying "here's where this might go wrong, and how to identify if it does."
What is the difference between the paper mentioned in the essay and the one you mention that makes the latter more "REAL" than the former, other than the latter being linked to a more heavily politicized topic?
The Seneff paper is doing what this essay purports to do -- find the "what would we expect to see if this is going to crash and burn and destroy humanity." So it's not the amyloid paper, it's the "proper analysis" -- done as an a priori "here's what we know, and here's where this might kill us all" (sorry to sound all doom and gloom, but this is a massive public health intervention over "most of humanity" -- looking at worst case is part of the fun).
Unlike the above essay, Seneff is putting her money where her mouth is, BEFORE the mRNA vaccine gets widely tested. You want to find out where potential "we have a problem" points are? you should be able to describe them before the paper to be analyzed is even written.
I'd be a lot happier with someone who stood up and said, "before I sit down and look at this paper...", not 30 years later, "here's how to analyze this paper -properly-"
2. The paper is older, the dust has settled, unlikely to rankle so many feathers
3. The subject is more mature, so we know the answer
When learning math, I was always annoyed when the practice problems didn't have a neat answer (often they were only one small change from having a neat answer, which made me suspicious that the problem-writers had accidentally flipped a sign or something). How am I supposed to know I'm doing any of these correctly? I know real life isn't always neat, but this isn't real life, it's a theoretical exercise to practice the fundamentals before moving on to messy practical applications.
I hope you find the essay you're looking for, but I think this one is better for the audience.
I really needed to be more clear that the end of the second paragraph was referring to the math problems. I'm aware Alzheimer's is real life and that the field is still active; my point stands that it's more useful to pick a less-controversial topic and to pick apart older research when learning fundamentals.
Talking about COVID vaccines... I mean, good grief, the person to whom I was responding asked for a paper WITHOUT an agenda.
You can't do science if you already know the answer. Science is all about working with uncertainty.
You aren't getting the "a priori reasoning" in the essay above. You're missing out on the fails. You're getting an ex post facto "right and true" version of "how to attempt to flunk a paper."
Seneff walks you through the potential for an antifreeze allergy in a significant segment of the population. I'm pretty confident we didn't hit that failure mode of the mRNA vaccines.
How do you know that you're doing it correctly? You aren't, unless you're uncertain, and working with uncertainty. This isn't math, math gets things correct. Science just makes a better model -- and lo, if tomorrow the model is no longer accurate, we will build a new one (Yes, there's a model out there for "no dark matter, gravity is just different in the galactic arms" -- and yes, if you wanted to look at "how to disprove dark matter" you could look at that paper. But that's just running "one disprove" not Seneff's "here's ten reasons we shouldn't do this").
30 years of evidence for what the correct answer is. Without a similar paper whose result was good, tested with just as much scrutiny, we can't know how much evidence this could be.
It took me a while to understand the introduction, because I assumed that he was talking about some specific paper. I thought it was a complaint about the rigor of some paper that had yet to be explained, and I guess I was supposed to click on the link to see which paper was being complained about and why it's surprising that a Nobel laureate would complain about it.
Perhaps inserting "[the entire concept of]" into the quote would help.
Who were the brave souls who first articulated these concerns, and what do we know of what became of their concerns and their careers? In other words, science should only work with constant critical appraisal. It seems clear that this critical appraisal was either absent or smashed by powerful forces. That in itself is of course age old. But knowing it is age old, should it be excused, or should we learn from it?
This is an exceedingly important question to answer because it can be used to broadly determine if medicine is a science and whether its conclusions are trustworthy. If the careers of the scientists who question the amyloid hypothesis are healthy, then medicine is operating more or less like a science and can be treated as such. However, if the questioners were instead expelled because of their concerns, then medicine is not a science and should not be given the same level of trust.
Good. These are the right questions to be asking. Now, assume malevolence (devil's advocate if you must). The Pharma companies want to make money, so they're going to throw money at people whose ideas have potential therapeutics attached. So, yeah, I'm pretty sure you're going to see that the questioners did not get the shiny, big rewards -- be they patents($$$), status, or otherwise.
True. How much money do we need to waste before you decide that capitalism is the wrong way to approach medicine? (or: state the failure condition you'd accept for "we need a new solution here").
I’m open to the idea that it isn’t optimal and we should do something else. The question is, 1. where do you want to go from here, and 2. how do we get from here to there while maintaining medicine as a functioning thing?
We need more seed funding that isn't being "gatekept" by rentseekers (in this case, already established scientists). We also need more constraints on active development of plagues. These are two ideas that are at odds with each other. But perhaps there is a better system, with active safeguards.
Not so sure "medicine as a functioning thing" is currently in existence, when a simple "I see a correlation, and the intervention looks harmless" leads to screaming and shit-throwing from the public health wonks.
I think this is a very wrong model of research and of the field. It wasn't "brave souls", and there were no "powerful forces" who "smashed" the resistance. Just the opposite! When the amyloid hypothesis came up, it was met with a great deal of skepticism. As it should be! It was a new hypothesis and had the burden of proof, and researchers demanded that proof. The mainline opinion was that this was perhaps an interesting lead, but, like most interesting leads in science, it would probably be wrong.
I think what happens instead is the following: no one "smashes" people with other hypotheses. But it's harder to do "as good" research on an alternative hypothesis (let's call it B) than on the amyloid hypothesis (let's call it A), because you have no animal model. It's not enough to say "I want to explore alternative B". In order to get funding, you have to say what steps 1, 2, 3 are for exploring B. And the thing is that your steps for B will look a lot less promising than the corresponding steps for A, so your proposal is less likely to get funding.
For Alzheimer's, it is additionally true that people got attached to A too much, and thus were over-skeptical about alternatives. It's not a sinister organization; it's just that researchers looked around 10 years after the animal model was created, and what did they see? Tons of research done on A, and very little on B. So it looked like A was championed by the field, and young researchers started to consider A the standard hypothesis. This is a natural dynamic in science, favoring explanations that can be investigated over explanations that can't. I think science has some ability to fight this. (Whatever you believe, scientists are a really skeptical bunch of people.) But in this case, it failed.
I think there is another mismatch here that led "the field" astray. The community of people trying to understand what Alzheimer's is, is a different field from the community trying to cure it. Simply speaking, the first consists of doctors and gerontologists, basic researchers who look at clinical symptoms like "how well does the patient remember things". The other consists of neuroscientists, microbiologists, and some other "harder" sciences, and this group is much larger because it is backed by industry. Now, the two groups do not talk much to each other, because they speak very different languages and have very different traditions, so talking to each other is really hard. I think the first group was always quite skeptical of the amyloid hypothesis, and also voiced these concerns loudly. But the doubts stayed within their own group and didn't make it to the other group, who were busy working to improve the "clinical markers" (like plaques) that they now identified with the Alzheimer's disease itself.
I think it is important to criticize what went wrong, but we should not forget that the problems here are really, really hard. Talking to scientists from different disciplines is super-hard. I switched to neuroscience at some point in my academic career, and I needed years(!) before I could really talk to the people in the field with ease. And fostering minority opinions is important, but it also means that you need to make the decision to fund a non-promising research proposal, and favor it over a promising one. And decisions are made in a very distributed way in science. There is no czar making all decisions; every single proposal gets different reviewers. So you have to decide that proposal B, where the reviewers say that this is not going to get you far, is more important than proposal A, where the reviewers say that this is promising. It's not impossible to make such decisions, but it's really, really hard to make them in a "good" way.
I also looked into the original question of what happened to people who were skeptical of the amyloid hypothesis, but this post is getting long, so I will post that separately below. tldr: they stayed in academia and made good careers. I do recommend this interview with Karl Herrup though:
I tried to check the assumption that people who were particularly skeptical of the amyloid hypothesis had trouble with their careers. I asked Gemini who those were, and followed the two names that it gave. (It gave more, but those were groups of people.) One was Dr. Zaven Khachaturian, who, together with others, "argued that the intense focus on amyloid was overshadowing other potentially crucial avenues of investigation. They cautioned against a "tunnel vision" approach, emphasizing the complexity of the disease". I found a very long 2022 interview with him, which confirmed that he holds this view still today. He essentially said that the mouse model has failed. But he also was full of praise for all the great people he worked with. There were certainly no indications in the interview that he was shut down, just the opposite. He said that after 1995,
"I became an advisor to the Alzheimer's Association and began developing the Ronald and Nancy Reagan Research Institute for them to do the next phase of the development of the program."
That sounds like he made a pretty good career. He also complains in much detail about how difficult it was to study Alzheimer's at all! But once that was established, he does not complain that it was difficult to study alternatives to the amyloid hypothesis.
The other was Karl Herrup, who recently wrote a book in 2022(?) where he explains why he is skeptical of the amyloid hypothesis. I read a recent interview on the book:
He does mention that it's hard to get funding for alternative hypotheses. But he also does not mention that anyone actively tried to shut people down. Of course (now my interpretation), if you want to get a permanent position as a young researcher, then you need funding, so if you follow other hypotheses, you will likely not be able to stay in academia. So I think it's simply the dynamics of science that I explained in my previous post, without anyone following a sinister agenda.
Thank you for this careful reply. You don't mention however the malfeasance that did take place. Nor do you mention as others have noted that powerful economic forces did likely have an impact here. Of course I did not mean some secret cabal conspiring ...
Hm, I am not sure that I understand what you mean. There are certainly powerful economic forces coming from pharma industry. But they would very much prefer to base their development on a correct hypothesis, no? They are not deliberately steering research to the wrong path.
I do agree that they have strong incentive to push a drug into the market even if it doesn't work, once they have developed that drug. But that is long after things have gone horribly wrong. They would prefer very much to have a drug that actually works.
And of course, there is an overwhelming economic force on researchers to get funding. And this is not always aligned well.
About malfeasance, I see one main "evil" thing, which is that researchers try to oversell their own work. I think it's true that the creators of the mouse model oversold their work. I don't think that they were going further than their peers, everyone tries to oversell their work. Not only in academia. But academia (and other places) should develop protection against this overselling, and this failed in case of Alzheimer. Is that what you mean?
I guess you don't need brave souls when you've seen drugs based on the research fail and fail again. By this point even Big Pharma itself will start to ask what's going on.
Is there a good step by step guide to reading scientific papers anywhere? This post has a good start at one, but having a checklist of things to think about would be pretty helpful.
Good question! There are actually a bunch of guides, but the ones I've seen are pretty generic, and they tend to make the assumption that authors are, by default, honest and unbiased. Ask your favorite LLM. It will spit out a few for you.
But I'd recommend: "How to read and understand a scientific paper: a guide for non-scientists" by Jennifer Raff. She suggests ways to separate sketchy papers from serious ones.
The hard part is that a lot of reading papers is "taste" and "smell." Until you're deeply enmeshed in the methods yourself, you can't identify a paper that has that "replication crisis smell" because you aren't sure if their methods are standard in the field, or if it's torturing the data. Sometimes you can spot stuff -- browse Andrew Gelman's blog long enough and you'll pick up clues -- but things like "this paper is clearly bogus because that bacteria does not grow in that medium" is almost impossible to pick up unless you've actually tried to grow said bacteria in said medium, or you know what (genetic edit such and such) does to bacterial metabolism
Other things matter too -- like "huh it's weird that this paper is in Scientific Reports and not a specialized journal, is there anything that prevented it from getting published elsewhere? Oh yeah, they completely omitted this critical control that reviewers familiar with that model would have asked for." Or sometimes "Nope this paper is great, guess one of the authors had a publication credit for reviewing or their library is covering the open access fee!" Very, very tough to spot and recognize without actual training in the field, going to conferences, etc., no matter how many papers you read on your own.
As is, I'm confused about what it means to buy "Other". Naively one might think that a vote for "Other" would mean that some review other than the three currently in the market would win. But you're planning to add reviews as they are posted, and eventually all possible reviews should appear in the market. Doesn't this mean that "Other" can never possibly win?
Other is a mechanic that is automatically included in multi-question Manifold markets. Shares in "Other" get converted automatically whenever new answers are added.
Given that, I admit that I like the idea of having a weekly rollout of answers so we can see the impact they cause when they are added. Also, we do not know all the finalists beforehand, such as the School review.
The illustrative application of scientific rigor is what marks the best of this essay. I've been attempting to work on the 'detective' mode while reading papers for a bit, so I'm glad to read this well-written piece. Overall, though, I'd say it leaves the story a bit unfinished by its own initial promise. This essay could've used a larger view along with this segment: on other evidence ignored, on more recent hypotheses that show promise. Scott's comment addresses another potential direction too.
Nice to see someone on this site quoting Medawar, who was, of course, a Popperian. I've tried in the past to get Scott--and also his mentor, Eliezer Yudkowsky--to take a proper look at Popper's theory of knowledge, including his critique of induction (including the Bayesian one they subscribe to). Sadly, they insist on dismissing Popper, not--as far as I can tell--on the basis of anything he actually said, but on the basis of a strawman version that appears in the secondary literature. Sigh.
I liked this review for its hands on review of the science, but I’m not really convinced that amyloids aren’t a predicate to Alzheimer’s. I’m not sure anyone holds the strong hypothesis at all these days, and critique of a 20 year old paper doesn’t seem like a persuasive way to argue against the weak one.
It's a review for a mostly lay audience. To me, it seems clear that it's not intended to advance the actual state of the debate in the field but rather to give us a better window into the history of it, which I think it does admirably.
Having read the whole essay I'm still unsure what exactly this is a review of? Overly condensed scientific papers? The Alzheimers literature? Making transgenic mice? This feels like a regular essay with the word "review" slapped on it so that it can meet the criteria for a review competition.
>Through a painstaking process called sub-cloning, equal parts molecular biology and divination, they managed to insert into their mouse a human APP gene carrying the mutation found in families with high rates of early-onset Alzheimer's.
>You can design your construct perfectly on paper, but in truth, you solve the problem by tweaking reagents like an alchemist, trying to find the perfect brew to coax your foreign gene into the plasmid at high efficiency.
I guess I'm an alchemist then :) I didn't choose the name Metacelsus for nothing...
More seriously, I'm glad I'm not stuck with the molecular biology tools of 1995. Or worse, 1984, when researchers had to clone a gene by literally scraping pieces of mouse chromosome 17 off of glass slides using tiny needles. https://pubmed.ncbi.nlm.nih.gov/6697397/
If the content is AI generated, it is pretty good for the category. I couldn't do it with the tools I know, and neither could many others. I regularly review terrible AI-generated papers, which make no sense and do not resemble this. At worst, I would expect the backbone to be human-made bullet points transformed into text by AI?
> At some point, science’s self-correcting machinery—and the brilliance and curiosity of a new generation of researchers—will win out.
This statement or something like it is de rigueur for essays that criticize a certain episode of science while wishing to preserve respect for science in general. But as Popper (mentioned elsewhere in the comments) might have noted, it's not falsifiable: we have no way to determine how many relevant errors have not yet been corrected, let alone how many of those eventually will be corrected.
Sure, we're batting 100%: all known errors have eventually been identified. But that's the wrong denominator.
The best we can realistically hope for in science is that errors have an acceptably short half-life and that the process of science is close to optimal for the tradeoff between making new knowledge and discovering errors.
How would we ever know anything that is not provisional? Newton's theory is provisional; Einstein's is also provisional. But in what sense has our knowledge advanced going from Newton to Einstein?
Intuitively, of course, we know we are more knowledgeable now, but how can we say so on the basis of Popper?
This was great to read, but I kept feeling like I must have read this author's work before, until I realised what was happening was that it reminds me a lot of Claude, particularly 4 Opus. I'd be interested to know if he helped out.
The essay feels persuasive to me and makes me despair about the review process: how it fails to catch problems, and how slow the correction mechanisms are after a paper makes it into a (top) journal. Nowadays it is worse than before, because reviewers are scarce. What is the future? How do we deal with this?
Because Alzheimer's runs in my family, I always feel like I should put more time into trying to get a clear picture of it. The fracas over the amyloid hypothesis, though, has made some part of my brain so skeptical of any information I hear that I've developed a serious block about this. Would anyone be able to point me to an easy-to-understand source of what we know to be true about its etiology?
I enjoyed this narrative immensely. And the logical flow of the author's argument was superb. My only nit to pick is that I wish there were some links to the paper(s) quoted and the images displayed.
I should have been clearer about my specific nits. It was the opening paragraph that really confused me...
> The scientific paper is a "fraud" that creates "a totally misleading narrative of the processes of thought that go into the making of scientific discoveries."
Which scientific paper is a fraud? The embedded link leads to Medawar's paper on what constitutes fraud, but someone else seems to be claiming that some unspecified paper is a fraud. Yes, we've got a link to the 1995 Games, et al paper. Is this the paper that someone (unidentified from the quotes within quotes) is accusing of fraud, though?
And if the Games, et al paper is the root of this discussion (and it seems to be), are those images (panel a through d) from the paper? If so, there should be some source reference under them.
The "fraud" claim is a little exaggerated. It describes something I know from my own work in lower-tier molecular biology for food science. We often do somewhat chaotic work, changing hypotheses and protocols, but when we sit down to write a paper, we make it look as if the final protocol was the one we chose from the beginning by super logical steps. It is expected of us, for easier reading. The readers do not want the paper too convoluted. So it is not fraud as you would imagine it. The experiments and results are all truthful. This modified narrative is a general feature of most scientific papers.
I once wrote in a paper, that my work was interrupted by my maternal leave. My boss facepalmed and removed the sentence. However, one reviewer asked about the time gap which was apparent from several details in the paper, so we had to put the sentence back :-) . So... this kind of "fraud".
The paper by Games et al from 1995 had another problem. It contained true findings, but they were wrongly interpreted, and the review process was not strict enough. Apparently, everybody wanted it to be true that we were closer to an Alzheimer's cure.
I checked the list of the 34 co-authors on the "Alzheimer-type neuropathology in transgenic mice overexpressing amyloid precursor protein" paper, and only Eliezer Masliah was involved in the recently-discovered tranche of fraudulent amyloid plaque studies. He was caught manipulating images, and 132 papers related to Alzheimer’s and Parkinson’s research were flagged.
According to the Guardian, independent neuroscientists who reviewed the evidence described its scope as "breathtaking" and said it could not easily be dismissed as simple mistakes. He was forced to resign from the NIA/NIH, but he also held a tenured professorship at UCSD. No action was taken by UCSD against him, but he's now listed as professor emeritus, so I guess he gets to keep his pension.
A superb and deeply interesting review. My only quibble is the missed opportunity:
A company named _Athena_ produces a mouse model that mysteriously appears in the literature fully-formed, without a creation story? The parallel was so cosmically on-the-nose that its absence created a joke-shaped hole in the narrative. And if there's one place on the internet I expect those specific holes to be filled, it's here.
I really liked this. Thanks. I know science has always had this issue with getting stuck in the wrong paradigm. But the capture mechanism seems stronger these days. In many fields there is this one idea that captures all the grants, people, papers. With very weak incentives to think outside that box. String theory, Lambda CDM, Smell is shape and those are only the ones I know about.
1. Certain fields of scientific research are both extremely expensive and reliant on unstable (?) techniques: things that require extreme precision but may contain inherent aspects that affect the research in hard-to-discern ways.
2. This leads to acceptance of the first theories that show some benefit, demonstrate results, or obtain funding/consensus
3. However this doesn't mean the research is actually efficacious: the expense prevents reviewing a lot of different angles and reward structure incentivizes first to market.
I think the scientist side though can't help: attacking the hypothesis isn't the problem, the problem is more in resource allocation and management. We need to fund more types of research, things like better tooling and training, and stuff.
To use an analogy, we are unearthing a labyrinth with many entrances, but it is so costly that our need forces us to take the first ones we can find. But whether an entrance leads deep inside or not, we can't tell.
The goal would be to get a lot of people unearthing and exploring. The more entrances uncovered, the more chance one leads to the ruin's heart.
I feel like we tend to underrate management and support relative to intelligence; rather than promote individual brilliance, we need to build a much larger base of support to enable more approaches to flourish. Not sure how, though; capitalism does not directly reward this, and increasingly competition leads to quick consolidation instead of what we expected.
#2 matches my anecdata closely. The joke I always heard was "You need to tell the NIH you're going to do something totally revolutionary and paradigm-shifting that also follows naturally from the established literature".
On the one hand I enjoyed this essay, on the other hand I wonder if maybe there should be a control. Go back and look at a paper that really did change the history of the disease it studied and subject it to the same scrutiny.
All papers are written by humans, under space and budget constraints. Perhaps even the good papers show weaknesses and have cracks in them?
By now, it is possible to guess the author's name from the comments. She also reviews old papers which pointed in right direction. They are published here on substack.
Thank you. I don't know if I found her but I did find a woman who writes interesting things about Alzheimer's, brains, and autoimmunity (and psychiatric diagnoses).
Took half an hour to an hour to find. Wouldn't even have attempted without your comment.
Now I have several hours of good reading ahead of me for next week :)
- I'm confused about the references to "glassblowing": I've pulled many pipettes for mouse embryo mouth-pipetting and done embryo microinjection and there isn't any "glassblowing" involved, it's just heating and pulling to a desired thickness, which is also what is described in the linked article. No blowing required. Is "glassblowing" meant to refer generally to "shaping glass with heat", or is there an additional technique being referenced that I am not familiar with?
- I'm also confused at how difficult the essay says molecular cloning and microinjection are. At first I assumed this must just be how things were in the 90s, but then I ran into "Today, fancy $100K integrated microinjection/scope systems help the process along (though it still takes years to master)". The linked video shows a technique which I've learned and taught, and it's really not that hard. If you let the embryos grow a few days you can see which ones you killed, and I think anyone can get >90% efficiency with a week or two of practice. Though a laser to put a hole in the zona pellucida really helps (which the person in the video does not seem to have).
The overall thrust of the essay isn't affected by these minor issues.
Nice to find another molecular biologist in the commentariat! Agreed on both of these points, but as someone who also does "real" glassblowing (both furnace and torch), I don't mind counting pulling pipettes as a simple form of glassblowing. But you're right, it's a super easy form.
Many neat glassblowing projects (~all solid sculpture work) don't involve actual blowing either, as long as you're actively shaping hot glass I'm happy to count it.
I'll add a third minor complaint (or actually just flesh out the cloning part of your second complaint) - cloning isn't all that hard. It was harder in the 90s and there is still some amount of voodoo wizardry, but any decent undergrad researcher can clone constructs fairly reliably after a month or two in the lab.
I see! I'm no glass-blowing expert, and it seems I was too picky with my definitions. I'll tell my lab-mates that a glass-blower said our pipette-pulling counts. :-)
I agree re: cloning. I've heard that even some of the fancier high-schools do some basic cloning now.
>However, cracks are showing in this façade. In 2021, the FDA granted accelerated approval to aducanumab (Aduhelm), an anti-amyloid drug developed by Biogen, despite scant evidence that it meaningfully altered the course of cognitive decline
This is supposed to be an example of how the narrow-minded establishment is just totally off on the wrong track. But this author doesn't make any mention of lecanumab or donanumab, which are also FDA approved anti-amyloid mabs being actively prescribed now. Does the author not know about them? Anyone with even a shallow knowledge of the current field would... Or does the author know, but not like to mention them because they have better evidence for efficacy and so are harder to explain away?
Does the commenter not know how to spell lecanemab and donanemab... Or does the commenter know, but does not like to spell them correctly so that other readers will have a harder time discovering that many scientists consider "the efficacy of lecanemab and donanemab ... unimpressive and the evidence for harm to patients ... much more solid than the evidence of any noticeable benefit"?
Of course they're just typos, not evidence of ignorance or insidious intent. Likewise, I can think of half a dozen justifiable reasons why an abridged review of the seminal Alzheimer's/amyloid paper does not need to also review of every drug approved in its wake.
There's a recent trial of donanemab which seemed to show that it slowed, but did not reverse, disease progression. I'm wondering if you have any criticisms of it?
It seems like a significant number of patients saw complete amyloid removal, yet results were very moderate and getting a significant result relied on including functional parameters along with cognitive ones to achieve statistical significance. Decline was slowed, but not reversed, even with complete removal of amyloid. (I'm not sure how they got complete removal if donanemab doesn't target vascular amyloid.)
I think you're asking me. I have background in statistics and human biology and anatomy. So, full disclosure, this is not my field. But I'll have a look.
At a glance...
As you noted, donanemab flushes away lots of amyloid for tiny changes in test performance. For whatever it's worth, the difference in means using a basic two-sample t-test is not significant at p=0.05 for half of the first six treatment/placebo comparisons.
So, at the very least we can say that getting rid of lots of amyloid doesn't do very much to eliminate Alzheimer's disease biomarkers or cognitive symptoms. I'd also add that even where the differences are significant, the effect sizes are so small that you have to question whether it translates to a meaningful difference in QOL (“minimal clinically important difference," the smallest change a patient can perceive, is sometimes used... maybe not for AD). And, like you mentioned, it looks like the treated Alzheimer's patients continue to deteriorate, just more slowly.
In exchange for these minuscule 'improvements,' early-stage Alzheimer's sufferers get a worrying increase in brain bleeds and related symptoms. 10 of 131 suffered cerebral microhaemorrhages (compared to 3/125 on placebo). 18 of 131 suffered superficial siderosis of the CNS (4/125 placebo). There's probably some co-occurrence here. Troubling nevertheless. Even more troubling that there's a movement to start prophylactically treating people with plaques but no evidence of cognitive AD. Knowing the side effects, would you take it?
Notes:
- The Lowe blog post I linked to initially itself links to some of the relevant studies, which I haven't read.
- I've assumed above that the scales for the trial test variables are simply linear -- one unit of change represents the same amount of observable difference everywhere along the range.
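For what it's worth, the kind of summary-statistics check described above can be sketched in a few lines of Python. This is a minimal illustration, not a re-analysis: the means echo figures quoted elsewhere in this thread, but the standard deviations and group sizes are invented placeholders, so the result here says nothing about the actual trial.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic computed from published
    summary stats (group means, standard deviations, sizes)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
    return (m1 - m2) / se

# Hypothetical numbers (SDs and ns are made up, NOT from the trial):
# treated vs. placebo change in a cognitive score.
t = welch_t(1.21, 2.0, 600, 1.66, 2.1, 600)

# With groups this large, |t| > 1.96 corresponds to p < 0.05 (two-sided).
print(round(t, 2), abs(t) > 1.96)
```

With real trial tables you'd substitute each comparison's mean, SD, and n in turn, which is roughly what the "first six treatment/placebo comparisons" exercise above amounts to.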
This was interesting. I've never looked over clinical trial results before.
Thanks for your time. I was leaning in a similar direction. Given the relatively small effect size I was almost wondering if the benefits we did see could be some kind of statistical artifact. Given the increase in infusion related reactions in the experimental group the study couldn't be properly blinded for many participants. And the test was done by Eli Lilly so there's that potential conflict of interest.
But yeah, this seems to suggest an upper range for what amyloid targeting drugs might do, I think. And the upper range is just not that great.
"Thinking about this work triggers me. I spent years of my PhD setting up and monitoring mouse breeding pairs for timed pregnancies. Every morning began with the ritual of checking for copulatory plugs (don’t ask!). Only ~20-25% of pairs would successfully mate overnight: some females aren’t receptive; some males are layabouts. The failed pairings had to be separated and re-paired in the evening, so fertilization timing could be precisely tracked. Once mating was confirmed (those copulatory plugs again), the female was euthanized, and her oviducts—tiny tubes containing the precious fertilized eggs—carefully dissected. Then you flush out the one-cell zygotes using a finely-tuned glass pipette (yet another moment where glass-blowing skills came in handy)."
I do hope that this review was AI-Generated, because I would like to think that a human author, writing out a paragraph about how they spent years snuffing out countless mouse lives for (self-admitted) minimal gain would have a slightly more nuanced takeaway than what seems to read as "man, I hate being reminded how much of a time-consuming hassle it was killing pregnant mice". Then again, I've known STEM grads who seemed genuinely puzzled why their grade school children were so upset after hearing them describe cutting the heads off small vertebrates for research courses.
'This discrepancy matters because in human Alzheimer's disease, cognitive decline correlates most strongly with neuronal loss (cell death), not with plaque burden. Some patients with significant amyloid deposits show minimal symptoms; others, with fewer plaques but more neurodegeneration, experience severe dementia.'
If this is true, isn't it strong evidence against the amyloid hypothesis already? Why go to all this trouble?
"before looking at the paper, you imagine the experiments and results that would justify the claim" is a bit silly and, tbh, potentially arrogant! (E.g., even if you are a specialist, the authors likely know more about the specific question they are studying than you do.)
To know what a paper is claiming, you need to start reading it! At least the abstract, etc. To me, this review seems to present an idealised and impractical process.
Lazy reading is bad, but there's nothing particularly wrong with someone reading the paper and then asking pertinent probing questions. While reflecting after reading might potentially be more biased (maybe...?) than forming hypotheses before reading, the former is also going to be a hell of a lot more efficient.
"Comparisons between brain regions with and without plaques. If you see neuronal death, it should co-occur with the presence of plaques, not be spread throughout the brain."
Seems far too presumptive. It could easily be the case that plaques cause a cascade effect that influences the whole brain (not just locally).
Ideally, the pattern in the brains of transgenic mice should mimic what happens in human brains with real Alzheimer's disease. It is an empirical question what the pattern is like - neuronal death in the vicinity of plaques, or in areas remote from plaques as well?
There can be nobody who does research, lab based or not, who would disagree with Medawar. As for this essay - which I found interesting and informative - at least the author took the trouble to read the original papers. Far too often, you’ll see a citation where at best only the abstract was read (and which might misrepresent the paper’s core findings); at worst, only the title of the cited paper! Finally, when I was a trainee in neuropathology in the 1980s, we looked at ‘[amyloid] plaques & [tau] tangles’ as markers of Alzheimer’s, mainly on age-compared quantitative grounds. We distinguished Alzheimer’s Disease (presenile) dementia, as Alzheimer described in 1907 - where you shouldn’t see more than a scattering of plaques and tangles in a ‘young’ brain - from Senile Dementia of Alzheimer Type, SDAT - where you’d expect to see many plaques and tangles throughout the brain, compared with scattered plaques and tangles in the non-dementia brain. We’d also see a mixed vascular/Alzheimer picture. And Lewy body dementia was on the horizon as a thing.
Small nitpick, but I don't think scientists read papers like a blog post, especially under a time crunch. You typically read the abstract to filter in papers you want to read, skip to the figures/results referring to methods as necessary. Then you go to the discussion to reflect on the paper as a whole. I personally mainly enjoy reading the discussion, and only read the other stuff so I can understand it.
Of course, this depends on your level of familiarity with the field. Most introductions are redundant and repeat the same old titbits over and over again and only need to be read if you're new to the topic.
According to Nobel laureate Sir Peter Medawar, the scientific paper is a "fraud" because it presents a clean, linear narrative that hides the messy reality of scientific discovery. While this structured format makes complex work more accessible and allows others to build on it, it also has a significant drawback: it can be so compelling that readers, including other scientists, may not critically spot potential flaws or alternative interpretations.
My comment: this is a good essay, but I'm confused that the only take on Alzheimers I ever hear is "everyone important subscribes to the amyloid hypothesis, but this is an example of bad science and cracks are starting to show in the facade". If everyone important subscribes to it, why won't anyone defend it? Are they too embarrassed, and just waiting to collect a few more funding checks before quietly retiring? Or is this the thing where everyone wants to read sweeping theories about how The Arrogant Experts Are Wrong About Everything and nobody wants to hear those experts patiently explain the boring facts?
If any ACX readers are arrogant experts willing to stand up for amyloid, please send me an email and pitch me on a post.
My guess is that the issue here is the one Kuhn identifies, that a paradigm never goes away until you have an alternative. Say what you will about the amyloid plaque hypothesis, at least it’s a hypothesis. It has lots of problems, but it also has some scraps of evidence for it. If no one else has *any* hypothesis, then this one will naturally dominate the literature.
Sort of like how late 19th century astronomy had all sorts of hypotheses about why the planet Vulcan was so hard to observe in the way it interferes with the perihelion shift of Mercury, but no one saying “gravity is wrong”.
There is at least one serious alternative hypothesis, tau proteins. I doubt that this would fare much better for the level of scrutiny applied to the amyloid hypothesis here, but I am not sure that it is so much inferior either. (But I am happy to be corrected by experts.)
Another is "type 3 diabetes", in which the amyloid plaques are a desperate way for the body to deal with being "off plumb".
Actually, there are a couple of alternatives. If you want a list, I can recommend the ending of this interview here, starting from the paragraph
"Karl Herrup: Well, you’re getting very close to that question. Because the question that I hate the most is, “Okay, wise guy, if it’s not amyloid, what is it?”"
https://news.uchicago.edu/where-has-alzheimers-research-gone-wrong
No one doubts that tau proteins are involved in the progression of Alzheimer's - almost everyone with characteristic amyloid levels also has elevated tau levels, the best biomarker for Alzheimer's in plasma is pTau-217, etc. But in my amateurish understanding, elevated and accumulating tau is not specific - it's a self-reinforcing symptom of dying neurons, and the big question is, what makes them die in the first place? In CTE, it's punches to the head. In Parkinson's, it's accumulation of alpha-synuclein. In Alzheimer's, it's [fill in the blank with something plausible].
So, what are the odds that - given all the abnormalities with amyloids that are consistently observed in AD patients, but not in patients with other tauopathies - amyloids are not causally involved in the disease, and instead the trigger is something completely different?
Thanks, that helps me understand it!
One nitpick: "all the abnormalities with amyloids that are consistently observed in AD patients"
I also only have an amateurish understanding, but I thought that this was exactly the main criticism of the amyloid hypothesis: that it is not consistent. Plaques are missing completely in some AD patients (though Gemini says that it's only 12%), and there are very many patients with severe plaques who never develop Alzheimer's. I am not saying that tau abnormality is better (I don't know), but I don't think amyloids track consistently with Alzheimer's.
It's not my area of interest except as a field of research whose leading lights have been accused of fraud (and, yes, many of them are very likely guilty of fraud). But last time I checked, there are something like nine alternative theories to the amyloid plaque hypothesis. I don't have an opinion on any of them, but the fact that there are so many alternative theories suggests that the APH paradigm is not as dominant as Scott believes it to be. (?)
My understanding is that the NIH/NIA now supports a "multiple pathologies" approach, of which amyloids play one part. And the Alzheimer's Association encourages combination therapy development for the major hypothetical contenders to the APH (e.g., anti-amyloid + anti-tau + anti-inflammatory).
The APH proponents no longer seem to dominate the discourse the way that they once did. So, they're beyond the denial and anger phase, and they're in the negotiating phase ("APs might still play a role, please keep funding us.")
Ok -- I'll take the bait on this. Not to critique the core point of this review (that we should take results from genetic animal systems with great caution). But to partially defend the relevance of amyloid as a drug target in AD.*
1. There's a word not found in this review: Leqembi. That's the *second* FDA-approved anti-amyloid antibody. It’s not a cure. Indeed, it’s modestly effective and has significant side effects. But it’s proof of principle. We give patients cancer drugs that aren’t curative, but extend life. That's similar here. Patients who took the drug got maybe 3-6 months of ‘cognition extension.’ Not great, but it’s a start. If I could paste the graph from the approval documents, I would -- but you can find it here:
(https://www.leqembi.com/-/media/Files/Leqembi/Prescribing-Information.pdf?hash=106915a5-be7a-4bbc-8c8a-68b0a326d339)
2. The Aβ*56 fraud (which is bad!) has basically nothing to do with why the amyloid hypothesis was adopted. You can find a fairly expert discussion in the comments here (https://www.alzforum.org/news/community-news/sylvain-lesne-who-found-av56-accused-image-manipulation). The fraud also isn't what made big pharmas invest (and they are still investing) in amyloid-directed therapies. Lesne first published in 2006. At that point Lilly was already *in the clinic* with a gamma secretase inhibitor. People didn't back these programs because of Lesne and Aβ*56 but because...
3. The human genetic support for the role of amyloid in AD is profound! I won't reiterate it all here, but just google up a review paper and you can see why everyone was so excited. (In brief, Down's syndrome patients and families with early-onset Alzheimer's all had mutations in the amyloid pathway.) The book "How Not to Study a Disease" by severe amyloid skeptic Karl Herrup has a fair and illuminating discussion of the scientific history.
So in short -- there was groupthink around the centrality of amyloid in AD** BUT, it's also the case that after many failed drugs, there's some emerging evidence that anti-amyloid therapies may help in humans. And there is hope that improved products may further enhance efficacy. (e.g., Roche's brain-shuttle enabled anti-amyloid antibody https://www.roche.com/media/releases/med-cor-2025-04-03).
*A bit of a cheat. The strong version of the amyloid hypothesis is that amyloid is the cause of everything. Here I defend a weaker thesis -- that there is reason to think amyloid is relevant to AD pathology and that anti-amyloid therapy has the potential to provide meaningful benefit.
**It is not uncommon to find an overemphasis on using familial/genetic forms of a disease as a model for the sporadic form. I have heard ALS drug developers make the same point about SOD1 and the pointlessness of the SOD1 mouse.
It's pieces like this, and comments like this, that are why I can't quit ACX. I've wanted this for ages, and everything is either written down for dummies or would involve too much investment in fluency acquisition for mere curiosity (vs work, where I do it every so often for a new business pitch -- I'm a biopharma brand strategist). Thank you, seriously.
Thanks!
This gets to my problems with this review. We're hearing one side of an argument in a scientific field with which most of us are unfamiliar. We don't know what the state of the discussion is, so it's difficult to read this essay in context.
Are there serious flaws in an important 30-year-old paper? How do these compare to the flaws in other, similar papers on different subjects? How much of this is reading back later discoveries and developments?
My general rule is that I don't read scientific papers in unfamiliar fields. Journals are meant to be conversations between active participants in cutting-edge research. Much of Ph.D. training is learning the background to participate in these conversations. Popular science writing, if done honestly and well, can summarize the current state of these discussions. But these writers need to be careful to include all sides, and not just rely on a single paper.
> We're hearing one side of an argument in a scientific field with which most of us are unfamiliar. We don't know what the state of the discussion is, so it's difficult to read this essay in context.
And so many people have opinions that may or may not be based on actual research into the literature. Frequently, it's hard to know unless they're able to reference studies that support their arguments.
Unlike you, my general rule is that I read (or try to read) scientific papers in unfamiliar fields that interest me. For me, decoding the terms and the arguments is the best way to understand a new subject. Popular science reporting is a dank sewer of unsupported claims. Most science reporters don't understand the underlying science they're reporting on, so the opinions of their sources skew their reportage. I've never seen a science reporter ask their source a hard question, let alone take a critical look at the data in a paper or study. Popular science articles written by experts can be somewhat better if you are willing to accept that they may be giving you a sincerely biased opinion. But much of popular science writing doesn't pass the "gee whiz" sniff test. I'm thinking of the articles that New Scientist and Quanta Magazine like to publish.
Quanta's mathematics and CS coverage seems quite good.
in an article about strong inference and controls, this...
> How do these compare to the flaws in other, similar papers on different subjects?
...would have been the perfect thing to include!
My guess: there are no perfect scientific papers. Most scientists have one eye on their career. All face budget and space constraints.
No single paper can be definitive. It's in the agglomeration of many weak efforts that we get strong science.
That's a very academic take. I can't remember the last paper I have read in a field I was trained in or participate in. Usually I don't even care why the author did the work as I am trying to solve a different problem and a given paper just happens to be the closest thing anyone has published.
Figures and experimental. Everything else is just commentary.
Those 3-6 months of "cognition extension" are often quoted in the media, because it's easy to understand, but as far as I understand, they are more of an assumption and not (yet) supported by data. See here:
https://x.com/ProfRobHoward/status/1936506424625967612?t=8Gg_e3EV0az3QOp3iPTYLg&s=19
I don't find the commentary in the linked thread terribly compelling. For sure, it's an extrapolation -- from the absolute change vs. placebo to a comparison between the curves. What was shown as the primary endpoint is a ~27% lower decline in clinical dementia rating vs. baseline (1.21 vs. 1.66). Ok, you may ask, is 0.45 really meaningful? One way is to look at the rating scale. Another is to ask "over what time period would an AD patient lose 0.45 on the CDR?" And the answer in this trial population was 3-6 months.
Some points:
1. Agree it's a modest/marginal impact
2. Also agree it would have been better to have a "time to definitive event" endpoint like progression-free survival or overall survival in cancer. My understanding is that the AD field lacks any such definitive event definitions (beyond death).
3. This is, of course, benefit at the time point observed in the study. It will be important to see to what degree benefit persists (or doesn't) over time.
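To make the extrapolation concrete, here is a back-of-envelope sketch of the arithmetic behind the "months of cognition extension" framing. The 1.21 vs. 1.66 CDR figures are the ones quoted above; the 18-month trial length and the assumption that placebo decline is linear over that period are mine, so treat the output as illustrative only.

```python
# Convert a treated-vs-placebo score gap into "time saved" using the
# placebo group's average decline rate. Assumes (my assumption) linear
# decline over an 18-month trial; CDR figures are from the comment above.
placebo_decline = 1.66          # CDR points lost on placebo over the trial
treated_decline = 1.21          # CDR points lost on treatment
trial_months = 18

rate = placebo_decline / trial_months                    # points lost per month
months_saved = (placebo_decline - treated_decline) / rate
print(round(months_saved, 1))   # lands inside the quoted 3-6 month range
```

The fragility of this number is exactly the objection in the linked thread: if decline is not linear, or if the gap does not persist, the "months saved" figure can move a lot for the same 0.45-point difference.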
Just checking in to note that open-label data on Leqembi (just out today) show a continued benefit at the 3-year time point...
https://www.neurologylive.com/view/open-label-extension-data-shows-lecanemab-continued-effect-alzheimer-disease-after-3-years
Multiple drugs that target amyloid plaque buildup are actively being prescribed to treat Alzheimer's disease. The science is compelling and clear that amyloid plaque removal does something, at least.
Is it unfortunate that these drugs don't do more? Yes. But to say the amyloid hypothesis is "wrong" is to ignore reality. We've already commercialized the hypothesis.
Bottom line: If the average reader of this comment was diagnosed with Alzheimer's disease tomorrow, your physician would probably prescribe you a drug that removes amyloid plaque.
Yeah, and the drugs that are being prescribed *don't actually do much*.
https://jamanetwork.com/journals/jama/fullarticle/2807533
Peruse this paper if you don't believe me. Donanemab, an anti-amyloid-beta mAb, hardly slows down cognitive decline in patients. Sure, it's better than placebo, but barely by anything, and certainly less than you'd expect if amyloid were indeed the primary driver of the disease.
https://evidence.nejm.org/doi/full/10.1056/EVIDoa2200061
Consider this study looking at discontinuation of infliximab, an anti-TNF-alpha mAb. Given the theory that TNF-alpha plays a large role in Crohn's disease inflammation, does an mAb that blocks its activity do anything? Well, check out figure 2A and notice that while none of the people receiving continuing infliximab treatment had a symptomatic relapse, HALF OF THE PEOPLE without the treatment did!
https://www.science.org/content/blog-post/faked-beta-amyloid-data-what-does-it-mean
If you want a more thorough breakdown of the failings of beta-amyloid to explain Alzheimer's, I strongly recommend giving this article a read.
So the drugs reduce cognitive decline by 1/3, and amyloid explains at least 1/3 of cognitive decline. That seems not too bad honestly, certainly not as bad as the harshest critics make it out to be.
It's probably better than 1/3 since it takes the drugs almost a year to completely clear amyloid from the brain. It would be interesting to see that trial continued for another year. If you just look at cognitive decline after amyloid has been cleared then you can pretty precisely quantify the contribution from amyloid.
Infliximab is almost instant, so the analogy is limited.
If anything, the fact that it has been commercialized and still has such lukewarm results could be seen as a modest disproving result.
I think it's important to be careful about what "subscribing to the amyloid hypothesis" really means. The evidence is very strong that amyloids play a significant role in the disease - genetic variants like APOE4 that influence amyloid accumulation are strong risk factors for AD; you see characteristic biomarker levels (in particular Ab42/Ab40 ratio in cerebro-spinal fluid) in AD patients that you don't get in other neurodegenerative diseases; you find the characteristic plaques in deceased patients, etc. That part is sound as far as I can tell, and pointing at a misguided study or two doesn't change that.
The question of what exactly the amyloids DO, and whether the buildup of the plaques is the whole story or just a part, is not so clear, and my impression is that researchers are open to the possibility that there's more going on. The new hot topic seems to be CAA: cerebral amyloid angiopathy - it turns out that amyloids also accumulate in blood vessels, weakening the walls and leading to micro-bleeding. (Apparently, this got more attention when it was found that brain bleeding is a not-so-rare side effect of lecanemab.)
You can be in an epistemically tragic situation, where your least bad hypothesis is still pretty bad.
My impression is that the people doing bad science aren't reading ACX, or if they are, they're too self aware to brag about doing bad science.
Out of the publishing scientists I know, "it's kinda bullshit but maybe there's something in it" is the average level of confidence for their own papers.
Scott, I emailed you. I’m very knowledgeable about it and willing to stand up for the hypothesis.
Well, I'm not a basic scientist, so I don't know if I can get down in the weeds of mechanistic explanations. But one newer supportive piece of evidence for the amyloid hypothesis is the clinical trials of lecanemab and donanemab.
"Consider the familiar structure of a scientific paper: Introduction (background and hypothesis), Methods, Results, Discussion, Conclusion. This format implies that the work followed a clean, sequential progression: scientists identified a gap in knowledge, formulated a causal explanation, designed definitive experiments to fill the gap, evaluated compelling results, and most of the time, confirmed their hypothesis.
Real lab work rarely follows such a clear path. Biological research is filled with what Medawar describes lovingly as “messing about”: false starts, starting in the middle, unexpected results, reformulated hypotheses, and intriguing accidental findings. The published paper ignores the mess in favour of the illusion of structure and discipline. It offers an ideal version of what might have happened rather than a confession of what did."
Sort of OT, but judicial reasoning is similar. On paper, the trial judge is supposed to determine the facts and then apply the law to those facts.
As a practical matter, in the real world, first the judge makes up their mind, then seeks factual and legal determinations that support the preordained conclusion.
The difference is that the judge has a lot more power than the scientist, at least unless the scientist totally makes shit up.
Sometimes the judge makes up their mind, but then the opinion "won't write" (facts and/or law don't support the snap judgement) and the judge changes their mind. Happens more than you would think.
And sometimes the en banc demands that the defendant time-travel, despite the logistical difficulties involved.
> As a practical matter, in the real world, first the judge makes up their mind, then seeks factual and legal determinations that support the preordained conclusion.
Yes, the essay titled *The Myth of the Rule of Law* by John Hasnas makes this exact same point.
Finster's Second Law readeth thusly: "There is no such thing as law. There is only context."
The longer form version goes something like this: "Laws are for little people. Policy is for people who matter, because policy determines when the law applies, how and to whom."
"...messing about false starts..." Great line. But when you're working in a field where you don't control the environment or experiment at all? My graduate work was as an analytical geochemist. When I was working on core samples from the Deep Sea Drilling Project - you didn't have any kind of control except in what you chose to, or what you had the capability to, analyze for. Talk about holding your breath until the end where you had to figure out what justifiable conclusions you could draw. So you can end up with and present two types of conclusions - the ones the data fully supports, and the ones that you wish you had enough data to fully support. At that point, all you can be is honest. If you're very unlucky, or didn't fully consider the kind of data you might obtain, you could end up having to backtrack mightily or start entirely over. A dissertation is expected to produce a positive conclusion of some sort. In a paper published by an established researcher, you can get away with less certainty, which does not mean you should be less forthcoming about how well those conclusions are supported.
In the fields I've worked in, the general approach is to test for about 100 different conclusions, about 3 of which you'd like to prove, and over 50 of which you're pretty certain are right. Then you have 3 "fantastic" papers, and 50 "solid" papers. Of course, you're probably not going to get the statistical power to disprove those 50...
The drawback to all this is that you can get 10 different researchers testing for those 3 conclusions, never realizing that other people are doing the /exact same research/ and getting null-results (did not pass statistical significance).
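The multiple-testing worry above can be made concrete with a small simulation (my own sketch, not from the comment): run 100 hypothesis tests where every null is actually true, and some will still come out "significant" at p < .05 by chance alone.

```python
import random

random.seed(0)

def fake_experiment(n=30):
    """One simulated 'test': compare two groups drawn from the SAME
    null distribution and return an approximate z statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(b) / n - sum(a) / n
    se = (2 / n) ** 0.5  # standard error of the difference, unit variance
    return mean_diff / se

# Run 100 hypothesis tests where every null is true.
z_scores = [fake_experiment() for _ in range(100)]
false_positives = sum(abs(z) > 1.96 for z in z_scores)
print(false_positives)  # roughly 5 expected by chance alone
```

Which is exactly why ten labs independently chasing the same 3 conclusions, with the null results going unpublished, is such a dangerous configuration.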
This is my field (amyloid, GluT4, insulin signalling, metabolism and cognition) and I very much enjoyed the piece; I'm happy to say that I have been vocally pointing out flaws in the strong amyloid (and especially, in plaques-as-causative) hypotheses for quite a few years now :).
To Scott's Q below: at least part of the issue is that many, many billions of pharma dollars were spent on developing treatments (primarily antibodies) aimed at reducing amyloid load. So there is and was a huge sunk cost. There was also, at least for a while, a gate-keeping role of editors at major Alzheimer's journals and funding bodies.
Medawar has a good discussion of induction in the old sense, and why it is problematic - there is no logic of scientific discovery (even if we relax criteria so that the evidence doesn’t have to *prove* the conclusion but just make it the *right* conclusion). But he and Platt then go on to endorse similarly too-systematic accounts of how science *should* work, whether it’s Popper’s idea that there is *no* confirmation and just attempted falsification, or the idea that some kind of “strong” test can account for all the alternatives.
A lot of people in the internet rationalist community build the same kind of flawed assumptions into their bayesianism, as though we could figure out what is the *correct* degree of belief to have in every hypothesis.
But there isn’t. All Bayesianism gives us is an internal criterion of how to make our own decision making consistent; if we violate this, we run the risk of being necessarily self-defeating. But some people who violate these principles happen to get lucky, and there are many different ways to satisfy the Bayesian rules, some of which will be lucky and some of which will be unlucky in any given domain of investigation.
Many of the problems of the paper discussed in this post are the kind of understandable flaw that any given paper will have - someone with one set of priors will interpret things in light of those. But the bigger problem is in the field that followed it, that didn’t happen to include enough irrational contrariness to force these tests that, in retrospect, seem like clear ones to have worried about.
I posted the following elsewhere in the comments:
"Nice to see someone on this site quoting Medawar, who was, of course, a Popperian. I've tried in the past to get Scott--and also his mentor, Eliezer Yudkowsky--to take a proper look at Popper's theory of knowledge, including his critique of induction (including the Bayesian one they subscribe to). Sadly, they insist on dismissing Popper, not--as far as I can tell--on the basis of anything he actually said, but on the basis of a strawman version that appears in the secondary literature. Sigh."
Logic plays a role in scientific discovery, but not as a means of constructing or justifying conclusions. It is, instead, one of the tools we use to help us criticize and test hypotheses. See pp. 15-21 in https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1.
If it’s not too much trouble, do you have pointers to where people like Scott have dismissed Popperian views?
I’m not a philosopher of science, but I do have a passing/growing interest in the topic, and I’d be curious what people in the rationalist community think of Popper. Naively, I would’ve thought they’d like falsificationism. Usually the critiques I read of Popperian views come from a perspective rooted in Lakatos or Feyerabend.
Scott dismissed Popper in an email message years ago, and I didn't save it. I don't know whether he's written about Popper elsewhere. My discussion with Yudkowsky took place via X and is also lost. (To me, anyway. I'm not a very sophisticated user.) As I recall, however, he referred me to something he'd written at his Less Wrong website. Regarding Lakatos and Feyerabend, I don't agree with them, but at least they deal with what Popper actually wrote. Most of his critics, however, say something like, "No scientific theory can be definitively falsified, therefore Popper is wrong." That's a bad take for two reasons. First, Popper never claimed theories can be falsified in practice; he just suggested that one can distinguish between the empirical sciences and other kinds of enquiry by noting that the former works with statements that have empirical content, i.e., statements that have implications that can be tested experientially, and that can therefore be falsified in principle. The second and more important reason it's a bad take is that falsifiability isn't an important part of Popper's theory of knowledge. As I explain in the paper for which I provided a link, the fundamental elements of Popper's theory are fallibilism, critical rationalism, and the distinction between objective and subjective knowledge.
What is your Twitter account, in case others want to try searching for your tweet?
@JonGuze
Is this the conversation you were thinking of?
https://x.com/JonGuze/status/1701772371189752106
In "Highlights From The Comments On Tegmark's Mathematical Universe", Scott wrote...
> People need to stop using Popper as a crutch, and genuinely think about how knowledge works.
Then he went on to falsify falsifiability with examples that, if he had actually read Popper, he would have known that Popper had answers to his arguments.
https://www.astralcodexten.com/p/highlights-from-the-comments-on-tegmarks
I recall other instances in comment threads where Scott dismissed Popper, but I admit I can't find them at the moment (so maybe my memory is incorrect).
My dad, who was working on his doctorate in the history of science before he passed away, corresponded with Popper. He had some letters from Popper in his files. Somehow, they got lost. I wish I had them now. My dad raised me to be a Popperian, though!
And is this the article you're referring to, Jon?
https://ourkarlpopper.net/2020/10/26/jonguze/
Thanks for the link. I'm impressed that you found that old piece. As you'll see if you read it, I actually met Popper once. It's not, however, the piece for which I provided a link in a previous message. That was this one: https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1. Lots of overlap, but the one I just provided includes my take on Popper's theory of knowledge.
P.S. I'm new to this. There's a box that says, "Also share Notes." What does that mean, and should I do it?
I also don't know what the "also share notes" does. I've never tried it.
In your article, footnote 12 says, "I will, however, refer the reader to Ω." What does Ω refer to? (The footnoted sentence is: "I will not attempt to defend Popper’s theory here.")
"Also share to Notes" will post your comment to Substack's Notes feature, which was their attempt at a Twitter-style timeline that doesn't seem to be doing super well. See Notes at https://substack.com/home
Here’s a discussion I found while looking for another post:
https://conjecturesandrefutations.com/2017/11/22/yudkowsky-on-popper/
The post I was looking for argued that the falsification process is a special case of Bayesian reasoning. It was useful, too bad I can’t find it.
What did Popper say about Bayesianism?
I don't know whether Popper ever wrote about or even knew about the Bayesian approach to epistemology espoused by Scott and the rest of the neorationalists. He died in 1994. However, he wrote extensively about probability and explained why his critique of induction didn't just mean statements can't be shown to be true; it also means they can't be shown to be probable. He says so at various places in The Logic of Scientific Discovery, and there's an extensive discussion in the Postscript. My copies are at work, and I'm at home right now. I'll try to remember to send you more specific citations next week.
P.S. I just asked a friend who's also a Popperian, and he said, "He writes about probability in A World of Propensities, in the appendices to LSD, in the first volume of the Postscript, Realism & the Aim of Science, to a lesser extent in the other 2 volumes, he has an interesting discussion in the appendices to World of Parmenides, Miller has an essay in the Cambridge Companion to Popper on his contributions to probability theory. There's lots of other places where he writes about it but those are the most lengthy treatments."
> his critique of induction didn't just mean statements can't be shown to be true; it also means they can't be shown to be probable
That sounds rather binary, whereas Bayesianism is about DEGREES of probability.
I'd have thought probability was always a matter of degree in some sense, but I'm just a provincial lawyer and not very numerate. You'll have to read Popper yourself, or ask someone more knowledgeable.
Popper says that no matter what degrees of probability you're talking about, you can never show that anything has that degree of probability. This (in many important cases at least) is true. But, in a sense, it misses the point of contemporary Bayesian epistemology. The point is about reasoning from premises (your priors and statistical model assumptions) to a conclusion (a probability or posterior distribution). In most cases these premises have not been shown to be true, and hence the conclusion hasn't either. They're just your best guess. What Bayesian reasoning does show is that your conclusion follows from your premises.
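The premises-to-conclusion step can be made concrete with a minimal sketch (hypothetical numbers, chosen purely for illustration): the arithmetic is valid, but the output is only as trustworthy as the prior and likelihoods you feed in.

```python
# Minimal Bayesian update with hypothetical numbers: the posterior is
# only as good as the premises (prior and likelihoods) fed into it.
p_h = 0.01              # prior: P(H)
p_e_given_h = 0.90      # likelihood: P(E | H)
p_e_given_not_h = 0.05  # likelihood: P(E | not H)

# Law of total probability, then Bayes' rule.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # 0.154
```

Nothing here shows that 0.154 is the *correct* degree of belief in H; it only shows that this value follows from the assumed premises, which is exactly the distinction being drawn above.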
A fair point that comes out of Popper is that a hypothesis cannot be called 'scientifically supported' on the grounds that there's some universally correct method for assigning probabilities - or even Bayes Factors - to the hypothesis.
While this is clearly true (how would you assign probability to solipsism?), it's not useful.
Consider, there are multiple interpretations of quantum theory that all match the equations. On what basis could you decide that one is more probable than another? (And that's without considering the possibility that the experiments might have been wrong, misinterpreted, or falsified.)
But usefully, one needs to set one's priors to match one's remembered personal history. Your memory might be false, but that's not a useful belief (except in narrow circumstances, like "where did I leave that envelope?", which can be resolved by additional experiment.)
There are a few things to be said about that:
Rationalists claim that Bayes subsumes Popper, that Bayes was Popper done right, not that Popper was wrong.
https://www.greaterwrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem
Elliot Temple's attempt to sell Popper on LessWrong was a disaster, and probably put people off the whole thing.
https://www.greaterwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology
I'm also not clear why you don't discuss the issue on LW itself, rather than transient media like X.
Maybe I should. Someone needs to keep Popper's ideas alive. However, I'm far from being the ideal person to do it. I'm old and tired, and--as I pointed out somewhere else in this thread--I'm just a provincial lawyer, not a philosopher or a logician.
P.S. Thanks for the links.
I wouldn't worry about the amateurism; even the senior people at Less Wrong aren't professionals.
Most of the other Astral Codex folks are amateurs, too, as are the LW peeps when it comes to the philosophy of science and philosophy of reasoning. ;-)
> Someone needs to keep Popper's ideas alive.
Generally speaking, knowledge is more accessible when it is "distilled". As I see it, if I tried to fully understand Popper...
...if I started by reading his early books, and commenting on things that don't make sense to me, or where I find alternative explanations superior, you could easily dismiss me by saying something like "of course, that's early Popper, the ideas are not fully developed yet, you actually need to read some of his later books".
...if I started by reading his later books, either they wouldn't make sense to me, or it would seem like they do, I would write an objection, and someone would respond by saying "nope, you got that completely wrong, you are obviously not familiar with his ideas expressed in his earlier books".
...so the proper way would be to collect everything that Popper ever wrote and start studying it chronologically -- the problem is, I have a job and a family, this would take me years of time, and I am not really willing to bet them on a hypothesis. Even if Popper is a very important guy, sorry, he is still only one of many important guys, and I can't dedicate a year of my life to studying each one of them.
...or I could read a review written by someone who is a fan of Popper, but then the obvious complaint would be that the review was wrong, and I should read a different review, or preferably the original texts.
The only way out I can see is a review written by someone who has already read all of Popper, is not Elliot Temple, and can write an ELI5 version at least of the most basic claims *and* the most obvious responses to them.
(Which is not a small task, I get it. Thomas Aquinas was declared a saint for accomplishing something similar for Christianity.)
This might be possible to simplify by not trying to defend all of Popper, but some of his isolated claims. That is, if some claims make sense in isolation from the rest.
By "ELI5" I don't mean literally a five-year-old, not necessarily even a smart twelve-year-old, but the explanation should be relatively self-contained, without references to many other sources (which themselves cannot be understood without following their own references, and ultimately by spending a lifetime studying the field, most of which is basically learning people's opinions on stuff). Ideally, the explanation should not start from the position "Popper was right about everything, and if you don't believe it, you are low-status and we should laugh at you", but by providing simple examples, showing different responses to them, and explaining why some are right and some are wrong... but I guess this part is obvious.
Maybe you are not the right person to do this, but until someone does, the situation will not improve.
(I wonder whether AI could help here. For starters, we could feed it all Popper's texts, and then ask whether a situation similar to something has appeared somewhere in the texts.)
I don't know who Elliot Temple is (should I?), but otherwise I agree. The kind of guide you describe would be helpful for any philosopher who produced a large body of work, but it would be especially helpful for Popper because he wrote about so many different topics, because his works appeared in English out of order, and because he, himself, never bothered to provide anything like an overview or guide. In the paper for which I provided a link I offer an overview--not of Popper's entire body of work--but of his theory of knowledge alone. That's still a tall order, and I don't make any claims for its accuracy. Still, if you're interested, you can read it at pp. 15-21 in https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1.
My attempt at a summary:
Popper says that most people see science as using some kind of induction (observing individual data points, and generalizing). Instead, Popper sees the scientific method as consisting of two separate steps: (1) making a hypothesis, and (2) examining the hypothesis; and he claims that induction has no place in either of them.
Generating a hypothesis is an inherently irrational / creative / intuitive process. Popper does not spend much time talking about this step.
Induction cannot prove a hypothesis, because no matter how much evidence we have in favor of the hypothesis, it may turn out to be false; there is no point where "many" implies "all". Also, arguments in favor of induction are in general circular (e.g. "because we have used induction successfully in the past"). Also, we don't even really have solid evidence, only some unreliable sensory inputs. Shortly: we can never be sure about anything. That doesn't mean that we should stop looking for the truth; it only means we can never be sure that we have arrived at it.
Popper's critical rationalism takes advantage of an asymmetry: contradiction is a more powerful logical relation than support. If a piece of evidence contradicts the hypothesis, one of them must be wrong. (It could also be the evidence; e.g. I might be hallucinating or wrong about something.) Now we can create and test new hypotheses. And this is how scientific knowledge grows.
...does this pass the "ideological Turing test", i.e. does it sound like a fair summary written by someone who is impressed by Popper's philosophy?
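The asymmetry in that summary can be shown with a toy sketch (my own hypothetical illustration, not from Popper): no finite run of confirming observations entails "all swans are white", but a single counterexample contradicts it.

```python
# Toy illustration of Popper's asymmetry: confirmations never entail
# a universal claim, but one counterexample refutes it.

def is_white(swan):
    return swan == "white"

observations = ["white"] * 1000  # 1000 confirming instances
refuted = any(not is_white(s) for s in observations)
print(refuted)  # False: unrefuted, but still not proven

observations.append("black")     # a single contrary observation
refuted = any(not is_white(s) for s in observations)
print(refuted)  # True: the universal claim is now refuted
```

Note that the `False` on the first check licenses nothing stronger than "not yet refuted", which is the fallibilist point.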
David Stove was very much down on Popper, even calling him one of the irrationalists (since Popper doubts the possibility of knowledge, per Stove). One can only falsify or disprove and never confirm, so knowledge doesn't actually advance.
One doesn't hear much about Stove now, but he was a pretty fine writer.
The LW-style rationalism with its "Bayes is the ultimate answer" is probably a misguided reaction to the even more misguided anti-rationality mainstream's pomo-infused position of "indigenous ways of knowing are no less valid than the colonialist science of dead white males". Of course, some cogent points could in principle be extracted from pomo's obscurantist word salad diatribes, but few survive the encounter and maintain their sanity.
> "Bayes is the ultimate answer"
Comes from reading one book by Jaynes and not much else.
I'm pretty sure that Yudkowsky has read more than one book over his life, so the message from that particular one hitting the spot so well for an entire community probably had some additional preconditions.
Not much else in philosophy of science.
Not sure what "pomo" refers to, but there has certainly been a rising tide of irrationalism in the West ever since the beginning of the Cold War. I mostly blame the Frankfurt School and their French counterparts. Critical theory, structuralism, post-modernism, etc., were already well established in the legal academy when I was at law school in 1990. At first, I assumed it was a well-intended but misguided response to the problem of induction and the other knowledge problems identified by Popper and Hayek. When I tried to interest the “crits” in Popper’s alternative response, however, it became clear that, despite the name they had chosen for their movement (critical legal theory), they were completely uninterested in rational discussion, critical or otherwise. They were basically Bolsheviks using continental philosophy to silence their opponents and make themselves feel sophisticated. Seems as though I've been arguing with Bolsheviks my entire adult life. https://www.carolinajournal.com/opinion/what-really-divides-us-as-americans/
Yeah, by "pomo" I mean "post-modernism", "post-structuralism", etc. Ironically, Marx himself predated this, and Marxism was supposed to be a rational, testable theory. But, as we all know, true communism has never been tried, and probably never will.
True communism (not Marxism) has been tried and works well given a few constraints:
1) it has to be a small enough group that the people all know each other.
2) The people in the group have to trust the good intentions of the other members of the group.
3) There needs to be a strong, charismatic leader.
Possibly there are a few other conditions. I'm not sure. Most successful groups that I know of were religious, but not all. OTOH, all did have a "deviant from external society" belief pattern. (E.g. one group that I know of was researching ritual magic. Didn't get any validatable results, but there were several "successes" that weren't repeatable, and many others that were subjective in nature.)
But other than the 10 points in the Communist Manifesto, Marx never gave a detailed blueprint for how Communism was supposed to work. The Bolsheviks had a blank slate to work with, and they called the result Communism. Whether Marx would have been satisfied with Lenin’s formulation is an open question.
Communism predates Marx. E.g., it was (in principle) common in many early Christian communities. The Oneida community in the US was non-Marxist communist. Etc.
The communities tended to fail when they got too large, or when they lost their charismatic leader. Loss of trust was not a usual failure mode, because the charismatic leader dealt with the problem in one way or another. (See also "shunning".)
Could you say something about the charge of irrationalism levied at Popper himself by the Australian philosopher David Stove in Popper and After: Four Modern Irrationalists? On that reading, the Frankfurt school and its French counterparts are inheritors of Popper rather than opponents.
In Popper, a theory is never confirmed but only disproved. So, how does actual knowledge advance (except in the sense of "not this, not this" for the collection of falsified theories)?
I tried to read Stove's book but gave up after a while because of his abrasive style. Why are so many philosophers so obnoxious? Popper was pretty abrasive too in his personal life, but--with the possible exception of Hegel--I don't think he ever insulted his intellectual opponents in his books. Regarding your question, here's what I say in the paper for which I've been providing a link:
"On the basis of the preceding analysis, Popper concludes that we can never be certain we have found correct answers to our questions about the world. To refer to this insight about the human condition he uses the term 'fallibilism,' which he defines as, 'The view, or the acceptance of the fact, that we may err, and that the quest for certainty (or even the quest for high probability) is a mistaken quest.' Fallibilism, it must be emphasized, is not the same as skepticism or irrationalism, and the quest for certainty is not the same as the quest for truth. As Popper says, '[Fallibilism] does not imply that the quest for truth is mistaken. On the contrary, the idea of error implies that of truth as the standard of which we may fall short. It implies that, though we may seek for truth, and though we may even find truth (as I believe we do in very many cases), we can never be quite certain we have found it. There is always the possibility of error.'
"As I have said, fallibilism is not skepticism, and the problems Popper identifies in his critique of induction do not lead him to reject rationalism altogether. But they do lead him to reject a dispositive rationalism that attempts to use reason to answer questions directly, correctly, and once and for all. In its place he offers an alternative that he calls 'critical rationalism.' Critical rationalism takes advantage of an asymmetry among logical relations that makes contradiction a more powerful tool for rational enquiry than consistency or even entailment. To illustrate: from 'A entails B' it does not necessarily follow that either or A or B is true; however, from 'A contradicts B' it does necessarily follow that (at least) one of them is false. This asymmetry means that, while a statement like 'All swans are white' cannot be rationally justified, such a statement can be rationally criticized, and that, according to Popper, is what the scientific method is all about. It is not a method for establishing truth; it is a method for discovering and eliminating error."
For more information read pp. 16-21 in https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit?pli=1. Thanks.
Thanks, a great reply. My question is: given that our knowledge has increased in the past 300 years, from Newton to Einstein, how is this advance in knowledge represented in Popper's theory?
If all we have is provisional theories that have not been falsified yet, then what is our knowledge and how do we say that we know more than we knew 300 years ago?
Scott, you might want to consider converting the footnotes to Substack's native format to make them easier to read.
I was going to, but I couldn't find footnote [1] and eventually gave up.
I enjoyed this post, but I think this was kind of funny:
> Fortunately, at the start of this analysis, we took the time to define the experimental standards needed to evaluate these claims.
No we didn't! We're re-reading a 30 year old paper with an agenda! This is the exact same thing that we're accusing scientific papers of doing.
What would change about the analysis of the paper without an agenda? The author laid out a list of the evidence they'd expect to see in support of the original paper's claims--were those standards unreasonable, or did the author of the essay not fairly evaluate them?
If neither of those things are true, then the author's agenda could very well be "demonstrate proper paper-reading technique using a real example".
Use a REAL example. Pull Seneff's paper on COVID-19 mRNA vaccines-- "where the potential issues are with this massive "nearly untested" intervention." Given that her paper got decent look-see with the OWS staff, I'd say it's a decent "How do you start analyzing whether this intervention works or not."
If you don't think her paper's good enough (or think she's a crank in general), pull a different one. This was a big, splashy intervention. Surely someone else sat down and started saying "here's where this might go wrong, and how to identify if it does."
What is the difference between the paper mentioned in the essay and the one you mention that makes the latter more "REAL" than the former, other than the latter being linked to a more heavily politicized topic?
The Seneff paper is doing what this essay purports to do -- find the "what would we expect to see if this is going to crash and burn and destroy humanity." So it's not the amyloid paper, it's the "proper analysis" -- done as an a priori "here's what we know, and here's where this might kill us all" (sorry to sound all doom and gloom, but this is a massive public health intervention over "most of humanity" -- looking at worst case is part of the fun).
Unlike the above essay, Seneff is putting her money where her mouth is, BEFORE the mRNA vaccine gets widely tested. You want to find out where potential "we have a problem" points are? you should be able to describe them before the paper to be analyzed is even written.
I'd be a lot happier with someone who stood up and said, "before I sit down and look at this paper...", not 30 years later, "here's how to analyze this paper -properly-"
I think the author made a better choice.
1. The topic is less polarized
2. The paper is older, the dust has settled, unlikely to ruffle so many feathers
3. The subject is more mature, so we know the answer
When learning math, I was always annoyed when the practice problems didn't have a neat answer (often they were only one small change from having a neat answer, which made me suspicious that the problem-writers had accidentally flipped a sign or something). How am I supposed to know I'm doing any of these correctly? I know real life isn't always neat, but this isn't real life, it's a theoretical exercise to practice the fundamentals before moving on to messy practical applications.
I hope you find the essay you're looking for, but I think this one is better for the audience.
I really needed to be more clear that the end of the second paragraph was referring to the math problems. I'm aware Alzheimer's is real life and that the field is still active; my point stands that it's more useful to pick a less-controversial topic and to pick apart older research when learning fundamentals.
Talking about COVID vaccines... I mean, good grief, the person to whom I was responding asked for a paper WITHOUT an agenda.
You can't do science if you already know the answer. Science is all about working with uncertainty.
You aren't getting the "a priori reasoning" in the essay above. You're missing out on the fails. You're getting an ex post facto "right and true" version of "how to attempt to flunk a paper."
Seneff walks you through the potential for an antifreeze allergy in a significant segment of the population. I'm pretty confident we didn't hit that failure mode of the mRNA vaccines.
How do you know that you're doing it correctly? You aren't, unless you're uncertain, and working with uncertainty. This isn't math, math gets things correct. Science just makes a better model -- and lo, if tomorrow the model is no longer accurate, we will build a new one (Yes, there's a model out there for "no dark matter, gravity is just different in the galactic arms" -- and yes, if you wanted to look at "how to disprove dark matter" you could look at that paper. But that's just running "one disprove" not Seneff's "here's ten reasons we shouldn't do this").
Thirty years of evidence pointing toward what the correct answer is. But without subjecting a similar paper whose result held up to just as much scrutiny, we can't know how much this evidence is really worth.
The author should find a major paper on HIV or cystic fibrosis or H. pylori. Or GLP-1. Something where we have made actual major progress.
And subject it to the same withering critique.
I suspect any such paper would have gaps, holes and insufficiencies. Because authors are weak, budgets are small, time is finite.
Nevertheless some of those papers would be right in vital ways.
OP's post tells us to engage in strong inference but does not itself include a control. It's a missed opportunity! A meta-failure in my view.
It took me a while to understand the introduction, because I assumed that he was talking about some specific paper. I thought it was a complaint about the rigor of some paper that had yet to be explained, and I guess I was supposed to click on the link to see which paper was being complained about and why it's surprising that a Nobel laureate would complain about it.
Perhaps inserting "[the entire concept of]" into the quote would help.
Who were the brave souls who first articulated these concerns, and what do we know of what became of their concerns and their careers? In other words, science can only work with constant critical appraisal. It seems clear that this critical appraisal was either absent or smashed by powerful forces. That in itself is of course age old. But knowing it is age old, should it be excused, or should we learn from it?
This is an exceedingly important question to answer because it can be used to broadly determine if medicine is a science and whether its conclusions are trustworthy. If the careers of the scientists who question the amyloid hypothesis are healthy, then medicine is operating more or less like a science and can be treated as such. However, if the questioners were instead expelled because of their concerns, then medicine is not a science and should not be given the same level of trust.
Good. These are the right questions to be asking. Now, assume malevolence (devil's advocate if you must). The Pharma companies want to make money, so they're going to throw money at people whose ideas have potential therapeutics attached. So, yeah, I'm pretty sure you're going to see that the questioners did not get the shiny, big rewards -- be they patents($$$), status, or otherwise.
The existence of failure modes doesn’t invalidate the entire modality.
True. How much money do we need to waste before you decide that capitalism is the wrong way to approach medicine? (or: state the failure condition you'd accept for "we need a new solution here").
I’m open to the idea that it isn’t optimal and we should do something else. The question is, 1. where do you want to go from here, and 2. how do we get from here to there while maintaining medicine as a functioning thing?
We need more seed funding that isn't being "gatekept" by rentseekers (in this case, already established scientists). We also need more constraints on active development of plagues. These are two ideas that are at odds with each other. But perhaps there is a better system, with active safeguards.
Not so sure "medicine as a functioning thing" is currently in existence, when a simple "I see a correlation, and the intervention looks harmless" leads to screaming and shit-throwing from the public health wonks.
I think this is a very wrong model of research and of the field. It wasn't "brave souls," and there were no "powerful forces" that "smashed" the resistance. Just the opposite! When the amyloid hypothesis came up, it was met with a great deal of skepticism. As it should have been! It was a new hypothesis and carried the burden of proof, and researchers demanded that proof. The mainstream opinion was that this was perhaps an interesting lead, but that, like most interesting leads in science, it would probably turn out to be wrong.
I think what happens instead is the following: no one "smashes" people with other hypotheses. But it's harder to do research that is "as good" on another hypothesis (let's call it alternative B) than on amyloid (let's call it A), because you have no animal model. It's not enough to say "I want to explore alternative B." In order to get funding, you have to spell out steps 1, 2, and 3 for exploring B. And the thing is that your steps for B will look a lot less promising than the corresponding steps for A, so your proposal is less likely to get funding.
For Alzheimer's, it is additionally true that people got too attached to A, and thus were over-skeptical about alternatives. It's not a sinister organization; it's just that researchers looked around ten years after the animal model was created, and what did they see? Tons of research done on A, and very little on B. So it looked like A was championed by the field, and young researchers started to consider A the standard hypothesis. It is a natural dynamic in science, favoring explanations that can be investigated over explanations that can't be. I think science has some ability to fight this. (Whatever you believe, scientists are a really skeptical bunch of people.) But in this case, it failed.
I think there is another mismatch here that led "the field" astray. The community of people trying to understand what Alzheimer's is, that is a different field from the people trying to cure Alzheimer's. Simply speaking, the first one consists of doctors and gerontologists, basic researchers who look at clinical symptoms like "how well does the patient remember things." The other consists of neuroscientists, microbiologists, and some other "harder" sciences, and this group is much larger because it is backed by industry. Now, the two groups do not talk to each other much, because they speak very different languages and have very different traditions, so talking to each other is really hard. I think the first group was always quite skeptical of the amyloid hypothesis, and also voiced these concerns loudly. But the doubts stayed within their own group and didn't make it to the other group, who were busy working to improve the "clinical markers" (like plaques) that they now identified with the disease itself.
I think it is important to criticize what went wrong, but we should not forget that the problems here are really, really hard. Talking to scientists from different disciplines is super hard. I switched to neuroscience at some point in my academic career, and I needed years(!) before I could really talk to the people in the field with ease. And fostering minority opinions is important, but it also means that you sometimes need to decide to fund a non-promising research proposal and favor it over a promising one. And decisions are made in a very distributed way in science. There is no czar making all the decisions; every single proposal gets different reviewers. So you have to decide that proposal B, where the reviewers say it is not going to get you far, is more important than proposal A, where the reviewers say it is promising. It's not impossible to make such decisions, but it's really, really hard to make them in a "good" way.
I also looked into the original question of what happened to people who were skeptical of the amyloid hypothesis, but this post is getting long, so I will post that separately below. tldr: they stayed in academia and had good careers. I do recommend this interview with Karl Herrup though:
https://news.uchicago.edu/where-has-alzheimers-research-gone-wrong
I tried to check the assumption that people who were particularly skeptical of the amyloid hypothesis had trouble with their careers. I asked Gemini who those were, and followed the two names that it gave. (It gave more, but those were groups of people.) One was Dr. Zaven Khachaturian, who, together with others, "argued that the intense focus on amyloid was overshadowing other potentially crucial avenues of investigation. They cautioned against a "tunnel vision" approach, emphasizing the complexity of the disease". I found a very long 2022 interview with him, which confirmed that he still holds this view today. He essentially said that the mouse model has failed. But he was also full of praise for all the great people he worked with. There were certainly no indications in the interview that he was shut down; just the opposite. He said that after 1995,
"I became an advisor to the Alzheimer's Association and began developing the Ronald and Nancy Reagan Research Institute for them to do the next phase of the development of the program."
That sounds like he had a pretty good career. He also complains in much detail about how difficult it was to study Alzheimer's at all! But once that was established, he does not complain that it was difficult to study alternatives to the amyloid hypothesis.
https://history.nih.gov/collections/oral-histories/khachaturian-zaven-2022/
The other was Karl Herrup, who wrote a recent book (2022?) in which he explains why he is skeptical of the amyloid hypothesis. I read a recent interview about the book:
https://news.uchicago.edu/where-has-alzheimers-research-gone-wrong
He does mention that it's hard to get funding for alternative hypotheses. But he also does not mention that anyone actively tried to shut people down. Of course (this is my interpretation now), if you want to get a permanent position as a young researcher, then you need funding, so if you follow other hypotheses, you will likely not be able to stay in academia. So I think it's simply the dynamics of science that I explained in my previous post, without anyone following a sinister agenda.
Thank you for this careful reply. You don't mention, however, the malfeasance that did take place. Nor do you mention, as others have noted, that powerful economic forces likely had an impact here. Of course I did not mean some secret cabal conspiring ...
Hm, I am not sure that I understand what you mean. There are certainly powerful economic forces coming from the pharma industry. But they would very much prefer to base their development on a correct hypothesis, no? They are not deliberately steering research down the wrong path.
I do agree that they have a strong incentive to push a drug onto the market even if it doesn't work, once they have developed that drug. But that comes long after things have gone horribly wrong. They would much prefer to have a drug that actually works.
And of course, there is an overwhelming economic force on researchers to get funding. And this is not always aligned well.
About malfeasance, I see one main "evil" thing, which is that researchers try to oversell their own work. I think it's true that the creators of the mouse model oversold theirs. I don't think they went further than their peers; everyone tries to oversell their work, and not only in academia. But academia (and other places) should develop protection against this overselling, and that protection failed in the case of Alzheimer's. Is that what you mean?
was the research oversold or falsified?
I guess you don't need brave souls when you've seen drugs based on the research fail and fail again. By this point even Big Pharma itself will start to ask what's going on.
Is there a good step by step guide to reading scientific papers anywhere? This post has a good start at one, but having a checklist of things to think about would be pretty helpful.
Good question! There are actually a bunch of guides, but the ones I've seen are pretty generic, and they tend to make the assumption that authors are, by default, honest and unbiased. Ask your favorite LLM. It will spit out a few for you.
But I'd recommend: "How to read and understand a scientific paper: a guide for non-scientists" by Jennifer Raff. She suggests ways to separate sketchy papers from serious ones.
https://violentmetaphors.com/2013/08/25/how-to-read-and-understand-a-scientific-paper-2/
The hard part is that a lot of reading papers is "taste" and "smell." Until you're deeply enmeshed in the methods yourself, you can't identify a paper that has that "replication crisis smell," because you aren't sure if their methods are standard in the field or if they're torturing the data. Sometimes you can spot stuff -- browse Andrew Gelman's blog long enough and you'll pick up clues -- but things like "this paper is clearly bogus because that bacteria does not grow in that medium" are almost impossible to catch unless you've actually tried to grow said bacteria in said medium, or you know what (genetic edit such-and-such) does to bacterial metabolism.
Other things matter too -- like "huh it's weird that this paper is in Scientific Reports and not a specialized journal, is there anything that prevented it from getting published elsewhere? Oh yeah, they completely omitted this critical control that reviewers familiar with that model would have asked for." Or sometimes "Nope this paper is great, guess one of the authors had a publication credit for reviewing or their library is covering the open access fee!" Very, very tough to spot and recognize without actual training in the field, going to conferences, etc., no matter how many papers you read on your own.
Read first, bet later: as always here is the manifold market for who will win the not-a-book review contest, now updated with this candidate!
https://manifold.markets/BayesianTom/who-will-win-acxs-everythingexceptb
Since we have a list of most of the finalists (from https://www.astralcodexten.com/p/open-thread-387), wouldn't it make sense to add them all to the market now?
As is, I'm confused about what it means to buy "Other". Naively one might think that a vote for "Other" would mean that some review other than the three currently in the market would win. But you're planning to add reviews as they are posted, and eventually all possible reviews should appear in the market. Doesn't this mean that "Other" can never possibly win?
Other is a mechanic that is automatically included in multi-question Manifold markets. Shares in "Other" get converted automatically whenever new answers are added.
Given that, I admit that I like the idea of having a weekly rollout of answers so we can see the impact they cause when they are added. Also, we do not know all the finalists beforehand, such as the School review.
The illustrative application of scientific rigor is what marks the best of this essay. I've been trying to work on the "detective" mode while reading papers for a while, so I'm glad to read this well-written piece. Overall, I'd say it leaves the story a bit unfinished by its own initial promise. This essay could've used a larger view alongside this segment: on other evidence ignored, on more recent hypotheses that show promise. Scott's comment addresses another potential direction too.
Nice to see someone on this site quoting Medawar, who was, of course, a Popperian. I've tried in the past to get Scott--and also his mentor, Eliezer Yudkowsky--to take a proper look at Popper's theory of knowledge, including his critique of induction (including the Bayesian one they subscribe to). Sadly, they insist on dismissing Popper, not--as far as I can tell--on the basis of anything he actually said, but on the basis of a strawman version that appears in the secondary literature. Sigh.
I liked this review for its hands on review of the science, but I’m not really convinced that amyloids aren’t a predicate to Alzheimer’s. I’m not sure anyone holds the strong hypothesis at all these days, and critique of a 20 year old paper doesn’t seem like a persuasive way to argue against the weak one.
It's a review for a mostly lay audience. To me, it seems clear that it's not intended to advance the actual state of the debate in the field but rather to give us a better window into the history of it, which I think it does admirably.
For a more thorough discussion of this topic, follow the series here:
https://journalclubwithmyka.substack.com/p/of-mice-and-mechanisms
https://journalclubwithmyka.substack.com/p/of-mice-and-mechanisms-part-2
https://journalclubwithmyka.substack.com/p/of-mice-and-mechanism-part-3
https://journalclubwithmyka.substack.com/p/of-mice-and-mechanism-part-4
Having read the whole essay I'm still unsure what exactly this is a review of? Overly condensed scientific papers? The Alzheimers literature? Making transgenic mice? This feels like a regular essay with the word "review" slapped on it so that it can meet the criteria for a review competition.
It is a review of a "game-changer" paper in Nature published in 1995.
But I don't see any links to this paper. Do you have one?
You can download it here if you register for free at Researchgate.
https://www.researchgate.net/publication/278397957_Alzheimer-type_neuropathology_in_transgenic_mice_overexpressing_V717F_-amyloid_precursor_protein
The original link to Nature is here:
https://www.nature.com/articles/373523a0
The first link is inside the Essay.
>Through a painstaking process called sub-cloning, equal parts molecular biology and divination, they managed to insert into their mouse a human APP gene carrying the mutation found in families with high rates of early-onset Alzheimer's.
>You can design your construct perfectly on paper, but in truth, you solve the problem by tweaking reagents like an alchemist, trying to find the perfect brew to coax your foreign gene into the plasmid at high efficiency.
I guess I'm an alchemist then :) I didn't choose the name Metacelsus for nothing...
More seriously, I'm glad I'm not stuck with the molecular biology tools of 1995. Or worse, 1984, when researchers had to clone a gene by literally scraping pieces of mouse chromosome 17 off of glass slides using tiny needles. https://pubmed.ncbi.nlm.nih.gov/6697397/
I wonder what future generations will consider our current version of scraping chromosomes off of slides?
it's aigc 🙁
If the content is AI-generated, it is pretty good for the category. I and many others couldn't do it with the tools I know. I regularly review terrible AI-generated papers, which make no sense and do not resemble this. At worst, I would expect, the backbone is human-made bullet points transformed by AI into text?
Evidence for that?
> At some point, science’s self-correcting machinery—and the brilliance and curiosity of a new generation of researchers—will win out.
This statement or something like it is de rigueur for essays that criticize a certain episode of science while wishing to preserve respect for science in general. But as Popper (mentioned elsewhere in the comments) might have noted, it's not falsifiable: we have no way to determine how many relevant errors have not yet been corrected, let alone how many of those will eventually be corrected.
Sure, we're batting 100%: all known errors have eventually been identified. But that's the wrong denominator.
The best we can realistically hope for in science is that errors have an acceptably short half-life and that the process of science is close to optimal for the tradeoff between making new knowledge and discovering errors.
How would we ever know anything that is not provisional? Newton's theory is provisional; Einstein's is also provisional. But in what sense has our knowledge advanced in going from Newton to Einstein?
Intuitively we know, of course, that we are more knowledgeable now, but how can we say so on the basis of Popper?
Verified predictions span a wider range of known phenomena?
This was great to read, but I kept feeling like I must have read this author's work before, until I realised what was happening was that it reminds me a lot of Claude, particularly 4 Opus. I'd be interested to know if he helped out.
The essay feels persuasive to me and makes me despair about the review process - how it fails to catch problems, and how slow the correction mechanisms are after a paper makes it into a (top) journal. Nowadays it is worse than before, because reviewers are scarce. What is the future? How do we deal with this?
This review was incredibly boring. I'm sorry, but I just have to force myself to even skim it.
Because Alzheimer's runs in my family, I always feel like I should put more time into trying to get a clear picture of it. The fracas over the amyloid hypothesis, though, has made some part of my brain so skeptical of any information I hear that I've developed a serious block about this. Would anyone be able to point me to an easy-to-understand source of what we know to be true about its etiology?
I enjoyed this narrative immensely. And the logical flow of the author's argument was superb. My only nit to pick is that I wish there were some links to the paper(s) quoted and the images displayed.
This link was directly in the essay. Maybe you were confused by the researchgate portal - it requires free registration.
https://www.researchgate.net/publication/278397957_Alzheimer-type_neuropathology_in_transgenic_mice_overexpressing_V717F_-amyloid_precursor_protein
I should have been clearer about my specific nits. It was the opening paragraph that really confused me...
> “The scientific paper is a ‘fraud’ that creates “a totally misleading narrative of the processes of thought that go into the making of scientific discoveries.”
Which scientific paper is a fraud? The embedded link leads to Medawar's paper on what constitutes fraud, but someone else seems to be claiming that some unspecified paper is a fraud. Yes, we've got a link to the 1995 Games, et al paper. Is this the paper that someone (unidentified from the quotes within quotes) is accusing of fraud, though?
And if the Games, et al paper is the root of this discussion (and it seems to be), are those images (panel a through d) from the paper? If so, there should be some source reference under them.
The "fraud" claim is a little exaggerated. It describes something I know from my own work in lower-tier molecular biology for food science. Our work is often somewhat chaotic, changing hypotheses and protocols, but when we sit down to write a paper, we make it look as if the final protocol was the one we chose from the beginning through super-logical steps. This is expected of us, for easier reading; readers do not want the paper too convoluted. So it is not fraud as you would imagine it. The experiments and results are all truthful. This tidied-up narrative is a general feature of most scientific papers.
I once wrote in a paper that my work was interrupted by my maternity leave. My boss facepalmed and removed the sentence. However, one reviewer asked about the time gap, which was apparent from several details in the paper, so we had to put the sentence back :-) . So... this kind of "fraud".
The paper by Games et al. from 1995 had another problem. It contained true findings, but they were wrongly interpreted, and the review process was not strict enough. Apparently, everybody wanted it to be true that we were closer to an Alzheimer's cure.
I checked the list of the 34 co-authors on the "Alzheimer-type neuropathology in transgenic mice overexpressing amyloid precursor protein" paper, and only Eliezer Masliah was involved in the recently-discovered tranche of fraudulent amyloid plaque studies. He was caught manipulating images, and 132 papers related to Alzheimer’s and Parkinson’s research were flagged.
According to the Guardian, "Independent neuroscientists reviewed it and described the scope as “breathtaking” — not easily dismissed as simple mistakes." He was forced to resign from the NIA/NIH, but he also held a tenured professorship at UCSD. No action was taken by UCSD against him, but he's now listed as professor emeritus, so I guess he gets to keep his pension.
... but yes, it is a confusing structure. It should be stated clearly at the beginning what this is a review of.
The link to the paper by Games et al. 1995 is buried in the middle. If it is the subject of this review, one would expect it at the start.
Maybe the author would indeed say that she is not reviewing this paper, but rather a particular hypothesis about Alzheimer's disease.
At the end, some other, later papers are linked too.
A superb and deeply interesting review. My only quibble is the missed opportunity:
A company named _Athena_ produces a mouse model that mysteriously appears in the literature fully-formed, without a creation story? The parallel was so cosmically on-the-nose that its absence created a joke-shaped hole in the narrative. And if there's one place on the internet I expect those specific holes to be filled, it's here.
For this paper, the peer review process failed.
I really liked this. Thanks. I know science has always had this issue with getting stuck in the wrong paradigm. But the capture mechanism seems stronger these days. In many fields there is this one idea that captures all the grants, people, papers. With very weak incentives to think outside that box. String theory, Lambda CDM, Smell is shape and those are only the ones I know about.
A non-science guy's thoughts.
1. Certain fields of scientific research are both extremely expensive and reliant on unstable (?) techniques, things that require extreme precision but may contain inherent aspects that can affect the research in hard to discern ways.
2. This leads to acceptance of the first theories that show some benefit, demonstrate results, or obtain funding/consensus.
3. However, this doesn't mean the research is actually efficacious: the expense prevents reviewing a lot of different angles, and the reward structure incentivizes first-to-market.
I think the scientist side though can't help: attacking the hypothesis isn't the problem, the problem is more in resource allocation and management. We need to fund more types of research, things like better tooling and training, and stuff.
To use an analogy, we are unearthing a labyrinth with many entrances, but it is so costly that our need forces us to take the first ones we can find. But whether an entrance leads deep inside or not, we can't tell.
The goal would be to get a lot of people unearthing and exploring. The more entrances uncovered, the more chance one leads to the ruin's heart.
I feel like we tend to underrate management and support relative to intelligence; rather than promoting individual brilliance, we need to build a much larger base of support to enable more approaches to flourish. Not sure how, though; capitalism does not directly reward this, and increasingly competition leads to quick consolidation instead of what we hoped for.
#2 matches my anecdata closely. The joke I always heard was "You need to tell the NIH you're going to do something totally revolutionary and paradigm-shifting that also follows naturally from the established literature".
On the one hand I enjoyed this essay, on the other hand I wonder if maybe there should be a control. Go back and look at a paper that really did change the history of the disease it studied and subject it to the same scrutiny.
All papers are written by humans, under space and budget constraints. Perhaps even the good papers show weaknesses and have cracks in them?
A meta-demonstration of strong inference.
By now, it is possible to guess the author's name from the comments. She also reviews old papers that pointed in the right direction. They are published here on Substack.
Thank you. I don't know if I found her but I did find a woman who writes interesting things about Alzheimer's, brains, and autoimmunity (and psychiatric diagnoses).
Took half an hour to an hour to find. Wouldn't even have attempted without your comment.
Now I have several hours of good reading ahead of me for next week :)
Her. I didn't want to write her name or that of the substack because she's supposed to be anonymous until the voting is over.
Some (very) minor complaints:
- I'm confused about the references to "glassblowing" : I've pulled many pipettes for mouse embryo mouth-pipetting and done embryo microinjection and there isn't any "glassblowing" involved, it's just heating and pulling to a desired thickness, which is also what is described in the linked article. No blowing required. Is "glassblowing" meant to refer generally to "shaping glass with heat", or is there an additional technique being referenced that I am not familiar with?
- I'm also confused at how difficult the essay says molecular cloning and microinjection are. At first I assumed this must just be how things were in the 90s, but then I ran into "Today, fancy $100K integrated microinjection/scope systems help the process along (though it still takes years to master)". The linked video shows a technique which I've learned and taught, and it's really not that hard. If you let the embryos grow a few days you can see which ones you killed, and I think anyone can get >90% efficiency with a week or two of practice. Though a laser to put a hole in the zona pellucida really helps (which the person in the video does not seem to have).
The overall thrust of the essay isn't affected by these minor issues.
Nice to find another molecular biologist in the commentariat! Agreed on both of these points, but as someone who also does "real" glassblowing (both furnace and torch), I don't mind counting pulling pipettes as a simple form of glassblowing. But you're right, it's a super easy form.
Many neat glassblowing projects (~all solid sculpture work) don't involve actual blowing either, as long as you're actively shaping hot glass I'm happy to count it.
I'll add a third minor complaint (or actually just flesh out the cloning part of your second complaint) - cloning isn't all that hard. It was harder in the 90s and there is still some amount of voodoo wizardry, but any decent undergrad researcher can clone constructs fairly reliably after a month or two in the lab.
I see! I'm no glass-blowing expert, and it seems I was too picky with my definitions. I'll tell my lab-mates that a glass-blower said our pipette-pulling counts. :-)
I agree re: cloning. I've heard that even some of the fancier high-schools do some basic cloning now.
>However, cracks are showing in this façade. In 2021, the FDA granted accelerated approval to aducanumab (Aduhelm), an anti-amyloid drug developed by Biogen, despite scant evidence that it meaningfully altered the course of cognitive decline
This is supposed to be an example of how the narrow-minded establishment is just totally off on the wrong track. But this author doesn't make any mention of lecanumab or donanumab, which are also FDA approved anti-amyloid mabs being actively prescribed now. Does the author not know about them? Anyone with even a shallow knowledge of the current field would... Or does the author know, but not like to mention them because they have better evidence for efficacy and so are harder to explain away?
Does the commenter not know how to spell lecanemab and donanemab... Or does the commenter know, but does not like to spell them correctly so that other readers will have a harder time discovering that many scientists consider "the efficacy of lecanemab and donanemab ... unimpressive and the evidence for harm to patients ... much more solid than the evidence of any noticeable benefit"?
- Derek Lowe (https://www.science.org/content/blog-post/does-it-work-does-it-do-harm-and-more-basic-questions)
Of course they're just typos, not evidence of ignorance or insidious intent. Likewise, I can think of half a dozen justifiable reasons why an abridged review of the seminal Alzheimer's/amyloid paper does not need to also review every drug approved in its wake.
There's a recent trial of donanemab which seemed to show that it slowed, but did not reverse, disease progression. I'm wondering if you have any criticisms of it?
https://clinicaltrials.gov/study/NCT03367403?id=NCT03367403&rank=1&tab=results
It seems like a significant number of patients saw complete amyloid removal, yet results were very moderate and getting a significant result relied on including functional parameters along with cognitive ones to achieve statistical significance. Decline was slowed, but not reversed, even with complete removal of amyloid. (I'm not sure how they got complete removal if donanemab doesn't target vascular amyloid.)
I think you're asking me. I have a background in statistics and in human biology and anatomy. So, full disclosure, this is not my field. But I'll have a look.
At a glance...
As you noted, donanemab flushes away lots of amyloid for tiny changes in test performance. For whatever it's worth, the difference in means using a basic two-sample t-test is not significant at p=0.05 for half of the first six treatment/placebo comparisons.
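For anyone who wants to reproduce this kind of back-of-the-envelope check: a Welch t statistic can be computed from nothing more than the published group means, SDs, and sizes. A minimal sketch (the numbers below are placeholders for illustration, NOT the actual donanemab trial values):

```python
# Welch's two-sample t statistic from summary statistics only -- handy
# when a trial reports group means/SDs but not raw data. For arms of
# roughly this size (~125-131 each), |t| > ~1.96 corresponds roughly
# to p < 0.05 on a two-sided test.
from math import sqrt

def welch_t(mean_t, sd_t, n_t, mean_p, sd_p, n_p):
    """t statistic for treatment vs placebo, unequal variances assumed."""
    se = sqrt(sd_t**2 / n_t + sd_p**2 / n_p)  # standard error of the difference
    return (mean_t - mean_p) / se

# Placeholder numbers (not trial data): identical means give t = 0.
t_null = welch_t(10.0, 4.0, 131, 10.0, 4.0, 125)
# A 2-point mean difference on the same hypothetical scale clears ~1.96.
t_diff = welch_t(12.0, 4.0, 131, 10.0, 4.0, 125)
```

With the real summary statistics from the trial results page plugged in, this reproduces the "half of the comparisons aren't significant" observation; `scipy.stats.ttest_ind_from_stats` gives the exact p-value if you want more than the large-sample threshold.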
So, at the very least we can say that getting rid of lots of amyloid doesn't do very much to eliminate Alzheimer's disease biomarkers or cognitive symptoms. I'd also add that even where the differences are significant, the effect sizes are so small that you have to question whether it translates to a meaningful difference in QOL ("minimal clinically important difference," the smallest change a patient can perceive, is sometimes used... maybe not for AD). And, like you mentioned, it looks like the treated Alzheimer's patients continue to deteriorate, just more slowly.
In exchange for these minuscule 'improvements,' early-stage Alzheimer's sufferers get a worrying increase in brain bleeds and related symptoms. 10 of 131 suffered cerebral microhaemorrhages (compare to 3/125 placebo). 18 of 131 suffered superficial siderosis of the CNS (4/125 placebo). There's probably some co-occurrence here. Troubling nevertheless. Even more troubling is that there's a movement to start prophylactically treating people with plaques but no evidence of cognitive AD. Knowing the side effects, would you take it?
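Those adverse-event counts can be checked the same way. A sketch of a two-sided Fisher exact test on the 2x2 tables quoted above (18/131 treated vs 4/125 placebo for siderosis; 10/131 vs 3/125 for microhaemorrhages), using only the standard library; `scipy.stats.fisher_exact` would do the same job:

```python
# Two-sided Fisher exact test for comparing event counts between arms.
# Convention: table [[a, b], [c, d]] with a, b = events / non-events in
# the treatment arm and c, d = events / non-events in the placebo arm.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c              # treatment total, total events

    def p_table(x):                        # hypergeometric P(X = x)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    probs = [p_table(x) for x in range(lo, hi + 1)]
    # Sum over all tables at least as extreme (probability <= observed);
    # the small tolerance guards against float round-off at the boundary.
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

p_siderosis = fisher_exact_two_sided(18, 113, 4, 121)   # 18/131 vs 4/125
p_microbleed = fisher_exact_two_sided(10, 121, 3, 122)  # 10/131 vs 3/125
```

The siderosis difference comes out clearly more extreme than the microhaemorrhage one, which matches the eyeball impression from the raw counts.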
Notes:
- The Lowe blog post I linked to initially itself links to some of the relevant studies, which I haven't read.
- I've assumed above that the scales for the trial test variables are simply linear -- one unit change is the same amount of observable difference everywhere along the range.
This was interesting. I've never looked over clinical trial results before.
Thanks for your time. I was leaning in a similar direction. Given the relatively small effect size, I was almost wondering if the benefits we did see could be some kind of statistical artifact. Given the increase in infusion-related reactions in the experimental group, the study couldn't be properly blinded for many participants. And the trial was run by Eli Lilly, so there's that potential conflict of interest.
But yeah, this seems to suggest an upper range for what amyloid targeting drugs might do, I think. And the upper range is just not that great.
"Thinking about this work triggers me. I spent years of my PhD setting up and monitoring mouse breeding pairs for timed pregnancies. Every morning began with the ritual of checking for copulatory plugs (don’t ask!). Only ~20-25% of pairs would successfully mate overnight: some females aren’t receptive; some males are layabouts. The failed pairings had to be separated and re-paired in the evening, so fertilization timing could be precisely tracked. Once mating was confirmed (those copulatory plugs again), the female was euthanized, and her oviducts—tiny tubes containing the precious fertilized eggs—carefully dissected. Then you flush out the one-cell zygotes using a finely-tuned glass pipette (yet another moment where glass-blowing skills came in handy)."
I do hope that this review was AI-Generated, because I would like to think that a human author, writing out a paragraph about how they spent years snuffing out countless mouse lives for (self-admitted) minimal gain would have a slightly more nuanced takeaway than what seems to read as "man, I hate being reminded how much of a time-consuming hassle it was killing pregnant mice". Then again, I've known STEM grads who seemed genuinely puzzled why their grade school children were so upset after hearing them describe cutting the heads off small vertebrates for research courses.
'This discrepancy matters because in human Alzheimer's disease, cognitive decline correlates most strongly with neuronal loss (cell death), not with plaque burden. Some patients with significant amyloid deposits show minimal symptoms; others, with fewer plaques but more neurodegeneration, experience severe dementia.'
If this is true, isn't it strong evidence against the amyloid hypothesis already? Why go to all this trouble?
Good review. Think the conclusion could have summarized the evidence against the paper more, and maybe included more funny offhands like tanking $2 billion.
"before looking at the paper, you imagine the experiments and results that would justify the claim" is a bit silly, and tbh, potentially arrogant! (E.g., even if you are a specialist, the authors likely know more about the specific question they are studying than you do.)
To know what a paper is claiming, you need to start reading it! At least the abstract etc... To me, this review seems to present an idealised and impractical process.
Lazy reading is bad, but there's nothing particularly wrong with someone reading the paper and then asking pertinent probing questions. While reflecting post-reading might potentially be more biased (maybe...?) than forming hypotheses before reading, the former is also going to be a hell of a lot more efficient.
"Comparisons between brain regions with and without plaques. If you see neuronal death, it should co-occur with the presence of plaques, not be spread throughout the brain."
Seems far too presumptuous. It could easily be the case that plaques cause a cascade effect that influences the whole brain (not just locally).
Ideally, the pattern in the brain of transgenic mice should mimic what happens in human brains with real Alzheimer's disease. It is an empirical question what the pattern is like: neuronal death in the vicinity of plaques, or in areas remote from plaques as well?
There can be nobody who does research, lab based or not, who would disagree with Medawar. As for this essay - which I found interesting and informative - at least the author took the trouble to read the original papers. Far too often, you’ll see a citation where at best only the abstract was read (and which might misrepresent the paper’s core findings); at worst, only the title of the cited paper! Finally, when I was a trainee in neuropathology in the 1980s, we looked at ‘[amyloid] plaques & [tau] tangles’ as markers of Alzheimer’s, mainly on age-compared quantitative grounds. We distinguished Alzheimer’s Disease (presenile) dementia, as Alzheimer described in 1907 - where you shouldn’t see more than a scattering of plaques and tangles in a ‘young’ brain - from Senile Dementia of Alzheimer Type, SDAT - where you’d expect to see many plaques and tangles throughout the brain, compared with scattered plaques and tangles in the non-dementia brain. We’d also see a mixed vascular/Alzheimer picture. And Lewy body dementia was on the horizon as a thing.
Small nitpick, but I don't think scientists read papers like a blog post, especially under a time crunch. You typically read the abstract to filter in papers you want to read, skip to the figures/results referring to methods as necessary. Then you go to the discussion to reflect on the paper as a whole. I personally mainly enjoy reading the discussion, and only read the other stuff so I can understand it.
Of course, this depends on your level of familiarity with the field. Most introductions are redundant and repeat the same old titbits over and over again and only need to be read if you're new to the topic.
Okay, now do a result of similar age and importance that *does* hold up.
According to Nobel laureate Sir Peter Medawar, the scientific paper is a "fraud" because it presents a clean, linear narrative that hides the messy reality of scientific discovery. While this structured format makes complex work more accessible and allows others to build on it, it also has a significant drawback: it can be so compelling that readers, including other scientists, may not critically spot potential flaws or alternative interpretations.