
This reminds me of Beware Isolated Demands For Rigor.

Dec 17, 2021·edited Dec 17, 2021

I 100% agree with the sentiment here, but maybe we SSC/ACX people are weird in how we use "evidence"? I looked up what dictionary.com says:

* that which tends to prove or disprove something; ground for belief; proof.

* something that makes plain or clear; an indication or sign:

Maybe some people think "evidence" means "proof?"

Similarly: In the last few years I've noticed headline writers using "refute" to mean "dispute," which makes me crazy. But some dictionaries say "refute" means "deny or contradict" - can I blame them?


“No evidence that 45,000 died of vaccine complications” and “no evidence that vaccines cause new variants” are just the same kind of “no evidence” hand-waving about possibly/likely true claims as the other cases. If vaccines didn’t contribute to new variants it would violate everything we know about evolution. And with several billion people vaccinated, cases of vaccine-induced myocarditis running around one in ten thousand for males, and plenty of other possible side effects, it is perfectly reasonable to suppose that tens of thousands have died due to vaccination.

[LOL Scott now appears to have edited the piece to remove the example about vaccines causing new variants being false, but it was there originally]


There's no evidence long covid is real. Except if you accept that ME/CFS is real. Then you have a strong prior that an infection can cause serious long-lasting disability after the acute infection is gone. But of course, there's "no evidence" patients with ME/CFS aren't just making things up.

The absence of a single clear biomarker in ME/CFS is considered to be "no evidence," and that absence of evidence is definitive for some people, rather than a good reason to do more research. I hope we can learn more about how long covid works.


Re homeopathy: Low Dose Immunology sometimes uses dosages that are close to the highest concentrations homeopathy uses. There could be a mechanism there, for some instances of homeopathy. The logic behind it is generally bananas... or at least "alternative."


I think part of the problem is the way we use euphemism to soften our language. "No evidence" has a literal meaning that is about how we haven't studied it enough yet, but in some contexts is a nicer way to say idiotic. While in some ways I think this is a very good thing, it sometimes comes at the cost of clarity.

I've noticed this in other contexts too. I think describing a moronic claim as "flawed logic" is a similarly technically correct understatement. It cedes more ground than necessary to motte-and-bailey and other fallacies. I was once in a conversation with a friend where I wanted to strongly express that their claim was a fallacy, but I found that I didn't even have the language to express the strength of my disagreement.

I don't have a good solution. I certainly don't think we would be better off with constant reddit-style "you're an idiot" "no u" arguments, but an inability to clearly express the wrongness of flawed logic seems like a serious limitation.


Thank you for this. Communication from the WHO, CDC, FDA et al has been embarrassingly horrible throughout the entire pandemic. “No evidence of…” has been a particular thorn in my side for the exact reason you mentioned. When the WHO would vehemently make obviously wrong statements like “no evidence that masks work”, “no evidence that asymptomatic people transmit the disease”, etc., it’s pathetically comical how they shot themselves in the foot. No one believes them now.

What’s wrong with saying “we don’t know if masks work yet”? It’s so much clearer, and saying that wouldn’t have destroyed their reputation. Instead of “No evidence that hydroxychloroquine works”, say “based on all available studies, hydroxychloroquine does not work.”

I think the biggest takeaway from the pandemic is that the deference to science has evaporated in the age of social media, and if scientists and science/medical groups like the WHO, CDC or FDA want to regain their ability to lead people, they will need to reengineer their communications strategy. Scientifically accurate but stupidly confusing phrases like “No evidence that…” really need to be eliminated. Clear, concise and easy-to-understand language needs to be used 100% of the time. Literally dumb it down so that it’s clear to the lowest common denominator, so that we don’t get another politicization like what we saw with COVID.


Point of clarification: This piece confuses “science journalists” with “science communicators,” which obscures how this problem occurs. “Science communicators” describes a job class ranging from public information officers to PR flacks; they aren’t journalists, and journalists in turn aren’t communicators, in the sense that their primary obligation isn’t to promote new research but to report on and contextualize it. “Science communicators” write the press releases with phrases such as “no evidence,” and far too often journalists just repeat that language because they aren’t sophisticated enough (or are too busy) to interrogate that claim or the valences of the phrase. So, yes, the problem is bad science communication, but also bad journalism, or stenography masquerading as journalism, which a lot of science journalism is. (Edits for grammar only.)


I think the natural "Well, then what?" followup is probably @literalbanana on the skilled practice of ignorance: https://carcinisation.com/2020/01/27/ignorance-a-skilled-practice/

Every finding is at least a little indexical, and it's ultimately a personal choice on how much you're allowed to abstract and generalize beyond a literal retelling of what happened. For something like UFOs, you're presumably agreeing that "Mr. X will tell you that he was abducted by UFOs" is a replicable truth, and taking issue with when the indexical "these people will tell you" can wash into the less-indexical "This thing can be sometimes observed". This is a judgement call! But we don't have great language about making these judgement calls, and I think that's ultimately the snarl that makes science communication so insufferable. A desire for "objectivity" means trying to find the most defensible source of judgement call, bigger groups are more defensible than individuals, and bigger groups skew conservative because you drift from needing one-person-saying-yes to no-people-saying-no.

I think an ideal way to write scientific articles is for the author or interview subject to just literally describe the process they undertook, and what they found when undertaking it. Let the reader do the work of deciding whether that process leads to findings that generalize. Pretensions of objectivity (THIS one is "anecdote", THAT one is "evidence") boil down to an appeal to authority that "science" is an institution that can reliably decide which findings are allowed to generalize. And since we watched institutions fuck it up roughly a billion times throughout the pandemic, hopefully we can stop pretending we believe that!


One of the best encapsulations of this phenomenon I’ve read


Reminds me of this genius, but maybe apocryphal, marketing ploy from the 19th century: "Warning: the many reports that this tonic makes your hair grow back have not all been verified."


Garrett Jones once said something that I think about a lot: “To a wise Bayesian, when evidence is cheap to acquire, absence of evidence is indeed evidence of absence.” Perhaps this conversation could use some depth by thinking about the search costs of that evidence.
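Jones's point can be made concrete with a toy Bayes'-rule calculation (my own sketch; all of the probabilities below are invented for illustration):

```python
def posterior_given_no_evidence(prior, p_find_if_true, p_find_if_false):
    """P(H | no evidence found), by Bayes' rule."""
    p_none_if_true = 1 - p_find_if_true    # chance a search comes up empty if H is true
    p_none_if_false = 1 - p_find_if_false  # chance it comes up empty if H is false
    numerator = prior * p_none_if_true
    return numerator / (numerator + (1 - prior) * p_none_if_false)

# Cheap evidence: if H were true, a search would almost surely have found it by now.
cheap = posterior_given_no_evidence(0.5, p_find_if_true=0.9, p_find_if_false=0.05)
# Costly evidence: nobody has seriously looked, so finding nothing means little.
costly = posterior_given_no_evidence(0.5, p_find_if_true=0.1, p_find_if_false=0.05)
print(f"cheap search:  P(H | no evidence) = {cheap:.2f}")   # ~0.10: real evidence of absence
print(f"costly search: P(H | no evidence) = {costly:.2f}")  # ~0.49: barely moves the prior
```

The same "no evidence" report drags the probability down to roughly 10% in one case and leaves it near the 50% prior in the other; the entire difference is the search cost.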


Scott: Mistake theory! Here's how we can fix it.

Journos: Haha lets bash that guy over the head with our partisan conflict theory powers ;)


I feel like this is a downstream result of a curious phenomenon in the modern intellectual climate: that we have almost entirely discounted arguments from expert knowledge as invalid, in favor of arguments from evidence, so we have no way -- as a society -- to argue that "a reasonable understanding of the science says this should not be possible".

Like: should masks work? There's no evidence, but it follows that if a virus is transported in aerosolized particles then yes, they should help. Should homeopathy work? No, a basic understanding of chemistry and biology says it's absurd. But those arguments, which would be perfectly convincing in person, carry no weight at the level of common understanding that journalism is allowed to assume, so journalists are reduced to only reporting on the state of the evidence. Hence the nonsensical headlines.

I kinda think that in the far future people will look back on our era and talk about this as one of the dominant social trends of the time: that weird time when all arguments resorted to data and no one was allowed to be intelligent.

Incidentally, this is the failure mode that my corporate tech job has. No one is willing to make bold claims about what would be good decisions -- everything is heavily rooted in a sort of ritual of performing data science, which lets us assign value to stupid short term decisions (if we add this popup, revenue increases, the data proves it) but not wise long term decisions (if we keep popups to a minimum people will like our brand and stick with it for years, giving brand loyalty and doing right by our customers).

A lot of the stupid things in tech follow from this kind of decision-making, which as far as I can tell is endemic to Bay Area-style businesses, but not really a healthy way to run a company. I used to work at Amazon and they notably _didn't_ work this way, which IMO is one of the main reasons they've managed to grow so large, because they have the ability -- bestowed by Bezos -- to take very long-term strategies instead of chasing short-term improvements in their metrics.


I think you're right. And it's not just scientists publishing papers: doctors making prescription decisions seem to think about these two different concepts the same way: https://www.overcomingbias.com/2008/08/doctor-there-ar.html


I think you're being way too kind to the news articles at the top.

These are professional communicators we're talking about. If they wanted to convey "we haven't looked into X but our priors on X are high" they could have. In fact their priors on X were low. Or, worse, their priors on X weren't so low but they wanted other people's to be low. They were intentionally using the "no evidence" ambiguity to make it sound like they'd looked into X and disconfirmed it when they hadn't.


As a conspiracy theorist, I find it maddening how I am constantly mocked for my conspiracy theories, even though every single one of them thus far has turned out to be true.



Ooh, do "linked to" next. As in "red wine linked to heart health".

And there's a special place in hell for "Scientists Find". What it sounds like: total scientific consensus. What it actually means: two guys published a paper.


Remember also that reporters don't write the headlines. Headlines are written by low-level editorial staff glancing at the first paragraph.


Oh, I have *ample* evidence that other people are smarter than me.


I'd be more or less forgiving of two of the headline formulations: "No Clear Evidence" and "No Hard Evidence." Perhaps "No Proof Yet" would be a good phrase to use in the early days.


This post is lacking in conflict theory. Feynman once said, "... poets do not write to be understood". Well, journalists write to be misunderstood. While there is no evidence that the exaggeration of scientific findings is not accidental, it is not.


Well, all I can say is that you must have an astonishingly high prior that the purpose of science journalism is to inform, rather than (say) to entertain ha ha.

Also...when you say the search for truth is Bayesian, I would have to say that depends on what you mean by the phrase. If you mean the kind of search for the truth that a jury does, or a voter, or a physician listening to a patient's tale of his symptoms in the ER, or that any random Person A does when listening to Person B's remarkable story about what happened last Saturday, then that seems quite sound to me.

But if you mean the search for the truth in the sense of discovery, scientific progress, innovation -- then, no, I would say Bayes' Theorem is of no assistance at all -- indeed, I would say it is more or less explicitly rejected by empiricism, the most successful model for scientific progress the world has ever known, as a throwback to medieval scholasticism -- the belief that the quality and persuasiveness of the argument is the best way to decide the truth of a proposition. In the modern era we *train* our scientists to question what seems certain, to doubt theoretic arguments, most especially those that are personally persuasive. (Feynman wrote a great essay about this which I cannot find, unfortunately.)

Personally, I've never known someone to make a genuine (nontrivial, scientific) discovery thinking along Bayesian lines, because those lines always point to the conventional wisdom. Genuine discovery is a black-swan thing, and you only get there by black-swan thinking -- by, among other things, resolutely ignoring the reasonable priors and wandering off into "crazy" directions. Suppose air is *not* just one element, as every reasonable chain of logic and all the evidence suggests it is? What if time is *not* the same for every observer, batshit crazy as that seems at first? What if RNA could catalyze chemical reactions all by itself -- somehow?

I think there really is (perhaps unfortunately) a surprisingly thin wavy line between original thinking that leads to powerful discovery and irrationality, paranoia, UFOlogy, even madness, which has been noted more than once among the most creative people. How you tell the difference I couldn't really say, being neither mad nor profoundly creative.


Part of this, to me, seems to be due to science institutions being unwilling to give science communicators thoughtful priors when no new evidence has been generated. When a science communicator queries an institution and there is no new evidence, it simply returns "no evidence."

We seem to be in this strange world, where the only way to get credible priors, absent new evidence, is to find the handful of smart scientists on twitter, who are willing to deal you some blackmarket priors.


Err, your picture looks like a lot of things I completely agree with (transmission from kids, mask wearing being useful) and then the huge claim that Covid is a lab escape, which generally I only see in anti-vax circles who are taking horse dewormer voluntarily and prophylactically. I know it wasn’t the point of the post, but if you currently believe covid is a lab escape, I’d be fascinated as to why. And I think it’s quite distracting from your argument to have it in there as an example.


the problem is more complex:

Trisha Greenhalgh:"Will COVID-19 be evidence-based medicine’s nemesis?"


Published: June 30, 2020

"..The 20th-century logic of evidence-based medicine, in which scientists pursued the goals of certainty, predictability and linear causality, remains useful in some circumstances (for example, the drug and vaccine trials referred to above). But at a population and system level, we need to embrace 21st-century epistemology and methods to study how best to cope with uncertainty, unpredictability and non-linear causality.."

"... philosophical contrasts between the evidence-based medicine and complex-systems paradigms. Ogilvie et al have argued that rather than pitting these two paradigms against one another, they should be brought together .."


(parachute) another medical satire: "maternal kisses" --> "moratorium on the practice"

"Maternal kisses are not effective in alleviating minor childhood injuries (boo-boos): a randomized, controlled and blinded study"


"Maternal kissing of boo-boos confers no benefit on children with minor traumatic injuries compared to both no intervention and sham kissing. In fact, children in the maternal kissing group were significantly more distressed at 5 minutes than were children in the no intervention group. The practice of maternal kissing of boo-boos is not supported by the evidence and we recommend a moratorium on the practice."

context: https://www.statnews.com/2016/01/13/journals-publish-fake-studies/


This isn't bad communication, it's a deliberate tactic. The aim is to make certain claims look bad and their proponents crazy - using the same language to describe the case for alien abductions and a lab origin of the coronavirus is a feature, not a bug. Looked at positively, it's an attempt to curate and shape the debate and lead their audience away from misinformation.

Of course, the information environment around coronavirus has been far more fast-moving than the media is used to, so it just reveals them for the toads they are.


But note that even Bayesian reports wouldn't be a perfect silver bullet to solve this problem!

E.g., suppose the Bayesian Times published the headline "50% Probability That Schrödinger's Cat Is Dead". This could mean either "We have no idea what Schrödinger's cat is up to, but based purely on its breed and date of birth the prior on it having died by now is 50%" or "Actually, Schrödinger has just been spotted forcing his cat into a quantum murder box, and according to quantum mechanics there is exactly 50% probability that it is now dead".

Now this isn't *as much* of a problem, because *if* you do your priors right these Bayesian headlines would still give you the information you need to inform your decisions (e.g., whether to bet on Schrödinger's cat being dead) even if they wouldn't really tell you how strong the evidence (as opposed to the prior) for a given claim is.

It is still a problem, though, both because priors are hard to do right (especially if a priori there are literally infinite mutually exclusive equally likely scenarios) and because in one case the probability that a new piece of information will significantly shift your probability in either direction is low, and in the other it is high (but equally high in both directions).

Fortunately (?) it's not a problem that this is a problem because the Bayesian Times will never come to be, so we'll never even get to mitigate the magnitude of the problem such that this contribution is significant. I guess. It would still be nice to have a systematic way to fix this - maybe going a little meta and using confidence intervals rather than just point estimates of probability?
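One way to cash out that confidence-interval suggestion (my own toy example; the Beta parameters are invented) is to represent the state of knowledge as a distribution over the probability rather than a point estimate. Two headlines can both say "50%" while one is fragile and the other is solid:

```python
def beta_mean(a, b):
    """Mean of a Beta(a, b) credence distribution over a probability."""
    return a / (a + b)

def update(a, b, success):
    """Conjugate Beta update after observing one Bernoulli outcome."""
    return (a + 1, b) if success else (a, b + 1)

# Same 50% point estimate, very different states of knowledge:
ignorant = (1, 1)    # flat prior: "we have no idea what the cat is up to"
informed = (50, 50)  # concentrated: "we understand the mechanism; it really is 50/50"

for name, (a, b) in [("ignorant", ignorant), ("informed", informed)]:
    a2, b2 = update(a, b, success=True)
    print(f"{name}: 50.0% -> {100 * beta_mean(a2, b2):.1f}% after one observation")
```

A single observation drags the ignorant 50% to about 67% but barely nudges the informed 50%; publishing the interval (or the Beta parameters) instead of the point estimate would expose exactly the distinction the Schrödinger example hides.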


"No Evidence" ==> "Managing uncertainty in the COVID-19 era" (BMJ Opinion, 5th July 2020)

This is because COVID-19 is, par excellence, a complex problem in a complex system.

Complex systems are, by definition, made up of multiple interacting components. Such systems are open (their boundaries are fluid and hard to define), dynamically evolving (elements in the system feed back, positively or negatively, on other elements), unpredictable (a fixed input to the system does not have a fixed output) and self-organising (the system responds adaptively to interventions). Complex systems can be properly understood only in their entirety; isolating a part of the system in order to 'solve' it does not produce a solution that works across the system for all time. Uncertainty, tension and paradox are inherent; they must be accommodated rather than resolved.

Managing uncertainty in a pandemic: five simple rules

1. Most data will be flawed or incomplete. Be honest and transparent about this.

2. For some questions, certainty may never be reached. Consider carefully whether to wait for definitive evidence or act on the evidence you have.

3. Make sense of complex situations by acknowledging the complexity, admitting ignorance, exploring paradoxes and reflecting collectively.

4. Different people (and different stakeholder groups) interpret data differently. Deliberation among stakeholders may generate multifaceted solutions.

5. Pragmatic interventions, carefully observed and compared in real-world settings, can generate useful data to complement the findings of controlled trials and other forms of …





👻👻👻ooooooooh👻👻👻I am the ghost of Christmas future, Emil Kirkegaard, they threw my Nazi pedophile ass out of a helicopter. Repent, repent of the heresy of Jensenism, There is still time! Lewontin was right!👻👻👻ooooooooh👻👻👻

The evidence against the hereditarian hypothesis of the black-white IQ gap in the US was compelling decades ago; it is decisive today. Continuing to hold to it under present circumstances requires a commitment to absurd views about EA heritability in Africa.

The real answer was so obvious that it's been embedded in popular culture for years.

ARSA SNP frequencies in the two populations: https://m.youtube.com/watch?v=f3PJF0YE-x4

Birth cohorts affected by tetraethyl lead: https://m.youtube.com/watch?v=q8wQYK1tJUc

👻👻👻👻ooooooh👻👻👻it’s getting spookier and spookier in here



Bayesian reasoning is great, but it only works if people start out with more or less the same priors. If I firmly believe that poltergeists exist (P(ghost)~=1), then I'm likely to interpret any random tremor or noise in the house as evidence for angry spirits. You could tell me that angry spirits are totally imaginary (P(ghost)~=0) until you're blue in the face, but you wouldn't be able to provide much more than mere assertions. You could point out how ghosts are unscientific, but doing so would merely lower my confidence in science, as opposed to ghosts -- since, from my point of view, ghosts just obviously exist.
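The stickiness of a near-certain prior can be shown with Bayes' rule in odds form (a toy sketch of my own; the likelihood ratio of 0.1 per observation is an invented number):

```python
def posterior(prior, likelihood_ratio, n_observations=1):
    """Bayes in odds form: posterior odds = prior odds * LR^n."""
    odds = prior / (1 - prior) * likelihood_ratio ** n_observations
    return odds / (1 + odds)

believer = 0.999  # P(ghost) ~= 1: the near-certain poltergeist prior
# Suppose each careful, ghost-free inspection of the house is 10x likelier
# in a world without ghosts (likelihood ratio 0.1 for the ghost hypothesis).
for n in (1, 3, 6):
    print(f"after {n} inspections: P(ghost) = {posterior(believer, 0.1, n):.3f}")
```

Mere assertions (likelihood ratio near 1) leave the 0.999 essentially untouched forever, which is the commenter's point; only observations the believer themselves agrees are diagnostic can grind the prior down, and even then it takes several of them.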


I think "no evidence" is part of an implicit society-wide project to make knowledge somehow not require intelligence and common sense and rationality to acquire and apply. Because any time people are using their brains, different conclusions will be reached. Thinking is risky and threatens collective solidarity.

Many in the PMC would prefer to neutralize this threat with rules that can be mindlessly, "objectively" applied, like p < 0.05 or "things are true if and only if there is a peer reviewed study in this list of 'respected' journals", to define what knowledge is in the public square.

Which is not to say they're wrong to want this. The benefits are obvious! But rational folks need to have a clear-eyed accounting of the pros and cons, which may be very context-dependent.


If one wanted to be less charitable (and I do), one would argue that some lazy/dishonest writers use this phrase to launder weak claims by giving them the same wording as strong claims.


//Well, because it seems intuitively obvious that if something is spread by droplets shooting out of your mouth, preventing droplets from shooting out of your mouth would slow the spread.//

Only if there's no risk compensation.


Snake oil:

FWIW I've read an article that claimed that the oil of a specific Chinese snake reduced inflammation. I don't recall whether the claim was that it needed to be topical or consumed.

Given that, "snake oil" seems to me to be sort of "misrepresented traditional medicine, used out of context and with fraudulent ingredients". I often think that this more nuanced meaning would make common uses of the term "snake oil" more meaningful.

Still... there is reasonable evidence that snake oil works, if you choose the proper snake, and if it's really snake oil, and if you want to reduce inflammation (possibly of the joints... I believe that it was topical, I just can't remember the article well enough to assert it). But those details tend to get lost in the shuffle.


Suggesting that scientists and journalists are merely confused is a very charitable interpretation of their motives. The message of all those articles, the party line that everyone's dutifully repeating is "Do not panic, do not change your behaviour and especially do not buy any N-95s, we need those for ourselves".

When a bunch of people who later admitted to a goal of "prevent the public from buying up the supply of masks" said "no evidence that it's airborne", we should consider the possibility that they weren't confused about modeling the world and it was not a coincidence that their misleading word choice aligned perfectly with their goals.


> This is utterly corrosive to anybody trusting science journalism.

I've been saying for years that most science journalism is borderline trash, sometimes even when the journalist in question has training in science. The pandemic has brought in all kinds of journalists that haven't a lick of training in science, so you can guess how my opinion has shifted.

> You read “no evidence of human-to-human transmission of coronavirus”, and then a month later it turns out such transmission is common. You read “no evidence linking COVID to indoor dining”, and a month later your governor has to shut down indoor dining because of all the COVID it causes.

This has been happening for years. People read headlines about studies claiming one thing about fat, cholesterol, sugar, or what have you, and then a week later they see a headline claiming the exact opposite. I think this is a big reason why there's so much lay person distrust in science.

Scientists have only compounded this by not taking the replication crisis much more seriously.

> Here we should reject journal articles because they disagree with informal evidence!

I'm not sure I count that as informal evidence. If anything, mechanistic explanations are more formal than observational data. In any case, "knowledge" has to be logically consistent, so what we have are two sets of conflicting evidence, only one of which I think counts as "knowledge".

> But I think the most virtuous way to write this is to actually investigate.

Sadly a lost art in today's journalism. I was reading recently about how all of the journalists who were groomed in environments in which facts were important, and who learned how to investigate and report those facts in a balanced way, have aged out. They have been progressively replaced by social media-driven pseudo-journalists that aren't much interested in objective truth so much as "my truth".

I really appreciate this post though, the problems of journalism are legion but this might just be a good starting place for change.


I don’t think journalists, as a rule, are confused. We write stuff that we know editors will like and use, so they'll pay us and our children won't starve. It's option 2 throughout, it's clear that "there's no evidence for" in the trade essentially means "only anti-science Trump supporters believe that." I think Scott knows this, but he's being nice and giving us journalists the benefit of the doubt.


Another great article showing that, to be an intellectual, you don't need to be all-knowing. You need only take ideas seriously.


I also think "no evidence" often means:

"We are uncomfortable with this assertion and since no one has looked into it at all we're dismissing it out of hand"

It's like, epistemologically, nothing can possibly be true until someone has gathered data.

I saw this a lot with the notion that TikTok might be doing some spying for the Chinese state. This seemed to just make sense but folks who were uncomfortable with the idea would say "there is no evidence for that"


I feel like this discussion is good but it sort of misses the distinction between:

People have studied this and the studies show no evidence

There's no evidence because no one has studied it

For example, lots of traditional remedies get dismissed because there is no evidence that they work, but there's also no funding to study them. If a traditional remedy DID work, even the people who didn't pay to study it could profit off that news, so very little study gets done.

Is that the same as a case where real researchers have actually checked and found no evidence?


Haha I literally opened the FT to a headline saying there was "No evidence" that Omicron was less severe, immediately got annoyed that they weren't more clear, then found this in my email.


I’m a bit more cynical - it seems like 90% of the time, “no evidence” in an article basically means “I, the author, want this to be false but I don’t actually have (or can’t be bothered to find and communicate) an opposing fact to refute it”. I see mostly two flavors:

1) “No evidence Omicron is less deadly” - there is literally no strong evidence one way or the other, but the author wants to give the impression that “Omicron is less deadly” is emphatically false.

2) “Donald Trump said, without evidence, that…” - an isolated demand for rigor. Politicians of any stripe rarely give citations for their claims, whether they are well founded or not. Tacking “without evidence” to the claim, without citing opposing evidence, is basically a throwaway line to discredit the statement without going to the effort to refute it (if it is indeed refutable!) and tends to be isolated to statements the author opposes.


Every time I've had to oil a snake, snake oil has done the job perfectly. That's at least anecdotal evidence that snake oil works?


Beware also the ever-popular argument from ignorance.

"The fact that we have no evidence that Donald Trump is not Mickey Mouse wearing a disguise just means we need to look harder! In the meantime, we can safely assume that Trump in fact has big black ears and also a tail. Say, has anyone ever seen Trump and Mickey in the same room together?"


I hate to do self-promotion, but I made similar points in February to address people who were saying "we don't know how well AstraZeneca works in US populations / the elderly", "we don't know how well NovaVax works", etc because of a lack of Phase III RCT data. People were ignoring all the other strong reasons to believe those vaccines would work (in particular immunogenicity studies).


Different from Scott's post, I explain how one can reason from prior knowledge / scientific theory in both a Bayesian and "Popperian" way.


>I challenge anyone to come up with a definition of "no evidence" that wouldn't be misleading in at least one of the above examples.


**Evidence:**

- A verifiable and compelling reason to believe that something is true.

**No Evidence:**

- Lack of a verifiable and compelling reason to believe that something is true.

**Parachutes:** People die when they fall long distances. We can come up with many examples of this. It will be relatively hard to find examples of people who have jumped from a great height and not died (without a parachute). People who use parachutes may die sometimes when something goes wrong, but they die at drastically lower rates than people who do not use parachutes. This may not be formal scientific evidence, but it is a reason, we can verify it, and it is also compelling.

**Aliens:** Individual reports of abductions may be a "reason" per se to believe in abductions, but as far as I know, none of them have been compelling; we can verify that people claimed to be abducted, but we can't verify that they're telling the truth. Now if someone got abducted, and a lot of people saw it, and it was caught on video, it would be compelling, and we would have verification of the event; if we had such an event, then perhaps belief in alien abductions would be the norm. If people did get abducted hundreds of times, you might expect there to be more witnesses to such a conspicuous event, but there aren't. A witness in the area who saw nothing is a sort of evidence that a particular abduction didn't happen.

**Homeopathy:** "No evidence" doesn't really apply here. It's just that the evidence opposed to it is even stronger. You did acknowledge that the understanding about water, chemicals, and immunology counted as informal evidence against homeopathy working. I think it's a mistake to assume that evidence from journal articles must always be stronger than informal evidence, or vice versa. Maybe if you keep doing the formal study and it keeps showing the same results, it could get there. I have a hard time believing that those 90 studies are the only studies on homeopathy out there. Do any studies show that it doesn't work? Do studies that show that it doesn't work tend to be better at getting replicated than those that show it does? I'll admit that I haven't personally done a meta-analysis of homeopathy studies, but if I did and found that the positive results outweigh the negative ones, then I may actually have to revise my opinion on the practice.

**Henry VIII's Spleen:** We have verifiable and compelling evidence that Henry VIII was a human. We know who his parents were, and we know where he was born. We also know what he looked like, and he looked like a human. I'm assuming that we have verifiable and compelling evidence that humans have spleens. I don't feel like checking right now, but if you'd like to contest this, go ahead. Via modus ponens, that works out to verifiable and compelling evidence that Henry VIII had a spleen.

Expand full comment

The problem is that there is always some evidence that snake oil works. All someone has to do is write a paper that looks "sciencey" enough. Most people won't be able to tell the difference. So saying "Snake oil doesn't work" isn't good communication either.

What would be better is: "Studies overwhelmingly have found that snake oil doesn't work. There are some % of studies claiming it does, but their methodology is severely flawed for x, y, and z reasons."

The second line is more respectful towards the intelligence of the average person.

Expand full comment

It seems to me that the solution that's really being pointed to here is for journalists to write stories that are explicitly Bayesian - including descriptions of a set of priors, and a Bayes' formula update based on whatever new evidence has been marshalled. The math of the process operates simply enough that you can basically include all of it within an article that outwardly looks like it's written in plain and straightforward English.
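For what it's worth, the arithmetic really is compact enough to print in an article. A minimal sketch, with all the numbers invented purely for illustration:

```python
# Minimal Bayes update of the kind an article could spell out.
# All probabilities here are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# "Before this study, we gave the claim a 20% chance. The study's
# result would occur 80% of the time if the claim were true, but
# only 10% of the time if it were false."
posterior = bayes_update(prior=0.20, p_evidence_if_true=0.80,
                         p_evidence_if_false=0.10)
print(f"Updated probability: {posterior:.0%}")  # -> 67%
```

The whole update is one multiplication and one normalization, which is the commenter's point: it would fit in a sidebar.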

Expand full comment

"Is there "no evidence" for alien abductions? There are hundreds of people who say they've been abducted by aliens! By legal standards, hundreds of eyewitnesses is great evidence! If a hundred people say that Bob stabbed them, Bob is a serial stabber - or, even if you thought all hundred witnesses were lying, you certainly wouldn't say the prosecution had “no evidence”! When we say "no evidence" here, we mean "no really strong evidence from scientists, worthy of a peer-reviewed journal article". But this is the opposite problem as with the parachutes - here we should stop accepting informal evidence, and demand more scientific rigor."

Okay! Obligatory Chesterton quotations that are (I hope) apposite!

(1) From an essay "The Real Journalist":


"I will give an instance (merely to illustrate my thesis of unreality) from the paper that I know best. Here is a simple story, a little episode in the life of a journalist, which may be amusing and instructive: the tale of how I made a great mistake in quotation. There are really two stories: the story as seen from the outside, by a man reading the paper; and the story seen from the inside, by the journalists shouting and telephoning and taking notes in shorthand through the night.

...That is a little tale of journalism as it is; if you call it egotistic and ask what is the use of it I think I could tell you. You might remember it when next some ordinary young workman is going to be hanged by the neck on circumstantial evidence."

(2) From the story "The Trees of Pride":

"I am too happy just now in thinking how wrong I have been," he answered, "to quarrel with you, doctor, about our theories. And yet, in justice to the Squire as well as myself, I should demur to your sweeping inference. I respect these peasants, I respect your regard for them; but their stories are a different matter. I think I would do anything for them but believe them. Truth and fancy, after all, are mixed in them, when in the more instructed they are separate; and I doubt if you have considered what would be involved in taking their word for anything. Half the ghosts of those who died of fever may be walking by now; and kind as these people are, I believe they might still burn a witch. No, doctor, I admit these people have been badly used, I admit they are in many ways our betters, but I still could not accept anything in their evidence."

The doctor bowed gravely and respectfully enough, and then, for the last time that day, they saw his rather sinister smile.

"Quite so," he said. "But you would have hanged me on their evidence."

So am I saying "believe everything, including fairies and homeopathy"? No, I think what both Scott and Chesterton are getting at is that (a) there's a great deal of confusion around what does and does not constitute 'evidence' and (b) we apply tests of 'do I think this is reasonable or not?' when weighing evidence - we might not believe the Cornish farmer about ghosts on his bare word that he saw a ghost in the wood, but we might very well convict a man on that same farmer's bare word that he saw him in the wood where a murder happened.

"Be a little more discriminating and a lot less quick to believe all you read" is the moral here, I fancy.

(2) "This doesn’t have the same faux objectivity as “No Evidence Snake Oil Works”.

I think perhaps they do this because there is always the faint chance that somebody may come up with a study saying that *this* particular snake oil does in fact work, and if they confidently run a story saying that "Auld Doctor McSnakey's Guaranteed Cure-All Oil" is bunkum, then somebody tests it and goes "I hate to break it to you but...", they will be liable to get sued by Doctor McSnakey.

Expand full comment

Apparently you can live without a spleen - so it's not THAT obvious that Henry VIII had a spleen. Just sayin'.

Great article by the way.

Expand full comment
Dec 17, 2021·edited Dec 17, 2021

>Here’s another: No Evidence Vaccines Cause Miscarriage.

Uh so doesn't this fall under "this is probably true, but we haven't checked"? I mean vaccines definitely cause fevers, and fevers probably cause miscarriages, so it would be surprising if vaccines *didn't* cause miscarriages. After all, many European countries do not recommend the vaccine for pregnant women.

Edit: thinking about this a bit more, it seems like Scott is treating science communicators as people who are honestly trying to describe the world accurately, whereas I see them as choosing what to say based on what their statement will cause other people to do or believe. The linked example is not exactly this, but if 45k people in the world die of vaccine side-effects, that's a risk many science-communicators would be willing to take. But they think that admitting this will cause backlash, so phrasing it via "no evidence" is safer. See also https://www.lesswrong.com/tag/simulacrum-levels . Thinking about it this way makes the early cases make sense also.

Expand full comment

There has now been a randomized controlled trial of parachute use when jumping from airplanes.

Read. The. Entire. Article.


Expand full comment

It's not even just that people get the two types of "no evidence" confused, it's that people often exploit the confusion to mislead people, as a Motte and Bailey.

Like when people were saying "No evidence COVID-19 came from a lab", a lot of them were saying:

"1. This thing is super plausible... but we haven’t checked yet, so we can’t be sure."

while knowing that people would interpret it as:

"2. We have hard-and-fast evidence that this is false, stop repeating this easily debunked lie."

Expand full comment

I assume part of the communication/journalism dilemma here is that most people are never going to read more than a headline about any particular claim. So any nuanced, investigative analysis that can't be summarized as a sentence header above a picture on Facebook is almost the same as writing nothing at all.

Expand full comment

"“No Evidence That Snake Oil Works” is the bread and butter of science journalism."

From Scientific American:

"In a series of later papers, the most recent published in the Annals of Nutrition & Metabolism in July 2007, Shirai and his team evaluated the effects of Erabu sea-snake oil on a number of outcomes in mice, including maze-learning ability and swimming endurance. In both cases, snake oil significantly improved the ability of the mice in comparison with those fed lard." https://www.scientificamerican.com/article/snake-oil-salesmen-knew-something/

Expand full comment

Further to the alien abduction claims, I once met a psychiatrist who had extracted an implant from the roof of the mouth of one of her patients who reported repeated alien abductions.

Coincidentally, elemental analysis showed that it had the same make-up as mercury dental fillings.

Expand full comment

Something really confusing (to me at least) and somewhat common in science reporting (and for that matter in actual scientific papers) is the construct:

"Study shows no evidence of XXXX".

https://www.google.com/search?channel=fs&client=ubuntu&q=%22study+show+no+evidence%22 for many examples (**). For ex. "The findings from this study show no evidence of increased mortality risk in association with higher hemoglobin values in endogenous EPO patients."

I think it's generally meant to say that the study looked for an effect and found none. (I mean, how do you actually 'show' no evidence?) But it's such a backwards way of saying that. They didn't find no evidence of an effect; they found evidence that there is no effect! Or more properly, that the effect is within epsilon of no effect. Or even more properly, that the effect is smaller than the minimal detectable effect size given the power of the study, at the certainty of whatever p-value they used. But I'm not sure I've ever seen it reported like that. It actually feels really close to the fallacy that "statistical significance is not itself significant," in a way.

Consider "Study shows no evidence Ivermectin improves COVID outcomes" vs. "Study provides evidence that Ivermectin does not improve COVID outcomes". Why favor the former phrasing?
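The "minimal detectable effect" framing can actually be computed and reported. A sketch using a simple two-sample z approximation; the study numbers (group size, standard deviation) are invented:

```python
# A sketch of "what this study could actually have detected":
# the minimal detectable effect (MDE) of a two-sample comparison,
# given its size, alpha, and desired power. Uses a normal (z)
# approximation; all numbers are illustrative.
from statistics import NormalDist
from math import sqrt

def minimal_detectable_effect(n_per_group, sd, alpha=0.05, power=0.80):
    """Smallest true mean difference a two-sample z-test of this
    size would detect with the given power."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sd * sqrt(2 / n_per_group)

# A null result from 50 subjects per arm with sd = 10 only rules out
# effects bigger than roughly this:
mde = minimal_detectable_effect(n_per_group=50, sd=10)
print(f"Minimal detectable effect: {mde:.2f}")  # roughly 5.60
```

So "study shows no evidence of an effect" could honestly be reported as "the study would have caught a difference of about 5.6 or more; anything smaller was invisible to it."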

**Interesting aside, the first two results of that search for me are a preprint study with the conclusion "No positive RT-PCR result was found in the semen or testicular biopsy specimen. The results from this study show no evidence of sexual transmission of 2019-nCov from males." and a COVID literature review that quotes from a study that "The results from this study show no evidence of transmission of SARS-CoV-2 through vaginal sex from female to her partner. However, the risk of infection of non-vaginal sex and other intimate contacts during vaginal sex should not be ignored." Not sure what to make of that, either my search history is a bit weird or google/people are really worried about sexual transmission of COVID...

Expand full comment

I don’t remember where I first heard it, and it itself is a somewhat fuzzy statement, but a good mantra for journalist types and consumers of news is that “absence of evidence is not evidence of absence.”

Expand full comment

In most of those "no evidence" headlines, the journalists were surely trying to use "no evidence" in the "almost certainly not true" way. It's not a mistake; it's a conflict.

Expand full comment

"Maybe it would sound less authoritative. Breaking an addiction to false certainty is as hard as breaking any other addiction. But the first step is admitting you have a problem."

Actually, I have a feeling that at least part of the problem here is due to underconfidence. Saying "there is no evidence for X" creates a much safer line of retreat than claiming that X is indeed false.

Expand full comment

I wonder if instead of "There is no evidence of X" we could just say "There is evidence that X is wrong" followed by the evidence, which might include a list of studies which attempted to prove X but failed. And for the cases where there really isn't evidence yet, we'd just stick with "There hasn't been much research done yet so we don't know if X is true."

Expand full comment

One could argue “There is no evidence exonerating covid vaccines in the deaths of 45,000 people”.

Expand full comment

I agree with all that but I'm not convinced this is merely the result of journalists being confused. Rather, I think it's the result of the incentives of journalists not to legitimize (or even repudiate) views their peers see as bad/harmful.

I mean, consider a journalist assigned a piece about whether or not COVID might have originated from a Chinese research lab. The journalist realizes that if they say 'no evidence either way' they give a wide swath of the public an excuse to start blaming the Chinese, and fears that it will lead to anti-Chinese violence/discrimination and bad public policy.

Of course, this should be balanced against the harm of decreased public trust in experts. The problem is that responsibility for that decreased trust is diffuse in a way that means the journalist isn't likely to suffer loss of social status or even find themselves feeling guilty. OTOH if they write the article that makes it clear that we can't yet reject the idea the virus came from China and then someone beats someone of Chinese descent to death they have to feel bad and may even find their career at risk.


This kind of asymmetry in moral blame is basically the same thing that causes the FDA to withhold probably life-saving medications and overregulation generally.

Expand full comment

I've been thinking this for years but no evidence. Until now.

Expand full comment

"No evidence that wearing a mask works" and "no evidence that vaccines kill 45000 people" are exactly the same thing: "scientists" (the media, and adjacent truthmongers) want you to believe that the thing in the second half of the sentence is false. The difference between those two is that the second one is actually false. But "scientists" aren't making the statement based on whether it's false--they'll say it for unproven things they want you to believe and for proven things they want you to believe.

This is not an example of "scientists" using the same terminology to inform you of two very different degrees of proof. This is an example of "scientists" using the same terminology for the same purposes, that may happen to have different degrees of proof, but where they don't care about that part at all.

Expand full comment

The way to write without bullshit is to prefer active verbs. People only pull off these obfuscations by using the passive voice. “There is no evidence” is ambiguous, but “we did X and Y and Z and didn’t find evidence” cuts to the quick. Knowing you’ll have to explain yourself forces you to put in the rigor.

But if we’re really diving deep on “what is evidence”, and “how do we communicate it”, sure Frequentism is laughably wrong but Bayesianism doesn’t solve the problem either. Even Bayes provides no formal and rigorous methods of idea generation or rejection.

Suppose you only have one theory of some data, and for sake of argument let’s say it’s an exponential model. Suppose the true mechanism is actually distributed Pareto. Give me a formal, rigorous, verifiable way to reject the exponential model and replace it with the true model. How will you know that the outliers are the story and that you shouldn’t toss them out?

Science is something people do when they understand the statistical nature of the world and can clearly explain what would change their mind, regardless of how formal or rigorous their reasoning. Newspapermen live in a world of absolutes where they are the authority telling us How It Be. Journalists don’t get any points for saying “I was wrong and you’ve changed my mind”.

Expand full comment

I’m sure I’m kicking a hornet’s nest because of the likely priors of many Scott followers, but … I’ve often thought about this flaw in reasoning with regard to the common comment that there is “no evidence” for (or proof of) God’s existence. To theists like myself, this comment is mind-boggling because the evidence for God is everywhere and obvious.

Those making this comment seem to mean that the existence of God cannot (at present) be empirically proved. But there are many other methods and standards of proof, some of which are mentioned in this article.

Expand full comment

Reminds me a bit of the book War and Chance, in which the author argued that instead of saying thing like "We have no evidence the Soviets are placing missiles on Cuba", generals and intelligence analysts should just give a probability like "There is a 5% chance the Soviets are placing missiles on Cuba".


"There is a 5% chance COVID-19 will spread from person to person" is a statement much more useful to policymakers than "There is no evidence that COVID-19 spreads from person to person".

Certainly, if you are going to talk about prior and posterior probabilities, it will help if you tell people what those probabilities actually are.

If you want to express how big something is, you have to use units of distance, eg meters. If you want to express how fast something is, you have to use units of speed, eg meters per second. If you want to express how powerful something is, you have to use units of power, eg horsepower.

And if you want to express how likely something is, you have to use units of likelihood. Eg "probability 1%", 10%, 50% 95% etc.

You cannot accurately convey estimates of uncertainty without using numbers to measure your uncertainty.

Expand full comment

There are other traps in EBM

Taleb video

"MINI-LESSON 9: Evidence Based Science & Mistakes in Particularizing the General (Simplified)"


Expand full comment

This is incoherent. Yes, it's true that evidence needs to be considered in the light of the sum total of previous evidence and of our theoretical understanding. But, for example, for a brief period in January 2020, there really was no evidence of human-human transmission. It was an entirely new and unknown virus. A Chinese study cleared that up rather rapidly.

Quoting a politician (Murphy) saying there is 'no evidence' is not poor scientific communication, it's reporting the words of a politician who didn't know what the heck he was talking about.

And if lunkheads don't understand absence of evidence is not evidence of absence, we need to educate them, not stop using an accurate phrase.

Expand full comment

Good read, much appreciated!

Expand full comment

"Evidence" is a strong word in the mind of the common man, who is likely to think about "evidence" in the legal sense of the word.

Wouldn't it be better to use "There is not enough information/data to confirm if..."

That way as new data is collected, the conclusion can be changed

Expand full comment

Does anyone else really, really want the government to start massively subsidizing media outlets (pending periodic evaluations and audits to ensure accurate reporting)? Surely the reason the media sucks is that you don't have to buy newspapers to learn the news - you can get it from a search engine or social media, which means a lot of media production can't afford quality research to back it, and the version people end up getting is biased.

Expand full comment

The “do you have evidence” fallacy, mistaking evidence of no harm for no evidence of harm, is similar to the one of misinterpreting NED (no evidence of disease) as evidence of no disease. This is the same error as mistaking absence of evidence for evidence of absence, the one that tends to affect smart and educated people, as if education made people more confirmatory in their responses and more liable to fall into simple logical errors.


Expand full comment

This isn't really a verbiage problem. The use of 'no evidence' etc. are simply ways for media to report their preferred conclusions (or those of their masters) regardless of study data.

Maybe you can convince them to use a different weaselly phrase instead, but what will that achieve? They might as well keep using 'no evidence' so that we know when to be very skeptical.

Expand full comment

If the vaccines have only killed 45,000 people worldwide I would be quite relieved

Expand full comment
Dec 19, 2021·edited Dec 19, 2021

“No evidence” means that at this time there is no scientific data to show that the assertion is true, so the best we can do is use related data to evaluate the truth of the assertion.

This works for all the examples.

Expand full comment

In the real world, people don't use anything like a Bayesian process to search for truth unless they are required to do so by some sort of formal constraint like the laws governing approval of new drugs, or the publishing requirements of the big-name journals in certain fields of science. In the real world, even in science, people believe something when a fact strikes a chord for them. Perhaps it fits neatly into some explanatory scheme they favor, or it suits their prejudices, or it relieves stress (or, in the case of certain personality disorders, it increases stress), or it has other implications or serves other purposes that they like or are prone to.

Philosophers of science have even tried to justify a version of this pre-rational approach to belief. They call it "inference to the best explanation", which just means, "pick the explanation that you like best".

Not that I mean to imply that the Bayesian process is epistemologically any better, just pointing out that it is seldom used.

Expand full comment

I noticed a cousin of "no evidence" in the press during the Trump administration.

"President Trump today claimed without evidence that ..."

Expand full comment
Dec 20, 2021·edited Dec 20, 2021

It seems that a defensible claim of 'no evidence' relies on a proper formulation of the question that the writer deploys the 'no evidence' assertion to disprove. Similarly, the real news many undecided people need to read, though no one says so, is the news editors choose not to report. Editors didn't ask the question because they don't want to hear the answer: they have the answer they want already.

Expand full comment

While you have diagnosed the problem, you haven't proposed the remedy of how to reasonably prove there is sufficient evidence against a particular claim. Studies need to be performed to explicitly rule out the smallest effect size of interest, by utilizing statistical equivalence testing.
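Equivalence testing is usually done as two one-sided tests (TOST): instead of failing to reject "no difference", you try to reject "the difference is at least delta" in both directions. A minimal sketch below; the data and the equivalence margin are invented, and the shift-and-test construction is one standard way to implement it (statsmodels also ships a ready-made `ttost_ind`):

```python
# Sketch of the two one-sided tests (TOST) equivalence procedure.
# Data and the equivalence margin delta are invented for illustration.
import numpy as np
from scipy import stats

def tost_ind(x, y, delta):
    """Equivalence p-value for two independent samples: a small p
    supports the claim that the mean difference lies in (-delta, +delta)."""
    y = np.asarray(y)
    # H0a: mean(x) - mean(y) >= +delta; reject -> difference < delta
    p_upper = stats.ttest_ind(x, y + delta, alternative='less').pvalue
    # H0b: mean(x) - mean(y) <= -delta; reject -> difference > -delta
    p_lower = stats.ttest_ind(x, y - delta, alternative='greater').pvalue
    return max(p_upper, p_lower)

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, 200)
control = rng.normal(0.1, 1.0, 200)
p = tost_ind(treated, control, delta=0.5)
print(f"TOST equivalence p = {p:.4f}")  # small p -> effect within margin
```

This is exactly the report the comment asks for: not "we found no evidence of an effect," but "we can rule out any effect larger than the smallest effect size of interest."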

Expand full comment

I try, when possible, to not say "no evidence". I'll usually say "that study hasn't been done," or "There were studies that showed this didn't work." Part of the problem with this is positive publication bias, so a negative result won't be as readily accessible. I think science journals ought to have negative versions of themselves, so that important failures are widely seen and easily citable.

Expand full comment

> Here we should reject journal articles because they disagree with informal evidence!

For homeopathy, we reject journal articles because they disagree with formal evidence in the "hardest" field of science: physics. But also, surely there are some well-powered journal articles saying that homeopathy at 10+ centesimal potency doesn't work?

Expand full comment

you accord journalists too much morality in my view. I think they're more inclined to spread fear and uncertainty than to explain and clarify. The objective is sensationalism and stroking of their own egos -- rarely is there any follow-up of the same story when evidence one way or the other is eventually uncovered. As well, journos rarely give solid evidence to support their own opinionated spin -- other than the spin and verbiage of their peers. The motto is "Never let the truth get in the way of a good headline." Sad.

Expand full comment

There is a parachute study now, another joke paper.

Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial


Expand full comment

'In traditional science, you start with a “null hypothesis” along the lines of “this thing doesn’t happen and nothing about it is interesting”.'

I don't think this is true; at least it's not the way the "successful" sciences work. I know it's the Popper story, as then modified by Kuhn, but that doesn't mean it's true.

The way real science works as done by real scientists is a form of extreme pattern recognition: you look at a bunch of disparate items and (after a vast amount of thinking, and internally considering and discarding alternative hypotheses) you propose a "pattern" (eg a mathematical structure) that describes what you are seeing. Other, equally good scientists, appreciate the point, can see how the pattern fits (and doesn't) , and use that to make the pattern better. This, in essence, is what physicists really mean when they go on about beautiful equations, and talk about how it doesn't matter *that much* if one experiment doesn't fit because chances are that will be resolved; the overall pattern works so well that it's probably "essentially" true.

This is completely different from Popper, substantially different from Kuhn, and has nothing in common with the social science/medicine model Scott described. So why isn't it the way science is usually discussed? In one word -- because it is aristocratic, not democratic. It says that the only people who can do truly leading edge science are those few who are exceptionally good at pattern-matching. Any monkey can propose hypotheses and run the experiments (and, god knows, that's 99% of what social science and medicine consist of); but only a few can do what I have described. (And, even worse, the talent of "extreme pattern recognition" goes by another name, a name that must not be spoken, involving the letter that comes before J and the letter that comes after P.)

So: we start with an insane model of how society and people work ("Aristocracy bad, Democracy good -- anywhere and everywhere"). We force science (certainly how it's discussed, even how much of it is practiced, though the hard sciences have mostly, so far, escaped this ...for now...) to conform to this fantasy. And then we're amazed when the machinery falls apart, both in practice and even more so in communication, when those doing the communicating have no freaking clue what real science looks or smells like, only what they were told in some "philosophy of science" class.

Expand full comment

"There are hundreds of people who say they've been abducted by aliens! By legal standards, hundreds of eyewitnesses is great evidence! If a hundred people say that Bob stabbed them, Bob is a serial stabber - or, even if you thought all hundred witnesses were lying, you certainly wouldn't say the prosecution had “no evidence”! When we say "no evidence" here, we mean "no really strong evidence from scientists, worthy of a peer-reviewed journal article". But this is the opposite problem as with the parachutes - here we should stop accepting informal evidence, and demand more scientific rigor."

Somewhat tangentially but this is a terrible example and a very common talking point of people who believe in the paranormal. "Stabbings" are things we have many, many independent lines of evidence to confirm the existence of and in this example presumably the specific stabbings themselves have left plenty of wounds, ER records, etc. The eyewitnesses are just making a claim about a specific instance of a thing we already know broadly exists (stabbings) AND we have corroborating evidence of the specific stabbings in question. This isn't the same as positing the existence of some entirely new unconfirmed phenomenon (aliens visiting earth) *purely on the basis of eyewitness testimony*. If 100 witnesses claimed Bob stabbed them with a real plasma lightsaber from Star Wars or bludgeoned them with a handheld perpetual motion machine their testimony would not mean squat in the courts b/c those are things we have no reason to believe exist in the first place and eyewitnesses are a terrible basis on which to hang their existence. Plus they don't even have crazy impossible plasma burns to corroborate their stories! If they DID have crazy unexplainable burns that would open a whole other can of worms but I don't know of any alien abduction stories with corroborating physical evidence that doesn't have a more parsimonious explanation.

A lot of this is just the unavoidable contested vagaries of language. Saying "no evidence" implies no *good* evidence, and "good" evidence is always inherently contentious. If I say I have a deep gut feeling or a premonition in a dream that Tom is a burglar, it's fine to say there's "no evidence" for that, even though deeply felt gut feelings and premonitions and revealed knowledge have constituted a kind of "evidence" for most of human history. Entire religions and societies and cultures have risen and fallen and shaped their entire value systems around revealed knowledge and premonition and visions in dreams. Talk to religious people enough about why they believe and many will eventually say something along the lines of "I can just *feel* the presence of God deep down and that's how I know He's real." By many people's working definition that's a kind of evidence too!

Expand full comment

Russell's Teapot comes to mind.

There has been a lot of dismissal of knowledge about viruses and vaccines on the basis that we haven't yet proven it to apply specifically to SARS-CoV-2 as it does to other viruses and vaccines (and can be expected to apply to SARS-CoV-2).

Expand full comment

Scott, you are my mental doppelganger (but smarter, more educated and more serious). I've been trying (your classification of types of "no evidence" cases was hugely helpful) for quite some time to explain this to people who either "like" posts when it's used incorrectly or in "exchanges" (usually not "discussions") with the "Science Faithful" and the "Science Deniers" (being one or the other seems to have become a huge cultural, mental problem).

Expand full comment