## Addendum To "No Evidence" Post

I continue to maintain that this study *does* provide strong evidence that Omicron isn't *a lot* milder than Delta. https://www.reddit.com/r/dspeyer/comments/rl1lfj/the_first_halfway_decent_study_of_omicron_is_out

I'll also say I think Metaculus should have questions like "Omicron less than half as deadly as Delta" and "less than 90% as deadly". Just asking "less deadly" can only resolve "no" if we establish that it's deadlier, because there could always be a smaller difference.

I can't find it now, but there's a study showing that media coverage of COVID-19 was biased toward the negative. While this seems like media SOP (if it bleeds, it leads), it seems to have gotten worse during the pandemic.

When researchers want to evaluate a question, they formulate a null hypothesis; in this case, it would be that Omicron and Delta are equally deadly. The researchers compute a test statistic, which they use to obtain a p-value. The p-value tells us the chance of obtaining results at least as extreme as the ones observed, provided the null hypothesis is true. A threshold for rejection (alpha) is established in advance; results that are too extreme point toward the null not being true. If the p-value falls at or below alpha, the null hypothesis is rejected. The headline writer sees that the null hypothesis was not rejected and writes "No evidence Omicron less severe than Delta." This is not a good thing to do, for reasons Scott mentioned in his original post.
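As a concrete illustration of that procedure, here is a one-sided two-proportion z-test in Python. Everything about it is a sketch under stated assumptions: the death counts are entirely invented, the normal approximation is the textbook shortcut, and alpha = 0.05 is the conventional (and, as discussed below, somewhat arbitrary) threshold.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (math.erf is stdlib).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_proportion_z(deaths_a, n_a, deaths_b, n_b):
    """One-sided two-proportion z-test of H0: rates equal, vs H1: rate_a < rate_b."""
    p_a, p_b = deaths_a / n_a, deaths_b / n_b
    pooled = (deaths_a + deaths_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n_a + 1.0 / n_b))
    z = (p_a - p_b) / se
    return z, norm_cdf(z)  # left-tail p-value: small p favors "a is less deadly"

# Entirely made-up counts, for illustration only.
z, p = two_proportion_z(8, 10_000, 15, 10_000)
alpha = 0.05
print(f"z = {z:.2f}, p = {p:.3f}, reject H0: {p <= alpha}")
# Group a's fatality rate looks roughly half of group b's, yet p comes out
# above 0.05, so the headline writer reports "no evidence" of a difference.
```

Note how the binary reject/fail-to-reject verdict throws away the suggestive point estimate, which is exactly the information a Bayesian would want to keep.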

From my understanding: you are saying that Metaculus is behaving like a Bayesian with priors who is updating on new information. The fact that a study failed to reject the null hypothesis is evidence used to update. Metaculus moved closer to 50-50 on the question of deadliness. Forecasters reduced their confidence in Omicron being less deadly because a failure to reject the null hypothesis is evidence of some kind, and Bayesians should incorporate all available evidence.

My confusion is this: without knowing the exact test statistic, can we know that Metaculus is behaving like a rational Bayesian? One issue with setting alpha at 0.05 is that it is somewhat arbitrary. If you made me guess whether the null was true and gave me a study with adequate N and a p-value of 0.051, I would say the null is probably false. However, if you gave me a p-value of 0.85, I would think the null is probably true, even though both results are statistically non-significant. The threshold is demanding. I don't think it would be rational to treat it as a binary if we are being good Bayesians; the actual p-value and the power/N of the study would be relevant.

How could I use the power and the p-value to determine which direction I should update in, given a study that failed to reject the null hypothesis? To me, it doesn't seem possible to know which direction to update in merely from the fact that a study failed to reject the null. If I'm wrong on that, could someone explain why? (Edit: to be clear, I'm not suggesting Metaculus is ignoring or uninterested in the true values in this particular situation; I'm interested in the theoretical side of the question.)
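One way to see that the direction is in fact determined, at least if we treat the test as a binary reject/fail-to-reject signal, is to apply Bayes' rule to the test's error rates. The function and numbers below are a hedged sketch (not anything Metaculus actually computes): whenever power exceeds alpha, a failure to reject always pushes belief in the alternative down, and the size of the push depends on the study's power.

```python
def posterior_after_nonrejection(prior_h1, alpha, power):
    """P(H1 | study failed to reject H0), treating the test as a binary signal.
    alpha = P(reject | H0 true); power = P(reject | H1 true)."""
    p_fail_if_h1 = 1.0 - power   # the test misses a real effect
    p_fail_if_h0 = 1.0 - alpha   # the test correctly declines to reject
    num = p_fail_if_h1 * prior_h1
    return num / (num + p_fail_if_h0 * (1.0 - prior_h1))

# Hypothetical prior of 0.8 that Omicron really is less deadly (H1).
strong = posterior_after_nonrejection(0.8, 0.05, 0.9)  # well-powered study
weak = posterior_after_nonrejection(0.8, 0.05, 0.2)    # underpowered study
print(strong, weak)
# The well-powered non-rejection cuts P(H1) sharply; the underpowered one
# barely moves it. Either way the move is downward, because power > alpha.
```

So the *direction* follows from power > alpha alone; the p-value's exact magnitude and the study's N only affect how far to move, which is the quantitative part the 0.05 binary discards.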

Useful context: that Metaculus question is judged by whether the first 3 of 4 qualifying studies show (p < .05) that Omicron is less severe, so in terms of the resolution of that question this was important evidence.

In fact, if I'm reading it correctly, that means people have a very strong feeling that Omicron is less lethal, because they think that, despite this study, the ones selected for the question's resolution will show it is less lethal.

Here is an interesting article from a UK newspaper about an exchange on Twitter between a journalist and a SAGE modeller. The modellers did not allow for Omicron being less deadly than Delta. It also strongly suggests that the modellers are used to produce policy-based evidence, not evidence on which to base policy.

https://www.telegraph.co.uk/opinion/2021/12/19/tackled-sage-covid-modeller-twitter-quite-revelation/

I think there's a fair amount of confusion stemming from the overlooked fact that Covid generally is becoming milder: not due to a change in the virus itself, but to progressively increasing immunity. So the question of whether Omicron is milder than Delta misses the important fact that Delta is milder than Delta! At least, Delta today has milder consequences than it did six months ago.

In the UK, for example, case fatality rates have fallen by a factor of ten. If Omicron is also currently 50% milder than Delta, then of course that's relevant, and somewhat good news, but it's less of a big deal than the ongoing reduction in severity due to increased immunity.

That's true; there are no more restrictions in the UK until, maybe, after Christmas. Scotland has already cancelled Hogmanay, and Mark Drakeford in Wales is playing Scrooge. Neil Ferguson at Imperial College has been issuing apocalyptic warnings of 5,000 deaths a day from Omicron.

https://www.telegraph.co.uk/business/2021/12/23/prof-lockdowns-apocalyptic-omicron-claims-undermine-faith-vaccines/

Absolute nonsense; he's been wrong since the start of the pandemic, and people are tired of him crying wolf. Maybe now the Government will ignore him and his headline-grabbing predictions.

1. If X were true, we would see evidence for it.

2. There is no evidence for it.

3. Therefore, X is not true.

This is good logic if #1 and #2 are both true, but premise #1 typically goes unstated, or at least unsubstantiated. I will occasionally read a line that says "There is no evidence that vaccines cause autism"; after decades of research and studies involving millions of people, we would expect to see evidence if there were any, so that is a meaningful statement.

Back in January 2020, when COVID-19 was brand new, "No evidence for X" didn't mean as much. Right now, when Omicron is pretty new, "No evidence for Omicron being <whatever>" doesn't mean that much either. At the very least, the truth will require a lot more than one sentence to express, and a headline by itself is just inflammatory, overconfident nonsense (as is so often the case). Somebody should start a Bayesian news publication and start headlining things "Weak evidence for not X" as opposed to "No evidence for X".
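That point can be made quantitative. In the hedged sketch below (all numbers invented, and assuming for simplicity that false evidence never appears when X is false), the strength of "no evidence for X" hinges entirely on premise #1: how likely evidence would have been to surface if X were true.

```python
def posterior_given_no_evidence(prior_x, p_evidence_if_x):
    """P(X | no evidence found), assuming evidence never appears when X is false.
    p_evidence_if_x encodes premise #1: chance evidence surfaces if X is true."""
    p_silence_if_x = 1.0 - p_evidence_if_x
    return (p_silence_if_x * prior_x) / (p_silence_if_x * prior_x + (1.0 - prior_x))

# Vaccines-and-autism after decades of large studies: evidence would almost
# certainly have surfaced by now, so "no evidence" is nearly decisive.
print(posterior_given_no_evidence(0.5, 0.99))
# A weeks-old variant: hardly anyone has had time to look, so the identical
# headline barely moves the needle from the 0.5 prior.
print(posterior_given_no_evidence(0.5, 0.05))
```

Same headline, wildly different posteriors, which is why the unstated premise #1 carries all the weight.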

Is the evidence (in the scientific sense) really that much stronger for "Omicron is more infectious/contagious than Delta" than for "Omicron is less deadly than Delta"? Because the media was VERY quick to reject the former null hypothesis, but very reluctant to reject the null in the latter case.

I'm too dumb for this Substack. Someone please just tell me if I have to start washing my groceries again. Thanks in advance.

No evidence that Omicron is more lethal than Delta

You say it's an unusually clear example of the difference between classical and Bayesian ways of thinking, but I don't know how to interpret that, because you give us just one datum: the change in the Metaculus prediction. Your statement led me to expect two responses, one classical and one Bayesian. Also, I don't know if you think Metaculus represents Bayesian thinking, or gives a Bayesian summary of the beliefs of classical thinkers. Also, I don't know what "classical" means to you.

I'm guessing that you're contrasting your own imagined perfect-Bayesian response, versus the Metaculus consensus as a representation of the consensus among classical thinkers. In that case, for this to present a contrast, you'd have to believe that the consensus prediction *shouldn't* have dropped (otherwise we could find only a quantitative difference, and as you specified no prior, we have no way of knowing what the perfect-Bayesian response would have been).

But I think Bayesians must say it should have dropped, by conservation of expected evidence: since a study that rejected the null hypothesis would have made the consensus prediction rise, one that failed to reject must make it fall. Any difference between the Bayesian and "classical" responses would be quantitative, not qualitative.

TLDR: I don't even see a contrast being made, so I can't find it obvious.

If you'd like some good quality science journalism, I recommend Science News.

https://www.sciencenews.org/

Like a good dinosaur, I subscribe to the printed magazine.