83 Comments
Dec 23, 2021·edited Dec 23, 2021

I continue to maintain that this study *does* provide strong evidence that omicron isn't *a lot* milder than delta. https://www.reddit.com/r/dspeyer/comments/rl1lfj/the_first_halfway_decent_study_of_omicron_is_out

I'll also say I think Metaculus should have questions like "omicron less than half as deadly as delta" and "less than 90% as deadly". A bare "less deadly" question can only resolve to "no" if we establish that it's deadlier, because there could always be some smaller difference.

Comment deleted

Is that a consensus? Based on what? A couple months ago I saw articles claiming it's some 2.3 times deadlier than the wild type.


I think this is a person who thinks that a 99% survival rate is mild, despite that being a much lower survival rate than for things like drunk driving or BASE jumping, and despite things like "serving in the Vietnam War" having a 98% survival rate.


You are comparing a set of actions (drunk driving, BASE jumping, serving in Vietnam) that young people do. BASE jumping and war elicit feelings of extreme fear. This triggers a sensation that these things are dangerous, rightfully so. But I'm not so sure it is a fair comparison, because someone who would serve in Vietnam (an 18-26 year old) is going to have a higher survival rate for COVID than 99%. If you put someone on the low end of the survival spectrum (like a 95 year old) into Vietnam, they would probably have a much lower than 98% chance of survival.


Sure. But it still seems like innumeracy to think that a 1/1000 chance of dying from something is negligible. A 1/1000 chance of a moderately bad outcome can be negligible, but dying isn't just any old moderately bad outcome. Covid appears to be a disease that is maybe a little bit less bad than measles or polio, but not hugely less bad. I can't quite understand what someone means by calling delta covid "mild" unless they think that the vast majority of diseases, and even deadly global pandemics, are "mild".


Yes, that is fair.


> But it still seems like innumeracy to think that a 1/1000 chance of dying from something is negligible.

Right, but 1/1000 is not the actual value. For 20-24 year olds, that number is two orders of magnitude off: https://www.acsh.org/news/2020/11/18/covid-infection-fatality-rates-sex-and-age-15163

And even those numbers mask the big risk factors that an individual would know if they qualify for -- immunosuppression, obesity, etc.


If all you do is rely on the MSM for your COVID information you're going to be very very misinformed. Of course, most of the individuals betting in metaculus are probably also getting their COVID information spoon fed to them by scientifically illiterate reporters, editors, and headline writers (who want to capture viewer eyes).

If you look at the raw numbers for South Africa's NICD, you can see distinct differences between the previous Delta Surge and the current Omicron surge (which also seems to be on the downturn in ZA).

This paper: the adjusted odds ratio (aOR) of being admitted to hospital with SGTF (Omicron) vs non-SGTF (Delta) infection was 0.2. SGTF-infected individuals also had lower odds of severe disease (requiring oxygen or ventilation, developing ARDS, or dying): aOR 0.3 (95% CI 0.2-0.5).

https://www.medrxiv.org/content/10.1101/2021.12.21.21268116v1

Note: Omicron cases peaked about 5 days ago and are now declining, so week 6 of Omicron also looks like its peak. And the published mortality stats show that Omicron is killing a much lower percentage of people compared to previous waves...

Week 6 of Omicron deaths are 10% of ZA's First Wave peak, while active daily Omicron cases peaked at 127% of First Wave. But that first wave variant wasn't the wild-type variant, because it had the D614G mutation.

Week 6 Omicron deaths are also 4% of the Beta peak. FYI, Beta had the highest COVID active case rate of all the waves in South Africa. Omicron has peaked at 91% of Beta.

And Week 6 Omicron deaths are 13% of the Delta peak.

So we have plenty of data to conclude that Omicron is *significantly* less virulent than any of the previous strains that caused case surges in South Africa. And all you need is Google and some basic math skills to check what you're reading in the news. I wonder what percentage of people betting on Metaculus are doing that? ;-)
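
A quick way to sanity-check the comparison is to divide the deaths share by the cases share for each wave. This is only a back-of-the-envelope sketch using the percentages quoted above; it ignores reporting lags, vaccination, and prior immunity:

```python
# Crude cross-check of the quoted wave comparisons: Omicron's week-6
# deaths as a share of each wave's death peak, divided by Omicron's
# peak cases as a share of that wave's case peak. Percentages are the
# ones quoted above; this ignores lags, vaccination, and prior immunity.
waves = {
    "First wave": (10, 127),  # (deaths %, cases %)
    "Beta": (4, 91),
}

implied = {}
for wave, (death_pct, case_pct) in waves.items():
    implied[wave] = death_pct / case_pct
    print(f"{wave}: implied relative severity ~{implied[wave]:.2f}x")
```

Both ratios come out below 0.1, which is the back-of-the-envelope version of the "significantly less virulent" claim.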


Does that take into account that more people have had a previous strain or been vaccinated compared with Delta?


I'm assuming not. How would you account for that, except to simply break out "vaccinated" vs. "unvaccinated" as categories? And then of course there are selection effects for each category (or you would want to know what % is vaccinated for each age group).

Dec 28, 2021·edited Dec 28, 2021

> Omicron is killing a much lower percentage of people compared to previous waves

which tells us nothing about omicron if the difference can be explained by vaccines lowering the chance of death.

Indeed, even if unvaccinated people die at lower rates, that can be explained by lots of unvaccinated people having antibodies from previous infection, and having already died from previous infection.

Smallpox remained deadly for thousands of years ... I suspect the reduction in mortality rate for omicron is just an artifact resulting from differences in antibodies in the current vs previous populations. Researchers who want to address this question need to look for unvaccinated people with *no prior infection* who catch omicron (which is probably not easy data to get; I've heard that distinguishing between variants is not routinely done in most countries).


Actually, the lower death rate for Omicron cannot be explained by a higher percentage of people having antibodies from previous infections. We saw the CFR *and* excess death rates go up with Delta, even though there had already been two waves of COVID-19. If antibodies were the explanation, CFR and excess deaths should have gone down with each wave as a greater number of people developed antibodies from infection.


The increased excess deaths with delta (and also the fact that delta displaced other variants) can be explained as "more (previously unexposed) people got delta because it is more contagious than its predecessors". (As an immune-escaping variant, omicron doesn't even need to be more contagious to become dominant ... though Zvi thinks it is.)

Right now ~95% of UK people have Covid antibodies (https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/articles/coronaviruscovid19latestinsights/antibodies). It was more like 60-80% when delta swept through in May/June (https://www.bbc.com/news/health-57489740).

So omicron is spreading in a different environment than delta did.

As for the CFR, the UK CFR *decreased* with the arrival of delta (from 3% in February to 0.14% in July), though it did rise in Canada and the US. CFR is affected by many factors and I wouldn't read too much into it.

(Note: am not cherry-picking UK: I use UK because it's the only country I know about whose antibody data is easy to access.)


There's other studies by now, though, e.g. https://www.medrxiv.org/content/10.1101/2021.12.21.21268116v1

The consistent message seems to be that omicron infections currently seem to be somewhat (20-80%) less severe than delta infections, but that disentangling the effects of intrinsic severity, vaccination, prior infection and immune escape is going to be hella tricky.


I can't find the study now, but there's a study showing that media coverage of COVID-19 was biased negative. While this seems like media SOP (if it bleeds it leads), it seems to have gotten worse with the pandemic.


Though I agree - what would actually "positively biased" media coverage of COVID-19 look like? "Brazil's pension system saved for this decade - Obrigado Corona!" - "The way to save the climate: keep pandemics coming!" - "Greta: Build monument to virus!" - "Kids: Hooray, school is off again!!!" https://cdn.prod.www.spiegel.de/images/982c12de-313b-46c6-9d96-5a1f705576f5_w948_r1.778_fpx43.08_fpy49.98.jpg


No - Omicron shows a divergence between deaths and positive PCR tests: end of the pandemic in sight. Or the Netherlands engaging in senseless lockdowns despite falling infection, death and hospitalisation rates.


I want to believe this is true and I've even seen some hopeful coverage making this case. Unfortunately, even if it turns out to be the truth, all Omicron tells us for certain is that new variants can successfully bypass some immunity from previous variants. I am also hopeful that future variants will continue to be milder and milder (due to existing exposure, etc.) but we're not there yet.


I think I saw something like on average people believing COVID had a 20% mortality rate (post infection).


And, wow, that must be more than half! ;) Here in Germany we had an early study (the Heinsberg-Studie by Prof. Streeck, https://www.medrxiv.org/content/10.1101/2020.05.04.20090076v2 ) showing an IFR (infection fatality rate) of 0.36%. Much less than expected - due to many more infected than positively tested. That was spring 2020. It was in the media (Zach Weinersmith quoted it), Streeck was on TV. But that number never really got quoted often. A "normal" media user would think it is 2-4%. A real-life human ... yeah ... 20? Per-what? Whatever? ... hated math ... - Streeck got the image of a "Verharmloser", someone 'trying to spin' Covid into something "harmless". - On the other hand, 0.36% of 81 million is ca. 300,000. Not an amount of extra deaths our politicians can handle. Lukashenko can.


Weirdly, this hasn't been true for the entire pandemic. During January and February of 2020, the media was biased in an extremely positive direction (very few news agencies were willing to entertain the idea of an actual pandemic, except as a panic to dismiss), and even into early March, there were more "don't panic" stories than "panic" ones. That has mostly changed, but there's still the weird thing that ventilation is underemphasized, and the media tends to avoid talking about the fact that eating and drinking aren't as protective as wearing a mask.

Dec 23, 2021·edited Dec 23, 2021

When researchers want to evaluate a question, they formulate the null hypothesis. In this case, it would be that Omicron and Delta are equally deadly. The researchers will find a test statistic which they will use to obtain a p-value. The p-value tells us the chance of obtaining results at least as extreme as the ones we got, provided the null hypothesis is true. A threshold for rejection (alpha) will be established. Test results that are too extreme point toward our null not being true. If the p-value falls at or below the threshold, the null hypothesis is rejected. The headline writer would see that the null hypothesis was not rejected and write "No evidence Omicron less severe than Delta." This is not a good thing to do, for reasons Scott mentioned in his original post.

From my understanding: you are saying that Metaculus is behaving like a Bayesian with priors who is updating on new information. The fact that a study failed to reject the null hypothesis is evidence used to update. Metaculus moved more toward 50-50 on the question of deadliness. They reduced confidence in Omicron being less deadly because a failure to reject the null hypothesis is evidence of some kind, and Bayesians should incorporate all available evidence.

My confusion is this: without knowing the exact test statistic, can we know that Metaculus is behaving like a rational Bayesian? One issue with setting alpha to 0.05 is that it is somewhat arbitrary. If you made me guess whether or not the null was true and you gave me a study with adequate N and a p-value of 0.051, I would say that the null is probably false. However, if you gave me a p-value of 0.85, I would think the null is probably true. This would be despite the fact that both are statistically insignificant results. The threshold is demanding. I don't think it would be rational to treat it like a binary if we are being good Bayesians. The actual p-value and power/N of the study would be relevant.

How could I consider the power and p-value to determine which direction I should update in, given a study that failed to reject the null hypothesis? To me, it doesn't seem possible to know which direction to update in merely from the fact that a study failed to reject the null hypothesis. If I'm wrong on that, could someone explain why? (Edit: I'm not suggesting Metaculus is not considering or interested in the true values in this particular situation; I'm interested in the theoretical side of the question, to be clear.)


Whether you should treat it as a binary depends on whether you have the actual p-value, or more detailed data, available or not. If you only know whether the result was positive or negative, you should definitely increase your confidence if it's positive, and reduce it if it's negative. With a lower significance threshold, a positive result should increase your confidence more, and a negative result should decrease it less.

If you do have the p-value, I think what direction you should update in also depends on other things like the sample size as well as your priors. If the sample is very large, a p=0.05 could mean that there is probably no effect (since if there were one, with a large sample the p-value would probably be smaller); with a small sample, the same p-value may increase your confidence that there is an effect. If you only have whether the result is significant or not with a given threshold, the direction you should update in is clear, but the amount you should update by also depends on the sample size and your priors.


I agree with your first paragraph but I'm not so sure about the second part. I'll re-ask my comment below for your opinion:

Imagine I have a prior of 50-50 for whether X=Y. I test the null hypothesis with power of .80. For the one tailed alternative hypothesis X>Y:

1. I get a p-value of 0.051 (insignificant) should I really "ignore" this information and not use it to update? You wouldn't adjust your prior to maybe 51% chance it isn't X=Y?

2. I get a p-value of 0.049 (significant) should you now adjust your prior despite .051 and .049 being extremely similar?

3. My significance level is now alpha=0.10 and power is .80. I get a result of 0.051 (significant). Should I not adjust my prior?

Another question: I am given a die. I'm told it might be loaded. I assume 50-50 chance it is loaded if someone tells me this. I roll it once. I get a 6. I roll it again, I get a 6. One more time, I get a 6. At this point I would update heavily to it being loaded despite N=3 and chance of coincidence high. I do not think I am behaving irrationally.

Dec 23, 2021·edited Dec 23, 2021

>One issue with setting alpha of 0.05 is that it is somewhat arbitrary. If you made me guess whether or not the null was true and you gave me a study with adequate N and a p-value of 0.051, I would say that the null is probably false.

Higher N decreases the p-value of real results (makes them more significant) while not affecting that of false positives (as your p threshold is literally the rate of false positives). p = 0.051 in a large study is nonsignificant and should be ignored; it's when getting that from a *small* study that you'd go "maybe try more power".

To quote Scott in "5-HTTLPR: A Pointed Review":

"They show off the power of their methodology by demonstrating that negative life events cause depression at p = 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001, because it’s pretty easy to get a low p-value in a sample of 600,000 people if an effect is real."
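
The quoted point can be made concrete with a sketch: for a fixed real effect, the expected z statistic grows like √N, so a real effect's p-value shrinks as N grows, while false positives stay at the alpha rate regardless of N. The effect size below is illustrative:

```python
from math import sqrt, erf

def p_two_sided(z):
    # Two-sided p-value of a standard-normal test statistic.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

effect, sigma = 0.2, 1.0  # an illustrative small-but-real effect
pvals = {}
for n in (50, 500, 5000):
    z = effect * sqrt(n) / sigma  # expected z statistic at sample size n
    pvals[n] = p_two_sided(z)
    print(f"N={n}: expected p ~ {pvals[n]:.2g}")
```

With N=50 the expected p is nonsignificant; by N=500 it is far below 0.001, which is why a p near 0.05 from a large study is suspicious if you believed the effect was real.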

Dec 23, 2021·edited Dec 23, 2021

Imagine I have a prior of 50-50 for whether X=Y. I test the null hypothesis with power of .80. For the one tailed alternative hypothesis X>Y:

1. I get a p-value of 0.051 (insignificant) should I really "ignore" this information and not use it to update? You wouldn't adjust your prior to maybe 51% chance it isn't X=Y?

2. I get a p-value of 0.049 (significant) should you now adjust your prior despite .051 and .049 being extremely similar?

3. My significance level is now alpha=0.10 and power is .80. I get a result of 0.051 (significant). Should I not adjust my prior?

Another question: I am given a die. I'm told it might be loaded. I assume 50-50 chance it is loaded if someone tells me this. I roll it once. I get a 6. I roll it again, I get a 6. One more time, I get a 6. At this point I would update heavily to it being loaded despite N=3 and chance of coincidence high. I do not think I am behaving irrationally.
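
For what it's worth, the die example can be made explicit with Bayes' theorem. The "a loaded die shows a 6 half the time" figure is my assumption, not something in the setup:

```python
# Bayes update for the loaded-die example. Assumption (mine, not the
# comment's): a "loaded" die shows a 6 with probability 1/2, while a
# fair die shows a 6 with probability 1/6.
prior_loaded = 0.5
like_loaded = (1 / 2) ** 3  # P(three sixes | loaded) = 1/8
like_fair = (1 / 6) ** 3    # P(three sixes | fair)   = 1/216

posterior = like_loaded * prior_loaded / (
    like_loaded * prior_loaded + like_fair * (1 - prior_loaded)
)
print(f"P(loaded | three sixes) = {posterior:.3f}")
```

Even with N=3, the posterior lands around 0.96, so updating heavily here is exactly what a Bayesian should do.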


Power of 0.8 isn't really "adequate N" as I was using it. (Also, if you actually know what your power is then half the job is done.)

"Adequate N" gets you power of like 0.9999+, at which point the Bayesian update for p~=0.05 is negligible or even toward the null*. I'd disregard both 0.049 and 0.051 in that scenario.

There's also publication bias to consider.

*P(A|B) = P(A and B)/P(B) = P(B|A)P(A)/(P(B|A)P(A) + P(B|~A)P(~A))

P(B|~A) = 0.051 or 0.049

P(B|A) = 0.0002 (since 0.9999 of the distribution given A is higher than that, in your power, and hence two-tailed P(at least this extreme) = 0.0002)

P(A) = P(~A) = 0.5

hence P(A|B) ~= 0.004 << 0.5
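
The arithmetic above can be checked directly (same numbers: A = "real effect", B = "observed p ≈ 0.05"):

```python
# Plugging the comment's numbers into Bayes' theorem.
p_B_given_not_A = 0.05  # chance of p ~= 0.05 under the null
p_B_given_A = 0.0002    # with power ~0.9999, a p this large is rare if A holds
p_A = 0.5               # 50-50 prior

p_A_given_B = p_B_given_A * p_A / (
    p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)
)
print(f"P(A | B) ~= {p_A_given_B:.4f}")
```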

Dec 23, 2021·edited Dec 23, 2021

Actually, should the null hypothesis be that omicron is as deadly as delta? Supposedly it's not a descendant of delta, but is most closely related to some versions of the original strain, last seen in 2020 (with a mystery about what happened in between). Then the natural null hypothesis should be that it's as deadly as the original strain.

But we can't confront omicron with the original strain in a study, because the original strain isn't circulating (or at least isn't common) any more, and the conditions (such as pre-existing immunity) are different than back in 2020. So the null hypothesis that's practically possible to study is indeed that omicron is as deadly as delta.


> The p-value tells us the chance of obtaining these results at least as extreme as the ones we got, provided the null hypothesis is true.

provided the null hypothesis is true, *and* provided that both the existing data and any future data are sampled uniformly at random from the same underlying unknowable distribution.

> Too extreme of test results points toward our null not being true.

Strictly speaking, this conclusion is outside the scope of the Null Hypothesis Significance Testing paradigm (you have to use Bayes's Theorem to obtain it, and we obviously don't know the true distribution of the test statistic). It is however the only actionable statement, and is thus an example of why NHST is useless for statistical inference.

Overall, I'd encourage you to read Gelman's "The Problems With P-Values Are Not Just With P-Values".


"you have to use Bayes's Theorem to obtain it," Agree.

"and we obviously don't know the true distribution of the test statistic" Can we still update without knowing the true distribution of the test statistic?

Can you explain your final sentence a bit more: "It is however the only actionable statement, and is thus an example of why NHST is useless for statistical inference."


P[X | H0] is great and all, but in terms of making a decision, what does it tell us? Nothing really. H0 is often a point estimate and thus objectively impossible for systems as noisy as epidemiology/psychology/etc. Let's say the null is formulated as θ=t. Then even if θ "truly" equals t+ε in the real world, P[X | θ=t] need not be anywhere close to P[X | θ=t+ε] without even more assumptions.

In Bayesian updating, we have P[θ=t | X] = P[X | θ=t] · P[θ=t] / P[X]. P[X] is a constant and can just be normalized out. Indeed, the prior P[θ] cannot be known. But that's kinda the entire point of Bayesian probability! We get to subjectively assign a prior and then see what happens. You come out and say it, unlike sweeping it under the rug, pretending it's not there, and calling yourself "objective" like the bastardized version of frequentism that is NHST.


>can we know that Metaculus is behaving like a rational Bayesian?

I really don't understand this question.

Metaculus isn't behaving at all - it is reflecting the behaviors of its participants. Some will be 'behaving like a rational Bayesian' many will not. You can't know what drove the participants to flip from Yes to No and back again.

Metaculus is merely communicating what the participants believe is the true answer. The method that they do this (prediction tournament) has been shown (theorized?) to be a fairly accurate Bayesian predictor - but it isn't, itself, doing any prediction.


"The method that they do this (prediction tournament) has been shown (theorized?) to be a fairly accurate Bayesian predictor - but it isn't, itself, doing any prediction."

That answers my question then.

I know groups of people don't think like individuals think. But if I say that the US decided to put sanctions on Iran, I think people understand what I mean.


We see a lot of cherry-picked examples where prediction markets do well on this blog. I would also love to see a post that discusses and analyzes some of their failures. Maybe there are some useful lessons in there.


Unless the majority of individuals betting on this question in Metaculus are looking at the actual data, it's just GIGO bullshit. I don't understand this obsession that Scott has with Metaculus, because there's no way to know how well-informed the Metaculus users are.


You don't have to know how they are arriving at their answers - aggregates are generally more accurate than individuals, especially when they have skin in the game.


Aggregate ignorance can make up for knowledge? The Kasparov vs. the World chess game proved that thousands of people working together online still couldn't beat a grandmaster. Likewise, the wisdom of crowds generally hasn't worked for aggregates of *knowledge*, either — which is why there's a long history of scientific consensuses which have later been proven wrong.

Dec 24, 2021·edited Dec 24, 2021

By all means ignore the empirical evidence. Doesn’t mean anyone else should.

And moving the goal posts from predictions about the future to games of skill isn’t going to salvage your claim either.


Show me the empirical evidence please with some links. People believe stupid things. Even smart people believe stupid things.


I suggest searching scholar.google.com


Follow-up question: if crowds are so wise, why do we need AI? Why don't we just crowd-source protein folding solutions?

Dec 27, 2021·edited Dec 27, 2021

folding@home, much?

Edit: oops! I meant Foldit!


Good point. We have not yet decided if the masses are smarter or dumber than the individuals. Guess it depends...


Was thinking the same. Stock markets are similarly flawed. The people who would be best at predicting a stock, insiders, are legally not allowed to place bets (but will do it occasionally anyway). My understanding is that public bets are a better-than-nothing way of gathering real opinions, but that is something completely different from what will actually happen. 'Forecasting is hard, especially about the future.'


The stock market is a great example of consensus thinking that overreacts to positive and negative information inputs!

Dec 23, 2021·edited Dec 23, 2021

Useful context: that Metaculus question is judged by whether 3 of the first 4 studies show (p < .05) that Omicron is less severe, so in terms of resolution of that question this was important evidence.

In fact if I'm reading it correctly that means people have a very strong feeling that Omicron is less lethal, because they think that despite this study the ones selected for the question resolution will show it is less lethal.


Here is an interesting article from a UK newspaper regarding an exchange between a journalist and a SAGE modeller on Twitter. The modellers did not allow for Omicron being less deadly than Delta. It also suggests strongly that the modellers are used to provide policy-based evidence, not evidence-based policy.

https://www.telegraph.co.uk/opinion/2021/12/19/tackled-sage-covid-modeller-twitter-quite-revelation/

Dec 23, 2021·edited Dec 23, 2021

The weird thing here, if this is true, is that AFAIK the UK government isn't one that supports strict measures out of ideology/partisanship; in July, they committed not to reintroduce restrictions. It doesn't seem like it's in their interest to reintroduce restrictions, much less lockdowns, unless there will be a catastrophic wave otherwise.

And it's unlikely that there would be one. Case counts have been high since the summer, with most domestic restrictions lifted, yet death rates have been far lower than during earlier waves. They are facing the Omicron wave with a high level of herd immunity, from vaccinations and previous infections.


I think there's a fair amount of confusion stemming from the overlooked fact that Covid generally is becoming milder. Not due to a change in itself but to progressively increasing immunity. So the question of whether Omicron is milder than Delta misses the important fact that Delta is milder than Delta! At least Delta today has milder consequences than it did six months ago.

In the UK for example case fatality rates have fallen by a factor of ten. If Omicron is also currently 50% milder than Delta then of course that's relevant, and somewhat good news. But less of a big deal than the ongoing reduction in severity due to increased immunity.


That's true, there are no more restrictions in the UK until, maybe, after Christmas. Scotland has already cancelled Hogmanay and Mark Drakeford in Wales is playing Scrooge. Neil Ferguson at Imperial College has been issuing apocalyptic warnings of 5000 deaths a day from Omicron.

https://www.telegraph.co.uk/business/2021/12/23/prof-lockdowns-apocalyptic-omicron-claims-undermine-faith-vaccines/

Absolute nonsense, he's been wrong since the start of the pandemic and people are tired of him crying wolf. Maybe now the Government will ignore him and his headline grabbing predictions.


1. If X were true, we would see evidence for it.

2. There is no evidence for it.

3. Therefore, X is not true.

This is good logic, if #1 and #2 are both true, but premise #1 typically goes unstated or at least unsubstantiated. I will occasionally read a line that says "There is no evidence that vaccines cause autism", and after decades of research and studies involving millions of people, we would expect to see evidence, therefore it's a meaningful statement.

Back in Jan 2020, when COVID-19 was brand new, "no evidence for X" didn't mean as much. Right now, when Omicron is pretty new, "no evidence for Omicron being <whatever>" doesn't mean that much. At the very least, the truth will require a lot more than one sentence to express, and a headline by itself is just inflammatory, overconfident nonsense (as is so often the case). Somebody should start a Bayesian news publication and start headlining things "Weak evidence for not X" as opposed to "No evidence for X".
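
The asymmetry in premise #1 falls straight out of Bayes' theorem: "no evidence for X" only moves you much when evidence was likely to have shown up if X were true. The probabilities here are purely illustrative:

```python
# How much "no evidence for X" should lower P(X) depends on how likely
# evidence was to appear if X were true. Numbers are illustrative only.
def posterior_given_no_evidence(prior_x, p_evidence_if_x, p_evidence_if_not_x=0.0):
    """P(X | no evidence observed), via Bayes' theorem."""
    like_x = 1 - p_evidence_if_x
    like_not_x = 1 - p_evidence_if_not_x
    return like_x * prior_x / (like_x * prior_x + like_not_x * (1 - prior_x))

# Vaccines/autism after decades of study: evidence would almost surely exist.
print(posterior_given_no_evidence(0.5, 0.99))  # drops well below the 0.5 prior
# A brand-new variant, one study in: evidence had little chance to appear yet.
print(posterior_given_no_evidence(0.5, 0.10))  # barely moves from 0.5
```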


My new favorite quip about this is that "no evidence that it's raining" means very different things depending on whether you're in a room with windows or not.


I like "weak evidence for not X". For the popular press, though, we need enough writers/editors/whoever with the courage to say "we really just don't know yet." The world would be a better place if people could embrace a little uncertainty.


Is the evidence (in the scientific sense) really that much stronger for “Omicron is more infectious/contagious than Delta” vs. “Omicron is less deadly than Delta”? Because the media was VERY quick to reject the former null hypothesis, but very reluctant to reject the null in the latter case.


I'd like to know this too.


I'm too dumb for this Substack. Someone please just tell me if I have to start washing my groceries again. Thanks in advance.


Don’t wash your groceries, but do wash your hands with soap. I’m too dumb too, but I don’t let that stop me from giving advice like this :)


Just wear a mask at the beach and wipe down all surfaces and we'll have flattened the curve before you know it.


Also make sure everyone who touches your food wears gloves that can't be washed properly, and consider a face shield ... and of course the obligatory https://www.theonion.com/cdc-recommends-also-wearing-face-mask-on-back-of-head-i-1841669382


No evidence that Omicron is more lethal than Delta


You say it's an unusually clear example of the difference between classical and Bayesian ways of thinking, but I don't know how to interpret that, because you give us just one datum: the change in the Metaculus prediction. Your statement led me to expect two responses, one classical and one Bayesian. Also, I don't know if you think Metaculus represents Bayesian thinking, or gives a Bayesian summary of the beliefs of classical thinkers. Also, I don't know what "classical" means to you.

I'm guessing that you're contrasting your own imagined perfect-Bayesian response, versus the Metaculus consensus as a representation of the consensus among classical thinkers. In that case, for this to present a contrast, you'd have to believe that the consensus prediction *shouldn't* have dropped (otherwise we could find only a quantitative difference, and as you specified no prior, we have no way of knowing what the perfect-Bayesian response would have been).

But I think Bayesians must say it should have dropped, by conservation of evidence, since rejecting the null hypothesis should have made the consensus prediction rise. Any difference between Bayesian and "classical" would be quantitative, not qualitative.

TLDR: I don't even see a contrast being made, so I can't find it obvious.


I am likewise confused by that assertion.


I think it's the contrast between media and prediction market. The market sees a single study which can't reject the null hypothesis, slightly updates to a lower chance, but afterwards continues to update according to other information, ending up even higher than before. The media sees a single study which can't reject the null hypothesis, and then immediately goes all the way to "NO EVIDENCE!". The prediction market is certainly no theoretic perfect bayesian reasoner, but it's at least trying.


Thanks; that makes sense.


If you'd like some good quality science journalism, I recommend Science News.

https://www.sciencenews.org/

Like a good dinosaur, I subscribe to the printed magazine.
