I strongly disagree with the claim that Polymarket's price on Trump winning was mispriced too high on e.g. November 4th.

I explained why in a series of comments I wrote on the Facebook post of a friend who shared Scott's post. Copying them here (sorry for not polishing them up, but I think they're quite comprehensible despite being written sloppily):

----

Scott says several things I disagree with in this post and it would take a full post to adequately explain why I disagree with all of it.

Listing out some issues:

(1) He claims that the markets were "mispriced" without explaining what he means by this.

(2) Later he references the concept of a "'true' probability", which he does not define.

(1b) By "mispriced" does he mean that the market price does not match the "'true' probability"? Or does he mean that the market price does not reflect the most-informed credence that humans can come up with (a different probability)?

(1c) His "mispriced" claim seems to be that the markets should have been priced closer to Metaculus's ~50% than their actual ~60% Trump price. (1b continued) It's unclear whether he thinks this because he thinks Metaculus was accurately forecasting the "'true' probability" or because he thinks Metaculus's forecast was the best forecast one could make given the information available to the world before the election.

(1d) In either case, I claim that Scott is wrong to claim that the markets ought to have been priced below 60%.

And now (because it's easier), to just say some related beliefs I hold without responding directly to Scott's post:

I believe that looking back on how the election unfolded provides us with evidence that the "true probability" of Trump winning was >90%.

Note that I've been trying for years to come up with a single coherent concept of "true probability" that works for what I mean in forecasting conversations like this one, while at the same time not contradicting our knowledge of physics and the fact that the world is roughly deterministic (or random, with quantum mechanics).

I don't have a single definition, so what I will say is that I mean something like: if we were to reset the Universe to the state it was in one week before the election, and inject some random noise (like a rock that almost punctures someone's tire in real history actually causing them to get a flat, or vice versa) to make sure the re-runs don't all play out exactly the same (due to determinism, if quantum randomness doesn't actually make a difference), then in what percentage of simulations would Trump win the election? "True probability" is probably a misnomer given this meaning, but it captures what we mean, similar to how we want "true probability" to mean something such that "the true probability that the D6 will land on 6 when I toss it right now is 1/6". That's so even though, if we had enough information about how the die was oriented in your hand and how your brain worked, we could predict with high probability the exact way you would toss it, how it would tumble through the air, and how it would hit the table and land (since quantum randomness probably doesn't affect this, unless it somehow does via cognition, but you get my point). So 1/6 probably isn't *actually* the "true probability"; the true probability is probably exactly or almost exactly 1 or 0.

Note that had PA been decisive and had Trump won by 1 vote or 10,000 votes in PA, I think the random noise injected into the simulations a week prior to the election would be enough to make Harris win in >10% of the simulated-world elections. But the further the election is from being close, the less of a difference the random noise makes, and given what actually happened I think it's reasonable to predict that Trump would win in >90% of the simulated-world elections (which I'm calling "true probability", even though my definition is arbitrary and "true probability" shouldn't have an arbitrary definition).
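To make the thought experiment concrete, here's a minimal Monte Carlo sketch; the normal noise model and all numbers (margins, noise level) are made-up illustrative assumptions, not estimates of the actual race:

```python
import random

def rerun_fraction(margin, noise_sd, trials=100_000):
    """Toy version of 'reset the universe one week out': perturb the
    realized margin with random noise in each simulated re-run and
    return the fraction of re-runs in which the same side still wins."""
    wins = sum(1 for _ in range(trials)
               if margin + random.gauss(0, noise_sd) > 0)
    return wins / trials

# Illustrative only: a razor-thin margin vs. a comfortable one,
# under the same (made-up) week-out noise level.
print(rerun_fraction(margin=0.0005, noise_sd=0.01))  # close race: ~0.52
print(rerun_fraction(margin=0.0200, noise_sd=0.01))  # clear race: ~0.98
```

This is just the "the further from close, the less noise matters" point in code: the same noise flips a razor-thin result often and a comfortable one rarely.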

All this said, might there be a good reason why forecasts or market prices were only ~50-60%, even if I'm right that the "true probability" of Trump winning was >90%?

Yes.

There is (what Philip Tetlock calls) "irreducible uncertainty" in the world. For forecasters trying to predict how a D6 will land, this may result in them making forecasts close to 1/6 that it lands on 6, even though they have video footage of the die flying through the air. This is because the uncertainty in their measurements introduces too much noise, and if they're slightly wrong about how the die will strike the table, that will completely change how it lands. So even though someone with perfect knowledge of physics and of the state of the world mid-flight could more accurately predict how the die will land, the forecaster's knowledge is limited and they cannot possibly know better. There is inherent uncertainty that they can't get rid of given the information they have access to.

Was 50% or 60% the best forecast one could make given the irreducible uncertainty, and assuming I'm right that the "true probability" was >90%?

Not necessarily.

On November 4th I wrote:

"My 2/10 low information inside view judgment is that Trump is about 65% likely to win PA and the election. My all-things-considered view is basically 50%.

However, notably, after about 10 hours of thinking about who will win in the last week, I don't know if I actually trust Nate and prediction markets to be doing a good job. I suspect that there may be well-informed people in the world who *know* that the markets are wrong and have justified "true" beliefs that one candidate is >65% likely to win. Such people presumably have a lot of money on the line, but not enough to more [sic] [move] the market prices far from 50%." (https://www.facebook.com/.../pfbid02kHsKK9oWUKGo3bvhs9wV9...)

In other words, I thought before the election, and I think now, that people more informed than me (I said I was only 2/10 informed) might be justified in an all-things-considered belief of >65% in favor of a particular candidate. (Unlike me: I meta-updated downward from my inside view toward the markets and better-informed forecasters. Note that my "50%" in the quote was not meant to say that I believed the forecasters/Nate more than the markets, just that I didn't trust my inside-view judgment of 65% Trump, given that the markets and forecast aggregations assigned lower probabilities and I knew I was relatively uninformed.)

Why do I think that? In short, because forecasters aren't close to being at their peak strength, and political prediction markets are nowhere near as efficient as e.g. stock prices. As Scott said, there probably weren't any big professional trading firms betting on Polymarket, let alone many, as in stock markets. It just wouldn't be surprising if a very informed person or group correctly identified that the prevailing wisdom on November 4th about how likely Trump was to win was wrong.

So, expanding on "not necessarily" above: Could "Trump wins" have been priced too high on Polymarket despite Trump having a "true probability" of >90% of winning?

Yes, but (1) Scott and I and all of us reading this don't know if that's the case.

And (2) learning now that Trump won, and that he had a >90% "true probability" of winning, is strong evidence that the hypothetical very informed forecasters I mentioned (who may have actually existed; as I said on Nov 4th, I thought that was ~50% likely) were people who were at >65% Trump rather than people who were at <35% Trump. And thus it's more likely that the ~60% market price was too low rather than too high, relative to the best forecasts that could be made under irreducible uncertainty.

Scott also says a lot of stuff re the French whale on Polymarket, some of which I'd critique, but I don't think it's as important as the fundamental mistake Scott is making (claiming that Polymarket's price on Trump winning was too high).

----

strong agree

I'm reminded of that scene in HPMOR

If you want to criticize your opponents' reasoning for being incorrect, at least wait for them to actually be factually mistaken

It feels very strange for Scott to be complaining about a probability estimate that was directionally correct compared to the one he proposes

If lottery tickets are actually negative sum, then you will get lots of opportunities to criticize people who buy them and then lose money

It feels like Scott is currently in the position of somebody who has been pro-lottery his entire life, and then suddenly posted his very first criticism of the lottery the day after winning the jackpot

Maybe he's even correct, but seriously?

----

Well actually I'd push back on this. I think there is a true thing that Scott could have said in his post that is not far from what he actually said.

If we consider a different question where there is more irreducible uncertainty than there is for a presidential election, such as a soccer game or the roll of a die (see this comment-reply of mine above to someone else: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still/comment/76065426), such that learning the result that actually happened only causes us to update our estimate of the "true probability" of that event to, say, 60%, rather than >90%, then I could see a post very similar to Scott's post being reasonable.

Suppose that Polymarket put Team A at 70% to win the soccer game, and Metaculus put it at 55%. Then Team A wins. Scott could then write a post saying "Hey people, just because Team A won doesn't mean that Polymarket's 70% was better than Metaculus's 55% forecast. I still think Polymarket's 70% was priced too high and that Metaculus's forecast was better."

This is related to what Tetlock calls the "wrong side of maybe" fallacy. (Hey, I've been thinking about this for 8+ years: https://imgur.com/gallery/wrong-side-of-maybe-fallacy-uG3rgGU ). Just because a forecast was on the wrong-side-of-maybe doesn't mean it was wrong or bad. It's evidence that it was bad, but not decisive evidence, and there are cases where it is not actually bad.

More generally, it's also true that a forecast being less close to the result that actually happened doesn't decisively mean it's worse than a forecast that was closer. Again, it's generally *evidence* that it's worse, but not definitively so, and there are exceptions. E.g. people widely regard the Dilbert guy Scott Adams as having been overconfident when he stated 99% Trump in 2016.
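To make the scoring point concrete, here's a minimal sketch using the Brier score (a standard proper scoring rule), reusing the hypothetical 70%/55% soccer numbers from above:

```python
def brier(forecast, outcome):
    """Brier score for a binary event: (forecast - outcome)**2, lower is better."""
    return (forecast - outcome) ** 2

# One game, Team A wins (outcome = 1): the bolder forecast scores better...
print(brier(0.70, 1), brier(0.55, 1))  # ~0.09 vs ~0.20

# ...but one resolution is weak evidence. If the "true" chance really was 55%,
# the 55% forecast is better in expectation over many such games:
p_true = 0.55
for f in (0.70, 0.55):
    expected = p_true * brier(f, 1) + (1 - p_true) * brier(f, 0)
    print(f, round(expected, 4))  # 0.7 -> 0.27, 0.55 -> 0.2475
```

The single-game score favors whoever was closest to the realized outcome; only the average over many events tells you who was actually forecasting better.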

If Polymarket had been at 99% Trump for a week before the election, yet the result had turned out exactly as it had (same number of votes everywhere, Nate and Metaculus thinking ~50%, etc), then saying that Polymarket had Trump winning priced too highly would be reasonable.

----

I think you can only conclude that with a larger sample size of predictions. If outcomes that Polymarket assigns a 60% probability to actually happen 60% of the time, then Polymarket is well-calibrated.

----

"If outcomes that Polymarket assigns a 60% probability to actually happen 60% of the time, then Polymarket is well-calibrated."

Yes, that is the definition of "well-calibrated".

However, "well-calibrated" does not mean that Trump actually wins 60% of the time, nor does it mean that 60% is the best forecast a highly informed, rational forecaster could make on the question.
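For concreteness, here's a minimal sketch of what a calibration check looks like, on toy data; the point is that calibration is a property of many forecasts bucketed together, and says nothing about whether 60% was the best possible forecast on any single question:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, width=0.1):
    """Bucket forecasts by probability and compare each bucket's average
    forecast to the empirical frequency of the event occurring."""
    buckets = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        buckets[round(f / width) * width].append((f, o))
    for key in sorted(buckets):
        pairs = buckets[key]
        mean_f = sum(f for f, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"forecast ~{mean_f:.2f}: resolved YES {freq:.0%} of the time (n={len(pairs)})")

# Toy data: five questions priced at 60%, three resolve YES. The 60% bucket is
# well-calibrated regardless of whether 60% was the *best* price on any of them.
calibration_table([0.6, 0.6, 0.6, 0.6, 0.6], [1, 1, 1, 0, 0])
```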

"I think you can only conclude that with a larger sample size of predictions."

I'm not sure that I know what you are referring to by "that". Is it my last statement? That: "If Polymarket had been at 99% Trump for a week before the election [...] then saying that Polymarket had Trump winning priced too highly would be reasonable."

If so, let me clarify that it'd be reasonable because we'd know that market actors, like Scott Adams, didn't have the actual knowledge to know that Trump would win 99+% of the time, even if it was actually the case that he would win that much. Market actors have epistemic uncertainty that would prevent them from knowing that. So we'd know that the market price was irrational, even if it happened to correspond to the true frequency of Trump winning.

----

If the market is well-calibrated, then if it assigns 60%, I conclude that the available evidence is consistent with a 60% probability.

I don't think Scott Adams was taking 100-to-1 bets. I think he was BSing.

----

"If the market is well-calibrated, then if it assigns 60%, I conclude that the available evidence is consistent with a 60% probability."

It sounds like your claim is that the available evidence is consistent with the aleatoric probability of Trump winning having been 60%. I wrote a concise argument against that just now in this comment[1], which I'll copy here:

"The January 2024 Less Wrong post "Will quantum randomness affect the 2028 election?" makes clear that uncertainty about election outcomes is almost entirely epistemic uncertainty, not aleatoric uncertainty. Thus, whichever outcome happens is very strong evidence of what the true aleatoric probability of that outcome happening was. Specifically, it is very strong evidence that the outcome that happened was very likely to happen. Therefore, a day before the 2024 election Trump was very likely to win (>99%). Therefore, Polymarket's ~60% price was priced too low, not too high. https://www.lesswrong.com/posts/HT8SZAykDWwwqPrZn/will-quantum-randomness-affect-the-2028-election "

Here is an explanation of what "aleatoric uncertainty" is, from this source[2]:

"So aleatoric uncertainty is, if I tell you I’m going to flip a coin, you’re roughly 50-50 on whether the coin will come up heads. But you have almost no epistemic uncertainty, because you’re quite confident that your distribution is correct. And there isn’t a way to reduce it further with a practical amount of knowledge. If you knew the whole state of the world, you could get it to zero, but you don’t. So in that situation, you have all aleatoric uncertainty and no epistemic uncertainty. So in that case, you don’t expect to learn more by getting more data about the problem. And in the epistemic case, I flip the coin, it lands, I close my hand and now I’m going to reveal the data point, what the coin is. And so I’m still 50-50, but I have no aleatoric uncertainty and all epistemic uncertainty. I just don’t know the answer, but there is definitely an answer to be learned."

[1] https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still/comment/76263538

[2] https://www.lesswrong.com/posts/bLr68nrLSwgzqLpzu/axrp-episode-16-preparing-for-debate-ai-with-geoffrey-irving
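The "very strong evidence" step can be made concrete with a toy Bayesian update. Assume, per the argument above, that the uncertainty was almost entirely epistemic, so the aleatoric probability p was almost certainly near 0 or near 1; the prior weights below are illustrative assumptions, not estimates:

```python
# Toy prior over the aleatoric probability p of a Trump win. If nearly all
# uncertainty is epistemic, prior mass concentrates near p=0 and p=1.
# (Illustrative assumption: 49% at p=0.01, 2% at p=0.50, 49% at p=0.99.)
prior = {0.01: 0.49, 0.50: 0.02, 0.99: 0.49}

# Observe one outcome, a Trump win; the likelihood of a win under aleatoric
# probability p is just p. Apply Bayes' rule:
unnormalized = {p: w * p for p, w in prior.items()}
z = sum(unnormalized.values())
posterior = {p: round(w / z, 4) for p, w in unnormalized.items()}
print(posterior)  # {0.01: 0.0098, 0.5: 0.02, 0.99: 0.9702}
```

By contrast, under a mostly aleatoric prior (mass concentrated around p = 0.5), the same single observation would barely move the posterior; this is why the epistemic-vs-aleatoric distinction carries the argument.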

----

(disclaimer: didn't read Scott's article, got here via your comment on FB)

(1b) does he mean that the market price does not reflect the most-informed credence that humans can come up with?

Surely yes? (basis: it's always seemed to me that Scott and I conceive of probability the same way, i.e. if I have already flipped a coin but no one has looked at it, we should say the probability of heads is 50.5% or so, notwithstanding that the "real" probability is 100% or 0%). Or perhaps something further in this direction, like: almost nobody had better information than Nate Silver, and if a few people did, they probably didn't have the resources to move the market that much. (by analogy: one man has seen the coin, but not shared the results with anyone, and isn't rich enough to move the market on it)

The best argument in favor of "markets priced correctly" that comes to my mind is "wisdom of crowds": if enough low-information people in swing states were betting, that might aggregate enough information to set the price correctly even if none of the individual bettors had enough information to do so.
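Here's a minimal sketch of that aggregation story, with made-up numbers; the key assumption, that bettors' signals are independent, is doing most of the work:

```python
import random

def majority_accuracy(n_bettors=1001, signal_acc=0.52, trials=2000):
    """Toy wisdom-of-crowds model: each bettor independently gets a weak
    private signal that is right 52% of the time; return how often the
    crowd's majority vote identifies the true outcome."""
    correct = 0
    for _ in range(trials):
        right_votes = sum(random.random() < signal_acc for _ in range(n_bettors))
        if right_votes > n_bettors // 2:
            correct += 1
    return correct / trials

print(majority_accuracy())  # ~0.9, far better than any individual's 0.52
```

The usual caveat: if the bettors' signals are correlated (everyone reading the same polls), majority aggregation buys far less than this independent-signal toy suggests.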

----

"Surely yes? (basis: it's always seemed to me that Scott and I conceive of probability the same way, i.e. if I have already flipped a coin but no one has looked at it, we should say the probability of heads is 50.5% or so, notwithstanding that the "real" probability is 100% or 0%)"

I agree with this. (Note that the "real" probability would more appropriately be called the "aleatoric" probability.)

"Or perhaps something further in this direction, like: almost nobody had better information than Nate Silver, and if a few people did, they probably didn't have the resources to move the market that much."

I think you're assuming unjustifiably that Nate's forecast was near the best quality possible. Even if he was very informed, he may not have applied the information he had most appropriately to form a forecast. (Two forecasters given access to the same information about a forecasting question often have different credences, due to different background knowledge from their life, different methodologies, etc.)

"and if a few people did, they probably didn't have the resources to move the market that much. (by analogy: one man has seen the coin, but not shared the results with anyone, and isn't rich enough to move the market on it)"

The French trader moving the market toward Trump may have been less informed and less rational than Nate, and may have effectively been betting on Trump randomly and just gotten lucky. We can stipulate that that was the case, and I claim that still wouldn't show that the "correct market price" should have been less than 60% rather than greater.

I'll make a stronger claim: Even if *all* the Metaculus forecasters were better informed and higher skilled at forecasting in general than *all* the Polymarket traders, that still wouldn't show that the "correct market price" ought to have been closer to the Metaculus aggregation than to what the Polymarket price actually was.

Why? Because the Metaculus forecasters, despite being higher skilled and more informed than the Polymarket traders, were likely still not making the best estimates possible. There is inevitable epistemic uncertainty, due to irreducible uncertainty in the world, that would prevent them from ever being as confident in the outcome as they would be if they had zero epistemic uncertainty and their credences matched the aleatoric probability perfectly; but that doesn't mean they had reduced their epistemic uncertainty as much as possible.

So the fact that the Polymarket price may have been largely what it was due to low-information traders is actually pretty irrelevant to the question of whether the "correct market price" ought to have been what the Metaculus aggregation was or something else.

We don't know precisely how much the best possible forecast could have reduced epistemic uncertainty, but I don't think we have any reason to believe that reality tricks people into doing worse than chance on this question when they try to become very informed. On the contrary, this is the sort of normal forecasting question where we'd expect it to be possible to get some evidence pointing in the right direction. This implies that the "correct market price" should be >50%.

But how much more than 50%? I don't know precisely, but I'd be surprised if one couldn't get beyond 60%. As I wrote on November 4th, I suspected that there actually were highly informed people who were justifiably above 65% in favor of a particular candidate, though I didn't know which candidate. In other words, before the election I believed that it was possible to reduce epistemic uncertainty by enough to get more than 15 percentage points away from 50%. And that was before I had the benefit of hindsight of knowing that the aleatoric probability was >99%, back when I thought the election could have been close enough that the aleatoric probability would have been <90% in one direction or the other.

The fact that Nate and Metaculus were only at ~50% does not at all show that it wasn't possible to reduce epistemic uncertainty enough to reach a credence of >65% (a credence that accounts for both epistemic and aleatoric uncertainty).

So I think the "correct market price", i.e. the price that the market should have been at if it reflected the credences of forecasters who had reduced their epistemic uncertainty as much as possible given the inevitable uncertainty in the world, was probably about 65-90%, not 60%, and definitely not 50%. This 65-90% still reflects much more uncertainty than zero epistemic uncertainty matching the true aleatoric probability (what you called the "real probability"), but it's not so uncertain as to say that humans can't reduce any of the epistemic uncertainty they have about election results.
