"So, how close to this does one have to get to count as maximally informed?"
Good question, and I agree that my "maximally informed" was ill-defined. What I meant was that a "maximally informed" person is a person who reduces their epistemic certainty to near-zero and is only left with aleatoric uncertainty.
"So, how close to this does one have to get to count as maximally informed?"
Good question, and I agree that my "maximally informed" was ill-defined. What I meant was that a "maximally informed" person is a person who reduces their epistemic certainty to near-zero and is only left with aleatoric uncertainty.
How close to zero epistemic uncertainty it possible to get?
In one sense, your person with an infinite budget who polls everyone will have extremely little epistemic uncertainty, and will only fail to predict who fails to get to the polls on election day do to a surprise wrench in their plans, etc. (Note that I think this hypothetical person would have been able to get to 90+% Trump.)
And in another sense, your hypothetical person is more informed than it was actually possible for anyone to be, since no one actually had the budget to go interview every eligible voter in a trustworthy way to get them to disclose who they planned to vote for and how likely they were to actually end up voting in a way for that person to reveal that information honestly.
So I think what I meant by "maximally informed" was actually a lower bar than your hypothetical. I meant basically 'as in formed as possible' in a practical way, rather than according to the laws of physics, or some higher standard.
Perhaps another way to say this is to point out that whoever you think was the most informed in reality (Nate Silver and his team?) weren't tackling this problem with a $100M budget, and probably weren't even tackling it with a $10M budget, but if they were, they would have been better informed. (They obviously weren't tackling it with the near-infinite budget that'd be required to gain the trust of every eligible voter and get them to honestly share with you the knowledge they have about who they plan to vote for, but that's too high a standard). Spending a $100M budget on hiring a team of forecasters and support to make a massive effort to forecast the 2024 election would have at least been realistically plausible. E.g. Someone who cared to make that happen could do that for the 2028 election. There'd be no guarantee that the project would make a good forecast just because it has a large budget, but it'd be much better equipped to become more informed and figure out the truth of who is more likely to win in an election that would not wind up being that close.
Had the 2024 election wound up like the 2020 election, even a team with a $100M budget might have struggled to determine the likely winner. Though perhaps that's because even in retrospect determining who was most likely to win the 2000 election is hard. But given that the 2024 election wasn't very close, I expect that very informed people (e.g. with a $100M budget and strong forecasting skills in general, would have been able to figure out that Trump was >60% likely to win, and probably >70% likely to win.
Presumably an extremely informed group that settled on e.g. 75% Trump before the 2020 election wouldn't have been literally maximally informed, but they'd be close enough for my purposes. And mainly I just want to say that I think it was theoretically possible for a group to become informed enough to come to a 75% forecast for Trump like that without being irrationally overconfident.
>Presumably an extremely informed group that settled on e.g. 75% Trump before the 2020 election wouldn't have been literally maximally informed, but they'd be close enough for my purposes. And mainly I just want to say that I think it was theoretically possible for a group to become informed enough to come to a 75% forecast for Trump like that without being irrationally overconfident.
That seems reasonable.
I think there are at least two other plausible candidates for a "maximally informed" bettor.
The higher bar one would be someone/some group that had access to all of the data from all of the polls that had actually been made. This is arguable, since some of the polls (such as Theo's) were completely private, and some (most?) of the polls that disclosed their aggregate results weren't disclosing their full raw data. So, these considerations make this a somewhat unreasonable standard (though still below my hypothetical of a pollster with an infinite budget :-) ).
The lower bar would be whichever person/group had the "most" poll data actually available to them. One of the snags in this bar is that the measures of the amount of poll data aren't quite comparable. The set of polled voters available to one group are unlikely to be a strict superset of the set available to some other group. If one group polled 20,000 voters, and another polled only 15,000 but did a better job of matching their polled voters to the likely-to-actually-vote population, who has "more" data?
Reasonable thoughts. Note though that poll data is far from the only kind of data that can inform a forecaster about the probability of Trump winning.
E.g. Forecasters could have learned about the probability of Trump winning by watching the Presidential debate, by watching Trump's interview on the Jake Paul podcast or the Joe Rogan podcast or Harris interviews, by seeing what kind of political discussions people are having on Twitter, by seeing what kind of reasons people give for supporting or opposing Trump, etc, by seeing how these things compare to what Biden looked like during his 2020 election campaign, by looking at voter registration numbers by party in different states compared to the 2016 and 2020 election, by observing whether your known Republican neighbor bothers to put up a Trump sign this year or not, by reading political philosophy and finding out whay Jason Brennan means by "hobits" in a political context, etc. All of these things provide information.
True! Many Thanks! ( I would hate to have to make a prediction given _only_ non-poll information. It is so easy to fall into the typical mind fallacy... )
Your "That seems reasonable" reply suggests to me that you agree with me that Scott's claim that Polymarket had Trump-wins priced too highly is wrong. Is that the case?
Many Thanks! I'm going to hedge. I think that Scott's claim seems likely to be wrong. After all, if the voting can be approximated as nearly deterministic, an absolutely knowledgeable observer would have said Trump's odds were 99%+ or some similar number, since he ultimately did win.
I don't know enough about the information available to all of the bettors on Polymarket to say that, in aggregate, they had "justified knowledge" that Trumps odds were >60%. It is certainly _consistent_ with what happened, but it is also consistent with the possibility that the aggregate information, if perfectly analyzed, would have implied odds of 58%, and both Theo and Polymarket's bettor pool were also somewhat lucky.
My knee-jerk, meta-meta-meta suspicion is that most people underestimate their error bars (even in physics!), and many estimates of probabilities should be pushed further away from 0.00 and 1.00 most of the time...
I will say this: In one of the subthreads, someone said "I believe the markets". My point of view is that I'm skeptical that the markets are a great source of unbiased probability estimates, but it is also true that I have never had a case where I was so confident of my knowledge of a situation that I was willing to bet _against_ the market.
"So, how close to this does one have to get to count as maximally informed?"
Good question, and I agree that my "maximally informed" was ill-defined. What I meant was that a "maximally informed" person is a person who reduces their epistemic uncertainty to near zero and is only left with aleatoric uncertainty.
How close to zero epistemic uncertainty is it possible to get?
In one sense, your person with an infinite budget who polls everyone will have extremely little epistemic uncertainty, and will only fail to predict which voters don't make it to the polls on election day due to a surprise wrench in their plans, etc. (Note that I think this hypothetical person would have been able to get to 90+% Trump.)
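To make that split concrete, here's a toy sketch (all the numbers are made up, and turnout is modeled as independent coin flips, which understates how correlated real turnout shocks are): polling more and more of the electorate drives the sampling (epistemic) error toward zero, but the turnout (aleatoric) term never quite disappears.

```python
# Toy illustration, not real election data: the 52% support and 65% turnout
# figures are invented, and turnout is modeled as independent coin flips.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000                          # hypothetical electorate size
support = rng.random(N) < 0.52       # each eligible voter's true preference
p_turnout = 0.65                     # chance that any given voter actually votes

def error_sd(n_polled, n_sims=300):
    """SD of (polled support share - support share among actual voters)."""
    errs = []
    for _ in range(n_sims):
        polled = rng.choice(N, size=n_polled, replace=False)
        est = support[polled].mean()            # epistemic: sampling error
        voted = rng.random(N) < p_turnout       # aleatoric: who shows up
        errs.append(est - support[voted].mean())
    return float(np.std(errs))

for n in (1_000, 10_000, N):
    print(f"polled {n:>7,}: forecast error sd ~ {error_sd(n):.4f}")
# Even when n == N (the infinite-budget pollster), the error sd stays above
# zero -- that residual is the aleatoric floor left by turnout noise.
```

In this toy the aleatoric floor is tiny because the turnout flips are independent; correlated turnout swings in a real election would make it bigger, but the qualitative point is the same: the infinite-budget pollster's leftover uncertainty is about who shows up, not about who prefers whom.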
And in another sense, your hypothetical person is more informed than it was actually possible for anyone to be, since no one actually had the budget to interview every eligible voter in a trustworthy way, getting each of them to honestly disclose who they planned to vote for and how likely they were to actually end up voting.
So I think what I meant by "maximally informed" was actually a lower bar than your hypothetical. I meant basically 'as informed as possible' in a practical way, rather than according to the laws of physics, or some higher standard.
Perhaps another way to say this is to point out that whoever you think was the most informed in reality (Nate Silver and his team?) weren't tackling this problem with a $100M budget, and probably weren't even tackling it with a $10M budget, but if they had been, they would have been better informed. (They obviously weren't tackling it with the near-infinite budget that'd be required to gain the trust of every eligible voter and get them to honestly share who they planned to vote for, but that's too high a standard.) Spending a $100M budget on hiring a team of forecasters and support staff to make a massive effort to forecast the 2024 election would at least have been realistically plausible; e.g., someone who cared to make that happen could do it for the 2028 election. There'd be no guarantee that the project would make a good forecast just because it had a large budget, but it'd be much better equipped to become more informed and figure out who was more likely to win an election that would not wind up being that close.
Had the 2024 election wound up like the 2020 election, even a team with a $100M budget might have struggled to determine the likely winner. Though perhaps that's because, even in retrospect, determining who was most likely to win the 2020 election is hard. But given that the 2024 election wasn't very close, I expect that very informed people (e.g., with a $100M budget and strong forecasting skills in general) would have been able to figure out that Trump was >60% likely to win, and probably >70% likely to win.
Presumably an extremely informed group that settled on, e.g., 75% Trump before the 2024 election wouldn't have been literally maximally informed, but they'd be close enough for my purposes. And mainly I just want to say that I think it was theoretically possible for a group to become informed enough to come to a 75% forecast for Trump like that without being irrationally overconfident.
Many Thanks!
>Presumably an extremely informed group that settled on, e.g., 75% Trump before the 2024 election wouldn't have been literally maximally informed, but they'd be close enough for my purposes. And mainly I just want to say that I think it was theoretically possible for a group to become informed enough to come to a 75% forecast for Trump like that without being irrationally overconfident.
That seems reasonable.
I think there are at least two other plausible candidates for a "maximally informed" bettor.
The higher bar would be someone (or some group) that had access to all of the data from all of the polls that had actually been conducted. This is arguable, since some of the polls (such as Theo's) were completely private, and some (most?) of the polls that disclosed their aggregate results weren't disclosing their full raw data. So these considerations make this a somewhat unreasonable standard (though still below my hypothetical of a pollster with an infinite budget :-) ).
The lower bar would be whichever person/group had the "most" poll data actually available to them. One of the snags with this bar is that measures of the amount of poll data aren't quite comparable. The set of polled voters available to one group is unlikely to be a strict superset of the set available to some other group. If one group polled 20,000 voters, and another polled only 15,000 but did a better job of matching their polled voters to the likely-to-actually-vote population, who has "more" data?
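One crude way to put the two polls on a common scale is Kish's effective sample size, n_eff = (sum of weights)^2 / (sum of squared weights), computed from whatever weights each pollster needs to match their respondents to the expected electorate. A toy sketch, with weight distributions I've invented purely for illustration:

```python
# Kish's effective sample size: n_eff = (sum w)^2 / sum(w^2).
# The weight distributions below are invented purely for illustration.
import numpy as np

def effective_sample_size(weights):
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

rng = np.random.default_rng(1)
# Poll A: 20,000 respondents, poorly matched to the likely electorate,
# so the post-stratification weights vary a lot.
w_a = rng.lognormal(mean=0.0, sigma=1.0, size=20_000)
# Poll B: 15,000 respondents, well matched, so weights are nearly uniform.
w_b = rng.lognormal(mean=0.0, sigma=0.2, size=15_000)

print(f"Poll A: n_eff ~ {effective_sample_size(w_a):,.0f} of 20,000")
print(f"Poll B: n_eff ~ {effective_sample_size(w_b):,.0f} of 15,000")
# With these made-up weights, the smaller-but-better-matched poll ends up
# carrying more effective information than the larger one.
```

Under that (imperfect) metric the smaller, better-matched poll can genuinely have "more" data, though n_eff only captures the variance introduced by weighting, not whether the weighting targets were right in the first place.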
Reasonable thoughts. Note though that poll data is far from the only kind of data that can inform a forecaster about the probability of Trump winning.
E.g., forecasters could have learned about the probability of Trump winning by watching the Presidential debate, by watching Trump's interviews on the Jake Paul podcast or the Joe Rogan podcast or Harris's interviews, by seeing what kind of political discussions people are having on Twitter, by seeing what kind of reasons people give for supporting or opposing Trump, by seeing how these things compare to what Biden looked like during his 2020 election campaign, by looking at voter registration numbers by party in different states compared to the 2016 and 2020 elections, by observing whether your known Republican neighbor bothers to put up a Trump sign this year or not, by reading political philosophy and finding out what Jason Brennan means by "hobbits" in a political context, etc. All of these things provide information.
>All of these things provide information.
True! Many Thanks! ( I would hate to have to make a prediction given _only_ non-poll information. It is so easy to fall into the typical mind fallacy... )
Your "That seems reasonable" reply suggests to me that you agree with me that Scott's claim that Polymarket had Trump-wins priced too highly is wrong. Is that the case?
Many Thanks! I'm going to hedge. I think that Scott's claim seems likely to be wrong. After all, if the voting can be approximated as nearly deterministic, an absolutely knowledgeable observer would have said Trump's odds were 99%+ or some similar number, since he ultimately did win.
I don't know enough about the information available to all of the bettors on Polymarket to say that, in aggregate, they had "justified knowledge" that Trump's odds were >60%. It is certainly _consistent_ with what happened, but it is also consistent with the possibility that the aggregate information, if perfectly analyzed, would have implied odds of 58%, and both Theo and Polymarket's bettor pool were also somewhat lucky.
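One bit of quick arithmetic (mine, not anything from the thread) on how little the single outcome settles this:

```python
# How much does one observed Trump win favor a 75% forecast over a 58% one?
from math import log

p_high, p_low = 0.75, 0.58
bf_one_win = p_high / p_low
print(f"Evidence ratio from a single win: {bf_one_win:.2f}")    # ~1.29

# Consecutive correct calls needed before the accumulated ratio reaches 10:1:
n_needed = log(10) / log(bf_one_win)
print(f"Elections needed for 10:1 evidence: ~{n_needed:.1f}")    # ~9
```

So on this one data point, "the market had justified knowledge of >60%" and "the true odds were more like 58% and the bettors got a bit lucky" are nearly indistinguishable.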
My knee-jerk, meta-meta-meta suspicion is that most people underestimate their error bars (even in physics!), and many estimates of probabilities should be pushed further away from 0.00 and 1.00 most of the time...
I will say this: In one of the subthreads, someone said "I believe the markets". My point of view is that I'm skeptical that the markets are a great source of unbiased probability estimates, but it is also true that I have never had a case where I was so confident of my knowledge of a situation that I was willing to bet _against_ the market.
Reasonable-seeming-to-me!
Many Thanks!