* I don't like the idea of a society where a lot of people spend a lot of brainpower gambling on the future because that's just stupid, and also, it has all sorts of weird side effects like incentivizing things to happen for money instead of, like, because we want them to happen because they're good.
* I don't like the idea of decisions being made by [or, yes, I meant based on the forecasts of] anonymous hordes of gamblers, because, again, it conflates financial interests with everything else that matters, and I want to see society being driven more by leadership and decision making and less by mindlessly trusting data that comes out of machines.
Not to mention the immense conflict of interest that starts to arise once money is coupled to outcomes because, after all, the money can also start trying to influence those outcomes. It's not like there hasn't been cheating in literally everything that anyone ever gambled on. But now, yay, you can gamble on _everything_, how fun.
I don't see why anybody would _want_ this. It is vaguely appealing in a technocratic sense: how cool, we can build a system that predicts things whose inputs are money and time. But I can't imagine a world where it is implemented not being worse for it.
I guess I'm confused, in that you seem to believe that decisions are in any way not connected to financial incentives now, but leaving that cynicism aside, do you think that if we have prediction markets people will "decide things based on the forecasts of" them? That's not how prediction markets work, and if they started to, wouldn't that just break prediction markets, since who's going to bet on something which will be decided in a way that's dependent on the bet? That's more like voting shares than prediction markets, which also happens, but is already a mechanism in our society.
" it has all sorts of weird side effects like incentivizing things to happen for money instead of, like, because we want them to happen because they're good." Evidence? Again, this ties to your contention that prediction markets are determinant. What is the evidence for that?
Maybe I'm misunderstanding a large chunk of how prediction markets function.
I think if prediction markets become a major part of our societal discourse, i.e. like the stock market, then yes I do think government officials and other decision makers will be influenced by those predictions. And since those predictions are financial in nature, it means the decisions by those people will be influenced by financial incentives.
I feel like the whole point of prediction markets is that they are a social machine for producing good predictions; if the only point of that was "just to know, for fun" then they would not be very interesting to anybody. The point, I've assumed, is to get them big enough that their predictions are _accurate_, because they're aggregating over all the knowledge in the market, and then, ideally, use those predictions for something (such as corporate or government decision-making).
But maybe _I'm_ missing a large chunk of how prediction markets function and just assuming that is the unstated goal of the whole project. I've certainly just sorta assumed this is where everyone's coming from.
It seems to me like you're drastically overestimating the quality of the decisions that emerge from "leadership and [human] decision making" in the absence of hard data.
Personally -- and this is maybe a kooky take that I couldn't convince anyone of, it's just a feeling really -- I think people dramatically underestimate the quality of _good_ leadership, because there is so little of it around to see as example.
We have had plenty of societies driven by leadership and decision making. Unfortunately these have been totalitarian. Democracy is effectively a prediction market, as the voters gamble on which party will be the best option. This leads to inconsistent leadership and tends to leave decision making in the hands of individuals. It is clear however that democracies outperform totalitarian models on just about every useful metric. Perhaps that horde of gamblers are going to have better ideas than the leaders and decision makers, whose qualifications for such roles are not actually testable.
Strongly disagree with the analogy between democracy and prediction markets, because voters pay nothing individually for being wrong. The feedback loop is much weaker in democracy than in prediction markets.
> We have had plenty of societies driven by leadership and decision making. Unfortunately these have been totalitarian.
This seems very incorrect. All societies are driven by leadership and decision-making. The quality and degree of those things (and the moral compass of the leaders) varies, but it is absurd to say that having leadership means having totalitarianism.
>"I don't like the idea of a society where a lot of people spend a lot of brainpower gambling on the future"
1. We already live in that society due to the existence of stock markets. The countries that allow stock markets to exist are a lot better off than the countries that don't.
2. Whether or not prediction markets are allowed, people will spend lots of time reading and writing speculation about the future, such as in newspaper op-eds. Prediction markets will make all that speculation much more accountable and accurate.
3. Without prediction markets, media elites can use their platforms to make false predictions with impunity and dishonestly twist politics towards the preferences of their class. The truth is more egalitarian than media ownership structure. Prediction markets are a powerful way to speak truth to power.
One objection I could think of is the fact that one might not want to base policy on something so easily manipulated by spending cash. (Not that other stuff we base policy on, like academic opinions, can't also be manipulated by spending cash.)
Suppose an evil version of Scott would like indoor gatherings to be permitted at a certain place and time. The obvious way to do this would be to spend cash on the question "will bad stuff happen if we allow indoor gatherings?", lowering the odds as estimated by the market by, say 10%.
Ideally, a superforecaster would notice this gap and invest in the opposite side, but practically, there are some limitations. Perhaps the market takes a long time to resolve, and the ROI is not worth it (rough numbers on this below). (Are leveraged prediction markets a thing?)
This becomes especially bad in case of self-fulfilling prophecies, because then even the financial punishment might not apply.
Suppose both the Afghan government (as of 2021) and the Taliban are firm believers in prediction markets. Someone willing to influence the outcome might simply place a huge bet on their favorite side, knowing that the other side will most likely not fight too much against terrible odds.
(I would argue that in that case, the natural equilibrium would not be that the prediction market is trusted 100%, but less, so that a prediction would not easily become self-fulfilling. In general, it is not clear to me that such feedback loops do not appear. If you ran a prediction market on the outcome of a Keynesian beauty contest, and the judges knew about the market prediction, they might very well go along with them.)
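To put rough numbers on the ROI worry above, here is a minimal sketch; the prices, probabilities, horizon, and the 7% benchmark are made-up illustrations, not anything from the comments.

```python
# Sketch: is it worth correcting a manipulated market that won't resolve for years?
# All numbers are hypothetical; the point is that long horizons shrink the
# annualized edge from buying an underpriced contract and holding it to resolution.

def annualized_return(buy_price: float, true_prob: float, years: float) -> float:
    """Expected annualized return from buying a $1-payout YES share and holding it."""
    expected_total_return = true_prob / buy_price   # contract pays $1 if YES, else $0
    return expected_total_return ** (1 / years) - 1

# Evil Scott has pushed "bad stuff will happen" from a fair 50% down to 40%.
print(f"{annualized_return(buy_price=0.40, true_prob=0.50, years=2):.1%} per year for the corrector")
print(f"{annualized_return(buy_price=0.47, true_prob=0.50, years=5):.1%} per year for a smaller, slower gap")
print("7.0% per year assumed for just holding an index fund instead")
```

On these made-up numbers the 10-point gap is still worth correcting, but a smaller gap on a slower question loses to the index fund, which is the sense in which manipulation can persist when time-to-resolution is long and leverage isn't available.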
I guess I see your point, but this seems way more expensive than just buying politicians, especially when the very act of gaming it this way would (I predict... see what I did there?) get priced in to the market itself. If a prediction market were skewing this way, I'm genuinely wondering if it wouldn't be evident, and takers on the other side of the bet would be hard to find, at the proposed odds.
Again, I'm really not fluent in prediction markets, but these critiques seem (perhaps naively) to be self-correcting, or at least pushing directly against what such markets are supposed to do... and if that happens, they'd cease to be relevant quickly. At least that's my sense of it.
A tax on lying is better than no tax on lying. The latter is what we currently have in the media without prediction markets. If you want to lie the country into a war in Iraq, you don't even need to spend a zillion dollars manipulating prediction markets! You just need to know the right people at major newspapers.
If there are huge, liquid prediction markets, and you "know the right people at major newspapers", then wouldn't you place your bets, proceed as before, with the same outcome?
...except that since you would gain a zillion dollars, you wouldn't need any other motivation than money, such as ideology, loyalty, whatever.
Another way to phrase it is, if there is a tax on lying, which incents people to lie less, why would they respond by conforming their speech to reality rather than vice versa? Cf. Karl Rove's famous quote "when we act, we create our own reality."
As we see with existing markets, if the market isn't doing what you want it to, it's very easy to convince people it's being manipulated by a conspiracy of traders.
No, seriously, I want to know the answer to this question. Have prediction markets so far produced surprising odds for events, contra public opinion, *and been proven right* reliably?
And just to be clear, I asked the question in hope of a serious answer one way or another - not that I object to Carl P offering a snarky answer in the meantime.
Huh. So if I want to know whether Apophis will hit the Earth sometime in the 2100s, or whether a new nasty COVID variant will emerge, I'm best off getting a load of people to bet on the outcome and taking note of where the odds fall? That's great news. It's sure a lot easier than trying to integrate Newton's equations or sequencing a crapton of viral RNA.
It's not the problem of generating knowledge/predictions that you're trying to solve via prediction markets. It's the problem of _aggregating_ knowledge/predictions in a way that you have a good sense for which knowledge/prediction you can have faith in.
Right now, it's hard to say which expert has put how much work into their prediction, and whether they're making the prediction to be right or to score political points or reputation points or whatever other points. However, the expert knows. If they back up their opinion with a bet made with their money, conditional on their prediction being right, you have a sense for their degree of confidence. In a market, over time, the aggregation of such bets should give you the best predictions. That's (my understanding of) the theory of it.
With sufficient liquidity, the CCP would lose so much money so fast doing that that bribery would be cheaper. That's one of the intended features of a prediction market.
There's some kind of "liquidity to influence" ratio that's important here. If the liquidity-to-influence ratio gets too low (for a specific question on a specific market) then market manipulation becomes worthwhile, for certain interested parties.
I am concerned that the more we talk about prediction markets, the quicker we get to the point where their influence exceeds their liquidity, and then silly things start to happen.
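One way to make the liquidity-to-influence ratio concrete: if the market runs on a Hanson-style logarithmic market scoring rule with liquidity parameter b (an assumption on my part; the comments don't specify a mechanism), both the cash needed to push the price and the expected loss from doing so scale linearly with b. A minimal sketch:

```python
from math import log

def cost_to_move(b: float, p_from: float, p_to: float) -> float:
    """Cash an LMSR trader must spend buying YES to push the price from p_from up to p_to."""
    return b * log((1 - p_from) / (1 - p_to))

def shares_bought(b: float, p_from: float, p_to: float) -> float:
    """YES shares acquired along the way (each pays $1 if the question resolves YES)."""
    return b * (log(p_to / (1 - p_to)) - log(p_from / (1 - p_from)))

def expected_manipulation_loss(b: float, p_true: float, p_from: float, p_to: float) -> float:
    """Expected loss, held to resolution, if the honest probability really is p_true."""
    return cost_to_move(b, p_from, p_to) - p_true * shares_bought(b, p_from, p_to)

# Pushing a 30% question up to 60% costs ten times as much on a market with ten times the liquidity:
for b in (100, 1_000, 10_000):
    print(b,
          round(cost_to_move(b, 0.30, 0.60), 2),
          round(expected_manipulation_loss(b, p_true=0.30, p_from=0.30, p_to=0.60), 2))
```

The "silly things" start when the influence the price has on real decisions is worth more to somebody than that expected loss.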
Was there ever a real culture with a true anti-censorship ethos? Every culture has its taboo topics, which nobody but a few weirdos even think to question, an activity which is by definition very low status. Censorship becomes a hot topic only when there's a culture war with more than one legitimate pretender to societal dominance, an unstable state of affairs.
Yeah, I thought https://www.replicationmarkets.com/ was a great experiment in using a prediction tournament to get percentages on how likely different papers were to replicate. Sadly, they don't seem to be operating anymore...
>The professor wouldn't have the ability to sell the options for a certain number of years.
I mean, if a professor tried to sell me those in his own work I'd only pay 5 cents, because the very fact that he's selling them implies that replication is unlikely (the 5 cents is for the replication getting P < 0.05 by chance as well; if the criterion's set at P < 0.01 then I'd pay 1 cent and so on). So I'm not sure why this rule would be necessary unless you're assuming that a Greater Fool exists.
I think that rule is because there's a third option - nobody tries to replicate the study, maybe because it's hard or complicated or just not an exciting enough result to draw further study. In which case the professor might be stuck with tokens that don't pay out even though the professor is confident that his results will replicate. Allowing the professor to sell the token beforehand lets them capture some of the value from it before the attempted replication is finished.
I'm not saying that letting him sell them at all is a bad idea. I'm wondering why letting him sell them immediately is a bad idea.
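A quick expected-value check of the 5-cent intuition a couple of comments up, assuming the token pays $1 if a replication attempt succeeds (the 80% power figure and the rest are illustrative):

```python
def replication_token_value(p_real_effect: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Fair price of a $1 'this replicates' token, ignoring discounting and the
    chance that nobody ever runs a replication at all."""
    return p_real_effect * power + (1 - p_real_effect) * alpha

print(replication_token_value(p_real_effect=0.0))              # 0.05 -> the "5 cents" floor at p < .05
print(replication_token_value(p_real_effect=0.0, alpha=0.01))  # 0.01 at the stricter threshold
print(replication_token_value(p_real_effect=0.8))              # 0.65 if you actually believe the result
```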
Not about prediction markets, but related to Parrhesia's proposal.
What would happen if researchers were financially rewarded according to each study's citation impact (i.e. using article-level metrics), but only after a sufficient number of independent replications with sufficient sample size have also been published?
If researchers were rewarded only for citation impact, then they would have an incentive for statistical misconduct. If researchers were rewarded only for publishing replicable research, then they would have an incentive to publish boring things of the 'water is wet' type. My suggestion is meant to incentivize research that could bring about surprising (or as you say, 'counterintuitive') results, yet without incentivizing statistical misconduct.
Citations from people at the researchers' own institution and at the institution where the researcher earned their PhD could both be excluded from the tally, to reduce collusion. Citations from *other fields* might be double-counted, as they would probably indicate that the research is important. (Also, collusion would be harder to organize.)
To balance out what gets published, there could also be a rule that a certain share of a researcher's studies have to be replications. When that share is met, then the researcher's original studies are eligible for the reward.
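A toy version of the proposal, just to make the moving parts explicit; every threshold, weight, and field name here is a hypothetical placeholder, not part of the original suggestion.

```python
from dataclasses import dataclass

@dataclass
class Study:
    outside_citations: int           # citations, with own-institution and PhD-institution ones already removed
    cross_field_citations: int       # subset of the above coming from other fields
    independent_replications: int    # published, adequately powered replications
    author_replication_share: float  # fraction of the author's own output that is replication work

def reward(study: Study,
           min_replications: int = 2,
           min_replication_share: float = 0.2,
           dollars_per_citation: float = 10.0) -> float:
    """No payout until the replication conditions are met; afterwards the payout scales
    with collusion-filtered citations, with cross-field citations counted twice."""
    if study.independent_replications < min_replications:
        return 0.0
    if study.author_replication_share < min_replication_share:
        return 0.0
    weighted = study.outside_citations + study.cross_field_citations  # adds the second count
    return weighted * dollars_per_citation

print(reward(Study(outside_citations=120, cross_field_citations=30,
                   independent_replications=3, author_replication_share=0.25)))  # 1500.0
```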
I think you significantly overestimate the competence and coherence of the "powers that be."
What makes you think so? and what evidence, if any, would make you think otherwise?
Sorry, fixed.
"A few smart people will know ways around this." Outlaw something, only outlaws will use it, or those using it will be deemed to be outlaws. Such a waste. Other countries may be smart enough to make better policy choices.
That is usually the point of outlawing something, yes.
I don't think "real money" will be a key feature for prediction markets to go mainstream -- it will need to be a reputation-based system.
You're well aware of the "why would I invest money on something that is 99% likely to happen but will not resolve for 5 years" problem. There are also all the problems of poker sites - the few people that make money are subsidized by a large amount of financial losers. And losing money predicting the future is less fun than poker. And third -- you want an ecosystem where the top predictors more than double their "money" every year - which is possible with fake money but not real money.
I could 100% believe this. Prediction markets as some analogue to competitive programming -- most people pop in just long enough to flex that they've got big brains and get hired by a big corp that wants those skills and pays good money, plus a smaller contingent who just like it enough to do it regardless of the rewards
I'm skeptical of reputation based systems for a few reasons.
First of all, this kind of reputation just isn't that valuable. I'm a well-known public commentator who makes money by commenting on important issues, everyone knows I use Metaculus, and nobody has ever asked me what my Metaculus reputation is! What hope is there for reputations to provide incentives for anyone else?
Second, because it's hard to do right. Metaculus' system incentivizes making lots of thoughtless predictions; if there was any incentive to care about it, people would game it to the point of uselessness. Partly this reinforces my first point (nobody cares), but Metaculus is usually impressively competent and if even they are getting this wrong I think it's tough. Lots of reputation systems are naturally vulnerable to the "create a new account, get your 100 free reputation points, then lose a bet against your normal account that gives them all your reputation" attack. Other systems are vulnerable to "create a new account, get your 100 free reputation points, stake them all on a long-shot bet, repeat if you lose, finally one time you'll win and have a reputational fortune" attack. And none of them let you do the thing where you (someone who has never used the site before) are very very sure the site is wrong and willing to invest a lot of risk into correcting it. If the site is real money, you can invest your life savings into this; if it's reputation only, you can only invest your 100 free reputation points.
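The "free points on a long shot, repeat until one account wins" attack is easy to simulate. A minimal sketch (the 100 free points and 20:1 odds are assumptions, not any real site's rules):

```python
import random

def longshot_attack(p_win: float = 0.05, payout_multiple: float = 20.0,
                    free_points: float = 100.0, max_accounts: int = 1_000) -> float:
    """Keep opening fresh accounts and staking the free points on a long shot
    until one of them wins; return that account's final reputation."""
    for _ in range(max_accounts):
        if random.random() < p_win:
            return free_points * payout_multiple  # the one surviving account looks brilliant
    return free_points  # vanishingly unlikely to reach this with 1,000 tries

random.seed(0)
trials = [longshot_attack() for _ in range(10_000)]
print(sum(r >= 2_000 for r in trials) / len(trials))  # ~1.0: the attacker almost always ends up "rich"
# The cost is only the time to register accounts; the reputation system never
# sees the graveyard of abandoned losing accounts.
```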
What is your metaculus reputation?
I'm "Level 5", but that's completely meaningless without you knowing how many predictions I've made or how much time I've spent on the site, which is both a cause of the problem (people rightly don't care about Metaculus reputations) and an effect of it (if Metaculus thought people would care, they would provide a usable number)
I don't buy the argument that Metaculus would have done it if people cared. How do they know if they haven't seriously tried it?
People should care about prediction track records.
An election prediction of someone who got say 12 out of 14 last elections right is worth listening to more than some random forecaster. Or maybe even a collection of random forecasters.
One problem is that they could influence a prediction too much and create groupthink. So some kind of extra reward would have to be offered for being contrarian to top forecasters.
Prediction track records require context, though. Maybe predicting the outcome of 12 out of the last 14 elections was trivially easy.
Yeah. I've gotten tons of Metaculus points by predicting stuff like "99% chance no new US civil war before July 2021".
Yeah there should be a difficulty measure on predictions. I don't like the current system Metaculus has.
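One standard way to get something like a difficulty measure is to score each forecast relative to a baseline (the community median, the market price, or just 50%), so that parroting the obvious earns roughly nothing. This is a generic log-score sketch, not Metaculus's actual formula:

```python
from math import log

def log_score(p: float, outcome: int) -> float:
    """Log score of probability p for a binary outcome (higher is better)."""
    return log(p) if outcome == 1 else log(1 - p)

def relative_score(p: float, baseline: float, outcome: int) -> float:
    """Positive only when the forecast beats the baseline."""
    return log_score(p, outcome) - log_score(baseline, outcome)

# "99% chance of no new US civil war" when the community already says 99%:
print(relative_score(0.99, baseline=0.99, outcome=1))  # 0.0 -- no credit for the obvious
# Calling a genuinely contested question at 80% when the crowd sits at 50%:
print(round(relative_score(0.80, baseline=0.50, outcome=1), 2))  # 0.47 -- real information added
```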
Reputation doesn't work because being an excellent forecaster isn't adequately cool - not in the way that being a poker shark or even a Scrabble champ is cool.
If high reputation on a major prediction market site signified to the world that you're a sexy Nostradamus, the Farseerr dating app would follow, people would put links to their scores in their twitter bios, and all the rest of it.
No, but sufficiently high reputation combined with verification of 1 account per person could be valuable for $
Point isn't to have yet another election prediction method. Point is to generalize this to the point where you put on your facebook a "will I look better with a beard as decided by hotornot.com", subsidize it with $100 and get an answer.
I get why some people are so gung-ho about this. It means significantly less uncertainty overall in all areas of life. There are (as of yet theoretical) ways to trade long term bets, so you can make money now from betting on whether we'll get a man to Europa in 30 years.
So yeah, Scott is right to be disappointed that more people seem to be working on rounding corners in facebook windows than working on this.
What is your track record? Metaculus is 12% underconfident on (exactly) 1000 questions. I couldn't beat Metaculus with my 17% underconfidence (on far fewer questions). This information is more important than your level, since a track record tells us more about the *quality* of your predictions, not the quantity. (You can find it under your profile.)
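For what a figure like "12% underconfident" might mean mechanically, here is one crude way to compute a calibration gap from a resolved track record; the binning and the sign convention are my assumptions, not Metaculus's definition:

```python
def calibration_gap(forecasts: list[tuple[float, int]], n_bins: int = 10) -> float:
    """Average gap between observed frequency and stated probability, measured in the
    direction away from 50%. Positive means underconfident: outcomes run more extreme
    than the probabilities you quoted."""
    bins: dict[int, list[tuple[float, int]]] = {}
    for p, outcome in forecasts:
        bins.setdefault(min(int(p * n_bins), n_bins - 1), []).append((p, outcome))
    gaps = []
    for items in bins.values():
        avg_p = sum(p for p, _ in items) / len(items)
        freq = sum(o for _, o in items) / len(items)
        direction = 1 if avg_p >= 0.5 else -1
        gaps.append((freq - avg_p) * direction)
    return sum(gaps) / len(gaps)

# Toy record: 70% calls that came true 85% of the time, 30% calls that came true only 20% of the time.
history = [(0.7, 1)] * 17 + [(0.7, 0)] * 3 + [(0.3, 1)] * 4 + [(0.3, 0)] * 16
print(f"{calibration_gap(history):+.1%}")  # +12.5% -> underconfident on this toy data
```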
Some of those issues have straightforward solutions (if the average Joe gets 100 reputation points for free, perhaps Joe can pay to start with 10000 points, and Barack Obama would get 5 million points for free).
Other issues may well be impossible to solve. The "stake everything on a longshot bet" issue happens on other forms of reputation as well; if you remember Bill Mitchell from 2016, he built quite a reputation by making one bet on "every piece of evidence proves Trump will win in 2016".
I don't think you should be able to pay for more points, since then you can buy your way to a good reputation, and reputation doesn't accurately track how good you are at predicting.
Also, either you buy points for money at a 1:1 ratio, or you do some weird complicated function. If it's the weird complicated function, I bet it's game-able; if it's 1:1, then it's basically money, so why not call it that?
It would certainly be weird and gameable -- not necessarily gameable to get real money, but gameable to get reputation points.
On the other hand, you probably want "people who are good at gaming systems" to participate in your prediction market, so that's not necessarily a bad thing.
Yeah, but then it's still not telling us anything useful.
If the initial allocations are unfair or random, that's fine because they'll end up distributed rationally. Just make sure there's exactly one "initial" event.
You could get around the gambling/unregulated futures issues by making the payout solely upside -- i.e., no losing for bad predictions but only rewards for correct predictions. It's not 100% ideal. But like stock traders competing for a bonus from their firm, there should be a powerful incentive to be right if serious money is at stake.
Better predictions should be worth enough to someone with an interest in the predicted events to be worth funding the upside payments. Perhaps a billionaire like Elon Musk could fund the venture in return for getting first use of the information before releasing it to the public. A billionaire with a known network for predicting the future ought to be able to figure out how to monetize that reputational asset.
If there's only an upside, that incentivizes making a trillion accounts and spamming the site with completely random predictions.
at least regarding the "free point attacks" you list: that's why you don't do reputation with "points", you do reputation with reputation. because a person's reputation is what it is.
as for incentives: no one on the internet wants to shut up. they apparently don't need any incentives.
How do we measure reputation, then? If I want to quickly know whether [blogger] is any good at predictions, what tells me? "Ask around about his reputation" is obviously worthless here, but I see no other way.
Additionally, without some measure like points or money, a lot of the neat features we want — the ones we get from it being a prediction *market* — don't work.
yes, measuring reputation involves finding out how other people evaluate the person in question. that's reputation. but i'm not sure what is "obviously worthless" in your example (people find value in asking about reputations every day), or what particular "neat features" you are referring to.
When I first heard about this, I immediately noticed a conflict-theory explanation for why Metaculus would do this. I don't have proof, and it could be wrong - if someone has proof that it's wrong, please tell me. But my suspicion is this:
[Metaculus is known to have its own proprietary prediction-aggregation software, which presumably has its own, hidden, reputation system or something similar to one. The Metaculus principals, I posit, make money from the Metaculus predictions, either directly via investments/prediction markets or via selling them to third parties. The open reputation system, I posit, is there solely to incentivise people to feed data into Metaculus, and is designed around that goal rather than the goal of letting people actually figure out whose predictions matter (that's what their proprietary system is for, which is for paying customers/Metaculus principals). The useless data is no problem for them, I posit, because their actual aggregation software filters it out when making Metaculus predictions.]
Anybody can look at Metaculus's average prediction on any question, so I don't think this is right.
Unless they've changed things, the community prediction isn't the same thing as the secret-sauce Metaculus prediction.
What hope is there for reputations to provide incentives for anyone else? On the one hand, reputation systems will always be crude metrics. On the other hand, so will money. On the gripping hand: Stackoverflow. Incentivizing quality over quantity is a harder problem, however.
I work for Metaculus and wanted to share my perspective on Scott's point about reputation and incentivizing lots of off-the-cuff predictions. For us, there's a difference between 'community' forecasting and 'competitive' forecasting—and we want to encourage both! We support more casual community forecasting where learning and fun are more the focus because fostering and supporting a sizable, engaged forecasting community is part of our overall mission to promote the forecasting mindset in the world. We want Metaculus to be a place to learn, to build forecasting skill, and to connect with interesting, bright people.
Tournaments, though, are where iron sharpens iron: If someone is highly-ranked on a tournament leaderboard and wins prize money, it's because they outperformed other forecasters and contributed a lot of information with their forecasts! We only very recently introduced the current incarnation of Metaculus tournament scoring, with more upgrades on the way. These, combined with our other plans to build out our reputation system more broadly, are going to make the competitive side of Metaculus much stronger.
Also, it's not quite right to say that no one cares about people’s Metaculus reputation: Two organizations that we know of with quite high bars for hiring have incorporated Metaculus track records into their recruiting processes, and we're currently in conversations with a third.
I’d bet my whole life’s reputation on real money markets being more effective
> You're well aware of the "why would I invest money on something that is 99% likely to happen but will not resolve for 5 years" problem
This is solved in principle; bet S&P 500 shares rather than dollars, or put the money into something which produces yield (easy to do in crypto).
Also kind of in practice by https://hedgehog-markets.medium.com/
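Rough numbers on why the denomination matters for the "99% likely, resolves in 5 years" case (the 7% equity return and the contract price are assumptions):

```python
# A $1-payout contract on a 99%-likely event, resolving in 5 years, bought at 97 cents.
price, prob, years, equity_return = 0.97, 0.99, 5, 0.07

dollar_bet_annualized = (prob / price) ** (1 / years) - 1

print(f"dollar-denominated bet: {dollar_bet_annualized:.2%} per year")      # ~0.41%
print(f"just holding an index fund (assumed): {equity_return:.2%} per year")
# If the contract is instead denominated in index shares (stake 0.97 shares, receive 1 share
# on a win), you keep the index return and pick up the ~2% edge on top, so tying money up for
# five years no longer costs you the market's return.
```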
It's possible in principle to combine that with smart contracts using e.g. the mirror protocol. (This is not an endorsement, and I haven't audited their system, and there might be some gaping holes in it)
https://cryptobriefing.com/mirror-protocol-now-offers-access-to-sp-500-index/
I wonder if you can create composable prediction markets by allowing bets in any currency, including other bets. (And shorts of other bets, and other bets in a specific currency...)
edit: Hang on, this is just a bad reinvention of hedging maybe. Otoh, maybe you can do nonlinear combinations that way? I don't know enough about economics to see what that gives you. Conditional probabilities?
If you have a market for "will X happen" and the currency is units of "will Y happen" then the price is P(X^Y) which you can divide by P(Y) to get P(X|Y).
I don't think this adds anything new. I'd rather denominate everything in shares of SPY so that the long-term stuff isn't prohibitively expensive in opportunity cost.
I disagree, I suspect people who lose money on prediction markets will blame something external and go back in with even wilder swings to try and earn their money back. There are definitely people like that who seem to be subsidizing me on Kalshi.
I'm not sure people realize how many of us hold "gambling is repulsive" as a first principle. I'm not saying I think gambling should be illegal. But when I see someone buying a lottery ticket, playing poker or otherwise gambling, I definitely feel the same kind of feeling I do when I am at the store and standing next to someone who smells strongly of urine. I don't think urine dude is a BAD person, necessarily, but clearly he's got a problem and I have every reason to hope he starts bathing and changing his clothes.
Oh, yeah, it drives me crazy when I tell people I'm on a forecasting site and when I explain it they're like, "oh so you gamble?" If I try and explain it to them I still get "well there's an element of chance."
How is PeerBet.io legal?
Was there a prediction on how quickly regulatory capture would be used to stifle competition and innovation?
Have prediction markets or super forecasters had any luck in predicting the direction of stocks, the success of a startup, etc? This seems like a way to make profit off good predictions outside of the prediction markets themselves
There is a private superforecaster site to do this, but if I recall correctly, it's too early to tell the results.
Which one? Open Judgement?
I think it's called the New York Stock Exchange.
What’s private about NYSE?
I think the point of prediction markets is to apply the 'wisdom of successful investors' to questions other than "what should this company be worth"?
Perhaps, but applying it to something like company worth that can be used to make money can increase the mainstream awareness/popularity of prediction markets and maybe even lead to regulatory changes
The stock market, including futures and options, is already a prediction market for the direction of stocks. Any subset of individual opinions would just amount to a smaller, inferior prediction market.
"Price is a signal." (Adam Smith, circa 1776)
"I need a prediction summarizing the wisdom of millions about what economic activities are worth investing in. If only there was some kind of....hmm...well, let's say signal." (generic investor-activist, circa 2022)
+1
The problem with this is that we care about things that are not economic activities. And structuring things as "Will X happen" is much easier than structuring them as "Will X happen and then also have the effect on the stock market that I expect".
Meh. Things that have zero to very little economic impact are ethereal abstractions of interest to almost nobody. "Economic impact" is just another way to say "people have to spend money (= the saved hours of their working life) to avoid evil consequences" or "people are able to reap greater rewards for the same work (or no work at all!) because of these good consequences." That covers nearly every form of good and bad luck, which is why economic cost or benefit is pretty much the catch-all measure of broad social impact.
Let's rephrase to say we care about things that have economic relevance but cannot be neatly captured by stock movements. The Covid pandemic, an invasion of Ukraine, adoption of a new technology, should I save for college for my kid or assume college won't be a norm any longer, etc. The things we want to make decisions on are both far broader and far more specific than what stock prices can measure.
I think a secondary prediction market for public corporations could be interesting and useful if it was based on one or more measurable values like "Net Profit" or "Brand recall in the annual survey from XXXX research company" or even "P/E ratio of the listed stock".
The stock markets are prediction markets based on some measurable metrics for the underlying companies, but also based on memes, short interest, futures markets, potential future growth, potential exposure to liability, and game theory re: everyone else's thoughts on those things.
It is infamously difficult for stock pickers to outperform a well-diversified buy-and-hold portfolio over time because there are so many of these factors baked into the price, and because that game theory piece plays such a big role. It would be really cool to see if people are better at predicting future corporate performance when the concrete statistics are divorced from the rest of what fuels speculation.
You may have a point about certain sub-micro bets that are below the level of stock price that the market sets. Some of that is covered by individual analysts, of course. But they have loads of conflicts and incentives to not stray too far from consensus. Medium-term earnings could be an informative betting pool.
Hi Scott, the link to the CFTC decision is [broken, details omitted in edit]. Feel free to delete this if fixed.
Here's the CFTC press release https://www.cftc.gov/PressRoom/PressReleases/8478-22 - has a sidebar that links to an identical filename, though when you click it you won't get an ordinary file, just a download, which probably accounts for it. For the closest thing to a clickable link you can right-click and select 'copy link', which gets you https://www.cftc.gov/media/6891/enfblockratizeorder010322/download
Sorry, fixed.
It's probably not a good idea to let people know much about the file structure of Scott's machine. Can that bit still be edited?
I thought about it, but that file path is what you'd get by default for a particular, very popular OS/browser combination, so it leaks less than you might think. Nevertheless, it's not worth arguing about and I'm fine getting rid of it; deleted.
(remember also that every subscriber got a Substack tracking link that resolves to that same path...)
How do you structure a prediction market subsidy so that it encourages more/better forecasting?
I think the usual way to subsidize a prediction market is providing liquidity. Without liquidity, what happens is that you, for example, want to buy the YES token at 60% probability, but there is no one to take the other side of the trade. I think if someone provides liquidity to a market, they need to lock up capital in it, but at the end they will get it all back.
You create a prediction market with an Automated Market Maker for each forecaster you invite to your very exclusive platform. This makes it profitable for each to participate and, to a first approximation, forces them to collaborate.
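To make the AMM-as-subsidy idea concrete, here is a minimal sketch of one common mechanism, Hanson's logarithmic market scoring rule (LMSR). The class, variable names, and numbers below are mine for illustration, not anything specified in the comments above; the key property is that the subsidizer's outlay is the market maker's worst-case loss, bounded by b·ln(2) for a two-outcome market.

```python
import math

class LMSRMarket:
    """Minimal two-outcome LMSR automated market maker (illustrative sketch).

    The subsidizer funds the market maker's worst-case loss, which is
    bounded by b * ln(2) for a binary market started at 50/50. Larger b
    means deeper liquidity and a larger potential subsidy paid out to
    whoever moves the price toward the truth.
    """

    def __init__(self, b: float):
        self.b = b
        self.q_yes = 0.0  # net YES shares sold so far
        self.q_no = 0.0   # net NO shares sold so far

    def _cost(self, q_yes: float, q_no: float) -> float:
        # LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        # Instantaneous price = the market's implied probability of YES
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares: float) -> float:
        # A trader pays the difference in the cost function
        cost = self._cost(self.q_yes + shares, self.q_no) - self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return cost

    def max_subsidy(self) -> float:
        # Worst-case loss the subsidizer must be willing to cover
        return self.b * math.log(2)


market = LMSRMarket(b=100.0)
print(round(market.price_yes(), 3))      # 0.5 to start
print(round(market.buy_yes(50.0), 2))    # cost of pushing the price up
print(round(market.price_yes(), 3))      # new implied probability
print(round(market.max_subsidy(), 2))    # ~69.31 = the subsidy at stake
```

The relevance to the subsidy question: the subsidy is bounded and known in advance, and it is paid out, in effect, to whoever corrects the price, which is exactly the incentive for more and better forecasting.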
I think this is far too negative / absolute on the US regulatory piece. If the conditions for making it work are satisfied elsewhere and it begins to attract real money / real volume, the US will grudgingly accept it for fear of losing out on the position as the financial capital of the planet. I strongly doubt the CFTC wanted to accept bitcoin futures but eventually they had no choice; I expect you get that here eventually.
I don't think our government actually cares that much about competition. It's very complacent, and no regulator is going to lose their job because we got invaded by a more competitive government.
No, but regulators will lose their jobs if stakeholders are sufficiently aggressive in lobbying and they prove intransigent. I guarantee the high-frequency market-making firms would love to enter this business if it gets sufficiently large; right now, it's just a curiosity.
There's less money spent on politics (advertising + lobbying) than on almonds.
https://slatestarcodex.com/2019/09/18/too-much-dark-money-in-almonds/
There doesn't seem to be any lobby actually in favor of the Census changing their practices to inject fake data for "privacy" (they claim they're required to do this, even though the same law didn't require it before), yet this change would break a whole lot of things that depend on accurate Census data.
https://www.slowboring.com/p/census-data
They just make decisions for no real reason and to nobody's benefit.
Am I the only person around here who thinks prediction markets are a really bad idea and ideally should not exist? Like, I don't want to live in a world where (a) making money by gambling on your ability to predict the future accurately is a common thing or (b) where decisions are being made by [edit: I mean 'based on the predictions of'] an anonymous mass of gamblers trying to predict the future. Those both seem independently shitty. And prediction markets do not seem like the only solution to the problem of forecasting being difficult and hard to calibrate or scale.
"Bets are a tax on bullshit", ie, betting isn't the only way to do forecasting well, but it's a way to sort out good forecasting for self-serving bullshit. We want our forecasts to be well-correlated with ground reality, not whatever ulterior motives the forecaster might have, and "prediction markets" or "explicitly pay superforecasters based on performance" are the only two that I've heard that work, and the latter is both hard to scale and more-or-less "making money by gambling on your ability to predict the future" anyway.
Liars would be against bullshit detectors ==> spooky opposition
Prediction markets are repeatedly shown to be the most accurate way to predict the future. There's alpha in being able to predict the future better than others, whether we like it or not.
I'm not saying they won't work, I'm saying they seem bad for society and I don't want them to be a thing.
Why? Without meaning the question prejudicially, honestly why? Is it connected with your statement about "where decisions are being made by an anonymous mass of gamblers" - but these markets of prediction bets aren't making the decisions, they're predicting the decisions. Or are you asserting a dynamic I'm not seeing here?
I mentioned above that I don't like either side of it. Quick explanations are:
* I don't like the idea of a society where a lot of people spend a lot of brainpower gambling on the future because that's just stupid, and also, it has all sorts of weird side effects like incentivizing things to happen for money instead of, like, because we want them to happen because they're good.
* I don't like the idea of decisions being made by [or, yes, I meant based on the forecasts of] anonymous hordes of gamblers, because, again, it conflates financial interests with everything else that matters, and I want to see society driven more by leadership and decision making and less by mindlessly trusting data that comes out of machines.
Not to mention the immense conflict of interest that starts to arise once money is coupled to outcomes because, after all, the money can also start trying to influence those outcomes. It's not like there hasn't been cheating in literally every thing that anyone ever gambled on. But now, yay, you can gamble on _everything_, how fun.
I don't see why anybody would _want_ this. It is vaguely appealing in a technocratic sense: how cool, we can build a system that predicts things whose inputs are money and time. But I can't imagine a world where it is implemented not being worse for it.
I guess I'm confused, in that you seem to believe that decisions are in any way not connected to financial incentives now. But leaving that cynicism aside, do you think that if we have prediction markets, people will decide things based on the markets' forecasts? That's not how prediction markets work, and if they started to, wouldn't that just break prediction markets, since who's going to bet on something that will be decided in a way that depends on the bet? That's more like voting shares than prediction markets, which also happens, but is already a mechanism in our society.
" it has all sorts of weird side effects like incentivizing things to happen for money instead of, like, because we want them to happen because they're good." Evidence? Again, this ties to your contention that prediction markets are determinant. What is the evidence for that?
Maybe I'm misunderstanding a large chunk of how prediction markets function.
I think if prediction markets become a major part of our societal discourse, i.e. like the stock market, then yes I do think government officials and other decision makers will be influenced by those predictions. And since those predictions are financial in nature, it means the decisions by those people will be influenced by financial incentives.
I feel like the whole point of prediction markets is that they are a social machine for producing good predictions; if the only point of that was "just to know, for fun" then they would not be very interesting to anybody. The point, I've assumed, is to get them big enough that their predictions are _accurate_, because they're aggregating over all the knowledge in the market, and then, ideally, use those predictions for something (such as corporate- or government- decision-making).
But maybe _I'm_ missing a large chunk of how prediction markets function and just assuming that is the unstated goal of the whole project. I've certainly just sorta assumed this is where everyone's coming from.
It seems to me like you're drastically overestimating the quality of the decisions that emerge from "leadership and [human] decision making" in the absence of hard data.
Personally -- and this is maybe a kooky take that I couldn't convince anyone of, it's just a feeling really -- I think people dramatically underestimate the quality of _good_ leadership, because there is so little of it around to see as example.
Would you also abolish the stock markets and commodity futures markets?
Not abolish, but yeah massively reform, probably. They seem pretty fucked.
We have had plenty of societies driven by leadership and decision making. Unfortunately these have been totalitarian. Democracy is effectively a prediction market, as the voters gamble on which party will be the best option. This leads to inconsistent leadership and tends to leave decision making in the hands of individuals. It is clear, however, that democracies outperform totalitarian models on just about every useful metric. Perhaps that horde of gamblers is going to have better ideas than the leaders and decision makers, whose qualifications for such roles are not actually testable.
Strongly disagree with the analogy between democracy and prediction markets, because voters pay nothing individually for being wrong. The feedback loop is much weaker in democracy than in prediction markets.
> We have had plenty of societies driven by leadership and decision making. Unfortunately these have been totalitarian.
This seems very incorrect. All societies are driven by leadership and decision-making. The quality and degree of those things (and the moral compass of the leaders) varies, but it is absurd to say that having leadership means having totalitarianism.
>"I don't like the idea of a society where a lot of people spend a lot of brainpower gambling on the future"
1. We already live in that society due to the existence of stock markets. The countries that allow stock markets to exist are a lot better off than the countries that don't.
2. Whether or not prediction markets are allowed, people will spend lots of time reading and writing speculation about the future, such as in newspaper op-eds. Prediction markets will make all that speculation much more accountable and accurate.
3. Without prediction markets, media elites can use their platforms to make false predictions with impunity and dishonestly twist politics towards the preferences of their class. The truth is more egalitarian than media ownership structure. Prediction markets are a powerful way to speak truth to power.
"I don't like the idea of a society where a lot of people spend a lot of brainpower gambling on the future"
that's what all decision-making is, though
concerns about whether money is a good way to do that are another matter...
One objection I can think of is that one might not want to base policy on something so easily manipulated by spending cash. (Not that other stuff we base policy on, like academic opinion, can't also be manipulated by spending cash.)
Suppose an evil version of Scott would like indoor gatherings to be permitted at a certain place and time. The obvious way to do this would be to spend cash on the question "will bad stuff happen if we allow indoor gatherings?", lowering the odds as estimated by the market by, say, 10%.
Ideally, a superforecaster would notice this gap and invest in the opposite side, but practically there are some limitations. Perhaps the market takes a long time to resolve, and the ROI is not worth it. (Are leveraged prediction markets a thing?) A rough cost sketch follows after this comment.
This becomes especially bad in case of self-fulfilling prophecies, because then even the financial punishment might not apply.
Suppose both the Afghan government (as of 2021) and the Taliban are firm believers in prediction markets. Someone willing to influence the outcome might simply place a huge bet on their favorite side, knowing that the other side will most likely not fight too hard against terrible odds.
(I would argue that in that case, the natural equilibrium would not be that the prediction market is trusted 100%, but less, so that a prediction would not easily become self-fulfilling. In general, it is not clear to me that such feedback loops do not appear. If you ran a prediction market on the outcome of a Keynesian beauty contest, and the judges knew about the market prediction, they might very well go along with them.)
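To put rough numbers on the manipulation scenario above, here is a hedged back-of-the-envelope sketch. It assumes, purely for illustration, that the market runs on an LMSR-style automated market maker with liquidity parameter b; every number below is invented and only the scaling matters.

```python
import math

# Illustrative only: the single structural assumption is an LMSR-style
# market maker with liquidity parameter b. All numbers are made up.

def cost_to_move(p_from: float, p_to: float, b: float) -> float:
    """Cost to push the market's YES probability from p_from to p_to."""
    if p_to > p_from:                       # buying YES
        return b * math.log((1 - p_from) / (1 - p_to))
    return b * math.log(p_from / p_to)      # buying NO

def yes_shares_received(p_from: float, p_to: float, b: float) -> float:
    """YES shares held after pushing the price up from p_from to p_to."""
    return b * math.log((p_to / (1 - p_to)) / (p_from / (1 - p_from)))

b = 10_000.0  # hypothetical liquidity parameter

# The manipulator pushes "bad stuff will happen" from 50% down to 40%.
manipulation_cost = cost_to_move(0.50, 0.40, b)

# A corrector who believes 50% is right buys YES back up to 50%
# and has to hold the position until resolution.
correction_cost = cost_to_move(0.40, 0.50, b)
shares = yes_shares_received(0.40, 0.50, b)
expected_profit = 0.50 * shares - correction_cost

print(f"manipulator pays            ~{manipulation_cost:,.0f}")
print(f"corrector pays              ~{correction_cost:,.0f}")
print(f"corrector's expected profit ~{expected_profit:,.0f} "
      f"({expected_profit / correction_cost:.0%} expected return, realized only at resolution)")
```

On these toy numbers, the manipulator's cost scales directly with liquidity, while the corrector's edge is real but modest and locked up until the market resolves, which is exactly the ROI worry raised above.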
I guess I see your point, but this seems way more expensive than just buying politicians, especially when the very act of gaming it this way would (I predict... see what I did there?) get priced into the market itself. If a prediction market were skewed this way, I genuinely wonder whether it wouldn't be evident, and takers on the other side of the bet would be hard to find at the proposed odds.
Again, I'm really not fluent in prediction markets, but these critiques seem (perhaps naively) to be self-correcting, or at least pushing directly against what such markets are supposed to do... and if that happens, they'd cease to be relevant quickly. At least that's my sense of it.
Plausible deniability if anti-corruption laws become more strict?
A tax on lying is better than no tax on lying. The latter is what we currently have in the media without prediction markets. If you want to lie the country into a war in Iraq, you don't even need to spend a zillion dollars manipulating prediction markets! You just need to know the right people at major newspapers.
If there are huge, liquid prediction markets, and you "know the right people at major newspapers", then wouldn't you place your bets, proceed as before, with the same outcome?
...except that since you would gain a zillion dollars, you wouldn't need any motivation other than money, such as ideology, loyalty, whatever.
Another way to phrase it is, if there is a tax on lying, which incents people to lie less, why would they respond by conforming their speech to reality rather than vice versa? Cf. Karl Rove's famous quote "when we act, we create our own reality."
As we see with existing markets, if the market isn't doing what you want it to, it's very easy to convince people it's being manipulated by a conspiracy of traders.
"Prediction markets are repeatedly shown to be the most accurate way to predict the future."
Where has this been repeatedly shown?
In prediction markets?
No, seriously, I want to know the answer to this question. Have prediction markets so far produced surprising odds for events, contra public opinion, *and been proven right* reliably?
And just to be clear, I asked the question in hope of a serious answer one way or another - not that I object to Carl P offering a snarky answer in the meantime.
Huh. So if I want to know whether Apophis will hit the Earth sometime in the 2100s, or whether a new nasty COVID variant will emerge, I'm best off getting a load of people to bet on the outcome and taking note of where the odds fall? That's great news. It's sure a lot easier than trying to integrate Newton's equations or sequencing a crapton of viral RNA.
It's not the problem of generating knowledge/predictions that you're trying to solve with prediction markets; it's the problem of _aggregating_ knowledge/predictions in a way that gives you a good sense of which knowledge/predictions you can have faith in.
Right now, it's hard to say which expert has put how much work into their prediction, and whether they're making the prediction to be right or to score political points or reputation points or whatever other points. The expert, however, knows. If they back up their opinion with a bet of their own money that pays out only if their prediction is right, you get a sense of their degree of confidence. In a market, over time, the aggregation of such bets should give you the best predictions. That's (my understanding of) the theory of it.
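To illustrate the aggregation idea, here is a toy formula, not how any real market actually computes its price: weight each forecaster's stated probability by the stake they are willing to put behind it (here by averaging log-odds weighted by stake). The names and numbers are invented.

```python
import math

# Toy illustration only: aggregate forecasts by weighting each forecaster's
# log-odds by the stake they put behind them. A real market does this
# implicitly through trading rather than with an explicit formula.

def stake_weighted_probability(forecasts: list[tuple[float, float]]) -> float:
    """forecasts: list of (probability, stake) pairs."""
    total_stake = sum(stake for _, stake in forecasts)
    weighted_log_odds = sum(
        stake * math.log(p / (1 - p)) for p, stake in forecasts
    ) / total_stake
    return 1 / (1 + math.exp(-weighted_log_odds))

forecasts = [
    (0.90, 10.0),    # confident pundit, small stake
    (0.55, 500.0),   # cautious insider, large stake
    (0.70, 100.0),   # somewhere in between
]
print(round(stake_weighted_probability(forecasts), 3))  # ~0.584
```

The effect is the same one the comment describes: confidence backed by money moves the aggregate more than cheap talk does.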