Thanks, that's great and exactly the kind of thing I was hoping to learn about by posting this.
It looks like the site only has thirty markets, most of them are boring, doesn't let users trivially start new markets, and breaks when I try to sort by category - but it's a good start and proof of concept.
Yeah, I haven't used Augur, but I've used Polymarket. Given the description of Augur as "nonfunctional or so minimally functional as to be useless", Polymarket is probably a better choice.
The PredictIt markets on the presidential race were horrendously bad. The primaries were the worst, as there was a lot of genuine uncertainty about the outcome until early March, and people buying into PredictIt seemed to bet heavily on the most interesting potential outcomes (Sanders, Bloomberg, etc.) while the price on Biden (the boring outcome) was always low and plummeted post-Iowa.
It can be fun to look at prediction markets, but I have not so far seen any reason to actually act on their predictions. They can change drastically as time goes on. Even if they eventually converge to something close to the actual outcome, that's not really what you want. One would like to be able to base action on a reliable prediction well beforehand (not counting speculation activity here), and that does not seem to be what I have seen.
As for Polymarket, I haven't found any good write-ups as to how exactly they interface with the blockchain layer. Compare with the Gnosis and Augur whitepapers for an example of what this typically looks like:
That makes me a tiny bit worried about whether or not they're "truly" decentralized and trustless. But I'm not an expert in the area, so this could be just FUD on my part. I'd be happy to be proven wrong here.
I use Polymarket, and looked into their tech a bit (I'm the CTO of a blockchain company). Short answer is your funds are safe (not in their control), but there is still a bit of centralization for now. They are built on Matic, which is a "layer 2 scaling" solution for ETH, meaning it's one of the numerous companies that use some cool cryptography to drastically reduce tx cost and increase speed. Polymarket also took the path of essentially putting a wallet in your browser based on your email. They don't have access to the private keys though. No one does except you actually. They only went this route so they could create a cleaner, more "familiar" experience of using email and password. (You can read more about all this on their FAQ: https://polymarket.com/faq)
The main point of centralization right now is that they have a "Markets Integrity Committee" -- which is just the people at the company -- that ultimately decides the outcome of the markets. IMHO, this is fine for now. You can try to decentralize this part later. They're just trying to get a working product going, and I think that's the right way to go at these early stages. Their experience is much cleaner/faster/cheaper than Augur. They can add everything else once they nail that part.
Why is it important to decentralize the outcome-deciding process? As far as I know, most interesting predictions are of the type you can't automate the verdict on (with the stock market as a notable exception, but the market for stock predictions is the stock market itself), so it's either the company, the traders, or an independent committee that would decide the outcome. I see the case for an independent committee, but letting the traders decide what the outcome was sounds completely bonkers.
The idea, as far as I understand it, is that if bets that one outcome "will" happen suddenly skyrocket, that implies the bet has suddenly become a sure thing, and presumably the question is then considered decided.
So no individual person declares an outcome, it's decided by the presumably correct answer suddenly becoming a super-bright Schelling point. This sounds far from foolproof to me just writing this out, though...
It's important because otherwise the system is prone to fraud, especially in a fully permissionless prediction market. The central committee could, for example, place a bunch of anonymous bets on X happening, and then simply declare that X happened to take other people's money.
But remember, "decentralize" doesn't mean "automate". What you really want is a trustworthy way to decide the outcome that doesn't give too much power to any one entity. So one way to do this is to have a group of, say, 10 random individuals all secretly cast a ballot for what the "correct" outcome is. If your implementation is sound (i.e., you ensure the individuals don't know who the others are, and they have something to lose), then you can get them to vote on the outcome so that they win if they vote with the plurality and lose if they vote against it. This is known as a "Schelling" game, because given no other information and no communication, the natural point for people to land on when asked "who won X election" is to just vote the truth.
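The incentive structure described above can be sketched in a few lines. This is a hypothetical toy model, not any real protocol's implementation: each anonymous reporter stakes an assumed bond, votes secretly, and the bonds of minority voters are slashed and shared among the plurality.

```python
# Toy sketch of a Schelling-game outcome vote (illustrative only):
# plurality voters get their bond back plus a share of slashed bonds.
from collections import Counter

def settle_schelling_vote(votes, bond=10.0):
    """votes: dict mapping reporter id -> reported outcome.
    Returns (winning outcome, dict of payouts per reporter)."""
    tally = Counter(votes.values())
    winner, _ = tally.most_common(1)[0]            # plurality outcome
    winners = [r for r, v in votes.items() if v == winner]
    losers = [r for r in votes if r not in winners]
    reward = bond * len(losers) / len(winners)     # slashed bonds shared out
    payouts = {r: bond + reward for r in winners}
    payouts.update({r: 0.0 for r in losers})
    return winner, payouts

outcome, payouts = settle_schelling_vote(
    {"a": "YES", "b": "YES", "c": "YES", "d": "NO"})
# Three honest voters each recover their bond plus a third of the
# slashed bond; the lone dissenter loses their stake.
```

The point of the secrecy and the stake is that, absent coordination, voting the truth is the best strategy, since the truth is the one answer everyone can independently converge on.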
I believe Augur does something like this. Polymarket may eventually move to it too. But it's slower and significantly more complicated to build, so I think they just didn't do it for V1.
Has there been a work-up done yet about the regulatory hurdles to robust prediction markets and a road map to their navigation? I am feeling increasingly optimistic about the potential social benefits and would be curious to learn more about how to make the dream a reality
Also curious whether there are identified biases with the internet fans. They seem more likely than the average expert to pick up trends like Gamestonk (or even the early rise of Pres. Trump's trajectory), but I would bet there would be at least commensurate shortfalls in an internet-heavy-predictor market
Robin Hanson would probably know this, or who would know this. From what I remember, there's basically no interest in prediction markets internal to a company or organization. Public prediction markets are illegal and there's no political coalition willing to fight for them. Building a political coalition to legalize prediction markets seems like the big 'next step' for making them a reality in the sense I expect you meant.
In the sense of even ever really trying them at all, you could build a company and use a prediction market internally. Sadly, that too seems exceedingly hard to do.
It's hard but much easier from a regulatory perspective if all you're doing is rewarding your employees with cash for predicting things about their work. I'm working in this space. Hard but not impossible. Problems are mostly political when it turns out people bet against their bosses success. Pseudonymity helps.
I'd like to know why it's still illegal since the US federal ban on sports gambling was overturned. It's a state-level decision at that point and most states have either legalized it or have legislation pending.
I'm starting to suspect that this link only works in the US or something, as I've seen it shared a couple of times and it's always just been Times New Roman 'Sorry, this content is not available'.
It seems to be about location. I tried with a VPN from Australia, Denmark, Argentina, and failed. But from the US it worked both on desktop and mobile.
I wonder how much of the lack of performance of prediction markets is due to the fact that a) it still feels a bit like gambling, which isn't an easy mental model to get over, b) there just isn't enough liquidity, as it doesn't seem embedded in regular analysis life, and c) there still isn't a Robinhood equivalent that makes this just... easy. The first two require behaviour changes, where we all act like Tetlock's forecasters, with the third enabling the first two.
At least for me the Schelling point I'm stuck at is to just not use it at all.
All that said it seems like the adoption would be like a step function. Once it's visible enough and liquid enough then the answers that come from the markets become useful, which drives a positive feedback loop of even more adoption and its integration into 538 and Vox.
I don't think something feeling like gambling explains it being slow to catch on; real gambling remains popular, as do day trading, loot boxes, etc. Realistically this should be a point in favor of it getting popular.
I think of the lack of liquidity as a regulatory problem. If you could make real money (as opposed to Fake Internet Points or the small amounts of money allowed on PredictIt), it would be worth people's time to do this as a full-time job, the same way some people do finance/investing as a full-time job. I agree there's a role for making things more liquid as part of making the case to the government to deregulate it, which is one reason I'm trying to signal-boost these so hard.
I find PredictIt really, really easy to use (Metaculus not so much). I can't remember if there was a learning curve. I'm curious if you feel like it doesn't make sense to you and you wouldn't intuitively know how to use it. I've also never tried Robinhood, so you might have to clue me in if they have something special beyond just an intuitive UI.
Well, gambling here in the UK is around a £15B business, growing at around 5-6%, and it's legal. Yet it's still sporadic and event-based, except for a few who are more enthused about doing it. You still find very, very few people doing it as a profession.
Re usage, I do find it easy. But it's not as directly "fun" as RH is. Feels more like using my Schwab to trade, or Bloomberg when I used to run a fund. RH is highly intuitive but it's not just that - it's more that it's purpose built to make it fun and not seem like work.
All that said, I'd love a world where they existed, because I'd know where consensus opinion was on key topics. As it is, I'm still leaning towards this being a hard social problem (see the aforementioned UK sports point). People have to have incentives beyond money (though that's needed) for them to spend time and money to do it. I'd say though that, considering where I'm typing this, we're probably those people.
Would def be intrigued to see someone try. FHI is a tad too academic to make this more mass-market, would be my guess. You really need someone who can bring more people in (like you?). Ideally 538 starts one and as a bonus we get to replace CNN's needle.
I'm trying in a different regulatory environment, it's one of those nobody-wants-to-move-first-in-case-they-lose-face things. The gambling markets where I am are very insular.
I think you're probably looking at the spread-betting firms to do this: There are 100+ but almost all of them only do financial markets; a few do some sports as well.
Sporting Index does some political spread betting, but not a broader prediction market.
I suspect that part of it is the regulator (FCA) requires very precise terms for spread contracts, and that's a lot of work for not enough customers.
The bookies (Betfair is the big one for non-sports, though both William Hill and Ladbrokes do some business there) do some political business, but they had a bunch of issues with the US election (they didn't define their terms of when to settle very clearly and got a lot of complaints; some settled on the first network to call, others waited for the electoral college to meet). They'll still do it, because it's very profitable for them, but also their regulator (the Gambling Commission) gives them a lot more discretion in settling contracts, unlike the FCA.
But that means fixed-odds betting only (though Betfair Exchange and the similar markets have tradable bets at any odds you choose, which is almost like a true market).
> gambling here in the UK [...] You still find very very few people doing it as a profession.
That’s because bookies have the right to kick out anyone they want, and they ruthlessly exercise that right on any account that seems to be winning too much. Professional gamblers have to use all sorts of tricks with multiple bank accounts and parasitic apps to avoid notice, and are at the mercy of a change in the bookie’s security.
The Hollywood Stock Exchange for predicting the box office revenue of upcoming or potential movies in return for Internet Points has been around for a couple of decades. Cantor Fitzgerald wanted to turn it into a real futures market using real money, but the government said no.
One issue was that much of what makes HSX useful is all the insider trading. People with jobs in Hollywood constantly talk about which movies in production looks like they will be a hit from the daily rushes and which ones are shaping up to be duds. Nobody wants the government to have to police this kind of useful and inevitable gossip to shore up the purity of a financial market that the world doesn't need.
Modern baseball statistical analysis emerged in the mid-1970s as an esoteric hobby. By the 1980s Bill James could make a living off publishing baseball statistics books. By the 1990s, there were thousands of avid modern analysts, most doing it for free, with perhaps a dozen (?) making a living off it. Around 2000, baseball teams started to hire the hobbyists, and today perhaps 100 or more are employed by baseball front offices.
Something similar could happen with world event forecasters, but it would probably take a popular contest to emerge for interest to centralize around, perhaps a TV show with teams competing to update their predictions over the course of a year.
Baseball is on television, but contests for forecasting world events are not. So because there are a huge number of baseball fans, there are some high end baseball fans who are into high brow statistical analysis. Because forecasting isn't on TV, there aren't many forecasting fans.
Dream up a good format for a popular TV game show for expert forecasters, and many of the other things you want for prediction formats would follow.
That's exactly the issue. People don't just trade because of financial incentive given. It has to become part of your daily life. A world where you use prediction markets inside companies to forecast production and within govt to predict events might seem great theoretically, but there's no clear incentive ladder to make it happen.
Maybe we need Fantasy Forecasting games online, the way football fans have Fantasy Football.
The time span question is a tough one. NFL fans love fantasy football since they find out every week whether they won or lost. But historical events play out over longer time periods.
Maybe Covid forecasting would be enough of a rollercoaster ride to interest forecasters: You could predict the number of hospitalized cases for each of the 50 states for the next 52 weeks and win points each week, and update your forecasts.
I'd watch it, or at least have it on while I do other things. But I'm going to be part of a much smaller sample here. All that said, the internet did do a better job of Covid forecasting, though most did it with basic extrapolation rather than through any hidden local information, as Weyl or Scott would say.
At times I feel we overestimate the pull of gamification or financial incentives to drive behaviour en masse. They're great for marginal improvements, but for larger changes there needs to be a genuine want. One reason I'm in favour of more expert simulations, which prediction markets could perhaps augment (https://www.strangeloopcanon.com/p/simulating-understanding).
The lack of a Robinhood-like app seems like a benefit. The people who can be deterred by something like a complicated UI might be less likely to put in the research time necessary to be anything more than dumb money. Unless you're arguing they need more dumb money to act as bait for the smart money?
Actually dumb money (or at the very least, "inert" money betting for boring outcomes) is in fact probably needed to jump-start prediction markets. The Biden vs Trump betting market was very liquid and in fact at least partially fueled by dumb money on both sides (myself included, never again... even though I won).
I think that gambling (for most gamblers) is probably better modeled either as consumption (something people do for fun) or as irrational.
If it's true, it means that as long as most or lots of people come to prediction markets treating them like a casino, the amount of noise (bets made for fun or without understanding of the real odds) in the system would prevent the predictions from being any good for actual prediction.
In theory, "smart money" is supposed to bet against the gamblers and profit, but we can see it's not necessarily the case even in the stock market. See the whole GameStop conundrum.
So, to paraphrase Ostrom's law:
A market mechanism that doesn't work in practice shouldn't work in theory.
Or at least, we need to understand why the practice doesn't currently work before we try to scale it up and expect it to solve the as-yet-unclear problems.
Retail trading in the equity markets is around 25%. The share of gamblers in it to win is c. 50%, from the 2019 report I think the UK did on the topic.
My question remains that people underestimate how liquid the financial markets actually are. For prediction markets to work well they'll need comparable liquidity, which nothing else has had. Despite rabid fandom and worldwide love, sports isn't there yet. I really doubt predicting world events will become our next hobby.
I think rather the best use of the tool is likely to be for expert calibration, perhaps something that someone like 538 might do to calibrate themselves better. For other world events, my bet would be on increasing our understanding of the actual event through better modeling/simulations (wrote about it here: https://www.strangeloopcanon.com/p/simulating-understanding), and perhaps later using expert and layman inputs to better calibrate those.
Sports is there, I would say. There are a lot of large syndicates using algorithmic trading for sports betting so I think the market is pretty efficient.
The really interesting historical events are the ones that don't get anticipated much ahead of time, as I explained in my review of Philip Tetlock's book "Superforecasting."
If one could make a lot of money wagering on Covid deaths or on Covid mutations, that could either be an invitation to a bad (evil) actor or maybe a Netflix series.
Another thing that's great about metaculus is that it's very open and easy to get into. There is a tutorial and quiz where you can check how well-calibrated you are, and you can start answering questions and commenting, or even open your own questions, quite fast!
I wonder what kind of analysis has been done regarding which markets are better predictors. Metaculus has a track record page:
I'm both interested in and deeply confused by prediction markets as a concept. I'd love to see a tutorial about how these markets work mechanically, so I can feel confident placing money in them.
Examples of questions I have include: What am I actually buying when I buy a share? I believe Scott at one point said that a share pays out at some point if the prediction it corresponds to comes true, but where does that money come from? When a market first opens, who are you buying "fresh" stock from and how do they decide what to charge? Also, how can they run markets for complex questions, as opposed to just "yes/no" stuff?
These are the sorts of questions I've spent the last few years understanding the answers to for the "real" stock market, and there's both much more information published about those and much more incentive to invest time, and still very hard to understand. All the "you should care about prediction markets" stuff that has been shared with me so far is either very abstract ("they're just great, for reasons, trust me") or else incredibly in the weeds.
I've used predictit and am familiar with that system so I'll answer based on that.
Basically every market gets created with some rule specifying when the market ends. You can then attempt to purchase Yes/No stock. When you offer to buy a Yes at a price of $0.81, your demand can be met either by someone directly selling you a Yes for $0.81 or by someone offering to buy a No for $0.19. If the latter happens, there is now $1 more in the market, and one more No and one more Yes than there were before. Similar things occur when selling Yes/No stocks. When the market ends, it resolves to either a Yes or a No, and the winners get paid $1 per stock that they hold.
PredictIt also has fees, but we can ignore those for a basic understanding of the system.
You just have to notice that the exchange can implicitly convert every No ask at $X into a Yes bid at $(1-X), and every No bid at $X into a Yes ask at $(1-X). Then it can match orders as it would on any other bid/ask exchange (two queues sorted by price; if an incoming order would make the prices cross, there is a match and both orders are processed), all with respect to the Yes share.
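That conversion trick can be sketched in a few lines. This is an illustrative toy, not PredictIt's actual matching engine; the order format and function name are made up:

```python
# Normalize all orders into a single Yes book. A bid to buy No at $x
# is equivalent to an offer to sell Yes at $(1-x), and vice versa,
# because one Yes plus one No can always be minted from (or redeemed
# for) exactly $1.
def to_yes_book(orders):
    """orders: list of (side, outcome, price) tuples,
    e.g. ('buy', 'NO', 0.19). Returns (yes_bids, yes_asks)."""
    bids, asks = [], []
    for side, outcome, price in orders:
        if outcome == 'NO':
            side = 'sell' if side == 'buy' else 'buy'  # flip the side
            price = round(1.0 - price, 2)              # mirror the price
        (bids if side == 'buy' else asks).append(price)
    bids.sort(reverse=True)  # best (highest) bid first
    asks.sort()              # best (lowest) ask first
    return bids, asks

bids, asks = to_yes_book([('buy', 'YES', 0.81), ('buy', 'NO', 0.19)])
# The No bid at $0.19 shows up as a Yes ask at $0.81, so the two
# orders cross and a fresh Yes/No pair is minted with the combined $1.
```

Once everything lives in one book, standard price-time matching does the rest.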
Let's say someone opened a yes-no prediction market on Augur (say, for deciding whether there will be rain tomorrow), but there are no trades yet. Then anyone can take $1 and deposit it into the market to get 2 shares back: 1 "yes" share and 1 "no" share. When the market *resolves* on the expiration date of the market, then either the "yes" share or the "no" share (depending on which outcome came true) can be traded in to get that $1 back.
Now, back to the beginning: you just converted $1 into two shares. But, you actually think that the market will resolve with "no", so it's a good idea to sell the "yes" share, because you think it will be worthless at the end. Of course, you try to get as high a price for it as possible. If someone thinks that the "yes" outcome is at least 40% likely, they should be willing to buy the share for $0.40. That's because the expected value is 40%*$1=$0.40 (the probability they assign to the "yes" outcome multiplied by the reward for the case where "yes" is the real outcome). So you see, after a lot of trading happens, the price of the "yes" share gives you directly the probability that the market assigns to the "yes" outcome.
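The mint-and-sell arithmetic above can be worked through numerically. A minimal sketch, assuming the $1-per-pair collateral described for Augur-style markets (the function name is my own):

```python
# Fair price of a Yes share under your subjective probability:
# expected value = P(yes) * payout, with a $1 payout per winning share.
def expected_value(p_yes, payout=1.0):
    return p_yes * payout

# You deposit $1 and receive one Yes and one No share. Believing
# "no rain", you sell the Yes share. A buyer who thinks rain is 40%
# likely should be willing to pay up to:
price = expected_value(0.40)      # $0.40

# Your effective cost for the No share you kept is then:
cost_of_no = 1.0 - price          # $0.60
# ...which is consistent with you assigning at least 60% to "no".
# Hence, after enough trading, the Yes price reads directly as the
# market's probability for the Yes outcome.
```

This is why a well-traded market's share price doubles as a probability estimate.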
One problem that I see with the reasoning here is that there is still no evidence that the vaccines interrupt transmission of this virus. I was hoping that such evidence would have been published by now, and I would have thought that a vaccine that is 95% effective at preventing symptomatic disease would do so, so I am becoming worried. If this holds up, it would mean that some significant percentage of vaccinated patients can carry and transmit the virus. Your description seems to indicate that vaccinated patients cannot be infected and transmit the virus. Is there an indication of what directions these models would go if vaccinated patients can transmit the virus?
I personally am curious about your priors here. Based on 3-4 interviews with vaccine experts, they all agree that while it's TECHNICALLY possible for a vaccine to allow transmission, it would, at worst, be at the same transmission rate as an asymptomatic carrier. Most experts say the odds of an asymptomatic carrier transmitting are 85% lower than a symptomatic carrier's.
So, if the vaccine is 95% effective, but you are worried that carriers could still transmit the virus, that's WORST CASE SCENARIO 85% * 95% = 80.75% effectiveness.
So, while this situation would be bad, I'm curious as to why this point is worrisome. Are you worried about it only being 80% effective, or are you worried about a different factor that I'm not considering?
I am worried that vaccinated patients may be a hazard to non-vaccinated patients. That is one of the reasons why we are being told to continue being cautious post-vaccination. If vaccinated patients can carry and transmit the virus, even at low rates, the current generation of vaccines may not be able to end the pandemic.
My big question is why is it taking so long to get the evidence? It should be fairly easy to show that these vaccines interrupt transmission of the virus with tens of thousands of vaccinated patients in the clinical trials. That’s why I’m worried. I was expecting a publication by now.
All is not lost though. There is an intranasal vaccine in development that clearly interrupts transmission of the virus in animals. If that translates to humans, then ending the pandemic might require revaccinating everyone, which is obviously not logistically desirable.
It may be that these parenteral vaccines don’t elicit robust mucosal immunity. We just don’t know yet.
I hope that gives you an idea of my line of thinking.
In rationalist and left-leaning circles, the "concern" of vaccinated spread is extremely common. Maybe I am sensitive to it, but it has popped up frequently for me on all platforms.
I would like to formally ask you, and others reading this to stop bringing up the argument of vaccinated spread, as I believe it is an informational hazard. This would be true if spreading the knowledge hurt more people than it helped.
If people distrust the vaccines ability to stop the coronavirus, that will cause fewer people to get it and fewer people to "go back to normal" once vaccinated, due to fear, causing additional secondary effects of depression, slow economy, etc.
Most epidemiologists say there is only a low chance of some low level of spread of the virus from vaccinated people.
"Ending the pandemic" might not mean eradication of the virus. But at some point, like in the united states pre-February, there becomes no reason to social distance or wear masks. Those that are at-risk will be vaccinated and herd immunity will do the rest.
I think a reason it's taking so long to get the evidence is a classic example of Russell's teapot. If there were a vaccinated person who caused an outbreak, it would be easy to prove that it DOES spread. But it's almost impossible to prove that a group of people does NOT spread the outbreak. Even the best countries usually cannot pinpoint where outbreaks occur; that's why "community spread" accounts for such a large proportion of causes. However, I believe Pfizer is working on a study, and no doubt the study will say that virus spread is much lower, with large margin-of-error bars. Would that help your worry?
It actually doesn't seem that easy to determine if the vaccines stop spread or not. How would you know, except through challenge trials (difficulties there are well-litigated) or animal trials (obviously limited applicability)?
In order to prove that vaccinated people can spread covid, you'd need to find someone who caught it, was around vaccinated people, and then prove that no one else they interacted with could have given it to them. Even statistical proof (as opposed to any single confirmed case) would be pretty difficult to separate from all the noise. Unless the chance of unvaccinated spread is significantly higher than the chance of inadvertent spread through any of the myriad unavoidable sources, how would you ever know?
For comparison, we actually don't know if it's possible to spread HIV through oral sex! Scientists presume it's possible, but no one has ever gotten HIV while being able to credibly assert that they never engaged in any higher risk behavior. The chance of oral spread is too low, and the number of people engaging in oral sex with the HIV+ but not in any other risky behavior is too low.
Personally, I wouldn't have expected evidence to be published by now about that. It is easy to track a vaccinated and control group for infections. But how would you track how many people they infect? Also, the incentives are much lower.
The natural experiment is soon to come. Israel has about half of its population vaccinated. If all works well, soon we should start seeing a very steep decline in deaths, though not so much in cases (even if your fears don't pan out). But if in a month or two (ballparking here) the cases don't start going down dramatically there, that's when I would start to worry.
I think the problem with the Israel natural experiment is that although we can see how many cases there are per day, and use that to infer some sort of R, we know from observing other places that R has often changed in ways that are hard to understand. (Why did it get higher in the Dakotas starting in August/September, and stay high for three or four months, and then suddenly fall, just as it was rising everywhere else in the country?) If R was generally fairly stable, then a sudden change in R around vaccination would indicate that the vaccine was preventing transmission. And if R stayed constant but the total number of new cases fell by half, then we would know that the vaccine was preventing cases but not preventing transmission.
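The distinction drawn above (a vaccine that cuts R versus one that only cuts cases) can be illustrated with a toy projection. All numbers here are made up for illustration; this is not an epidemiological model:

```python
# Compare two stylized scenarios against an unvaccinated baseline:
# (a) vaccine blocks transmission -> R itself drops;
# (b) vaccine blocks disease only -> cases halve, but R is unchanged.
def project_cases(initial, r, generations):
    """Naive geometric growth: each generation multiplies cases by R."""
    cases = [initial]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

no_vaccine     = project_cases(1000, r=1.1, generations=4)
blocks_spread  = project_cases(1000, r=1.1 * 0.5, generations=4)  # R halved
blocks_disease = [c * 0.5 for c in no_vaccine]   # half the cases, same R

# blocks_spread shrinks toward zero generation after generation, while
# blocks_disease stays a constant fraction of the baseline curve.
```

This is why a sudden, sustained change in R around the vaccination campaign, rather than a one-time drop in case counts, would be the signature of blocked transmission; the difficulty, as noted above, is that R moves around for poorly understood reasons anyway.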
The expectation is that new cases and hospitalizations, eventually deaths, will decline as a share of all cases and hospitalizations first in Israel among those over 60 and in towns where most were vaccinated. Back on January 21, I blogged that I'd be worried if we didn't have evidence of that on February 1.
Today, February 1, the first solid evidence on a national scale that then groups most vaccinated in December were doing better emerged:
"Before 2023, will the United States CDC recommend that those who have already been vaccinated for SARS-CoV-2 (COVID-19) be vaccinated again due to a mutation in the virus?"
I would put the likely answer to the first part of that question at nearly 100%, although not for the specific reason in the latter clause of the question.
Infection and vaccination do not seem to confer permanent immunity, and SARS-CoV-2 is almost certain to become endemic, like its close relative "the common cold" and like influenza. If we're LUCKY, vaccination for COVID-19 will become an annual event, as with the flu vaccine. If we're less lucky, six months may be the norm.
It may be necessary for already vaccinated patients to receive a booster specific to the B1.351 South Africa variant. This is being actively studied, and Moderna already has a clinical trial in progress.
I don't think the evidence actually supports the idea that COVID immunity is measurably waning yet. (Permanent immunity is of course a premature claim.)
First of all, we wouldn't normally expect a flu-like annual reactivation with COVID. The flu has that because the flu is specifically optimized for large-scale mutations that break immunity (basically by shuffling large chunks of the genome around). COVID doesn't have that design, so we'd a priori expect a more typical immunity duration.
Second, the studies which have been broadly spread about declining antibody levels are merely confirming the expected. Antibody levels for any disease drop drastically after infection has been over for a while. Our long term immunity is not based on continuous antibody levels (which would be rather inefficient and increase our risk of autoimmune issues) but rather based on Memory B cells which store the design for the antibody. If those cells are triggered in the future, your body will rapidly mass-produce the antibodies again.
Hypermind is pretty good. Like Metaculus, you trade in fake internet money. But Hypermind has some institutions/people who pay to put markets up, and that money is split between the people who made the most fake money on that particular market. Hypermind then will send you an Amazon gift card once you get to 15 Euros. So there's an extra incentive to spend your fake money wisely.
Philip Tetlock who wrote the Superforecasting book is also behind https://www.gjopen.com which also uses fake internet points. The questions are more varied and there's more user interaction.
He also got a group of people that were the best "superforecasters" and their predictions are now at https://goodjudgment.io/superforecasts/. I've been following it pretty closely and they've been tracking the question about vaccines better than Metaculus.
One of Tetlock's points is that people get better at forecasting from public feedback of who was right in the past. I'd like to be able to filter prediction market forecasts by, say, who performed well on similar questions in the past.
In the annual forecasting contest that I've run for the past 6 years, participants love this horse-race aspect. For the first year, I stupidly gave everyone code names so that they could make honest forecasts (i.e., without concern for what their guesses might be signaling), and that just annoyed people. The group-average forecast of superforecasters (people who have competed more than once and always beaten the average) does quite well in Brier score terms. https://braff.co/advice/f/announcing-the-2021-narcissist-forecasting-contest
My intuitive concern about this concept is that A) useful insights will tend to get Dunning-Krugered into oblivion and B) people's betting choices are as much a matter of personal risk taking tendencies and disposable income as any confidence about what they're betting on.
Insofar as large parts of the financial industry are essentially prediction markets, I feel like their behavior generally vindicates these concerns.
I don't understand what you mean by [A] – can you give a (hypothetical) example of that?
A perhaps under-mentioned part of [B] is that a long-term prediction market, i.e. one that persists for a 'long time', is one in which longer-term successful bettors can be both spotted and 'validated'. Yes, market participants can be wrong – but only to the extent that they still have something to lose. It's still a vast improvement in incentives over generic punditry!
By "Dunning-Krugered into oblivion", I mean that on any given question a large number of contributors will be people with low-information who believe they have high information for one reason or another, and that those people will swamp the predictive value of the market for a given question.
Two hypotheticals regarding the question of how many deaths there were going to be:
1) The extreme case of someone who genuinely believed that the pandemic was fake or planned, and was consequently going to go away after the election. They consider themselves high information, but in fact are working from false premises. We know these people are often willing to stake money on their views -- see the markets on Trump's election in the two months following the election.
2) (My personal danger) The armchair epidemiologist: most often someone well-educated who views themselves as excellent at research, follows experts on Twitter, reads Wikipedia articles on relevant topics, and keeps up with whatever Nate Silver says. This person is effectively a noisy proxy for "expert consensus", but with enough confidence in their research to invest money in their conclusions.
Coming back to the market metaphor: low information investors join the market all the time, in a continuous stream of people deciding to get into investing. Some of them have shallow pockets so they're gone quickly and replaced quickly, but others have deep pockets and will continue to pollute the market with ill-informed bets. The depth of a person's pockets -- or willingness to burn money! -- is not correlated with their accuracy.
To your final point: I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense. Accurate bets are predicated on knowledge relevant to the prediction being made, and it's impossible to have genuine expertise in a huge range of fields. Consequently, we should expect that an individual either stays in their lane and is successful, but only on the occasional issue that pings off their knowledge, or becomes overconfident in their predicting ability/intuition writ large and starts placing bets outside their knowledge area, leading to failures and a declining success rate.
>The depth of a person's pockets -- or willingness to burn money! -- is not correlated with their accuracy.
I think there might be some correlation there, although I don't know how significant we could expect it to be. That is, we might expect someone who does well to keep going more often than someone who doesn't, across both deep- and shallow-pocketed segments, and success will also tend to deepen shallow pockets.
>I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense.
It looks like such people do, in fact, exist; the famous "superforecasters", of course, but also some individuals on e.g. Metaculus that consistently do pretty well in various disparate areas.
Is there good evidence that there is such a thing as a person that is a "superforecaster" across a range of areas? Or is this like a "superspreader", where anyone can be it on one occasion, but not on another?
Seems like superforecasters are mainly people who are willing to spend a ton of time researching a large number of questions in depth. I'm sure there's an element of natural talent too. But if there were a decent amount of real money involved, I think subject area experts would be more likely to weigh in and a generalist strategy would be less viable. Professional forecasters would do best by specializing in one or a few areas where they become experts.
Tetlock's study was aimed at finding people who are good forecasters across a large number of short-term (one-year) world events. It's quite possible that somebody who specializes in forecasting a single subject matter (e.g., China-India border clashes) would do even better about their subject than Tetlock's generalist super-forecasters, but it's hard to get any kind of sample size over a short number of years about a single field in order to determine whether the single topic forecaster is truly prescient or just lucky.
In general, what Tetlock found about forecasting world events was the same thing that was discovered about analyzing baseball statistics in the last quarter of the 20th Century: there are a lot of smart guys out there who can beat the current crop of professional experts. Today, the professional experts at picking baseball players are now often guys who got their start as amateur sabermetricians. Perhaps in the future the CIA and the like will be staffed by former amateurs who showed their mettle in public contests.
These are all general arguments against _any_ kinds of (financial) markets working too.
But there's a lot of evidence that, even in practice, this isn't a systemic flaw despite there being lots of individual exceptions.
You're right that "The depth of a person's pockets – or willingness to burn money! – is not correlated with their accuracy." – but their long-term prediction market returns are very much correlated with their accuracy. It seems strictly better for inaccurate predictors to _pay_ for their wrong beliefs.
> I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense.
That's a sensible 'outside view' but it seems to be wrong – see the 'superforecasters' research for one big family of examples.
One of the major attractive features of prediction markets is that participants have to pay for their wrong beliefs. So, if individuals _are_ experts 'in their lane', then they might be able to move some specific markets if they have info to contribute. And if they're wrong about something in 'another lane' (or even their 'own lane'), they'll lose their bet and thus be _less_ able to influence the ongoing market in other predictions.
Markets, and 'price systems' more generally, definitely aren't perfect – that seems to be what you're arguing against. But there's a LOT of evidence that they do in fact work remarkably well, _especially_ relative to alternatives.
A is only a problem if people who don't know much about an issue are systematically biased in one direction for some reason. There are cases where that's true, but many more cases where it's not.
Low-information (but not zero-information) guesses tend to be centered on the correct answer, with high variance. Averaging hundreds of thousands of them can often give a pretty good idea of the correct centerpoint.
Doesn't always work, but neither does asking someone who claims to be an expert.
I doubt that the group of people likely to participate in prediction markets is anything like a uniform sample of the population, so I'm not at all on board with asserting they're unlikely to be systematically biased in one direction on any given issue.
I buy that low-but-not-zero guesses are likely to be centered on the correct answer when the relevant information is only on one axis (e.g. the value of the change in a jar). But most questions we actually care about are massively multidimensional. Take predictions about COVID deaths: knowledge factors include underlying biology of the virus, underlying effectiveness of the vaccines, government effectiveness in a region, social factors impacting behavior in a region, and a dozen others, and the answer is a complex mix of those numbers. In those multivariate conditions, should I still expect low-information guesses to average to a correct answer? Has that been studied?
The other factor is that "low" and "zero" information tend to imply that knowledge is zero-bounded, but it's not. Lots of people are making guesses/decisions based on factually false information. If the population's knowledge -- and, coupled to it, accuracy -- has an upper bound but no lower bound, I wouldn't expect very good results.
>I doubt that the group of people likely to participate in prediction markets is anything like a uniform sample of the population
Fair enough, I was talking about the general concept of aggregate predictions. I agree that the current markets will be of limited usefulness until a much broader population is engaged (and, as Scott says, until real money is on the line).
That said, we shouldn't expect that self-selected population to be systematically biased about *every* question, and we can probably observe what topics they tend to be off about, and in which direction, to compensate.
>In those multivariate conditions, should I still expect low-information guesses to average to a correct answer?
I'm pretty sure the Central Limit Theorem directly implies that the questions being hugely multivariate will make the answers *more* accurate, not less. After all, even if your intuitions are way off on one of the factors, the impact will be moderated by your intuitions on all the other factors, some of which may be biased in the opposite direction.
>Lots of people are making guesses/decisions based on factually false information.
Yes, but absent systematic bias across the entire population, they are doing this *in both directions*. Which is where averaging comes in to save the day.
Again, systematic bias is certainly possible, but it shouldn't be assumed a priori; it doesn't happen without a reason, and that reason is often either not there or easy enough to notice and correct for.
> I'm pretty sure the Central Limit Theorem directly implies that the questions being hugely multivariate will make the answers *more* accurate, not less. After all, even if your intuitions are way off on one of the factors, the impact will be moderated by your intuitions on all the other factors, some of which may be biased in the opposite direction.
That works if you're talking about a linear composition of multiple dimensions. But if the interaction of the dimensions is non-linear, then there's every reason to suppose that an average of noisy estimates will be biased. Just as a simple example, say that X is some variable that people are noisily distributed about a mean on, such that their average is an unbiased estimator of the truth. Then if we asked people to estimate X^2 instead of estimating X, the average of their guesses would be biased *above* the true value of X^2. (That's because if one person underestimates X by the same amount that someone else overestimates X, then they will underestimate X^2 by *less* than the other person overestimates X^2.)
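A quick simulation makes the squared-term bias concrete (a sketch with made-up numbers: true X = 10, guess noise with standard deviation 3):

```python
import random

random.seed(0)
TRUE_X = 10.0

# Each guesser estimates X with unbiased noise (assumed normal, sd = 3).
guesses_x = [random.gauss(TRUE_X, 3.0) for _ in range(100_000)]
guesses_x2 = [g * g for g in guesses_x]  # their implied estimates of X^2

mean_x = sum(guesses_x) / len(guesses_x)
mean_x2 = sum(guesses_x2) / len(guesses_x2)

print(mean_x)   # close to 10: averaging is unbiased for X itself
print(mean_x2)  # close to 109 (= 10^2 + 3^2): biased above the true X^2 = 100
```

The gap is exactly the variance of the guesses: for unbiased noise, the average of the squared guesses converges to X^2 plus the noise variance, never to X^2 itself.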
Sure, but doesn't that require them to think about X as affecting the outcome via X^2 when they make their estimate? Since they're just plugging these estimates into a vague sense in their heads, not into an actual equation where the ^2 term is visible.
I'd expect most people to *treat* most factors as linear when thinking about them, even if they're not actually.
The point is that it doesn't matter how people *think* of it - it matters how their errors are distributed. If errors are distributed linearly, then a straight average will be an unbiased estimator. If errors are distributed in some more complex way, then the straight average will usually be a biased estimator. In a higher dimensional problem, there are often more ways for errors to be distributed non-linearly.
I wonder if a site could strike a deal with regulators allowing a higher volume of trades if they only paid out based on users' average performance over a lot of predictions, rather than on individual trades.
That would make it much harder to win or lose money on luck alone, and the site would be more clearly a skill-based competition.
I don't think that would work either. And, reasonably, even with your proposed change, it would probably still be too similar to 'gambling' or 'investing' for them to have any reason to grant an exception or turn a blind eye.
I think maybe the best way around the legal prohibition is 'through' – i.e. create a fully compliant and regulated market. Or create 'prediction securities' and offer them via existing financial markets.
Is there any legal obstacle to creating prediction securities? What does it actually take to like, sell futures linked to GDP or something instead of a financial asset?
Kalshi has solved the regulation problem after a 2 year legal battle! The Exchange is set to launch relatively soon. Read more about the regulatory win here: https://kalshi.com/news/kalshi-designation
Re vaccine hesitancy, I haven't seen anyone talk about this but if I had been infected with COVID, I could imagine being hesitant to take a vaccine for some combination of reasons. (Do I need it? Do I need it before others? Is it worth any even slim risk, given that I already have antibodies?)
If 25% of the US has already had COVID, would we expect a lot of the hesitancy to come from that camp? Or maybe not? I haven't seen this discussed or studied but I'm curious about it.
That's a good point. The best number I can find says about 9% of people have been infected, but I don't know how many of those people know it (or how many people who haven't think they have).
In my job I'm eligible for the second tier, and also have a role signing up my organization's employees for both first and second tier distribution. I also happen to have gotten COVID a few months ago.
I turned down the vaccine because I am relatively young and healthy, the vaccine is in short supply with high demand in this area, and I already have antibodies.
I have to admit that I am also hedging against a low chance of a significant complication (side effects, etc.) from the vaccine. I'm not an anti-vaxxer, but mistakes have been known to happen with such things. This reason would not cause me to avoid it without the other conditions above.
I find the concern about this issue likely overblown in general. My null-hypothesis is that the people who want to wait and see are telling the truth, and that either 1) they're right to be hesitant because concerns will come to light in the next few months, or 2) the vaccine is fine and good and so they'll be convinced by the time it matters.
All the data I've seen seems to show that we have no shortage of "willing arms," so in order to be concerned I'd want, at the very least, some polling that actually breaks people down by e.g. *how long* they want to wait, before I get too worried that the "willing arm" curve will ever dip below the "available shots" curve.
Wanting to wait and see doesn't seem particularly irrational to me, so why assume that these people are irrational or recalcitrant without any evidence? Why assume the anti-vax movement will be a bigger impediment here than for e.g. chicken pox or measles, for which the movement is certainly too big but not actually troublesome enough to warrant all this hand-wringing?
Anecdotally, I've noticed that a non-negligible fraction of people who have already had covid are actually among the most concerned about infection. I think it's some effect of the lingering uncertainty about reinfection, combined with deep experiential knowledge of how unpleasant covid can be: their slight uncertainty about getting it again leaves them with just as much concern as the average person who has no sense of how unpleasant it is.
Right, thanks. Question: I'm guessing this means that it would be harder for me to outperform the market? I figure that if PredictIt is poorly-calibrated, that would mean I could (if better calibrated) make predictable money off of it, even if only in small quantities. Whereas I'm guessing that the standards of calibration required are probably a bit higher to predictably outperform Betfair?
I haven't looked into it but my guess is yes. PredictIt is a really dumb-money market; there's even outright manipulation by political partisans.
On the other hand, the PredictIt fees are so high that it makes arbitrage much more difficult than it might seem. A lot of the time probabilities don't even sum to 1 -- but you'd lose money trying to arbitrage this. And you can only have so many outstanding contracts. It's obviously inefficient but in such a way that this can be hard to take advantage of.
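As a rough sketch of why the fees bite (hypothetical prices; PredictIt keeps 10% of the profit on each winning contract, plus 5% on withdrawals): even a market whose YES prices sum to $1.10 can be nearly impossible to arbitrage by buying NO on everything.

```python
# Hypothetical 5-way market whose YES prices sum to $1.10.
yes_prices = [0.40, 0.30, 0.20, 0.12, 0.08]
PROFIT_FEE = 0.10  # PredictIt takes 10% of profit on each winning contract

nets = []
for winner in range(len(yes_prices)):
    cost = sum(1 - p for p in yes_prices)  # buy one NO share of each contract
    payout = len(yes_prices) - 1           # every NO except the winner's pays $1
    fees = PROFIT_FEE * sum(p for i, p in enumerate(yes_prices) if i != winner)
    nets.append(payout - cost - fees)

# Gross arbitrage profit was $0.10 per set of shares; after the profit fee
# it's pennies at best, and negative if a long shot wins -- and that's
# before the 5% withdrawal fee takes its cut.
print([round(n, 3) for n in nets])
```

On top of that, the capital is locked up until resolution, so even the best case is a poor annualized return.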
Outright manipulation by political partisans would never happen in the real stock market, of course. *cough* Perhaps on stocks whose names rhyme with BlameFrock.
But yeah, PredictIt seems like it's just obviously dumb a lot of the time.
It depends. Betfair's market around the 2020 US presidential election was probably pretty off (unless you thought the outcome was significantly in the Biden-favoured half of the distribution, even though 538 says the opposite).
I think this post leans a little too hard on efficient markets. By the end of the post at times you seemed to be treating the Metaculus probabilities as though they are the actual probabilities.
Metaculus obviously has less "dumb money" than PredictIt. But it still has weird biases or cases where the market is obviously wrong, due to Silicon Valley biases, long-running questions where people drop out and don't update, overoptimism about certain technologies (like reinforcement learning), etc.
We're probably still in the world where more hedging language is merited when reporting the probability coming out of a prediction market. They don't really give "the probability" yet, I don't think, although a thick and liquid enough market might, for some things.
I think I'm treating it the same way I would treat some competent expert giving their probability - not incontrovertibly true, but by default worthy of respect until we have particular reason to challenge it. Every expert has their biases, but so do nonexperts and so do I, so taking an expert's probability as the best we can do seems fair to me.
It seems to me that your discussion of the market on fatalities with and without challenge trials is taking the market results rather more seriously than we should. If we were quite confident that there would be 50,000 more deaths without challenge trials than with, then obviously we should do the challenge trials. But what we have here is that the expectation of deaths without challenge trials, according to a particular set of estimators, is 50,000 greater than the expectation of deaths with challenge trials, according to some overlapping set of estimators. And these estimators are drawn from a biased pool, that has been primed by lots of amateurish discussion of challenge trials this year, without expressing much knowledge about past thinking about the epistemic relevance of challenge trials.
Financial markets work well (to the extent they work well) because liquidity, speed of information transfer, and transaction clearance effectively prevent arbitrage. You need no-arbitrage conditions to even have a theory of pricing that remotely corresponds to fundamental value, and prediction markets are seemingly useless if the bets don't correspond to the bettors' actual beliefs about fundamental probability and are instead attempts to profit from arbitrage.
In financial markets, this happens because of market makers: institutions with enormous cash reserves or lines of credit, ready to take the opposite side of any bet in order to profit purely off of juice. This is the same purpose a book serves in normal betting, except books also set the prices. This means the prices of bets are not even intended to reflect anything about the probability of an event happening, but rather the book's assessment of how much money is going to take each side of a bet at a particular price.
It's a subtle difference, but it can result in systematic deviation of betting lines from reality where some team can very consistently over or underperform their spreads for a long time because the betting public includes so much stupid money placing bets for bad reasons.
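The book's incentive can be sketched with toy numbers (assuming standard -110 juice, i.e. bettors risk $110 to win $100, and a made-up handle): once the money is balanced on both sides, the book profits regardless of the outcome, which is why its line tracks money flow rather than probability.

```python
# A balanced book at -110 juice: bettors risk $110 to win $100.
bets_side_a = 110_000  # total staked on side A
bets_side_b = 110_000  # total staked on side B (the book moves the line until balanced)

# Whichever side wins, the book returns winners' stakes plus 100/110 of each stake.
payout_if_a = bets_side_a * (1 + 100 / 110)
payout_if_b = bets_side_b * (1 + 100 / 110)
handle = bets_side_a + bets_side_b

print(round(handle - payout_if_a, 2))  # book profit if A wins
print(round(handle - payout_if_b, 2))  # book profit if B wins
```

The profit is the same either way, so the book's optimal line is whatever balances the money -- a systematically biased public moves the line away from the true probability.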
How do prediction markets avoid this? Seemingly, the point is to make optimal predictions, but every incentive from the perspective of the book is to make money off of systematic and predictable human irrationality instead.
I realize after reading this that I'm not making clear the actual difference between market makers and books, but market makers in financial markets are distinct from the exchanges. If you're trying to buy and sell on the NYSE, there are many market makers to choose from, so they're in competition to offer good prices, not just in terms of bid/ask spread, but they need to all be offering roughly the same actual prices. Books, on the other hand, are the exchanges, and there are so few of them that they can effectively just set their own prices.
I agree prediction markets are bad at this right now. That's what I meant in the first part, where I say that I'm still waiting for the perfect prediction market - one which is so lucrative and liquid that the same sorts of people who correct mispricing in financial markets will swoop in with enough money to exploit as many dumb bets as people can put up.
This post succeeded in making me more interested in actively participating in Metaculus. I've been winding down my participation in PredictIt (mediocre place to put capital even if you can beat the average), but previously had no interest in Internet points. If you're planning on keeping up a recurring review of interesting developments in the space, I think it could actually drive a significant amount of traffic in their direction.
Wouldn't any full-fledged prediction market have serious "insider trading" problems, at least for many predictions of interest? E.g. AOC, or anyone close to her, massively shorting herself just before dropping out? Typically, actors don't have their hand forced (like officials trying to avoid as many covid deaths as possible) but can choose from a number of options, right? Seems like a lot of infrastructure and regulation would be necessary to avoid such problems, likely more than for stock-market insider trading.
This would suck for traders, but (directly) be good for people trying to use it to predict outcomes. I don't know about the indirect effect from eg traders refusing to trade because they don't know if they're betting against an insider.
Even if we wanted to prevent this, would it be harder than preventing CEOs from insider trading on their own stocks? It actually sounds like an easier problem, since CEOs have many legitimate reasons to own their stocks and AOC has no legitimate reason to own "AOC will drop out" shares.
> It actually sounds like an easier problem, since CEOs have many legitimate reasons to own their stocks and AOC has no legitimate reason to own "AOC will drop out" shares.
Sure, but isn't this sort of admitting that for prediction markets to work, they'll need a regulatory framework and enforcement approximately as good as the stock market?
Because that moves my predictions about when they will be useful from 'a year after they are legalized' to '50-100 years after they are legalized'. Regulatory frameworks like that don't spring up overnight, and enforcement is a nightmare.
Prediction markets like Hollywood Stock Exchange benefit from insider trading: the pool guy overhears his client the movie mogul boasting about how the latest script for "Joker 2" has a "Sixth Sense" level plot twist, so the pool guy bets his Internet Points on "Joker 2" doing even better at the box office than the market expected.
Since no money changes hands at Hollywood Stock Exchange, this is all innocent fun and useful for people who have a need to develop a sense of what movies will be hits or not.
But when money is involved, insider trading is a big concern.
For Europeans, I think a notable market is Betfair. Much lower fees than PredictIt (generally 5% on profits, but can be less with discounts) and very good liquidity on popular markets, and when the topic is emotionally charged you can still make easy money: https://www.lesswrong.com/posts/y8RWtNBiksbSzm9j4/bet-on-biden
I understand the value of prediction markets, for where people are predicting their own behavior (who will I vote for, when will I get vaccinated), but I see a lot less value in predicting events that people have no real control over, like COVID deaths.
Sure, we can make a group guess, but why is that at all accurate? What info can I get from that prediction that I could not get from just reading the newspaper (the same info source that all the other punters have)?
Let's say we had a liquid, stable market for the Presidential election on election night. Do we really think it would have been stable, in favor of Biden, around 10pm EST? My guess is, everyone would have been watching the same numbers come in, and the bets would have swung wildly from Biden, to Trump, and back again by early Wednesday morning. All of that to say, if no one really knows anything (and certainly not anything different), why do these markets have any value?
Presumably the people involved are balancing out any inherent bias they bring to the question, and therefore create some kind of crowdsourced consensus on the real answer. Two people can read the same newspaper article about how many deaths to expect and come up with a different conclusion. 1,000,000 people reading that article and putting down a guess is going to give us a much more stable prediction of what the information we have really means.
Of course, you have to assume some kind of knowledge and ability to predict in the group involved, and also that they aren't from a single subset of the population that all bring the same bias to their guesses.
I have to admit that I agree with you that Scott seems to feel that there's a lot more value in that than there really is. A single point of real data (like your presidential election scenario) will wildly swing an existing market based on speculation.
If prediction markets become big and liquid and profitable enough, you'll end up getting sources that aren't just people reading the newspaper. Or at least people who read all the newspapers.
In the election example, when the numbers started coming in and it looked like Trump was leading, what might happen is whatever hedge-fund-like entities were participating would see the chance for profit and stabilize the price swing. And they would conceivably have predicted a Biden victory based on the fact that early votes always swing Republican, and the later counted mail-in votes swing Democratic.
But also, a wild swing isn't necessarily a mark against prediction markets. As the results come in, there's more and more data to take into account. If, say, Trump had somehow won California, then the markets all swinging to his favor would be a feature, not a bug.
Also, before I end up trying to explain hypothetical evidence too much, is there a way to check how the actual prediction markets fared? PredictIt shows a bit of a swing at ~20c, Metaculus doesn't (it was actually remarkably stable, at 8% Trump even before the election), but I can't find very high resolution data so it's hard to say.
I strongly suspect Scott has a bigger readership than the entire Metaculus userbase, and that approximately 100% of existing Metaculus users are subscribed here. If this becomes a regular feature, I expect a large bump in predictions on the questions featured in ACX every Monday, and for them to move in the direction hinted at by Scott's commentary. I guess the screenshots in the post are enough information to track that.
For the reinfection question (currently 277 predictions), the fine print says:
"If coronavirus infection confers partial immunity to the new strain, such that getting the disease is less likely but still possible, this may still count so long as scientific evidence exists (for example in a published paper) that the protection is significantly less for the new strain than the old."
And the discussion in the comments suggests that this is to be interpreted as 'statistically significant', in which case I think it's basically certain that there will be at least one variant with a statistically significantly different reinfection rate that infects that many people globally. In fact, I wouldn't be surprised if it's already happened. The main reason I'm not more confidently 'yes' is that the wording isn't completely clear on what it means by 'significant'.
I also think that the modal scenario is that we start needing annual COVID vaccines, and everyone I know who works on COVID modelling seems to agree with this. So far, we've only seen a small number of notable new variants, but that's without any selection pressure on the virus from vaccines. I guess the main way that this doesn't become a thing is if somehow the dominant strain is mild enough that vaccination isn't worthwhile (or politics happens, I suppose).
I'm wondering why they think it will be annual, given that there are structural reasons for flu to evolve faster (it's segmented), and the rate of mutation depends on the number of people infected? Once the pandemic is over, it seems like new worrisome mutations should go down a lot.
You're right, there's no particular reason it would be annual. It seems very likely it will be endemic and regular vaccinations will be a thing, but no good reason for annual cycles.
I tried out Metaculus myself after the last time it was mentioned. While I really like it, there were two things about it that bothered me.
First, when making a prediction using a probability density, I would expect that setting the variance very high would minimize the impact that prediction would have on my score, regardless of the outcome. Instead it seems to have the opposite effect. Can anyone explain why that would be?
Second, it seems like most of the questions are fairly long term. Long term questions seem less useful for calibrating. Is there a way to find shorter term questions so I can make more progress?
I may be missing something, but the FAQ only discusses how distributions work at a very high level. In fact, it seems to agree with my intuition, but not with what I'm actually seeing happen:
> Making your distribution wider or narrower reflects your confidence in the central value you've specified, and decides the stakes: a narrower distribution gives more points if your central value is right, but more losses if it's very wrong.
To see the actual scoring rules you have to click the "For those who are interested and have a stomach for more mathematical detail, the technical details of the scoring are as follows (click to reveal)." and the "Here are the details…" for categorical/numerical scoring rules respectively.
However, these aren't that necessary, I did read through these when I started (3 years ago), but I really only use the general facts about the scoring when predicting.
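On the width question specifically: Metaculus's continuous scoring is (roughly) based on the log of the probability density your prediction assigns to the resolved value. A very wide distribution doesn't zero out the stakes; it caps your upside on a hit while softening your downside on a miss, because widening lowers the density everywhere near your center. A sketch, ignoring the community-relative component of the score:

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of a normal distribution at x."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Score-like quantity: log density your prediction assigns to the outcome.
narrow_hit  = normal_logpdf(10.0, 10.0, 1.0)  # narrow distribution, outcome at your center
wide_hit    = normal_logpdf(10.0, 10.0, 5.0)  # wide distribution, outcome at your center
narrow_miss = normal_logpdf(20.0, 10.0, 1.0)  # narrow distribution, outcome far away
wide_miss   = normal_logpdf(20.0, 10.0, 5.0)  # wide distribution, outcome far away

print(narrow_hit, wide_hit)    # narrow scores much better on a hit
print(narrow_miss, wide_miss)  # wide scores much better on a miss
```

So if your central value turns out to be right, a high-variance prediction *loses* points relative to a confident one, which matches the behavior described above.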
It seems like the basic principle behind prediction markets is that most people want to make money and that smart money will overwhelm the noise from dumb money. This premise seems a bit shaky as of late? For a thinly-traded market, it seems like semi-organized dumb money from the 4chan/lizardman/McBoatface contingent is quite capable of launching a denial-of-service attack if they feel like it, basically by throwing money. Piling on in this way is apparently as much fun as gambling and as feasible as any Kickstarter. It seems like a good profit opportunity for people who can manage to catch the money they throw, but this doesn't prevent them from throwing, and so it's not so good for rational price discovery.
This is on top of skepticism about anything based on asking people questions without cross-examination. Scott has written about problems with surveys before, but for more skepticism, see @literalbanana's Survey Chicken essay: https://carcinisation.com/2020/12/11/survey-chicken/
If there is a denial-of-service attack on a prediction market then it will usually be obvious and we can just ignore the result. But perhaps there will be cases where the result is plausible?
I wouldn't worry about the WSB thing setting a precedent until we see how it ends.
My expectation is still that a small handful who got in early and dump their stock first will make a huge profit, everyone else will lose their lunch, the losers will become disenchanted and re-frame the whole thing as a setup by those who started it and got rich, and it will be very difficult to get people to jump onboard such schemes in the future.
I could certainly be wrong, but we don't know what this example is teaching us yet.
If there were a good prediction market (large, liquid, no-limits, real money), one of two things would happen:
1. It would make good predictions
2. You could make basically limitless amounts of money off of it.
I agree PredictIt sucks. I made a few thousand dollars off it last year. I would have made more, except that there are limits to how much money you're allowed to put in. If there were no limits, I would have made a few hundred thousand dollars. I'm not exceptionally good at this, just not quite as dumb money as the people driving prices. I would be okay with either of the two options above.
I think the reason you can't make limitless amounts of money off of WSB being dumb is the way shorts work - you can't buy a share in "Gamestop is actually worth less than this", only in "The market will value Gamestop at less than this at time X". Prediction markets can sell the real deal - no matter how much dumb money says Trump will definitely win, when Trump loses your shares will go up to max price.
It would be interesting if (2) were true in the "Las Vegas exists" sense. The limit is probably how much people are willing to spend on entertainment? There would be competition to post entertaining questions and get publicity for questions that entertain the masses. Using "entertain" loosely.
Regulated as gambling, eh? What's the nearest Indian Reservation to Wall Street?
Also UK bookmakers are allowed to take bets on political questions. So you can check the odds for things like next POTUS (Kamala leading at 4) or PM (Sunak at 29/10) there.
(Bonus: senate almost certainly won't convict, Joe is more likely than not to make it to '24 and Puerto Rico probably won't be a state by '22) https://www.oddschecker.com/politics
ok nice intro to the prediction markets. more of this please. a little less arrogance @ "I don't think the average Metaculan truly understands the difficulty...", as well as the contradiction "...this is a pretty impressive example of decentralized expertise weighing in..."
The FDA is already going to speed up clinical trials for COVID booster shots against new variants:
"Peter Marks, the head of the Food and Drug Administration division that oversees vaccines, said Friday that the agency will do what it can to speed the process. It won’t require big clinical trials, for instance. Rather than studies of tens of thousands of people, the agency will mandate much smaller studies of a few hundred. The goal would be to ensure that the vaccines produce the desired immune response and to see whether the products cover just the new variants or the original virus as well as the new variation, he said.
“We would intend to be pretty nimble with this . . . so that we can get these variants covered as quickly as possible,” Marks said on an American Medical Association webinar."
i'm also aware of prediqt : https://prediqt.com/ , which uses the EOS blockchain, and foretell : https://www.cset-foretell.com/ , run by georgetown university. i don't have much personal experience with either, but my dream for metaculus is for it to eventually aggregate other prediction markets and forecasts (like those from fivethirtyeight)
I have some experience with cset-foretell; would recommend. I was #1 for a long time, but I'm currently #3 after a traumatic resolution on a facial recognition funding question
Re: aggregation, you might be interested in metaforecast.org/, written by yours truly, which does something similar to what you were suggesting
Is it possible to place error bars around the Metaculus prediction for deaths w/ and w/out challenge trials, based on the actual track record of similar questions? Evaluating counterfactuals such as how many people would have died if challenge trials were allowed to proceed is really only useful insofar as the Metaculus predictions in the past are somewhat accurate and the difference in predictions is outside the error range of past predictions.
I see that Metaculus has a track record page, but I'm not statistically advanced enough to convert their metrics to standard deviations around individual question predictions.
Most interesting near term prediction to me: the current petition for a recall election for CA Gov. Newsom at 75% chance of succeeding on Metaculus (about 65% on PredictIt). Something I've of course read about in general news but had no clue what the probability was.
I wasn't too impressed by the tech mogul positioning to replace him, but he's probably at least a lot smarter and less good-old-boys-y than Newsom, and maybe that would end up being good in some way?
How do prediction markets handle conditional hypotheticals such as "how many people would die if we did challenge trials"? My understanding is that prediction markets work by eventually paying out when the real world provides a result. But in the case of these conditionals there's no way to ever know the actual outcome. Do those "shares" all expire worthless? Are they refunded for purchase price? This seems pretty important for any proposed policy question (if we implement policy x, result will be y). I'd like to understand this mechanism better.
I wonder if it's legal to use virtual currencies in a prediction market? Between Apple/Google app stores, Steam and other game platforms, in-app and in-game purchases, etc., I'm pretty sure the amount spent in these online marketplaces is in the hundreds of billions globally. If these currencies were made sufficiently fungible with real money, that should be enough to attract investors.
Or, hell, could Bezos just run a prediction market that pays out in Amazon gift certificates? How do the gambling laws actually deal with such alternate currencies?
As a general practical question, even if prediction markets weren't classified as gambling and you could attract real investors dumping billions into predictions, I wonder how you protect them from manipulation?
Like, in a fully open market, how do you ask a question like 'On what day will Company X announce their new vaccine' without Company X just dumping a billion dollars into that market on a much later day they're sure they'll be ready by, and waiting until then to announce?
I guess in that case the market did tell you when they would announce with great accuracy, but only by fucking up the thing being measured.
In the regular stock market, insider trading is forbidden. People in companies with special knowledge cannot trade in their own companies' stocks except under special circumstances (for example announcing several months in advance). So if prediction markets become a thing, regulators could very well deal with insider trading similarly.
But insider trading improves the accuracy of prediction market forecasts, as I pointed out in 2010 in regard to plans to transform the Hollywood Stock Exchange into a real futures market:
But when you're predicting real-world events, is there a worry about the effect you have on those events themselves?
Like, if there's a market for when Pfizer announces a new vaccine, isn't Pfizer incentivized to pick a day a few months later than anyone else is guessing, and just sit on it until then?
I understand that then the prediction market is 'working' because we're really damn sure when Pfizer will announce, but only because the market itself changed the useful, unsure answer into a different, much worse, but definite answer.
You can’t really say, “if only we had done challenge trials we would have saved 50,000 lives,” since the world where we did challenge trials looks different in other ways from the world where we didn’t e.g. additional bureaucracy lifted, or even greater urgency due to larger death rate.
> we don't have herd immunity for the danger period in winter 2021
Why are you assuming Covid is seasonal? As far as I can tell, Covid likes hot weather just fine. Summer was less deadly in North America because the big cities locked down and flyover country didn't have it yet, not because Covid doesn't like the heat. Brazil is getting hit badly in the summer heat now.
AFAIK, Governor Newsom of California is targeting June for general vaccine rollout, which makes the May 19th nationwide prediction somewhat optimistic.
I am a user of CSET Foretell, and it is interesting drawing comparisons between Foretell, a subsidized opinion pooling platform, and a traditional prediction market. The first difference is that it doesn't give a strong appearance of gambling, which is a plus in the policy making context. To get people to participate, there are some financial prizes such as lotteries for people who place in the top of the leaderboard or forecast during a time period. There are also social incentives though such as the ability to join teams, which have a separate leaderboard, and the feeling of contributing to pressing national security problems. This is separate from UI engineering that I think also adds great value like automatic reminders to update forecasts and historical data visualization for those of us who don't like having to track down and plot everything.
I think a likely future incentive would be CSET recruiting top forecasters into an elite opinion pool that consults with policymakers. It's not a coincidence that Cultivate Labs built both CSET's and the Good Judgment Project's platforms, with superforecaster cachet being a major reason to join Good Judgment Open.
I am convinced by arguments I've seen elsewhere that for real-money prediction markets to attract serious researchers placing smart bets, the markets need to be subsidized so that they are positive-sum rather than zero-sum for the participants. Commodities futures markets are zero-sum in terms of money, but positive-sum when you account for the benefit to the people who are able to use them to hedge their business. Those people effectively subsidize the market makers and speculators in exchange for the service of hedging.
If we are serious about getting a prediction market to answer a question, we need to subsidize the market with a bunch of money for the smart bettors to win.
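Robin Hanson's logarithmic market scoring rule (LMSR) is the standard mechanism for exactly this: an automated market maker whose worst-case loss is capped at b·ln(n) for n outcomes, and that bounded loss is precisely the subsidy transferred to better-informed traders. A rough sketch (parameter values invented):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def buy(q, outcome, shares, b):
    """Price of buying `shares` of `outcome` from the market maker."""
    q_after = list(q)
    q_after[outcome] += shares
    return lmsr_cost(q_after, b) - lmsr_cost(q, b)

b = 100.0          # liquidity parameter chosen by the subsidizer
q = [0.0, 0.0]     # fresh two-outcome (yes/no) market
print(round(buy(q, 0, 50, b), 2))   # first 50 YES shares cost ~28.09
print(round(b * math.log(2), 2))    # subsidizer's worst-case loss: 69.31
```

The larger the subsidizer sets b, the deeper the liquidity and the more it can lose - which is the point: that money is what pays smart bettors to show up.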
It seems to me like many of the questions you might ask on a prediction market can implicitly function as hedges against wide array of possible outcomes, so you'd still have at least half of the participants benefiting from the ability to hedge (and, presumably, "paying" for it in expectation, which can provide the incentive for other participants who don't need to hedge to close the loop).
I don't know much about prediction markets, so probably a dumb question: What about derivatives on predictions? Wouldn't trading in, say, options on a prediction provide incentives to game the system and destroy the predictive power of the original market?
So here are some other links and platforms you might be interested in:
- Good Judgment Open: https://www.gjopen.com/. Top 2% go on to be selected as superforecasters, so amateur participants have some motivation. I have the impression that this platform is comparable to Metaculus in terms of accuracy, but I don't have actual data. They're particularly centered on geopolitical questions.
- CSET-foretell: cset-foretell.com/. Similar to Metaculus, but smaller and attempts to influence public policy in the US, particularly around transformative technology. Also sponsored by OpenPhilanthropy.
- Polymarket has already been mentioned by other people. Also by you in the past: "I got emails from two different prediction aggregators saying they would show they cared by opening markets into whether the Times would end up doxxing me or not. One of them ended up with a total trade volume in the four digits. For a brief moment, I probably had more advanced decision-making technology advising me in my stupid conflict with a newspaper than the CIA uses for some wars. I am humbled by their support."; Polymarket was one of them, though the market was from their old site and thus only remains in the archives. https://web.archive.org/web/20200717060418/https://poly.market/market/will-the-new-york-times-publish-an-article-revealing-scott-alexanders-full-name In particular, PolyMarket might have boring markets, but they have recently been great for making money from unsophisticated traders.
- Omen. https://omen.eth.link/ This seems to be truly decentralized. They sometimes use Kleros (kleros.io), a decentralized judging system which I find really elegant. They haven't seen much volume recently, probably because of usability problems / high ETH fees. They powered https://coronainformationmarkets.com/ for a while, but they didn't see much volume either.
- https://metaforecast.org/: A search engine, but for probabilities. In very early beta. But the idea is that you can just search for something, like "covid", and it will get you the forecasts from the platforms I mentioned above (and PredictIt, which you also mention)
- I also worked with foretold.io at some point in the past. They're good if you want to create your own private communities and forecasts with friends, and it has a powerful distribution editor based on the one from Guesstimate, but which takes some getting used to. It also hasn't seen much activity recently, but I've been working on using it as a backend for making large numbers of predictions at once. As an example cool thing one can do, a while back I entered things Dominic Cummings said: https://www.foretold.io/c/952d822e-b979-4c42-8560-7f878ad23c9e?state=pending, to see how accurate he was (I haven't seen how these resolve yet, though)
These are great resources! Not sure if it fits into your broader category, but here's my contribution (annual forecasting contest in its 6th year, where I wrote a series of 16 blog posts over the course of last year using the 2020 contest to illustrate techniques: https://braff.co/advice/f/forecasting-masterclass-1-billie-and-brexit)
At least to start with, I'd be much more interested in a backwards-looking analysis of how well these markets have done at predicting events in the past.
My prediction for US coronavirus deaths as of late last year was 665,000 (median). Posted and defended on DSL, where I hope it earned me a bit of respect from my SSC/DSL/ACX peers, which is a sort of "fake internet point" that I understand and place some value on. Enough to do about an hour of dedicated research and math in that particular case, on top of the general Covid situational awareness stuff that I'd have been doing anyway.
If there'd been a way to make serious money off that sort of prediction, I'd have put in a lot more than an hour's work. And judging by the Metaculus results, probably come to about the same conclusion and made no serious money, but worth a try at least. I don't understand the value of Metaculus fake internet points well enough to devote any great length of time pursuing them. If that changes, and if my understanding of MFIPs is that they are really good things to have, then I'll probably engage with Metaculus.
If it's just that there's a small nerdy community that will respect me for my Big Number, I've already got a big IQ I can brag about to people who care about that sort of thing, and I've got a community where I can earn a more nuanced and informed sort of respect than "He has a Big Number".
FYI Scott, the US intelligence community had an internal prediction market only accessible from TS networks until maybe mid last year. I enrolled in it a few years ago, but never personally made any predictions due to just not having any basis for making a good guess about anything without investing a ton of time.
Theoretically, this probably should have performed better than a public prediction market since the participants have access to classified information. I didn't pay enough attention to know to what extent it was ever evaluated for predictive accuracy compared to individual experts and public prediction markets. Of course, you couldn't earn actual money, so I think lack of activity is the biggest reason it shut down and may not have outperformed public markets even with the additional information.
It was voluntary and not used to inform real policy decisions. I can't remember now which agency set this up, but probably NSA. They tend to be behind most of the classified but otherwise public-facing applications anyone with IC PKI identities can get to.
On a semi-related note, something I miss about no longer working in a SCIF is that, even with a clearance, I can't access these sites any more since I don't have physical access to a connected workstation. One of the more interesting things you learn when you first get access to the same briefings given to the Joint Chiefs of Staff every day is just how wrong a lot of public news actually is, along with how much is actually known but not publicly revealed, to avoid exposing the capability to know it.
Is there any kind of research on the differences in cognitive bias between risk-averse and risk-prone individuals? Because it feels like prediction markets definitely attract more risk-prone people.
Like some other commenters, I'm puzzled by Scott's implied assumption that the consensus that forms around any prediction in a prediction market will be inherently valuable (not to say accurate). Why should this be? How can we know that predictors in such markets aren't simply pooling their ignorance?
Is there an assumption that such markets will tend to be populated by people who are more likely to be accurate predictors? If so, what's the evidence for that assumption?
The inaccurate predictors run out of money to spend on the prediction market (also they don't want to lose any more money) and the accurate ones end up with more money they can spend on predicting things.
That, plus "people who are good at forecasting can use prediction markets to make money, thus joining the ranks of rich people who are good at forecasting".
Excellent explication of conditional prediction markets! (For those who don't know, those are the basis of Robin Hanson's Futarchy.)
Sadly, I think the truest true answer to many forecasting questions is "no one has any clue, it's an utter coin flip" and no amount of market liquidity will change that. It's possible that's what's happening with the two conditional markets for COVID deaths with and without challenge trials. They're saying "339k deaths with" and "426k deaths without" but maybe what that really means in both cases is "gosh, no clue, a few hundred thousand ish I guess?" The Metaculus distributions for the with-vs-without predictions do mostly overlap each other. Still, lives saved in expectation is a big deal!
So I'm not actually disagreeing with anything here.
> Sadly, I think the truest true answer to many forecasting questions is "no one has any clue, it's an utter coin flip" and no amount of market liquidity will change that.
Metaculus publishes their track record [1] on their site. They have an average brier score of 0.122 (which is really good), and their calibration is a little under-confident for sub-50% predictions but overall great. Unfortunately I believe this only takes into account binary questions, not distributions. But this, along with much of Tetlock's research [2] means I'm skeptical that most forecasting questions are actually 50/50 no matter how much info you have.
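For reference, the Brier score on binary questions is just the mean squared error between the stated probability and the 0/1 outcome - lower is better, and always answering 50% scores exactly 0.25 (the 0.122 figure is Metaculus's published number, not derived here):

```python
def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always answering 50% scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

print(brier([0.5, 0.5], [1, 0]))            # 0.25 -- the uninformed baseline
print(round(brier([0.9, 0.1], [1, 0]), 3))  # 0.01 -- confident and right
print(round(brier([0.9, 0.1], [0, 1]), 3))  # 0.81 -- confident and wrong
```

So an average of 0.122 means the community is doing substantially better than chance, though the score's asymmetry (being confidently wrong is very expensive) is part of what keeps forecasters honest.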
Great points! Thank you! Clarification: I didn't mean "coin flip" as in 50% so much as like very diffuse distributions based on little information. For example, if the probability of Biden getting reelected is 60% because (I'm making this up) that's just the historical frequency for incumbents and we know nothing else then I think of that as a coin flip. I might be abusing the term a little.
An ignorant prior with little Bayesian updating might be the more precise way to put it.
> of course, you can still buy all the Gamestop stock you want
this gave me an idea: can't we hijack stocks to turn them into predictions?
for each bet create two corporations, for example the Trump Will Get Impeached Corporation and the Trump Will Not Get Impeached Corporation. they'd both be contractually obligated to, if they lose, give all of their wealth to the other one, and if they win, buy the shares back from all shareholders at the new, increased value.
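To make the mechanics concrete, a toy calculation of what a share would be worth under this scheme (all numbers invented):

```python
def buyback_price(winner_capital, loser_capital, winner_shares):
    """Loser hands its whole treasury to the winner, which then buys
    back its own shares at the new, higher per-share value."""
    return (winner_capital + loser_capital) / winner_shares

# Two $10,000 corporations, 1,000 shares each: a winning share is
# bought back at $20, a losing share at $0 -- effectively a binary
# prediction share trading around its implied probability.
print(buyback_price(10_000, 10_000, 1_000))  # 20.0
```

In other words, the pre-resolution price of each corporation's stock relative to the combined pot would act as the market's implied probability, just as contract prices do on PredictIt.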
Ugh. What are covid "deaths"? The PCR test isn't standardized, and by varying the number of cycles, you can vary the covid death rate. What if someone who is in control of the number of cycles also bets on the number of deaths?
BTW, PCR is why the death prediction keeps rising -- because PCR counts people who were merely exposed but destroyed the virus, people who were exposed and had no or very mild symptoms, people who were sick but aren't anymore, and finally the ones who are showing symptoms. Eventually everyone will fit into one of those categories and so every death will be a covid death.
1. The persistent popularity of casinos, lotteries, and other forms of gambling indicates that yes, people are more than happy to throw their money away on bad odds. No matter how bad someone is at predicting, they probably still aren't bad enough to drop their odds lower than that of buying a lottery ticket (i.e., they aren't being any less rational). And yet lottery ticket buyers still persist. This calls into question the ability of prediction markets to "select" for high-knowledge participants over the long run.
2. The reason this works is because prediction markets (and all other forms of gambling) aren't closed systems. Sure, someone who makes all of their money playing Blackjack, and funnels 100% of their earnings back into more Blackjack, will eventually sink-or-swim based on their Blackjack playing skills. But in practice, most people spend a small portion of their earnings on Blackjack, and have outside sources of income, such that losing in Blackjack is only a minor deterrence to continuing play. So long as people have outside jobs, "dumb-money" will continually pour into prediction markets with little consequence.
3. More abstractly: Due to the diminishing marginal utility of money, "dollars do not equal knowledge units". A wealthy person would gladly gamble away $10,000 on a whim, while a poor person would only bet this if he know the outcome with near-certainty. This means that the degree of knowledge a person has in regards to an event occurring, *cannot be directly inferred by the amount they are willing to bet*. The implications of this are significant. Because for prediction markets, pooling dollars bet ≠ the aggregated knowledge of an event, so you cannot even say with certainty in a market with (for example) 60% yes and 40% no, "the bettors in this market think the event is more likely to occur than not". It's *evidence* for it, sure, but other forms of polling may find different outcomes in the same group of people, if they adjust for levels of certainty using different measures.
4. The common sentiment in the rationalist community seems to be something like "betting markets are beautiful in theory, but flawed in practice". I take the inverse position: Betting markets are flawed in theory, but have practical application so long as you understand their theoretical limitations (e.g., they are poll-like things that basically conduct themselves, for free). It's good that we have these prediction markets, because having information is better than not having information. But that's as far as my hope goes: If there was some way to bet against prediction markets eclipsing the usefulness of traditional polling in the coming years, I would.
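Point 3 can be made concrete: under (say) log utility of wealth, the minimum win probability at which an even-money $10,000 bet raises expected utility depends heavily on the bettor's wealth, so identical stake sizes encode very different levels of confidence. A sketch (wealth figures invented):

```python
import math

def min_prob_to_bet(wealth, stake):
    """Smallest win probability p at which an even-money bet of `stake`
    raises expected log utility:
        p*ln(w+s) + (1-p)*ln(w-s) >= ln(w)."""
    lose = math.log(wealth - stake)
    win = math.log(wealth + stake)
    return (math.log(wealth) - lose) / (win - lose)

# A millionaire needs barely better than a coin flip to justify $10k;
# someone with $20k needs roughly 63% confidence for the same stake.
print(min_prob_to_bet(1_000_000, 10_000))  # ~0.5025
print(min_prob_to_bet(20_000, 10_000))     # ~0.6309
```

So a rich bettor's $10,000 carries much less information about their credence than a poor bettor's $10,000, which is exactly why dollar-weighted prices are only evidence about aggregate beliefs, not a direct readout.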
I have the impression that he would give quite different predictions than the average / median on Metaculus. If only there were more than fake internet points up for grabs there, he might have an incentive to participate in those predictions.
I go through every question in Metaculus and if I don't have any prediction of my own, I just pick what the crowd picks. I want to see how well I perform over time and get those super nifty Tachyons. I don't actually care about the future...just winning.
I did PredictIt this election cycle and made around $200 on a $500 investment. Never again. What a horrible experience. First, it's pure tribalism, distilled and slammed one shot after the next. Second, the rules are super specific (which they should be, and sometimes aren't in Metaculus--though discussion usually helps) and people make money off of gaming the rules of a question (This is the best example: https://www.predictit.org/markets/detail/4353). Third, it's far more important to understand trading (hedging, arbitrage, etc.) than making correct guesses. In the market I listed above, somehow, they pumped the price of Republicans winning the senate after the Georgia election because of the rules, and a ton of people made money by choosing the wrong answer. I think Metaculus avoids this problem beautifully.
I support Metaculus Mondays. There's even some questions I'm interested in, like, "When will Epistemic Humility become a common phrase?" Support them on Patreon: https://www.patreon.com/metaculus/posts
I'm confused by what your example is supposed to show. The price of "D House, D Senate" spiked to near-100% after the Jan 5 special election that confirmed they'd have a majority, then tailed off over time (presumably because of Republicans hoping that it would somehow get overturned), then jumped back up on Jan 20th when it became clear that no miracle would happen. That's about what I would have expected, though I'm surprised that it put the odds of Republicans winning so high.
But I don't see how this is "gaming the rules" - the Democrats do in fact have control of the Senate by most reasonable definitions, and it seems pretty reasonable that the contract shouldn't resolve until Jan 20th in case a senator dies or something before being sworn in.
(And who is the "they" who pumped the price of a Republican win? If the price went up because a bunch of Republicans believed there would be a surprise reversal and they were wrong, then that sounds like the market is working as intended - moving money from people who make bad predictions to people who make better ones.)
All of the action in that market was in the comments section. There are thousands of comments and most are garbage, so not worth wading through, IMO. There were four contracts: DD, DR, RD, and RR; you could buy each as, ostensibly, a yes vote. RD and RR were obviously No and never rose above a penny or two, which is normal in a market that should have settled. This market stuck out to me and probably all of the PredictIt users because it was one of the only markets with a spread that had some play in it. The question everyone asked was: Why isn't DD at $0.99? I'd say most people were like me and saw easy money.
The rules stated that the market would resolve at midnight on Jan 21st. The Georgia Sec of State said publicly that he had until the 22nd to certify the results and that the counties didn't have to report until Friday the 16th, because of MLK day. Due to that and the general craziness of those few weeks, there was actual uncertainty that it might not resolve at midnight on the 21st. However, it became a huge pump and dump for DR yes buyers, who bought low, say at $0.18, and sold (if they were smart) at $0.50 (or something like that, I forget exactly how high it got). It was easily one of the busiest markets on the site, and big ol' flame wars in the comments between DD and DR people attempted to push the price around. People who had bought DD at $0.70ish were freaking out and dumping their positions while the DR price skyrocketed above anything realistic. I held through the whole thing, but it was mostly on the hope that the timing would work out and had nothing to do with the actual question. It was pure zero-sum, white-knuckle trading on news, not who made the right guess. Metaculus does much better in this regard by removing the weird partisan, jocko-homo competitive animus from the equation.
In the end, there may have been some Republican zombies making the bet, but aside from the garbage spam in the comments, it was mostly people messing with each other over the specifics of midnight January 21st deadline and telling each other stories about the inefficiency of government officials, etc. The money made and lost had nothing to do with the actual question and everything to do with rumors, vitriol and watching the market. If someone had correctly predicted in October--and was right the whole time--they would have had to weather the crazy price disruptions during that final week AND would likely not have made as much money as someone in on the DR pump.
I suppose you could make an argument that somehow the market 'worked' because it resolved correctly and moved based on the news, but I'd say that's not what you actually want a prediction market to do. This is actually something that Metaculus suffers from as well, i.e. the utmost importance of asking well articulated questions with very specific resolution criteria. My experience with Metaculus, so far, is that the community is more focused on getting the questions right than they are on winning the bet, which was the opposite experience I had with PredictIt.
I started to type out a snarky comment about how 6% of PredictIt users don't even realize AOC won't be old enough in 2024, and then thought I'd better google it. Turns out she'll turn 35 three weeks before the election, which makes her several years older than I thought she was. Interesting.
Using prediction markets to inform governmental policy seems like it could create some really perverse incentives. For instance, if the question is "What will be the employment rate if we (raise/don't raise) the minimum wage to $15?"
A government will raise the minimum wage if the predicted decrease in employment isn't too great. Doesn't this create strong incentives for business owners to enter the market and enter overly dramatic predictions of employment decreases in order to manipulate policy makers into not implementing a policy that goes against their interests?
(2)
Curious about how these markets work with actual money. For binary predictions it seems simple enough. I'm less sure for continuous predictions. Is payout proportional to how close the predictor's value was to the actual value? How does the money placed get distributed?
(3)
What is the consensus view on cryptocurrency-based / decentralized prediction markets? I see a few built on top of Ethereum (https://defiprime.com/prediction-markets) but I haven't heard nearly as much about them as I've heard about Metaculus. Do they lack the technical expertise and polish that Metaculus offers? Or is it just that the inherent exoticness of cryptocurrencies makes them less accessible than mainstream alternatives?
This seems like the perfect use case for crypto: A clear need to introduce actual money into prediction markets + the technology to skirt government regulation
I have a question that I think is really obvious, but I don't see it being addressed. I wonder if I'm missing something.
If one were to use prediction markets in the "creepy deep magic way", i.e. asking people what the consequences of political decisions would be - wouldn't people want to influence the outcome to support their political leanings?
I understand that this would come at some cost for them, and this surely is a correcting factor, but I would guess that political motivations might still sometimes drastically alter prediction poll results.
Sorry for any grammatical mistakes, this is not my native tongue.
Thanks, that's great and exactly the kind of thing I was hoping to learn about by posting this.
It looks like the site only has thirty markets, most of them are boring, doesn't let users trivially start new markets, and breaks when I try to sort by category - but it's a good start and proof of concept.
I learned about Polymarket from r/ssc and bet on whether the NYT would publish the article on you. I've been happily using it since.
There's also [futuur](https://futuur.com/) which is play money like Metaculus.
And https://prediqt.com/markets
And https://omen.eth.link/
but yeah, they all suck compared to their potential.
Futuur allows playing with real money too.
Yeah, I haven't used Augur, but I've used Polymarket. Given the description of Augur as "nonfunctional or so minimally functional as to be useless", Polymarket is probably a better choice.
The political markets aren't that accurate though. There's a lot of dumb Trump money, and the 2% transaction fee means there's less incentive to bet against the 4% of money that thinks Trump will be president on March 31, 2021 (https://polymarket.com/market/will-donald-trump-be-president-of-the-usa-on-march-31-2021)
The PredictIt markets on the presidential race were horrendously bad. The primaries were the worst, as there was a lot of genuine uncertainty about the outcome until early March, and people buying into PredictIt seemed to bet heavily on the most interesting potential outcomes (Sanders, Bloomberg, etc.) while the price on Biden (the boring outcome) was always low and plummeted post-Iowa.
It can be fun to look at prediction markets, but I have not so far seen any reason to actually act on their predictions. They can change drastically as time goes on. Even if they eventually converge to something close to the actual outcome, that's not really what you want. One would like to be able to base action on a reliable prediction well beforehand (not counting speculation activity here), and that does not seem to be what I have seen.
On the topic of fees, see: https://www.lesswrong.com/posts/c3iQryHA4tnAvPZEv/limits-of-current-us-prediction-markets-predictit-case-study
While we're at it, we should mention the existence of Gnosis, too.
I thought this write-up on Augur vs. Gnosis was pretty good: https://medium.com/@akhounov/hopefully-impartial-comparison-of-gnosis-and-augur-f743d11d6d37. It provides some insight into the kinds of finicky technical decisions that go into building markets like these.
As for Polymarket, I haven't found any good write-ups as to how exactly they interface with the blockchain layer. Compare with the Gnosis and Augur whitepapers for an example of what this typically looks like:
- https://github.com/gnosis/research/blob/master/gnosis-whitepaper.pdf
- https://augur.net/whitepaper.pdf
That makes me a tiny bit worried about whether or not they're "truly" decentralized and trustless. But I'm not an expert in the area, so this could be just FUD on my part. I'd be happy to be proven wrong here.
I use Polymarket, and looked into their tech a bit (I'm the CTO of a blockchain company). Short answer is your funds are safe (not in their control), but there is still a bit of centralization for now. They are built on Matic, which is a "layer 2 scaling" solution for ETH, meaning it's one of the numerous companies that use some cool cryptography to drastically reduce tx cost and increase speed. Polymarket also took the path of essentially putting a wallet in your browser based on your email. They don't have access to the private keys though. No one does except you actually. They only went this route so they could create a cleaner, more "familiar" experience of using email and password. (You can read more about all this on their FAQ: https://polymarket.com/faq)
The main point of centralization right now is that they have a "Markets Integrity Committee" -- which is just the people at the company -- that ultimately decides the outcome of the markets. IMHO, this is fine for now. You can try to decentralize this part later. They're just trying to get a working product going, and I think that's the right way to go at these early stages. Their experience is much cleaner/faster/cheaper than Augur. They can add everything else once they nail that part.
Why is it important to decentralize the outcome-deciding process? As far as I know, most interesting predictions are of the type you can't automate the verdict on (with the stock market as a notable exception, but the market for stock predictions is the stock market itself), so it's either the company, the traders, or an independent committee that would decide the outcome. I see the case for an independent committee, but letting the traders decide what the outcome was sounds completely bonkers.
The idea, as far as I understand it, is that if bets that one outcome "will" happen suddenly skyrocket, that implies the bet has suddenly become a sure thing, and presumably the question is then considered decided.
So no individual person declares an outcome, it's decided by the presumably correct answer suddenly becoming a super-bright Schelling point. This sounds far from foolproof to me just writing this out, though...
Metaculus uses this mechanism on some questions, and I've seen instances where people say "well my true estimate is 10%, but if I put in 1% then I push the market toward automatic resolution, and if enough people do that it's free money." Example here: https://www.metaculus.com/questions/3681/will-it-turn-out-that-covid-19-originated-inside-a-research-lab-in-hubei/
It's important because otherwise the system is prone to fraud, especially in a fully permissionless prediction market. The central committee could, for example, place a bunch of anonymous bets on X happening, and then simply declare that X happened to take other people's money.
But remember, "decentralize" doesn't mean "automate". What you really want is a trustworthy way to decide the outcome that doesn't give too much power to any one entity. So one way to do this is to have a group of, say, 10 random individuals all secretly cast a ballot for what the "correct" outcome is. If your implementation is sound (i.e. you ensure the individuals don't know who the others are, and they have something to lose), then you can get them to vote on the outcome where they win if they vote with the plurality and lose if they vote against. This is known as a "Schelling" game, because given no other information and no communication, the natural point for people to land on when asked "who won X election" is to just vote the truth.
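To make the incentive structure concrete, here's a toy sketch of such a Schelling game in Python. The function name and payout rule are my own illustration, not Augur's actual mechanism:

```python
from collections import Counter

def settle_schelling_game(votes, stakes):
    """Pay voters who match the plurality outcome; slash everyone else.

    votes:  dict voter -> reported outcome (e.g. "YES"/"NO")
    stakes: dict voter -> amount that voter has put at risk
    Returns a dict voter -> payout.
    """
    plurality, _ = Counter(votes.values()).most_common(1)[0]
    winners = [v for v in votes if votes[v] == plurality]
    losers_pot = sum(stakes[v] for v in votes if votes[v] != plurality)
    total_winner_stake = sum(stakes[v] for v in winners)
    payouts = {}
    for v in votes:
        if votes[v] == plurality:
            # Winners recover their stake plus a pro-rata share of the slashed pot.
            payouts[v] = stakes[v] + losers_pot * stakes[v] / total_winner_stake
        else:
            payouts[v] = 0.0  # voting against the plurality forfeits the stake
    return payouts
```

With ten secret voters each staking $10, a lone dissenter loses their $10 and the nine truth-tellers split it, so reporting anything but the expected consensus is a losing bet.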
I believe Augur does something like this. Polymarket may eventually move to it too. But it's slower and significantly more complicated to build, so I think they just didn't do it for V1.
It looks like both Polymarket and Omen run on the Gnosis protocol.
https://blog.gnosis.pm/omen-and-the-next-generation-of-prediction-markets-2e7a2dd604e
https://twitter.com/gnosisPM/status/1309451163041923073
Has there been a work-up done yet about the regulatory hurdles to robust prediction markets and a road map to their navigation? I am feeling increasingly optimistic about the potential social benefits and would be curious to learn more about how to make the dream a reality
Also curious whether there are identified biases with the internet fans. They seem more likely than the average expert to pick up trends like Gamestonk (or even the early rise of Pres. Trump's trajectory), but I would bet there would be at least commensurate shortfalls in an internet-heavy-predictor market
Robin Hanson would probably know this, or at least know who would. From what I remember, there's basically no interest in prediction markets internal to a company or organization. Public prediction markets are illegal and there's no political coalition willing to fight for them. Building a political coalition to legalize prediction markets seems like the big 'next step' for making them a reality in the sense I expect you meant.
In the sense of even ever really trying them at all, you could build a company and use a prediction market internally. Sadly, that too seems exceedingly hard to do.
It's hard but much easier from a regulatory perspective if all you're doing is rewarding your employees with cash for predicting things about their work. I'm working in this space. Hard but not impossible. Problems are mostly political when it turns out people bet against their boss's success. Pseudonymity helps.
I'd like to know why it's still illegal since the US federal ban on sports gambling was overturned. It's a state-level decision at that point and most states have either legalized it or have legislation pending.
There has been a lot of work; search a law journal database. For example: https://openscholarship.wustl.edu/law_lawreview/vol97/iss2/10/
Facebook just released a prediction market app called Forecast. Uses fake money, but has pretty decent execution on the UI/UX.
https://forecastapp.net/
Link doesn't work, did you mean this: https://www.forecast.app
It's not forecast.app. https://forecastapp.net/ weirdly works on my phone browser, but not on my desktop browser.
Link works fine for me on desktop.
Hmm. An initial poke at the site's code looks like it might only work on iOS and Mac, or on Safari browser?
I'm starting to suspect that this link only works in the US or something, as I've seen it shared a couple of times and it's always just been Times New Roman 'Sorry, this content is not available'.
Am I understanding this right that you can't use this from a computer, only from a cellphone? Surely they can't really be that obnoxious, can they?
It's Facebook. However obnoxious you think they are, they're worse.
It seems to be about location. I tried with a VPN from Australia, Denmark, Argentina, and failed. But from the US it worked both on desktop and mobile.
I wonder how much of the underperformance of prediction markets is due to the fact that a) it still feels a bit like gambling, which isn't an easy mental model to get over, b) there just isn't enough liquidity, as it doesn't seem embedded in regular analysis life, and c) there still isn't a Robinhood equivalent that makes this just... easy. The first two require behaviour changes, where we all act like Tetlock's forecasters, with the third enabling the first two.
At least for me the Schelling point I'm stuck at is to just not use it at all.
All that said it seems like the adoption would be like a step function. Once it's visible enough and liquid enough then the answers that come from the markets become useful, which drives a positive feedback loop of even more adoption and its integration into 538 and Vox.
I don't think something feeling like gambling explains it being slow to catch on; real gambling remains popular, as do day trading, loot boxes, etc. Realistically this should be a point in favor of it getting popular.
I think of the lack of liquidity as a regulatory problem. If you could make real money (as opposed to Fake Internet Points or the small amounts of money allowed on PredictIt), it would be worth people's time to do this as a full-time job, the same way some people do finance/investing as a full-time job. I agree there's a role for making things more liquid as part of making the case to the government to deregulate it, which is one reason I'm trying to signal-boost these so hard.
I find PredictIt really, really easy to use (Metaculus not so much). I can't remember if there was a learning curve. I'm curious if you feel like it doesn't make sense to you and you wouldn't intuitively know how to use it. I've also never tried Robinhood, so you might have to clue me in if they have something special beyond just an intuitive UI.
Well, gambling here in the UK is around a £15B business, grows at around 5-6%, and is legal. Yet it's still sporadic and event-based except for a few who are more enthused about doing it. You still find very, very few people doing it as a profession.
Re usage, I do find it easy. But it's not as directly "fun" as RH is. Feels more like using my Schwab to trade, or Bloomberg when I used to run a fund. RH is highly intuitive but it's not just that - it's more that it's purpose built to make it fun and not seem like work.
All that said, I'd love a world where they existed, because I'd know where consensus opinion was on key topics. As it is, I'm still leaning towards this being a hard social problem (see the aforementioned UK sports point). People have to have incentives beyond money (though that's needed) for them to spend time and money to do it. I'd say, though, that considering where I'm typing this, we're probably those people.
I actually am pretty confused why nobody in the UK (Future of Humanity Institute?) has set up a good real-money prediction market there.
The big shops allows non-sports bets for entertainment + politics. As here - https://www.betfair.com/sport/special-bets and https://sports.williamhill.com/betting/en-gb/politics
Would def be intrigued to see someone try. FHI is a tad too academic to take this mass-market, would be my guess. You really need someone who can bring more people in (like you?). Ideally 538 starts one and as a bonus we get to replace CNN's needle.
I'm trying in a different regulatory environment, it's one of those nobody-wants-to-move-first-in-case-they-lose-face things. The gambling markets where I am are very insular.
I think you're probably looking at the spread-betting firms to do this: There are 100+ but almost all of them only do financial markets; a few do some sports as well.
Sporting Index does some political spread betting, but not a broader prediction market.
I suspect that part of it is the regulator (FCA) requires very precise terms for spread contracts, and that's a lot of work for not enough customers.
The bookies (betfair is the big one for non-sports, though both William Hill and Ladbrokes do some business there) do some political business, but they had a bunch of issues with the US election (they didn't define their terms of when to settle very clearly and got a lot of complaints; some settled on the first network to call, others waited for the electoral college to meet). They'll still do it, because it's very profitable for them, but also their regulator (Gambling Commission) gives them a lot more discretion in settling contracts, unlike the FCA.
But that means fixed-odds betting only (though Betfair Exchange and the similar markets have tradable bets at any odds you choose, which is almost like a true market).
> gambling here in the UK [...] You still find very very few people doing it as a profession.
That’s because bookies have the right to kick out anyone they want, and they ruthlessly exercise that right on any account that seems to be winning too much. Professional gamblers have to use all sorts of tricks with multiple bank accounts and parasitic apps to avoid notice, and are at the mercy of a change in the bookie’s security.
The Hollywood Stock Exchange for predicting the box office revenue of upcoming or potential movies in return for Internet Points has been around for a couple of decades. Cantor Fitzgerald wanted to turn it into a real futures market using real money, but the government said no.
One issue was that much of what makes HSX useful is all the insider trading. People with jobs in Hollywood constantly talk about which movies in production looks like they will be a hit from the daily rushes and which ones are shaping up to be duds. Nobody wants the government to have to police this kind of useful and inevitable gossip to shore up the purity of a financial market that the world doesn't need.
Modern baseball statistical analysis emerged in the mid-1970s as an esoteric hobby. By the 1980s Bill James could make a living off publishing baseball statistics books. By the 1990s, there were thousands of avid modern analysts, most doing it for free, with perhaps a dozen (?) making a living off it. Around 2000, baseball teams started to hire the hobbyists, and today perhaps 100 or more are employed by baseball front offices.
Something similar could happen with world event forecasters, but it would probably take a popular contest to emerge for interest to centralize around, perhaps a TV show with teams competing to update their predictions over the course of a year.
Baseball is on television, but contests for forecasting world events are not. So because there are a huge number of baseball fans, there are some high end baseball fans who are into high brow statistical analysis. Because forecasting isn't on TV, there aren't many forecasting fans.
Dream up a good format for a popular TV game show for expert forecasters, and many of the other things you want for prediction formats would follow.
That's exactly the issue. People don't just trade because of financial incentive given. It has to become part of your daily life. A world where you use prediction markets inside companies to forecast production and within govt to predict events might seem great theoretically, but there's no clear incentive ladder to make it happen.
Maybe we need Fantasy Forecasting games online, the way football fans have Fantasy Football.
The time span question is a tough one. NFL fans love fantasy football since they find out every week whether they won or lost. But historical events play out over longer time periods.
Maybe Covid forecasting would be enough of a rollercoaster ride to interest forecasters: You could predict the number of hospitalized cases for each of the 50 states for the next 52 weeks and win points each week, and update your forecasts.
I'd watch it, or at least have it on while I do other things. But I'm going to be part of a much smaller sample here. All that said, the internet did do a better job of Covid forecasting, though most did it with basic extrapolation rather than through any hidden local information, as Weyl or Scott would say.
At times I feel we overestimate the pull of gamification or financial incentives to drive behaviour en masse. They're great for marginal improvements, but for larger changes there needs to be a genuine want. One reason I'm in favour of more expert simulations, which prediction markets could perhaps augment (https://www.strangeloopcanon.com/p/simulating-understanding).
The lack of a Robinhood-like app seems like a benefit. The people who can be deterred by something like a complicated UI might be less likely to put in the research time necessary to be anything more than dumb money. Unless you're arguing they need more dumb money to act as bait for the smart money?
Actually dumb money (or at the very least, "inert" money betting for boring outcomes) is in fact probably needed to jump-start prediction markets. The Biden vs Trump betting market was very liquid and in fact at least partially fueled by dumb money on both sides (myself included, never again... even though I won).
I think that gambling (for most gamblers) is probably better modeled either as consumption (something people do for fun) or as irrational.
If it's true, it means that as long as most or lots of people come to prediction markets treating them like a casino, the amount of noise (bets made for fun or without understanding of the real odds) in the system would prevent the predictions from being any good for actual prediction.
In theory "smart money" is supposed to bet against the gamblers and profit, but we can see it's not necessarily the case even in the stock market. See the whole GameStop conundrum.
So, to paraphrase Ostrom's law:
A market mechanism that doesn't work in practice shouldn't work in theory.
Or at least, we need to understand why it doesn't currently work in practice before we try to scale it up and expect it to solve still-unclear problems.
Retail trading in the equity markets is around 25%. The share of gamblers who are in it to win is c. 50%, per the 2019 report I think the UK did on the topic.
My point remains that people underestimate how liquid the financial markets actually are. For prediction markets to work well they'll need comparable liquidity, which nothing else has had. Despite rabid fandom and worldwide love, sports isn't there yet. I really doubt predicting world events will become our next hobby.
I think, rather, that the best use of the tool is likely to be expert calibration, something someone like 538 might do to calibrate themselves better. For other world events, my bet would be on improving our understanding through actual event modeling/simulations (wrote about it here: https://www.strangeloopcanon.com/p/simulating-understanding) and perhaps later using expert and layman inputs to better calibrate them.
Sports is there, I would say. There are a lot of large syndicates using algorithmic trading for sports betting so I think the market is pretty efficient.
The really interesting historical events are the ones that don't get anticipated much ahead of time, as I explained in my review of Philip Tetlock's book "Superforecasting."
https://www.takimag.com/article/forecasting_a_million_muslim_mob_steve_sailer/
If one could make a lot of money wagering on Covid deaths or on Covid mutations, that could either be an invitation to a bad (evil) actor or maybe a Netflix series.
You can already do that with the stock market. Just short Carnival. Forget coronavirus, just short Carnival and bomb a cruise ship.
Since nobody is doing that, I think these sorts of things are less likely to happen than we imagine.
Are you sure? Somebody shorted Target and tried bombing it. https://www.bloomberg.com/opinion/articles/2017-02-17/volatility-trades-and-explosive-shorts
Okay, maybe 'mostly nobody' is doing that (or getting caught)!
Also the plot of Casino Royale.
Don't do that, I'm long on Carnival
This would be an explicit violation of Law 5 of Matt Levine's Laws of Insider Trading.
Another thing that's great about Metaculus is that it's very open and easy to get into. There is a tutorial and quiz where you can check how well-calibrated you are, and you can start answering questions and commenting, or even open your own questions, quite fast!
I wonder what kind of analysis has been done regarding which markets are better predictors. Metaculus has a track record page:
https://www.metaculus.com/questions/track-record/ (unsure if links are allowed, but let me try)
Do the other contenders have something similar, and have they been compared?
By the way, Reddit has tried to move into this space as well, with the subreddit r/Predictor, similar to Metaculus.
Typo thread:
There seems to be a spurious link pasted in the last sentence of the Augur section.
Typos are a bit more expensive when people read articles through immutable old email.
I'm both interested in and deeply confused by prediction markets as a concept. I'd love to see a tutorial about how these markets work mechanically, so I can feel confident placing money in them.
Examples of questions I have include: What am I actually buying when I buy a share? I believe Scott at one point said that a share pays out at some point if the prediction it corresponds to comes true, but where does that money come from? When a market first opens, who are you buying "fresh" stock from and how do they decide what to charge? Also, how can they run markets for complex questions, as opposed to just "yes/no" stuff?
These are the sorts of questions I've spent the last few years understanding the answers to for the "real" stock market, where there's both much more information published and much more incentive to invest the time, and it's still very hard to understand. All the "you should care about prediction markets" stuff that has been shared with me so far is either very abstract ("they're just great, for reasons, trust me") or else incredibly in the weeds.
I've used PredictIt and am familiar with that system, so I'll answer based on that.
Basically, every market gets created with some rule specifying when the market ends. You can then attempt to purchase Yes/No stock. When you offer to buy a Yes at a price of $0.81, your demand can be met either by someone directly selling you a Yes for $0.81 or by someone offering to buy a No for $0.19. If the latter happens, there is now $1 more in the market, and one more No and one more Yes share than there was before. Similar things occur when selling Yes/No stocks. When the market ends, it resolves to either Yes or No and the winners get paid $1 per share that they hold.
PredictIt also has fees, but we can ignore those for a basic understanding of the system.
You just have to notice that the exchange can implicitly convert every No ask at $X to a Yes bid at $(1-X), and every No bid at $X to a Yes ask at $(1-X), and then match the orders as they would on any other bid/ask exchange (two queues sorted by price; if an incoming order would make the prices cross, there is a match and both orders are processed), all with respect to the Yes share.
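That mirroring can be shown with a toy order book in Python (a sketch of the general idea, not PredictIt's actual matching engine; prices only, quantities omitted):

```python
def normalize_to_yes_book(yes_bids, yes_asks, no_bids, no_asks):
    """Fold No-side orders into a single Yes book via p_no = 1 - p_yes.

    Each argument is a list of prices in dollars. Returns (bids, asks)
    for the unified Yes book: bids sorted high-to-low, asks low-to-high.
    """
    # A No ask at $X is fillable by the same trade as a Yes bid at $(1-X):
    # the exchange mints a Yes/No pair for $1 and gives each side its half.
    bids = yes_bids + [round(1 - p, 2) for p in no_asks]
    # Symmetrically, a No bid at $X acts like a Yes ask at $(1-X).
    asks = yes_asks + [round(1 - p, 2) for p in no_bids]
    return sorted(bids, reverse=True), sorted(asks)

def match(bids, asks):
    """Cross orders while the best bid meets or beats the best ask."""
    trades = []
    while bids and asks and bids[0] >= asks[0]:
        trades.append((bids.pop(0), asks.pop(0)))
    return trades
```

So an offer to buy Yes at $0.81 and an offer to buy No at $0.19 land at the same price in the unified book and cross, creating one more dollar, one more Yes, and one more No in the market.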
Let's say someone opened a yes-no prediction market on Augur (say, for deciding if there will be rain tomorrow), but there are no trades yet. Then anyone can take $1 and deposit it into the market to get 2 shares back: 1 "yes" share and 1 "no" share. When the market *resolves* on the expiration date of the market, then either the "yes" share or the "no" share (depending on which outcome came true) can be traded in to get that $1 back.
Now, back to the beginning: you just converted $1 into two shares. But, you actually think that the market will resolve with "no", so it's a good idea to sell the "yes" share, because you think it will be worthless at the end. Of course, you try to get as high a price for it as possible. If someone thinks that the "yes" outcome is at least 40% likely, they should be willing to buy the share for $0.40. That's because the expected value is 40%*$1=$0.40 (the probability they assign to the "yes" outcome multiplied by the reward for the case where "yes" is the real outcome). So you see, after a lot of trading happens, the price of the "yes" share gives you directly the probability that the market assigns to the "yes" outcome.
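The whole lifecycle — deposit $1 to mint a complete set, sell the side you don't believe in, redeem winning shares at resolution — can be sketched in a few lines of Python (an illustration of the general mechanism, not Augur's contract code):

```python
class BinaryMarket:
    """Toy complete-set market: $1 mints one YES share plus one NO share."""

    def __init__(self):
        self.escrow = 0.0   # dollars locked in the market
        self.holdings = {}  # (trader, side) -> share count

    def mint(self, trader, dollars):
        """Deposit $d, receive d YES shares and d NO shares."""
        self.escrow += dollars
        for side in ("YES", "NO"):
            self.holdings[trader, side] = self.holdings.get((trader, side), 0) + dollars

    def transfer(self, seller, buyer, side, qty):
        """Move shares between traders (payment happens off to the side)."""
        self.holdings[seller, side] -= qty
        self.holdings[buyer, side] = self.holdings.get((buyer, side), 0) + qty

    def resolve(self, outcome, trader):
        """Redeem the trader's winning shares for $1 each out of escrow."""
        qty = self.holdings.pop((trader, outcome), 0)
        self.escrow -= qty
        return qty * 1.0
```

If Alice mints a set for $1 and sells her "yes" share to Bob for $0.40 (his implied probability), she has effectively paid $0.60 for a "no" share — and if the market resolves "no", she redeems it for $1.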
One problem that I see with the reasoning here is that there is still no evidence that the vaccines interrupt transmission of this virus. I was hoping that such evidence would have been published by now, and I would have thought that a vaccine that is 95% effective at preventing symptomatic disease would do so, so I am becoming worried. If this holds up, it would mean that some significant percentage of vaccinated patients can carry and transmit the virus. Your description seems to indicate that vaccinated patients cannot be infected and transmit the virus. Is there an indication of what directions these models would go if vaccinated patients can transmit the virus?
Hi Michael Finfer!
I personally am curious about your priors here. Based on 3-4 interviews with vaccine experts, they all agree that while it's TECHNICALLY possible for a vaccine to allow transmission, it would, at worst, be the same transmission rate as an asymptomatic carrier. Most experts say the odds of an asymptomatic carrier transmitting are 85% lower than for a symptomatic carrier.
So, if the vaccine is 95% effective, but you are worried that carriers could still transmit the virus, that's WORST CASE SCENARIO 95% * 85% = 80.75% effectiveness.
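For concreteness, that worst case is just the two quoted factors multiplied (the numbers here are the ones stated above, not independent estimates):

```python
vaccine_efficacy = 0.95        # quoted efficacy against symptomatic disease
transmission_reduction = 0.85  # quoted transmission reduction, asymptomatic vs. symptomatic

# Worst case: every vaccinated infection transmits like an asymptomatic case.
worst_case_effectiveness = vaccine_efficacy * transmission_reduction  # 0.8075, i.e. ~80.75%
```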
So, while this situation would be bad, I'm curious as to why this point is worrisome. Are you worried about it only being 80% effective, or are you worried about a different factor that I'm not considering?
I am worried that vaccinated patients may be a hazard to non-vaccinated patients. That is one of the reasons why we are being told to continue being cautious post-vaccination. If vaccinated patients can carry and transmit the virus, even at low rates, the current generation of vaccines may not be able to end the pandemic.
My big question is why is it taking so long to get the evidence? It should be fairly easy to show that these vaccines interrupt transmission of the virus with tens of thousands of vaccinated patients in the clinical trials. That’s why I’m worried. I was expecting a publication by now.
All is not lost though. There is an intranasal vaccine in development that clearly interrupts transmission of the virus in animals. If that translates to humans, then ending the pandemic might require revaccinating everyone, which is obviously not logistically desirable.
It may be that these parenteral vaccines don’t elicit robust mucosal immunity. We just don’t know yet.
I hope that gives you an idea of my line of thinking.
In rationalist and left-leaning circles, the "concern" of vaccinated spread is extremely common. Maybe I am sensitive to it, but it has popped up frequently for me on all platforms.
I would like to formally ask you, and others reading this to stop bringing up the argument of vaccinated spread, as I believe it is an informational hazard. This would be true if spreading the knowledge hurt more people than it helped.
If people distrust the vaccine's ability to stop the coronavirus, that will cause fewer people to get it and fewer people to "go back to normal" once vaccinated, due to fear, causing additional secondary effects of depression, a slow economy, etc.
Most epidemiologists say there is a low chance of some low level of spread of the virus.
"Ending the pandemic" might not mean eradication of the virus. But at some point, like in the united states pre-February, there becomes no reason to social distance or wear masks. Those that are at-risk will be vaccinated and herd immunity will do the rest.
I think a reason that it's taking so long to get the evidence is a classic example of Russell's teapot. If there were a vaccinated person that caused an outbreak, it would be easy to prove that it DOES spread. But it's almost impossible to prove that a group of people does NOT spread the virus. Even the best countries usually cannot pinpoint where outbreaks occur; that's why "community spread" accounts for such a large proportion of listed causes. However, I believe Pfizer is working on a study, and no doubt it will show that virus spread is much lower, with large margin-of-error bars. Would that help your worry?
It would, but I’d like to reserve judgment until I see the paper.
It actually doesn't seem that easy to determine if the vaccines stop spread or not. How would you know, except through challenge trials (difficulties there are well-litigated) or animal trials (obviously limited applicability)?
In order to prove that vaccinated people can spread covid, you'd need to find someone who caught it, was around vaccinated people, and then prove that no one else they interacted with could have given it to them. Even statistical proof (as opposed to any single confirmed case) would be pretty difficult to separate from all the noise. Unless the chance of unvaccinated spread is significantly higher than the chance of inadvertent spread through any of the myriad unavoidable sources, how would you ever know?
For comparison, we actually don't know if it's possible to spread HIV through oral sex! Scientists presume it's possible, but no one has ever gotten HIV while being able to credibly assert that they never engaged in any higher risk behavior. The chance of oral spread is too low, and the number of people engaging in oral sex with the HIV+ but not in any other risky behavior is too low.
Personally, I wouldn't have expected evidence to be published by now about that. It is easy to track a vaccinated and control group for infections. But how would you track how many people they infect? Also, the incentives are much lower.
The natural experiment is soon to come. Israel has about half of its population vaccinated. If all works well, soon we should start seeing a very steep decline in deaths, though not so much in cases (even if your fears don't pan out). But if in a month or two (ballparking here) the cases don't start going down dramatically there, that's when I would start to worry.
I look forward to that data. I would love for my concerns to be unfounded.
I think the problem with the Israel natural experiment is that although we can see how many cases there are per day, and use that to infer some sort of R, we know from observing other places that R has often changed in ways that are hard to understand. (Why did it get higher in the Dakotas starting in August/September, and stay high for three or four months, and then suddenly fall, just as it was rising everywhere else in the country?) If R was generally fairly stable, then a sudden change in R around vaccination would indicate that the vaccine was preventing transmission. And if R stayed constant but the total number of new cases fell by half, then we would know that the vaccine was preventing cases but not preventing transmission.
The expectation is that new cases and hospitalizations, eventually deaths, will decline as a share of all cases and hospitalizations first in Israel among those over 60 and in towns where most were vaccinated. Back on January 21, I blogged that I'd be worried if we didn't have evidence of that on February 1.
Today, February 1, the first solid evidence on a national scale emerged that the groups most vaccinated in December were doing better:
https://www.unz.com/isteve/on-schedule-vaccines-in-israel-are-starting-to-work/
"Before 2023, will the United States CDC recommend that those who have already been vaccinated for SARS-CoV-2 (COVID-19) be vaccinated again due to a mutation in the virus?"
I would put the likely answer to the first part of that question at nearly 100%, although not for the specific reason in the latter clause of the question.
Infection and vaccination do not seem to confer permanent immunity, and SARS-CoV-2 is almost certain to become endemic like its close relatives among "the common cold" viruses and like influenza. If we're LUCKY, vaccination for COVID-19 will become an annual event, as with the flu vaccine. If we're less lucky, every six months may be the norm.
It may be necessary for already vaccinated patients to receive a booster specific to the B1.351 South Africa variant. This is being actively studied, and Moderna already has a clinical trial in progress.
I don't think the evidence actually supports the idea that COVID immunity is measurably waning yet. (Permanent immunity is of course a premature claim.)
First of all, we wouldn't normally expect a flu-like annual reactivation with COVID. The flu has that because the flu is specifically optimized for large-scale mutations that break immunity (basically by shuffling large chunks of the genome around). COVID doesn't have that design, so we'd a priori expect a more typical immunity duration.
Second, the studies which have been broadly spread about declining antibody levels are merely confirming the expected. Antibody levels for any disease drop drastically after infection has been over for a while. Our long term immunity is not based on continuous antibody levels (which would be rather inefficient and increase our risk of autoimmune issues) but rather based on Memory B cells which store the design for the antibody. If those cells are triggered in the future, your body will rapidly mass-produce the antibodies again.
And we have evidence that the Memory B cells are doing MORE than just fine; they're actually optimizing the stored antibodies for a more robust antibody distribution than people who have recently had the disease had. See https://blogs.sciencemag.org/pipeline/archives/2021/01/19/memory-b-cells-infection-and-vaccination for a fairly accessible summary of this.
"But,https://www.metaculus.com/questions/5908/confirmed-us-covid-deaths-by-2022/ like the real Messiah, it’s taking its sweet time."
Hypermind is pretty good. Like Metaculus, you trade in fake internet money. But Hypermind has some institutions/people who pay to put markets up, and that money is split between the people who made the most fake money on that particular market. Hypermind then will send you an Amazon gift card once you get to 15 Euros. So there's an extra incentive to spend your fake money wisely.
Philip Tetlock who wrote the Superforecasting book is also behind https://www.gjopen.com which also uses fake internet points. The questions are more varied and there's more user interaction.
He also got a group of people that were the best "superforecasters" and their predictions are now at https://goodjudgment.io/superforecasts/. I've been following it pretty closely and they've been tracking the question about vaccines better than Metaculus.
One of Tetlock's points is that people get better at forecasting from public feedback of who was right in the past. I'd like to be able to filter prediction market forecasts by, say, who performed well on similar questions in the past.
In the annual forecasting contest that I've run for the past 6 years, participants love this horse-race aspect. For the first year, I stupidly gave everyone code names so that they could make honest forecasts (i.e., without concern for what their guesses might be signaling), and that just annoyed people. The group-average forecast of superforecasters (people who have competed more than once and always beaten the average) does quite well in Brier score terms. https://braff.co/advice/f/announcing-the-2021-narcissist-forecasting-contest
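The Brier score mentioned above is just the mean squared error between probability forecasts and binary outcomes, with lower being better. A minimal sketch (the forecasts and outcomes are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster vs. a pure hedger, on the same events:
outcomes = [1, 0, 1, 1, 0]
sharp = [0.9, 0.2, 0.8, 0.7, 0.1]
hedger = [0.5] * 5

print(brier_score(sharp, outcomes))   # 0.038
print(brier_score(hedger, outcomes))  # 0.25
```

This is why "always beating the average" over multiple contests is informative: a low Brier score requires both calibration and confidence, not just one.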
FYI the word for a Metaculus user is a 'Metaculite'. See e.g. https://www.metaculus.com/questions/4786/what-will-be-the-minimum-credence-metaculites-will-give-trumps-re-election-chances-in-2020/
My intuitive concern about this concept is that A) useful insights will tend to get Dunning-Krugered into oblivion and B) people's betting choices are as much a matter of personal risk taking tendencies and disposable income as any confidence about what they're betting on.
Insofar as large parts of the financial industry are essentially prediction markets, I feel like their behavior generally vindicates these concerns.
Am I missing something?
I don't understand what you mean by [A] – can you give a (hypothetical) example of that?
A perhaps under-mentioned part of [B] is that a long-term prediction market, i.e. one that persists for a 'long time', is one in which longer-term successful bettors can be both spotted and 'validated'. Yes, market participants can be wrong – but only to the extent that they still have something to lose. It's still a vast improvement in incentives over generic punditry!
By "Dunning-Krugered into oblivion", I mean that on any given question a large number of contributors will be people with low-information who believe they have high information for one reason or another, and that those people will swamp the predictive value of the market for a given question.
Two hypotheticals regarding the question of how many deaths there were going to be:
1) The extreme case of someone who genuinely believed that the pandemic was fake or planned, and was consequently going to go away after the election. They consider themselves high-information, but in fact are working from false premises. We know these people are often willing to stake money on their views -- see the markets on Trump's election in the two months following the election.
2) (My personal danger) The armchair epidemiologist: most often someone well-educated who views themselves as excellent at research, who has been following experts on Twitter, reading Wikipedia articles on relevant topics, and keeping up with whatever Nate Silver says about things. This person is effectively a noisy proxy for "expert consensus", but with enough confidence in their research to invest money in their conclusions.
Coming back to the market metaphor: low information investors join the market all the time, in a continuous stream of people deciding to get into investing. Some of them have shallow pockets so they're gone quickly and replaced quickly, but others have deep pockets and will continue to pollute the market with ill-informed bets. The depth of a person's pockets -- or willingness to burn money! -- is not correlated with their accuracy.
To your final point: I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense. Accurate bets are predicated on knowledge relevant to the prediction being made, and it's impossible to have genuine expertise in a huge range of fields. Consequently, we should expect that an individual either stays in their lane, and is successful, but only on the occasional issue that pings off their knowledge, or that an individual becomes overconfident in their predicting ability/intuition writ large and starts placing bets outside their knowledge area, leading to failures and a declining success rate.
>The depth of a person's pockets -- or willingness to burn money! -- is not correlated with their accuracy.
I think there might be some correlation there, although I don't know how significant we could expect it to be. That is, we might expect someone who does well to keep going more often than someone who doesn't, across both deep- and shallow-pocketed segments, and success will also tend to deepen shallow pockets.
>I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense.
It looks like such people do, in fact, exist; the famous "superforecasters", of course, but also some individuals on e.g. Metaculus that consistently do pretty well in various disparate areas.
Is there good evidence that there is such a thing as a person that is a "superforecaster" across a range of areas? Or is this like a "superspreader", where anyone can be it on one occasion, but not on another?
Seems like superforecasters are mainly people who are willing to spend a ton of time researching a large number of questions in depth. I'm sure there's an element of natural talent too. But if there were a decent amount of real money involved, I think subject area experts would be more likely to weigh in and a generalist strategy would be less viable. Professional forecasters would do best by specializing in one or a few areas where they become experts.
Tetlock's study was aimed at finding people who are good forecasters across a large number of short-term (one-year) world events. It's quite possible that somebody who specializes in forecasting a single subject matter (e.g., China-India border clashes) would do even better about their subject than Tetlock's generalist super-forecasters, but it's hard to get any kind of sample size over a short number of years about a single field in order to determine whether the single topic forecaster is truly prescient or just lucky.
In general, what Tetlock found about forecasting world events was the same thing that was discovered about analyzing baseball statistics in the last quarter of the 20th Century: there are a lot of smart guys out there who can beat the current crop of professional experts. Today, the professional experts at picking baseball players are now often guys who got their start as amateur sabermetricians. Perhaps in the future the CIA and the like will be staffed by former amateurs who showed their mettle in public contests.
These are all general arguments against _any_ kinds of (financial) markets working too.
But there's a lot of evidence that, even in practice, this isn't a systemic flaw despite there being lots of individual exceptions.
You're right that "The depth of a person's pockets – or willingness to burn money! – is not correlated with their accuracy." – but their long-term prediction market returns are very much correlated with their accuracy. It seems strictly better for inaccurate predictors to _pay_ for their wrong beliefs.
> I'm pretty sure it's fallacious to expect "longer-term successful bettors" in any broad sense.
That's a sensible 'outside view' but it seems to be wrong – see the 'superforecasters' research for one big family of examples.
One of the major attractive features of prediction markets is that participants have to pay for their wrong beliefs. So, if individuals _are_ experts 'in their lane', then they might be able to move some specific markets if they have info to contribute. And if they're wrong about something in 'another lane' (or even their 'own lane'), they'll lose their bet and thus be _less_ able to influence the ongoing market in other predictions.
Markets, and 'price systems' more generally, definitely aren't perfect – that seems to be what you're arguing against. But there's a LOT of evidence that they do in fact work remarkably well, _especially_ relative to alternatives.
A is only a problem if people who don't know much about an issue are systematically biased in one direction for some reason. There are cases where that's true, but many more cases where it's not.
Low-information (but not zero-information) guesses tend to be centered on the correct answer, with high variance. Averaging hundreds of thousands of them can often give a pretty good idea of the correct centerpoint.
Doesn't always work, but neither does asking someone who claims to be an expert.
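The averaging claim above is easy to check numerically: simulate many unbiased but very noisy guessers and compare their average to the truth. All numbers here are invented for illustration:

```python
import random

random.seed(0)
true_value = 120.0  # e.g. the true number of jellybeans in a jar

# Each "low-information" guesser is unbiased but very noisy:
# guess = truth + a large zero-mean error.
guesses = [true_value + random.gauss(0, 40) for _ in range(100_000)]

crowd_average = sum(guesses) / len(guesses)
print(round(crowd_average, 1))  # lands very close to 120,
                                # despite typical individual errors of +/- 40
```

The catch, as discussed below, is that this only works when the individual errors really are centered on the truth rather than systematically biased in one direction.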
I doubt that the group of people likely to participate in prediction markets is anything like a uniform sample of the population, so I'm not at all on board with asserting they're unlikely to be systematically biased in one direction on any given issue.
I buy that low-but-not-zero guesses are likely to be centered on the correct answer when the relevant information is only on one axis (e.g. value of change in the jar). But most questions we actually care about are massively multidimensional. Take predictions about COVID deaths: knowledge factors include underlying biology of the virus, underlying effectiveness of the vaccines, government effectiveness in a region, social factors impacting behavior in a region, and a dozen others, and the answer is a complex mix of those numbers. In those multivariate conditions, should I still expect low-information guesses to average to a correct answer? Has that been studied?
The other factor is that "low" and "zero" information tend to imply that knowledge is bounded below by zero, but it's not. Lots of people are making guesses and decisions based on factually false information. If the population's knowledge -- and, coupled to it, its accuracy -- has an upper bound but no lower bound, I wouldn't expect very good results.
>I doubt that the group of people likely to participate in prediction markets is anything like a uniform sample of the population
Fair enough, I was talking about the general concept of aggregate predictions. I agree that the current markets will be of limited usefulness until a much broader population is engaged (and, as Scott says, until real money is on the line).
That said, we shouldn't expect that self-selected population to be systematically biased about *every* question, and we can probably observe what topics they tend to be off about, and in which direction, to compensate.
>In those multivariate conditions, should I still expect low-information guesses to average to a correct answer?
I'm pretty sure the Central Limit Theorem directly implies that the questions being hugely multivariate will make the answers *more* accurate, not less. After all, even if your intuitions are way off on one of the factors, the impact will be moderated by your intuitions on all the other factors, some of which may be biased in the opposite direction.
>Lots of people are making guesses/decisions based on factually false information.
Yes, but absent systematic bias across the entire population, they are doing this *in both directions*. Which is where averaging comes in to save the day.
Again, systematic bias is certainly possible, but it shouldn't be assumed a priori; it doesn't happen without a reason, and that reason is often either not there or easy enough to notice and correct for.
> I'm pretty sure the Central Limit Theorem directly implies that the questions being hugely multivariate will make the answers *more* accurate, not less. After all, even if your intuitions are way off on one of the factors, the impact will be moderated by your intuitions on all the other factors, some of which may be biased in the opposite direction.
That works if you're talking about a linear composition of multiple dimensions. But if the interaction of the dimensions is non-linear, then there's every reason to suppose that an average of noisy estimates will be biased. Just as a simple example, say that X is some variable that people are noisily distributed about a mean on, such that their average is an unbiased estimator of the truth. Then if we asked people to estimate X^2 instead of estimating X, the average of their guesses would be biased *above* the true value of X^2. (That's because if one person underestimates X by the same amount that someone else overestimates X, then they will underestimate X^2 by *less* than the other person overestimates X^2.)
Sure, but doesn't that require them to think about X as affecting the outcome via x^2 when they make their estimate? Since they're just plugging these estimates into a vague sense in their heads, not into an actual equation where the ^2 term is visible.
I'd expect most people to *treat* most factors as linear when thinking about them, even if they're not actually.
The point is that it doesn't matter how people *think* of it - it matters how their errors are distributed. If errors are distributed linearly, then a straight average will be an unbiased estimator. If errors are distributed in some more complex way, then the straight average will usually be a biased estimator. In a higher dimensional problem, there are often more ways for errors to be distributed non-linearly.
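The X vs. X^2 example above can be checked numerically: guesses that are unbiased about X, once squared and averaged, systematically overshoot the true X^2, because E[X^2] = (E[X])^2 + Var(X) (Jensen's inequality). A sketch with made-up numbers:

```python
import random

random.seed(1)
true_x = 10.0

# Unbiased, symmetric guesses about the true X...
x_guesses = [true_x + random.gauss(0, 3) for _ in range(100_000)]

avg_x = sum(x_guesses) / len(x_guesses)
avg_x_sq = sum(g * g for g in x_guesses) / len(x_guesses)

print(avg_x)     # ~10: the linear estimate is unbiased
print(avg_x_sq)  # ~109, not 100: biased upward by roughly Var(X) = 9
```

So the crowd's average can be unbiased about each input yet biased about any nonlinear combination of them, which is exactly the concern for multidimensional questions.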
I wonder if a site could strike a deal with regulators allowing a higher volume of trades if they only paid out based on users' average performance over a lot of predictions, rather than on individual trades.
That would make it much harder to win or lose money on luck alone, and the site would be more clearly a skill-based competition.
I don't think that would work either. And, reasonably, even with your proposed change, it would probably still be too similar to 'gambling' or 'investing' for them to have any reason to grant an exception or turn a blind eye.
I think maybe the best way around the legal prohibition is 'through' – i.e. create a fully compliant and regulated market. Or create 'prediction securities' and offer them via existing financial markets.
Is there any legal obstacle to creating prediction securities? What does it actually take to like, sell futures linked to GDP or something instead of a financial asset?
Kalshi has solved the regulation problem after a 2 year legal battle! The Exchange is set to launch relatively soon. Read more about the regulatory win here: https://kalshi.com/news/kalshi-designation
Re vaccine hesitancy, I haven't seen anyone talk about this but if I had been infected with COVID, I could imagine being hesitant to take a vaccine for some combination of reasons. (Do I need it? Do I need it before others? Is it worth any even slim risk, given that I already have antibodies?)
If 25% of the US has already had COVID, would we expect a lot of the hesitancy to come from that camp? Or maybe not? I haven't seen this discussed or studied but I'm curious about it.
That's a good point. The best number I can find says about 9% of people have been infected, but I don't know how many of those people know it (or how many people who haven't think they have).
Right, I have no clue. (Youyang Gu's model is where I got the ~25% total infected in the US from: https://covid19-projections.com/)
In my job I'm eligible for the second tier, and also have a role signing up my organization's employees for both first and second tier distribution. I also happen to have gotten COVID a few months ago.
I turned down the vaccine because I am relatively young and healthy, the vaccine is in short supply with high demand in this area, and I already have antibodies.
I have to admit that I am also hedging against a low chance of a significant complication (side effects, etc.) from the vaccine. I'm not an anti-vaxxer, but mistakes have been known to happen with such things. This reason would not cause me to avoid it without the other conditions above.
Zvi Mowshowitz mostly agrees with Youyang Gu's Covid Machine Learning Project
~25% estimate, though he's recently emphasized he considers these lower bound figures:
https://thezvi.wordpress.com/2021/01/28/covid-1-28-muddling-through/
https://thezvi.wordpress.com/2021/01/21/covid-1-21-turning-the-corner/
I find the concern about this issue likely overblown in general. My null-hypothesis is that the people who want to wait and see are telling the truth, and that either 1) they're right to be hesitant because concerns will come to light in the next few months, or 2) the vaccine is fine and good and so they'll be convinced by the time it matters.
All the data I've seen seems to show that we have no shortage of "willing arms," so in order to be concerned I'd want, at the very least, some polling that actually breaks people down by e.g. *how long* they want to wait, before I get too worried that the "willing arm" curve will ever dip below the "available shots" curve.
Wanting to wait and see doesn't seem particularly irrational to me, so why assume that these people are irrational or recalcitrant without any evidence? Why assume the anti-vax movement will be a bigger impediment here than for e.g. chicken pox or measles, for which the movement is certainly too big, but not actually troublesome enough to warrant all this hand-wringing?
Anecdotally, I've noticed that a non-negligible fraction of people who have already had covid are actually among the most concerned about infection. I think it's some effect of the lingering uncertainty about reinfection, combined with a deep experiential knowledge of how unpleasant covid can be: their slight uncertainty about getting it again leaves them with just as much concern as the average person who doesn't have a sense of how unpleasant it is.
Does anyone know whether there's a UK equivalent of PredictIt?
Normal bookies, like Betfair -- the situation is really much better in the UK due to greater tolerance of gambling.
Right, thanks. Question: I'm guessing this means that it would be harder for me to outperform the market? I figure that if PredictIt is poorly calibrated, I could (if better calibrated) make predictable money off it, even if only in small quantities. Whereas I'm guessing the standards of calibration required to predictably outperform Betfair are a bit higher?
I haven't looked into it but my guess is yes. PredictIt is a really dumb-money market; there's even outright manipulation by political partisans.
On the other hand, the PredictIt fees are so high that it makes arbitrage much more difficult than it might seem. A lot of the time probabilities don't even sum to 1 -- but you'd lose money trying to arbitrage this. And you can only have so many outstanding contracts. It's obviously inefficient but in such a way that this can be hard to take advantage of.
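A rough sketch of why those mispricings can be unexploitable: buy one YES share of every outcome in a mutually exclusive market whose prices sum to less than $1, and see what's left after fees. The 10%-of-profits and 5%-withdrawal figures below are PredictIt's commonly cited fees, assumed here rather than verified:

```python
def arb_return(yes_prices, winner_idx, profit_fee=0.10, withdrawal_fee=0.05):
    """Net return from buying one YES share of every outcome.

    Exactly one outcome resolves YES and pays $1. The exchange (assumed
    fee schedule) takes profit_fee of the profit on that winning share,
    and withdrawal_fee of everything cashed out afterwards.
    """
    cost = sum(yes_prices)
    fee = profit_fee * (1.0 - yes_prices[winner_idx])  # fee on the winner's profit only
    cashed_out = (1.0 - fee) * (1.0 - withdrawal_fee)
    return cashed_out - cost

# Prices sum to 0.97 -- a naive 3-cent "arbitrage"...
prices = [0.40, 0.35, 0.22]
print([round(arb_return(prices, i), 4) for i in range(3)])
# ...but every entry is negative: fees turn it into a loss
# whichever outcome wins.
```

So a visible 3-cent gap can hide a guaranteed loss of 7 to 9 cents per dollar, which is why the inefficiency persists.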
Right, thanks again. I appreciate you taking the time to explain.
Outright manipulation by political partisans would never happen in the real stock market, of course. *cough* Perhaps on stocks whose names rhyme with BlameFrock.
But yeah, PredictIt seems like it's just obviously dumb a lot of the time.
You're doing the same thing I am where you keep thinking of it as "GameStock"!
Hah, you're right. BlameCop, I guess.
It depends. Betfair's market around the 2020 US presidential election was probably pretty off (unless you thought the outcome was significantly in the Biden-favoured half of the distribution, even though 538 says the opposite).
I think this post leans a little too hard on efficient markets. By the end of the post at times you seemed to be treating the Metaculus probabilities as though they are the actual probabilities.
Metaculus obviously has less "dumb money" than PredictIt. But it still has weird biases or cases where the market is obviously wrong, due to Silicon Valley biases, long-running questions where people drop out and don't update, overoptimism about certain technologies (like reinforcement learning), etc.
We're probably still in the world where more hedging language is merited when reporting the probability coming out of a prediction market. They don't really give "the probability" yet, I don't think, although a thick and liquid enough market might, for some things.
I think I'm treating it the same way I would treat some competent expert giving their probability - not incontrovertibly true, but by default worthy of respect until we have particular reason to challenge it. Every expert has their biases, but so do nonexperts and so do I, so taking an expert's probability as the best we can do seems fair to me.
It seems to me that your discussion of the market on fatalities with and without challenge trials is taking the market results rather more seriously than we should. If we were quite confident that there would be 50,000 more deaths without challenge trials than with, then obviously we should do the challenge trials. But what we have here is that the expectation of deaths without challenge trials, according to a particular set of estimators, is 50,000 greater than the expectation of deaths with challenge trials, according to some overlapping set of estimators. And these estimators are drawn from a biased pool, that has been primed by lots of amateurish discussion of challenge trials this year, without expressing much knowledge about past thinking about the epistemic relevance of challenge trials.
The prediction markets I looked at did terrible for the presidential primaries.
I made $3000 in election prediction markets, so no argument there.
Financial markets work well (to the extent they work well) because liquidity, speed of information transfer, and fast transaction clearance effectively prevent arbitrage. You need no-arbitrage conditions to even have a theory of pricing that remotely corresponds to fundamental value, and prediction markets are seemingly useless if the bets don't correspond to the bettors' actual beliefs about fundamental probability and are instead attempts to profit from arbitrage.
In financial markets, this happens because of market makers, institutions with enormous cash reserves or lines of credit ready to take the opposite side of any bet in order to profit purely off of juice. This is the same purpose a book serves in normal betting, except books also set the prices. This means the prices of bets are not even intended to reflect anything about the probability of an event happening, but rather the book's assessment of how much money will take each side of a bet at a particular price.
It's a subtle difference, but it can result in systematic deviation of betting lines from reality where some team can very consistently over or underperform their spreads for a long time because the betting public includes so much stupid money placing bets for bad reasons.
How do prediction markets avoid this? Seemingly, the point is to make optimal predictions, but every incentive from the perspective of the book is to make money off of systematic and predictable human irrationality instead.
I realize after reading this that I'm not making clear the actual difference between market makers and books, but market makers in financial markets are distinct from the exchanges. If you're trying to buy and sell on the NYSE, there are many market makers to choose from, so they're in competition to offer good prices, not just in terms of bid/ask spread, but they need to all be offering roughly the same actual prices. Books, on the other hand, are the exchanges, and there are so few of them that they can effectively just set their own prices.
I agree prediction markets are bad at this right now. That's what I meant in the first part, where I say that I'm still waiting for the perfect prediction market - one which is so lucrative and liquid that the same sorts of people who correct mispricing in financial markets will swoop in with enough money to exploit as many dumb bets as people can put up.
This post succeeded in making me more interested in actively participating in Metaculus. I've been winding down my participation in PredictIt (mediocre place to put capital even if you can beat the average), but previously had no interest in Internet points. If you're planning on keeping up a recurring review of interesting developments in the space, I think it could actually drive a significant amount of traffic in their direction.
Thanks, that's the plan!
Wouldn't any full-fledged prediction market have serious "insider trading" problems, at least for many predictions of interest? E.g. AOC, or anyone close to her, massively shorting herself just before dropping out? Typically, actors don't have their hand forced (like officials trying to avoid as many covid deaths as possible) but can choose from a number of options, right? Seems like a lot of infrastructure and regulation would be necessary to avoid such problems, likely more than for stock market insider trading.
This would suck for traders, but (directly) be good for people trying to use it to predict outcomes. I don't know about the indirect effect from eg traders refusing to trade because they don't know if they're betting against an insider.
Even if we wanted to prevent this, would it be harder than preventing CEOs from insider trading on their own stocks? It actually sounds like an easier problem, since CEOs have many legitimate reasons to own their stocks and AOC has no legitimate reason to own "AOC will drop out" shares.
> It actually sounds like an easier problem, since CEOs have many legitimate reasons to own their stocks and AOC has no legitimate reason to own "AOC will drop out" shares.
Sure, but isn't this sort of admitting that for prediction markets to work, they'll need a regulatory framework and enforcement approximately as good as the stock market?
Because that moves my predictions about when they will be useful from 'a year after they are legalized' to '50-100 years after they are legalized'. Regulatory frameworks like that don't spring up overnight, and enforcement is a nightmare.
Prediction markets like Hollywood Stock Exchange benefit from insider trading: the pool guy overhears his client the movie mogul boasting about how the latest script for "Joker 2" has a "Sixth Sense" level plot twist, so the pool guy bets his Internet Points on "Joker 2" doing even better at the box office than the market expected.
Since no money changes hands at Hollywood Stock Exchange, this is all innocent fun and useful for people who have a need to develop a sense of what movies will be hits or not.
But when money is involved, insider trading is a big concern.
For Europeans, I think a notable market is Betfair. Much lower fees than PredictIt (generally 5% on profits, but can be less with discounts) and very good liquidity on popular markets. And when the topic is emotionally charged, you can still make easy money: https://www.lesswrong.com/posts/y8RWtNBiksbSzm9j4/bet-on-biden
I understand the value of prediction markets, for where people are predicting their own behavior (who will I vote for, when will I get vaccinated), but I see a lot less value in predicting events that people have no real control over, like COVID deaths.
Sure, we can make a group guess, but why is that at all accurate? What info can I get from that prediction that I could not get from just reading the newspaper (the same info source that all the other punters have)?
Let's say we had a liquid, stable market for the Presidential election on election night. Do we really think it would have been stable, in favor of Biden, around 10pm EST? My guess is, everyone would have been watching the same numbers come in, and the bets would have swung wildly from Biden, to Trump, and back again by early Wednesday morning. All of that to say, if no one really knows anything (and certainly not anything different), why do these markets have any value?
Presumably the people involved are balancing out any inherent bias they bring to the question, and therefore create some kind of crowdsourced consensus on the real answer. Two people can read the same newspaper article about how many deaths to expect and come up with a different conclusion. 1,000,000 people reading that article and putting down a guess is going to give us a much more stable prediction of what the information we have really means.
Of course, you have to assume some kind of knowledge and ability to predict in the group involved, and also that they aren't from a single subset of the population that all bring the same bias to their guesses.
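The stabilizing effect of a large crowd follows from basic statistics: if individual guesses are unbiased and their errors are independent, the error of the average shrinks like 1/sqrt(n). A toy sketch (the spread figure is made up for illustration):

```python
# Standard error of a crowd's average guess, assuming each guesser is
# unbiased with independent noise of the same spread (a big assumption).
sigma = 100_000  # spread (std dev) of individual guesses; illustrative only

def stderr(n):
    # Expected error of the mean of n independent, unbiased guesses
    return sigma / n ** 0.5

print(stderr(100))        # 10000.0 -- a small crowd is still noisy
print(stderr(1_000_000))  # 100.0   -- a huge crowd averages the noise away
```

Of course, the real catch is the last clause above: if everyone brings the same bias, averaging a million guesses does nothing to remove it.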
I have to admit that I agree with you that Scott seems to feel that there's a lot more value in that than there really is. A single point of real data (like your presidential election scenario) will wildly swing an existing market based on speculation.
If prediction markets become big and liquid and profitable enough, you'll end up getting sources that aren't just people reading the newspaper. Or at least people who read all the newspapers.
In the election example, when the numbers started coming in and it looked like Trump was leading, what might happen is whatever hedge-fund-like entities were participating would see the chance for profit and stabilize the price swing. And they would conceivably have predicted a Biden victory based on the fact that early votes always swing Republican, and the later counted mail-in votes swing Democratic.
But also, a wild swing isn't necessarily a mark against prediction markets. As the results come in, there's more and more data to take into account. If, say, Trump had somehow won California, then the markets all swinging to his favor would be a feature, not a bug.
Also, before I end up trying to explain hypothetical evidence too much, is there a way to check how the actual prediction markets fared? PredictIt shows a bit of a swing at ~20c, Metaculus doesn't (it was actually remarkably stable, at 8% Trump even before the election), but I can't find very high resolution data so it's hard to say.
I strongly suspect Scott has a bigger readership than the entire Metaculus userbase, and that approximately 100% of existing Metaculus users are subscribed here. If this becomes a regular feature, I expect a large bump in predictions on the questions featured in ASX every Monday, and for them to move in the direction hinted at by Scott's commentary. I guess the screenshots in the post are enough information to track that.
For the reinfection question (currently 277 predictions), the fine print says:
"If coronavirus infection confers partial immunity to the new strain, such that getting the disease is less likely but still possible, this may still count so long as scientific evidence exists (for example in a published paper) that the protection is significantly less for the new strain than the old."
And the discussion in the comments suggests that this is to be interpreted as 'statistically significant', in which case I think it's basically certain that there will be at least one variant with a statistically significantly different reinfection rate that infects that many people globally. In fact, I wouldn't be surprised if it's already happened. The main reason I'm not a more confident 'yes' is that the wording isn't completely clear on what it means by 'significant'.
I also think that the modal scenario is that we start needing annual COVID vaccines, and everyone I know who works on COVID modelling seems to agree with this. So far, we've only seen a small number of notable new variants, but that's without any selection pressure on the virus from vaccines. I guess the main way that this doesn't become a thing is if somehow the dominant strain is mild enough that vaccination isn't worthwhile (or politics happens, I suppose).
I'm wondering why they think it will be annual, given that there are structural reasons for flu to evolve faster (it's segmented), and the rate of mutation depends on the number of people infected? Once the pandemic is over, it seems like new worrisome mutations should go down a lot.
You're right, there's no particular reason it would be annual. It seems very likely it will be endemic and regular vaccinations will be a thing, but no good reason for annual cycles.
I tried out Metaculus myself after the last time it was mentioned. While I really like it, there were two things about it that bothered me.
First, when making a prediction using a probability density, I would expect that setting the variance very high would minimize the impact that prediction has on my score, regardless of the outcome. Instead it seems to have the opposite effect. Can anyone explain why that would be?
Second, it seems like most of the questions are fairly long term. Long term questions seem less useful for calibrating. Is there a way to find shorter term questions so I can make more progress?
On the homepage if you select question status="open", you can set order by="soonest resolving".
In the FAQ, you can read the scoring rule. It's not that simple, your points also depend on other predictors.
Thanks for the pointer.
I may be missing something, but the FAQ only discusses how distributions work at a very high level. In fact, it seems to agree with my intuition, but not with what I actually see happen:
> Making your distribution wider or narrower reflects your confidence in the central value you've specified, and decides the stakes: a narrower distribution give more points if your central value is right, but more losses if it's very wrong.
To see the actual scoring rules you have to click the "For those who are interested and have a stomach for more mathematical detail, the technical details of the scoring are as follows (click to reveal)." and the "Here are the details…" for categorical/numerical scoring rules respectively.
Furthermore, to completely understand how it works it's useful to read through https://www.metaculus.com/help/scoring/ too.
However, these aren't that necessary, I did read through these when I started (3 years ago), but I really only use the general facts about the scoring when predicting.
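For intuition about the width question upthread: Metaculus's continuous scoring is based on a log score, i.e. the log of the density your distribution assigns to the actual outcome. A narrow distribution concentrates density, so it scores higher when centered near the outcome and much lower when it misses; but the normalization term means a very wide distribution assigns low density *everywhere*, which is why cranking the variance up doesn't minimize your exposure. A minimal sketch with a normal distribution (illustrative numbers, not Metaculus's exact formula):

```python
import math

def normal_log_density(x, mu, sigma):
    # Log of the normal pdf: a simple log score for a continuous prediction
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

outcome = 100.0
# A narrow prediction centered on the outcome scores higher...
narrow_right = normal_log_density(outcome, mu=100, sigma=5)
wide_right = normal_log_density(outcome, mu=100, sigma=50)
# ...but a narrow prediction far from the outcome scores much lower.
narrow_wrong = normal_log_density(outcome, mu=200, sigma=5)
wide_wrong = normal_log_density(outcome, mu=200, sigma=50)

assert narrow_right > wide_right   # width hurts you even when you're right
assert narrow_wrong < wide_wrong   # width protects you when you're wrong
```

So width is a genuine trade-off, not a free pass: going maximally wide lowers your score in every scenario where your central value was roughly right.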
It seems like the basic principle behind prediction markets is that most people want to make money and that smart money will overwhelm the noise from dumb money. This premise seems a bit shaky as of late? For a thinly-traded market, it seems like semi-organized dumb money from the 4chan/lizardman/McBoatface contingent is quite capable of launching a denial-of-service attack if they feel like it, basically by throwing money. Piling on in this way is apparently as much fun as gambling and as feasible as any Kickstarter. It seems like a good profit opportunity for people who can manage to catch the money they throw, but this doesn't prevent them from throwing, and so it's not so good for rational price discovery.
This is on top of skepticism about anything based on asking people questions without cross-examination. Scott has written about problems with surveys before, but for more skepticism, see @literalbanana's Survey Chicken essay: https://carcinisation.com/2020/12/11/survey-chicken/
If there is a denial-of-service attack on a prediction market then it will usually be obvious and we can just ignore the result. But perhaps there will be cases where the result is plausible?
Yes, specific attacks are possible. But they're news _because_ they're exceptional.
I wouldn't worry about the WSB thing setting a precedent until we see how it ends.
My expectation is still that a small handful who got in early and dump their stock first will make a huge profit, everyone else will lose their lunch, the losers will become disenchanted and re-frame the whole thing as a setup by those who started it and got rich, and it will be very difficult to get people to jump onboard such schemes in the future.
I could certainly be wrong, but we don't know what this example is teaching us yet.
If there were a good prediction market (large, liquid, no-limits, real money), one of two things would happen:
1. It would make good predictions
2. You could make basically limitless amounts of money off of it.
I agree PredictIt sucks. I made a few thousand dollars off it last year. I would have made more, except that there are limits to how much money you're allowed to put in. If there were no limits, I would have made a few hundred thousand dollars. I'm not exceptionally good at this, just not quite as dumb money as the people driving prices. I would be okay with either of the two options above.
I think the reason you can't make limitless amounts of money off of WSB being dumb is the way shorts work: you can't buy a share in "Gamestop is actually worth less than this", only in "The market will value Gamestop at less than this at time X". Prediction markets can sell the real deal: no matter how much dumb money says Trump will definitely win, when Trump loses your shares will go up to max price.
It would be interesting if (2) were true in the "Las Vegas exists" sense. The limit is probably how much people are willing to spend on entertainment? There would be competition to post entertaining questions and get publicity for questions that entertain the masses. Using "entertain" loosely.
You could make limitless amounts of money off it - if you were right every time, which seems unlikely.
Regulated as gambling, eh? What's the nearest Indian Reservation to Wall Street?
Also UK bookmakers are allowed to take bets on political questions. So you can check the odds for things like next POTUS (Kamala leading at 4) or PM (Sunak at 29/10) there.
(Bonus: senate almost certainly won't convict, Joe is more likely than not to make it to '24 and Puerto Rico probably won't be a state by '22) https://www.oddschecker.com/politics
ok nice intro to the prediction markets. more of this please. a little less arrogance @ "I don't think the average Metaculan truly understands the difficulty...", as well as the contradiction with "...this is a pretty impressive example of decentralized expertise weighing in..."
The FDA is already going to speed up clinical trials for COVID booster shots against new variants:
"Peter Marks, the head of the Food and Drug Administration division that oversees vaccines, said Friday that the agency will do what it can to speed the process. It won’t require big clinical trials, for instance. Rather than studies of tens of thousands of people, the agency will mandate much smaller studies of a few hundred. The goal would be to ensure that the vaccines produce the desired immune response and to see whether the products cover just the new variants or the original virus as well as the new variation, he said.
“We would intend to be pretty nimble with this . . . so that we can get these variants covered as quickly as possible,” Marks said on an American Medical Association webinar."
https://www.washingtonpost.com/health/covid-mutations-herd-immunity/2021/01/30/0741722e-627c-11eb-9430-e7c77b5b0297_story.html
i'm also aware of prediqt : https://prediqt.com/ , which uses the EOS blockchain, and foretell : https://www.cset-foretell.com/ , run by georgetown university. i don't have much personal experience with either, but my dream for metaculus is for it to eventually aggregate other prediction markets and forecasts (like those from fivethirtyeight)
I have some experience with cset-foretell; would recommend. I was #1 for a long time, but I'm currently #3 after a traumatic resolution on a facial recognition funding question
Re: aggregation, you might be interested in metaforecast.org/, written by yours truly, which does something similar to what you were suggesting
Is it possible to place error bars around the Metaculus prediction for deaths w/ and w/out challenge trials, based on the actual track record of similar questions? Evaluating counterfactuals such as how many people would have died if challenge trials were allowed to proceed is really only useful insofar as the Metaculus predictions in the past are somewhat accurate and the difference in predictions is outside the error range of past predictions.
I see that Metaculus has a track record page, but I'm not statistically advanced enough to convert their metrics to standard deviations around individual question predictions.
Most interesting near term prediction to me: the current petition for a recall election for CA Gov. Newsom at 75% chance of succeeding on Metaculus (about 65% on PredictIt). Something I've of course read about in general news but had no clue what the probability was.
That is interesting, thanks.
I wasn't too impressed by the tech mogul positioning to replace him, but he's probably at least a lot smarter and less good-old-boys-y than Newsom, and maybe that would end up being good in some way?
How do prediction markets handle conditional hypotheticals such as "how many people would die if we did challenge trials"? My understanding is that prediction markets work by eventually paying out when the real world provides a result. But in the case of these conditionals there's no way to ever know the actual outcome. Do those "shares" all expire worthless? Are they refunded for purchase price? This seems pretty important for any proposed policy question (if we implement policy x, result will be y). I'd like to understand this mechanism better.
I think the idea is that they're redeemed for the purchase price, but I actually don't know of any real life examples.
Presumably you could offer bets on (x and y) and compare it to bets on x to find out p(y | x)
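That ratio trick works because P(y | x) = P(x and y) / P(x), so two market prices are enough to back out the conditional. A toy sketch with made-up prices (these are assumptions, not real market data):

```python
# Hypothetical contract prices, each interpretable as a probability:
price_x_and_y = 0.12   # market: "challenge trials happen AND deaths < 400k"
price_x = 0.20         # market: "challenge trials happen"

# Conditional probability implied by the two prices:
# P(y | x) = P(x and y) / P(x)
p_y_given_x = price_x_and_y / price_x
print(p_y_given_x)  # ~0.6
```

The practical wrinkle is that if x never happens, the (x and y) contract resolves "no" for a reason unrelated to y, which is exactly why some designs instead refund conditional bets when the condition fails.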
I wonder if it's legal to use virtual currencies in a prediction market? Between Apple/Google app stores, Steam and other game platforms, in-app and in-game purchases, etc., I'm pretty sure the amount spent in these online marketplaces is in the hundreds of billions globally. If these currencies were made sufficiently fungible with real money, that should be enough to attract investors.
Or, hell, could Bezos just run a prediction market that pays out in Amazon gift certificates? How do the gambling laws actually deal with such alternate currencies?
> could Bezos just run a prediction market that pays out in Amazon gift certificates?
Ooof.
I imagine that in that case Amazon gift certificates would become much more liquid, though.
Marc Whipple (sometimes commenter here) is a lawyer and did a guest appearance on a show about this.
https://art19.com/shows/robot-congress/episodes/3bd9c24e-bdf7-451b-8cfa-4064585c1a52
The short answer is that you can gamble with pretend things as long as you cannot convert them into real things.
As a general practical question, even if prediction markets weren't classified as gambling and you could attract real investors dumping billions into predictions, I wonder how you protect them from manipulation?
Like, in a fully open market, how do you ask a question like 'On what day will Company X announce their new vaccine' without Company X just dumping a billion dollars into that market on a much later day they're sure they'll be ready by, and waiting until then to announce?
I guess in that case the market did tell you when they would announce with great accuracy, but only by fucking up the thing being measured.
Seems like a hard problem.
In the regular stock market, insider trading is forbidden. People in companies with special knowledge cannot trade in their own companies' stocks except under special circumstances (for example announcing several months in advance). So if prediction markets become a thing, regulators could very well deal with insider trading similarly.
But insider trading improves the accuracy of prediction market forecasts, as I pointed out in 2010 in regard to plans to transform the Hollywood Stock Exchange into a real futures market:
https://www.takimag.com/article/betting_on_the_hollywood_stock_exchange_better_play_roulette/
If the government cracked down on insider trading in the name of fairness, the utility of prediction markets would decline.
But when you're predicting real-world events, is there a worry about the effect you have on those events themselves?
Like, if there's a market for when Pfizer announces a new vaccine, isn't Pfizer incentivized to pick a day a few months later than anyone else is guessing, and just sit on it until then?
I understand that then the prediction market is 'working' because we're really damn sure when Pfizer will announce, but only because the market itself changed the useful, unsure answer into a different, much worse, but definite answer.
You can’t really say, “if only we had done challenge trials we would have saved 50,000 lives,” since the world where we did challenge trials looks different in other ways from the world where we didn’t e.g. additional bureaucracy lifted, or even greater urgency due to larger death rate.
> we don't have herd immunity for the danger period in winter 2021
Why are you assuming Covid is seasonal? As far as I can tell, Covid likes hot weather just fine. Summer was less deadly in North America because the big cities locked down and flyover country didn't have it yet, not because Covid doesn't like the heat. Brazil is getting hit badly in the summer heat now.
https://www.cnn.com/2021/01/27/americas/manaus-brazil-covid-19-new-variant-intl/index.html
AFAIK, Governor Newsom of California is targeting June for general vaccine rollout, which makes the May 19th nationwide prediction somewhat optimistic.
I am a user of CSET Foretell, and it is interesting drawing comparisons between Foretell, a subsidized opinion pooling platform, and a traditional prediction market. The first difference is that it doesn't give a strong appearance of gambling, which is a plus in the policy making context. To get people to participate, there are some financial prizes such as lotteries for people who place in the top of the leaderboard or forecast during a time period. There are also social incentives though such as the ability to join teams, which have a separate leaderboard, and the feeling of contributing to pressing national security problems. This is separate from UI engineering that I think also adds great value like automatic reminders to update forecasts and historical data visualization for those of us who don't like having to track down and plot everything.
I think a likely future incentive would be CSET recruiting top forecasters into an elite opinion pool that consults with policymakers. It's not a coincidence that Cultivate Labs built both CSET's and the Good Judgment Project's platforms, with superforecaster cachet being a major reason to join Good Judgment Open.
There are no coronavirus predictions due to the tight question moderation, but if you want a worrying prediction, the crowd forecast of a military conflict in the South China Sea before July is 11%. https://www.cset-foretell.com/questions/85-will-the-chinese-military-or-other-maritime-security-forces-fire-upon-another-country-s-civil-or-military-vessel-in-the-south-china-sea-between-january-1-and-june-30-2021-inclusive
I am convinced by arguments I've seen elsewhere that for real-money prediction markets to attract serious researchers placing smart bets, the markets need to be subsidized so that they are positive-sum rather than zero-sum for the participants. Commodities futures markets are zero-sum in terms of money, but positive-sum when you account for the benefit to the people who are able to use them to hedge their business. Those people effectively subsidize the market makers and speculators in exchange for the service of hedging.
If we are serious about getting a prediction market to answer a question, we need to subsidize the market with a bunch of money for the smart bettors to win.
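One standard way to implement such a subsidy is Hanson's logarithmic market scoring rule (LMSR): a sponsor-funded automated market maker always quotes prices, and its worst-case loss is capped at b*ln(number of outcomes); that capped loss is exactly the subsidy paid out to better-informed traders. A minimal sketch (parameter values are illustrative):

```python
import math

def lmsr_cost(q, b):
    # LMSR cost function over outstanding share counts q.
    # The sponsor's worst-case loss is bounded by b * ln(len(q)).
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, b, i):
    # Instantaneous price of outcome i, interpretable as its probability
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

b = 100.0            # liquidity parameter: higher b = deeper, costlier subsidy
q = [0.0, 0.0]       # outstanding YES / NO shares; fresh market is 50/50

# A trader buys 50 YES shares, paying the difference in the cost function.
new_q = [50.0, 0.0]
trade_cost = lmsr_cost(new_q, b) - lmsr_cost(q, b)

assert abs(price(q, b, 0) - 0.5) < 1e-9  # before: 50%
assert price(new_q, b, 0) > 0.5          # after: YES price rises
```

The design choice here is that the sponsor pays a known, bounded amount for information, rather than hoping enough dumb money shows up to make the market positive-sum for smart bettors.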
It seems to me like many of the questions you might ask on a prediction market can implicitly function as hedges against wide array of possible outcomes, so you'd still have at least half of the participants benefiting from the ability to hedge (and, presumably, "paying" for it in expectation, which can provide the incentive for other participants who don't need to hedge to close the loop).
Yes, I thought this was Hanson's opinion (I may be out of date by several years).
I don't know much about prediction markets, so probably a dumb question: What about derivatives on predictions? Wouldn't trading in, say, options on a prediction provide incentives to game the system and destroy the predictive power of the original market?
So here are some other links and platforms you might be interested in:
- Good Judgment Open: https://www.gjopen.com/. Top 2% go on to be selected as superforecasters, so amateur participants have some motivation. I have the impression that this platform is comparable to Metaculus in terms of accuracy, but I don't have actual data. They're particularly centered on geopolitical questions.
- CSET-foretell: cset-foretell.com/. Similar to Metaculus, but smaller and attempts to influence public policy in the US, particularly around transformative technology. Also sponsored by OpenPhilanthropy.
- Hypermind: http://hypermind.com/. Similar to PredictIt, but Frenchy. They recently got >$7k from OpenPhilanthropy to have some COVID-19 predictions, which can be seen here: https://prod.hypermind.com/ngdp/en/showcase2/showcase.html?sc=Covid19
- Polymarket has already been mentioned by other people. Also by you in the past: "I got emails from two different prediction aggregators saying they would show they cared by opening markets into whether the Times would end up doxxing me or not. One of them ended up with a total trade volume in the four digits. For a brief moment, I probably had more advanced decision-making technology advising me in my stupid conflict with a newspaper than the CIA uses for some wars. I am humbled by their support."; Polymarket was one of them, though the market was from their old site and thus only remains in the archives. https://web.archive.org/web/20200717060418/https://poly.market/market/will-the-new-york-times-publish-an-article-revealing-scott-alexanders-full-name In particular, PolyMarket might have boring markets, but they have recently been great for making money from unsophisticated traders.
- PredictionBook. https://predictionbook.com/. This site is *old*; Gwern was a power user for a while. You can see other people's predictions here: https://predictionbook.com/predictions
- Omen. https://omen.eth.link/ This seems to be truly decentralized. They sometimes use Kleros (kleros.io), a decentralized judging system which I find really elegant. They haven't seen much volume recently, probably because of usability problems / high ETH fees. They powered https://coronainformationmarkets.com/ for a while, but they didn't see much volume either.
- Elicit is another forecast aggregator, which I associate with casually eliciting probabilities from many people at once. It can be embedded on LessWrong, and substack could maybe get it to work as well (so e.g., people to make forecasts while they're reading SSC). See here: https://www.lesswrong.com/posts/JLrnbThMyCYDBa6Gu/embedded-interactive-predictions-on-lesswrong-2 (embedded), or here: https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines (not embedded, but still cool). Also, you might be able to embed Metaculus predictions as well?
- Man, I can't believe you missed https://www.replicationmarkets.com/ and https://socialscienceprediction.org/. They try to predict whether papers will replicate. The first one used to pay really well ($100k in prizes), but they are in between projects right now.
I also have several things of my own:
- https://metaforecast.org/: A search engine, but for probabilities. In very early beta. But the idea is that you can just search for something, like "covid", and it will get you the forecasts from the platforms I mentioned above (and PredictIt, which you also mention)
- I have a monthly forecasting newsletter: https://forecasting.substack.com, which covers forecasting related stuff. You might be interested in the edition for January 2021, which I just posted: https://forecasting.substack.com/p/forecasting-newsletter-january-2021, or in an overview of 2020: https://forecasting.substack.com/p/2020-forecasting-in-review
- This list: https://docs.google.com/spreadsheets/d/1XB1GHfizNtVYTOAD_uOyBLEyl_EV7hVtDYDXLQwgT7k/ of prediction markets, created by Jacob Lagerros, which I also try to maintain.
- I also worked with foretold.io at some point in the past. They're good if you want to create your own private communities and forecasts with friends, and it has a powerful distribution editor based on the one from Guesstimate, but which takes some getting used to. It also hasn't seen much activity recently, but I've been working on using it as a backend for making large numbers of predictions at once. As an example cool thing one can do, a while back I entered things Dominic Cummings said: https://www.foretold.io/c/952d822e-b979-4c42-8560-7f878ad23c9e?state=pending, to see how accurate he was (I haven't seen how these resolve yet, though)
These are great resources! Not sure if it fits into your broader category, but here's my contribution (annual forecasting contest in its 6th year, where I wrote a series of 16 blog posts over the course of last year using the 2020 contest to illustrate techniques: https://braff.co/advice/f/forecasting-masterclass-1-billie-and-brexit)
At least to start with, I'd be much more interested in a backwards-looking analysis of how well these markets have done at predicting events in the past.
https://www.metaculus.com/questions/track-record/
See also a comparison between the track record of metaculus and that of experts here: https://www.metaculus.com/news/2020/06/02/LRT/
Thanks!
My prediction for US coronavirus deaths as of late last year was 665,000 (median). Posted and defended on DSL, where I hope it earned me a bit of respect from my SSC/DSL/ACX peers, which is a sort of "fake internet point" that I understand and place some value on. Enough to do about an hour of dedicated research and math in that particular case, on top of the general Covid situational awareness stuff that I'd have been doing anyway.
If there'd been a way to make serious money off that sort of prediction, I'd have put in a lot more than an hour's work. And judging by the Metaculus results, probably come to about the same conclusion and made no serious money, but worth a try at least. I don't understand the value of Metaculus fake internet points well enough to devote any great length of time pursuing them. If that changes, and if my understanding of MFIPs is that they are really good things to have, then I'll probably engage with Metaculus.
If it's just that there's a small nerdy community that will respect me for my Big Number, I've already got a big IQ I can brag about to people who care about that sort of thing, and I've got a community where I can earn a more nuanced and informed sort of respect than "He has a Big Number".
FYI Scott, the US intelligence community had an internal prediction market only accessible from TS networks until maybe mid last year. I enrolled in it a few years ago, but never personally made any predictions due to just not having any basis for making a good guess about anything without investing a ton of time.
Theoretically, this probably should have performed better than a public prediction market since the participants have access to classified information. I didn't pay enough attention to know to what extent it was ever evaluated for predictive accuracy compared to individual experts and public prediction markets. Of course, you couldn't earn actual money, so I think lack of activity is the biggest reason it shut down and may not have outperformed public markets even with the additional information.
It was voluntary and not used to inform real policy decisions. I can't remember now which agency set this up, but probably NSA. They tend to be behind most of the classified but otherwise public-facing applications anyone with IC PKI identities can get to.
On a semi-related note, something I miss about no longer working in a SCIF is that, even with a clearance, I can't access these sites any more since I don't have physical access to a connected workstation. One of the more interesting things you learn when you first get access to the same briefings given to the Joint Chiefs of Staff every day is just how wrong a lot of public news actually is, along with how much isn't really unknown, just not publicly revealed, to avoid exposing the capability to know it.
Is there any kind of research on the differences in cognitive bias between risk-averse and risk-prone individuals? Because it feels like prediction markets definitely attract more risk-prone people.
Like some other commenters, I'm puzzled by Scott's implied assumption that the consensus that forms around any prediction in a prediction market will be inherently valuable (not to say accurate). Why should this be? How can we know that predictors in such markets aren't simply pooling their ignorance?
Is there an assumption that such markets will tend to be populated by people who are more likely to be accurate predictors? If so, what's the evidence for that assumption?
The inaccurate predictors run out of money to spend on the prediction market (also they don't want to lose any more money) and the accurate ones end up with more money they can spend on predicting things.
I see. Thanks. So the target market is "rich people who are good at forecasting".
That, plus "people who are good at forecasting can use prediction markets to make money, thus joining the ranks of rich people who are good at forecasting".
Mike Tenpence
Hillary Clinton was 5% to be the nominee in March 2020. QAnon has given lots of rational PredictIt bettors free money.
Excellent explication of conditional prediction markets! (For those who don't know, those are the basis of Robin Hanson's Futarchy.)
Sadly, I think the truest true answer to many forecasting questions is "no one has any clue, it's an utter coin flip" and no amount of market liquidity will change that. It's possible that's what's happening with the two conditional markets for COVID deaths with and without challenge trials. They're saying "339k deaths with" and "426k deaths without" but maybe what that really means in both cases is "gosh, no clue, a few hundred thousand ish I guess?" The Metaculus distributions for the with-vs-without predictions do mostly overlap each other. Still, lives saved in expectation is a big deal!
So I'm not actually disagreeing with anything here.
> Sadly, I think the truest true answer to many forecasting questions is "no one has any clue, it's an utter coin flip" and no amount of market liquidity will change that.
Metaculus publishes their track record [1] on their site. They have an average Brier score of 0.122 (which is really good), and their calibration is a little under-confident for sub-50% predictions but overall great. Unfortunately I believe this only takes into account binary questions, not distributions. But this, along with much of Tetlock's research [2], makes me skeptical that most forecasting questions are actually 50/50 no matter how much info you have.
[1] https://www.metaculus.com/questions/track-record/
[2] https://smile.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock-ebook/dp/B00RKO6MS8/ref=sr_1_1?dchild=1&keywords=superforecasting&link_code=qs&qid=1612300294&sr=8-1&tag=amznsearchff-20 or https://slatestarcodex.com/2016/02/04/book-review-superforecasting/
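For readers unfamiliar with it, the Brier score is just the mean squared error between probability forecasts and binary outcomes: 0 is perfect, and a forecaster who always says 50% scores 0.25, so 0.122 sits well below the coin-flip baseline. A quick sketch:

```python
def brier_score(forecasts, outcomes):
    # Mean squared difference between predicted probability and the
    # realized outcome (1 if it happened, 0 if not). Lower is better.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.5, 0.5], [1, 0]))  # 0.25 -- the "no clue" baseline
print(brier_score([0.9, 0.1], [1, 0]))  # 0.01 -- confident and right
```

Note the asymmetry that makes calibration matter: being confident and wrong (say, 0.9 on an event that doesn't happen) costs 0.81, far more than hedging at 50% would have.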
Great points! Thank you! Clarification: I didn't mean "coin flip" as in 50% so much as like very diffuse distributions based on little information. For example, if the probability of Biden getting reelected is 60% because (I'm making this up) that's just the historical frequency for incumbents and we know nothing else then I think of that as a coin flip. I might be abusing the term a little.
An ignorant prior with little Bayesian updating might be the more precise way to put it.
> of course, you can still buy all the Gamestop stock you want
this gave me an idea: can't we hijack stocks to turn them into predictions?
for each bet, create two corporations, for example the Trump Will Get Impeached Corporation and the Trump Will Not Get Impeached Corporation. they'd both be contractually obligated to give all of their wealth to the other one if they lose, and to buy the shares back from all shareholders at the new, increased value if they win.
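A toy model of the proposed mechanics (all names and numbers hypothetical): each corporation's treasury is funded by share sales, and at resolution the loser forfeits its treasury to the winner, which buys back its own shares at the new, higher per-share value.

```python
# Toy settlement for the two-corporation bet: the losing corporation
# hands its treasury to the winner, which then buys back its own
# shares at the inflated per-share value.

def settle(win_treasury, lose_treasury, win_shares_outstanding):
    """Per-share buyback price for the winning corporation's shareholders."""
    return (win_treasury + lose_treasury) / win_shares_outstanding

# Suppose "Impeached Corp" raised $60,000 selling 600 shares at $100,
# and "Not Impeached Corp" raised $40,000 selling 400 shares at $100.
# If impeachment happens, each Impeached Corp share is bought back at:
print(settle(60_000, 40_000, 600))  # ≈ $166.67
```

Note that this recovers exactly the information a prediction market extracts: paying $100 for a claim worth about $166.67 conditional on impeachment is an implied probability of roughly 60%.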
This term "lighter side" I do not think it means what you think it means...
Ugh. What are covid "deaths"? The PCR test isn't standardized, and by varying the number of cycles, you can vary the covid death rate. What if someone who is in control of the number of cycles also bets on the number of deaths?
BTW, PCR is why the death prediction keeps rising -- because PCR counts people who were merely exposed but destroyed the virus, people who were exposed and had no or very mild symptoms, people who were sick but aren't anymore, and finally the ones who are showing symptoms. Eventually everyone will fit into one of those categories and so every death will be a covid death.
I'd like to offer a few brief counter-arguments:
1. The persistent popularity of casinos, lotteries, and other forms of gambling indicates that yes, people are more than happy to throw their money away on bad odds. No matter how bad someone is at predicting, they probably still aren't bad enough to drop their odds lower than those of buying a lottery ticket (i.e., they aren't being any less rational). And yet lottery ticket buyers still persist. This calls into question the ability of prediction markets to "select" for high-knowledge participants over the long run.
2. The reason this works is that prediction markets (and all other forms of gambling) aren't closed systems. Sure, someone who makes all of their money playing Blackjack, and funnels 100% of their earnings back into more Blackjack, will eventually sink-or-swim based on their Blackjack playing skills. But in practice, most people spend a small portion of their earnings on Blackjack and have outside sources of income, such that losing at Blackjack is only a minor deterrent to continued play. So long as people have outside jobs, "dumb money" will continually pour into prediction markets with little consequence.
3. More abstractly: Due to the diminishing marginal utility of money, "dollars do not equal knowledge units". A wealthy person would gladly gamble away $10,000 on a whim, while a poor person would only bet this if he knew the outcome with near-certainty. This means that the degree of knowledge a person has regarding an event occurring *cannot be directly inferred from the amount they are willing to bet*. The implications of this are significant: for prediction markets, the pool of dollars bet ≠ the aggregated knowledge of an event, so you cannot say with certainty, even in a market at (for example) 60% yes and 40% no, that "the bettors in this market think the event is more likely to occur than not". It's *evidence* for it, sure, but other forms of polling may find different outcomes in the same group of people if they adjust for levels of certainty using different measures.
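One standard way to formalize the wealth-dependence point is Kelly betting (log utility): the optimal *fraction* of bankroll staked depends only on belief and odds, so two bettors with identical beliefs stake wildly different dollar amounts purely because their bankrolls differ. A quick sketch (numbers hypothetical):

```python
# Kelly-criterion sketch of "dollars != knowledge units": with log
# utility, the optimal stake is a fixed fraction of bankroll,
# f* = p - (1 - p) / odds, so the dollar amount wagered scales with
# wealth even when beliefs are identical.

def kelly_stake(bankroll, p, odds=1.0):
    """Dollar stake for a log-utility bettor at even-money odds by default."""
    fraction = p - (1 - p) / odds
    return bankroll * max(fraction, 0.0)  # never bet a negative amount

belief = 0.6  # both bettors think the event is 60% likely

print(kelly_stake(1_000_000, belief))  # rich bettor stakes ~$200,000
print(kelly_stake(1_000, belief))      # poor bettor stakes ~$200
```

Reading dollar volume as aggregated belief conflates these two: the rich bettor's $200,000 carries no more information than the poor bettor's $200.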
4. The common sentiment in the rationalist community seems to be something like "betting markets are beautiful in theory, but flawed in practice". I take the inverse position: Betting markets are flawed in theory, but have practical application so long as you understand their theoretical limitations (e.g., they are poll-like things that basically conduct themselves, for free). It's good that we have these prediction markets, because having information is better than not having information. But that's as far as my hope goes: If there was some way to bet against prediction markets eclipsing the usefulness of traditional polling in the coming years, I would.
Have you seen Zvi's weekly series of Covid posts? (Most recent post: https://www.lesswrong.com/posts/Rhy5g75NdRKdw9ibB/covid-1-28-muddling-through )
I have the impression that he would give quite different predictions than the average / median on Metaculus. If only there were more than fake internet points up for grabs there, he might have an incentive to participate in those predictions.
I go through every question in Metaculus and if I don't have any prediction of my own, I just pick what the crowd picks. I want to see how well I perform over time and get those super nifty Tachyons. I don't actually care about the future...just winning.
I did PredictIt this election cycle and made around $200 on a $500 investment. Never again. What a horrible experience. First, it's pure tribalism distilled and slammed one shot after the next. Second, the rules are super specific (which they should be, and sometimes aren't in Metaculus--though discussion usually helps) and people make money off of gaming the rules of a question (this is the best example: https://www.predictit.org/markets/detail/4353). Third, it's far more important to understand trading (hedging, arbitrage, etc.) than making correct guesses. In the market I listed above, somehow, they pumped the price of Republicans winning the senate after the Georgia election because of the rules, and a ton of people made money by choosing the wrong answer. I think Metaculus avoids this problem beautifully.
I support Metaculus Mondays. There's even some questions I'm interested in, like, "When will Epistemic Humility become a common phrase?" Support them on Patreon: https://www.patreon.com/metaculus/posts
I'm confused by what your example is supposed to show? The price of "D House, D Senate" spiked to near-100% after the Jan 5 special election that confirmed they'd have a majority, then tailed off over time (presumably because of Republicans hoping that it would somehow get overturned), then jumped back up on Jan 20th when it became clear that no miracle would happen. That's about what I would have expected, though I'm surprised that it put the odds of Republicans winning so high.
But I don't see how this is "gaming the rules" - the Democrats do in fact have control of the Senate by most reasonable definitions, and it seems pretty reasonable that the contract shouldn't resolve until Jan 20th in case a senator dies or something before being sworn in.
(And who is the "they" who pumped the price of a Republican win? If the price went up because a bunch of Republicans believed there would be a surprise reversal and they were wrong, then that sounds like the market is working as intended - moving money from people who make bad predictions to people who make better ones.)
All of the action in that market was in the comments section. There are thousands of comments and most are garbage, so not worth wading through, IMO. There were four contracts: DD, DR, RD, and RR; you could buy them as, ostensibly, a yes vote. RD and RR were obviously No and never rose above a penny or two, which is normal in a market that should have settled. This market stuck out to me, and probably to all of the PredictIt users, because it was one of the only markets with a spread that had some play in it. The question everyone asked was: why isn't DD at $0.99? I'd say most people were like me and saw easy money.
The rules stated that the market would resolve at midnight on Jan 21st. The Georgia Sec of State said publicly that he had until the 22nd to certify the results and that the counties didn't have to report until Friday the 16th, because of MLK day. Due to that and the general craziness of those few weeks, there was actual uncertainty that it might not resolve at midnight on the 21st. However, it became a huge pump and dump for DR yes, who bought low, say $0.18, and sold (if they were smart) at $0.50 (or something like that, I forget exactly how high it got). It was easily one of the busiest markets on the site, with big ol' flame wars in the comments as DD and DR people attempted to push the price around. People who had bought DD at $0.70ish were freaking out and dumping their positions while the DR price skyrocketed above anything realistic. I held through the whole thing, but it was mostly on the hope that the timing would work out and had nothing to do with the actual question. It was pure zero-sum, white-knuckle trading on news, not who made the right guess. Metaculus does much better in this regard by removing the weird partisan, jocko-homo competitive animus from the equation.
In the end, there may have been some Republican zombies making the bet, but aside from the garbage spam in the comments, it was mostly people messing with each other over the specifics of midnight January 21st deadline and telling each other stories about the inefficiency of government officials, etc. The money made and lost had nothing to do with the actual question and everything to do with rumors, vitriol and watching the market. If someone had correctly predicted in October--and was right the whole time--they would have had to weather the crazy price disruptions during that final week AND would likely not have made as much money as someone in on the DR pump.
I suppose you could make an argument that somehow the market 'worked' because it resolved correctly and moved based on the news, but I'd say that's not what you actually want a prediction market to do. This is actually something that Metaculus suffers from as well, i.e. the utmost importance of asking well-articulated questions with very specific resolution criteria. My experience with Metaculus, so far, is that the community is more focused on getting the questions right than they are on winning the bet, which was the opposite of my experience with PredictIt.
Ah, I didn't know that the Sec of State said it wouldn't resolve until Jan 22. I can see how that would cause problems. Thanks for explaining.
I started to type out a snarky comment about how 6% of PredictIt users don't even realize AOC won't be old enough in 2024, and then thought I'd better google it. Turns out she'll turn 35 three weeks before the election, which makes her several years older than I thought she was. Interesting.
A couple takeaways:
(1)
Using prediction markets to inform governmental policy seems like it could create some really perverse incentives. For instance, if the question is "What will be the employment rate if we (raise/don't raise) the minimum wage to $15?"
A government will raise the minimum wage if the predicted decrease in employment isn't too great. Doesn't this create strong incentives for business owners to enter the market and enter overly dramatic predictions of employment decreases in order to manipulate policy makers into not implementing a policy that goes against their interests?
(2)
Curious about how these markets work with actual money. For binary predictions it seems simple enough. I'm less sure for continuous predictions. Is payout proportional to how close the predictor's value was to the actual value? How does the money placed get distributed?
(3)
What is the consensus view on cryptocurrency-based / decentralized prediction markets? I see a few built on top of Ethereum (https://defiprime.com/prediction-markets) but I haven't heard nearly as much about them as I've heard about Metaculus. Do they lack the technical expertise and polish that Metaculus offers? Or is it just that the inherent exoticness of cryptocurrencies makes them less accessible than mainstream alternatives?
This seems like the perfect use case for crypto: A clear need to introduce actual money into prediction markets + the technology to skirt government regulation
I have a question that I think is really obvious, but I don't see it being addressed. I wonder if I'm missing something.
If one were to use prediction markets in the "creepy deep magic way", i.e. asking people what the consequences of political decisions would be - wouldn't people want to influence the outcome to support their political leanings?
I understand that this would come at some cost for them, and this surely is a correcting factor, but I would guess that political motivations might still sometimes drastically alter prediction poll results.
Sorry for any grammatical mistakes, this is not my native tongue.