144 Comments
Jared:

Typo thread

"If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does Manifold allow this?" - This should say "Metaculus".

Scott Alexander:

You're right, thank you.

Some Guy:

I'm a recent prediction market convert, and they do things I really, really like. They teach people to forecast, they use the wisdom of crowds super efficiently, they tie consequences to being correct or incorrect, and they incentivize participation. It's like turning on a searchlight and aiming it into the future.

Question: Does anyone have a long term strategy on how to make this stick to decision makers? That’s my main point of curiosity. Is the hope that they just get so efficient they can’t be ignored? I’m sure this can make money but I had hoped something like this (I have my own weird scheme I’m super into just like I’m sure everyone here does) could be a civilization’s sense organ.

Right now it seems like the plan is to make really good eyeballs and figure out how to hook them up to the brain later? Is that right? Genuine curiosity.

Scott Alexander:

Mostly agree with this; right now the problem is scaling the technology and making sure it works well, after that decision-makers will either notice on their own or we can pressure them into it. Though if you're not familiar with Philip Tetlock's work with IARPA that would be a good place to start as an example of government/forecasting interfaces.

Some Guy:

Thanks Scott. Buying one of his books now. Been reading the Hobbit to my son but at two months, I bet he doesn’t notice the substitution.

Austin Chen:

We have talked to a few managers who want a private version of Manifold for internal company use. (And of course, we use it to guide our own decisions eg https://manifold.markets/Austin/what-new-manifold-features-would-be ) The discussions are still really early, but I think this is one way Manifold can have a huge impact: by pushing forward this use of futarchy and decision markets in real-world use cases!

If a private Manifold instance sounds like something you'd want for your team -- get in contact at akrolsmir@gmail.com!

Some Guy:

Know this might be a bit out of left field, but I'd pay you a monthly subscription fee for a browser extension that told me whether a given website/Twitter account/etc. was, on average, reliable, if it had been ranked by Manifold.

Austin Chen:

Hm, that's interesting! I think a browser extension for something like Trustpilot (https://www.trustpilot.com/) might be more appropriate for the generic website use case.

For a specific Twitter user, if they link their Twitter account in their Manifold bio, it might be feasible to have an extension bring up their trading history, so you can see if they're as good at punditry as they claim. We're probably pretty far from being able to do this at the moment, but thanks for your feedback -- always helpful to hear what users are so interested in that they'd pay!

Toni Gemayel:

What about Kalshi?

Lumberheart:

Kalshi went through the "near-impossible regulatory hurdles" and allows real money for US citizens. As far as I know, it's the only one to do so.

Isaac King:

Predictit does too.

Wandering Musings:

Yeah still a bit confused by Scott's persistent negative stance towards the first company to be able to create legalized prediction markets ever...

chipsie:

Does he have a negative stance toward it? I recall him being quite excited about it.

Scott Alexander:

They're real money, which makes them irrelevant to this discussion of play money and reputation systems.

ranaya:

they're fantastic if you want to predict uninteresting things that will resolve in a week, like the weather in new york

wewest:

Kalshi is also limited since, because of the regulatory hurdles, every question is a binary options contract offered by Kalshi. No one will ever be able to post their own questions on Kalshi without serious changes in the CFTC.

Level 50 Lapras:

on the bright side, it means no "will someone try to assassinate the president" questions

Lars Doucet:

Ooooh I like the per-market loan system. They should at least try it and see what happens!

Austin Chen:

Yeah, I like the idea a lot too, as a way to lower the barrier to trading and encourage activity, without causing significant inflation or giving an edge to people who trade on everything (à la Metaculus).

As a bonus, it basically allows for 10 "free" comments!

CTravis:

Hey there, I work for Metaculus and wanted to share my perspective on Scott's points about reputation and about how Metaculus incentivizes predictions. Tournaments have a different scoring mechanism than the rest of the platform, because there are cash prizes at stake. If someone is highly-ranked on a tournament leaderboard and wins prize money, it's because they outperformed other forecasters and contributed a lot of information with their forecasts.

David Piepgrass:

Do they get some other form of reputation point for winning real money or otherwise doing well?

CTravis:

As we build out more features on the platform, we’re planning to expand the reputation system so that performance on tournaments becomes part of users’ track record pages.

DaneelsSoul:

I was thinking some about the issue with betting on conditional markets that may well never trigger, particularly in the context of Scott's "which book should I review" markets, or similar. And I think at least a partial solution is the following:

If there are multiple conditional markets whose conditions for triggering are mutually exclusive, you should be allowed to use the same dollar to bet in as many of those markets as you choose.

Nick Allen:

So, leverage?

Will:

Absolute accuracy is usually represented by Brier scores, and Brier scores suck, because they don't use logarithms, so they can't appreciate the huge difference between 1% and 0.01%. I have an idea for constructing a better formula (https://en.wikipedia.org/wiki/Brier_score#Definition): instead of just squaring (Ft - Ot), apply the transformation g(x) = -lg(1 - x), where x = abs(Ft - Ot). So your penalty is ~zero if your probability estimate is close to correct, but your penalty goes to infinity as your confidence in the wrong outcome goes to 1.
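
A minimal sketch of that penalty (my own illustration, not code from the thread), where `forecast` is the predicted probability and `outcome` is 0 or 1, matching Ft and Ot in the Brier definition linked above:

```python
import math

def log_penalty(forecast, outcome):
    """Penalty g(x) = -lg(1 - x), where x = |forecast - outcome|."""
    x = abs(forecast - outcome)
    # Note: the penalty is infinite at x = 1 (full confidence in the wrong outcome).
    return -math.log2(1 - x)

# Near-correct forecasts cost almost nothing...
print(log_penalty(0.999, 1))  # ~0.0014 bits
# ...but the penalty blows up as confidence in the wrong outcome approaches 1.
print(log_penalty(0.999, 0))  # ~9.97 bits
```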

Predictit is very negative-sum due to fees, and it works fine.

Positive sum sucks because it incentivizes spamming the most predictions without regard for accuracy.

Positive sum combined with bad formulas (see above) is what lets people get away with predicting 99% on questions that should be 96%. On a real-money negative-sum market, you're not going to make much return by doing that. Predictit's incentives are such that 99.9% certainties often trade at 97 cents. I think negative-sum tends to under-estimate probabilities of 99% events, while positive-sum can over-estimate probabilities, but doesn't have to, if they fix the formulae (see above).

Scott Alexander:

I have never seen a prediction market even try to distinguish between 0.1% and 0.01%. I agree this would be a useful ability to cultivate.

Richard Gadsden:

Betfair Exchange goes up to 1000, ie predicting 0.1% probability. Weirdly, it only goes down to 1.01 (99% probability). This is obviously meaningless when there are only two possible outcomes, but when there are more than two, then it means that very unlikely possibilities can be meaningfully predicted - while if one option has captured 99% of the probability space, then the market is effectively over anyway.

Sniffnoy:

I mean you could use any proper scoring rule, right? Isn't what you're suggesting just the log score? Or am I missing something?

Will:

Proper just means the scoring rule is optimized by predicting the true probability of the event. Proper doesn't say anything about fairly penalizing incorrect probabilities.

Here are three scenarios:

Adam predicts 99.9% on 10 events, and 9 of them happen.

Bob predicts 50% on 10 events, and 5 of them happen.

Carl predicts 90% on 10 events, and 8 of them happen

We can probably agree that Adam is the worst predictor in this bunch, then Carl, then Bob.

But according to Brier scores, Adam is the best.

If you divide Brier score by a reference Brier score, the ranking is correct, but the distance between Adam and Carl is smaller than the distance between Carl and Bob. This still feels wrong.

If you use a logarithmic error term and divide the score by the reference logarithmic score, then you get the correct ranking plus correct distances: Bob gets 1, Carl gets 1.08, and Adam gets 2.12.

I made a spreadsheet to demonstrate all these examples:

https://docs.google.com/spreadsheets/d/1RYxqWp_hJooxYMoDW9PwfREGEuS-V418nb_o4hygGPo/edit?usp=sharing
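
The three scenarios can be reproduced in a few lines (my own re-derivation of the calculation described above, where the reference score uses each forecaster's observed base rate; small rounding differences from the spreadsheet are possible):

```python
import math

def brier(p, n, hits):
    """Mean squared error over n events all forecast at probability p, `hits` of which occurred."""
    return (hits * (p - 1) ** 2 + (n - hits) * p ** 2) / n

def mean_log_loss(p, n, hits):
    """Mean negative log-likelihood of the outcomes under forecast p."""
    return -(hits * math.log(p) + (n - hits) * math.log(1 - p)) / n

def log_ratio(p, n, hits):
    """Log loss divided by the log loss of always forecasting the observed base rate."""
    return mean_log_loss(p, n, hits) / mean_log_loss(hits / n, n, hits)

# Adam: 99.9% on 10 events, 9 happen; Bob: 50%, 5 happen; Carl: 90%, 8 happen.
for name, p, hits in [("Adam", 0.999, 9), ("Bob", 0.5, 5), ("Carl", 0.9, 8)]:
    print(name, round(brier(p, 10, hits), 3), round(log_ratio(p, 10, hits), 2))
# Brier ranks Adam "best" (~0.1 < 0.17 < 0.25), while the log ratio gives
# Bob 1.0, Carl ~1.09, and Adam ~2.13, matching the argument above.
```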

LGS:

Switching to log score won't help here, because people can just learn the rule "never be more confident than 99%, no matter what". (The value 99% may need to be smaller, like 95%, depending on the scoring rule). Then Adam will be back to being the best predictor.

More explicitly, the problem is that by switching to the log score, you're successfully penalizing 99.9999% wrong predictions... but you're NOT successfully REWARDING correct 99.9999% predictions. This is for a fundamental reason: it is impossible to reward 99.9999% well enough so that people would actually care to make them (instead of just making 99% predictions), because the difference between 99.9999% and 99% only shows up ~1% of the time. So what you end up with is people rounding off their predictions to 99% always, or maybe to 95%, and you've gained nothing.

(Technically, of course, both Brier and log-score are proper, so they technically give the correct incentive even if your confidence is 99.9999%, but the issue is that you lose very little score if you do the "wrong" thing and round down your predictions to 99% or 95% instead.)

By the way, there exist proper scoring rules that are even more aggressive than log-score, penalizing overconfidence even more. For example, a penalty of sqrt(1/x-1) if you've assigned probability x to the correct outcome is a proper scoring rule, and the penalty goes to infinity as x->0 faster than it does for the log-score.

As a final aside, your formula isn't the usual log-score, so are you even sure that it is proper? Did you check?
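
For what it's worth, propriety can be probed numerically (a grid check, not a proof): a rule is proper if the expected penalty under true probability p is minimized by reporting q = p. A sketch for the sqrt(1/x - 1) penalty mentioned above, with all numbers my own:

```python
import math

def expected_penalty(p, q):
    """Expected sqrt(1/x - 1) penalty when the true probability is p and we report q."""
    pen_yes = math.sqrt(1 / q - 1)       # penalty if the event happens
    pen_no = math.sqrt(1 / (1 - q) - 1)  # penalty if it doesn't
    return p * pen_yes + (1 - p) * pen_no

p = 0.7
grid = [i / 1000 for i in range(1, 1000)]
best_q = min(grid, key=lambda q: expected_penalty(p, q))
print(best_q)  # minimized at q = p, consistent with the rule being proper
```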

Will:

I don't know the mathematical background beyond what I just read on wikipedia so I'm just reinventing it from scratch and I wouldn't know how to check if the rule is strictly proper.

MellowIrony:

> We can probably agree that Adam is the worst predictor in this bunch, then Carl, then Bob.

Can't say I agree with this. My gut reaction is that I'd rather have Carl as my forecaster, and indeed your spreadsheet has Carl beating both Adam and Bob on log score by a little over 0.2 bits per prediction.

The impression I get from your score calculation is that you're trying to reward good calibration, but I find it more useful to think of calibration as a hack to extract probabilities from subjective belief, an instrumental goal in service of accuracy. Bob's perfect calibration gives me no information to adjust my prior away from 50%. "If you lack the ability to distinguish truth from falsehood, you can achieve perfect calibration by confessing your ignorance; but confessing ignorance will not, of itself, distinguish truth from falsehood." (https://www.lesswrong.com/posts/afmj8TKAqH6F2QMfZ/a-technical-explanation-of-technical-explanation)

Will:

50% provides information. There's no reason to expect a randomly selected proposition to be anywhere near 50%. But I get you that making ten 90% predictions provides more bits of information than making ten 50% predictions, and that should be rewarded somehow.

But in order to condense scores into a 1-dimensional spectrum, this reward has to trade off against the reward for correct calibration, and I don't know what's the right ratio for that tradeoff.

John Schilling:

I'm not sure how you'd measure "bits of information" in this context, but ten accurate predictions of the form "with p~90, company X will not go bankrupt this year" is less informative than ten accurate predictions of the form, "with p~50, company Y will see its stock value increase tenfold this year".

Will:

So whether the predictor is really sticking his neck out is a function of the content of the predictions, and can't be scored by a formula that just compares predicted probabilities with binary outcomes. Any absolute-accuracy formula is going to reward people equally for:

* a correct 90% prediction that the sun will rise tomorrow

* a correct 90% prediction that aliens will blow up major cities tomorrow

Seems like absolute accuracy just sucks and can't be fixed.

Market-based relative accuracy could pay the former 1.000001x his wager and pay the latter 1000000x his wager.

Filip Graliński:

This is log loss/log likelihood/cross-entropy commonly used in machine learning.

Nick Allen:

In Predictit you generally won't see good arbitrage above ~94% certainty, because they charge something like 6% when you withdraw winnings, so there are no gains to be made up there.

Will:

The withdrawal fee is 5%, but you can be patient and roll over your money many many times on 97% markets that should be 99.9%, before you withdraw it.
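
A back-of-the-envelope sketch of that rollover strategy, using the numbers above (expected value only; this is my own arithmetic and ignores variance and any separate fee on per-market profits):

```python
# Buy at 97c a contract whose true probability of paying $1 is 99.9%.
price, p_win, withdrawal_fee, rolls = 0.97, 0.999, 0.05, 10

per_roll = p_win / price            # expected multiplier per market, ~1.030
gross = per_roll ** rolls           # compound over 10 markets, ~1.343
net = gross * (1 - withdrawal_fee)  # pay the 5% withdrawal fee once, at the end
print(round(net, 3))                # ~1.275: about +27.5% expected return
```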

RickRoller:

Now that regulators have given the green light to some real-money prediction markets in the U.S., do we think they'll start to incorporate that into policy decisions? I feel like a fully-regulated prediction market is the only sustainable way to create real, skin-in-the-game based information signals.

Level 50 Lapras:

You also need to subsidize them and regulate them heavily enough to actually provide useful signals. Prediction markets aren't magic.

Freedom:

Huh? Sportsbooks are unsubsidized and essentially unregulated and they work great.

Level 50 Lapras:

Those are pure gambling, not prediction engines. And in any case, there are still rules about athletes not betting on themselves or taking bribes to throw a game, etc.

Freedom:

"Those are pure gambling, not prediction engines"

Then why are they the most accurate prediction markets in the world?

"there are still rules about athletes not betting on themselves or taking bribes to throw a game, etc."

It seems like those rules actually make the predictions LESS accurate?

Edward Pierzchalski:

Alternatively/in addition to initial loans, is there a reason why these markets don't do some kind of dynamic margin depending on the implied probabilities of the market?

If I've bought 100 contracts at $0.90, then with ~90% probability I expect a $100 payout and a $10 profit. Given that my odds are so high, the market provider probably doesn't need me to put up $90 in collateral, right? At 'reasonable' 5x leverage I'd put up ~$20, at horrifying-crypto-exchange 50x leverage I'd put up $2.

I can see one issue, which is that if collateral requirements for long positions go down as the price goes up, then this makes it easier to go *even longer*; if the 'correct' price is lower, then the increased (or at least not-decreased) collateral requirements for short positions make it harder for the market to correct.
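
The collateral arithmetic above, as a sketch (the function name and figures are illustrative, taken from the comment; a real exchange would also need liquidation logic):

```python
def collateral_required(price, contracts, leverage):
    """Margin needed to hold `contracts` contracts long at `price` dollars each."""
    notional = price * contracts
    return notional / leverage

# 100 contracts at $0.90: full collateral is $90.
print(collateral_required(0.90, 100, 1))   # 90.0
print(collateral_required(0.90, 100, 5))   # 18.0, the ~$20 case
print(collateral_required(0.90, 100, 50))  # 1.8, the 50x case
```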

James:

One reason why your book review markets are showing probabilities significantly higher than 44% for getting 125 likes is that Yes bettors are incentivized to add more likes.

Anyone who bet Yes would very likely add a like to such a post, and maybe get friends and family to as well (or create fake accounts to add likes, if they really want to win play-money).

This is the problem with using metrics — they often stop being useful when you condition action on them.

One solution is to loop in human judgment: Will Scott believe that this book review was a relative success 1 month after the article was published?

Or, on a scale from 0 to 100, how valuable will Scott judge this book review to be one month after posting?

Or, if you want to be really free-form, you can use our new free-response markets: What will the reader reaction be to this book review? Users would then submit text answers and bet on which answer best describes how the book review was received, and then Scott would choose one winner (or multiple winners). This kind of market could give qualitative descriptions that binary and scalar markets cannot.

hammerspacetime:

This is exactly what I came to the comments section to say; I think it's one of my core objections to prediction markets. I should go and bet a *ton* of money on each of Scott's future book reviews getting 125+ Likes; I'm sure I can convince 125+ subscribing SSC commenters to like those future posts, especially if I tell them to do the same thing and get easy money (albeit not very much easy money by the time 125+ people do this). Now that I know Likes on ACX posts are a thing people make prediction markets about, I think I get even less meaning out of the Like count on a given post than I already did.

I'm not convinced this is evadable even for high-profile predictions like whether or not Starlink gets set up by X date or whatever. Elon Musk is definitely not above delaying the release of Starlink until just after a date in order to make a ton of prediction market money.

David Piepgrass:

How are you going to convince us to 'like' the post, exactly?

Mr. Doolittle:

Scott is a Trustworthy Person, so I don't think he would game the system. I don't think he would actually bet on a prediction about himself or try to gain (reputation/play money) from it. That said, Scott is pretty rare, and seems to care more about prediction markets working than about gaining within such a market. I don't see how such a system scales, especially with real money on the line. Scott could easily game such a system, and it's obvious how he could do so.

I would be interested in hearing how user-generated questions with user-defined payouts would not be overwhelmingly scams if a prediction market became actually popular. This is significantly more concerning if/when real money were involved. There's a very good reason that the US doesn't want to allow these sites to use real money on user-generated questions.

Dirichlet-to-Neumann:

This is only very remotely on topic, but one of the reasons I did not participate in any of the book review markets is that I think "number of likes" is not a very good measure of reader satisfaction. I'd say that number of comments is a much better proxy.

Scott Alexander:

Yes, I noticed that the Sadly Porn review had relatively few likes, but provoked a lot of discussion, and got many subreddit upvotes. I still don't really understand the dynamics here and one day I might try to track different posts' Substack-likes-to-Reddit-upvotes ratio and see if any patterns fall out.

Tamuz Hod:

Might be relevant to note that I was not aware there was an ability to "like" something on Substack until today.

Notmy Realname:

Ditto, and I'm still not clear on the point. Other social media uses likes as a way to demonstrate engagement to the algorithm so the algorithm promotes the content, but to my understanding substack doesn't do that. Is it purely as information to the author? If so that could use clarification.

Sebastian:

I guess it works as an appreciation gesture or some kind of favorite system.

Scott probably has a lot of people like me who don't use substack for anything but ACX and therefore miss the 'social network' features. Reddit likes might be a better indicator, but they're still very dependent on unrelated factors like posting time.

A Wild Boustrophedon:

It actually never occurred to me that my likes meant anything – if I like a popular creator's work on, say, YouTube, it's one of a hundred thousand others and has zero impact. I only use them to train the recommendation algorithm.

Now that I hear that over half of your (book review) posts get 125 likes or less, and also that it's a metric you care about, I'll definitely start liking things I enjoy more.

mb706:

Doesn't Substack give you some engagement metric (unique visits, visit duration etc.) that you can use?

NLeseul:

I think that a "Like" to most people suggests a positive, happy emotional reaction, which I can't imagine all that many readers having to the "Sadly, Porn" review. It's content that a lot of readers had an emotional reaction to and wanted to engage with, but it's a reaction and a type of engagement that "Like" just doesn't really describe properly.

I think Facebook used to have a similar dilemma... if someone posts to say "My personfriend got fired from their job and my dog has covids; life sucks", it feels pretty inappropriate to say that you "Like" that post. Which is probably why Facebook has transitioned to a wider range of reaction emoji now.

I haven't really used Reddit all that much, but I suspect that an "upvote" on Reddit has fewer emotional connotations for people and is more of a neutral "I appreciated this post and believe more people should see it" signal.

Lambert:

I mean, you did force Substack to turn off comment likes here, a move which was generally well-received by the commentariat. I think we just dislike likes.

David Piepgrass:

Was it well-received though? I like likes.

Will:

Even if I like a post, it's kind of a crapshoot whether I'll remember to click the heart, and this might be biased by whether I'm distracted by the comments or reading links. Likes+Comments is probably a better metric than Likes alone.

hammerspacetime:

Ironically, I just found out the other day that getting a very high comment-to-like ratio on a twitter post is called "getting ratioed" and is apparently considered a very negative experience.

Dirichlet-to-Neumann:

Likes are a more direct boost to our ego I guess. But ACX has not exactly the same readership as Twitter...

Mr. Doolittle:

Maybe it's a "more heat than light" phenomenon, and controversial doesn't necessarily mean better?

A lot of Scott's more controversial blog posts have also been among his most liked and more famous, but the connection is tenuous.

leo:

Typo: If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does *Manifold* --> *Metaculus* allow this? They want to incentivize people to forecast.

James:

We are also still thinking about how to get better predictions for long term markets, where you note that incentives are not-so-good, like for Dwayne Johnson's presidential bid.

We talked to Robin Hanson today, and he suggested creating 3 parallel currencies which are used for short term (<1 month), medium term (1 month - 1 year) and long term (1 year+) questions. He says the shorter term currencies would be able to trade in the longer term markets, but not vice versa.

I quite like this solution and think it would work. It's another example of a zero-sum reputation solution.

matt knox:

I sorta think that the loan thing solves for the long-term bets too: if I can bet without tying up any money, I don't need other currencies, as long as I'm willing to bet small.

James:

Yeah, the loan thing is a good idea as well!

Richard Gadsden:

Real-money markets have the same issues with long-term markets. Professional bettors would rather bet on a short term market than lock their money up for any extended period.

The only time you get lots of money in a long-term market is when there is an option whose probability is rising fast. For instance, anyone who predicted that Obama would win the 2008 election prior to the 2004 Convention speech would make a huge profit, as the first probability surge came from that speech.

The only alternative I could think of would be paying interest to locked-up bets, but you'd either have to pay out at the end (which still locks the money up, but does increase the ROI), or you pay out interest on an unsettled prediction (based on the current market value of the prediction) and that incentivises things like "I will flip a coin in 2032, will it come up heads?" as that will stay at 50-50 until resolved and you can just grab the interest.

Richard Gadsden:

One thing that helps with real-money markets is the ability to have "trading bets", where you can cash out of the market at the current price. You get people effectively betting on/predicting the future market price rather than the actual result.

A typical example is that Presidential prediction markets are strongly affected in price by the results of primaries, so if there isn't a market on a specific primary, you can predict the primary by betting on the overall Presidential market just before the primary and then take your profit after the winner's price rises. But you can only do that if you can take two bets, on both sides of the question, and resolve those into a profit that gets you out of the market.

matt knox:

One fairly obvious wrinkle to add to your loan strategy (which I love!) is that you can't transfer a position without first paying off the loan. So if you get a M$10 loan and take a position, you can only make money by selling it for more than M$10. There would be some margin to be had there for folks who trolled around looking for questions and taking mispriced positions, but that would be highly useful! Also, you could fairly easily incorporate that into the ranking: a given user could have at least 3 distinct values by which they could be sorted in a leaderboard: current balance, current gain (absolute or percentage) over money put in, and a weird confidence-interval-looking thing that showed how much they currently owe on loans and the value of their positions.

It's probably necessary, if you have this option, to make every user pay at least a little to get in; otherwise someone could make a bunch of sockpuppets, each of which would make a large number of stupid bets against their main, making it rich.

James:

Yes, I like your analysis of the loans. We'd be somewhat concerned about people printing money with fake accounts, although they can kind of do that today by creating many new accounts that start with M$ 1000.

But overall it's an intriguing idea, and the incentives are pretty good since you can still lose money eventually if you bet the wrong way. Maybe Scott is onto something here.

matt knox:

I'd be tempted to get rid of the starting money (or at least tie it to something expensive-ish to make, like a CC, phone number, etc.), and then gate the loans with an actual small payment.

Thomas Johnson:

Regarding the leaderboard, I think it's important to remember that assessing a forecaster's performance based on a single number has the same problems as assessing an investor's performance based only on the amount of money made. You fall victim to a number of issues, like "did they make all of their money on a single high-payoff prediction that was mostly luck?" or "Were they good only for a brief period of time when there were markets on a specific political event?" or "are they a very high variance forecaster and they just happen to be on a lucky streak?"

You need more sophisticated analysis to answer how good a forecaster is: things like a Sharpe ratio of their profits, or a graph of their winnings over time to show when they were active, would be a start.

For less formal analysis, Kaggle's rating system (https://www.kaggle.com/progression) might provide a good starting point

James:

One way we disaggregate performance is by category. You can see your performance for different subsets of markets by topic. E.g. the leaderboards for just the ACX 2022 prediction markets: https://manifold.markets/fold/acx-predictions-for-2022

Mr. Doolittle:

I used to watch a lot of poker, and it was frustrating to watch some moderate-skill player go all-in early in a game and luck into a giant stack of chips. Once they had a chip advantage, they could lean on other players harder and push them out of the game (or, if the advantage was high enough, force other players to go all-in on near 50-50 odds multiple times).

Essentially, I am agreeing with you that you need more information than a leaderboard to determine actual skill.

gmt:

> they solve this by actually reputationalizing play money profits, which works

It partially works. The problem is that profits are also a function of the money you have. Suppose I have $1 and I know for certain that a market currently at 1% odds will actually come true. At best, I can end up with ~$100. If I have $1000 to start with, maybe there isn't enough liquidity for me to wind up with $100,000, but maybe there's enough liquidity for me to wind up with $3000.

Buying more money allows you to multiply any winnings, so it's not a good judge of how good of a predictor you are. I think a better judge would be to normalize your profit by the amount you've bought, but that unfortunately takes away Manifold's business model.
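
A toy illustration of the point (my own sketch, assuming, unrealistically, no liquidity limits or price impact): raw profit scales with bankroll, while profit as a fraction of money put in does not:

```python
def profit_if_right(bankroll, price):
    """Profit from betting the whole bankroll at `price` on an outcome that occurs."""
    return bankroll / price - bankroll

for bankroll in (1, 1000):
    p = profit_if_right(bankroll, 0.01)
    # Raw profit differs 1000x between the two bankrolls,
    # but normalized profit (p / bankroll) is ~99x for both.
    print(bankroll, p, p / bankroll)
```

In practice liquidity caps the larger trader's multiple, as described above, but raw profit still rewards bankroll size rather than predictive skill.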

Austin Chen:

Yes -- Scott called out Robert McIntyre as #1 on the leaderboard, and (I hope he'd be okay with me saying that) he's purchased M$ before. This does make it easier to post higher total profits on the leaderboard, assuming he's 51% right or better.

This is something that's actually quite tricky to get right, and we'd welcome thoughts that others have on how to set up leaderboards that are meaningful and accurate. Specifically, we're hoping to run a prediction tournament for cash prizes sometime in the future, and we'd want to know how to fairly set that up (eg limit everyone to the same amount of initial buy-in?)

Notmy Realname:

Do purchases made decrease the total profit number?

Austin Chen:

Nope, though they are counted in the personal UI where we show your % profit. E.g. if you'd made M$100 on your initial M$1000, you'd see "+10%"; then when you buy another M$1000, you'd see "+5%" instead.

(Everything ofc is subject to change -- let us know what would be best for your use case!)

Expand full comment
Pontifex Minimus 🏴󠁧󠁢󠁳󠁣󠁴󠁿's avatar

There could be a 2nd score: profit divided by the amount bet. By having multiple scores, a website can incentivize multiple behaviours.

Expand full comment
Maxander's avatar

On the use of positive sum markets to incentivize voting -- there's probably some ideal tradeoff point between number of bids people make vs the accuracy of each guess (i.e. where they stand on the Susan-to-Randy spectrum) which maximizes the amount of real information entering the market. I don't know how Metaculus calculates payouts, but there's presumably also some parameter (or set of parameters) controlling how positive-sum the markets tend to be. This obviously gives Metaculus the ability to tune how generous payouts are in order to optimize user behavior (assuming that people, on average, resemble rational actors enough to vote more readily when payouts are more generous and vice versa.) Of note, they *wouldn't* have this ability if Metaculus bids were denominated in dollars or whatever, since then positive-sum markets simply drive them broke. So perhaps there is a way in which play money markets can be epistemically superior after all.

Of course, this same flexibility could be joined to the benefits of a real-money market by having bids be made in play money which can be exchanged for real money afterwards, at some rate related to the generosity of the payout system in a way that keeps the market operators financially afloat.

Expand full comment
ranaya's avatar

this is exactly what metaculus does. the ratio between "absolute points" and "relative points" approaches 0 as the number of forecasters increases.

Expand full comment
Richard Gadsden's avatar

It's worth pointing out that real-money markets are negative-sum (the market takes a vig).

Also, real-money markets need to get a certain amount of attention (a few thousand dollars) before they are making any sort of useful prediction, which heavily restricts the non-sports markets that are created.

There is far more money on a first round tennis match in an obscure tournament than on any but the biggest non-sporting markets.

This is partly because people like getting results quickly, and sports generate large numbers of results in a short period of time, where most non-sporting predictions require locking up money for long periods of time. In many cases, you have to get in relatively early, as a great many predictions become near-certainties (99-1 propositions) for quite a long time before they are finally resolved.

If you have to lock up money for a long time if you want to be making a meaningful prediction, then the ROI is much worse than sports - but the predictive power of the market would be much more valuable. I think this is a hard problem to solve; if you could earn a lot more money by studying the 30th-50th ranked tennis players and predicting their early round results when they play each other, then many smart superpredictors would be incentivised to do that rather than providing information useful to society.

Expand full comment
Zygohistomorphic's avatar

I have a few observations about Manifold, which you may need to take with a grain of salt because I did manage to lose quite a bit of play money over the weekend.

Assuming I haven't misunderstood their Technical Guide (https://manifoldmarkets.notion.site/Technical-Guide-to-Manifold-Markets-b9b48a09ea1f45b88d991231171730c5), placing bets on Manifold is, in fact, (slightly) negative-sum. Of the bet pool, 4% goes to the market creator as a 'commission' and 1% is 'burned' as a 'platform fee'.

In addition to not wanting to tie up their money for long periods, another reason for people not to correct markets is elucidated here: https://kevin.zielnicki.com/2022/02/17/manifold/ - essentially, because your payout depends on the state of the market at resolution, not only on the state when you place your bid, you get less expected profit if the market moves in the correct direction as people get more information near when trading closes.

Expand full comment
Austin Chen's avatar

You're right on both counts! The platform fee is our attempt at fighting inflation (from users joining or buying M$), and also something we're testing for an eventual hypothetical crypto use case.

We also take Kevin's criticisms fairly seriously, and are considering alternative market-making mechanisms such as LMSR; see the discussion here: https://manifold.markets/kjz/will-manifolds-developers-agree-wit

Expand full comment
James's avatar

Note: 1% and 4% fees are on trader profits, not the bet pool. It is still technically negative sum though!

Expand full comment
Roger Sweeny's avatar

Sports books try to set a "line" so that half the bets are on one side and half on the other. Over time, the line changes to keep things that way, so it is a kind of prediction market. Since the book always takes a cut, the market is negative-sum for the bettors.

Expand full comment
Sioned Baker's avatar

I am very interested to see how positive-sum systems resolve the issue of Randy-strategist bettors because of the implications for what I think is the most interesting model: positive-sum for-profit prediction markets. The idea is simple: by offering a game where the house always loses, you incentivize people to forecast. Your actual business model will be making money off of having some knowledge of the future, as generated by the wisdom of crowds.

Expand full comment
Notmy Realname's avatar

I'd love to hear why the longterm prediction issue you highlight here, which is also pretty apparent to anybody who's used a real money or limited play money prediction market, is solvable.

If a prediction market can't successfully predict if the Rock will be president in two years, how can we expect prediction markets to predict if a new virus will be an issue in two years, if a potential war will be considered a good idea in two years etc.? If the idea is that these markets are accurate due to incentivizing and rewarding correctness, how can real world decision makers lean on prediction markets for their predictive value if nobody bothers to show up due to the inherently low annual return?

I am very worried that people are so fond of a decision making tool that by design only seems capable of extremely short term thinking.

Expand full comment
LGS's avatar

"Far from being a subsidy - money which it is easy for other people to get - this feels like smart money - money that other people should be scared to bet against. So how does this open the market at all?"

Scott, if you don't think that the market can beat your own prediction, why are you even betting in the first place? What is the point of asking the market for a prediction on your life events if you don't think they can outperform you?

(I agree they likely can't outperform you, but then again, I think that asking the market for predictions about your life events is stupid. You seem to think it is not stupid, so I don't understand your thought process here.)

Expand full comment
Mr. Doolittle's avatar

I also would like to know Scott's thoughts here. He seems very interested in user-defined questions, which makes sense to me on some level, but obviously is including very personal questions like "will this person get married?" that seem very poorly tuned for this kind of market.

The usefulness of prediction markets seems to only come from questions big enough to generate and utilize different perspectives such that biases are squeezed out in aggregate. If you take away the multiple perspectives (really, how many different perspectives can genuinely exist about whether someone will [personal life choice]?), then you're just having different people bet about the limited and probably known information going into it (i.e., the potential spouse's brother betting on the marriage question).

Expand full comment
soren's avatar

For the conditional prediction market on book review likes, it seems a bit like the problem with assassination markets where market participants can affect the outcome themselves: people can increase (but not decrease) the chance of any book passing the like threshold by just liking the post themselves or with sockpuppet accounts - 125 likes is not hard to achieve. That asymmetry seems like it could inflate probabilities a bit.

Or maybe Scott realized this and it's a 5head play to increase likes on his twitter posts.

Expand full comment
Belobog's avatar

Scott, I think there may be a fundamental disconnect in people's (or at least my) understandings of prediction markets, because you didn't mention what I think is the most obvious reason the book review predictions are all positive: people think that you're most likely to review the books that have the highest probability of getting the required number of likes. Thus, they see these not only as predictions, but also as an opportunity to pay you to review books. Honestly, when you first announced the predictions, I thought that was your intention, "get involved in prediction markets, and you might get to pick what book I review next!" Even if that wasn't your intention, the fact that the bet going one way or the other will reward people via a side channel rather than purely with the play money seems like it will always have a distorting effect, and will be unavoidable for prediction markets. It seems like the more that real life decisions are based on the results of prediction markets, the worse this distortion will become.

Expand full comment
Melvin's avatar

Now that Scott has signalled that he cares about, and is paying attention to, the number of likes that his posts get, he'll probably get more likes overall. It never would have occurred to me to press the like button before, but now I'll probably start doing it (but only on the posts that I like more than the median ACX post, which seems to be the optimal strategy)

Expand full comment
Sebastian's avatar

I feel I picked up a subtle hint in the last posts, so I'm gonna go for it: What is your Metaculus score?

Expand full comment
darwin's avatar

>Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money profits, which works:

Say that you earn 1% returns on your predictions. Your profits would still be 10x higher if you spent 10x more real money, so you had more play money to invest, right?

Meaning, this leaderboard is still pay to win, even though you can't win by being below average - unless I'm missing something?

Expand full comment
Mahath's avatar

> even though you can't win by being below average - unless I'm missing something?

Multiple accounts with contradictory predictions.

Expand full comment
Mr. Doolittle's avatar

Scott mentions ~200 active players on one of the markets. Creating a handful of extra accounts can probably be a major factor in specific questions.

Expand full comment
Jared Frerichs's avatar

Thank you for the article. I prefer how Metaculus awards points for a true and false statement. Part of being successful at predictions is to continue being around to predict, even if you were not on the money.

Expand full comment
Spiny Stellate's avatar

Why can't a play money market offer separate blocks of money for each quarter of market resolution times? So the "Dwayne Johnson for President" money would be in an Autumn 2024 block (which all users would get, like with every block) but which could only be spent on questions that resolve in Autumn 2024.

Then no one would worry about tying up this money because it could only be used for things that paid out in Autumn 2024 anyway, so you would just use it whenever a good market came along, in any year, provided it resolved in Autumn 2024.

And ideally you'd be able to sell again (if someone wanted to buy at your offer price) at any time, but this is not entirely necessary.

Expand full comment
Xavier Moss's avatar

A few weeks ago you posted an article about 'why you suck', and while that was obviously hyperbolic I would point to 'Mantic Monday' as a reason I read you less and less. When I used to see an SSC article, I was excited – when I see an ACT article, I think 'I hope it's actually something interesting,' but usually it's something like this.

The blog has largely shifted from articles applying the rationalist perspective to wider issues to articles written exclusively for the rationalist subculture and its status symbols and obsessions. This is fine! It's your subculture after all and you're one of its most prominent members; there's no reason for you not to write what you care about. But for me, this article really is about its title – the mechanics of using play money to gain reputation within a subculture just isn't particularly interesting.

These Monday posts never teach me anything about Russia or Ukraine. Knowing the aggregate opinion, even if it were accurate, isn't interesting without a discussion of the underlying reasons for the predictions. It's the equivalent of writing 'stocks go down on Russia news,' then a paragraph on which companies went down in the Dow, then one on the Nikkei, then one on the FTSE, etc., etc. Interesting only for people playing that game, and at least they play with real money, albeit usually not their own.

I don't mean this as a criticism, obviously this is interesting to others, but if that previous post was genuinely interested in how your blog is seen to have changed, the proliferation of inside-baseball rationalist articles is the big reason why I check the blog, but skip most content and don't pay to subscribe.

Expand full comment
Level 50 Lapras's avatar

I agree completely.

Expand full comment
Scott Alexander's avatar

I think of this as something I do in addition to everything else, not replacing it, which makes me less concerned about this. I used to aim for two effortposts a week; now I usually have two effortposts plus a Monday post on something inside basebally.

Expand full comment
tailcalled's avatar

I'm thinking conditional prediction markets would probably be improved a lot if you could use the same money to bet on each of the conditions (whenever the conditions are mutually exclusive).

Expand full comment
Carl Pham's avatar

It's fascinating that your solutions for fixing what you see as errors in implementation that allow the market to be perverted for unrelated individual motives all rely on knowing what the prediction market *should* be predicting. Do you have sufficient faith in the result of fixing these errors that you'll then be able to accept on pure faith in the mechanism the predictions that *can't* be checked (by your instincts)? Because presumably to have any real value the prediction markets have to make predictions which can't be checked. How do you get to a place where you have sufficient faith in predictions that can't be checked that you bet important things -- real money, in large amounts, career and/or life choices? Do you just keep tweaking the system until it gives you results that you like for predictions that *can* be checked?

Expand full comment
c1ue's avatar

Thank you for the considered overview of different theoretical prediction markets, and a somewhat deeper examination of 2 specific ones.

However, you don't address the fundamental value proposition of prediction: how can a prediction market weed out junk? Ukraine invasion is an example put forward by Scott Alexander in another article on prediction markets; my response was that each and every single prediction in that space is crap because there is NO ONE who can possibly have any relevant information on the matter outside of Putin and Shoigu.

Even the supposed "intelligence experts" and the POTUS have been constantly predicting invasion only to be shown wrong.

I wonder greatly if this entire prediction market thing is an outgrowth of efficient market theory - the largely garbage macroeconomic meme that somehow anyone and everyone in a free market is fully informed on everything and makes the right decisions.

In my view, GIGO - and enabling betting doesn't change that.

Expand full comment
Mahath's avatar

Well, in some cases you may wish to produce a better garbage out of garbage inputs.

"Predicting, especially future, is hard"

Expand full comment
c1ue's avatar

Is better garbage not still garbage?

Or are you saying you can pluck pearls from shit?

Expand full comment
Mr. Doolittle's avatar

Biden may or may not be wrong in his thoughts, which is a complication here. Misdirection really exists, and both Putin and Biden are using it heavily. How would the rest of us ever know what's real and what's misdirection? Add to that actual mistakes and lack of knowledge, and the majority of us are going to look foolish in our predictions.

Last week Scott was talking about the smart money betting on war in Ukraine. War didn't happen last week, when it was predicted by much of the media and a good bit of the "smart money." I'm not sure we will ever know exactly why, or how likely war was or was not last week. It may still happen, or it could all be an elaborate ruse. Either way this whole scenario feels like a pretty hard hit against prediction markets.

Expand full comment
c1ue's avatar

Misdirection or not - repeated highly certain statements of "Russia will invade on XX day" which repeatedly do not pan out - do not seem to be a builder of confidence or affirmation of competence/expertise.

Expand full comment
A Wild Boustrophedon's avatar

Let's assume – utterly unfairly – that Robert McIntyre is cheating. What might he be doing?

It looks like you get M$1000 free when you create a Manifold account. This means I could create a dozen or so sock-puppet accounts and have my main account bet against them to farm play money. I don't know how much you can make off a given position, but Robert has M$6719 profit which is a pretty small multiple of the joining bonus.

On Reddit, even back when karma wasn't valuable in real-money terms, upvote bot rings were endemic. Both Reddit and Manifold attract a large number of coders with time on their hands who like winning at things.

Am I missing something here? Does anyone with more Manifold experience know a reason this wouldn't work?

Expand full comment
Marvin's avatar

>so far, none of them actually produce any kind of a reputation. By this I mean something like: if I claim “I have an IQ of 160” or “I can bench press 300 lbs”, people might be impressed by me.

It is true that these systems do not produce social reputation, but this is a metric that doesn't really help us in designing a better reputation system. So, I think we should instead try to model "professional reputation", similar to the type of reputation that is vital for scientists. (but not too similar, we shouldn't copy the mistakes that led to the "publish or perish" or in our case the "quantity over quality" culture.) You note that the problem with markets is that they are not "strategy proof" ( https://en.wikipedia.org/wiki/Strategyproofness ), i.e. revealing your true prediction is not always the best strategy to gain the most points. However, markets can get away without this property because they are efficient to a certain degree. (I guess we could say e.g. Polymarket is Pareto efficient ( https://en.wikipedia.org/wiki/Pareto_efficiency ) in the sense that if we consider the outcomes of prediction as "goods", users can always "trade" with the market (by making a prediction that differs from the current market consensus) when they have goods they do not like. But I'm not sure if this notion of efficiency is relevant in this case)

Can we design a reputation system that both incentivizes making truthful predictions (in particular, disincentivizes not making predictions when you believe you can make an accurate prediction) and accurately measures prediction strength? If we want strategy-proofness, we cannot measure reputation purely on the prediction outcomes. The reputation of a scientist as a researcher is not primarily based on the number of papers published or grants obtained, but on the degree to which other scientists trust this person.

We can try to measure trust as a separate metric besides reputation on prediction markets as follows (all numbers and percentages are made up): for a limited number of times per day, any user can choose to "trust" another user and immediately gains 1 reputation for the effort. If user A "trusts" user B, user A obtains 10% of the amount of reputation user B gains on the first prediction user B makes after being trusted. We keep track of every time someone trusts someone else to build a trust network. By using a network ranking algorithm such as PageRank ( https://en.wikipedia.org/wiki/PageRank ), we obtain a "trust score/ranking" for each user, which is based on how often other users would expect you to make a correct prediction. To encourage users to become trusted, we can give periodic reputation awards for being highly trusted.

Of course, in practice, we may trust certain people only when they make predictions within a certain domain, so it may be useful to "trust" someone only for the next prediction on a question in a certain category. Another issue may be that many people simply only trust the highest ranked person. I'm not sure if this is bad. This is a pretty naive action that is likely more common with a low-trust user, and therefore does not generate much trust. And if someone somehow happens to be almost always right, well, then they should be highly trusted, of course.
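The trust-ranking idea above can be sketched with plain PageRank power iteration over a directed "A trusts B" graph. All users, edges, and parameters here are invented for illustration; this is not how any existing platform works.

```python
# Toy sketch of the proposed trust ranking: score a directed "A trusts B"
# graph with PageRank power iteration. Users, edges, and parameters are
# made up for illustration.

def pagerank(trust_edges, damping=0.85, iters=100):
    """trust_edges: list of (truster, trusted) pairs."""
    nodes = sorted({u for edge in trust_edges for u in edge})
    n = len(nodes)
    out = {u: [b for a, b in trust_edges if a == u] for u in nodes}
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    new[v] += share
            else:  # user who trusts no one: spread their weight evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# carol is trusted by three users, dave only by carol, so carol ranks highest
edges = [("alice", "carol"), ("bob", "carol"), ("carol", "dave"), ("dave", "carol")]
ranks = pagerank(edges)
```

One nice property of a recursive ranking like this is the one hinted at in the comment: being trusted by a highly trusted user counts for more than being trusted by a fresh account, which blunts the naive "everyone trusts the #1 person" failure mode.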

Expand full comment
Isaac King's avatar

> Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money profits, which works

Isn't this still game-able? Just buy lots of play money and bet it randomly on a bunch of different questions. The leaderboard appears to track total winnings, so you can still pull ahead of others by investing enough money to have larger payouts on the bets you do win. In order to solve this, the leaderboard would need to track something like winnings/investment.

Expand full comment
Nick Allen's avatar

The biggest problem with positive-sum is that it incentivizes people to bet based on the expected value of the bet instead of their best estimate, and thereby degrades the informational value of their bet.

This is probably also why nobody brags about their metaculus score: they sense that rewarding gamesmanship degrades the usefulness of the market.

Expand full comment
Mr. Doolittle's avatar

Betting against the market on things that are mispriced may be a great way to make money, but it doesn't have the same feel that Scott seems to want from Superforecasters, which is the understanding that when they bet on things it's more likely to be true than a normal user.

Expand full comment
Nick Allen's avatar

Bit of a chicken-and-egg problem there.

I would like to see the aggregate update based on the amount wagered adjusted by the predictive success of the wagerer, though.

Expand full comment
ranaya's avatar

the expected value of a metaculus bet is always higher when you bet the "true odds".

taking the current ukraine market, you get +57 for yes and no if you bet 60%. let's suppose your true belief is with the community, that it's 90% likely to be yes. if you bet 90%, your payout would be: +136 for yes, -215 for no. if the "true odds" are 90%, then the EV is (0.9 * 136) + (0.1 * -215) = 101, vs the 60% bet would be 57 points. now, if the "true odds" actually were 60%, a 90% prediction would be (0.6 * 136) + (0.4 * -215) = -4 points.

this isn't just the case of this one example, it's how the whole platform works.
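The arithmetic above checks out. The payout numbers (+136 if yes, -215 if no, for a 90% forecast) are taken from the comment itself, not derived from Metaculus's internal scoring formula, so this only verifies the expected-value step:

```python
# Check the expected-value arithmetic in the comment above. The payouts
# (+136 / -215 for a 90% forecast) come from the comment, not from
# Metaculus's actual scoring formula.

def expected_points(p_true, payout_yes, payout_no):
    return p_true * payout_yes + (1 - p_true) * payout_no

# If the true odds are 90%, a 90% forecast nets ~101 points on average...
assert round(expected_points(0.90, 136, -215)) == 101
# ...beating the flat +57 from hedging at 60%.
assert expected_points(0.90, 136, -215) > 57
# But if the true odds are only 60%, the aggressive 90% forecast loses ~4.
assert round(expected_points(0.60, 136, -215)) == -4
```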

Expand full comment
Nick Allen's avatar

And for people who want to put in the time to develop high-quality guesses, that's the dominant strategy, but the point is that positive sum is easily gameable for reputation, which is likely connected to the fact that nobody brags about their reputation.

Expand full comment
Straw's avatar

Conditional prediction markets with mutually exclusive conditions should allow you to bet the same money in each of them- multiplying the available capital by the number of conditions.

Expand full comment
Mike's avatar

On the book review markets, I think there is less incentive for people to bid down because you've stated (and people also implicitly know) that the chance of the market resolving either way is dependent on how positive it is. So bidding down doesn't just lock up my play money for a year, but also lowers the odds the market will resolve even if I am correct! Though I think this problem would also be fixed by the interest-free loan idea.

Expand full comment
Iz's avatar

Hope this is appropriate to post here: If anyone here is involved with Polymarket, I see that they’re hiring an engineer and would love to interview with them.

Expand full comment
Milli's avatar

That you told people that there is a "like" button might make posts before this less comparable to posts after.

I for one did not know one could "like" posts.

Expand full comment
Daniel Tilkin's avatar

Hmm, could Manifold do some sort of linked conditional markets? Where only (at most?) one market resolves, and the rest are N/A. That way you could put M$100 into the linked markets, and bet it on as many of them as you want. Since only one resolves, the most you could lose is M$100, so this doesn't risk anyone going negative.

Here, it might be "If Scott reviews this book FIRST of the books in the group, will it get 125 likes".

This doesn't resolve the issue of likes being gameable, and thus not an ideal resolution criteria.

Expand full comment
Level 50 Lapras's avatar

I think the best way to look at prediction markets is that you (as market maker) are offering a bounty for information. People will participate in the market if they think the share of the subsidy they can claim thanks to their personal knowledge is worth the cost of the time required to investigate the question.

Of course, this also shows why there are thorny legal issues surrounding them. In the traditional financial system, insider trading is considered wrong and illegal. However, insider trading is basically the entire raison d'etre of prediction markets. They're a decentralized leak bounty (at best - at worst they turn into straight out bribes if decision makers participate).

Expand full comment
Lambert's avatar

I think what's going on here is that money, pretend or otherwise, has two purposes in a prediction market that don't quite line up. One is to provide an incentive structure and the other is to ration market-making power.

Also I think it's a good idea for play money markets to stay reasonably close to how irl money works for reasons of verisimilitude. We want to simulate a real money market so we hopefully find any pitfalls before they lead to financial ruin.

Expand full comment
qbolec's avatar

I feel confused about the "locking money for 2.5 years" issue. Assume that people are willing to wait 1 year, but not 2.5 years. Wouldn't that imply that one year before resolution the price will become correct? But then wouldn't it imply that people who anticipate that fact will realise that buying at the correct price two years before resolution and then selling one year before resolution will give them the profit they want in one year? But then wouldn't it imply that the price will become correct two years before resolution? And then people who realise this fact will start investing three years before... etc., etc., and by induction the price should be correct now?

Expand full comment
Wasserschweinchen's avatar

If, e.g., I expect to be able to make 10% a year on other investments, I will not enter into a two-year investment that I expect to make less than 21% on. This would seem to imply that the pricing of long-term bets is likely to give inaccurate predictions.
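The 21% figure is the 10% annual hurdle rate compounded over the two-year lockup, i.e. a bet tying up capital for $T$ years must beat:

```latex
(1 + r)^T - 1 \;=\; (1.10)^2 - 1 \;=\; 0.21
```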

Expand full comment
qbolec's avatar

I understand this. The part I don't understand is: why one would have to wait till the bet resolution, as opposed to sell as soon as the price gets high enough.

Expand full comment
Wasserschweinchen's avatar

Oh, OK, then you and I have the same objection to the article :)

Expand full comment
John Schilling's avatar

The volume of existing predicting markets may be insufficient to reliably support that sort of secondary market.

Expand full comment
Ben Campbell's avatar

Ability to sell positions is indeed one of the (partial) fixes. The markets that suffer from this problem most are the ones that don't have the option to sell positions after going on record.

Expand full comment
Ninety-Three's avatar

"They buy “yes” on books they like, but don’t buy “no” on books they don’t like, because that would be against the imaginary rules for the voting that they are falsely imagining this to be."

Are you sure those aren't the actual rules? You took a group of people who did not care about Manifold play money and told them that they could convert it into increased chance of you reviewing a book, they proceeded to exchange the worthless currency for the one they valued. Yes dominating no is a product of the seven-way race: why cast a no vote on every book except the one you want when you could cast six yes votes on the one you do?

Expand full comment
H.....'s avatar

You could get around the "resolves N/A" thing by asking: "Will I review X *and* will it get at least 125 likes?". Then a vote for "yes" is a vote of confidence for X, and a vote for "no" is a vote of no confidence, but if you don't end up reviewing X then it resolves "no".

Expand full comment
Brian Pansky's avatar

"Still, part of me wishes that reputation systems could actually give someone a good reputation"

This is backwards. A good reputation should give someone a good rating in the reputation system, not the other way around. Otherwise the reputation system is fallaciously circular. But, yes, as a record of reputation, it would only be valuable if a good (recorded) reputation also had real consequences.

"why would anybody want play money? The obvious answer is that it’s a reputation system in disguise"

This is backwards. Your "reputation system" is a play money system in disguise (it is not a reputation system).

Expand full comment
George H.'s avatar

Huh, I wanted to vote for Rene Girard, and then looked at all the odds and bet the negative of everything. (If you review Nixonland I might lose...) Having bet negative on everything, I won't be hearting any of the above book reviews, well unless it goes above 125 and I did like the review a lot. (I want to say that in general, positive feedback is bad for a control loop... and more people should build control loops to understand that simple fact.... our first job is to get the sign (+/-) right.) If more than 125 people vote that some review will get a heart, then it's almost guaranteed to be true.... hmm I may have bet wrong.

Can I ask something? If you don't review the book I just get my 'money' back is that right?

(I'm selling a lot of no votes...)

Expand full comment
Ninety-Three's avatar

I think the ~90% odds of 125 likes might be correct. Your normal book reviews rarely do so well, but if a bunch of people have play money riding on the outcome, all it takes is one of them caring enough to stuff the ballot. The average bettor doesn't even need to intend to stuff the ballot; merely noticing that there are a lot of bettors should cause them to expect a significant chance of ballot-stuffing and adjust their bets accordingly.

Expand full comment
Muncle's avatar

I think prediction markets could borrow some ideas from perpetual futures markets (I'll preface this by saying I don't have much experience with trading prediction markets, so maybe this is already available or doesn't work for obvious reasons):

* First, you need the ability to trade in and out of the markets at any time - this means you can trade events that change the probability, instead of just the final outcome. I.e., if you think Putin's speech will raise the odds of Russia invading Ukraine, you can buy in at 70% probability and get out profitably an hour later at 80%. This lets you deploy your information in concentrated time periods.

* Second, as you already say, you need leverage - this is how people can make money on low volatility markets like foreign exchange, where the prices move by basis points. Let's say you have $100 in your account. You could bet $500 on a given market for 5x leverage. Now, if the probability increases by 20%, you will have doubled your money. On the flip-side, a 20% drop will get you liquidated, so you need to manage your risk.

* The final piece of the puzzle is what's called cross-margin. This means that any position you hold counts towards your margin - let's say you start with $100 in your account. You bet $50 on some market. While the price of that specific market stays the same, you would still have $100 in your account - $100 + $0 profit. If your bet turns out well and doubles in value, you would now have $150 of margin, all without closing the position. At the same time, you could go and open as many other positions as you like, and your available margin will always be the cash balance you have plus the sum of all profits and losses from your open positions.

All this combined lets an informed trader make much more money on small short-term moves, requires much less capital for the same absolute returns, and lets you construct a portfolio of uncorrelated bets, which also boosts your returns if done correctly - solving the problem of small returns and long time-frames.

This is all adding risk, of course - the more leverage you use, the quicker you can get liquidated, and similarly, if you bet on correlated markets they can all move against you at the same time. However, all of this favours more informed participants at the expense of lay people, which is ultimately what you want if you want to get to a point where people are doing this professionally.
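The leverage and cross-margin arithmetic above can be sketched in a few lines. This is a toy model, not any real exchange's API: the class and method names are made up, probabilities are in whole percentage points, and the payoff is linear in probability points, matching the $500-at-5x example.

```python
class CrossMarginAccount:
    """Toy cross-margin account: equity = cash + sum of unrealized PnL
    across all open positions, with no money locked per position."""

    def __init__(self, cash):
        self.cash = cash
        self.positions = []  # dicts: stake, entry, mark (percent, 0-100)

    def open(self, stake, entry_pct):
        # Open a position at the current probability; nothing is deducted.
        self.positions.append({"stake": stake, "entry": entry_pct, "mark": entry_pct})

    def mark(self, index, pct):
        # Update a position's mark to the market's latest probability.
        self.positions[index]["mark"] = pct

    def equity(self):
        # A position's unrealized PnL is stake * (move in probability points).
        unrealized = sum(p["stake"] * (p["mark"] - p["entry"]) / 100
                         for p in self.positions)
        return self.cash + unrealized

    def is_liquidated(self):
        # With 5x leverage, a 20-point adverse move exhausts the equity.
        return self.equity() <= 0


acct = CrossMarginAccount(cash=100)
acct.open(stake=500, entry_pct=70)   # 5x leverage on a 70% market
acct.mark(0, 90)                     # +20 points doubles the account
print(acct.equity())                 # 200.0
acct.mark(0, 50)                     # -20 points wipes the equity out
print(acct.is_liquidated())          # True
```

Real perpetual-futures exchanges add maintenance-margin buffers and funding rates on top of this; the sketch only captures the core identity that margin = cash plus open PnL.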

Expand full comment
GaBeRockKing's avatar

Allowing leverage/options contracts would get even better if it was something anyone in the entire market could do, and you got leverage from specific people. That would introduce the meta-game of predicting which predictors are the most efficient based on their score and previous guesses, and consequently giving their predictions more weight over the entire market (at the cost of guessing wrong, losing a bunch of money, and then both you AND the person you sold leverage to going bankrupt and ceasing to influence the market).

Expand full comment
David Piepgrass's avatar

What if you could get play-money "leverage" for betting "yes" when the probability is high, or when the question closes a long time in the future, or extra leverage if it's both?

Expand full comment
Emil O. W. Kirkegaard's avatar

There is a proper-ish Metaculus leaderboard. It is based on points per question, so just spamming bets across questions at random may accumulate many points, but not points per question. I'm currently sitting in the top 45 of the 200 people on that list. It seems to only include people who participated in at least some minimum number of questions or attained some minimum number of total points. I can't tell how many users Metaculus has in total, so it's hard to say how elite this leaderboard is, but it is interesting nevertheless.

It is still a problematic measure since some kinds of questions give more points (longer running, continuous), so one can play strategically by participating mainly in those to get a higher point gain per question. One could probably get around this issue with some adjusting, but it's not been done, and the data from this leaderboard doesn't seem public, so I can't give it a try myself.

https://metaculusextras.com/points_per_question

Expand full comment
Tilia's avatar

If it is play money, you could implement a system where you don't pay the cost of the bet until you sell or the bet is resolved. Then no money will be locked up in long predictions. Combine this with a maximum investment per market tied to your current money pool, to get the effect that people who have done well previously have more leverage in future bets.

This would only work with play money, though, because it would cause lots of people to go negative, which someone would have to pay for if it were real money - but that's not a problem with play money.

There should probably also be some way to declare play-money bankruptcy, where if you go permanently negative you get to start over after a time-out, possibly with progressively longer time-outs every time you screw up.
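A minimal sketch of this deferred-settlement scheme, with everything made up for illustration (the class name, the 25%-of-pool cap, the 2x payout, and the doubling time-outs are all assumptions, not anything Manifold actually implements):

```python
class PlayMoneyBettor:
    """Toy deferred-settlement account: nothing is paid when a bet is
    placed; the pool only moves at resolution, and the per-market cap
    scales with the current pool."""

    def __init__(self, pool=100):
        self.pool = pool
        self.open_bets = {}     # market name -> stake
        self.bankruptcies = 0   # drives progressively longer time-outs

    def max_stake(self):
        # Assumed cap: 25% of the current pool, so good past
        # performance buys more leverage on future bets.
        return 0.25 * self.pool

    def place(self, market, stake):
        if stake > self.max_stake():
            raise ValueError("stake exceeds per-market cap")
        self.open_bets[market] = stake  # no money locked up yet

    def resolve(self, market, won, payout_multiple=2.0):
        stake = self.open_bets.pop(market)
        # Only now does the pool move: gain on a win, pay the stake on a loss.
        self.pool += stake * (payout_multiple - 1) if won else -stake

    def declare_bankruptcy(self):
        # Reset after a time-out that doubles with each bankruptcy.
        timeout_days = 2 ** self.bankruptcies
        self.bankruptcies += 1
        self.pool = 100
        return timeout_days


b = PlayMoneyBettor(pool=100)
b.place("russia-invades", 25)          # pool still 100; nothing deducted
b.resolve("russia-invades", won=True)  # 2x payout settles only here
print(b.pool)                          # 125.0
```

Because losses are only charged at resolution, the pool can indeed go negative, which is why this bookkeeping only makes sense with play money, exactly as described above.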

Also, points or play money don't have to have real value. Lots of humans like seeing numbers go up, even if those numbers don't mean anything. Lots of humans play computer games, which also don't give bragging rights.

When you get to real money, there is a difference, because then people can really do predictions full time. If there is no money, it can't be more than a hobby. In this case I expect imaginary internet points are as good a reward as any.

Expand full comment