247 Comments
Moose's avatar

In my experience trading as a hobby, kalshi's resolution system is a lot better than polymarket's. Kalshi rules are written a lot better as well, at least on average.

Harry's avatar

Yeah, the polymarket resolution system is a nightmare. My buddy recently went way down the rabbit hole and tried being a UMA resolver. (UMA is the crypto token that gives you voting rights to resolve polymarket disputes)

(Here is an AI write-up of our sprawling email thread on it that he agreed to let me publish: https://7goldfish.com/articles/Broken_Truth_Machines.php )

The short answer is: The price to control the decision market is *way* lower than the price of the things being decided now. It's concentrated in a very small number of hands, and at least one guy with like 25% is clearly a bad actor.

Por Poisson's avatar

"Bonus question: Is there a way to simplify this so that we don’t have to run all four markets?"

https://blog.monad.xyz/blog/self-resolving-prediction-markets

Scott Alexander's avatar

I don't understand how this avoids Keynesian beauty contest dynamics.

If everyone else in the world thinks Democrats have a 10% chance of winning, but I think it's 90%, and it's my turn, and the current probability is 10%, and I bet at 90%, then the next person to come after me bets it back down to 10%, the final predictor bets 10%, and the person who came after me gets my money. So why should I bet my true belief instead of the expected market consensus?

Por Poisson's avatar

I think it rests on the idea that those should be the same thing. And since stopping is random, no one ever knows who the last person will be.

I think the way it purports to avoid the beauty contest dynamics is that the equilibrium has ~0 payoff, since the method only pays you if you add information. If everyone bets the expected market consensus, that strategy pays off in the Keynesian beauty contest case, but here the probability never changes and no one makes money. So it's more of a safeguard plus an incentive toward the better equilibrium.
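A toy model makes the "no information added, no payoff" point concrete. To be clear, this is a generic market-scoring-rule sketch, not the exact mechanism from the linked post; the function names and numbers are made up for illustration:

```python
import math

def log_score(p, outcome):
    """Log score of a reported probability p for a binary outcome (1 = YES)."""
    return math.log(p if outcome == 1 else 1 - p)

def sequential_payoffs(reports, prior, outcome):
    """Each trader's payoff is their score improvement over the previous
    report (the standard market-scoring-rule setup)."""
    payoffs = []
    prev = prior
    for p in reports:
        payoffs.append(log_score(p, outcome) - log_score(prev, outcome))
        prev = p
    return payoffs

# If everyone just repeats the consensus, nobody moves the price
# and nobody earns anything.
print(sequential_payoffs([0.10, 0.10, 0.10], prior=0.10, outcome=1))  # all zeros

# A trader who moves the price toward the realized outcome gets paid;
# whoever moves it back away from the outcome loses.
print(sequential_payoffs([0.90, 0.10, 0.10], prior=0.10, outcome=1))
```

Under this kind of rule, betting the consensus earns exactly zero, so the only way to profit is to move the price toward the truth before anyone else does.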

darwin's avatar

>Just before the secret operation that captured Maduro, someone placed a mysterious $32,000 wager placed on YES.

I feel like this is underselling things; if the first set of Google results are correct (let me know if they're not), the bettor won over $400,000.

$400,000 isn't what it used to be, but it's still a nice chunk of change. I'm pretty sure Trump has changed public policy for smaller bribes than this; I certainly wouldn't be surprised if there were some people in the Defense department willing to change what advice they give for this amount.

This brings me back to my big fear about actually-ubiquitous, actually-big-money prediction markets: with no way to keep central actors out of the market, incentives to manipulate the market can swamp incentives around the thing the market is predicting. Once that happens to enough things, reality itself becomes anti-inductive.

Scott wrote about how stock markets are anti-inductive; oversimplifying, if it's possible to know something about what they will do, investments will change to take advantage of that predictability until it's not doing that anymore.

https://slatestarcodex.com/2015/01/11/the-phatic-and-the-anti-inductive/

Right now, people who have to make a hard decision have to weigh up the costs and benefits of each option, and choose the best one. People aren't perfect at doing this, but on average they trend towards good decisions for self-interested reasons.

With prediction markets in play, suddenly you have an added incentive, which is 'do the maximally surprising thing so you make more money when you bet on yourself to do that.'

The maximally surprising thing is probably not the best decision most of the time! That's why it's surprising!

I've seen a lot of responses to this objection, but nothing that I really buy. I certainly have no proof that Maduro was captured *because* someone involved wanted to make $400K; I don't even think it's a very likely hypothesis at this point. But the fact that it's *possible* and *plausible*, and that there's nothing to prevent it going forward, and that the trends are only towards making it easier and more profitable, should be cause for concern.

People like to say that ASI doesn't have to hate you to kill you, it's just big enough that if it's indifferent to you, anything it does is likely to destroy you incidentally.

Financial systems can be that big and that indifferent, and the same reasoning applies.

Capitalism is at least theoretically aligned to consumers to some extent, since they get to decide what to buy.

Prediction markets aren't. If they get big enough, they will influence and change the world the same way any other system of incentive gradients does. But they have nothing inside them that naturally points them towards incentivizing good outcomes. They're unaligned, and getting bigger.

Shankar Sivarajan's avatar

> 'do the maximally surprising thing so you make more money when you bet on yourself to do that.'

Wouldn't this eventually be self-correcting, with traders pricing in this possibility, reducing the amount of money "central actors" can win by betting on themselves doing surprising things, thereby making them less likely to do it?

Arie's avatar

Every bet has two sides. So you can always not do the thing and make money that way instead.

kenakofer's avatar

Right. And the equilibrium is that manipulable markets stay lower liquidity as smart money stays away.

darwin's avatar

Sure, but even under 100% efficient/ideal conditions, that just means there's an equilibrium where the expected payout for the next dollar invested in a manipulable market is $1.

That equilibrium isn't actually the same thing as 'there's not enough money in the market to incentivize people making the wrong decision'. It's a cap on how strong that incentive is, but the incentive still exists.

I feel like we have enough evidence of this already. It's not like no one bets on sports, just because the athletes could take a dive to fix the game. That's actually been a huge problem with sports and sports betting since time immemorial, and we don't solve it with market forces, we solve it with incredibly tight scrutiny and penalties towards athletes who try it. Something that doesn't/can't exist for prediction markets, at least as currently formulated.

Shankar Sivarajan's avatar

And the remaining money preferentially flows towards the people the market is about, which I think of as fair payment for participating by doing unlikely things or refraining from doing likely things (a bit like Taboo, I guess) for the traders' amusement.

darwin's avatar

It 'self-corrects' in the sense of reaching an equilibrium. I don't think it self-corrects in the sense of incentivizing the original best action.

If the investors bet on the 'wrong' answer so much that it is not worth the central actor's trouble to take the 'wrong' action and insider trade about it, we have succeeded in not pushing people into taking 'wrong' actions, but we've also sacrificed the accuracy of the prediction.

If the investors *don't* bet on the 'wrong' answer enough to dissuade the inside trader, then the market has more accurately predicted what they will do, but only by incentivizing them to do the 'wrong' thing.

Presumably these concerns reach some equilibrium where the central actor has equal expected utility from taking either path. But this equilibrium represents *both* an incentive towards the wrong choice *and* a loss of accuracy in the prediction, relative to the imagined use-case where actors and bettors are causally isolated.

At least, it seems that way to me, but still eager to get feedback.
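The indifference point can be put in a toy formula. Assume the insider loses some fixed amount of ordinary utility by taking the 'wrong' action, but can stake money on it at market price p, where buying YES at price p on a binary market pays 1/p per dollar staked. All names and dollar figures below are hypothetical:

```python
def indifference_price(cost_of_wrong_action, stake):
    """Price p on the 'wrong' outcome at which the insider is indifferent.

    Staking `stake` dollars on YES at price p yields stake*(1-p)/p profit
    if it resolves YES, so indifference means:
        cost_of_wrong_action == stake * (1 - p) / p
    Solving for p gives stake / (cost + stake).
    """
    return stake / (cost_of_wrong_action + stake)

# Hypothetical numbers: the 'wrong' action costs the insider $100k in
# ordinary utility, and they can quietly stake $32k on it.
p = indifference_price(100_000, 32_000)
print(round(p, 3))  # below this price, manipulation is profitable
```

If the honest probability of the 'wrong' action were, say, 5%, an equilibrium price near 24% illustrates both halves of the objection: the incentive toward the wrong choice exists at any lower price, and the price itself no longer tracks the causally-isolated probability.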

Shankar Sivarajan's avatar

Ultimately, yeah, the existence of a pathway through which one can make lots of money affects the actions he takes. The question is only how much.

darwin's avatar

Well, I mean, *my* question is 'can/should we stop that pathway from existing before it's too late?'

Shankar Sivarajan's avatar

Oh, in that case, sure, you could ban anyone with material information from trading on prediction markets, like they do with insider trading.

darwin's avatar

You could, but if I understand Scott here he's actively scornful of that idea, and no one in the space is interested in doing that or has any ideas for how to implement it.

Again, correct me if I'm wrong there.

MathWizard's avatar

"Self-correcting" in that it destroys the market. Every outcome with a central actor ends up low liquidity and thus inaccurate, and prediction markets can only survive in scenarios where nobody can control the outcome, or at least the cost to do so vastly outweighs the necessary liquidity needed to sustain a healthy prediction market.

Mio Tastas Viktorsson's avatar

Sam Kriss riffed on this exact thing very funnily in his prophesies for 2026

Desertopa's avatar

I've been bearish on the real-world value of prediction markets for the last fifteen years or so, and I think the existing data set is pretty well aligned with my suspicion that they're unlikely to be a significant net positive for society. I suspect that, rather than some sort of corrective method being developed, this sort of thing is only likely to become more common and substantial as prediction markets and awareness of them expand.

Dana's avatar

This is very well put. I agree that the downsides of these prediction markets seem likely to far outweigh the upsides; you've explained why much more clearly than I was able to do for myself.

JamesLeng's avatar

Obvious patch is "I will not participate in prediction markets related to my official acts" gets added to the oath of office for any decisionmaker whose stability is desirable.

darwin's avatar

Supreme Court ruled that the President cannot be prosecuted for crimes, but sure that could have *some* effect for other people in government.

But AFAIK some of these markets are currently set up to take anonymous bets and pay in crypto, and I think that type of anonymity and untraceability is considered a key feature by a lot of advocates. I agree that having all names be public and all payouts be traceable could allow for a regulatory regime to be placed on top of the markets in order to mitigate the damage caused, which is what we already do for sports betting and stock markets and everything else.

But creating a whole new regulatory regime is awful and inefficient and never works anywhere near perfectly. And I'd expect it to mess with the predictive power of the markets.

JamesLeng's avatar

Current legal situation around Trump is obviously not long-term stable.

Even allowing for the sake of argument that such an official is not subject to criminal prosecution *in general,* they can certainly be removed from office for violating the duties associated with that office. That doesn't necessarily require deanonymizing the markets in general; just denying a few pivotal officials the privacy to conduct their own arbitrary anonymous money-laundering... which is likely beneficial to society as a whole even in the absence of prediction markets.

DanielLC's avatar

There have been prediction markets that the president has a hand in for a long time, like stock markets. Prediction markets are really only a new issue for people lower down who wouldn't be able to make so much money on that.

The solution there was to put your money in a blind trust, but that was only ever a tradition and Trump just didn't follow it.

Onid's avatar

This has been my trepidation with markets from the very beginning

Liminal Nous's avatar

> With prediction markets in play, suddenly you have an added incentive, which is 'do the maximally surprising thing so you make more money when you bet on yourself to do that.'

I don't know - if I were an insider harboring discontent over my measly government salary, I would prefer to make 5% or 10% on the more mundane and less surprising decisions, and do so often. This way, the risk of anyone ever noticing is reduced to near-zero, while simultaneously maximizing my earnings potential over the long run as my bankroll compounds.

Intentionally advocating for maximally surprising outcomes just to hit a 10-bagger now and then is a completely irrational strategy. It wouldn't be long before your bet ends up in a blog article about insider trading in prediction markets, and if you were one-of-one or one-of-few who fervently advocated for that unexpected outcome, your colleagues would surely take notice.

darwin's avatar

Well, that's assuming you're in a position where lots of prediction markets hinge on things you can influence or know for certain over years.

Which will probably be true for some departments and high-ranking people, but not for everyone. And I'm talking about this affecting *everyone*, not just the government.

(also... you know, time discounting, including all the reasons things that get called time discounting are rational. It's pretty common to try to go for a big score today instead of a steady stream)

darwin's avatar

>But an account named fhantombets claims to have interviewed the winning trader; although he did not reveal his exact strategy, his claims better match a story where he was good at navigating WordPress directories, and found that the Nobel team put a draft of the announcement up early in a nonpublic part of their WordPress site. He won about $70,000.

I'm going to go on record saying this is bad, actually.

Basically the resolution here is 'this wasn't caused by an incentive for insider trading, it was caused by an incentive to hack into people's systems and reveal information they meant to keep private'.

I don't think that's actually a good incentive to universalize!

Yes, I understand this isn't deep blackhat shit, it was something that wasn't too hard to do and the committee wasn't taking serious precautions.

Nonetheless, I don't want everyone to have to spend time and money on taking serious precautions about everything they do! I don't want to incentivize hacking, surveillance, social manipulation, and other forms of adversarial spying with billion dollar markets!

That sounds very dystopian to me!

Taymon A. Beal's avatar

This exact same thing already happens with some regularity in the stock market (in particular, with earnings releases that are published at a guessable URL slightly before they're officially announced). So it's not really a new problem caused by prediction markets.

Don P.'s avatar

I used to do that on our internal network, to get at revenue and profit numbers ASAP…not to trade, but because our bonuses were a formula based on those values.

darwin's avatar

Right, major corporations *do* spend tons of time and money on opsec, and it makes things much more annoying and less efficient.

What's new about prediction markets is the ability to apply those incentives and pressures to anyone or anything at any time.

Still seems bad to me.

Viliam's avatar

Yep, you could ruin someone's life by making a random bet about them, and suddenly there are many rich and smart strangers with a financial incentive to make something weird happen to the target.

This is worse than cancel mobs; more precisely, cancel mobs can also be implemented as a part of this: you just need to make a bet about whether X will keep their job, and suddenly there are people with an incentive to search their tweet history and try to get them fired. Even worse, the cancel mobs usually didn't have the tools to e.g. hack your home computer, or slash your car's tires to make you miss an important meeting, etc.

Neike Taika-Tessaro's avatar

Wouldn't the person still need to be at least of some import before this effect would happen? e.g. if someone makes a prediction market about whether I'll keep my job, the vast majority of people aren't going to bet on this at all, and so there's actually no money in rigging it? (Not trying to make an argument against this, sincerely trying to understand the concern better.)

darwin's avatar

Well, if I understand the system correctly (and correct me if I'm wrong), there's no necessary link between the importance of a question and the size of the market. If people decide to bet $1B on either side of a question about your personal life, then the winners take home $1B when the market resolves (roughly).

In sports betting, more famous contests see more betting. This is partly because sports betting is more entertainment than career for most participants, and probably because bookies themselves have a limit to how many bets they want to take on small events they haven't researched that are easy to fix.

In practice, I'd expect some of those restrictions to apply to small personal questions, and some not. Bookies deciding what to back isn't an issue, since the market liquidity comes from other users and can go pseudo-infinite. I'd expect famous things to still get more attention for the definitional reason, but on the internet it's easy for individual personal stories to become famous and viral for five minutes, which could be enough. And small personal markets are less complicated and easier to manipulate in ways that might draw attention from certain types of bettors.

In the average case, I'd expect that small personal markets mostly need a stake from the person starting the market to attract any attention... which still means you can basically hire anyone to do anything acausally, e.g. 'Will my ex-wife die before the settlement comes due? I bet $50K that she won't...'

But even there, that relies on you being comfortable risking being the average case. I think any personal market *could* blow up pseudo-randomly, the same way anyone who got their picture taken in the 2000s/2010s could pseudo-randomly become a famous react meme. Which instills a constant low-level fear of that happening to you, and incentivizes zero-sum security against it.

Neike Taika-Tessaro's avatar

Thanks for explaining!

Admittedly, I don't think "instills a constant low-level fear" follows just from the fact that something could blow up pseudo-randomly, but I may not be online enough to appreciate e.g. how concerned the average person is about getting cancelled (or, to keep with your example, becoming a famous react meme, assuming it's a negative one).

Assuming for the sake of argument that this is just true - and if you're not yet sick and tired of humouring me - could you tell me a bit about what you think the probabilities involved here are, and why? (Totally understand if you're annoyed by my clueless questions, please feel free to skip this.)

Viliam's avatar

What darwin said... it could just randomly escalate: one person bets some money, another person bets against them, the more money at stake the more interesting it becomes... someone popular tweets about the bet...

Or you could get randomly non-anonymous, for example someone accidentally makes an interesting photo of you and publishes it online... maybe you made a weird face and/or happened to be standing next to something or someone important... and although no one knows you as a person, now it's betting about an online meme.

gorst's avatar

If I noticed that a big market came together against me like this, I could bet against myself to hedge against losing my job.

Viliam's avatar

Yes, that would be a reasonable thing to do, but most people couldn't do that properly, and the damage still happens; you just distribute it more evenly across possible outcomes.

Michael Watts's avatar

Well, how does getting hacked apply pressure to the Nobel Peace Prize committee to be more careful next time? They didn't suffer any penalties. They can just get hacked every year and there will be no negative effects.

darwin's avatar

That's equivalent to them just releasing the results 4 hours earlier, then 4 hours earlier the next time, then 4 hours earlier the next time.

Presumably they must have *some* stake in making the announcement themselves at a given time, otherwise they'd just start telling people in the hallway after the deciding meeting ends, and let the news spread by word of mouth.

More to the point: it's embarrassing, organizations don't like to get embarrassed. Some boss will tell some employee 'don't let that happen again', and if it happens again that employee may get fired.

TGGP's avatar

That's not "hacking in", that's a website not bothering to secure such info from the public.

darwin's avatar

>Yes, I understand this isn't deep blackhat shit, it was something that wasn't too hard to do and the committee wasn't taking serious precautions.

>Nonetheless, I don't want everyone to have to spend time and money on taking serious precautions about everything they do!

Edward Scizorhands's avatar

weev went to jail for something only slightly more technically difficult.

TGGP's avatar

I believe his conviction was vacated, but for jurisdictional reasons.

Melvin's avatar

I'm still confused why betting on sports games is degenerate and terrible but betting on wars or elections or book review contests is fine and dandy.

Scott Alexander's avatar

Although it's obviously not a black-and-white distinction, the difference to me is:

- Betting on sports is more likely to be fun and addictive. It's appealing to normies, involves sudden emotional ups and downs, and has a short loop - games are won or lost in a few hours. It's harder (not impossible) to get addicted to betting on whether Viktor Orban will lose his parliamentary election in April.

- Betting on wars and stuff has counterbalancing social good in that you know whether wars and stuff will happen or not.

...and because of these, if you're running lots of sports betting, you're going to be optimizing for getting people addicted, in a way that's less true for people running geopolitical betting.

Shankar Sivarajan's avatar

The price of [random meme stock] might seem prima facie to be even more unlikely to be addictive than Viktor Orban's reëlection, but people get addicted to day trading (and of course, some lose all their savings, etc.)

Also, there was a story I read a few years ago, which you might have seen, about weird Manifold addiction: https://www.astralcodexten.com/p/mantic-monday-52223. I agree that was particularly unusual though.

George H.'s avatar

Yeah maybe, but consider a world where prediction markets look like sports betting markets today. The market is run by people interested in making money, not good predictions. Experts are kept out of the market (this happens in sports betting) because they can make too much money. And our world becomes gamified for fun and profit*. You have a nice dream about prediction markets, but look at the ways it could go south. We'll never get rid of online sports betting; DraftKings is here to stay.

*Imagine watching current events on the TV/phone and there's a running stream on the bottom of the page giving the current 'predictions' of what's going to happen. Text here to place your bet. Heck maybe we even get to the point where the predictions markets are sponsoring the news.

Michael Watts's avatar

> Experts are kept out of the market (this happens in sports betting) because they can make too much money.

This is part of a general pattern of legislation specifically decreeing that bookmakers need to be able to make a profit. There is no particular fundamental reason for it. It would be incredibly easy to pass opposite legislation saying that if a bookmaker advertises a price, they are required to honor that price regardless of who wants to take them up on it; there is a lot more "public accommodation" law already on the books than there is "save our bookies" law.

George H.'s avatar

Huh, I didn't know that. I'm not sure it would be 'incredibly easy' to change 'cause of lobbying by the betting industry. They make lotsa money and I assume pay taxes to the states.

Michael Watts's avatar

Sure, that's why they have all the protective laws now. But the optics on those laws are terrible. If you stage a fight in public, they'll lose.

George H.'s avatar

Yeah well my view is no one cares enough, and the state and industry both have a vested interest. Heck here in NYS the state runs numbers games in all our bars and such. I'd bet there is some guaranteed payout for the state.

netstack's avatar

Just because sports gambling is ultra-turbo-addictive doesn’t mean event gambling can’t also be kind of addictive.

N = 1, but winning less than $100 on a Kalshi prediction for the 2024 election was a remarkable rush. I felt SMART and VINDICATED and immediately dumped the balance into other active markets without doing much of any research. Guess how those turned out?

There is some threshold at which the juice is worth the squeeze. But I have to admit there’s a squeeze.

Timothy Byrd's avatar

If I were a member of the Danube Institute (which seems to me like Orban's version of the Heritage Foundation), I might want to see if it was cost effective to manipulate the perceived odds of Orban's re-election so that when he does win, people like Scott would then think this election was okay, because the odds were against Orban.

Odd anon's avatar

If a market is expertise-led, and knowing the outcome of the event in advance is a social good, it's fine and dandy. If it's people playing games of expertise against each other without a resulting social good, it's neither good nor bad. If it's people playing games of chance against each other for the dopamine rush (which often has negative side-effects, addictiveness and such), then it's degenerate and terrible.

Daniel's avatar

Is knowing whether or not the United States is about to launch a sneak attack a social good though?

Kenny Easwaran's avatar

For the people who live in the city they might attack, who rely on useful information about how to stay safe, then it absolutely is!

George H.'s avatar

Yeah I think there is a lesson to be learned from the sports betting 'take over' of the sports space. Sports betting markets sponsor much of the sports content now. Maybe one issue is that Scott and his ilk don't watch sports so they have no idea how bad it is.

netstack's avatar

I think information about wars and elections is more valuable than information about sports.

Book reviews, eh. Take them or leave them.

I wouldn’t say sports betting is inherently degenerate, either. It trends that way *very rapidly* because bookies make the most money from addicted, uninformed bettors. Is that true for real-world events?

…okay, I’ve got to think about that one. Unless degenerate betting comes at the expense of reasonable betting, bookies are still going to have that perverse incentive.

Seta Sojiro's avatar

Zvi Mowshowitz explained why in extensive detail. Basically, sports betting ruins lives:

https://thezvi.substack.com/p/the-online-sports-gambling-experiment

Bob Joe's avatar

The positive impact of better information about who's going to win a sports game is near useless to society, whereas the positive impact of better information about if a war is going to happen is much higher. On the other side, sports betting has been shown to be a powerful and effective way of driving self destructive behavior and problem gambling, whereas that seems to be a lot less true with generic event markets.

That said if prediction markets continue to expand and include more minutia and meaningless events, I would expect the value of information to decrease and the inducement of problem gambling to increase.

Shankar Sivarajan's avatar

> how much money someone’s going to make by taking the pro-left-wing side of all those trades!

You're assuming they'll be resolved fairly.

Viliam's avatar

Imagine Donald Trump's company resolving: "will Donald Trump win in the 2020 elections?"

TGGP's avatar
Jan 13 (edited)

After Kristallnacht, insurance companies actually insisted to the Nazi government that they had to pay out claims on damage to property owned by Jews:

https://www.econlib.org/archives/2009/08/insurance_reput.html

So the question might be whether crypto.com values its reputation as much as insurance companies in Nazi Germany.

Timothy's avatar

thanks for sharing the econlib article, very interesting.

Odd anon's avatar

Re AI superforecasters: Isn't that just AGI? If you can accurately (and costlessly) determine "will X happen" at a level beyond top human ability (aggregated, not individual), that implies fully general superhuman decision-making. At that point, AI would be taking over all jobs which are about making decisions, and so on.

If we haven't managed to halt AI progress by then, I can't really see how humanity would survive, so planning for (or expecting) AI to ever take over prediction markets seems unproductive.

darwin's avatar

No, I don't think that predicting what *will* happen, and deducing the *best* course of action, are the same thing at all.

Odd anon's avatar

Humans aren't hired to pick the "best" course of action, they're hired to pick the course of action which achieves certain outcomes. The CEO picks whatever makes the most money, the politician picks whichever accomplishes the policy goals (or whatever gets the most votes, depending), etc. What kinds of decisions can't be reformulated into predictive "What will happen if X decision is made?" questions?

darwin's avatar

>"What will happen if X decision is made?" questions?

But again, that's not what a prediction market is.

A prediction market is 'Will this happen or not?', not 'What would happen under this hypothetical?'

You could *imagine* that an AI which is good at the first question is somehow modeling enough accurate details about the entire world to run a simulation forwards and predict what it produces, and that it could therefore run similar simulations for any hypothetical you ask it.

But I really, *really* don't think that's what modern LLMs that are getting good results are actually doing.

Performative Bafflement's avatar

> At that point, AI would be taking over all jobs which are about making decisions, and so on.

How is this not good?

Obviously AI is coming for white collar work, we have to figure it out regardless. But when we have AI's proven to make better predictions / decisions than any human, we finally have a chance to get REAL governance and solve all our Molochian coordination problems.

If you haven't noticed, our state capacity and ability to do just about anything has been precipitously declining here in America. We can't even agree on what type of stove to have without there being a political element to it.

This has culminated in our leader selection process. Our current method is "let literally senile / retarded 90-year-old rich people decide what we do, up to and including giving them the 'nuclear armageddon' button."

You have no concerns about this?? How is "AI runs things" not vastly superior to that on every conceivable front?

Mark Roulo's avatar

"How is 'AI runs things' not vastly superior to that on every conceivable front? "

I am currently reading "The Unaccountability Machine." Does "AI runs things" have any feedback loop other than violent revolution? I'll note that voting still gets results (good or bad) in the US: Trump is different from Obama and Biden.

Europe seems to be heading towards a non-AI technocracy where the population's desires matter less. I don't think adding AI to the mix helps.

Performative Bafflement's avatar

> I am currently reading "The Unaccountability Machine." Does "AI runs things" have any feedback loop other than violent revolution?

There's no reason we can't have "democracy" in the loop here and still get much better governance than senile 90 year old rich people.

Here's a scheme I came up with in an afternoon of thinking about it - a democratic AI-in-the-loop governance proposal:

* Democratic alignment on priorities - “voting” is now not about which professional liar / 90-year old multi-millionaire half-corpse you think should be in charge of everyone, it’s about citizens staking a capped amount of vote tokens to weight high-level objectives (economic growth, homicide and other crime rates, UBI, immigration, equity / DEI, CO₂, etc.). As in, you get a standardized menu of items you can stake your tokens against, but you only get so many tokens, so prioritization and trade-offs are built-in.

* AI proposal - The AI proposes legislation to attain the aggregated, democratically defined priorities, with a detailed prompt outlining the total budget and soliciting it to consider the homeostatic landscape, to predict the primary, secondary, and tertiary effects, to outline the monitoring KPIs and thresholds, and to define a good sunset or re-evaluation time for any proposed legislation.

* Futarchy vetting - Prediction markets and digital-twin sims price the KPI impacts before enactment, as a human check on AI predictions, and as an overall evaluation ground over many such proposals, so we can understand the overall landscape of which proposed legislation will move various needles the most. This is federally funded so there’s enough alpha in there that smart people / companies will be doing this full time.

* Democratic vetoes - A stratified random sample (≈ 1 - 10k citizens depending on locality) gets the top 3 AI-optimised bundles for each priority, plus the market scores, and can veto any of them in the aggregate if enough decide to veto. This caps downside from model myopia and value-misalignment, and keeps democratic participation in the loop, without the pernicious regulatory capture and misaligned incentives we get today from full-time politicians, lobbying groups, and industry insiders.

* Monitoring and execution - Smart-contract escrow releases funds only if real-time KPIs or Gantt charts track forecast bands, limiting boondoggles and downsides.

What does voting look like? You open an app and allocate your voting tokens to the high level priorities you care about.

Occasionally, you’ll get a push notification to decide whether to veto some random bills or not, which you can ignore or answer as you like. Done.

It scales to every locality size - from the federal level to state, county, municipal, and HOA levels.

And at a shot, we’ve eliminated political parties, politicians, lobbyists, industry insiders, regulatory capture, and most of the other ills that plague politics today.

TGGP's avatar

The European Parliament is strangely productive even though it seems like it shouldn't be:

https://www.siliconcontinent.com/p/how-brussels-writes-so-many-laws

dunkinsailor's avatar

There are many instances in which the Polymarket oracle made strange decisions. In the end it's just a game of who has enough money to buy the votes.

E.g. I remember the "Fordow nuclear facility destroyed" question on Polymarket, which would resolve yes if the IAEA or Iran confirmed it, or on a credible media consensus. Neither happened (with credible media even doubting it was destroyed) and still it resolved to yes. This made me more wary of resolutions and how effectively they work.

Ym's avatar

"If only there had been some kind of decentralized forecasting tool that could have given me a canonical probability on this outcome!"

It would likely have been biased by the overly optimistic opinion of early adopters and True Believers, so that wouldn't have saved you from "being bad" :)

geist's avatar

I've never understood how conditional prediction markets are supposed to work. Say I buy YES on something conditional on events in 2028 for $0.40 and later sell it for $0.60. If the market ends up cancelled, what happens to me? Do they try to force me to give back my $0.20? When I sell, do I only get 60 conditional cents that either become real or worthless in 2028 (along with my $0.40 spend becoming real or refunded)? But then my money is locked up for a couple years with no way to redeem it. If I get to keep my $0.20, then it must have come from someone else in the market, who will lose real money even if it gets cancelled.

kenakofer's avatar

Kalshi might void a question, but it's a nuclear option, and I think Kalshi eats the losses in the rollback. Manifold users regularly resolve markets "N/A", which rewinds all the mana gain/loss, even if that sends account balances negative. Polymarket can't resolve N/A.

kenakofer's avatar

Correction, Kalshi doesn't rollback trades. "All trades are final" is very important language, so once you've exited the market, you've exited the market. It's those holding shares that get refunded.

Robert Jones's avatar

I think the answer must be that your $0.20 is conditional, so you only actually get your $0.20 if and when the condition is met.

Anders's avatar

No idea how things actually work and I might misunderstand something, but the reasonable solution is that you get your $0.20 immediately, and then the buyer of your position takes over the risk of a void market. This should be analogous to most sales of financial instruments.

Throw Fence 🔶's avatar

That's insane and voids the purpose of a conditional market. The entire point is that a conditional market carries no risk either way if the condition is not met. If you have to price in the risk of a void market, it's no longer conditional.

Anders's avatar

I agree with you. I did not think things through enough.

Michael Watts's avatar

Who did you buy the YES from? How are YESes and NOs minted?

The way I assume it should work, when the YES is minted it is stamped with a purchase price, and that price is refunded to whoever holds that YES when the question is voided. You sold the YES, so when the question is voided, nothing happens to you. The person who bought it from you for $0.60 gets a refund of $0.40 if they held it to completion.

Starting more from first principles:

You can always pay $1 to mint one YES alongside one NO. The YES pays out $1 if the question resolves "yes" and the NO pays out $1 if the question resolves "no". The YES and the NO have no particular price themselves. You can sell either or both if you can find buyers.

This differs from my first take because YESes cannot be minted independently of NOs. It's better for the platform because they never have any exposure to any questions. There are two obvious approaches to voiding a question:

1. The platform refunds $1 to everyone who minted a YES/NO pair. In this case, the option to be refunded on a void question isn't transferred with the YES or NO, but could be sold on a separate market. This immediately generalizes to an improved system where instead of paying $1 to mint a pair of tradable YES/NO tokens, you pay $1 to mint a trio of tradable YES/NO/VOID tokens. At that point it's clear what will happen in any scenario. YES tokens pay out if the question resolves "yes", NO tokens pay out if the question resolves "no", and VOID tokens pay out if the question resolves "void". This system will simultaneously estimate the odds of the protasis occurring at all and of the follow-on consequence conditional on it.

2. The platform refunds $1 to anyone who turns in a matched YES/NO pair. In this case, when a question is voided, all of its tokens are worth $0.50. This generalizes to a system where the platform just pays out $0.50 on all tokens whether they're YES or NO. This feels like a strong bias in the implied probabilities and is probably undesirable.
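The minting-and-settlement mechanics of option 1 can be sketched in a few lines (a toy illustration of my own, with hypothetical names like `TrioMarket`; no real platform works exactly this way):

```python
from collections import defaultdict

class TrioMarket:
    """Toy model: each $1 paid in mints one YES, one NO, and one VOID token."""

    def __init__(self):
        # holdings[user][token] -> count, token in {"YES", "NO", "VOID"}
        self.holdings = defaultdict(lambda: defaultdict(int))

    def mint(self, user, dollars):
        # $1 per trio; the platform banks the dollars and owes $1 per trio at settlement
        for token in ("YES", "NO", "VOID"):
            self.holdings[user][token] += dollars

    def transfer(self, seller, buyer, token, qty):
        # Tokens trade freely; prices are negotiated between traders, not the platform
        self.holdings[seller][token] -= qty
        self.holdings[buyer][token] += qty

    def settle(self, outcome):
        # outcome in {"yes", "no", "void"}: the matching token pays $1, the others $0
        winner = outcome.upper()
        return {user: tokens[winner] for user, tokens in self.holdings.items()}

m = TrioMarket()
m.mint("alice", 10)                      # Alice pays $10, holds 10 of each token
m.transfer("alice", "bob", "YES", 10)    # she sells her YES tokens to Bob
payouts = m.settle("void")
print(payouts["alice"], payouts["bob"])  # 10 0 -- Alice's VOID tokens pay; Bob's YES don't
```

Whatever the outcome, exactly $1 per minted trio is paid out, so the platform has no net exposure to any question.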

---

One thing I want to emphasize is that your $0.20 profit has nothing to do with the resolution of the question and no system is ever going to consider it. Thinking of it in the way you apparently do is making the "where is the missing dollar" mistake.

https://en.wikipedia.org/wiki/Missing_dollar_riddle

geist's avatar

The $0.20 is important because people describe conditional prediction markets as being like they never happened if the condition isn't met. Obviously, this wouldn't be the case if traders can still have real gains and losses on a cancelled market. The two comments immediately above yours are one saying you'd keep the $0.20 and one saying you wouldn't, so there doesn't seem to be a clear understanding of what a conditional market would actually involve.

Your first option is essentially a conjunctive market. You could take it one step further and have four tokens: A∧B, A∧¬B, ¬A∧B, and ¬A∧¬B. This gives you conditional probabilities because P(A|B)P(B) = P(A∧B) (set the left-hand side equal to the reverse, P(B|A)P(A), and you've just discovered Bayes' rule). P(B) = P(A∧B)+P(¬A∧B), so you can use the prices of two of the tokens to derive P(A|B) = P(A∧B)/(P(A∧B)+P(¬A∧B)) and the other two to get P(A|¬B).
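That derivation can be checked numerically (the prices are made up, and `conditional_probs` is just an illustrative helper):

```python
def conditional_probs(p_ab, p_a_notb, p_nota_b, p_nota_notb):
    """Derive P(A|B) and P(A|not-B) from the four conjunctive token prices."""
    p_b = p_ab + p_nota_b             # P(B) = P(A∧B) + P(¬A∧B)
    p_not_b = p_a_notb + p_nota_notb  # P(¬B) = P(A∧¬B) + P(¬A∧¬B)
    return p_ab / p_b, p_a_notb / p_not_b

# Suppose the four tokens trade at $0.30, $0.20, $0.10, $0.40 (summing to $1)
p_a_given_b, p_a_given_not_b = conditional_probs(0.30, 0.20, 0.10, 0.40)
print(round(p_a_given_b, 3))      # 0.75  (= 0.30 / 0.40)
print(round(p_a_given_not_b, 3))  # 0.333 (= 0.20 / 0.60)
```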

This gives you conditional probabilities, but it's a different thing than a conditional market. People like Robin Hanson and Scott in this post always talk about conditional markets rather than conjunctive ones, and I'm curious because I've never seen a practical explanation for what happens when they're cancelled.

Michael Watts's avatar

> The $0.20 is important because people describe conditional prediction markets as being like they never happened if the condition isn't met.

OK. But that doesn't change the fact that it's unrelated to resolving the question. You can reasonably ask how the market will behave. But as stated, you are asking the equivalent of "if I buy YES at $0.40, and then I sell it at $0.60, and then the question resolves NO, what happens to the $0.20 profit I made on YES?". This question is nonsensical and betrays a conceptual confusion. You will keep the $0.20.

geist's avatar

But if I buy A|B at $0.40 and sell it at $0.60 and then B fails to occur, if this is really a conditional market as usually described, where all trades are retroactively cancelled, then I don't get to keep the $0.20. You could design a market where I do (as we've both described), but it would be different from a conditional market, because you'd be trading A∧B and then deriving P(A|B) with a little math rather than directly from an A|B market price.

Throw Fence 🔶's avatar

I think the only solution is that your $0.20 is conditional (on the condition being met), and therefore bound up until the market has resolved. Although I guess there could be an earlier time where the condition is known to be met, before the market resolves, in many cases.

geist's avatar

I agree, but wouldn't that kind of suck? You go to the effort of trading for a profit that not only might never materialize but that, even in the best case, you potentially can't touch for years. Also, all trading must be on a platform where the only allowable currency is conditional dollars, which you must have gotten by converting real dollars, letting the platform owners hold onto your money for free in the case where the market is cancelled. And there would be separate conditional dollars for each market that are not interchangeable, so people would have to keep track of that.

A regular market might not resolve for years, but at least you can always sell out and lock in your gains/losses and have real money back.

Michael Watts's avatar

OK, I acknowledge that. Canceling all trades retroactively can never work. So what?

Bolton's avatar

You are right that conditional markets need to hold profits against clawbacks; I think this is just an underdiscussed reason to avoid them. Play-money sites like Manifold get around this because they can let balances go negative, but this just further hides the incentive distortion. I wrote a post a few months ago that advocates the conjunctive approach and discusses a few of its other potential benefits: https://thequantummilkman.substack.com/p/better-futarchy-with-combinatorial

Mark Roulo's avatar

"Say I buy YES on something conditional on events in 2028 for $0.40 and later sell it for $0.60"

Existing financial option markets handle this case and I expect that prediction markets do roughly the same thing.

So ... imagine that I purchase an option on a given stock today for $4. The option expires in January 2028. Next month things have changed, the option is now more valuable, and I sell it to someone else for $6.

I *now* have a net gain of $2. I paid out $4 for my original option and then sold it to someone else for $6. I no longer have an option and I have more cash than I started with. Meanwhile, that person is now on the hook for any gains or losses in that option position and he/she paid $6 for that privilege.

If the option expires worthless then the person I sold it to is out the $6, but I keep my $2 gain. If the option is worth $10 at expiration then the person I sold it to has a net gain of $4 and I still have my $2, but nothing more.

Throw Fence 🔶's avatar

This makes the whole thing not conditional though. There isn't supposed to be any risk if the condition is not met (that's what makes it a conditional market). The way you're describing it, the risk of the condition not being met needs to be priced in, which would just make it a regular market.

geist's avatar

Actually, I think I figured out how to do this. Someone else has probably had this idea before.

First we have the event we want to condition on, like Democrats winning the 2028 election. We make a normal prediction market for this. It has creation and redemption, i.e. you can trade $1 to the market maker for one YES_B and one NO_B token, and you can also trade one of each back to get the dollar out. Then you create a conditional market based on that, say GDP in 2032 being above some threshold conditional on Democrats winning the 2028 election. This market is denominated in YES_B instead of dollars, and creates and redeems into them as well, so you trade one YES_B for one YES_A|B and one NO_A|B. This market pays out like a conjunctive market: YES_A|B becomes $1 if both A and B happen and $0 otherwise. Or you can think of it as a market purely on A that pays out in YES_B, which will be worth $1 if B happened and $0 otherwise.

If you want to enter the A|B market purely as a conditional market, you create a pair in the B market, use the YES_B to bet on A|B, and hold the NO_B. So if B doesn't happen, you get your dollar back, and if B does happen, your trades on A|B take effect. You could always trade your NO_B, but if you do that, you have exposure to B and aren't purely in the A|B market.

When B happens, you can convert it to a market purely on A denominated in dollars or you can leave it in YES_B which are now pegged 1:1 to the dollar. If B doesn't happen, you can just declare the conditional tokens worthless and stop trading in them.

So before B happens or not, YES_A|B will have one price denominated in YES_B and another price denominated in dollars. The first price is P(A|B) and the second is P(A∧B). If the markets are efficient, the ratio of the prices is the price of YES_B in dollars, which will be P(B), which matches the definition that P(A|B)P(B) = P(A∧B).

That means that if you trade on the A|B market and make a 0.20 profit, that profit is in YES_B, so you can treat it purely as a conditional market and wait until B is resolved, making $0.20 profit if it happens and $0 if it doesn't, or you can sell your YES_B on the B market and take 0.20 × P(B) at the current market price as your profit.
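A quick numeric check of that price relation (the numbers are made up for illustration):

```python
# Two quoted prices for the same YES_A|B token under the scheme above:
p_b = 0.40          # dollar price of YES_B, i.e. the market's P(B)
p_a_given_b = 0.55  # price of YES_A|B denominated in YES_B, i.e. P(A|B)

# The implied dollar price of YES_A|B is the product: P(A∧B) = P(A|B) * P(B)
p_a_and_b = p_a_given_b * p_b
print(round(p_a_and_b, 2))  # 0.22

# And the ratio of the two prices recovers the dollar price of YES_B
assert abs(p_a_and_b / p_a_given_b - p_b) < 1e-9
```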

The objection to this is that it is needlessly complicated and pointless. If you have a conjunctive market on A and B denominated in dollars, you can compute the implied P(A|B) and you can also make trades that give you identical exposure to being in a conditional market if that's what you want.

Jeff's avatar

Insider trading would be good from an informational standpoint if they simply entered the market fairly, but that isn't how an individual turns their informational advantage into the maximum amount of money. The incentive is to advocate for the wrong side publicly and then make your bet at the last possible moment so people don't have time to adjust. Unchecked this could turn prediction markets into a net negative information source.

Now I don't think this is any sort of unsolvable problem, many of the normal methods of preventing insider trading in other areas will work, but it isn't a good thing, it isn't increasing the information value.

darwin's avatar

>The incentive is to advocate for the wrong side publicly and then make your bet at the last possible moment so people don't have time to adjust.

That's the incentive for insider knowledge of what *will* happen.

But if you have insider power to influence the actual outcome, then the incentive is to nudge the actual outcome towards whatever has the lowest probability on the market already, to maximize your returns when you bet on that.

Ethan's avatar

So then the market is still inaccurate, until the last minute when you bet on that, no?

Sami's avatar
Jan 13Edited

Holy perverse incentives, Batman! This one rhymes with Goodhart's law ("When a measure becomes a target, it ceases to be a good measure").

Actors can make money by behaving against the common established opinion, by acting in the maximally unpredictable way. The worse the odds, and thereby the more insane the action, the more money they make.

Doug Summers Stay's avatar

Polymarket showed up on the Golden Globes broadcast. I think that's worth a mention.

Hannes Jandl's avatar

I don’t think Trump supporters perceive “Will Trump turn America into a dictatorship?” and “Make it great again?” as opposing outcomes. Rather the first condition is necessary for the second to be realized.

orthonormal's avatar

The real problem with conditional markets is that it's cheap to manipulate prices when you know that a conditional won't hold.

If on Election Eve the market gives the GOP a 99% chance of winning, then the markets on outcomes conditioned on a GOP win will have smart money pushing the numbers in sensible directions; but the markets on outcomes conditioned on a Dem win will have almost no smart money in them (not worth paying fees and locking up your money). If the conditional markets get used for news stories, then most of the money in them will come from people who just want to manipulate the news stories, and don't mind paying fees and locking up their money.

And if people take decision markets very seriously when deciding how to vote, you can easily get self-fulfilling spurious prophecies.

Mark's avatar

If markets give the GOP a 99% chance of winning, then it doesn't really matter how accurate the Dem markets are because the Dem outcome will almost certainly not come to pass.

10240's avatar

Then again, if you want to use the market to decide how to vote, it mostly matters if the election is close, and both parties have a chance.

----

Btw, in the conditional market example, the trickery (a market on the predictions on the main market the day before election day) doesn't seem necessary: if you want to decide how to vote in 2028 based on which party is predicted to be better for the economy, you can just look at the main conditional markets in 2028, before going to vote. You don't have to know today (when the predictions are confounded by noncausal pathways) which party will be better 2028–2032.

A better example would be useful. (Of course one can think of contrived examples, like you can't vote on election day and want to vote early, but you're still worried about predictions being affected by the possibility of events independently affecting both the outcome of the election, and the performance of the economy after the election, between the time you decide your early vote, and the election day.)

orthonormal's avatar

For a better example, this actually happened in the 2024 veepstakes! Yglesias cited Whitmer's poor showing on the conditional Manifold market "will Harris win conditional on this VP choice" as signaling an actual problem with her.

I manipulated that market back towards sanity (which only cost me 14-20 cents of mana!) and then wrote about it: https://www.lesswrong.com/posts/awKbxtfFAfu7xDXdQ/how-i-learned-to-stop-trusting-prediction-markets-and-love

(This was a case where most absolute probabilities were necessarily very near zero because there were a lot of possibilities.)

10240's avatar

That's an example of something slightly different. In my contrived example, or in Scott's example assuming that you care today about which party would be better 2028–2032 , it's irrelevant if the prediction market can itself influence the outcome.

Alex Gourley's avatar

Better example: who will win the general election conditional on who wins each primary.

Answering this requires filling out a matrix of bets, but it's extremely useful because a large enough market converts a first-past-the-post system into one that behaves very much like ranked choice.

Aftagley's avatar

"Kalshi is so accurate that it’s getting called a national security threat."

I think this is an inaccurate way to describe the concern. It implies, at least to me, that there is some concern that the wisdom of crowds model of decision markets has gotten so accurate as to be a danger.

Instead, the concern is that decision markets have enabled people to convert access to privileged information into profit. Sure, this may enhance the overall accuracy of the systems, but the accuracy isn't the concern, it's the leaking of sensitive information.

dogiv's avatar

Yeah, in this case the market still wasn't "accurate" in that it didn't have a very high probability. The insider was calibrating bet sizes to not affect it too much, I imagine. So in theory Maduro could have checked the market and still figured he was probably fine for a while. It actually would have been a better signal to look at individual trades and notice an insider-trading-like pattern of bets, but in general if that worked then other market participants would do it first and the market would have moved a lot more.

Andy the Alchemist's avatar

Turning the entire planet into a casino is in fact bad for humanity and society as a whole. It's so fucking obviously stupid this really should not even need to be said. Humanity has lost the plot completely; I genuinely fear we are all doomed.

bbqturtle's avatar

There is a scale of evil for all things. Cigarettes are evil, e-cigs are less evil. Unlimited free alcohol for everyone is more evil than alcohol dosed by the glass by bartenders who have a stake. Doritos are more evil than homemade spiced tortilla chips.

Phone casinos are more evil than traditional Casinos. However, they are both more evil than prediction markets. In a similar way, betting apps are more evil than prediction markets. It’s due to incentives.

With most sports-betting casino-esque apps, the odds are much worse for the player. Buying both sides of a market will cause players to lose more than 30%. They trick players with predatory spreads and lure you into the more profitable parts of the app.

Casino / betting apps will kick you off if you make too many good bets - even if you lose. Prediction markets, like stock trading, allow people to make a profit by researching and gaming the markets, no different from day trading.

Also it’s much less addictive. I agree it’s a net evil but less evil, and I don’t advocate for eliminating less evils when adjacent greater evils still abound. (Casino / sports betting apps).

methylxanthine's avatar

In that case, what is your opinion on stock trading? Does the stock market also turn the planet into a casino? If not, why not?

Bargain-bin Seldon's avatar

Any system that scales past direct coordination must develop abstraction layers for coordination. These layers face systemic pressure to optimize procedure and metrics at the expense of ground truth. This creates a reinforcing cycle that continues until an external system or substrate forces collapse or radical restructuring.

'Rulescucking' is the system prioritizing the map over the territory--our systemic pressure.

Will the AI born in this environment break the calcification, or turn it up to 11 and result in a closed loop where the map is the only thing we have access to?

Xpym's avatar
Jan 13Edited

> Will the Russians capture Myrnohrad? This is a small town in Ukraine far from the front line that the Russians obviously were not going to capture

This is wrong. Myrnohrad has been near the front line for a few months now, and the map in the linked tweet shows it already half-encircled.

> The resolution criteria named maps by the well-regarded Institute For The Study of War as canon.

ISW is only "well-regarded" insofar as it's the least bad of the bad bunch. They default to regurgitating Ukrainian propaganda, unless it has been definitively proven wrong.

Scott Alexander's avatar

Thanks, I've removed "far from the front line", but I think it's true that the Russians were obviously not going to capture it *within the time frame specified by the market* - I think the probability was near zero before the fake ISW map came out.

Xpym's avatar

Yeah, that part is reasonable.

Hafizh Afkar Makmur's avatar

If we're encouraging truth divulging, is there anything encouraging "revealing" info sooner rather than later? Otherwise we get a state where whoever holds the key info will hold it until the last second and get a massive profit. If there were some incentive to divulge it as early as possible, I think we'd solve insider trading and increase prediction market usefulness at once!

JamesLeng's avatar

Might make sense to figure out a nontrivial default payout schedule - some standard portion immediately, then further increments across days or weeks, as confirmation or errata propagate - to limit the impact of misleading preliminary results, or fraudulent short-term manipulation of otherwise-reliable indicators.

Scott Alexander's avatar

I think this might be true if you know you're the only insider. Otherwise, you should bet before someone else who has the same knowledge as you eats the alpha. IE if you think the Republicans will do unusually well next election, you should worry that your reasons (if true) will eventually become widely known popular consensus, and someone else will bet the shares on Republican victory up to their true probability.

Hafizh Afkar Makmur's avatar

Guess so. But I guess by definition, insider traders are the only one (or group) who hold a critical piece of info.

I just realized that we're back to supply and demand again. Currently your info is most expensive the later you reveal it. So if you hold a monopoly on that info, the way to get your "deserved" price is to hold it as long as possible. If you don't hold that monopoly, it's a race to the bottom and you try to "sell" your info cheaper than everyone else, otherwise you can't sell it at all. So you "reveal" your info as soon as possible.

This also happens in the stock market. For common info, the race to the bottom is so hard they invented HFT (high-frequency trading). For monopolized info, there's, well, insider trading. Both are frivolous and frowned upon. I heard that they're optimized around liquidity, so for that purpose I guess that's fine? But that's not what we're currently optimizing for.

I think for our purposes, we want info to be more expensive the faster you reveal it. But when I imagined several models for it, I'm not really sure they'd do that. Free market models do break down around monopolies, eh.

beleester's avatar

The set of insiders will often be reasonably small (e.g., a handful of people doing the planning for the Maduro raid), so I think this is a real concern.

Synthetic Civilization's avatar

This feels like a clean example of epistemic success without institutional uptake.

Prediction markets aggregate belief, but they don’t bind action. No one is required to listen, update, or decide differently.

So probabilities get sharper while incentives stay flat.

Truth improved; control surfaces didn’t.

That gap explains why it feels anticlimactic rather than revolutionary.

Micah Zoltu's avatar

*Way* too early to shill anything, but I wanted to put on your radar that the remainder of the original Augur treasury (several million $ at today's valuation) has been transferred to a new 501(c)3, and that group is currently funding my team (at least) to try to once again achieve Augur's goal of user-created real-money prediction markets.

We will, of course, try to solve the UX problem (again), but I can't make any promises on that front as our primary goal is that it passes the "walk away test". The "walk away test" is that if the entire dev/operations team gets tortured and murdered one day and all secrets are stolen, the platform will continue operating exactly as it was designed and the attackers can't change that in any way. Building with such a priority is great from the regulatory side, and it gives users total agency, but it usually comes with a UX cost and it means that you can expect a whole lot of really dumb markets being created by people.

My personal hope is that if we successfully build this, third parties can build curated lists of markets that are actually interesting and handle the problem of curation in a federated way since the base-layer will not be doing any curation.

To reiterate: We are just getting started and it will be quite some time before we have a product that one could actually use. If anyone is curious, we are happy to talk about our current design and design choices, but while some code exists already, it is just for fleshing out ideas/prototypes and not anything near a product. We are a very small team and we care a lot about security/privacy so we do not "move fast and break things".

JamesLeng's avatar

Might make sense to define a nontrivial default payout schedule - some standard portion immediately, then further increments across days or weeks, as confirmation or errata propagate - to limit the impact of misleading preliminary results, or fraudulent short-term manipulation of otherwise-reliable indicators.

Micah Zoltu's avatar

The current design has payouts only occur after the full dispute system has run its course. In the happy path, this is a day or two after the event. In the unhappy path, this can be as long as months.

In theory, people confident in the results who don't mind waiting on payout can buy winner positions during this wait period for slightly less than their final value (e.g., $0.99 on the $1). Of course, it may end up being a challenge to make this process seamless enough that it actually happens, but it is not something core protocol needs to care about and can be built in layers on top of the core protocol.

I *think* this addresses the problem you are getting at (please correct me if I'm misunderstanding), though likely introduces new ones (especially on the UX side).

Keep in mind, there is no "house" or "operator" in this system. It is fully autonomous with no permissioned human in the loop making decisions or calls.

Scott Alexander's avatar

Thanks for letting me know. Please definitely let me know if you reach the usable product stage (scott@slatestarcodex.com)

TGGP's avatar

What happened to Augur?

Micah Zoltu's avatar

Everyone has their own theory for why Augur didn't work out. Here are the most common ones:

A. Cost of doing anything on Ethereum at the time was too expensive.

B. The UX was too difficult.

C. On-ramping to crypto was too hard.

D. No incentives to providing liquidity (beyond usual market maker profits).

E. The people willing to provide liquidity are able to do so on "regulated" apps, it is only regular users who don't have access to regulated apps (because they cannot hire someone to setup an off-shore branch that avoids regulatory limitations).

F. No active marketing.

G. Resolution was not instant.

H. Lizardmen.

After building and releasing two versions, and seeing millions of dollars thrown at one of the presidential elections, the team building it eventually walked away (for reasons that were never explicitly stated, but I suspect regulatory fear). The app continued to function for a while, but it wasn't completely resistant to not being maintained and some of the services it depended on eventually disappeared so the app stopped working.

At the least, I am quite confident we can solve the last problem of "if the devs walk away the system will still operate". We will try to address some of the other potential issues listed above, but some of them I don't personally plan on trying to solve (like marketing, or paying liquidity providers to show up). So if those were the reasons for its prior failure, they likely will repeat unless someone else steps up.

TGGP's avatar

I'm curious: how will it continue to work if the devs walk away and services it depends on disappear?

Micah Zoltu's avatar

By not depending on any external services, only things available as P2P networks. This way, as long as there is at least one person still interested in the project, it will continue to exist/live.

More concretely (and much deeper technically):

* IPFS for distribution of the app. The app is just a static-file web app (renders in a browser like any website), and the IPFS protocol is a P2P distributed filesystem. There are "IPFS gateways" that allow people to use one of many centralized servers to access this via HTTP and a normal web browser. Brave had native support (no need for centralized gateways) built in for a while, but it was dropped; there are people working on getting it back in (in a more maintainable way).

* Ethereum for the database. All markets and trading are on Ethereum (a P2P blockchain). This is where the biggest risk of "too expensive to use" comes from. Using Ethereum is far cheaper today than it was when Augur was around originally, but it isn't guaranteed to stay cheap. We can work to minimize the costs here, but it is possible we fail and it ends up being "too expensive" due to Ethereum transaction fees.

* ENS for discoverability and human-readable addressing. Like DNS, but decentralized (runs on Ethereum) and can be set up to be nearly immutable. Once deployed, a name can be configured to point at the app on IPFS (which normally would be an unreadable content hash), and no one (even the original owners) can change it for some prepaid duration (e.g., 100 years). Most browsers won't natively work with this today, but you can use third-party centralized services (of which there are a few) to do name resolution in your browser, like `augur.eth.link` (not a real address, just illustrative), which would serve you the content permanently pinned to `augur.eth`. If all centralized resolvers go down, the UX for finding Augur decreases significantly, as people would need to navigate to an IPFS content hash via a gateway or local IPFS node. Definitely not ideal, but the service at least will remain online, and you can get an ugly link from someone you know, or resolve it yourself manually, or run your own resolver.

All three of these things currently have centralized services providing access to them using traditional protocols like HTTP and web browsers, and those services are "content neutral". It is possible that one or more of them eventually disappears off the internet, but they are big enough that it seems unlikely. Everything would technically still work if only Ethereum were still operational. ENS will be online as long as Ethereum is online. IPFS is optional for file distribution (users can download the app from GitHub or webarchive or whatever), and the rest is just centralized things built on top of those to make it more user friendly.
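For the curious, the "nearly immutable" addressing rests on names being reduced to fixed 32-byte identifiers on-chain, via the recursive namehash algorithm from EIP-137. A minimal sketch, substituting Python's standard `sha3_256` for the keccak-256 that Ethereum actually uses (so these digests will not match on-chain values; the recursive structure is the point), with `augur.eth` as an illustrative name:

```python
import hashlib

def namehash(name: str) -> bytes:
    """Recursive namehash per EIP-137. NOTE: real ENS uses keccak-256;
    hashlib's sha3_256 is a stand-in here, so outputs won't match
    on-chain values -- this only illustrates the structure."""
    node = b"\x00" * 32  # namehash of the empty (root) name
    if name:
        # Fold labels right-to-left: "augur.eth" hashes "eth", then "augur"
        for label in reversed(name.split(".")):
            label_hash = hashlib.sha3_256(label.encode()).digest()
            node = hashlib.sha3_256(node + label_hash).digest()
    return node

assert namehash("") == b"\x00" * 32
assert len(namehash("augur.eth")) == 32
```

Because the on-chain records are keyed by this fixed hash, a resolver record pointed at an IPFS content hash and then locked stays resolvable for as long as the chain itself runs.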

SMB's avatar

If prediction markets become higher-profile, isn't there a risk of feedback effects, at least in markets where the resolution depends on a high number of individual choices?

Say that millions of Iranians view the Polymarket predictions about Khamenei's ouster and know that Polymarket tends to be very accurate. If I'm an ordinary Iranian considering taking part in anti-regime activities, the 20 percent figure would probably deter me from doing so. That seems like a bit of a lost cause. Why risk life and limb for a revolution with a 20 percent chance of success? Only the most zealous revolutionaries would stick with it.

Luk's avatar
Jan 13 · Edited

Yes, the TechCrunch case is also an example of that feedback effect: the existence of a market, or the specific percentage it shows, will increasingly affect the outcomes it is meant to predict. We can even imagine a market with no object-level grounding at all: say event A only happens if the prediction market for event A says it has a <50% chance of happening, and doesn't happen if the market trades at >50%. The more prediction markets interact with 'the real world', the more things like this will happen.

Terzian's avatar

That's what I thought back when people were complaining a couple of years ago about covid death predictions not being influential enough, especially with policymakers. If they had been influential enough, it could have led to predictions of the number of covid deaths always being wrong, regardless of whether they were high or low.

Terzian's avatar

The risk is actually extremely high. I've been worrying about this from day one when it comes to prediction markets, and have always felt that feedback effects will unavoidably get too strong and ruin the whole project. I've seen almost no one ever address it, even in the face of clear examples like the bet on covid deaths a couple of years ago.

Mark's avatar

Yes, and everyone with money/influence knows this. Which is why both sides are plausibly trying to manipulate the market. We are talking about markets with values in the single millions; the outcome in Iran is worth hundreds of billions to either side; even a tiny chance of tipping the scales (in either direction) is worth investing in.

JamesLeng's avatar

Depends on the available alternatives. If that percentage has been down in the single digits for a long time, while conditions are intolerable but enforcers seemingly invincible, a sudden jump to 20 might be the shock that allows previously hesitant and disconnected people to coordinate on "this is the best chance we're going to get." Then if coordinated action reveals weakness in the regime's response, that new info makes odds go up further.

dogiv's avatar

In general I don't think most good things happen because the people doing them are confused about how likely they are to work. I'm sure you can find counterexamples, but there are also examples where people would be more likely to do good stuff if they had accurate predictions about the outcome.

Domenic Denicola's avatar

You might not feel like we got everything we wanted. But on the phone with my generally politically-alarmist Dad, I was somewhat able to calm him down by citing some prediction markets. He'd been worked up into a frenzy by the media into thinking Trump was going to send the troops into Greenland like he did in Venezuela. I did a quick search and quoted him the probability of Greenland being invaded vs. joining the US voluntarily/for money vs. staying independent, and I think it helped bring him back to reality.

That's not nothing!

Mark's avatar

Yes, I have also successfully introduced prediction market data into discussions that were previously based on vague speculation.

beleester's avatar

Polymarket has it currently at 17%, which doesn't really feel that reassuring to me! That's basically "roll a 6-sided die, and if it comes up 1 then NATO explodes."

Domenic Denicola's avatar

Which market are you referring to? https://polymarket.com/event/will-the-us-invade-greenland-in-2026 is at 10%.

I agree this is still higher than I would like. My skimming of Manifold while talking to my dad on the phone found a 1% market, but I didn't investigate volume or compare multiple sites.

Florent's avatar

My objection to insider trading is about the timing.

Suppose you have a market that sits at 5% for 6 months, and you, an insider, know that it will resolve to yes. You could buy shares at the beginning, and waste 6 months of interest waiting for the resolution, or you could buy it at the last moment for maximum revenue.

But the purpose of the market is to give a signal to the rest of the world. A market changing value 5 minutes before resolution serves no one.
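To put rough numbers on that timing trade-off (illustrative figures only; this ignores the separate risk that the information leaks, or the plan changes, while you wait):

```python
# An insider knows a market currently at 5% will resolve YES. Buying
# early locks up capital (forgone interest); buying at the last moment
# avoids that cost entirely. All numbers are illustrative.

def profit_buy_now(price, months, annual_rate):
    """Profit per $1 of payout, net of the interest forgone by locking
    up `price` dollars for `months` months (simple interest)."""
    opportunity_cost = price * annual_rate * months / 12
    return (1 - price) - opportunity_cost

def profit_buy_late(price):
    """Profit per $1 of payout when buying moments before resolution."""
    return 1 - price

early = profit_buy_now(0.05, months=6, annual_rate=0.05)
late = profit_buy_late(0.05)
print(f"buy early: {early:.4f} per $1; buy at the last moment: {late:.4f} per $1")
```

At a cheap entry price the interest cost is small, so the main argument for going in early is not return but the risk of being scooped while waiting.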

Christina the StoryGirl's avatar

This expresses my feeling of "sorry, this tech isn't good enough yet to even think about," which I also feel about pretty much everything AI, too. Like, call me when you've gotten to the level of (effective) perfection, like ST:TNG's "Computer" and it can do holodecks. Until then, it's wrong often enough that it's not worth even engaging.

methylxanthine's avatar

That makes some sense at an individual level, as a human being with limited attention. The problem I foresee is that rarely does anyone call you when something is “ready to engage with.” Or it becomes useful in a way that you didn’t anticipate, so your initial model of what success looks like was wrong and should have been updated. And so you just end up back at having to spend some attention on things that are still quite flawed.

Christina the StoryGirl's avatar

> "The problem I foresee is that rarely does anyone call you when something is 'ready to engage with.'"

What are you talking about?

If prediction markets ever begin usefully forecasting virtually all events in a way I could act on in *perfect safety*, without any risk to me of insider trading / manipulation, believe me, that information will make its way into the zeitgeist.

Ditto if one of the LLMs stops *ever* hallucinating and especially ditto if someone invents a holodeck. That'll be in the mainstream news!

For now, I don't see a reason to engage with either, and as a comparatively dumb ACX user, I kind of feel like I may be the kid from the Emperor's New Clothes on both topics. I find it baffling that both Scott and the general ACX commentariat are as into prediction markets and AI as they are - and was actually shocked that Scott has an Alexa in his home.

Anders's avatar

By waiting 6 months, you risk your insider information becoming public during that time, or that some other insider moves the market. But yes, there will absolutely be cases playing out like your example—so, good point!

TheGreasyPole's avatar

But if you go in early, there is also a risk that your insider information changes or is made irrelevant.

6 months ago, the Maduro insider trader could probably have gone in on the basis of "I know the boss definitely wants to do this, so it's very likely to happen," but Trump could have changed his mind, Maduro could have died, Trump could have died, 101 other possibilities.

On the other hand... if our insider makes his bet literally as the GO order is given and the helicopters are taking off from the carrier, he's much surer his insider knowledge predicts the actual outcome and this is a "1:99 bet being offered at 4:1 odds".

Timothy M.'s avatar

I feel like you're just describing the probability our insider should attach to their prediction at this point.

Nick's avatar

To be fair, I would say you are skipping too quickly over the possibility that the predictions are bad, at least for things like geopolitics, which is probably more societally useful to know about than sports odds. How much work has been done on backtesting how often predictions are correct?

Just intuitively, it seems like there are red flags suggesting they are bad, e.g. (a) the 6mn market for Iran is minuscule compared to serious markets like stocks/commodities/sports gambling, suggesting there are no motivated/informed institutional investors involved; (b) if I look at something like the Maduro-deposed market, where for most of the market's lifetime the clear consensus was that he would not be deposed, it seems apparent that the retail investors betting on it did not have a good read on the Trump administration/the real-life probability (and I'm not sure an insider trader causing the market to skyrocket to 'definitely deposed' 5 minutes before it actually happens provides any informational value to society).

Steffee's avatar

Yeah, that's a good point. Scott sees the money in markets and wonders why there hasn't been revolutionary change, but the mere presence of more money doesn't necessarily mean accuracy has gotten significantly better for hard questions except when they near their deadlines?

Timothy M.'s avatar

I definitely agree that the Maduro prediction was pretty far off. It does raise an interesting question, though:

Was it off because not enough people were participating (i.e. it didn't really reflect the consensus of what people think)? If so, presumably it will get better if there's more adoption.

Or was it off because the institutional consensus was that the Trump administration wouldn't do something like this? If so, that's a bad prediction, but it DOES tell you something - namely, that our institutions do not understand what is currently happening in our government.

mikolysz's avatar

Conditional prediction markets feel like "side channels" for lobbying.

Assume you're a large company trying to influence the regulatory process (not necessarily in your own country). You can bribe / lobby politicians or donate to PACs, but that has legal implications, and is problematic if you're a foreign actor. You can also bet on the NO side of your proposed change happening, bringing its probability down. This makes it a lot more profitable to bet YES, especially for the politician who can manipulate the process.

This creates a strange paradox where the true probability of the change moves in the opposite direction of its predicted probability. The less probable the market says the change is, the more incentive the politician has to bet YES and cash out by implementing it, which makes the change itself more probable.

I also feel like prediction markets encourage stupidity. On one hand, it's beneficial for society if there are low-yes-probability markets of "politician X will do this extremely stupid thing that they're extremely unlikely to do, but which would wreck the economy." As a participant of said economy, you can use them to predict how likely the disruption is (or just to hedge against it). However, as the politician, such markets encourage you to do that stupid, economy-wrecking thing, as low-probability bets that you have 100% influence over are extremely profitable.

If you're a foreign state actor who wants the economy wrecked, you can use this to your advantage by providing liquidity to these markets.

Prediction markets are just continuing the trend of "financialization of everything", and I think that can impact the world in various weird and unpredictable ways. There are plenty of people in positions of power whose salary is much lower than their power would suggest. It's not just politicians, but also sports referees, regulatory agency personnel with authority to make major decisions, journalists and newspaper editors, law enforcement agents (who can trivially turn a "will x be caught before y" into a definite NO), and probably more I can't think of. Giving those people a direct path of converting their influence into money can have strange and unpredictable consequences.
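The arithmetic behind that incentive is stark: the payoff for forcing a YES grows with how unlikely the market thinks the event is. Illustrative numbers only:

```python
# The lower the market's price on YES, the larger the payoff multiple
# for an actor who can single-handedly make the event happen.

def yes_profit_multiple(price):
    """Profit per $1 staked on YES at `price`, if YES resolves."""
    return (1 - price) / price

for p in (0.50, 0.10, 0.02):
    print(f"market at {p:.0%}: {yes_profit_multiple(p):.0f}x profit per $1 staked")
```

So a "politician does extremely stupid thing" market sitting at 2% offers the one person who controls the outcome a 49x return, which is exactly the wrong incentive.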

dogiv's avatar

You're describing slightly indirect bribery, and I think mostly existing ways of preventing bribery will work fine. Governments should probably ban their employees from betting on stuff related to their job, for example. Many businesses too.

Charles Wang's avatar

I think there is a legitimate problem with prediction markets in the microeconomics of it. For one, the 4%-ish trading fees on Kalshi are ludicrous by financial standards, so there's essentially no point in smart money trading in the absence of extreme insider information or trading against a retail punter. So this already kicks the probability off from the true one by a few percent.
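Concretely, a fee creates a dead zone around the market price inside which no informed trade has positive expected value, so prices can drift from true probabilities by roughly the size of the fee. A toy sketch (the flat per-dollar fee is a simplification; real exchanges, including Kalshi, use per-contract fee schedules that vary with price):

```python
# Buying YES at `price` plus a fee only pays off in expectation if the
# true probability exceeds price + fee; symmetrically for NO. Between
# those bounds, smart money stays out and the price can sit off-true.

def min_true_prob_to_buy_yes(price, fee):
    """Smallest true probability at which buying YES breaks even."""
    return price + fee

def max_true_prob_to_buy_no(price, fee):
    """Largest true probability at which buying NO breaks even."""
    return price - fee

price, fee = 0.50, 0.04
lo = max_true_prob_to_buy_no(price, fee)
hi = min_true_prob_to_buy_yes(price, fee)
print(f"at a {fee:.0%} fee, no informed trade pays while the true probability is in [{lo:.2f}, {hi:.2f}]")
```

With a 4% fee, anything within about four points of the quoted price is untradeable for smart money, which is the "kicked off by a few percent" effect.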

The bigger issue is that retail order flow is not balanced. If retail traders always just flipped a coin before deciding whether to buy or sell on a bet, this would directly subsidize smart trading activity without affecting the price and incentivize good price discovery. On the other hand, if retail flow is one-way, then market makers need to be paid to take the risk of sitting around for potentially months to years holding the other side, and the price will be heavily distorted. Just think of how little the price of a meme stock has to do with any sort of fundamentals. I wouldn't be surprised if movements in the lab leak thing are driven entirely by this sort of effect.

I do like the conditionals idea; there's no way proper money is going to start flowing through the system unless it's hooked up to things with direct economic value. I somehow managed to get Matt Levine to post my email on doing this for stock voting, but it probably also would be good for splitting up complicated things like stock prices into simpler bets, like stock price conditional on future earnings reports, plus the future earnings reports themselves, since it often is possible to get one major thing right and be completely unable to make money off of it due to getting the other leg wrong.

Insider trading is probably an issue for liquidity, since market makers really don't like it when your quotes only get filled by people picking you off. Maybe there could be a cust flag like some exchanges have, where you have to mark a trade on MNPI as such, and people can flag whether they want to take the other side of that? You'd probably have to suppress visibility of MNPI trades for some fixed amount of time, to allow whoever was on the other side to offload the position. MNPI should at least be required to be disclosed within a certain amount of time after the trade, since that neatly both forces useful information to become common knowledge and is a lever to stop people freely breaking their organizations' rules to insider trade. People with material influence over the outcome should probably be required to publicly disclose that before the trade, the way CEOs are, to prevent some of the degenerate behavior related to that.

MT's avatar

Has sports betting, and now widely accessible and hugely capitalized sports betting, revolutionized our ability to predict and understand sports outcomes? But surely betting will revolutionize prediction of other things that are … more straightforward? Where the public has more insight? … for the greater good

MT's avatar

I would say advanced statistics have been much more impactful than the spread of betting, which is still limited to who's the underdog etc. Power rankings, player awards, still not odds-based and the opinions of experts are much more widespread and trusted than appeals to betting markets. If you look at win probability odds that are quoted all the time now, it's almost always statistics-based rather than going off Vegas. And this is the absolute best case for prediction markets in terms of participation, visibility, outcome resolution and so on.

Timothy M.'s avatar

I haven't done the research to know if this has moved the needle, but I think one challenge here would be that even before sports betting was ubiquitous, sports outcomes were already pretty legible and could be calculated with a lot more accuracy than e.g. policy outcomes.

Mark's avatar
Jan 13 · Edited

> The second is to conclude that prediction markets’ role in God’s plan was only to provide the foundation for AI superforecasters

How would AI superforecasters affect AI safety risk? Would they help us, as our understanding and power would grow and help us better defend against a possible hostile AI? Or would adding this ability to AI mean an acceleration of capabilities growth, bringing powerful AI sooner when it is harder for us to control it? This seems like the most important question here.

Scott Alexander's avatar

I don't think it's worth worrying too much about adding individual abilities to AI (except maybe extreme ones like "ability to launch nukes"). I think most of the improvement in AI forecasting comes from improvement in the base models, and so is already "priced in" by however much you're against base model capability improvement. I admit that there's a little extra work to be done with prompts, fine-tuning, etc, but I think anything you can do with a prompt, a superintelligence can do on its own, and it's probably not your autoforecaster that's going to go rogue anyway. So I'm pretty firmly on team "help us as understanding and power grows".

Enigma's avatar

*Khamenei? Not to be confused with Khomeini. Actually, yes to be confused with him, over and over again, because the names are so similar yet so different...

Also, I think the Brian Armstrong quote should include the context that he explicitly preceded his words with "I've been tracking the prediction market about what Coinbase will say on their next earnings call". Just to make it clear he wasn't being coy, he explicitly said what he was doing (and indeed, it was at least a little funny).

Scott Alexander's avatar

Thanks, I've added that context.

ricky's avatar

I'm more concerned about the order of operations being flipped. Prediction markets becoming predictor markets so to say.

News reports saying 'the chance of the US invading x has increased to 25%' could make individuals think, oh this is a likely event and further push it up. Or decision makers in the government see that there is actually a belief this could happen, and reinforce their final decision to bomb x.

ricky's avatar

ok i just read some of the comments that mentioned the feedback effect. that's exactly what i'm talking about and i bet (heh) that the impact is much bigger than people give it credit for

Scott Alexander's avatar

Do you also worry about this with polls?

ricky's avatar

In short, yes.

I'd go even so far as to say media as a whole too. I like to say 'I'll see it when I believe it' instead of the normal way round. Most people don't notice, but every once in a while someone does a double take. It's my way of saying that cause and effect are often reversed, and not always in the way you'd expect. Oh damn, your girlfriend dumped you because you cheated on her? Actually, maybe you cheated on her so that she could dump you.

I didn't use prediction markets for elections in my example specifically, because I wrote it and thought, isn't that what polls are? I guess what I'm saying is that if there were no polls or news companies, the way you'd decide on your candidate would be manifestos, ads, word of mouth, vibes, etc. That would lead more people to vote for libertarians or the Green Party or Kanye or whomever. The fact that you are constantly blasted with information like 'Democrats are up 6% after candidate X did blunder Y' consistently reinforces the system of D vs. R. I can't remember the exact quote, but essentially, by trying to answer the question (or in this case interpret polls/probabilities), you accept the situation and context as the default, without questioning whether this is correct in the first place.

TotallyHuman's avatar

Doesn't the fact that political parties run obviously biased polls indicate that this is a concern with polls as well?

JerL's avatar

I think a big difference between polls/elections and other notable geopolitical events is that to influence a (reputable/good) poll requires a lot of coordination. But to influence a prediction market, just requires a lot of money.

If Bill Ackman wants to hyperstition a Trump election win through polls, he has to spend lots of money convincing lots of people to lie to pollsters to even get a signal in the polls.

If he wants to hyperstition regime change in Iran, I assume he has enough money to single-handedly raise the odds on the "Iranian regime falls this year" market appreciably in a way that, if Iranian dissidents took the market seriously, would lead them to decide "this is our year!" and go for it.

JerL's avatar

I don't think that exactly covers the case I'm talking about, for two reasons:

First, the issue isn't that the rich guy is biased or mistaken, exactly; he's paying to raise the probability, and willing to eat the loss.

And second, and more importantly, the mechanism here is that people's perception of p(success) is itself an input into p(success)--it's like a common knowledge problem where if enough people have common knowledge that the regime is unpopular, they can coordinate to overthrow it. The idea is that the rich guy is paying to increase the perception that there is common knowledge about the situation.

FWIW I agree that using a geopolitical scenario here sounds a little contrived to me (I stole this example from someone elsewhere in the thread), but I think you could absolutely have markets where the price is a signal that people can coordinate around in a way that has feedback effects on the underlying thing being bet on.

Sol Hando's avatar

Wouldn’t it be funny if we got AI superforecasters, they immediately started predicting ASI that destroys humanity by 2030, and that's how the public gained consensus for halting AI development?

Wouldn’t it be funnier if the only reason it was predicting this was because of the many intelligent articles written on the topic flooded its training data, so whether or not they really convinced anyone was irrelevant since they ended up with their intended effect?

I don’t really buy the AI-2027 arguments, but I think it would be very funny if this was how things played out. Assuming we get super-forecaster AI, which I think is a real possibility (especially with significant scaffolding), I would put the odds of this at 5-10%

Scott Alexander's avatar

"Wouldn’t it be funnier if the only reason it was predicting this was because of the many intelligent articles written on the topic flooded its training data, so whether or not they really convinced anyone was irrelevant since they ended up with their intended effect?"

I don't think you can really do this with (admittedly still hypothetical) AI superforecasters. The whole point of them being superforecasters is that they beat humans at this kind of thing. If they still had a pathetic failure mode like that, they wouldn't be able to beat humans. Unless for some reason they have this failure mode with AI and nothing else - but I think "people write Internet articles about it" is common to many topics.

Timothy M.'s avatar

Well, that depends on why they beat humans. If it turns out that the human position on this was poisoned very heavily by this discourse, that leaves a lot of room for the AI superforecasters to be poisoned somewhat less and still outperform.

Sol Hando's avatar

I don't know... the arguments seem to have convinced many intelligent people who have worked very hard on predicting ASI in the near future. I don't think I would call it a failure mode, any more than anyone who has been convinced by these arguments is experiencing some intellectual failure mode.

Charlie Sanders's avatar

What does it mean to "revolutionize" society? The term is loaded with so much definitional and historical cruft that it loses meaning and ceases to effectively convey a specific, testable hypothesis. A more rigorous phrasing might look something like "perceptibly change a fundamental socioeconomic system that a majority of society interacts with on a daily basis". Testing this phrasing on past revolutions such as the American Revolution (changes to the federal government) or the development of the smartphone (changes to the method of interpersonal communication) seems to work.

Applying this phrasing to prediction markets and running the query "What are the fundamental socioeconomic systems used by a majority of Americans that are most likely to noticeably be changed by the emergence of Prediction Markets?" through Gemini, ChatGPT, and Claude, the consensus answer is political elections. Based on that, I'm predicting that the 2026 midterm elections are the first time that society feels truly "revolutionized" by prediction markets.

Incidentally, here's a fictional story that was inspired by this post: https://www.dailymicrofiction.com/p/rulescuck

Jacob's avatar

> The most interesting proposal I’ve seen in this space is to make LLMs do it; you can train them on good rulesets, and they’re tolerant enough of tedium to print out pages and pages of every possible edge case without going crazy.

Bettors would need to read all of this for it to be meaningful, which they won't

Scott Alexander's avatar

1. I think the smart ones might.

2. Depends what you mean by "meaningful". I think regardless of whether it changes bets, it's important that the markets *appear* fair, and that people who lose realize they have no ground to stand on and it's their own fault.

haze's avatar
Jan 13 · Edited

> a presidential administration would keep all normal restrictions on sports gambling but also let prediction markets do it as much as they wanted

I thought the rise of PMs in sports gambling was a combination of:

1. State level regulations being less favorable to having an online casino

2. Tax favorability (changes in the OBBBA that limit how much of your casino losses you can write off)

3. Casinos banning “smart money” types who win too much

Is there additional federal regulation that’s played a role? Or just the fact that the admin has allowed Kalshi/Polymarket to expand offerings?

Stefan's avatar

I'm working on a polymarket-esque platform that attempts to incentivize good thinking and writing on subjective topics as opposed to making accurate predictions about objective future events.

Would really

really

love to get thoughts + feedback from some people here. Please DM me if interested. You will get some free Solana for your trouble at the very least.

Ashwin's avatar

Such a minor thing but I can’t resist pointing it out, as this is a personal pet peeve: the current supreme leader of Iran is Ali Khamenei, not to be confused with his predecessor, the late Ruhollah Khomeini (note the positions of the vowels!)

Michael Watts's avatar

So, I think it's a mistake to consider "insider trading and market manipulation" together.

Insider trading is basically unproblematic.

Market manipulation has more problems.

There's been a lot of coverage recently of the idea that a political candidate could bet on themselves to lose a campaign, and then withdraw, which guarantees the loss. And for all the attention it gained, I think this problem isn't even worth worrying about. Someone with a good chance to win (the only kind of person who can make money by betting on their own loss) is likely to benefit more from the victory than they do from the rigged bet on themselves.

But you don't have to be the candidate to guarantee that they lose a campaign. Anyone is free to assassinate them. Everyone is a "key decisionmaker"! Contracts on assassination are explicitly forbidden and I believe the regulatory bodies take that prohibition seriously. But most other contracts are assassination contracts in disguise. If I have enough money riding on Elon Musk not tweeting any more this week, I can ensure that it happens by killing him. If I have money on Eric Adams losing the New York City mayoral election, I can ensure that that happens by killing him.

https://xkcd.com/190/

One resolution to this would be to extend the prohibition on assassination contracts to all contracts where an assassination would force a resolution *in either direction*. That would eliminate pretty much all contracts, including all contracts of any geopolitical interest. You obviously can't bet on whether Khamenei will lose power.

The other resolution appears to be "hope that no market is ever large enough that it occurs to someone to force it in their direction".

If we think of the operation of prediction markets as "multiplying the perceived significance of geopolitical positions by allowing people to add high levels of financial leverage to those positions"... is that something we want? In this framing, prediction markets sound like a tool for making conflicts more intractable. (Compare: "Losers in markets are huge whiners".)

TheGreasyPole's avatar

I am not sure insider trading is unproblematic, even for the markets. (It's obviously problematic for the institution he's inside of, especially if it's classified information.)

If someone is recouping outsize returns by front-running announcements due to insider knowledge, then that money is coming from somewhere. It's essentially a tax on the returns of all market participants who are not insider trading, paid out to the insider trader.

That's got to reduce the incentive to engage in the market unless you have privileged information, because if you don't.... people who do eat all the returns from your accurate predictions... that is going to cause withdrawal of prediction makers from the markets.... and that is going to ensure the market is not predictive of the real possibility right up until the "30 mins before public release when the insider strikes".

It's for these reasons there are rules against insider trading on stock markets. An insider trader is eating the returns of the other market participants, destroying the ability to make a market if left unchecked. A market can handle a "once-in-a-while" loss for this, especially if accessing that profit comes with large risks in the form of fines/prison time..... but it can't handle it if that kind of loss is encouraged, and is allowed to take place without applying that legal risk premium to the action.

Michael Watts's avatar

> A market can handle a "once-in-a-while" loss for this, especially if accessing that profit comes with large risks in the form of fines/prison time..... but it can't handle it if that kind of loss is encouraged

Well, it can; it's not like insider trading is illegal in every functioning stock market.

You're right that it drives liquidity down.

TheGreasyPole's avatar

Perhaps "insider trading" is the wrong term, although it's the colloquial term.

This is "trading on Material Non-Public Information" (MNPI). That's illegal in all major markets. I guess maybe you can find some African stock market somewhere.... or some edge case market where due to very specific conditions it has to be allowed (a market where conditions for market entry are that you must be in possession of MNPI?)... but, no. This is generally extremely illegal for the reasons I outlined.

Google result for MNPI trading gets you...

>There are no legitimate markets where using Material Nonpublic Information (MNPI) for trading is legal, as it constitutes illegal insider trading and is strictly prohibited by regulators like the SEC in the U.S. and MAR in the EU. While the handling of MNPI (e.g., by bank loan officers) requires strict internal controls and firewalls (information barriers) to prevent misuse, the act of trading on it is illegal across all major regulated markets, including U.S., EU, and India.

Luke's avatar
Jan 15Edited

Edit: Sorry I missed the context where this was about stock markets.

IANA finance lawyer, but I think MNPI is allowed in commodities trading in the US, provided it's not obtained through fraud or other violations. I suppose this relates to your point of "a market where conditions for market entry are that you must be in possession of MNPI" because large producers/consumers need to be able to trade their commodities.

B Civil's avatar

I searched for a country that had a stock market where insider trading is legal and I could not find one. There are several where enforcement is lax. Japan seems to be notorious for this. But no country says it’s legal.

Freedom's avatar

The reason for those rules is basically ignorance. People with better predictions always make money from people with worse predictions, that is how trading markets work.

TheGreasyPole's avatar

And that's fine, if you think you're competing on a level playing field. You're competing "your prediction skill and access to public information" against anothers, and only play if you believe you're better at doing that. The interplay of those competing predictors is giving you a reasonable approximation of the true odds.

That's not true if the playing field is not level. If that's the case, and money's on the line, the only winning move is not to play.

Say you're repeatedly betting on baseball games for money. Maybe your team scores more runs, maybe the other guy's does; you're competing on predictive skill and the teams' comparative baseball skills over 100s of games.

Now what happens if you do that, and consistently find that one of the two teams you're betting on always has the umpire declare they get 10 free runs at the end of the 9th inning? And you're not privy to which team the umpire awards the 10 free runs to (but other bettors are).

Do you continue to play for money? Or do you decline and withdraw from betting on those games?

The market for "predicting the game based on skill of prediction and skill of the players" just got destroyed, because the field is no longer level. So people don't bet.... so the predictive market is lost.... everyone declines to play unless they know which team the umpire will award 10 runs to. And the one guy who DOES know that information can't find counterparties to take his bet! Because if you don't know which way the umpire is going to go, and someone wants to bet that he knows the game outcome, you know you're the bag holder and he's the guy the umpire tells which team the 10 runs are going to.

Raj's avatar
Jan 16Edited

I was under the impression that this was a feature, not a bug. It creates an incentive for insiders to disclose, and early, because someone might beat them to the punch. The point after all isn't a meritocracy of superforecasters (that's just a potential means to an end) but a kind of pseudo-oracle.

In practice I agree though; I'm not sure how prediction markets get their liquidity. Participating in financial markets exposes you to beta, whereas prediction markets don't seem to have an equivalent, and with friction/house rake they seem negative in expectation for the rational non-insider. There are some possible explanations (like using them to hedge) but they seem insufficient to establish real markets. Right now, realistically, it's just vibes-based gambling.

B Civil's avatar

While it is true that some people are better predictors than others, I think the idea is that everyone has access to the same facts (or rumours) that might affect their prediction of what will happen.

dogiv's avatar

If the insider is waiting until 5 minutes before resolution, then it shouldn't affect liquidity among other traders until the last 5 minutes. And if the insider bets earlier, yeah other market participants will have less opportunity to make money but that will be because the market is more accurately priced.

TheGreasyPole's avatar

In a single game, sure. But not in a repeated game.

In a repeated game, traders take into account what happened in all the previous rounds of "the prediction trading game". If, in previous rounds, some insider trader is constantly eating 50% of all the profits but 0% of all the losses.... you decline to play further iterations of the game. The risk of a loss now outweighs the benefits of a profit.

Michael Watts's avatar

> With one exception, these aren’t outright oracle failures. They’re honest cases of ambiguous rules.

I think this description is off. Take the NYT Oscar viewership example. The rules are unambiguous. We're not calling them "ambiguous" because someone might read the rules and be unsure what they meant. We're calling them "ambiguous" because they do not agree with the statement of the question. Which is to say, the rules are *wrong*. It is not appropriate to resolve a question of fact by published numbers without regard to whether those numbers are factual. This inappropriateness is obvious enough that it pushes some people to claim that, because the meaning of the rules as stated is so unreasonable, it must not have been the meaning of the rules. But this is a conceptual fuzziness that we should avoid. We have to be able to admit that it's possible for the rules to define a criterion that isn't related to the question.

The problem of having resolution rules which just plain aren't relevant to the question they supposedly resolve is not fixable. Historically, this problem is addressed by the justice system, case by case. I predict that, if prediction markets survive, they will adopt the same approach. Questions will be resolved by an arbitrator who is vested with the power to resolve them.

> The most interesting proposal I’ve seen in this space is to make LLMs do it; you can train them on good rulesets, and they’re tolerant enough of tedium to print out pages and pages of every possible edge case without going crazy.

I think this is literally incorrect? If you have an LLM produce pages and pages of edge cases, you're going to exceed the size of the context window, at which point the LLM will "go crazy".

Steffee's avatar

Very fun post! But, side note: Aren't honest men the *easiest* type of man to con?

Freedom's avatar

No, many cons involve convincing the victim they can make money from some scheme

Steffee's avatar

I think I've heard of more people being taken advantage of by money-making schemes that don't hurt others and aren't malicious in nature. Someone whose own involvement leans toward malevolence is more likely to recognize a scam, I think.

TK-421 Presents's avatar

"The new era of prediction markets has provided charming additions to the language, including “rulescuck” - someone who loses an otherwise-prescient bet based on technicalities of the resolution criteria."

Not going to get into a semantic debate about what constitutes the "new era" of prediction markets - that is an invitation to self rule-cuckoldry - but rule resolution definitional disputes have been making cucks of traders for some time. A bitter fight over "indictment" in Israeli law in a Netanyahu market several years ago directly led to my exit from prediction markets.

(That's my defensible excuse. The real reason is that the markets worked as designed - dumb money got out. My "forecasting" mostly consisted of drunkenly betting on the funniest markets / outcomes. Burn in Heaven for surviving until I quit degenerately gambling on your death markets Jimmy Carter.)

The resolution process, like 75%+ of all things, can be solved today with AI.

Christophe Biocca's avatar

> Degenerate gambling is bad. Insofar as prediction markets have acted as a Trojan Horse to enable it, this is bad.

On the other hand, prediction markets don't ban skilled bettors, which means the odds are fairer, which means the net expected value of a random bet made by a degenerate gambler is just the platform fees. So there's an offsetting benefit, and the question is how much creating new gamblers vs. poaching existing gamblers is happening.
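As a rough sketch of that offsetting benefit (the fee rate and prices here are illustrative assumptions, not any platform's actual numbers): if the market price equals the true probability, a random bettor's expected loss is just the fee, whereas a skewed bookmaker line costs much more.

```python
def bet_ev(stake, price, true_prob, fee_rate):
    """Expected value of buying YES at `price` (each share pays 1 if YES),
    with a proportional platform fee charged on the stake."""
    shares = stake / price
    return true_prob * shares - stake - fee_rate * stake

# Fair odds (price == true probability): the only expected loss is the fee.
assert bet_ev(100, 0.5, 0.5, 0.02) == -2.0

# Against a skewed line (price 0.55 on a true 50/50 event),
# the expected loss is several times larger.
assert bet_ev(100, 0.55, 0.5, 0.02) < -10
```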

Mark Roulo's avatar

"Most of the links end with pleas for Polymarket to get better at clarifying rules."

I think it would be "better" if we had more precise contract terms.

When folks are debating the "best MLB team" in a given season the argument is clearly pointless unless "best" is defined. And there are several reasonable ways to define "best". "Wins the World Series" is one. "Has the best win-loss record in the regular season" is another.

The rules seem to need clarifying because the terms are sometimes (a) vague, or (b) clear but wrong.

With luck, as these markets get larger, folks will get better at writing more precise contracts that do what folks intend.

Andrew's avatar

The Russian town capture case is an example of a very clearly expressed contract, yet the resolution is unsatisfying, basically due to fraud in the data source. Defined terms are better than undefined terms, but they're not a panacea for having efficient price discovery create useful knowledge. Read anything by Matt Levine regarding CDS shenanigans for the limitations of contract specificity in creating financial instruments that achieve their intended purpose.

netstack's avatar

One less-than-stellar consequence of the publicity is that I’m seeing more people with preconceived notions. In 2019, if I dumped prediction market lore on a family member, they’d just file it under my normal level of nerdiness. Now it’s more clearly associated with low-class sports gambling. People will bend over backwards to talk about how gambling might be bad, actually.

The Trump administration’s stance is not going to help with such legitimacy. This is true even at work, where my very red tribe coworkers take Trump seriously, but not literally. If Truth Social implements the most reliable, accurate market on the planet, at least half the country will become deeply convinced that it’s a grift and a rugpull. All prediction markets will get some guilt by association. That’s a serious loss of information even if nonpartisan traders eat the remaining partisans’ lunch.

SMB's avatar

My other thought is that someone should look at the changes in the percentage chance of a certain resolution over time (if they haven't already), especially as to late-breaking changes in the outcome's likelihood. The value of these markets goes down if they're incorporating new information right down to the wire. An accurate prediction of whether Khomeini will be out by Jan. 31 is not as valuable if it swings from 20 percent to 80 percent on Jan. 30 because of a dramatic, late-breaking development, such as a U.S. regime change operation. You'd probably need to cut off bets well ahead of any deadline specified in a question.

Frange Bargle's avatar

> Maybe people just haven’t caught on yet? Most news sources still don’t cite prediction markets, even when many people would care about their outcome.

There is a simpler explanation: The people who consume news don't like to think about math while having their daily dose of outrage in the form of "news".

I have relatives who worry about things: the dog getting out of the backyard by jumping the fence, Trump banning elections and appointing himself Dictator For Life, their child getting hospitalized because the expiration date on the milk is today, etc. I ask them to estimate the odds of these things, in the hope that working out the odds helps them see that the odds are very low. They can't do it. At all. I see the following failure modes:

1. Some people have no ability to reason when emotionally aroused. In response to "The odds of you getting that cancer are 20,000 to 1", they might say "But what if you are that one?" If I point out that their habit of texting while driving raises the odds they get in a wreck, they will respond "No, it's fine, I do it all the time."

2. Some people have never learned about probability. They don't like math, and assume anyone using math to convince them of something is using sophistry with extra steps involving boring number stuff.

I work as an engineer. I met most of my friends in school and in rationalist-adjacent spaces. Most people I interact with are capable of thinking about odds, even when the topic is scary or upsetting. It is easy to forget that these people are not normal.

I don't know if journalists are this ignorant, or if they are simply pandering to their audience. When the median reader responds to emotional garbage, and hates math, the incentives ensure that reporting does not mention probability.

Ninety-Three's avatar

"The catch is, of course, that it’s mostly degenerate gambling"

I must object that "degenerate gambling" doesn't just mean "gambling on stupid stuff". The common definition is a mix of "compulsive gambling" and "losing so much money on gambling that it becomes a financial problem", and I've heard it more precisely used to mean "gambling with money you cannot afford to lose".

Unless you have some very surprising data for me about the average sports bettor, most of it is probably not degenerate.

Performative Bafflement's avatar

> Unless you have some very surprising data for me about the average sports bettor, most of it is probably not degenerate.

Zvi goes over the data in this post, it's pretty clear that sports betting is an adversarial system built on milking people, driving many into insolvency:

https://thezvi.substack.com/i/149404877/the-short-answer

At the same time, they literally ban and kick off any smart money on their platform that consistently wins. It's literally an engine to concentrate and milk the dumbest money possible, to the point of ruin in many cases.

TGGP's avatar

Banning "smart money" does seem like a problem for a prediction market platform.

Ninety-Three's avatar

I've read that before and it's full of numbers like this:

"That suggests that for every $70k in net sportsbook gross profits from regular gamblers, someone filed for bankruptcy."

That is, at face value, something you could reasonably call mostly degenerate. But also, those numbers are insane and I think Zvi is right to not take them at face value.

I guess Scott's phrasing is fine if he thinks sports gambling is responsible for 22% of bankruptcies, in which case I will be surprised by what numbers he finds plausible.

Performative Bafflement's avatar

> "That suggests that for every $70k in net sportsbook gross profits from regular gamblers, someone filed for bankruptcy."

Wait, I guess I don't understand why it's insane?

Aren't sports bets basically parimutuel? So to make $70k of vig, the actual amounts bet were much bigger than that, probably $3.5M in total bets if it was a 2% vig?

Maybe I have a fundamental misunderstanding of how sports betting works though.

Ninety-Three's avatar

In the regulated US sports betting industry, it's mostly fixed odds and the vig is around 10% (yeah it's real high, apparently there's a lot of consumers out there who don't mind terrible odds).
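The back-of-envelope arithmetic in this exchange can be checked directly (both hold rates are the assumptions discussed above, not platform figures):

```python
def implied_handle(gross_profit, hold_rate):
    """Total amount wagered implied by a book's gross profit,
    assuming gross_profit = handle * hold_rate."""
    return gross_profit / hold_rate

# $70k of gross profit at a 2% parimutuel-style vig implies $3.5M wagered...
assert implied_handle(70_000, 0.02) == 3_500_000
# ...but at a ~10% hold (closer to US fixed-odds books), only $700k wagered.
assert implied_handle(70_000, 0.10) == 700_000
```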

Seth's avatar

Scott expresses surprise that prediction markets haven't revolutionized society. I don't get it. I genuinely don't understand why one would expect prediction markets to revolutionize society.

Even if you believe there is *literally* *actually* *scientifically* a 20% chance that Khameini is out by the end of the month--and what the heck is that even supposed to mean?--how exactly is this knowledge supposed to revolutionize anything?

This isn't a gotcha. I genuinely just don't understand the underlying worldview here.

Seta Sojiro's avatar

Decisions cumulatively worth trillions of dollars are constantly being made by politicians and business leaders who need to make the best of incomplete information about future events. If you can improve your estimate of the probabilities of those future events then that information is plausibly worth billions of dollars. So prediction markets should be able to capture some of that value.

For instance - a lot of infrastructure takes several years to build. Knowing what politician will be in office several years from now is very relevant to knowing what sorts of subsidies or taxes will be in place. Knowing which countries will be stable in several years can influence which intermediate materials or goods will be more or less expensive.

Seth's avatar

If the claim is that prediction markets could marginally improve the quality of decision-making in certain technocratic settings, then sure. I guess that doesn't seem revolutionary; certainly along the timescales of a few years.

But the example you give, forecasting who will be in office several years in the future--this is exactly the kind of thing prediction markets are not going to be able to do! There are already huge incentives to do this, and no one can reliably do it, because the nature of the problem makes it unpredictable.

Seta Sojiro's avatar

If 100% certainty is worth many billions of dollars, then probabilistic knowledge on the margins (distinguishing a 10% change vs 30% chance) is still worth billions of dollars.

The question is, are prediction markets actually delivering more accurate knowledge than what decision makers would have otherwise used? And are they asking the right questions?

AnthonyCV's avatar

"Before we begin our banquet, I would like to say a few words. And here they are: Happy happy boom boom swamp swamp swamp! Thank you!"

This was not one of the parts of HPMOR I expected to be prescient.

Freddie deBoer's avatar

what if the crowd just isn't that wise

TGGP's avatar

Then someone could make money betting against them.

Nicholas Weininger's avatar

FWIW, I made an alternate market for the CA wealth tax that specifically resolves YES only if it actually has practical effect rather than just winning the referendum. This is because I think there's a nontrivial chance that even if it passes it'll get held up in court challenges and not actually result in any big new tax assessments against the intended targets. If you're interested in implicitly betting on how big that chance is, compare Pressman's market which you link above to:

https://manifold.markets/NicholasWeininger/will-the-proposed-california-wealth

Bolton's avatar

> This doesn’t completely solve the conditional problem. There could be residual correlations based on hidden variables that affect the outcome of interest (in this case the election) without being known to bettors even on Election Day Eve. A trivial example is some extraordinary event which happens at 12:01 AM on Election Day. A more subtle example goes something like: suppose the economy is subtly good, nobody has managed to aggregate the statistics and figure this out in a legible way yet, and each individual person still only has private knowledge that the economy is good for him- or her-self. They might still be more likely to vote Republican based on their own private economic optimism, and then the hidden goodness of the economy might become manifest and improve GDP during the next term.

I have been interested in what I call the "Sometimes Sunny in Philadelphia" approach: There is some evidence (admittedly mixed) to suggest that the weather on election day can affect turnout (for example: [1]), which could plausibly influence the outcome. It seems pretty unlikely to me that there would be other major causal pathways for the weather to affect outcomes (especially if we do some kind of comparison to the weather the day before or after). So using predictions conditional on the election day weather in swing states could be a good way to probe the causal structure here.

[1]: https://www.journals.uchicago.edu/doi/abs/10.1111/j.1468-2508.2007.00565.x

warty dog's avatar

"what words will be said" markets are not as dumb as you imply - it's just hard to operationalize bets on what will be said

Don P.'s avatar

If anybody watches the HBO (UK co-production) show Industry, this week's season premiere literally featured a character making money because she knew (illicitly) that someone would mention a particular company in a negative context in Parliament.

John's avatar

One other high-level issue with the obsession with predictions markets is an unexamined vague sense that they are "magic," that they would somehow generate new knowledge or vastly improve our ability to understand the world. In reality, though, many things are genuinely uncertain until new information emerges (cf. the 2024 US presidential election forecasting markets) and in a pretty broad range of P(X) -- say, 0.1 to 0.9 -- you really can't change your behavior in an effective way.

One other issue worth noting: a lot of prediction markets are based on essentially no information except the base rate and the resolution date. So the market across its lifetime just becomes a linear interpolation between the base rate and zero.
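A minimal sketch of that decay, assuming a "will X happen by the deadline?" market whose only inputs are a base rate and a resolution date:

```python
def uninformed_price(base_rate, t, horizon):
    """Price of a 'will X happen by the deadline?' market when traders
    know only the base rate over the full window and how much of the
    window remains -- it interpolates linearly from base_rate down to 0."""
    remaining = max(horizon - t, 0)
    return base_rate * (remaining / horizon)

# A market opened at a 30% base rate over a 100-day window:
assert uninformed_price(0.30, 0, 100) == 0.30    # at open
assert uninformed_price(0.30, 50, 100) == 0.15   # halfway through
assert uninformed_price(0.30, 100, 100) == 0.0   # at the deadline
```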

Andrew's avatar

The article didn't include the Venezuela election as one of the disputed outcomes. It's just another in a list, so fair enough, but I think it raises a potential meta principle.

Basic story, credible analysis suggests opposition got the most votes but because Venezuela is a dictatorship, govt just made up some other number and said they won. Market resolved in favor of opposition despite official sources saying otherwise.

Potential meta principle: resolution should lean in the direction that makes the original question interesting/worthwhile for epistemology purposes.

The official outcome of an election in an autocracy is not interesting: the government stays in power, and if it loses power it's not due to elections. A more interesting question is: do the people oppose it enough to show up at the polls to vote against it?

The resolution shouldn't violate any bright lines in its original wording, but to the extent that there is reasonable disagreement, biasing towards the interesting question seems reasonable to me. Ultimately it rewards people betting on epistemologically interesting questions rather than combing rule fine print.

Andrew's avatar

Having an LMM resolver is a specific version of having a third party resolver. Maybe the LMM is perceived as not possibly biased, but a third party can bank on their reputation as a good resolver. They should be identified in advance and be paid based on the overall popularity of the market, possibly based on transaction volume. A platform could experiment with different resolvers, including LMMs, to see which are most trusted. Right now platforms are essentially bundling themselves as resolvers, as it seems to be a problem intrinsic to the main product and they want to differentiate themselves, but maybe outsourcing would be better.

vectro's avatar

What is an LMM?

Joshua Hedlund's avatar

> Trump Greenland market; went way up upon Maduro capture and subsequent reignition of the discussion

Well, really it went up because one degenerate trader claimed to have inside knowledge and bid it up past 90%. I happened to see it within the hour and bought it back down and won an easy 1k in manna.

Edward Scizorhands's avatar

> Other crypto executives condemned the move, with one saying that “you need your head examined if you think it’s cute or clever or savvy that the CEO of the biggest company in this industry openly manipulated a market.”

These people are too high on their own supply.

You make a stupid market on a stupid thing that a billionaire might say, and he gets wind of it and trolls it, that's on you.

Really, that's the problem: the billionaire was making fun of you. If someone making fun of you makes you start making internet posts, stop. Log off.

A1987dM's avatar

> TechCrunch: What words will be used in Coinbase’s earnings call? Coinbase CEO Brian Armstrong delivered the company’s “earnings call”, ie a speech to investors about its recent progress. At the end, he said “And I just want to add here the words Bitcoin, Ethereum, Blockchain, Staking, and Web3 to make sure we get those in before the end of the call”.

Am I the only Hofstadter-head who thinks that such a final sentence shouldn't count as *using* those words, only as mentioning them, so that if the question said "used" (rather than e.g. "spoken") then it should still have resolved NO?

https://en.wikipedia.org/wiki/Use%E2%80%93mention_distinction

Zanzibar Buck-buck McFate's avatar

After the UK budget I put aside £1000 for prediction markets betting, using a website called smarkets, believing I could get a better return than a cash saver. But I have ended up betting a lot of it on sport, mainly because there's just so much more sport to bet on than politics - on smarkets you can even bet on handball, netball, you name it. I stick to soccer and cricket because I'm more familiar with these sports as soap opera, and ultimately I think this is what people are betting on whether it's sport or politics - if we lived in a fictional world, how believable would this hypothetical plot twist be? Stick a bunch of handball stats in front of me and I wouldn't have a clue how to interpret them as stories. If I just bet on when Keir Starmer is going to exit as Prime Minister, it's reassuring to know how my general prediction form is going, and sports betting gives me more reliable recurring feedback.

Glau Hansen's avatar

Is there a skew in bettors leading to a skew in results? I know that sports betting is heavily dominated by young men, and it seems logical that other betting markets would be as well. It seems like this should bias the results, because young men have demonstrated biases, but I'm not sure how you'd control for that.

BeingEarnest's avatar

I was surprised that PolyMarket came up in lunch at work today, after never before hearing anyone outside these online spaces talk about prediction markets. Maybe our world really is revolutionized.

Also, I think my go to investment strategy will just be to invest in anything mentioned on this blog and or done by rationalists and no one else.

periwinkle's avatar

Prediction markets make sense as a way to pay for information. If the reward is good enough relative to difficulty of prediction, you get good information. If it's not, you don't. Manifold's bounties let you pay for information like this, which is nice. Polymarket and Kalshi depend on "dumb money" (read: degenerate gamblers) to provide this liquidity, so you only get good information about easily predictable topics and topics that degenerate gamblers care about. Aka sports, frivolities, and major geopolitical events. Other topics aren't rewarding enough to people searching for an edge, so no one bothers to get an edge and make the market more correct.

Wasserschweinchen's avatar

His name is Khamenei. His predecessor was Khomeini, which must be why so many people are confused.

Peter Gerdes's avatar

User written resolution criteria interpreted only at resolution seem like an obvious issue for scaling.

They provide serious potential for corruption -- you create lots of markets with somewhat ambiguous resolution criteria and get an insider to lean just slightly in your favor when deciding resolutions, in exchange for a cut. Second, they inject extra needless uncertainty.

In the long run, I suspect we should move to some kind of more formalized scheme for question writing and be able to request/pay for binding opinions about the interpretation before resolution.

Peter Gerdes's avatar

Regarding conditional prediction markets, I fear people are being a bit too careless in assuming these are always going to be a good thing. Fundamentally, once you link prediction markets to actual outcomes (so you have feedback between the prediction and its resolution) things get into some relatively complicated game theory.

Yes, there are theorems proving you discover the truth under certain assumptions, but once you have feedback, the outcome may not be what it would otherwise have been, and whether that is beneficial is complicated.

aves is pseudosuchia's avatar

In practice, a big problem with markets on economic outcomes conditional on who wins an election is going to be regulatory risk. I'd imagine the political party that traders predict would be more likely to cause a recession might lash out at the market.

Brian's avatar

I think part of the reason it hasn't been revolutionary is time; it is still too early to tell. All new tech takes time to have an impact. You would not have been able to appreciate the impact social media would have on politics or the economy when Facebook first started. It takes time for people to learn how to use the new tools at their disposal. Nonetheless I am already hearing normal people talk about Kalshi and it gives me some hope.

Sebastian Garren's avatar

Financial markets continue to become sophisticated at a faster rate than prediction markets. So even though it is easier to *read* a prediction market, the financial markets price in the information already. I think the examples you use of useful forecasting are clearly markets that attract financial bets.

Will the AI bubble pop? Stock prices of input providing companies.

Will Trump turn America into a dictatorship? Not even a forecasting question.

Will YIMBY policies lower rents? Markets?

Will selling US chips to China? Markets, again?

Will kidnapping Venezuela’s president weaken international law in some meaningful way that will cause trouble in the future? Treasuries?

If America nation-builds Venezuela, for whatever definition of nation-build, will that work well, or backfire? FDI rates?

netstack's avatar

Wait.

How well-calibrated are these sports bets, anyway? Does the volume of dumb/uninformed/ride-or-die money attract enough experts to get realistic predictions?

Level 50 Lapras's avatar

Since prediction markets are everywhere, we should be able to get accurate predictions about the probability of significant future events, right?

Polymarket:

CA wealth tax appears on ballot: 50% (https://polymarket.com/event/billionaire-one-time-wealth-tax-on-california-ballot)

CA wealth tax passes: 52% (https://polymarket.com/event/billionaire-one-time-wealth-tax-passes-in-california-election-2026/billionaire-one-time-wealth-tax-passes-in-california-election-2026)

Somehow, the wealth tax is more likely to pass than it is to even appear on the ballot!
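Those two quoted prices violate a basic coherence constraint: passing implies appearing on the ballot, so P(pass) can never exceed P(on ballot). A quick check, using the prices quoted above:

```python
def coherent(p_pass, p_on_ballot):
    """Passing requires ballot access, so any coherent pair of prices
    must satisfy P(pass) <= P(on ballot)."""
    return p_pass <= p_on_ballot

assert not coherent(0.52, 0.50)  # the quoted Polymarket pair: incoherent
assert coherent(0.25, 0.50)      # e.g. 25% pass / 50% on ballot: fine
```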