Nitpick: calling the probability of a possible event "effectively zero" strikes me as unnecessarily vague, because it just asserts that the probability is small compared to another, unspecified probability.
For some purposes, the probabilities 1e-3 and 1e-10 are effectively the same; for other purposes, they are worlds apart.
If I want to project my budget, then "the expected value of coins I will find on my way to work is less than a cent" is good enough, and I do not care if the probability of finding a dime on the sidewalk is more like 1e-4 or 1e-8. If instead it is the probability of dying in a car accident on my way to work, then I will care a lot.
I will grant you that in the context of forecasting contests, if you ask contestants n questions, you are unlikely to find out whether they are well calibrated for probabilities smaller than about 5/n. As most contestants (except LLMs) will not be willing to make thousands of forecasts, this puts a practical lower limit on the probabilities which can be tested. If I ask them whether a particular proton will decay this year, then all the good forecasters will max out on nope, and I will learn nothing.
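That practical floor is easy to see with a quick Monte Carlo sketch (the numbers are illustrative only): if the events you ask about essentially never occur over n questions, there is nothing to score low-probability calibration against.

```python
import random

def contests_with_any_event(p, n_questions, trials=10_000):
    """Fraction of simulated contests (each with n_questions independent
    questions of true probability p) in which at least one event occurs.
    If that fraction is near zero, calibration at probability p is
    untestable with that many questions."""
    hits = sum(
        any(random.random() < p for _ in range(n_questions))
        for _ in range(trials)
    )
    return hits / trials

# With 100 questions, p = 0.05 events occur in ~99% of contests,
# while p = 1e-4 events occur in ~1% -- far too rarely to score.
```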
Odds of us being able to assess this for the current year? (This is me saying: if you can't definitively say when we last used a nuclear warhead, then it's a poor question, which will be hard to answer in the normal course of business, as opposed to people hitting Nuclear Kill Switches)
You run a seismograph? You run a radiation detector, in places where we bomb? Question isn't "a city-destroying missile", it's a nuclear bomb. They can be quite small. If you are unable to verify all of the current targets of American materiel (there have been more than a few in the last year), I'd say this question is unanswerable.
I think you have to get pretty heavy into conspiracy theory-land to think that anyone's likely to try to get away with hiding something like that, let alone succeed. There's just too many people who would be likely to be able to notice, and have incentives to reveal it, and not enough to be gained.
How many "trustworthy" people are able to notice? Let's say we decided to, oh, say, nuke Afghanistan (obvious substitutions are obvious). A leadership strike on terrorists.
You'd have to know the coordinates, and establish that we didn't just use a "big conventional bomb" (we'll note there are several places where we are said to have allowed the use of big conventional bombs in "anger"). And, after you did all that, decide to publish this -- which for many countries would be Embarrassing, and also might invite the US doing more bombing (as we've lost plausible deniability).
Russia already refuses to say "we didn't murder that guy" because they know they won't be believed (also, it sounds better if they get the rep that they can murder anyone anywhere).
Many people would like to publicize it *because* it would be embarrassing. And why take the risk when you could just *actually* use a big conventional bomb that nobody would care about?
There are no "big conventional bombs" that anyone who understands bombs will ever mistake for even the smallest existing nuclear weapons.
And there are no nuclear bombs whose detonation will be missed by the people who professionally look for nuclear explosions, except in very niche cases like detonating inside a very large deep cavern that are not consistent with using a weapon "in anger".
I am sadly amused by the extent to which it is broadly believed that nuclear weapons are simultaneously powerful enough to Destroy The World, and not that much more powerful than big conventional weapons. Complete failure to understand scale at both ends.
I assume this means nuclear warhead detonated as an attack rather than as a test.
A third low-probability option is a nuclear warhead detonated by accident, and we might want to distinguish between a warhead detonated on the owner's territory and one detonated outside it.
Academic journals have a fairly clear ranking in any given discipline. SCImago has detailed rankings which are mostly accepted.
One would expect to see AI-written articles in pay-to-publish journals and lower-quality venues already. I would like to see predictions by discipline and journal tier (e.g. top 10, top 100, top quartile, top 50%) for when academic papers fully written by AI will be (i) first published; (ii) regularly published; and (iii) widely accepted as valid results. On a separate axis, one might measure the autonomy of the AI or the involvement of humans. I think the real milestone is when the AI chooses, develops, and publishes a paper independently, but there will be intermediate steps, not only because journals are conservative and will insist on human authorship for a time.
Progress on these measures would allow measurement of a wide range of abilities by the AI (reasoning, data analysis in the mathematical sciences, textual analysis in the humanities, etc.) with workplace currency. If you believe in alignment as a project, it might offer insights into what the AI sees as valuable; interpretation & analysis of differences between human and AI researchers will likely be its own research area.
As a very concrete question suggestion:
'Will a paper largely or entirely written by AI pass peer review and appear in a journal with Q1-ranking on Scimago?'
Likely some slightly less academic wording would be more suitable.
This question will resolve as Yes if, before January 1, 2026, the United States has enacted one or more bills that changes at least three of the six following components of the Affordable Care Act (ACA) to either repeal or curtail them (see the fine print for fuller details):
Eliminates or reduces the ACA's Medicaid eligibility or federal funding
Eliminates or reduces the premium tax credit eligibility or amount
Eliminates or curtails the individual mandate (by law the individual mandate still exists in the ACA, but has no penalty)
Eliminates or curtails the mandate for certain employers to provide health coverage for employees. Reducing the penalties will also be considered to be relaxing the mandate.
Makes it so that ACA subsidies are no longer limited to plans that satisfy the requirements specified in the ACA, including allowing ACA subsidies to be contributed to health savings accounts or similar accounts
Eliminates or curtails medical underwriting restrictions
Fine Print
Below is an expanded description of the six components above, with the criteria used to determine whether each change has occurred:
One or more of the following changes to Medicaid is implemented
Rescinding or reducing federal Medicaid eligibility below 133% of the federal poverty line (effectively 138% of the federal poverty line)
Rescinding or reducing the federal medical assistance percentage (FMAP) for newly eligible recipients for states which have expanded Medicaid
Reducing or capping the overall amount of Medicaid funding provided under the ACA
Imposing new federal requirements on Medicaid eligibility, such as work requirements, that have the effect of reducing the number of people who are currently eligible for Medicaid
One or more of the following changes is made to premium tax credits
Reducing taxpayer eligibility for the credits (the ACA specified that taxpayers with household incomes ranging between 100% and 400% of the federal poverty line were eligible, so a reduction would be narrowing that range by either increasing the minimum eligibility above 100% or lowering the cap below 400%)
Reducing or eliminating the premium assistance amount specified in the ACA
Imposing additional requirements on premium tax credits that have the effect of reducing the number of people who are currently eligible for premium tax credits
The individual mandate is eliminated (by law the individual mandate still exists in the ACA, but has no penalty)
Eliminating or relaxing the mandate for certain employers to provide health coverage for employees. Reducing the penalties will also be considered to be relaxing the mandate.
Making it so that ACA subsidies are no longer limited to plans that satisfy the requirements specified in the ACA, including allowing ACA subsidies to be contributed to health savings accounts or similar accounts
One or more of the following changes is made to the ACA's medical underwriting restrictions:
Eliminating or relaxing the ACA's restriction on excluding individuals with pre-existing conditions
Expanding the ability of health insurance issuers to price discriminate or set eligibility requirements based on certain characteristics
The above criteria need not be met in a single bill. The question will still resolve as Yes if multiple bills are enacted that have the combined effect of satisfying the above criteria.
The ACA is lengthy and complex, and Metaculus will use its judgment to assess legislation that has been passed while also relying on characterizations published in credible sources that Metaculus assesses to be knowledgeable and demonstrate sufficient expertise.
The expiration of enhanced subsidies passed in other legislation is immaterial; this question only resolves as Yes if the above criteria are met with respect to the text of the ACA.
I would like to see something along these lines, maybe "will an AI be explicitly credited as an author on a peer-reviewed paper in the top (N) journals in its field"? By explicitly credited I mean the reviewers + editors know the author is an AI, the AI is listed as an official author in the author list, and the AI's contributions meet CRediT criteria.
Will American life expectancy at birth exceed 79 years, on average, by 2026?
According to the CDC, what will the death rate by overdose be in 2026? (If we need a binary question: under 29 deaths per 100,000 people per year?)
What will be the highest level of commercially available self-driving car in 2026?
Will an AI be declared CEO of a Fortune 500 organization by Dec. 2026?
Will the US workforce participation rate go below 58% by 2026?
Will an AI be recognized as the author of a best-selling book by Dec 2026?
Will a nuclear weapon be detonated in 2026 as a hostile action?
Will the price of gold be over $3,800 USD by 2026?
Will the 2026 global temperature average be greater than +1.5C compared to the 1850-1900 temperature average?
Will at least one peer-reviewed scientific article state, as a finding or explicit claim, that more than 0.5% of the current U.S. population was conceived using embryo selection, by December 31, 2026?
(Criteria: Published in a journal ranked in the top 50% (Q1 or Q2) of its subject category in Clarivate’s Journal Citation Reports)
Will the median home value in the Los Angeles–Long Beach–Anaheim, CA metro area be more than 5% higher on December 31, 2026 than it was on December 31, 2025?
RESOLUTION CRITERIA
This question will be resolved using the Zillow Home Value Index (ZHVI) for the Los Angeles–Long Beach–Anaheim, CA Metro Area, as published by the Federal Reserve Economic Data (FRED) service.
Data series: LAOBPPR
Value for 2025: The ZHVI value reported for 2025-12-31 (or the closest available monthly value if daily data is not available).
Value for 2026: The ZHVI value reported for 2026-12-31 (or the closest available monthly value if daily data is not available).
Resolution rule:
YES = (ZHVI_2026 / ZHVI_2025) > 1.05
NO = otherwise.
Only the ZHVI (FRED series LAOBPPR) will be used; no adjustments will be made for inflation, seasonality, data revisions after resolution, or alternative data sources.
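The rule above is fully mechanical; a minimal sketch using the 5% threshold from the question title (the function name and values are illustrative, not real ZHVI data):

```python
def resolve_zhvi(zhvi_dec_2025: float, zhvi_dec_2026: float,
                 threshold: float = 1.05) -> str:
    """YES iff the Dec 2026 ZHVI exceeds the Dec 2025 ZHVI by more than
    the threshold ratio (5% here); NO otherwise."""
    return "YES" if zhvi_dec_2026 / zhvi_dec_2025 > threshold else "NO"

resolve_zhvi(900_000, 960_000)  # -> "YES" (ratio ~1.067)
resolve_zhvi(900_000, 940_000)  # -> "NO"  (ratio ~1.044)
```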
On December 31, 2026, will the ratio of the Los Angeles Zillow Home Value Index (ZHVI) to the U.S. Consumer Price Index for All Urban Consumers (CPI-U, FRED: CPIAUCSL) be higher than on December 31, 2025?
Will the US federal minimum wage exceed $10 by December 31, 2026?
Note: I used ChatGPT5 to help me craft a resolution criteria for one of the questions. You're welcome to guess which one. ;-)
For example, one of the most consequential events of 2010s was the decision in the late summer of 2015 by Angela Merkel's German government to admit roughly one million young male Muslims, which seemed to then help lead to the populist backlash that brought Brexit and Donald Trump's first election in 2016.
But, so far as I can tell, nobody had predicted Merkel's decision since French novelist Jean Raspail's "The Camp of the Saints" in 1973. For example, it was not included in Philip Tetlock's Superforecaster competition for 2015.
Granted, nobody can objectively judge this kind of question (e.g., Raspail's novel has Hindus rather than Muslims and France rather than Germany, and it didn't happen for 42 more years, so how close did Raspail really come?) But, I would be very interested in Scott's judgement of "Here are the three most interesting subjective predictions that sort of came true in 2026" and "Here are three more predictions that didn't happen to happen in 2026, but I'd like to call your attention to them."
And in the future if one of the predictions for 2026 comes true in 2030 or whenever, the way Raspail's 1973 prediction more or less came true in 2015, you could call attention to it then.
You are entitled to assert that, but that view seems to be at odds with those of, say, the editors and contributors to Arts & Letters Daily. Over the course of my bookish life, I've read dozens of essays on whether "Brave New World," "1984," or "A Clockwork Orange" turned out to be the better prediction. So, lots of highbrow individuals think writing a novel can be a kind of prediction.
I'm deeply suspicious of the claim that anything going on in Germany in 2015 had a meaningful impact on the US election in 2016. Americans just don't care enough about Europe for there to be big direct effects and it's too short a timeframe for big indirect effects.
LOL, your personality is something else. You are absolutely determined to be as sour and offensive as you can be at all times. In actuality, there are very many Americans that are invested in UK politics. I have known plenty of Trumpers who had high opinions of Nigel Farage, for example. And plenty of lefties who are waiting in high suspense for the downfall of perceived traitor Keir Starmer.
I would (selfishly) like to see some objective operationalization of "will the AI bubble pop", NVDA price would be one metric but even an overall S&P drop of (X) percent from its 2025 high, or some aggregate of hyperscaler stocks
I've seen 'generally accepted definitions' of a market 'correction' being a 10-20% drop in the S&P, and a 'bear market' or 'crash' being 20%+ e.g. https://www.schwab.com/learn/story/market-correction-what-does-it-mean. There are certainly a lot of attempted predictions floating around at the moment, with a consensus that the 'magnificent seven' tech stocks are overvalued, and that a correction must come at some point, whether it's over a short or longer period.
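Those thresholds are easy to operationalize against any index series; a minimal sketch (which index and which lookback window to use would be up to the question author):

```python
def max_drawdown(prices):
    """Largest peak-to-trough decline over the series, as a fraction
    of the running peak."""
    peak, worst = prices[0], 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

def classify(prices):
    """Conventional labels: >=20% drop = bear market, >=10% = correction."""
    dd = max_drawdown(prices)
    if dd >= 0.20:
        return "bear market"
    if dd >= 0.10:
        return "correction"
    return "neither"

classify([90, 100, 95, 80, 75, 85])  # 25% drawdown -> "bear market"
```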
Yes I think a >= 20% drop would be a good threshold. Less than that seems not really important. Keep in mind though even sans AI at all, the base rate for bear markets has been one every ~5-6 years in the last 25 years. For an "AI bubble" to be big enough for me to care about it would need to be on par with the major financial crises of the last quarter century, which according to Claude were:
Interestingly there's some interest in ending this one on the Israeli side. US military aid isn't direct aid (it's a subsidy for US defense firms to sell to Israel), it comes with a lot of strings attached (including a similar amount of support to Egypt, which is fairly adversarial), it's a political liability, and it somewhat harms local competition. It's also just not that big anymore compared to GDP, now that Israel isn't a small low-income country.
This is not true in the event of an attack. That's direct military aid (written into the congressional budget; no need to do anything after the attack).
So I'd like to ask a question along the lines of: The James Webb Space Telescope seems poised to change our model of the cosmos. Will there be a consensus move away from Lambda CDM (cold dark matter) in 2026?
But I don't know how to designate the outcome. How do you quantify consensus? Oh and I'd give this only a 5% chance of happening anyway. Very hidebound these scientists.
Historically, experiments tended to lead to the crisis of a paradigm, not establish a new paradigm on their own.
From WP, it seems that the JWST has observed that some very early (high z) galaxies were brighter than they were predicted to be by the models.
I am not an astrophysicist, and have no feel how much of a challenge that is for Lambda CDM.
I think that you are not wrong to describe the cosmologists as "hidebound". The people currently in charge certainly made their careers with Lambda CDM, and will be reluctant to abandon cherished beliefs. Science advances one funeral at a time, and all that.
However, this reluctance to switch theories is a feature, at least a bit. I would not prefer a world in which the consensus drifted yearly to whatever theory was cool at the moment.
Even if the JWST turns out to be as fatal to Lambda CDM as Michelson–Morley was to the ether theory, it will take time to develop a better theory. I mean, if most of the MOND people had claimed beforehand that their theory predicted brighter early galaxies and would be seriously disfavored if the JWST found them to have the brightness predicted by Lambda CDM, that would certainly make them more respectable.
We already have better theories. They don't have the currency they ought to have because they're coming from data scientists (compression theory), and not out of astrophysics.
Assuming that "the demise of one theory" must come before the next one is silly. Protein folding was dead for 20 years, before one person looked and figured out what the chemists were doing wrong (note: I'm not sure he actually fixed the models, just realized what they were ignoring).
Global mean temperature anomaly for the year compared to the NASA baseline. We may need to bucket it by tenths of a degree Celsius to get binary questions. This may be difficult to actually predict, but we probably need some questions like this to prevent results from saturating the benchmark.
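The bucketing could look something like this (the thresholds, step size, and wording are placeholders for the question author to choose):

```python
def anomaly_threshold_questions(low=1.0, high=2.0, step=0.1):
    """Turn one continuous quantity into a ladder of binary questions:
    'Will the anomaly exceed +X degC?' for each threshold X. Forecasts
    over the ladder recover an approximate distribution over the anomaly."""
    n_steps = round((high - low) / step)
    return [
        f"Will the 2026 global mean temperature anomaly exceed "
        f"+{low + i * step:.1f} degC (vs. the NASA baseline)?"
        for i in range(n_steps + 1)
    ]

anomaly_threshold_questions()  # 11 questions, +1.0 degC through +2.0 degC
```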
I have only the vaguest idea who this guy is, and I don't even know if he's a Catholic (as distinct from "born into a Catholic heritage family", "culturally Catholic" or "lapsed").
So that's a question for his local bishop, whoever that might be.
Second, has he done something to be excommunicated for? Looking up the grounds for excommunication, they seem to be:
"The 1983 Code of Canon Law attaches the penalty of automatic excommunication to the following actions:
Apostates, heretics, and schismatics (can. 1364)
Desecration of the Eucharist (can. 1382)
A person who physically attacks the pope (can. 1370)
A bishop who consecrates another bishop without papal mandate (can. 1387)
A priest who violates the seal of the confessional (can. 1386)
A person who procures an abortion (can. 1397 §2)
Accomplices who were needed to commit an action that has an automatic excommunication penalty (can. 1329)"
So unless he's larping as PZ Myers, I doubt he's desecrated the Eucharist. I don't think he's physically attacked any pope or consecrated any bishops or violated the seal of the confessional. Has he procured abortions, paid for an abortion, helped someone get an abortion, etc.? Maybe he's an apostate, but for a while you had to formally apostatise: you had to write to the bishop and ask to be removed, etc., unlike previously, when simply going around not being a Catholic was grounds enough (e.g. even Joe Biden and Nancy Pelosi are not excommunicated, notwithstanding that Nancy was refused the Eucharist by her bishop). It seems to be more blurry now, since formal defection is no longer "a juridical act":
"The motu proprio Omnium in mentem of 26 October 2009 removed from the canons in question all reference to an act of formal defection from the Catholic Church. Accordingly, "it is no longer appropriate to enter attempts at formal defection in the sacramental records since this juridic action is now abolished."
In late August 2010, the Holy See confirmed that it was no longer possible to defect formally from the Catholic Church.
...Although the act of "formal defection" from the Catholic Church has thus been abolished, public or "notorious" (in the canonical sense) defection from the Catholic faith or from the communion of the Church is of course possible, as is expressly recognized in the 1983 Code of Canon Law. Even defection that is not known publicly is subject to the automatic spiritual penalty of excommunication laid down in canon 1364 of the 1983 Code of Canon Law."
Better: Will the United States intervene to protect Exxon interests in Guyana? (this gets to the heart of the military intervention, and covers many use cases that your original one does not).
SpaceX manages to fill at least 20% of a Starship's propellant capacity from an orbiting depot.
And why do I think it is a good question? Since Elon Musk has said that he intends to send five Starships to Mars in 2026, and each will need to be a lot more than half full of fuel, many people will surmise that 20% will be achievable in the next year or so.
However, thinking about what the proposition entails will lead most sensible people to reject the idea quite strongly.
I'd also be fascinated to know whether anybody would be happy to wager a hundred pounds on the matter.
P.S. for the avoidance of doubt, I don't think any fuel will be transferred from an orbiting depot to a Starship in 2026.
>OpenAI believes that its AI models are getting smarter very quickly. So quickly that CEO Sam Altman says the company is on track to build an intern-level research assistant by September 2026, and a fully capable “AI researcher” by 2028.
Whether the 2026 part of this can be operationalized depends on how transparent or opaque OpenAI chooses to be next year. If it _did_ choose to be sufficiently transparent, I'd phrase this as:
At OpenAI, are at least 75% of programming and AI-experiment tasks that would take a human 2 days to perform being done by AI systems by 9/30/2026?
Another major question is whether incremental learning is successfully added to LLMs in 2026. Maybe operationalized as:
Do any AI labs offer an LLM which changes its weights on a daily basis, based on users' interactions?
Two other unrelated and much less important questions, much easier to track:
Do the 11/2026 elections leave the GOP with a majority in the House?
Do the 11/2026 elections leave the GOP with a majority in the Senate?
What about an item having to do with efforts get some kind of personhood rights for AI? I have in mind things like legal protections for AI, and efforts to legitimize certain rights for AI. Would also include efforts to legitimize certain human-AI relationships — marrying an AI, adopting an AI. Can anyone suggest ways of making outcome crisp and judgable? Legal actions? Introduction of bills? Establishment of congressional committees to consider? Existence of an AI rights organization with more than X members or more than $Y contributions?
Given the difficulty of "registering a business" in certain states (ahem. easy), "personhood" rights for AI are nearly equivalent to "personhood" rights for a business...
I don't understand what you're getting at. AI getting personhood rights such as marriage is a shoo-in in certain states? The law says marriage must be between 2 people -- ergo nobody can legally marry their parakeet or beamer.
No, sorry, I'm being a little facetious and a little confusing at once. The facetious is "an AI can create a business, because face to face contact is not required", and confusing is in saying that "since businesses have personhood status, so too can AIs, easy peasy" (this does not apply to marriages, obviously).
Except that most businesses are not "persons." Only corporations are, and that is because the whole point of incorporating is that the resulting corporation has a legal identity separate from that of the owner(s), which limits the owner's liability. Unlike AI entities, corporations are "persons" by definition, and always have been. https://thelawdictionary.org/corporation/
Well, you are conflating "persons" and "people." "Person" has a very specific meaning in the law. People are "natural persons." Corporations are "juridical persons."
Moreover, the law can distinguish between types of persons. See eg Hague v. Committee for Industrial Organization, 307 U.S. 496 (1939) ["Natural persons, and they alone, are entitled to the privileges and immunities which § 1 of the Fourteenth Amendment secures for "citizens of the United States."]
Yeah, OK. So how does this fit with the question of what sort of changes would need to happen in order for someone to legally marry an AI, or adopt one, or charge one with a crime, or found a business with AI as a partner? What would need to change for those things to be possible, and where should we look to find out whether people are trying to bring about those changes via lawsuits or lobbying or whanot?
I don't think it would! But there is lots of evidence that many people are tending towards seeing them that way, due to ignorance and personal need. And it's in the interest of the AI companies to help this along, because customer loyalty. I'm looking for measures of how powerful this trend is.
How about some motorsport questions, for low-stakes breadth:
Will McLaren be in the top three of the World Constructor's Championship?
Will a manufacturer leave WEC's "Hypercar" class?
For MotoGP riders (premier class only; not Moto2 or Moto3) who miss Grands Prix due to injuries (i.e., excluding cases of "0"), what will the median number of Motorcycle Grands Prix missed due to injuries be, both consecutive Grands Prix missed and season total?
How many fatal crashes will there be at the Isle of Man TT? (Low stakes, compared to war...)
Will the Pikes Peak International Hill Climb use the full course?
McLaren has been dominant in 2025 and was very good for most of 2024, but 2026 will bring extreme regulation changes, and how regulation changes affect team standings is very difficult to forecast; manufacturers keep leaving the fastest class of the World Endurance Championship; MotoGP has some pretty extreme crashes, but fatalities are rare; the Isle of Man TT averages 2 fatal crashes per running (2022 had 5 and 2024 had 0); and the Pikes Peak International Hill Climb takes place on Pikes Peak, in Colorado, so the course is frequently shortened due to bad weather.
The form says that only the first 20 submissions per person will be considered. Is that 20 questions (4 sets of 5), or 20 form submissions (20 submission of 5 questions each)?
Is it uncharitable for me to think that the rationalist-adjacent quest to discover the mysterious Secrets of Forecasting has borne little fruit, and the mission gets more quixotic every day?
Is it wrong to think that superforecasters are either just Lucky People or people predicting the future through means that are impenetrable to our analysis?
Or maybe I'm being a fuddy-duddy and forecasting is just a fun hobby that doesn't need to justify itself through concrete results?
You're correct to think that superforecasters are very smart people, and most people (midwits) suck at metaintelligence. So yes, if you're saying "predicting the future" is done by "means impenetrable to our analysis"... you're kind of tattling on yourself.
The first objections to the COVID-19 mRNA vaccines were raised over thirty years ago. Mr. "I Told You So" is more intelligent than most.
Nah. But it amuses me to shoot across his bow and warn him that he ought to switch things up a bit. Assuming it really IS him. Maybe it's YOU!
Here's the qualities we look for in a Wimbli: extremely frequent poster, drastically right wing, somewhat populist, sometimes rather hard to follow, fond of obscure references and random facts, thick skinned (not easily upset)…and prone to reference war, death and homicide whenever he can. Though not in a directly threatening way. Not threatening to any of US, anyway.
There you have it, Wimbli, if you're reading this somehow. Consider it a list of what NOT to do.
I do have a policy about not hassling the mentally ill. But Wimbli isn't mentally ill, as far as I know, just eccentric.
BTW, you'll be happy to know that your name has been cleared by the investigation. See, I'm not ENTIRELY sure that you can be described as "thick-skinned."
Have you read any of Philip Tetlock's work on forecasting, or Robin Hanson's work on prediction markets? If so, then I'd ask what concretely makes you think it's lucky people, and why, for example, Tetlock's advice on making better predictions doesn't work. If not, then I'd advise you to read them.
1. When will China decide that Russia is weak enough that it can take back some or all of the Amur annexation?
2. When will the AI bubble burst (if it hasn't already done so by the end of 2025)?
3. When will the Epstein files that the FBI is holding be released (bear in mind both Houses have to pass the bill to release them, and Trump would likely refuse to sign the bill, and then 2/3rds of both houses would have to override his veto).
I think there should be a question about China's global influence. Maybe ask how many new Chinese cars will be sold in Germany. If Germans start to buy lots of Chinese cars, that would suggest that Chinese engineering has surpassed Western engineering. (Though I'd also expect the German government to protect its local manufacturers in some way.)
I would also love to see some operationalization of "china go up / china go down" -- economically, since Taiwan issues surely already have good markets
My interest in the future is, of course, all about AI. The best single quantifiable question I can think of for that is: do METR time-horizon trends continue at expected rates?
The METR eval is probably the best single benchmark, and it's the one I look at most closely.
I'm not sure if it's the best measure overall though - it focuses very heavily (exclusively?) on agentic coding. I understand why - the whole recursive self improvement angle. But it doesn't fully capture the challenges of dealing with non-text data - particularly vision and working in real time.
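Operationalizing "the trend continues" means fixing a doubling time up front. METR has reported a doubling time on the order of seven months, but treat the number below as an assumption to be replaced by whatever the latest published estimate is:

```python
def projected_horizon_minutes(current_minutes: float, months_ahead: float,
                              doubling_months: float = 7.0) -> float:
    """Extrapolate the 50%-success task-time horizon, assuming steady
    exponential growth with the given doubling time (an assumption)."""
    return current_minutes * 2 ** (months_ahead / doubling_months)

# A 2-hour horizon today, projected 12 months out at a 7-month doubling time:
projected_horizon_minutes(120, 12)  # ~394 minutes (~6.6 hours)
```

A "trend continues" question would then resolve YES if the measured horizon at the future date lands at or above (or within some tolerance of) this projection.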
My alternative measure would simply be revenue. The point of AI is to be useful, and the best quantifiable measure of that is how much people are willing to pay. Hard to measure for Google, but pretty easy for Anthropic and OpenAI.
Invasion of Taiwan is presumably already on the list?
Zelensky no longer president?
When I read the third paragraph I thought you meant that AI bots would also be sharing ideas for forecast questions. But also, could they?
Now I am thinking why am I wasting time on this and not just asking ChatGPT or Grok.
Yeah I'm just gonna do it. Hopefully not too many people do. Or maybe Scott/Metaculus can just get ChatGPT to dedupe!
Use the magic...Verbalised Sampling... https://arxiv.org/abs/2510.01171v3
Change in composition of Supreme Court. British general election.
Can't see that there is any chance of a UK GE.
New PM via successful leadership challenge, sure.
I'd think a GE is more likely than a successful leadership challenge - any successful leadership challenge will (in part) be headed off by the PM's ability to call a GE. So it depends if - conditional on Starmer's bluff being called - he does indeed take the mutually assured destruction option of calling a GE.
A change in British Prime Minister is very unlikely given the size of their majority, but not completely outside the realm of possibility
I don't follow this at all.
Starmer loses a challenge and then ... refuses to accept it and calls an election?
Or Starmer doesn't even fight a challenge, he just immediately calls an election out of spite?
It wouldn't work under the Lascelles Principles: parliament can't be dissolved if there is a viable majority in parliament, even if the current PM wants an election.
The prospects of a British general election are effectively zero, given the incumbent government has sole discretion over whether to call one (there is no mechanism to force one) and has no incentive to do so, given it would almost certainly lose power.
I’d put it at around 5%. I can see it happening but very, very unlikely.
I agree, although there is a mechanism to force one: a majority could pass legislation.
I think it is unlikely but a failure of the budget would be seen as a vote of no confidence, I expect parliament would be dissolved if no budget could be passed.
Nitpick: calling a probability of a possible event "effectively zero" strikes me as unnecessarily vague, because it just asserts that the probability is small compared to another, unspecified probability.
For some purposes, the probabilities 1e-3 and 1e-10 are effectively the same; for other purposes, they are worlds apart.
If I want to project my budget, then "the expected value of coins I will find on my way to work is less than a cent" is good enough, and I do not care if the probability of finding a dime on the sidewalk is more like 1e-4 or 1e-8. If instead it is the probability of dying in a car accident on my way to work, then I will care a lot.
I will grant you that in the context of forecasting contests, if you ask contestants n questions, you are unlikely to find out if they are well calibrated for probabilities smaller than about 5/n. As most contestants (except LLMs) will not be willing to make thousands of forecasts, this puts a practical lower limit on the probabilities which can be tested. If I ask them whether a particular proton will decay this year, then all the good forecasters will max out on nope, and I will learn nothing.
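The practical limit above can be sketched numerically (a toy calculation, not part of any contest's actual scoring): with n questions, forecasts at 1e-3 and 1e-10 are observationally indistinguishable, because both almost surely produce zero events.

```python
def p_no_events(p: float, n: int) -> float:
    """Probability that an event forecast at probability p never
    occurs across n independent questions."""
    return (1 - p) ** n

# With 100 questions, forecasters who assign 1e-3 and 1e-10 to rare
# events will both almost surely see zero occurrences, so their
# scores cannot separate them:
print(p_no_events(1e-3, 100))   # ~0.905
print(p_no_events(1e-10, 100))  # ~1.0
```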
>the incumbent government has sole discretion over whether to call one (there is no mechanism to force one)
<mildSnark>
Doesn't King Charles III technically have the power to dissolve parliament, although, if I understand correctly, that hasn't been done since 1835...
</mildSnark>
Nuclear warhead detonated in anger.
How would you evaluate the emotions of a nuclear warhead?
Even as I typed the thought crossed my mind!
Nuclear warhead detonated with intention of killing 1 or more humans.
That one human has gotta be Chuck Norris.
Nuclear warhead detonated in Angers. Quel dommage!
Nuclear warhead detonated in anger management class.
Gooz fraba. GOOZ FRABA!!
Odds of us being able to assess this for the current year? (This is me saying: if you can't definitively say when we last used a nuclear warhead, then it's a poor question, which will be hard to answer in the normal course of business, as opposed to people hitting Nuclear Kill Switches)
Not entirely sure what you're trying to say here. I'm pretty sure the last (only) year OP's question would be true for is 1945?
You run a seismograph? You run a radiation detector, in places where we bomb? Question isn't "a city-destroying missile", it's a nuclear bomb. They can be quite small. If you are unable to verify all of the current targets of American materiel (there have been more than a few in the last year), I'd say this question is unanswerable.
I think you have to get pretty heavy into conspiracy theory-land to think that anyone's likely to try to get away with hiding something like that, let alone succeed. There's just too many people who would be likely to be able to notice, and have incentives to reveal it, and not enough to be gained.
How many "trustworthy" people are able to notice? Let's say we decided to, oh, say, nuke Afghanistan (obvious substitutions are obvious). A leadership strike on terrorists.
You'd have to know the coordinates, and establish that we didn't just use a "big conventional bomb" (we'll note there are several places where we are said to have allowed the use of big conventional bombs in "anger"). And, after you did all that, decide to publish this -- which for many countries would be Embarrassing, and also might invite the US doing more bombing (as we've lost plausible deniability).
Russia already refuses to say "we didn't murder that guy" because they know they won't be believed (also, it sounds better if they get the rep that they can murder anyone anywhere).
Many people would like to publicize it *because* it would be embarrassing. And why take the risk when you could just *actually* use a big conventional bomb that nobody would care about?
There are no "big conventional bombs" that anyone who understands bombs will ever mistake for even the smallest existing nuclear weapons.
And there are no nuclear bombs whose detonation will be missed by the people who professionally look for nuclear explosions, except in very niche cases like detonating inside a very large deep cavern that are not consistent with using a weapon "in anger".
I am sadly amused by the extent to which it is broadly believed that nuclear weapons are simultaneously powerful enough to Destroy The World, and not that much more powerful than big conventional weapons. Complete failure to understand scale at both ends.
I assume this means nuclear warhead detonated as an attack rather than as a test.
A third low possibility is nuclear warhead detonated by accident, and we might want to distinguish between a warhead detonated on the owner's territory rather than outside it.
But not a nuclear gravity bomb (dropped from an airplane) or nuclear truck bomb or suitcase bomb? Or did you mean nuclear weapon of any kind?
Academic journals have a fairly clear ranking in any given discipline. SCImago has detailed rankings which are mostly accepted.
One would expect to see AI-written articles in pay-to-publish journals and lower-quality venues already. I would like to see predictions by discipline and journal tier (e.g. top 10, top 100, top quartile, top 50%) for when academic papers fully written by AI will be (i) first published; (ii) regularly published; and (iii) widely accepted as valid results. On a separate axis, one might measure the autonomy of the AI or the involvement of humans. I think the real milestone is when the AI chooses, develops, and publishes a paper independently, but there will be intermediate steps, if only because journals are conservative and will insist on human authorship for a time.
Progress on these measures would allow measurement of a wide range of abilities by the AI (reasoning, data analysis in the mathematical sciences, textual analysis in the humanities, etc.) with workplace currency. If you believe in alignment as a project, it might offer insights into what the AI sees as valuable; interpretation & analysis of differences between human and AI researchers will likely be its own research area.
As a very concrete question suggestion:
'Will a paper largely or entirely written by AI pass peer review and appear in a journal with Q1-ranking on Scimago?'
Likely some slightly less academic wording would be more suitable.
You could make it more concrete by specifying what "largely" means. Look at this question for example: https://www.metaculus.com/questions/31130/major-components-of-aca-repealed-before-2026/. It contains the word "major" which could mean different things to different people. However, if you read the description:
"Resolution Criteria
This question will resolve as Yes if, before January 1, 2026, the United States has enacted one or more bills that changes at least three of the six following components of the Affordable Care Act (ACA) to either repeal or curtail them (see the fine print for fuller details):
Eliminates or reduces the ACA's Medicaid eligibility or federal funding
Eliminates or reduces the premium tax credit eligibility or amount
Eliminates or curtails the individual mandate (by law the individual mandate still exists in the ACA, but has no penalty)
Eliminates or curtails the mandate for certain employers to provide health coverage for employees. Reducing the penalties will also be considered to be relaxing the mandate.
Makes it so that ACA subsidies are no longer limited to plans that satisfy the requirements specified in the ACA, including allowing ACA subsidies to be contributed to health savings accounts or similar account
Eliminates or curtails medical underwriting restrictions
Fine Print
Below find an expanded description of the above six criteria with the criteria used to determine if each has occurred:
One or more of the following changes to Medicaid is implemented
Rescinding or reducing federal Medicaid eligibility below 133% of the federal poverty line (effectively 138% of the federal poverty line)
Rescinding or reducing the federal medical assistance percentage (FMAP) for newly eligible recipients for states which have expanded Medicaid
Reducing or capping the overall amount of Medicaid funding provided under the ACA
Imposing new federal requirements on Medicaid eligibility, such as work requirements, that have the effect of reducing the number of people who are currently eligible for Medicaid
One or more of the following changes is made to premium tax credits
Reducing taxpayer eligibility for the credits (the ACA specified that taxpayers with household incomes ranging between 100% and 400% of the federal poverty line were eligible, so a reduction would be narrowing that range by either increasing the minimum eligibility above 100% or lowering the cap below 400%)
Reducing or eliminating the premium assistance amount specified in the ACA
Imposing additional requirements on premium tax credits that have the effect of reducing the number of people who are currently eligible for premium tax credits
The individual mandate is eliminated (by law the individual mandate still exists in the ACA, but has no penalty)
Eliminating or relaxing the mandate for certain employers to provide health coverage for employees. Reducing the penalties will also be considered to be relaxing the mandate.
Making it so that ACA subsidies are no longer limited to plans that satisfy the requirements specified in the ACA, including allowing ACA subsidies to be contributed to health savings accounts or similar account
One or more of the following changes is made to the ACA's medical underwriting restrictions:
Eliminating or relaxing the ACA's restriction on excluding individuals with pre-existing conditions
Expanding the ability of health insurance issuers to price discriminate or set eligibility requirements based on certain characteristics
The above criteria need not be met in a single bill. The question will still resolve as Yes if multiple bills are enacted that have the combined effect of satisfying the above criteria.
The ACA is lengthy and complex, and Metaculus will use its judgment to assess legislation that has been passed while also relying on characterizations published in credible sources that Metaculus assesses to be knowledgeable and demonstrate sufficient expertise.
The expiration of enhanced subsidies passed in other legislation is immaterial; this question only resolves as Yes if the above criteria are met with respect to the text of the ACA."
It's quite concrete...
I would like to see something along these lines, maybe "will an AI be explicitly credited as an author on a peer-reviewed paper in the top (N) journals in its field"? By explicitly credited I mean the reviewers + editors know the author is an AI, the AI is listed as an official author in the author list, and the AI's contributions meet CRediT criteria.
I think this is a good phrasing, thank you.
Will CRISPR be used to cure a case of HIV?
Will American life expectancy at birth exceed 79 years, on average, by 2026?
According to the CDC, what will the death rate by overdose be in 2026? (If we need a binary question; Under 29 people per 100,000 per year? )
What will be the highest level of commercially available self-driving car in 2026?
Will an AI be declared CEO of a Fortune 500 organization by Dec. 2026?
Will the US workforce participation rate go below 58% by 2026?
Will an AI be recognized as the author of a best-selling book by Dec 2026?
Will a nuclear weapon be detonated in 2026 as a hostile action?
Will the price of gold be over $3,800 USD by 2026?
Will the 2026 global temperature average be greater than +1.5C compared to the 1850-1900 temperature average?
Will at least one peer-reviewed scientific article state, as a finding or explicit claim, that more than 0.5% of the current U.S. population was conceived using embryo selection, by December 31, 2026?
(Criteria: Published in a journal ranked in the top 50% (Q1 or Q2) of its subject category in Clarivate’s Journal Citation Reports)
Will the median home value in the Los Angeles–Long Beach–Anaheim, CA metro area be more than 5% higher on December 31, 2026 than it was on December 31, 2025?
RESOLUTION CRITERIA
This question will be resolved using the Zillow Home Value Index (ZHVI) for the Los Angeles–Long Beach–Anaheim, CA Metro Area, as published by the Federal Reserve Economic Data (FRED) service.
Data series: LAOBPPR
Value for 2025: The ZHVI value reported for 2025-12-31 (or the closest available monthly value if daily data is not available).
Value for 2026: The ZHVI value reported for 2026-12-31 (or the closest available monthly value if daily data is not available).
Resolution rule:
YES = (ZHVI_2026 / ZHVI_2025) > 1.05
NO = otherwise.
Only the ZHVI (FRED series LAOBPPR) will be used; no adjustments will be made for inflation, seasonality, data revisions after resolution, or alternative data sources.
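The resolution rule reduces to a one-line ratio check; a minimal sketch with hypothetical index values and a parametrized threshold (threshold=1.05 corresponds to the headline's "more than 5% higher"):

```python
def resolves_yes(zhvi_2025: float, zhvi_2026: float,
                 threshold: float = 1.05) -> bool:
    """YES iff the index rose more than (threshold - 1) year over year."""
    return zhvi_2026 / zhvi_2025 > threshold

# Hypothetical index values, for illustration only:
assert resolves_yes(800_000, 850_000)        # +6.25% -> YES
assert not resolves_yes(800_000, 830_000)    # +3.75% -> NO
```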
On December 31, 2026, will the ratio of the Los Angeles Zillow Home Value Index (ZHVI) to the U.S. Consumer Price Index for All Urban Consumers (CPI-U, FRED: CPIAUCSL) be higher than on December 31, 2025?
Will the US federal minimum wage exceed $10 by December 31, 2026?
Note: I used ChatGPT5 to help me craft resolution criteria for one of the questions. You're welcome to guess which one. ;-)
The questions are supposed to be about 2026, so most of these wouldn't fit the contest.
They can't be longer term? Okay. Good to know. I'll edit.
I'd include an essay question along the lines of "What is something that will happen in 2026 that isn't asked about in any of the questions?"
You can probably use AI to pick out all the ones that came true in 2026, then you can subjectively pick the most insightful entries.
For example, one of the most consequential events of 2010s was the decision in the late summer of 2015 by Angela Merkel's German government to admit roughly one million young male Muslims, which seemed to then help lead to the populist backlash that brought Brexit and Donald Trump's first election in 2016.
But, so far as I can tell, nobody had predicted Merkel's decision since French novelist Jean Raspail's "The Camp of the Saints" in 1973. For example, it was not included in Philip Tetlock's Superforecaster competition for 2015.
Granted, nobody can objectively judge this kind of question (e.g., Raspail's novel has Hindus rather than Muslims and France rather than Germany, and it didn't happen for 42 more years, so how close did Raspail really come?) But, I would be very interested in Scott's judgement of "Here are the three most interesting subjective predictions that sort of came true in 2026" and "Here are three more predictions that didn't happen to happen in 2026, but I'd like to call your attention to them."
And in the future if one of the predictions for 2026 comes true in 2030 or whenever, the way Raspail's 1973 prediction more or less came true in 2015, you could call attention to it then.
Raspail did not predict anything in 1973. Writing a novel isn't making any kind of prediction.
You are entitled to assert that, but that view seems to be at odds with those of, say, the editors and contributors to Arts & Letters Daily. Over the course of my bookish life, I've read dozens of essays on whether "Brave New World," "1984," or "A Clockwork Orange" turned out to be the better prediction. So, lots of highbrow individuals think writing a novel can be a kind of prediction.
Also, Raspail was explicitly making a prediction, that he hoped to counter with the book!!!
So much for the writers of Arts & Letters Daily. 1984 was about 1948, not an actual prediction of what would happen to Airstrip One/Great Britain.
Tell that to the author of Snow Crash.
LOL, Camp of the Saints is about the horrors of HINDU IMMIGRATION?
I wasn't especially impressed with what I had heard but Christ, I didn't realize the premise was THAT stupid.
I'm deeply suspicious of the claim that anything going on in Germany in 2015 had a meaningful impact on the US election in 2016. Americans just don't care enough about Europe for there to be big direct effects and it's too short a timeframe for big indirect effects.
Is there a reason to limit it to US politics?
UK and French politics are transparent enough to have useful prediction markets.
I suspect it's less unknowability and mostly that no one cares.
bonquers 🤯
LOL, your personality is something else. You are absolutely determined to be as sour and offensive as you can be at all times. In actuality, there are very many Americans that are invested in UK politics. I have known plenty of Trumpers who had high opinions of Nigel Farage, for example. And plenty of lefties who are waiting in high suspense for the downfall of perceived traitor Keir Starmer.
If by "invested in UK Politics" you mean convincing the PM to be our hatchetman... sure, I guess.
Is that you, Wimbli?
Depends if he's English himself. "Sour and offensive" is something of our national character. Doubly so if he has the misfortune of being a Scot.
Any country's politics are fine! Things that are of broad interest and global significance of any kind are what we're after.
Just yes/no questions or also quantitative ones (e.g. what will the approval rating be)
NVDA closes below $80 on any day in 2026
I would (selfishly) like to see some objective operationalization of "will the AI bubble pop", NVDA price would be one metric but even an overall S&P drop of (X) percent from its 2025 high, or some aggregate of hyperscaler stocks
I've seen 'generally accepted definitions' of a market 'correction' being a 10-20% drop in the S&P, and a 'bear market' or 'crash' being 20%+ e.g. https://www.schwab.com/learn/story/market-correction-what-does-it-mean. There are certainly a lot of attempted predictions floating around at the moment, with a consensus that the 'magnificent seven' tech stocks are overvalued, and that a correction must come at some point, whether it's over a short or longer period.
Yes, I think a >= 20% drop would be a good threshold. Less than that seems not really important. Keep in mind, though, that even sans AI the base rate for bear markets has been one every ~5-6 years over the last 25 years. For an "AI bubble" to be big enough for me to care about, it would need to be on par with the major financial crises of the last quarter century, which according to Claude were:
dot-com bubble, 49% drop
2008 financial crisis, 57% drop
COVID-19, 34% drop
2022 inflation, 25% drop
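The ">= 20% drop from a high" operationalization above is just a maximum-drawdown check; a minimal sketch with toy price series (not real market data):

```python
def max_drawdown(prices: list[float]) -> float:
    """Largest peak-to-trough decline as a fraction of the running peak."""
    peak, worst = prices[0], 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

def is_bear_market(prices: list[float], threshold: float = 0.20) -> bool:
    """True iff the series fell at least `threshold` from any prior high."""
    return max_drawdown(prices) >= threshold

# A series that falls 25% from its high clears the bear-market bar:
assert is_bear_market([100, 120, 90, 95])        # drawdown = 0.25
assert not is_bear_market([100, 110, 100, 115])  # drawdown ~ 0.09
```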
I'd go with worldwide depression. 60% of the American stock market is in just a handful of companies.
Israel no longer gets military aid from the US?
Interestingly, there's some interest in ending this on the Israeli side. US military aid isn't direct aid (it's a subsidy for US defense firms to sell to Israel); it comes with a lot of strings attached (including a similar amount of support to Egypt, which is fairly adversarial), it's a political liability, and it somewhat harms local competition. It's also just not that big anymore relative to GDP, now that Israel isn't a small low-income country.
https://www.i24news.tv/en/news/israel/diplomacy-defense/artc-israel-considers-a-gradual-reduction-of-us-military-aid
This is not true in the event of an attack. That's direct military aid (written into the congressional budget -- no need to do anything after the attack).
So I'd like to ask a question along the lines of: The James Webb Space Telescope seems poised to change our model of the cosmos. Will there be a consensus move away from Lambda CDM (Lambda cold dark matter) in 2026?
But I don't know how to designate the outcome. How do you quantify consensus? Oh, and I'd give this only a 5% chance of happening anyway. Very hidebound, these scientists.
Historically, experiments tended to lead to the crisis of a paradigm, not establish a new paradigm on their own.
From WP, it seems that the JWST has observed that some very early (high z) galaxies were brighter than they were predicted to be by the models.
I am not an astrophysicist, and have no feel how much of a challenge that is for Lambda CDM.
I think that you are not wrong to describe the cosmologists as "hidebound". The people currently in charge certainly made their careers with Lambda CDM, and will be reluctant to abandon cherished beliefs. Science advances one funeral at a time, and all that.
However, this reluctance to switch theories is a feature, at least a bit. I would not prefer a world in which the consensus drifted yearly to whatever theory was cool at the moment.
Even if the JWST proves as fatal to Lambda CDM as Michelson-Morley was to the ether theory, it will take time to develop a better theory. I mean, if most of the MOND people had claimed beforehand that their theory predicted brighter early galaxies and would be seriously disfavored if the JWST found them to have the brightness predicted by Lambda CDM, that would certainly make them more respectable.
We already have better theories. They don't have the currency they ought because they're coming from data scientists (compression theory), and not out of astrophysics.
Assuming that "the demise of one theory" must come before the next one is silly. Protein folding was dead for 20 years, before one person looked and figured out what the chemists were doing wrong (note: I'm not sure he actually fixed the models, just realized what they were ignoring).
Oh, MOND does predict brighter early galaxies (see the Triton Station blog), but I'd just ask if Lambda CDM will fall out of favor.
Edit: I should have said MOND did predict them. The paper is by Sanders and I think the date is 1998. I can find the exact paper if anyone is interested.
Global mean temperature anomaly for the year compared to the NASA baseline. May need to bucket it by tenths of a degree Celsius to get binary questions. This may be difficult to actually predict, but we probably need some questions to prevent results from saturating the benchmark?
Will Russell Hogg turn a bunch of bananas into gelatin? It sounds silly, but it has serious implications.
Is this a Steins;Gate reference?
El Psy Kongroo
https://www.egscomics.com/
Top Secret!! Wait, has thewowser infiltrated the comments on behalf of the Organisation??? :(
Will Nick Fuentes be excommunicated by the Catholic Church?
I have only the vaguest idea who this guy is, and I don't even know if he's a Catholic (as distinct from "born into a Catholic heritage family", "culturally Catholic" or "lapsed").
So that's a question for his local bishop, whomever that might be.
Second, has he done something to be excommunicated for? Looking up the grounds for excommunication, they seem to be:
https://www.catholic.com/qa/why-and-how-one-is-excommunicated
"The 1983 Code of Canon Law attaches the penalty of automatic excommunication to the following actions:
Apostates, heretics, and schismatics (can. 1364)
Desecration of the Eucharist (can. 1382)
A person who physically attacks the pope (can. 1370)
A bishop who consecrates another bishop without papal mandate (can. 1387)
A priest who violates the seal of the confessional (can. 1386)
A person who procures an abortion (can. 1397 §2)
Accomplices who were needed to commit an action that has an automatic excommunication penalty (can. 1329)"
So unless he's larping as PZ Myers, I doubt he's desecrated the Eucharist. I don't think he's physically attacked any pope or consecrated any bishops or violated the seal of the confessional. Has he procured abortions, paid for an abortion, helped someone get an abortion, etc.? Maybe he's an apostate, but you have to formally apostatise now; you had to write to the bishop and ask to be removed, etc., unlike previously, when going around not being a Catholic was grounds enough (e.g. even Joe Biden and Nancy Pelosi are not excommunicated, notwithstanding that Nancy was refused the Eucharist by her bishop). It seems to be more blurry now, since formal defection is no longer "a juridical act":
https://en.wikipedia.org/wiki/Formal_act_of_defection_from_the_Catholic_Church
"The motu proprio Omnium in mentem of 26 October 2009 removed from the canons in question all reference to an act of formal defection from the Catholic Church. Accordingly, "it is no longer appropriate to enter attempts at formal defection in the sacramental records since this juridic action is now abolished."
In late August 2010, the Holy See confirmed that it was no longer possible to defect formally from the Catholic Church.
...Although the act of "formal defection" from the Catholic Church has thus been abolished, public or "notorious" (in the canonical sense) defection from the Catholic faith or from the communion of the Church is of course possible, as is expressly recognized in the 1983 Code of Canon Law. Even defection that is not known publicly is subject to the automatic spiritual penalty of excommunication laid down in canon 1364 of the 1983 Code of Canon Law."
https://www.ncronline.org/news/nancy-pelosis-communion-ban-be-resolved-vatican
You can do a frickin' lot of things and still be considered a Catholic in good standing (maybe a very bad Catholic, but still a Catholic).
"I don't like the guy" or "surely he has violated at least one technicality of this religion" are not grounds for excommunication.
That's interesting. Why is getting an abortion considered more heinous than, say, murder?
Will Democrats have a majority in both houses of Congress after the 2026 midterms?
Will the United States intervene militarily (troops on the ground) in Venezuela?
Alternatively, will the United States bomb any target inside Venezuelan territory?
Better: Will the United States intervene to protect Exxon interests in Guyana? (this gets to the heart of the military intervention, and covers many use cases that your original one does not).
SpaceX manages to fill at least 20% of a Starship's propellant capacity from an orbiting depot.
And why do I think it is a good question? Since Elon Musk has said that he intends to send five Starships to Mars in 2026, and each will need to be a lot more than half full of fuel, many people will surmise that 20% will be achievable in the next year or so.
However, thinking about what the proposition entails will lead most sensible people to reject the idea quite strongly.
I'm also fascinated to know if anybody would be happy to wager a hundred pounds on the matter?
P.S. for the avoidance of doubt, I don't think any fuel will be transferred from an orbiting depot to a Starship in 2026.
How much housing will be built in SF in 2026?
>OpenAI believes that its AI models are getting smarter very quickly. So quickly that CEO Sam Altman says the company is on track to build an intern-level research assistant by September 2026, and a fully capable “AI researcher” by 2028.
( from https://ositcom.com/blog/sam-altman-says-openai-will-build-a-real-ai-researcher-by-2028 )
Whether the 2026 part of this can be operationalized depends on how transparent or opaque OpenAI chooses to be next year. If it _did_ choose to be sufficiently transparent, I'd phrase this as:
At OpenAI, are at least 75% of programming and AI-experiment tasks that would take a human 2 days to perform being done by AI systems by 9/30/2026?
Another major question is whether incremental learning is successfully added to LLMs in 2026. Maybe operationalized as:
Do any AI labs offer an LLM which changes its weights on a daily basis, based on users' interactions?
Two other unrelated and much less important questions, much easier to track:
Do the 11/2026 elections leave the GOP with a majority in the House?
Do the 11/2026 elections leave the GOP with a majority in the Senate?
What about an item having to do with efforts to get some kind of personhood rights for AI? I have in mind things like legal protections for AI, and efforts to legitimize certain rights for AI. It would also include efforts to legitimize certain human-AI relationships — marrying an AI, adopting an AI. Can anyone suggest ways of making the outcome crisp and judgable? Legal actions? Introduction of bills? Establishment of congressional committees to consider them? Existence of an AI rights organization with more than X members or more than $Y in contributions?
Given the difficulty of "registering a business" in certain states (ahem. easy), "personhood" rights for AI are nearly equivalent to "personhood" rights for a business...
I don't understand what you're getting at. AI getting personhood rights such as marriage is a shoo-in in certain states? The law says marriage must be between 2 people -- ergo nobody can legally marry their parakeet or beamer.
No, sorry, I'm being a little facetious and a little confusing at once. The facetious is "an AI can create a business, because face to face contact is not required", and confusing is in saying that "since businesses have personhood status, so too can AIs, easy peasy" (this does not apply to marriages, obviously).
Except that most businesses are not "persons." Only corporations are, and that is because the whole point of incorporating is that the resulting corporation has a legal identity separate from that of the owner(s), which limits the owner's liability. Unlike AI entities, corporations are "persons" by definition, and always have been. https://thelawdictionary.org/corporation/
https://en.wikipedia.org/wiki/Juridical_person
But surely in some sense they are not persons. Marriage is by law only between 2 people, and people aren’t allowed to marry corporations, right?
Well, you are conflating "persons" and "people." "Person" has a very specific meaning in the law. People are "natural persons." Corporations are "juridical persons."
Moreover, the law can distinguish between types of persons. See eg Hague v. Committee for Industrial Organization, 307 U.S. 496 (1939) ["Natural persons, and they alone, are entitled to the privileges and immunities which § 1 of the Fourteenth Amendment secures for "citizens of the United States."]
Yeah, OK. So how does this fit with the question of what sort of changes would need to happen in order for someone to legally marry an AI, or adopt one, or charge one with a crime, or found a business with AI as a partner? What would need to change for those things to be possible, and where should we look to find out whether people are trying to bring about those changes via lawsuits or lobbying or whanot?
A legislature would have to pass a law permitting it. Not sure why that would happen unless AI became sentient.
Ugh. Why would this be good?
The outcome wouldn't (AI rights without alignment obviously leads to bad places due to their ease of duplication), but it might be worth predicting.
I don’t think it would! But there is lots of evidence many people are tending towards seeing them that way, due to ignorance and personal need. And it’s in the interest of the Ai companies to help this along, because customer loyalty. I’m looking for measures of how powerful this trend is.
Ah fair enough. Sorry I lost track of the context.
They stone you when you're walking down the street
They stone you when you're tryna keep your feet
They stone you when you're riding in your car
They stone you when you're playing your guitar
Oh I would not feel so all alone
Everybody must get stoned
Cannabis impairs predictive processing.
Indeed, though mostly I just didn't read the post before reading the comments.
How about some motorsport questions, for low-stakes breadth:
Will McLaren be in the top three of the World Constructors' Championship?
Will a manufacturer leave WEC's "Hypercar" class?
For MotoGP riders (premier class only; not Moto2 or Moto3) who miss Grands Prix due to injury (i.e., excluding riders who miss zero races), what will the median number of Grands Prix missed be, both as consecutive races missed and as a season total?
How many fatal crashes will there be at the Isle of Man TT? (Low stakes, compared to war...)
Will the Pikes Peak International Hill Climb use the full course?
McLaren has been dominant in 2025 and was very good for most of 2024, but 2026 will bring extreme regulation changes, and how regulation changes affect team standings is very difficult to forecast; manufacturers keep leaving the fastest class of the World Endurance Championship; MotoGP has some pretty extreme crashes, but fatalities are rare; the Isle of Man TT averages 2 fatal crashes per running (2022 had 5 and 2024 had 0); and the Pikes Peak International Hill Climb takes place on Pikes Peak, in Colorado, so the course is frequently shortened due to bad weather.
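To make the resolution of the MotoGP median question concrete, here is a minimal sketch of how it could be scored, using made-up rider data (all rider names and numbers are hypothetical, purely for illustration):

```python
import statistics

# Hypothetical season totals: Grands Prix missed due to injury, per rider.
# Riders who missed zero races are excluded, per the question's wording.
gps_missed = {
    "Rider A": 0,
    "Rider B": 3,
    "Rider C": 1,
    "Rider D": 0,
    "Rider E": 6,
}

injured_totals = [n for n in gps_missed.values() if n > 0]  # [3, 1, 6]
median_missed = statistics.median(injured_totals)  # median of [1, 3, 6]
print(median_missed)  # -> 3
```

The same filter-then-median step would be run separately on the "consecutive races missed" figures to resolve the second half of the question.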
The form says that only the first 20 submissions per person will be considered. Is that 20 questions (4 sets of 5), or 20 form submissions (20 submission of 5 questions each)?
Will OpenAI, Anthropic, Google, xAI, or Meta claim a perfect score on the IMO?
Will the U.S. Supreme Court rule that rent control is unconstitutional by 2040?
Here's another subjective question: What's a long run trend that becomes much better recognized in 2026?
Is it uncharitable for me to think that the rationalist-adjacent quest to discover the mysterious Secrets of Forecasting has borne little fruit, and the mission gets more quixotic every day?
Is it wrong to think that superforecasters are either just Lucky People or people predicting the future through means that are impenetrable to our analysis?
Or maybe I'm being a fuddy-duddy and forecasting is just a fun hobby that doesn't need to justify itself through concrete results?
You're correct to think that superforecasters are very smart people, and most people (midwits) suck at metaintelligence. So yes, if you're saying "predicting the future" is done by "means impenetrable to our analysis"... you're kind of tattling on yourself.
The first objections to the COVID-19 mRNA vaccines were raised over thirty years ago. Mr. "I Told You So" is more intelligent than most.
I'm so mystified by your comments. I have no idea what you are talking about half of the time. Are you SURE you aren't Wimbli?
Do you think they would give an honest answer?
Nah. But it amuses me to shoot across his bow and warn him that he ought to switch things up a bit. Assuming it really IS him. Maybe it's YOU!
Here's the qualities we look for in a Wimbli: extremely frequent poster, drastically right wing, somewhat populist, sometimes rather hard to follow, fond of obscure references and random facts, thick skinned (not easily upset)…and prone to reference war, death and homicide whenever he can. Though not in a directly threatening way. Not threatening to any of US, anyway.
There you have it, Wimbli, if you're reading this somehow. Consider it a list of what NOT to do.
I thought you of all people would have a policy against hassling the mentally ill, but obviously I'm not in a position to judge.
I do have a policy about not hassling the mentally ill. But Wimbli isn't mentally ill, as far as I know, just eccentric.
BTW, you'll be happy to know that your name has been cleared by the investigation. See, I'm not ENTIRELY sure that you can be described as "thick-skinned."
He is. Like a dog returning to his vomit, he can't keep away from ACX comments.
Scads of precise but uncited statistics? Magic 8 Ball says “Possibly so”
Have you read any of Philip Tetlock's work on forecasting or Robin Hanson's work on prediction markets? If so, then I'd ask what concretely makes you think it's lucky people, and why, for example, Tetlock's advice on making better predictions doesn't work. If not, then I'd advise you to read them.
1. When will China decide that Russia is weak enough that it can take back some or all of the Amur annexation?
2. When will the AI bubble burst (if it hasn't already done so by the end of 2025)?
3. When will the Epstein files that the FBI is holding be released (bear in mind both Houses have to pass the bill to release them, and Trump would likely refuse to sign the bill, and then 2/3rds of both houses would have to override his veto).
Some ai ones:
Will OpenAI be publicly traded?
What will its market cap be?
NVIDIA market cap?
Will any LLM finish Pokemon? (Probably needs clarification)
Will there be another iteration of the AI trading contest (nof1.ai or similar) where all the frontier models come out positive by the end?
Will there be another government shutdown in 2026 that lasts even longer than the recent record of 43 days?
I think there should be a question about China's global influence. Maybe ask how many new Chinese cars will be sold in Germany. If Germans start to buy lots of Chinese cars, that would suggest Chinese engineering has surpassed Western engineering. (Though I also expect the German government to protect its local manufacturers in some way.)
I would also love to see some operationalization of "china go up / china go down" -- economically, since Taiwan issues surely already have good markets
Expecting the German government to protect its manufacturers? That's entirely counter to what it has been doing for the past couple of years.
My interest in the future is, of course, all about AI. The best single quantifiable question I can think of for that is: will METR time-horizon trends continue at their expected rates?
The METR eval is probably the best single benchmark, and it's the one I look at most closely.
I'm not sure if it's the best measure overall, though. It focuses very heavily (exclusively?) on agentic coding. I understand why: the whole recursive self-improvement angle. But it doesn't fully capture the challenges of dealing with non-text data, particularly vision and working in real time.
My alternative measure would simply be revenue. The point of AI is to be useful, and the best quantifiable measure of that is how much people are willing to pay. Hard to measure for Google, but pretty easy for Anthropic and OpenAI.
There is little chance that AI will be able to write a decent book unaided in 2026. A better formulation of the question is:
Will the author of a bestselling book in 2026 openly admit to co-writing the entire book with AI?