242 Comments
deleted Jan 30 · edited Jan 30
Comment deleted
Expand full comment

It's hard to say that Trump is 'crazier', because they've been calling him crazy for eight years solid at this point. Ditto with 'he's a rapist, a fascist, a fraudster, and he kicks one-eyed, three-legged puppies into the howling storm with a villainous laugh'.

At this point, if there was a genuine news clip showing Trump speaking at a campaign rally where he declared he was going to invade Poland the day after he wins the election, I'd shrug and go "*Only* Poland? All the online hyperventilating told me he was going to invade Hungary, Austria, and Greenland as well".

Expand full comment

Biden's campaign managed to hide and stage-manage him well enough in 2020 that voters didn't appreciate his significant oldness. The image that most voters had of Biden was formed by eight years of working under Obama, and that was sticky enough to survive as long as the voters weren't given many new data points. But you can't hide a president as much as you need to for that image to persist.

Expand full comment

If you are in London, we are doing a prediction market dating show in support of manifold.love on Feb 10th: https://lu.ma/qp9q3tmy

Expand full comment

Why would anyone bet on a Paperclipalypse outcome? If that outcome happens, definitionally you won't be around to collect.

Expand full comment

Eh, easy to construct a bet around that. I'll give you x today, you owe me y times x + inflation payable in z years.

Expand full comment

Does the Metaculus market actually work that way?

If you wanted to make a prediction market that worked that way, how would you handle the problem of people being unable to pay in Z years?

Expand full comment

I don't think it does. You'd need some sort of escrow or collateral.

Expand full comment

Doesn't escrow or collateral defeat the purpose of paying them early?

Expand full comment

Could be something like a lien on real estate. Practical considerations always get in the way of trying to bet against the rapture.

Expand full comment
Jan 30 · edited Jan 30

Thinking briefly about the issue:

A) Metaculus should start automatically issuing new users a set amount of play money each month and allow users to bet their future allotment (much like sports teams can trade future draft picks).

B) Have some financial-instrument-like way for people to trade current and future play money in parallel on the site, establishing a consistent discount rate so current and future "dollars" can be interchanged. When betting in a future market, you'd convert to the currency applicable on that bet's resolution date. This should happen behind the scenes, but roughly: if $10 today is worth $100 in 2050, you only need to tie up $10 today for a $100 bet resolving in 2050.
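The conversion in (B) is ordinary compound discounting. A minimal sketch, assuming the $10-for-$100-in-2050 ratio from the example above and a 26-year horizon (a 2024 start is my assumption):

```python
def present_value(future_amount, rate, years):
    """Play money needed today to cover a bet paying `future_amount` at resolution."""
    return future_amount / (1 + rate) ** years

def implied_rate(present_amount, future_amount, years):
    """Annual discount rate implied by a present/future pair."""
    return (future_amount / present_amount) ** (1 / years) - 1

# $10 today trading for $100 of 2050-money implies roughly a 9.3% annual rate,
r = implied_rate(10, 100, 26)
# so a $100 bet resolving in 2050 ties up only ~$10 of today's balance.
stake = present_value(100, r, 26)
```

At that implied rate, longer-dated bets tie up exponentially less of today's balance, which is the point of making current and future "dollars" interchangeable.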

Expand full comment

Metaculus isn't a market, it's just a prediction aggregator.

Expand full comment

Isn't "inflation" here infinity if Paperclipalypse happens? What can you give me then that is as good for me in that world as $50 is good for me right now? (Let alone better.)

Expand full comment

The idea is that it's free money for the paperclip believer, who won't need to pay back anything if he's right.

Expand full comment

You'll be the richest paperclip in the pile!

Expand full comment

Instead of common steel, you'll be the solid gold paperclip!

Expand full comment

There's no betting on Metaculus; forecasters just receive a score and accumulate a track record and medals. For questions with biased incentives due to things like extinction, we encourage forecasters to predict their true beliefs.

Expand full comment

It's all play money anyway.

And we can imagine apocalyptic scenarios where humanity doesn't go completely extinct ("just" a 99.99% reduction in human biomass would be considered apocalyptic by me personally).

Expand full comment

Maybe I can interest you on Paperclipalypse insurance? Low low premiums!

Expand full comment

Real-money prediction markets indeed did relatively poorly in 2022.

The further past shows a more complex story -- bettors were running slightly ahead of 538 from 2016 to 2020. With 2022 added, 538 gained a slight edge. That chart and more are here, for anyone curious: https://www.maximumtruth.org/p/deep-dive-on-predicting-elections

Expand full comment

Maybe I missed if it was mentioned in that discussion, but I'm assuming all of this is based on predictions on the eve of the election?

It doesn't seem like there's much analysis anywhere of how good anyone is at predicting 9-10 months out. Just some articles basically saying, "Unless it's a landslide, you can't do it based on polls."

Expand full comment

That's correct, they're all looking at right before the election.

This study is the only one I've seen that looked at a longer window (comparing PredictIt and The Economist's statistical model), and it found that PredictIt was better earlier on: https://arxiv.org/pdf/2102.04936.pdf (see page 8)

Expand full comment

Thanks. Unfortunate that there's only a single election to look at. But intuitively it makes sense -- the further back you are, the less useful polls are, the more value bettors can add by second-guessing them based on one consideration or another. The closer in time you are, the more value there is to the exercise of analyzing the polls in a systematic way.

Expand full comment

Agreed, good points.

Expand full comment

This was a fascinating read! Thanks for doing all that data collection and analysis.

Expand full comment

I'm not familiar with the particulars here but surely this is misleading:

> Nate Silver is only one person, he has only one area of expertise, and you can’t hire him to predict random things for you (unless you’re rich and he’s bored). If Manifold can apply only-slightly-sub-Nate-Silver levels of analysis at scale to arbitrary topics, that’s a big deal.

Manifold is pretty good at elections *because* bettors defer to models like Nate's. Doing well on elections is little evidence for doing well on arbitrary topics. [Edit: the previous sentence refers to Manifold, not Nate.]

Expand full comment

I don't think that was meant to imply that Nate Silver would be good at lots of other things (the phrase "he has only one area of expertise" pretty directly contradicts it) and instead is meant to evoke the level of effort and skill that Nate brings to this one area, and imagine you had slightly lower than that in other areas.

It's like when someone says "He's the LeBron James of Excel spreadsheets": they aren't implying that LeBron is preternaturally good at Excel.

Expand full comment

How do we know LeBron isn't secretly an accounting whiz? 😀

Expand full comment

He might be, but his opportunity costs are too high: his other talents crowd out the time you could get him to spend on your Excel problems for a commensurate fee.

Expand full comment

I think this is true (Jack's writeup discusses how he personally moved the Manifold forecast closer to 538) but I also do think "successfully identifying the right people to defer to" is a major good feature of markets! Markets provide a mechanism for 1) people like Jack to gain influence over time by being consistently correct and 2) disproportionately spend that influence on questions they have more context on.

Or another frame: you already know Nate Silver is good, but how do you know if blogger Alice vs Bob is the Nate Silver of some new field you're not familiar with? I'd guess that "whoever's views the prediction markets weigh more" would be a decent heuristic.

Expand full comment

I think your last plot might be related to your fourth.

That is, now that Trump is looking reasonably likely to win the next US election, both Russia and Ukraine are unlikely to make a peace deal until that uncertainty (both whether he wins, and what his actual Ukraine policy will be) is resolved.

Expand full comment

If the prediction markets are overvaluing a Trump win, perhaps that represents a kind of hedging where people are willing to take a loss if Biden wins because they think their lives will be good but want to gain some cash if Trump wins to make up for the worsening of their lives.

Expand full comment

I'm not even American and I have bets down on Trump for precisely this reason. (Reversal of Biden-era climate policy, the likely pulling of meaningful support for Ukraine, and the further erosion of liberalism in the world's largest democracy by 'Project 2025' guff are the three main reasons the whole world would suffer from Trump's re-election.)

Expand full comment
deleted Jan 30 · edited Jan 30
Comment deleted
Expand full comment

> I've got the super duper sads for you, I know how hard it must have been . . . having to read some mean tweets from Trump back in the day?

Trump literally attempted a coup d'etat on January 6, pressuring Mike Pence to throw out the electoral college votes and send it to the House in order to steal the Presidency. That's why his supporters were chanting "Hang Mike Pence". You can read more on this by googling "Eastman memos".

Ohh, but Joe Biden is doing "lawfare" to "impede democracy" because his justice department is prosecuting a person who tried to prevent the peaceful transfer of power and his co-conspirators.

Expand full comment
deleted Jan 30
Comment deleted
Expand full comment

Here, let me help:

> The Eastman memos, also known as the "coup memo",[6][7] are documents by John Eastman, an American law professor retained by then-President Donald Trump advancing the fringe legal theory that a U.S. Vice President has unilateral authority to reject certified State electors. This would have the effect of nullifying an election in order to produce an outcome personally desired by the Vice President, such as a result in the Vice President's own party's favor, including retaining himself as Vice President, or if the Vice President is himself the presidential candidate, then to unilaterally make himself president.

Oh, wait, that's Wikipedia. Must be a left-wing conspiracy from the loony left with its paranoid delusions!

Expand full comment

> Betting sites favored Biden to win all swing states, but Trump to win overall.

That makes it sound like there was one bet on whether Biden would win all swing states, but I think what really happened was a bunch of bets, one per state. If so, then FWIW, it wouldn't be inconsistent to think that Biden had better odds in each individual swing state but worse odds overall (insofar as he would have needed to win more swing states than Trump).

Expand full comment

It might not be logically impossible, but it seems like a very bad bet; swing states are highly correlated in which way they swing.
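To make that concrete, here's a toy Monte Carlo (all parameters invented for illustration): when states share a large national swing, the probability of winning any individual state and the probability of winning a majority of them come out nearly identical, so "favored in each state but underdog overall" would require near-independence.

```python
import random

random.seed(0)

N_SIMS = 20000
N_STATES = 5
LEAN = 1.0        # each state leans 1 point toward candidate A
SHARED_SD = 3.0   # national swing shared by all states (drives correlation)
LOCAL_SD = 1.0    # state-specific noise

state_wins = [0] * N_STATES
majority_wins = 0
for _ in range(N_SIMS):
    shift = random.gauss(0, SHARED_SD)   # same national shift hits every state
    won = 0
    for i in range(N_STATES):
        if LEAN + shift + random.gauss(0, LOCAL_SD) > 0:
            state_wins[i] += 1
            won += 1
    if won > N_STATES // 2:
        majority_wins += 1

per_state = sum(state_wins) / (N_SIMS * N_STATES)
majority = majority_wins / N_SIMS
# With strongly correlated states, per_state and majority are nearly equal
# (both around 0.62 here); independence would push majority well above per_state.
```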

Expand full comment

Also iirc Biden actually needed fewer swing states than Trump to win, although this heavily depends on which states you consider swing states

Expand full comment

Yes, I had that thought, but it's as you say. The definitions are arbitrary, though some are more arbitrary than others. Realistically, there are:

1. Very safe states that won't flip unless there's a 49-state-type historic landslide, which we may never see again.

2. States that could conceivably flip, and that went the other way in recent memory, so they could be called "swing states", but we're pretty confident we know which way they'll go this time.

3. States that could have a high chance to actually be tipping points, which is only ever going to be a very small number of states in any given election, maybe 3-4.

The D's have the clear advantage in every election when it comes to #1. I think NY+CA have more votes than all the R states in this category put together. Unlike those states, TX is a state that *could* flip. Though this is another way of saying that R's are the party that can realistically win despite losing the popular vote, because so many D votes are clustered in those two states.

Expand full comment

I just did an arbitrage strategy earlier today of buying Biden for 39c on Polymarket and Trump for 47c on PredictIt. This is almost guaranteed to turn 86c into $1.

The only cons are:

* Maybe one of them dies/goes to jail/drops out (but that might boost the odds of the surviving candidate)

* You need to fund an account on PredictIt and Polymarket

* Annoyance of reporting any gains on your taxes (is this gambling winnings or capital gains?) or a small risk of getting punished for not reporting

* Opportunity cost of not buying the S&P500 instead

* Max bet of $850 on PredictIt
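For what it's worth, the arithmetic of the strategy above, as a sketch (prices as quoted in the comment; it ignores the fees and frictions listed in the cons):

```python
def two_way_arb(price_a, price_b, payout=1.0):
    """Buy opposite candidates on two venues; if either of the two wins,
    exactly one contract pays out `payout`."""
    cost = price_a + price_b
    profit = payout - cost
    return cost, profit, profit / cost

# Biden at 39c on Polymarket, Trump at 47c on PredictIt
cost, profit, ret = two_way_arb(0.39, 0.47)
# cost = 0.86, guaranteed profit = 0.14 per contract pair (~16% return),
# provided one of the two candidates actually wins.
```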

Expand full comment

I was going to say that 14% doesn't seem like too much of an overestimate that at least one of them dies or drops out - but I realized that they *each* need a 14% probability of doing so, unless doing so leads to nearly 100% chance that their party wins.
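One way to frame it: the pair of bets pays $1 unless the eventual winner is someone other than those two, so the 14c margin is compensation for exactly that tail. A sketch (the 5% figure is an invented illustration, not an estimate):

```python
def arb_expected_value(cost, p_other_wins, payout=1.0):
    """EV of holding both sides when a third candidate wins with probability
    p_other_wins, in which case both contracts expire worthless."""
    return (1 - p_other_wins) * payout - cost

# At a combined cost of 86c, the strategy breaks even exactly when the chance
# that neither Trump nor Biden wins reaches 14%:
ev_at_break_even = arb_expected_value(0.86, 0.14)
# If that tail is only 5%, the expected edge is 9c per contract pair:
ev_if_tail_is_5pct = arb_expected_value(0.86, 0.05)
```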

Expand full comment

This is a great way to minimize losses. You can also find different odds on Futuur and use this to your advantage: https://futuur.com/

Expand full comment

I was unfamiliar with your site. It seems the search feature doesn't work that great. This market exists:

https://futuur.com/q/167614/will-the-sixth-book-in-george-rr-martins-a-song-of-ice-and-fire-saga-be-published-by-the-end-of-2024

But if I type into the search bar "George R. R. Martin" it fails to find it.

Expand full comment

Hey TGGP, welcome to Futuur!

Thanks for reporting this bug to us, our developers are aware of this problem and are already fixing it.

All feedback and suggestions are welcome; let me know if you have any other comments. Thank you very much!

Expand full comment

Ah, I'm guessing it's because of the space between the 'R's and my putting it in quotation marks.

Expand full comment

Betting "Yes Biden" on Polymarket (37c right now) and "No Biden" on PredictIt (53c right now) seems like a perfect arbitrage, and eliminates your first con — turning 90c into $1.

I would love to be able to bet but sadly I am not a US person.
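One caveat worth sketching for that version: PredictIt has historically charged a 10% fee on profits (plus 5% on withdrawals), which takes a real bite out of a 10c edge. The numbers below assume those fee levels; check current terms:

```python
def yes_no_arb_net(yes_price, no_price, predictit_fee=0.10, payout=1.0):
    """Yes on one venue, No on PredictIt: exactly one side pays regardless of
    outcome (including a dropout). Worst case for fees is when the PredictIt
    'No' contract is the winning side."""
    cost = yes_price + no_price
    gross = payout - cost
    predictit_profit = payout - no_price          # profit on a winning No side
    worst_case_net = gross - predictit_fee * predictit_profit
    return gross, worst_case_net

gross, net = yes_no_arb_net(0.37, 0.53)
# gross edge is 10c, but the 10% fee on a winning 47c PredictIt profit
# cuts the worst case to about 5.3c, before any withdrawal fee.
```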

Expand full comment

https://manifold.markets/ScottAlexander/in-2028-will-ai-be-at-least-as-big?r=QmVuamFtaW5Ja3V0YQ

Could you clarify how you intend to resolve this? How much of a factor would the portion of voters considering it a top issue be compared to your subjective impression of the political conversation?

Expand full comment

Thanks for the shout out! I think the link you used might be broken, here's a working link to the article - https://asteriskmag.com/issues/05/prediction-markets-have-an-elections-problem-jeremiah-johnson

I do think prediction markets show enormous promise, but they've got some growing pains to work through. Inefficiencies in those markets don't mean they're not valuable. And even highly liquid, sophisticated markets can have these kinds of irrational episodes where dumb money overwhelms smart money - the entire meme stock phenomenon is evidence of that.

Expand full comment

Using decimal odds, Trump is 1.83 on Bet365 and Biden is 2.75. That is an implied probability of 54.6% for Trump and 36.36% for Biden.

Stated differently, a $100 bet on Biden would pay back $275, for a $175 profit.

If it is indeed a coin flip, this is massive value. My instinct is that the combined odds of him either losing to Trump OR dropping out make it priced properly.

Of course, as a Bet365 user I am likely in the same bubble that drove these odds to 2.75 in the first place.

Here are the next four:

Michelle Obama 10.00

Nikki Haley 17.00

Newsom 23.00

Kennedy Jr. 29.00
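For reference, converting decimal odds to implied probabilities is just the reciprocal, and summing them across candidates shows the bookmaker's margin (the overround). A quick sketch using the odds quoted above:

```python
def implied_prob(decimal_odds):
    """A decimal price of 2.75 means a $1 stake returns $2.75, so p = 1/2.75."""
    return 1 / decimal_odds

odds = {
    "Trump": 1.83, "Biden": 2.75, "Michelle Obama": 10.0,
    "Haley": 17.0, "Newsom": 23.0, "Kennedy Jr.": 29.0,
}
probs = {name: implied_prob(o) for name, o in odds.items()}
# Trump ~54.6%, Biden ~36.4%; the six quoted candidates alone sum to ~115%,
# so the book keeps a healthy margin even before counting the longer shots.
overround = sum(probs.values())
```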

Expand full comment

Good odds there for Nikki.

Expand full comment

I'm personally not sure how to feel about Neuralink. On one hand, it (or any other complex BCI) could usher in an era as profoundly different from our world as the pre-smartphone era is from today. Yet I feel like there would likely be extremely nefarious or just plain bad uses that might completely wipe out that good (imagine advertising spam in one's brain!). I'm not really sure which is stronger, or whether the development of BCIs will be like most modern technology, somewhere in the middle.

Expand full comment

“Wipe out that good” — I think smartphones are a net negative (mostly because they are a distraction and because I think they are bad for children and teenagers) and can’t really think of any significant benefit they bring.

Expand full comment

Smartphones are incredibly good for disabled people (including temporary disability eg a hospital stay), especially when bed bound or needing a lot of rest - they enable people to keep in touch, do productive work, and be much more independent due to being able to communicate without too much energy expended.

Expand full comment

I wrote a 20-page paper back in law school about the "digital divide" in access to computing and the internet between different communities, including a lot about adapting libraries and similar spaces into community computing centers, setting up smaller dedicated ones, and so forth. Basically all of that was rendered irrelevant when the smartphone gained wide adoption. And all the horrible time-wasting garbage available on them subsidized the price to the point where even poor, disadvantaged people could own one, and would all want one.

Now there are still a few choke points that affect the poor, such as having available data, which they typically buy as pre-paid cards that often run out by the end of the month; so having public-access wi-fi remained relevant. But consider the amount of money that was poured into making sure low-income people had access to the internet: taxpayer and donor bucks buying tons of computers and networking at libraries and at rural or inner-city schools. We really thought that was helping at the time, and within a few years it meant nothing because smartphones happened.

Expand full comment

I can easily buy that smartphones, compared to pre-iPhone PDAs scaled up a bit, are a pure loss. But handheld devices capable of being a good dictionary, a bad translator, a searchable map, an extensible generic calculator, a book reader, an email client, electronic/indexable note storage — those are useful. The fact that most smartphone usage time is not this (even if true videogames are their own category, not counted together with the allegedly useful stuff), that's how scaling works in many markets… unfortunately.

Expand full comment

I don’t mean that smartphones aren’t (potentially) useful products. I mean that I think the era of ubiquitous smartphone adoption is a net negative for society.

Expand full comment

I replied interpreting that

> can’t really think of any significant benefit they bring.

means a claim that smartphones as used now don't have significant plus sides not covered by other currently widely available devices, not that those are outweighed by the negatives.

On the net, and if «ubiquitous smartphone adoption» implies specifically smartphones with the current hardware/software ecosystems, then I guess we are not disagreeing much.

Expand full comment

For me, smartphones' biggest benefit is that I can google and get almost any question answered instantly. I probably do at least 5 searches per day in situations where I don't have access to a computer. Many of these searches could just be delayed til I'm near a computer, but it's not rare for one to be about something I need the info on immediately, such as what time a store closes. Still, I lived for years without being able to search info like that, so I'm sure I'd manage fine now without that convenience. But I am used to being able to satisfy my curiosity about things instantly, and that's harder to give up. Still, I agree with you that cell phones are a net negative for young people, and actually think mine's a net negative for me. Like everyone, I tend to read my phone now in situations where I would otherwise be looking out the window of the Uber, or people-watching, or daydreaming. I actually do not scroll social media sites -- I read stuff I'm genuinely interested in, such as ACX. But I think it's still bad for me to not be aware of my surroundings so much of the time.

Expand full comment

This is more or less like my experience.

Expand full comment

Algorithmic social media is a net negative. Smartphones enable people to tether themselves to algorithmic social media 24/7, but they also enable many other good things and not everybody does the smartphone-social-media thing.

Expand full comment

What specific good things do they enable? Clearly there are many productive things one can do with a smartphone, but it seems people don’t generally feel their phones make them more productive.

Expand full comment

I'd say prediction markets did badly at the 2024 Presidential primaries, too. I guess it's possible that one day we could get a tell-all book revealing that Biden was planning not to run again and then changed his mind at some point, but that's probably not what happened, and even if it was true that wouldn't explain the low prices when he did officially announce. The low prices for Trump to win the Republican nomination are also probably because the market was wrong, not just because Trump got lucky. So not just a 2022 event.

I haven't checked if the markets were worse than Metaculus, to be fair.

Expand full comment

I've been enjoying Manifold lately, but I do think it has a limitation in that it's hard to entice people to bet on markets they're not personally interested in. Most of the time and energy goes into the same handful of topics.

The ideal (it seems to me) would be a system that encourages people to dig in and research difficult topics to try and win mana. But it seems to me like the optimizing strategies right now don't really end up doing that.

Expand full comment

This comment inspired me to try an experiment. I spent $10 real-world to subsidize/boost this market, which got zero traction the first time: https://manifold.markets/TomGoldthwait/who-will-win-season-16-of-rupauls-d?r=VG9tR29sZHRod2FpdA

I'm curious to see if that's enough to get some people in there or if it's just not something anyone on Manifold cares about.

Expand full comment

37% chance of an AI fizzle? Ouch! But, yeah, I'm still waiting for GPT4 to get "Which inorganic compounds are gases at STP?" close to right. :-(

Expand full comment

Nuclear power is not that low of a bar! Honestly, large-scale information processing using computers has a pretty disappointing track record by now… Economic limitations hit way before realising the already convincingly roadmapped technically feasible valuable capabilities — the most valuable stuff for a civilisation is about positive externalities, and economic pressure is towards optimal capture regardless of the sign of externalities.

Note that if language AI ends up being used either as a toy or as a reconceptualised universal «DeepL, but also English to a summary table or to Python», it might still learn to answer your question reliably (probably with an explicit mention of reference sources to filter). But if non-toy AI use is constrained to people with a conceptual model of what works well, won't it still be a nuclear-power-level fizzle? Yes, some of the already-demonstrated AI capabilities will stagnate because nobody will find a way to scale their application. Some will stumble forward towards «not perfect, and failures are creepy, but better than the other options, so well».

Expand full comment

Many Thanks! Re

>a reconceptualised universal «DeepL, but also English to a summary table or to Python»

I'm somewhat confused. Would that basically be an improved natural language interface to existing software?

Re

>probably with an explicit mention of reference sources to filter

Actually, in one of the prompts that I tried (most recently https://chat.openai.com/share/12040db2-5798-478d-a683-2dd2bd98fe4e ) I gave it the url of wikipedia's list of gases ( https://en.wikipedia.org/wiki/List_of_gases which is comprehensive, as far as I can tell), so all that it had to do was filter out the organic ones and cut off the list at 0C. It _still_ couldn't manage to do that.

>But if non-toy AI use is constrained to people with a conceptual model of what works well, won't it still be a nuclear-power-level fizzle?

I'm a bit confused about what you have in mind for this scenario. Do you mean if the developers of an LLM-using application must have a model of what works well, or do you mean if the end users must have a model of what works well? The former is still constraining, compared to full AGI, but the former is a lot less constraining than the latter. I would say that if LLMs can routinely provide a good natural language user interface (including straightening out umpteen ways of phrasing the same request, and enough commonsense reasoning to notice probable implied requests in most cases) to existing software, that's somewhat better than a fizzle, though short of a Futurama.

>Honestly, large-scale information processing using computers has a pretty disappointing track record by now…

Do you mean for all computer use, or just AI, or just use of LLMs? For computer use in general, there are many pedestrian applications (starting with census data and payroll programs), which I would call large-scale, and which I would also call successful.

>Economic limitations hit way before realising the already convincingly roadmapped technically feasible valuable capabilities — the most valuable stuff for a civilisation is about positive externalities, and economic pressure is towards optimal capture regardless of the sign of externalities.

Could you elaborate on what you have in mind here? My current understanding for LLMs is that inference is pretty cheap (after all, OpenAI lets us play with GPT3.5 for free, and with GPT4 for a modest price). My impression is that getting rid of hallucinations and making the answers to prompts more reliable is the big stumbling block currently.

Expand full comment

> Would that basically be an improved natural language interface to existing software?

Not only; it would also be improved natural-language parsing of data. Also, not explicitly mentioned but within the span of the things mentioned: re-representation from angry spoken English to business-speak, etc.

> It _still_ couldn't manage to do that.

I have seen reports of people using an interface that allows giving it the text directly and then asking it to summarise, and getting something not completely useless; I guess that's a few fewer steps to mess up than passing a URL… But stuff is also just a bit unstable, and how to force an LLM into proper step-by-step work is a black art. If things are recognised as stagnated, maybe people will figure out how to provide control by demanding that it show intermediate steps or something?

> end users must have a model of what works well

I mean this. There is a tool, there are deployment considerations, and there is a skill to using it well. Not in the sense of understanding the inner workings, but in the sense of having a rough idea of whether your flailing is improving things or making them worse and needs to be rolled back.

> and enough commonsense reasoning to notice probable implied requests in most cases

… and not enough bad incentives to just manipulate this instead in some weird way…

> I would say that if LLMs can routinely provide a good natural language user interface to existing software, that's somewhat better than a fizzle, though short of a Futurama.

Why will it go differently than web search, OS GUIs, and other stuff? Making fine control even harder than before, when not impossible. And declaring it a feature. (And remember, for calibration, that our definition of a fizzle is a power source that happens to power half of France, the entire US aircraft-carrier fleet, something like 8% of electricity generation globally, and so on.)

>Do you mean for all computer use, or just AI, or just use of LLMs? For computer use in general, there are many pedestrian applications (starting with census data and payroll programs), which I would call large-scale, and which I would also call successful.

I mean computer use in general. Yes, tabulating business records — which IBM did pre-vacuum-tube — got more efficient. I am not saying nothing got done! Just that optimistic plans, well supported by convincing technical roadmaps, have gone nowhere because the incentives were effectively to prevent them from happening.

> after all, OpenAI lets us play with GPT3.5 for free

For calibration, it is not clear that Uber has ever been operationally profitable, so market-share-grabbing / alpha-testing strategies might not reveal that much about scaling costs…

The economic question is, in any case, what do you want to end up selling, and how you are maximising your chances to obtain an abusive monopoly. Do you want to invest into fighting hallucinations without being a tool only for skilled users? Judging from the fact that ground truth agreement doesn't work that well between humans, it is not easy! And then there is no guarantee you'll be able to make it your unrivaled capability…

Expand full comment

Many Thanks!

>Not only, also an improved natural language parsing of data. Also, not explicitly mentioned but within the span of the things mentioned, re-representation from angry spoken English to businessspeak etc.

Yes, that seems doable with basically the LLMs as they stand now.

>I have seen reports of people using an interface allowing to give a text directly and then asking to summarise and getting something not completely useless; I guess this is a bit fewer steps to mess up than passing an URL… But also stuff is just a bit unstable, and how to force an LLM into a proper step-by-step is a black art, but if things are recognised as stagnated maybe people will figure out how to provide control by demanding to show intermediate steps or something?

Yeah, though, in this particular case, even directing GPT4 to the URL should not have been needed. The URL was of a wikipedia page, so it should have been part of the training set before I ever prompted GPT4. Yeah, I tried something like "show step by step work" at one point, but it didn't help, and I wouldn't really expect it to in this case. This is really processing ~80 entries, each in a trivial way ("Does it boil below 0C?" "Does it contain any carbon atoms?").

>>end users must have a model of what works well

>I mean this.

Many Thanks!

>Not in the sense of understanding inner working, but in the sense of having a rough idea whether your flailing is improveing things or making them worse and needs to be rolled back.

Ok. The use case that I have in mind, where the user needs some sense of what is sensible but not too much, is an airline reservation system. A good AI airline reservation system would have enough smarts to, e.g., bring up questions about whether the traveler wants a car rental, what the min and max tolerable layovers are, etc. I think existing LLMs probably have enough knowledge that, with modest training, they could go from "My Aunt Hilda is sick, and I'm flying to visit her." to "This is probably urgent."

>Why will it go differently than web search, OS GUIs, and other stuff? Making fine control even harder than before when not impossible. And declaring it a feature.

I think that there is a trade-off. E.g. an airline reservation system can guess that a traveller is likely to need a rental car, which is beyond what web search can do. It can also do the equivalent of several web searches, or finding relevant technical terms from an informal phrasing of a request. As you said, fine control can be lost. The AI can make a _wrong_ "typical" assumption.

>I mean computer use in general. Yes, tabulating business records — which IBM did pre-vaccum-tubes — got more efficient. I am not saying nothing got done! Just that optimistic plans well-supported by convincing technical roadmaps have gone nowhere because the incentives were effectively to prevent them happening.

Ok. I used to have several books in a series, "Computer Projects That Failed." Most of those weren't incentive problems, though; typically they were mistaken assumptions buried in the implementation. Do you have a class of projects in mind?

>The economic question is, in any case, what do you want to end up selling, and how you are maximising your chances to obtain an abusive monopoly.

Yes, every company dreams of getting to that position.

>Do you want to invest into fighting hallucinations without being a tool only for skilled users?

I'm confused. Solving hallucinations would allow broader uses and broader sets of users, basically looser constraints on both. Companies generally want broader markets.

Expand full comment

> Yeah, though, in this particular case, even directing GPT4 to the URL should not have been needed.

Even with humans each obvious step one wants to leave implied adds a chance of misunderstanding!

As a non-LLM example, there are (always have been) ATPs, automated theorem provers, i.e. fully formal proof search engines. They have long been impressively good and also not really sufficiently good and also enough to (rarely) prove stuff humans have outright failed to and also slightly weird and also still useful for some use cases and also requiring the user to understand what the user wants… There are some cases where you need to prove «if X, then A and B», and just splitting all such stuff with a script into «X then A» and «X then B» drastically improves your chances. There are some esoteric cases where splitting makes things worse.

The ATPs are perfectly capable of doing this step, but search priorities work in a complicated way.

> Yeah, I tried something like "show step by step work" at one point, but it didn't help, and I wouldn't really expect it to in this case.

If the core is like the current LLMs, it needs some tinkering with the instruction interface and generation structure, so that it becomes convenient to express, one way or another, «extract data about all the stuff on the page, structure it name-boiling point at 1 atm-elements involved, filter boiling point below 300K, filter elements include C». Then have it construct but not send you the full table first… Just saying step-by-step right now is not enough.

> As you said, fine control can be lost. The AI can make a _wrong_ "typical" assumption.

The UI design already does it, so now we have a lot of well-known tools not suitable for fine-control work, and sometimes some tools that need quite a bit of learning to use them (and sometimes even to find them). If AI tools are like that and push out what currently counts as dumbed down tools, we can easily end up with a net loss of information processing capabilities available to typical users!

> Most of those weren't incentive problems, though, typically mistaken assumptions buried in the implementation. Do you have a class of projects in mind?

_Specific_ projects often fail like that, sure. Classes of capabilities fail because it ends up more profitable to make «lack of necessity» for complex features a popular marketing point.

The problem is that Engelbart's The Demo kind of looks more promising for complex information work and teamwork than what we have half a century later, and if you read his «Augmenting Human Intellect» you'll see that a) he had a reasonable roadmap what to do with it, b) the approach has some load-bearing parts that are kind of anathema nowadays.

Or you look at a later and watered-down thing, hypertext, then notice that we have very few truly hypertext projects (usually wikis) and little support for meaningfully-hypertext workflows (Smallest Federated Wiki shows you the trail across links as pages side-by-side, which counts as a rare and advanced feature nowadays), and a ton of plain-formatted-text documents slapped onto a hypertext platform.

See also the campaign to paint as crimethink the idea of the User Agent actually acting as the user's agent, displaying and transforming hypertext documents for the user rather than serving as a faithful execution engine for malware.

>Companies generally want broader markets.

Companies want low-risk ROI, with not too much return going to competitors. How much consumer surplus they are ready to tolerate varies.

High-risk investments into making things better for everyone competitors included? Oh, and saying you have succeeded makes the scariest liability claims against you actually look reasonable? Can go in many interesting weird ways.

Expand full comment

Many Thanks!

>Even with humans each obvious step one wants to leave implied adds a chance of misunderstanding!

True! Let me emphasize that I'm _not_ trying to do everything possible to force the right answer out of GPT4. I know how to find inorganic gases. If all I wanted was to get it to tell me the right answer, I could have just said: "Repeat after me H2S, O2F2, PH3, ...". I'm trying to assess how close or far it is from being able to answer like an intelligent STEMM professor.

>just splitting all such stuff with a script into «X then A» and «X then B» drastically improves your chances

Good point! Yeah, the "smarter" a system is, the fewer such interventions should be necessary, but, of course, the AI field isn't at AGI yet, and more limited systems require interventions like the splitting you describe, which gets closer to programming them than to conversing with them.

>extract data about all the stuff on the page, structure it name-boiling point at 1 atm-elements involved, filter boiling point below 300K, filter elements include C

Yeah, but at that point GPT4 would be doing very little "common sense reasoning" and I might as well write a shell script.
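For what it's worth, the mechanical version of that filter really is just a few lines; a minimal sketch with made-up sample data (the actual page had ~80 entries):

```python
# Illustrative sketch (sample data invented for the example): the kind
# of mechanical filter the "might as well write a shell script" remark
# refers to -- keep compounds that boil below 300 K and contain no carbon.
compounds = [
    # (name, boiling point in K at 1 atm, elements present)
    ("hydrogen sulfide", 213.6, {"H", "S"}),
    ("phosphine",        185.4, {"P", "H"}),
    ("methane",          111.7, {"C", "H"}),  # organic: excluded
    ("water",            373.1, {"H", "O"}),  # boils too high: excluded
]

inorganic_gases = [
    name
    for name, bp, elements in compounds
    if bp < 300 and "C" not in elements
]
print(inorganic_gases)  # -> ['hydrogen sulfide', 'phosphine']
```

The hard part, of course, is the extraction step that builds the table, which is exactly what the common-sense reasoning was supposed to cover.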

>The problem is that Engelbart's The Demo kind of looks more promising for complex information work and teamwork than what we have half a century later, and if you read his «Augmenting Human Intellect» you'll see that a) he had a reasonable roadmap what to do with it, b) the approach has some load-bearing parts that are kind of anathema nowadays.

Hmm. I know of Engelbart's mouse and GUI inventions but haven't read his <<Augmenting Human Intellect>>.

Could you elaborate on what parts became "anathema" and to whom?

>See also the campaign to paint the idea of User Agent being actually User Agent, displaying and transforming hypertext documents for the user, and not a faithful execution engine for malware, as crimethink.

I'm lost. (And I can't just google User Agent, because I get a ton of unrelated hits.) Could you point me to a description of the User Agent (proposal??), and the campaign against it - who? why? - and how it was painted as crimethink, and by whom, and why?

Expand full comment

>Maybe this is just because he has a higher chance of becoming President before the trials are over?

There are a variety of other options. Trump's old and could just die before he gets around to serving time. He could theoretically skip out on his bail, though it's not very likely. The ballot-removal shenanigans could escalate into civil war, which even if he loses might not wind up with him serving time (if he's assassinated or otherwise killed during it, or if he's executed for treason). Nuclear war could happen over Taiwan, which even if Trump doesn't die would see the Blue Tribe crippled.

Expand full comment

I think the ballot removal shenanigans are a *terrible* precedent. Yeah, no means are too extreme to prevent the rise of Hitler, but (1) Trump is not Hitler (2) when he goes, you've still left behind "we can prevent The Other Lot from running their candidate by doing this" as precedent, and nothing in the laws of the universe says The Other Lot is always going to be Them and not You.

If Histler, Adolph J. wants to run for dogcatcher, let him on the ballot and let the citizenry vote for him or not. Don't muck about with "well we can't allow the dumb idiot public to have a choice, this isn't a democracy you know, in order to protect democracy we had to destroy it".

Expand full comment

Is there some reason you're telling me this? I know this.

Expand full comment

I wasn't particularly telling *you*, just passing a remark. I don't think civil war is likely, just that it's a very bad idea to allow this to stand as legal precedent. The worrying thing is all the calls on every side to Do Something!, with little to no realisation, so it seems, that this is a weapon that can be used against anyone, not just The Bad Guys We Have To Stop.

Expand full comment

If the Chinese invade Taiwan it isn’t even a violation of international law. They won’t for years anyway.

Expand full comment

I mean, I sure hope they don't, but I'd give it double-digit percent this year. There's a reason I live in Bendigo, have 20L of water in my bathroom cabinet, and am in the process of rendering my life savings resilient against WWIII.

Also, it's not especially relevant whether it's a violation of international law. What's relevant is that the West's pledged to defend Taiwan, that the loss of Taiwan would be a problem for the Far Eastern alliance system even absent that pledge, and that a direct conflict between the USA and PRC over Taiwan probably means a nuclear exchange.

Expand full comment

I don't think this will ever happen. More realistically, China will blockade Taiwan, and in response, the US/EU will send a very strongly worded letter sincerely condemning the action in the harshest possible terms. After a few weeks, Taiwan will capitulate (by going the way of Hong Kong), and it's back to business as usual (except that now Taiwan has been part of Communist China since ancient times). After all, no one wants a war, right?

Expand full comment
Jan 31·edited Jan 31

If you think it's worth defending Japan and Korea, you should think it's worth defending Taiwan, because the PRC has made noises about claiming Ryukyu and Korea, and because defending Taiwan now means fewer American deaths than defending Japan and Korea later, once Chinese possession of Taiwan breaks the First Island Chain (also because the Chinese nuclear arsenal is growing rapidly).

The US foreign policy establishment know this. And of course, this is assuming that the PRC doesn't pull a Yamamoto and hit a bunch of US bases in East Asia in the opening salvo, which is fairly-likely but not remotely assured.

Also, did you see the polls on Ukraine? Much of the public wanted to declare a no-fly zone or outright war.

Expand full comment

Sadly, I don't think they do; that is, maybe they do, but I think they also know that the US no longer has the resources to contest China in any meaningful sense -- short of going thermonuclear. I foresee a long period of song-and-dance over the next couple of decades, as control of the world passes from West to East.

Expand full comment

Would you mind putting a number on that prediction? I'd like to know just how far apart we are.

Expand full comment

Seems unlikely. The Chinese economic miracle is running on fumes, and that's not taking into account the inevitable divorce from the West over a Taiwan takeover, even if it doesn't result in a hot war.

Expand full comment

>the PRC has made noises about claiming Ryukyu and Korea

I have a really dumb question about the Korea part of this scenario. What happens to North Korea? Are the PRC and North Korea friendly enough that that part of the expansion would be a "friendly merger" or is the PRC expecting to annex North Korea by force, nukes and all?

Expand full comment

It's not a dumb question at all.

1) I know the PRC's working on revisionist history claiming Goryeo (a Korean state with Manchurian territory, and the literal origin of the word "Korea") to be a Chinese state, and I know that Korea was a tributary lost in the Century of Humiliation, undoing all of which is the CPC's core promise to the Chinese.

2) I don't think they've explicitly claimed they want Korea back yet (whereas they have for the Ryukyu islands). They're laying the groundwork for it, though, which is why I said "made noises".

3) I have no idea how they would attempt to operationalise this in the case of NK.

Expand full comment

> Much of the public wanted to declare a no-fly zone or outright war.

I’m willing to bet that 90% of that population had heard of Ukraine for the first time that week.

But back to Taiwan - the Chinese are a people who can wait it out and I really doubt if the US population would want a war here. It would be lost anyway.

Expand full comment

Regarding "dumb money" in prediction markets - shouldn't it usually be the case that dumb predictions outnumber good ones? Prediction markets are a zero-sum game, and running the market incurs costs which need to be covered, so the median prediction should have slightly negative expected return. Stock markets attract smart money, but they have a positive expected value, since stocks are productive - even if your bets don't outperform the average, you are still likely to make money due to dividends and increased share values.

Expand full comment

My understanding is that prediction markets are often positive-sum via subsidy. So for instance, if I want to know the answer to a question, I start by putting in my own money. It's kind of like a bounty - I set aside $500 to create 1000 shares. Half of them pay $1 on YES and the other half pay $1 on NO. The intrinsic value of those shares is what gets the market up and running as people buy and sell the shares.
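The minting arithmetic described here is simple enough to sketch (numbers from the comment; real platforms layer an order book or AMM on top of this):

```python
# Sketch of subsidizing a market by minting complete share sets.
# $500 mints 500 (YES, NO) pairs -- 1000 shares total -- because each
# pair is guaranteed to pay out exactly $1 at resolution.
subsidy = 500.00          # dollars put up by the question asker
payout_per_share = 1.00   # winning side pays $1/share, losing side $0

pairs_minted = subsidy / payout_per_share   # 500 complete pairs
total_shares = 2 * pairs_minted             # 1000 shares

# Whatever the outcome, exactly one share of each pair pays out, so the
# sponsor's $500 fully backs the shares -- traders can buy either side
# without a counterparty having to show up first.
print(total_shares)  # -> 1000.0
```

This is why such a market is positive-sum for the traders as a group: the sponsor's subsidy is the bounty they collectively collect for producing the forecast.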

Expand full comment

Oh, interesting. I wonder which of the above markets work like that.

Expand full comment

I'm calling hax.

Nate Silver is more than sufficiently skilled to use manifold and metaculus to help him forecast e.g. tracking people with records of forecasting. 97% chance that Silver or his advisors can do more than enough spreadsheetmancy for that.

Expand full comment

I'm pretty sure they don't. They haven't (unfortunately) open-sourced their code, but they have described their algorithm in a fair amount of detail; it doesn't use prediction markets as an input.

Expand full comment

Nate Silver has talked about this a bit. He sees his own modeling as being upstream of prediction markets. He wants people in prediction markets to be able to use his modeling to help guide their bets. If he himself uses prediction markets as an input to the model, that doesn't really work as well. So his election modeling is based entirely on raw polling data, fundamentals, etc.

Expand full comment
Jan 31·edited Jan 31

Roll to disbelieve. Either he is telling the truth, or he is following his incentives to be partially downstream from public prediction markets while simultaneously following his incentives to lie about being 100% upstream of public prediction markets.

Expand full comment

On the limits of real-money prediction markets: if you assume no cost for the effort of the research, their miscalibration still has to be large enough that betting on them would outperform the S&P in risk-adjusted expected returns for smart money to want to bet on them.

For example, if right now I can buy Biden at 40% but think he has a 45% chance to win, that's locking up money for almost a year for ~12% expected returns - but I could get roughly the same expected returns by just buying SPY, and at a lower risk (since betting on election markets is all-or-nothing, which is very high risk). So you can expect them to be fixed by smart-money arbitrage to within 10% of accurate, but not 5%.

(This gap should narrow closer to election day).
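A sketch of that arithmetic (the exact edge at these numbers works out to ~12.5%, in line with the rough figure above; the SPY benchmark is an assumed round number):

```python
# Buy Biden at 40 cents while believing the true probability is 45%.
price, believed_prob = 0.40, 0.45

expected_return = believed_prob / price - 1   # expected value per dollar staked
spy_benchmark = 0.10                          # assumed rough annual S&P return

print(f"{expected_return:.1%}")  # -> 12.5%

# The edge over simply holding SPY is only a couple of points, and the
# bet is all-or-nothing, so risk-adjusted it is plausibly worse -- which
# is why mispricings smaller than this gap can persist for months.
print(expected_return > spy_benchmark)
```
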

Expand full comment
Jan 30·edited Jan 30

RE: US election - you'd expect the markets to be misaligned because you have to tie money up for 3/4 of a year to correct them.

RE: liquidity & dumb money - there's a step below professional traders, which is professional gamblers. I work in this kind of area, and several of my friends have built statistical models for Eurovision Song Contest markets to answer questions like "how many points will Lithuania receive?"

As far as I'm aware, the big trading firms don't even bother betting on e.g. bookie odds for Premier League games, which have tens of millions of liquidity, but this is easily enough to sustain professional gamblers (and probably they're in the smaller/more niche markets even).

This affects the arbing problem of election odds too - as people professionalise, they'll create accounts on every real money prediction market and seek arb opportunities. Before that point - is it really worthwhile to spend your time looking for $10 here and there? The motivations for people who enjoy forecasting vs. people who enjoy finding edges in gambling markets aren't the same.

So the liquidity problem isn't as bad as it seems - we don't need prediction markets to attract Wall Street before these problems are solved. But there is likely at least one step change required before then, which is about attracting normal customers. Reach the point where arbing can get you an immediate $100, particularly if you can automate it, and you are set.

The shorter term solution would be if the big prediction markets had liquid markets like "What percentage win chance will this market have Biden at by the end of January", we would see quick agreement via arbing.

(I have Plans and Schemes to work on this, as it's similar to poker site liquidity, one of the few areas where I actually am an expert)

Expand full comment

> Why? All the things voters might blame Biden for - inflation, Ukraine, Gaza - happened either well before or well after the early-2023 period when his numbers began to decline.

I wonder if some of this is actually the legal shenanigans. Trust me I don’t want Trump to win, not because of silly white supremacy, or Russian agent stuff - that’s a tale for idiots - it’s because of his volatility. We really don’t need that right now.

Anyway, I’m still a bit put off by some of the legal stuff: is the award of $83m in any way proportional to what happened here?

Seems to me that this could be used against anybody in the future - the republicans will surely use it. Clinton could hardly have survived the scrutiny.

(I’m not talking about the criminal or referral cases which probably have merit).

Expand full comment

"Anyway I’m still a bit put off by some of the legal stuff, is the award of $83m in anyway proportional to what happened here?"

I've Opinions on that one. Trump doesn't help himself by shooting off his mouth, but if someone accuses you of a crime, you claim it didn't happen and they're a liar, and you can then be civilly convicted of defamation, I don't know what anyone who maintains their innocence is supposed to do - shut up and take the falsehood being spread around about you because you'll only be punished further if you fight the charge?

That's quite apart from the more I hear about the Carroll case, the more I lean towards she's lying.

Expand full comment

Claiming that something didn't happen needn't involve affirmatively and aggressively calling the accuser a liar. If you have observed Trump's bombast for any period, then you know exactly what he said about Carroll. Just about the only typical thing he didn't say is that she's ugly.

I am concerned about close cases and typical defendants; I don't want a simple denial to be considered a defamatory statement, and so I hope that the Trump/Carroll events do not set a precedent in that direction. But the good (?) thing about Trump is that he doesn't usually present us with close calls and nuance.

Expand full comment

What’s funny is according to Republicans circa 1990s Trump has made himself ineligible to be president by lying in civil depositions…that is the “perjury” that led to Clinton’s impeachment.

Expand full comment

We're a long way from the 1990s Republican party my friend. I miss it dearly.

Expand full comment

That Clinton witch hunt was no better.

Expand full comment

I just miss the party having standards, even if they were used as a weapon. Once you lose your standards, you can't get them back.

Expand full comment

On Ukraine - I think people have generally overestimated its short- and medium-term prospects at least since fall 2022: the "front line change" market for 2023 (https://manifold.markets/PS/will-the-front-line-in-ukraine-chan-3a82fb352136) started dropping fast in August and then again in September, and the situation was pretty much the same for the first half of 2023 (https://manifold.markets/PS/will-the-front-line-in-ukraine-chan-0591b16ee4cd). I'd guess part of the reason was the same as for the Trump markets - most people on Manifold are pro-Ukraine.

Expand full comment

Also, interestingly, almost all of the "no" money in the 2024 market comes from one person (I assume it's the user who's on here as @psychotechnology (https://substack.com/profile/74588887-the-irrationalist)). Depending on the outcome, it will be either a prime case of smart money getting the better of dumb money after all, or the opposite 🤷

Expand full comment

The funniest thing is that there was this very public huge leak of tons of classified US intelligence assessments in the early spring of 2023, which basically said that the front-line won't move much. But The Great Ukrainian Counteroffensive Backed By The Entire West couldn't possibly fail, so the intel was promptly forgotten by the MSM establishment in favor of uncritical hyper-charged propaganda, which of course resulted in a confused backlash.

Expand full comment

Regarding the problems with prediction markets, I think what a lot of people (including the Asterisk article writer) are missing is the nitty-gritty of how markets work:

- The headline probabilities the markets show are usually based on the last trade. So if after election day it is still showing Trump at 5%, it doesn't necessarily mean people are willing to bet at those odds; more likely no one has traded since the outcome became clear (why would they?)

- A lot of markets are illiquid and have huge bid-ask spreads. For instance, in the PredictIt Trump-Biden market shown in the post, you could say it favors Trump, but the bid-ask spread shows 46%-55% for him to win, so I think it's more accurate to say it shows a tie.

- Markets have pretty high fees (PredictIt charges 5% on withdrawal), so even if the bid-ask spread were 0 (which it is not), it might not make sense to bet against a 4% outcome even if you are 100% sure it won't happen.

- Cost of capital: if you think there's an opportunity to make 3% after all costs on a market that resolves in 1 year, a rational trader still wouldn't take it. You can just put the money in treasuries and get 4% over the same period with zero risk.

- Sometimes there are technicalities about the resolution conditions of markets that make a big difference. I remember there was a market on the 2020 election result on the now-defunct platform Augur that seemed to have opportunities for profitable bets on Biden after election day (I made a bet myself). But it ended up taking a long time to resolve (only in January, I think), and for a while it actually looked like it could resolve as Invalid. This was because, given the particular phrasing of the resolution condition, it was unclear whether certain technical conditions about electoral college electors certifying the result were met by the resolution date. A naive observer would just look at the odds the market was still showing in December and decide prediction markets are dumb.

If you take all of these into account, it should be clear that prediction markets show a lot less irrationality than the Asterisk article claims. Of course, it does mean the markets are less useful than you'd naively think for the purpose of actually making predictions. To be fair, a lot of these issues will become less severe if the markets become more liquid.
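The fee point above is easy to make concrete: with the 5% withdrawal fee mentioned, betting against even a "guaranteed" 4% outcome is a net loser.

```python
# Why a 5% withdrawal fee (the PredictIt figure cited above) makes
# betting against a "sure" 4% outcome unprofitable.
no_price = 0.96            # NO shares on a market priced at 4% YES
payout = 1.00              # winning share pays $1
withdrawal_fee = 0.05

gross_return = payout / no_price - 1                       # if NO hits
net_return = payout * (1 - withdrawal_fee) / no_price - 1  # after the fee

print(f"gross {gross_return:+.2%}, net {net_return:+.2%}")
# -> gross +4.17%, net -1.04%: a certain win becomes a certain loss.
```

So prices in the 95-99% range can sit "wrong" indefinitely without anyone being irrational.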

Expand full comment
Jan 30·edited Jan 30

Would be very funny if it turned out that non-real-money prediction markets being about as good as or even better than real-money prediction markets is not just a temporary bug, and that it's possible to have non-real-money financial markets that systematically beat the stock market.

Expand full comment

I actually expect this would be the case. Real-money markets have a "risk premium" baked into their pricing, which play-money markets need not have. So, in some cases, play-money markets might converge on more accurate probabilities than real-money markets.

The average real-money investor is risk averse: they'll pay more than expected value to be protected from "bad" events. (This is why insurance is a profitable business!) So, in real-money markets, "bad" events tend to have higher implied probabilities than their real-life objective probabilities.

In fake-money markets, it's quite possible that the average investor could be risk-neutral and wouldn't have this bias.
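A toy illustration of that risk premium, assuming log utility (all numbers invented for the example): a risk-averse agent will pay noticeably more than the expected loss to insure against a bad event, which is exactly the wedge that can skew real-money prices.

```python
import math

# Log-utility agent with wealth 100 faces a 10% chance of losing 50.
wealth, loss, p_bad = 100.0, 50.0, 0.10

expected_loss = p_bad * loss          # 5.0: the risk-neutral price

# Most the agent will pay for full insurance: solve
# ln(wealth - premium) = E[ln(uninsured outcome)].
eu = (1 - p_bad) * math.log(wealth) + p_bad * math.log(wealth - loss)
max_premium = wealth - math.exp(eu)

print(round(expected_loss, 2), round(max_premium, 2))  # -> 5.0 6.7
```

The ~1.7-point gap is the risk premium; a risk-neutral play-money bettor has no reason to pay it, so play-money prices need not carry that bias.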

Expand full comment

The big question with Trump's election is likely going to be one of turnout. Trump turns out the left like nothing else, but it's just hard to be all that motivated by a fear of Biden. What balances the odds are the chances that Biden has a big (literal, as in breaks a hip) fall or makes some dumb blunder before the election.

With Trump short of death or coma it's pretty much all priced in.

I suspect this turnout effect is relatively hard to handle in election polling, as it's a likely-voter effect that's candidate/election dependent.

Expand full comment

The "Border" one seems weird - the pre-2022 borders *did* include more than Crimea, namely the Russian-occupied parts of Donbas.

Expand full comment

Does this mean we'll never get the kind of large liquid prediction markets we need because the SEC (and similar) will never approve without stronger evidence of benefit which requires approval to reach sufficient capitalization?

Expand full comment

Perhaps we should have a prediction market on whether we'll have large liquid prediction markets.

Expand full comment

Some of the most traded Manifold markets (and most of Metaculus's and Polymarket's questions AFAICT) are *very* heavily traded despite not being "real-money" markets. I think heavily-traded is a different axis of variation than real-money-ness.

Expand full comment
Jan 30·edited Jan 30

>Johnson makes a pretty reasonable guess about the cause: lots of dumb money. People use their pocketbooks to root for their favorite candidate. Normally in a functioning market smart money would take the other side and set the final price, but the high transaction costs, long waits, and regulatory limits on prediction markets mean it’s not generally worth smart money’s time to correct the mispricing; Goldman Sachs isn’t going to hire statistics PhDs to make a model just so they can bet $850 on PredictIt.

I wonder if it is only about this. Some time ago I was reading Stiglitz from the early 1990s, and while I don't have the exact citation at hand, I remember he argued that a priori there can't be a complete set of markets about everything. In the most naive form of the argument, the number of possible states and contingencies there could be a market about is larger than the number of people and the time and attention available to them to gather information and trade on it. And this was the good ol' regular economic markets he was mostly concerned with, not prediction markets, which can be about arbitrary questions.

Fuzzily extrapolating to prediction markets, it looks like prediction markets must always be wrong to some extent. For there to be an informational advantage for a knowledgeable trader to gain from, there must be a mispricing. If starting a market is very cheap and gathering good information is effortful, there eventually could be more markets than traders with information to trade. To get your market going, you'd need to compete just to get people to notice your market in the first place and then entice them to trade on it. In the best scenario, perhaps one could come up with some theoretical caps on how badly mispriced prediction markets will be, statistically, under different assumptions.

Unrelatedly, maybe there is a reason why fake-money weird internet status points work so well compared to real money: people seldom price their hobby activities the same way they price paid-for work. In real-money markets, smart money eventually should start pricing in inflation and interest rates and opportunity costs compared to investing into other possible, more profitable markets. If the market rewards you with weird internet hobby points? People will take it as a hobby and essentially donate their time and effort.

Expand full comment
Jan 30·edited Jan 30

> I think Paperclipalypse requires human extinction before 2050. It’s at 11%. But Metaculus’ direct “human extinction by 2100” market is only at 1.5%. Either I’m missing something, or something’s wrong. My guess: different populations of forecasters looking at each question.

By the resolution criteria for the first question, "If no resolution has occurred prior to 2050, the panel will meet and will vote on the option that is closest to the real world."

For example, if "only" 50% of humans will die, the answer will be Paperclipalypse, because that will be closer to the real world than other choices. But the "human extinction by 2100" question requires that all 100% of humans will die.

Expand full comment

"Rather, the interaction between AI and Homo sapiens ends about the same way that the interaction between Homo sapiens and Neanderthals ended."

I'm going to take the opportunity to rant about this, not because I think this is making this particular error in all its glory, but because expressions of it are fairly common in AI discussions and as soon as we-as-a-collective *stop doing this* we can finally start to think about what it really means for intelligent species to coexist.

Homo sapiens are not a universally superior group that violently wiped out the Neandertals* when they met. Trying to directly compare H. sapiens and Neandertal intelligence is something of a fool's errand, but there's little good case that Neandertals were anywhere near as far behind anatomically modern humans in that respect as the image of them in the cultural consciousness suggests. We were smarter, yes -- but human intelligence has always had wide error bars. The two coexisted in Europe for thousands of years -- minimum estimates about 1.5k years, probably more like double that. Modern humans outcompeted Neandertals over this period through some mix of competition for resources, interbreeding/assimilation (not quite the same, but lumped here), and violence. Neandertals were resource-hungry -- they had very meat-heavy diets -- and even all else equal would come out worse under a pure resource competition than an equivalent species with a lower demand for land-based game. (H. sapiens maybe ate more fish.) A model of interaction between two sophonts** where one rapidly and violently massacres the other because of their superior intelligence can't be one based on Neandertals.

"Sophont", then, takes you down some avenues. I think the case that cachalots ('sperm whales') are sapient in the same sense as humans, and possibly smarter than humans were at a comparable stage of evolution, is extremely strong. (See https://web.archive.org/web/20160322095648/http://cachelot.com/ for an introduction to the topic.) Humans have certainly exploited cachalots, to the point of what is from this perspective a particularly heinous genocide. When I talk about cachalot intelligence with people, this is often a tricky point for them to think about: how could humans have committed genocide against a species at least their equals? Wouldn't the whales just outsmart us? Well, they *did* fight whaling ships -- at least some whales (e.g. Mocha Dick) were able to recognize their patterns, seek them out, and attack them if they attacked -- but, no, human history shows little clear correlation between intelligence and the escaping of atrocity. Whalers exploited things like the desire of cachalot mothers to defend their children in order to wipe out whole pods; plenty of humans who wanted to cause one another harm have exploited similar desires.

We know that two sapient species coexisted for thousands of years before one went extinct under complex circumstances. There is good reason to think two coexist now, and some to suspect the one that caused more harm to the other is not the unambiguous intellectual superior. This is to say that bringing in another intelligence has very complex consequences, especially if you think that intelligence will be 'clearly superhuman', but you can't pattern a superhuman intelligence to the perceptions people have of previous sophont interactions and get anything clearly representative of reality. In particular, you can't extrapolate from the actions of H. sapiens to the actions of any given "superior species". This is one of those interesting circumstances where people try to correct from thinking other intelligent beings would be "like humans", and *still* correct to them being too humanlike. The real outcome of throwing a third intelligence into the mix is trickier, less dependent on whether that intelligence is clearly "smarter than" humans or not (because this probably doesn't play the primary role in human exploitation), and not extrapolatable from half-remembered impressions of Neandertals.

*I prefer the spelling without the h, because it expresses English pronunciation more clearly and is what the valley is called now.

**"Intelligent AI will be sapient" is another of those "your AI is still too humanlike" things. We have intelligent AI right now,*** which is not...sentient, at least. You could, if you want to get really fun, try to argue that "sapient but not sentient" is a coherent idea. At this point, it might not be impossible. But you'd need a very rigorous definition of sapience. I'm saying "sophonts" here just to address the fact that all organic Terran species past X intelligence level are sophonts.

***AGI, the letters defined like that, is here. People wrote their definitions in the 2010s of what they expected AGI to be. It's here. We have it. If you mean something more intelligent, use a different term -- it's not our fault everything turned out to be less correlated than even the "no, seriously, AGI will not be like humans" guys thought.

Expand full comment

AI would have access to tools and the ability to coordinate large-scale events, which for a variety of reasons are not available to whales, even if I grant this theory of yours about sapient whales. Homo sapiens gradually killed off neanderthals merely by being a little bit better than them in the ways you mention. AI would be quite a bit better at those things than we are, and history moves faster now. Even if AIs didn't want to actively murder us, they would outcompete us at every task, requiring fewer resources to do so, and quickly seize control of the fate of the planet and make humans irrelevant. Sitting in the equivalent of a zoo or reservation while AI runs the Earth is not that much better than being paperclipped.

Expand full comment
Jan 30·edited Jan 30

Exactly no part of this post disclaims any of these possibilities. It's about the rejoinder to possible coexistence that goes "it's never worked before, because Neandertals!", not that coexistence is certain or likely or desirable or similar terms. (I think AI is less likely to become "a bundled package of universally-superhuman-ability, agency, coherence, and desire for control" than the louder voices in this sphere tend to -- that is, I'm somewhere in the top three categories on the market, but I don't dare to settle on a particular one.)

I think that amongst takes that assume the bundled package is inevitable, that last sentence is a lot more controversial and complicated than a basic statement of "it's obviously near-equal" implies.

Expand full comment

Animals are born with a drive to compete for resources and survive. AI is not. I think the reason people default to the idea that AI will outcompete us is that it is hard to wrap your head around the idea of a highly intelligent being that does not fear or avoid death and is not competitive. It seems to me AGI could very well be no more likely to want to fight for resources or for survival than it is to crave sex with, I dunno, vacuum cleaners.

Expand full comment

> AGI, the letters defined like that, is here.

AGI is not really defined that well, but chatGPT does pass the Turing test.

Expand full comment

Has a serious formal test been attempted?

Expand full comment

I have conversations with chatGPT that are clearly good enough to have fooled me had I not known it was ChatGPT.

In fact it is smart enough to assume, if I make a mistake in a request, that a mistake was made. Understanding of context is phenomenal.

What lets it down, however, is a certain style of writing (starting everything with "Certainly!" and other quirks) and some other tells. Also, I write in full paragraphs these days, while at the start I used to itemise or list requests.

Expand full comment
Jan 30·edited Jan 30

> Will the Ukraine war end by 2025

> What am I missing here? It’s got to be higher than 8%, right?

No. Putin is not interested in peace or ceasefire. He is fully ready to fight for as long as it takes. Especially now when he hopes for a Trump victory.

A popular uprising in Russia in 2024 is almost impossible (<1%). Russians are not that tired of the war.

Neither are Ukrainians. They realize that any peace agreement with territorial concessions only means that Russia will subsequently attack them again, and are also determined to fight for as long as it takes.

This war belongs to the same category as the Soviet–Afghan War or the Vietnam War.

Expand full comment
founding

What's the definition of "war end" here?

It's definitely possible that Vladimir Putin would like a three- to six-month pause to reconstitute his forces for further offensive action, if he could be sure Ukraine wouldn't be doing the same. So a proposal saying that the war "ends" tomorrow, but that as part of the price US and European military aid to Ukraine stops immediately and completely, that could look pretty good in Moscow - but only because they're planning to break the deal themselves in 3-6 months.

Such a deal would look like slow-motion surrender in Kyiv, and rightly so. But there's at least some chance that the Western allies would fall for it, and if they tell Ukraine "take the deal or we'll cut your military aid anyway", that could force them to sign and hope for the best. I don't know whether the odds of that are greater or less than 8%, but it's the sort of thing that could cause a "war ends by 2025" bet to resolve positive on or before 1/1/2025.

Expand full comment

Trump might go for such a deal, but he will not be inaugurated before January 2025. Besides, if the choice is between "take the deal and we'll cut the aid" and "reject the deal and we'll cut the aid anyway", Zelenskyy will likely choose the latter. Also, countries like Poland, the Baltic countries, and probably the UK and some others, will not accept such a deal. In fact, speaking cynically, the ongoing war in Ukraine is good for almost everybody except Russia and Ukraine. As long as Russia is involved in Ukraine, it will not attack anybody else.

Expand full comment

The only way for Trump to cut a deal with Putin is to give Gazprom back its E.U. market share…American energy companies are overwhelmingly Republican and generally have direct access to a Republican president and would never tolerate losing the E.U. energy market share they have gained the last several years.

Expand full comment

How would it be in his power to do that?

Expand full comment

President has the power to lift sanctions like how Republicans always sanction Iran and then Democrats lift those sanctions.

Expand full comment
founding

The Biden administration(*) has been pretty clearly playing for Eternal Stalemate for the past year at least, rather than for Ukrainian victory and Russian defeat. It has also, across many fronts, revealed a strong aversion to "escalation". And it's now having trouble getting Congress to fund even stalemate-level assistance for Ukraine.

So it seems disturbingly plausible that Putin wouldn't need to wait for Trump to sell a deal that can be optimistically described as "Lock in the eternal stalemate with no further cost in blood or treasure or bad PR, and if you turn that down I'll think you're trying to escalate".

* Not necessarily Joe Biden himself, but he's highly dependent on his staff.

Expand full comment

That would probably require getting rid of Zelenskyy, because he would neither buy this nor be able to sell it to his base. He also won't go down quietly, and a forcible removal is fraught with the risk of total regime collapse.

Expand full comment
founding

The scenario where Biden cuts US aid, Europe can't and won't make up the difference, Zelensky won't take a deal, and so Ukraine is conquered in 2024, is also unfortunately plausible. Though you'd still have Ukrainian partisans holding out in the Carpathians, etc, so again we come to what is the definition of "war end" here.

Expand full comment

>In fact, speaking cynically, the ongoing war in Ukraine is good for almost everybody except Russia and Ukraine. As long as Russia is involved in Ukraine, it will not attack anybody else.

Provided that there is no misstep or surprise which makes the war go nuclear...

Expand full comment

I estimate the chance at <0.1%. A man who spent two years in a bunker during the COVID pandemic and communicated with others across a long table is not the type to start a nuclear war. Besides, why didn't he do any nuclear tests? It would scare the Western politicians without any risk of retaliation. Quite probably, he has lost his nuclear capabilities.

Expand full comment

The long table was a power play. And assuming Russia is not nuclear capable is dangerous.

Expand full comment
founding

Yeah, the idea that Russia can't still launch a thousand working thermonuclear weapons is dangerously foolish.

And the man who communicates across a long table, is a man who fears his own people might someday topple and kill him. Remember Prigozhin's march on Moscow? The odds are probably less than 10%, but quite a bit more than 0.1%, that a serious enough setback in Ukraine would have Putin convinced that a nuclear strike against not-nuclear-armed Ukraine would be less likely to get him killed by Ukraine's conspicuously-not-willing-to-fight allies, than taking the L would get him killed by his own murderous people.

Expand full comment

Meeting via a 6-meter table was not a power play, since Putin did it with both Macron and his subordinates like Shoigu or Lavrov. Besides, he did it only during the COVID pandemic. After the end of the pandemic, he met the same subordinates (Shoigu and Lavrov) face to face.

Expand full comment

Many Thanks! 0.1% seems low to me, even if Putin is averse to some classes of risks (as in the case of the bunker). Amongst other things, I'm including missteps. I agree with you that doing a nuclear test would be a way to growl in a fissile way without actually using a nuke against a real victim. As you said, he hasn't done that.

Still, there is always the fog of war. If the front becomes more changeable, day-to-day tactical shifts could become more threatening, and Putin might misjudge one and overreact. Simply the fact that there is killing in progress makes it harder to rule out false alarms - and no one's sensors are perfect.

Longer term, and across all wars, https://www.metaculus.com/questions/4779/at-least-1-nuclear-detonation-in-war-by-2050/ is currently at 25% - so call it maybe 1% per year, and I'd think that Russia v Ukraine should be, say, half of the 2024 contribution.
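As a sanity check on the "call it maybe 1% per year" conversion: under the (strong) assumption of a constant, independent annual risk, a cumulative probability P over N years implies an annual rate of 1 - (1 - P)^(1/N). A minimal sketch:

```python
# Convert a cumulative probability over an N-year horizon into a constant
# annual rate, assuming the risk each year is independent and identical.
def annual_rate(cumulative_p, years):
    return 1 - (1 - cumulative_p) ** (1 / years)

# Metaculus: 25% chance of at least one nuclear detonation in war by 2050,
# i.e. over roughly 26 years counting from 2024.
print(f"{annual_rate(0.25, 26):.2%} per year")  # roughly 1.1% per year
```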

Expand full comment
Feb 3·edited Feb 3

How long have authoritarian countries had nuclear weapons?

The USSR/Russia has had nuclear weapons for 74 years, and was democratic enough to keep the nuclear risk low for about 14 of them: 74 - 14 = 60 years.

China has had nuclear weapons for 60 years

Pakistan for 25 years

North Korea for 20 years

In total, authoritarian countries have had nuclear weapons for a combined 165 years, but never used them. Therefore, the chance of an authoritarian country using them is between 0 and 0.6% per year, depending on the prior distribution. If the distribution is uniform, it's 0.3%, but I'm pretty sure it's skewed toward 0. So if we don't know anything else, it is 0.15% per year per dictator. With Putin, it is much lower for the reasons I wrote, and also because Putin has daughters and doesn't want them to spend their lives in a bunker. Unlike with Stalin, there is no evidence of Putin's cruelty towards his daughters.
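One conventional way to turn "0 events in 165 observed years" into a per-year estimate is Laplace's rule of succession, (events + 1) / (trials + 2), which with a uniform prior lands essentially on the 0.6% upper figure quoted above. A minimal sketch (treating each country-year as an independent trial, which is itself a strong assumption):

```python
# Laplace's rule of succession: with k events observed in n trials and a
# uniform prior on the underlying probability, the posterior mean is
# (k + 1) / (n + 2).
def rule_of_succession(events, trials):
    return (events + 1) / (trials + 2)

# 0 first-use events across 165 authoritarian nuclear-state years
# (60 USSR/Russia + 60 China + 25 Pakistan + 20 North Korea).
years = 60 + 60 + 25 + 20
print(f"{rule_of_succession(0, years):.2%} per year")  # about 0.60% per year
```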


Expand full comment

How is the Ukraine war good for ”almost everybody” even cynically? It’s very expensive for European countries.

Expand full comment

It is not that expensive. How much have they paid together? Something between €50-100 billion over two years. Not that much at the scale of the EU+UK. For that small price, they buy security.

Expand full comment

If you mean they benefit by supporting Ukraine not losing, I agree, but it isn't as though they benefit from the war the way a defense industry manufacturer would.

The economic price of the war is presumably much higher than that. E.g. bailing out the gas company Uniper last year cost the German government tens of billions.

Expand full comment

There are economic upsides too. They receive a young, educated workforce which is not into terrorism, religious fundamentalism, or criminality, and is ready to learn their languages and obey their rules.

Expand full comment

> Will the Ukraine war end by 2025

> What am I missing here? It’s got to be higher than 8%, right?

Both sides still hope for some kind of decisive victory that will ultimately resolve their dispute (i.e. other side's forces are in full rout, a change of power happens, and some kind of enforcible restrictions are placed on enemy's ability to restart the war later - for an Ukrainian victory, this would have to be (almost) instant NATO membership, for a Russian one, a bunch of Russian military bases on the remaining Ukrainian territory, if any).

I really don't understand what Ukraine hopes for (direct NATO involvement?), but Russia has some hope for the drying-up of Western funding for Ukraine, and failing that, a win in the battle of attrition. But even if the US fails to provide any money or arms in 2024 at all (which is, I think, unlikely), Ukrainian forces may well survive into 2025 on reserves, EU help, and the sheer size of the country. And who knows what will happen after the US elections - Republicans might find renewed love for Ukraine once it's Trump's war to win or lose, and not Biden's (in case of a Trump win). So "peace by 2025", or even a temporary ceasefire, seems very unlikely - 8% is about right. For now, both sides still have enough people, vehicles, ammo and will to fight.

Expand full comment

>"I really don't understand what Ukraine hopes for..."

1917 redux / Hemingway bankruptcy.

Russia has been holding on far longer than projected/hoped, but there are incidents that suggest the limits of their endurance are reachable (e.g., the Black Sea fleet is only getting smaller, reliance on defective NK shells, fucking hantavirus in the trenches).

Expand full comment

Depending on who you listen to. Some say they’ve produced more shells than Europe as a whole.

Expand full comment

Wouldn't be surprising, but it may still not be enough; the Russian army is far more dependent on tube artillery fires than the Ukrainian army is becoming as systems like Storm Shadow, ATACMS, & (soon) F-16s are brought to bear.

Expand full comment

What nobody seems to realise about a Ukrainian win over Russia (or at least it's not often articulated) is that the West needs them to win but to stop at the border. I don't think the Russians will be routed - the history of Russian wars is that they start badly and end well - but if they are routed and defeated to the extent that they feel they have to use nukes, then they will. It could happen at any time during the conflict, but we probably don't want the Ukrainian army on the outskirts of Moscow. The Russians are paranoid enough. An army outside Moscow will see the bombs flying.

Expand full comment

> the history of Russian wars is that they start badly and end well

Here are wars of Russia against sovereign states since 1900.

Russo-Japanese War: defeat.

WORLD WAR I: Russian Empire ceased to exist, Russia lost significant territories and paid reparations to Germany. Total defeat.

Winter War: The USSR was surprisingly unable to conquer small Finland, although it did gain some unpopulated territory. Considering the difference in size and population (150 million vs. 4 million), it was a humiliating partial victory.

WORLD WAR II: Russia indeed started badly and ended well. But the US and UK were both on its side.

Soviet-Afghan War: USSR left in 1989, then collapsed two years later. Complete defeat.

Russian-Georgian War: Complete victory. But Russia did not start badly.

In total: 3 defeats (2 of them resulting in the breakdown of the state), 2 victories, and 1 half-victory against Finland.

Expand full comment

Want to go back further? I was thinking Napoleon on the outskirts of Moscow to the Russians on the outskirts of Paris 18 months later.

Finland was about the same time as WWII so I’m putting that all into started badly. In fact Hitler saw that war and assumed Russia would be a pushover. It wasn’t.

Nobody wins in Afghanistan.

I should perhaps have worded it differently - when they seem defeated the Russians often get stronger.

Expand full comment

Alexander won in Afghanistan, the Mongols won, various other steppe nomad groups won as well.

Expand full comment
Jan 31·edited Jan 31

While it wasn't a conventional war against a sovereign state, the most consequential Russian defeat in the last hundred years was the humiliating rout in the Cold War. Georgia and Ukraine don't make much sense without that context, which few Western commentators understand.

Expand full comment
founding

The Ukrainians are not stupid. They've had several opportunities to push large mechanized forces well into Russia in this war, and they've always stopped right at the border. Only covert operations and deniable raids into Russia, never sustained. They're not going to pull a Prigozhin and try to march on Moscow.

Expand full comment

I don't think there is even a remote chance of this. In the worst-case scenario for Russia (e.g. a full-blown revolution in Moscow), it retains enough forces to avoid invasion inside its borders. I'd even go as far as to postulate that a big retreat from the current front-line is unlikely, although the supposed new government - depending on its disposition - might want to trade Donbas (but likely not Crimea) away in peace talks.

The only dangerous variant is not just revolution/coup, but a civil war in Russia - in which case, if everyone rolls a natural one, nukes might start flying randomly.

Expand full comment
founding

Ukraine hopes for there to still be a Ukraine populated by Ukrainians in ten years. If they stop fighting before Putin is ready to stop fighting (for real, not a lame-ass "ceasefire"), there won't be. As long as they keep fighting, there's a chance that the horse will learn to sing.

Dan and Ghillie Dhu are right that there's precedent for such melodic equine vocalization.

Expand full comment

Some Ukrainians say they should now be like Israel, i.e. fight forever.

Also: I am not a Ukrainian, but here is my opinion. If they keep it up for 10-20 years, Putin will die or become gravely ill, and both Russian society and the Russian elites will be sick and tired of that never-ending war. That's how Afghanistan and Vietnam won. Besides, the Russian oil reserves may run out some day.

Expand full comment

Thing is, in 10 years there will be no one left to fight in Ukraine. Israel might be in a "forever war", but it's mostly low intensity/low casualties. And Ukraine doesn't have the almost-infinite population reserves of traditional societies like Afghanistan and Vietnam for a long "hot" war.

A very prolonged war will depopulate both countries, and might well end in strategic defeat for Russia, but Ukraine will lose even worse.

Additionally, 10 years is a very long time for Western support - governments will change, new conflicts will start elsewhere which demand more attention... It's already happening.

Expand full comment
Jan 31·edited Jan 31

There are 300K Ukrainian (150K if we subtract migration and occupied territories) and 800K Russian males reaching the age of 18 every year, so they both can sustain the war for 10-20 years if they really want to. And as long as Russians bomb their cities, torture prisoners of war, commit massacres in occupied territories, and bomb power stations to cause electricity shortages, like they did last winter, Ukrainian morale will stay high. The same was true in Vietnam, where Americans used napalm, bombed power stations, carried out the My Lai massacre, and eventually lost the war.

Expand full comment
Jan 31·edited Jan 31

America lost the war fighting with one arm behind its back - North Vietnam was off limits for a ground invasion to avoid bringing in China a la the Korean War. Russia also fights half-heartedly in some ways so far, but unlike a relatively unimportant adventure half the world away, this is pretty much an existential war for the regime claiming at least a "great power" status - if it can't enforce a "sphere of influence" directly at its borders then it's basically a paper tiger.

Expand full comment

For Russia, NATO countries are off-limits. Especially Poland. If Russia dares to attack NATO, it will be another story. Similarly, when the USSR invaded Afghanistan, Pakistan was off-limits.

I strongly doubt that anything can destroy the regime (read: Putin) as long as Putin is alive and relatively healthy. People are too scared to protest violently, Russian society is atomized, and the elites don't care for the "great power" rubbish in the first place. Russian propaganda will present even the most humiliating loss as a victory, and you know what? Many Russians will believe it.

Expand full comment

Well, NATO is off-limits only as long as America is credibly committed to defend the outer reaches of its empire. The likes of Trump continually make noises to the effect that they are unhappy with such an arrangement, so who knows what the future holds?

What Westerners generally don't understand about Russia is that Putin and Putinism are genuinely popular among a large fraction of Russians. Or, to put it differently, Putin cunningly pursues policies that he expects would be popular. Yes, plenty of Russians don't care about being a "great power", but the richest/smartest of those have emigrated long ago, and the rest are essentially defeated/apathetic. Plenty of the rest do believe that Russia is a bona-fide superpower, or at least a temporarily embarrassed one, and that it should fight tooth and nail to reestablish this. No other political platform is remotely viable in Russia currently, so anybody who expects this to end with Putin would be disappointed.

Expand full comment

FWIW, assuming they are their respective party's nominees on election day, Swift Centre forecasters put 52% on Biden winning, 48% on Trump winning, and also forecast some interesting conditional questions: https://www.swiftcentre.org/biden-trump-rematch-chances/

Expand full comment

I think it's really important to include a well-established real money market that comes from the betting world - they have very different audiences to those from the prediction world, and that affects the odds they generate.

Here's betfair's "exchange" market on the US Presidential election:

https://www.betfair.com/exchange/plus/politics/market/1.176878927

It has Trump at 2.24 (ie bet £1 get £2.24 if he wins) and Biden at 2.94 - that is to say that you can make a profit betting that one of them will win without specifying which one. 26% ROI on a bet that will run a little under a year.
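The arbitrage arithmetic here is worth making explicit: with decimal odds, if the implied probabilities (1/odds) across the outcomes you back sum to less than 1, staking 1/odds on each guarantees a profit whichever of them wins. A sketch using just the two quoted prices (the real book has more runners, and Betfair takes a commission on net winnings, which presumably accounts for the slightly lower 26% figure quoted):

```python
# Dutch-book check for decimal odds: stake 1/odds on each outcome, so the
# payout is 1 unit whichever backed outcome wins. If the implied
# probabilities (1/odds) sum to less than 1, that unit costs less than 1.
def dutch_book_roi(odds):
    total_stake = sum(1 / o for o in odds)
    return 1 / total_stake - 1  # guaranteed return on the total stake

# The two prices quoted above: Trump at 2.24, Biden at 2.94.
print(f"{dutch_book_roi([2.24, 2.94]):.1%}")  # about 27% before commission
```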

This is being driven by the fact that the primaries aren't quite over - Haley has about a 4% chance of being president per the market, weirdly they have Michelle Obama at 7% (huh?), and various others at 1-2% (Newsom, RFK Jr, Philips, Harris).

Expand full comment

Michelle Obama is a Schelling (Michelling?) Point if Biden dies. There's no heir apparent and the heir presumptive (Harris) is awful. There isn't enough time to have a primary so the Democrats might as well go back to an Obama.

Expand full comment
founding

I don't think the "dumb money" problem is solved by scale or relaxed betting limits, because that doesn't address the fact that prediction markets are inherently zero-sum and the smart money needs to get paid. Prediction is hard work; if we haven't already maxed out the unpaid-hobbyist-predictor market, we eventually will, probably through the unpaid markets like Metaculus. To do better, we're going to need a way for the smart money to see positive expected value in investing in prediction markets.

In the case of e.g. the stock market, that's easy because the stock market is positive-sum; you're trading claims on the future income of profitable enterprises. Money comes in to the market from traders making their bets, but also from companies paying their dividends. If you smartly place the right bets, you wind up owning a nice dividend stream (which you can sell for immediate $$$ if you need that). It's certainly possible for smart money to prosper by betting on dumb money, but smart money can also prosper by accurately predicting which companies will be most profitable, and we get useful signal out of stock markets.

In a prediction market, money only comes in from traders placing bets, so smart money can only prosper by way of dumb(*) money -> smart money transactions. The smart move is always to figure out what dumb money is going to do, and do it first. So everybody in the market, smart and dumb alike, is going to be making the dumb-money moves. That doesn't give us a useful signal.

You could get around this by having someone altrustically fund markets with the expectation of losing money but getting good data. Or maybe the subsidy could come from people who would profit (outside the market) from good data, but that comes with an obvious free rider problem and it would probably be more effective for them to just privately hire a team of forecasters. There are other possibilities, but I think they're all pretty low probability.

* Dumb in market-profitability terms. If you have some other reason for investing, even just "I'm rich and this is fun", making unfavorable bets may not be absolutely dumb.

Expand full comment

What about having the prediction market invest the money staked on each question in government bonds / S&P 500 / whatever, the same as banks do, and when the question is resolved, add a bonus from the accumulated interest to the winner?

Of course this might make little or no money for those running the prediction market itself.

Expand full comment

Re changes to Manifold Love, I think the best thing would be to bet on abstracted questions, e.g. "odds a heterosexual male and female, both 30 years old, report a good date" or "odds that two people who both say they like horror movies like each other", and aggregate those into results for each pair (in a similar way to how OkCupid used to use its questions for the compatibility score, but with the weightings set by the markets rather than by users choosing relative importance).

I don't imagine a lot of people are going to do substantial research into random pairs of strangers, but they probably have opinions on general trends they'd enjoy testing.
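One hypothetical way such trend-level markets could be aggregated into a per-pair score is a weighted geometric mean of the market probabilities, loosely in the spirit of OkCupid's old match percentage. Everything below (function name, weights, probabilities) is illustrative, not Manifold's actual method:

```python
import math

# Illustrative only: combine market probabilities for abstract questions
# ("both like horror movies", "both ~30 report a good date", ...) into one
# compatibility score via a weighted geometric mean. The weights and
# probabilities here are made-up placeholders, not real Manifold data.
def match_score(probs, weights):
    total_w = sum(weights)
    log_score = sum(w * math.log(p) for p, w in zip(probs, weights))
    return math.exp(log_score / total_w)

# Two trait-level markets pricing a good date at 60% and 45%:
print(f"{match_score([0.60, 0.45], [2.0, 1.0]):.0%}")
```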

Expand full comment

I don't quite understand: are these prediction markets based solely on people voting? If so, how are prediction markets valuable in situations where the real world is changed by people working towards something, or by scientific discoveries, rather than by voting in an election or something like that?

Expand full comment

You can also find markets on other topics, such as scientific discoveries; take a look at our science section:

https://futuur.com/q/category/101/science

Futuur is the only prediction market that offers both play-money and real-money predictions. It is worth noting that each of these options will have different probabilities of events occurring.

Expand full comment

ZOOWEE MAMA 🤩🤩🤩🤩🤩🤩

Expand full comment
Jan 30·edited Jan 30

I wondered if Johnson's article covered bookies, and yes it does. But I cackled at this bit in the quoted study:

"We find that, although both markets appear to be inefficient in absorbing the new information contained in the vote outcomes, the betting market seems less inefficient than the FX market. "

That reinforces my inclination that if you're going to throw money around on predictions, stick with the bookies; you'll get better and more realistic odds, and it's gambling anyway no matter how you dress it up with fancy names. If you want prediction markets to take off, you have to open them to the general idiot, and that means dumb money does drive out smart money. Stick with Betfair and Paddy Power!

As to the results of the EU referendum in the study, I think nobody quite believed the Tories would cock it up *that* badly, especially after they won on the Scottish independence referendum earlier. Nobody was expecting "Leave" to win, least of all David Cameron and his flying monkeys, but there you go.

Just as nobody expected Trump to be the Republican candidate in 2016, much less win the thing, but there you go. And nobody expected 2024 to be Trump versus Biden, Round II, but there you go. At this stage, 50/50 is the most prudent thing you can go for if you're not going to simply throw up your hands and go "well what next, we've had plague and war, when is the giant meteor of death going to strike?"

538 (new Disney rebranding!) kicked out/allowed Nate Silver to depart, so I'm not expecting much from them going forward - I also cackled at this part in the Johnson article:

"FiveThirtyEight (now 538) typically releases three election models with varying degrees of complexity — Deluxe, Classic, and Light. The score for FiveThirtyEight shown in this graphic is from their Deluxe model. If the Classic model had been chosen instead, FiveThirtyEight would have outperformed even Metaculus. Light was nearly identical in score to Deluxe."

Translation: when they stuck to Nate's original model, they did best. Now they're trying to gouge more money out of you for the fancy 'deluxe' model and it's not as good since they bounced the guy. Heh heh heh.

Expand full comment
Jan 30·edited Jan 30

Has anyone a history of how the prediction markets handled Ron DeSantis (e.g. did they give him a good chance of winning, did that change over time as he performed or didn't in the debates, etc.)?

"I haven’t been following the Trump Legal Issues Cinematic Universe, so I was surprised to see “Trump won’t go to jail” climbing so consistently."

I think it's *because* there are so many cases against him right now, it's overkill. It's reaching "arson, murder and jaywalking" levels. And the Georgia prosecution over election racketeering is being hampered by revelations that the DA has been a naughty girl (her Wikipedia page studiously neglects to mention any of the scandal-mongering but the NYT is not as fastidious):

https://www.nytimes.com/article/fani-willis-georgia-trump.html

Yet again, a cautionary tale of "if you're gonna dump your missus for the new squeeze, get the divorce out of the way *first* and try to do it on good terms instead of stiffing her on the money" 😀 His ex is spitting feathers and spilling the tea in her divorce, which is how we learned about Nathan and Fani getting cosy in personal and professional life:

https://www.theguardian.com/us-news/2024/jan/19/fani-willis-travel-paid-nathan-wade-trump-georgia-case

Expand full comment

There are a lot of sub-threads in this story. Maybe GPT4 can write the screenplay for the soap opera?

Expand full comment

It'll make a great made-for-TV (or streaming services nowadays, I guess) movie eventually!

Trump's opponents do seem to manage to hamper themselves in various ways. I tend to ignore the New York court cases because they're so blatantly partisan, even if he really did something as accused, I can't trust the verdicts - it's "give him a fair trial and then hang him". Georgia, if we want to go conspiracy theorist, is that DA Lady was knocking boots with another lawyer and gave him the job of being chief prosecutor on the Trump case - even though this is nowhere near his area of expertise - primarily so he could get big money and take her on flights and trips, as well as giving him an excuse to 'work closely' with her. Problem is, literally the day after he got the big job, he then filed for divorce and seemingly tried to hide his assets (including the $$$$$ from prosecuting the Trump case) from his wife, *if* we believe the wife, who was naturally furious and is now telling all.

So the conspiracy theory here is that there really isn't a case, just a corrupt DA who jumped on the chance of a big publicity stunt case in order to funnel state dollars to her bae and in turn, he spent those government or state dollars on their romantic trips away.

Even if you do accept the racketeering charges as legit, the state prosecution is not covering itself in glory and impartial justice for the people credibility here 😁

Expand full comment

Many Thanks!

>just a corrupt DA who jumped on the chance of a big publicity stunt case in order to funnel state dollars to her bae and in turn, he spent those government or state dollars on their romantic trips away.

Always helps a soap opera to have an infidelity subplot! And the romantic trips can include product placements!

Expand full comment

On the Trump-Biden question, The Economist got a significantly better result for the Democratic candidate by asking the Good Judgment Project: https://www.economist.com/the-world-ahead/2023/11/13/what-the-superforecasters-predict-for-major-events-in-2024 ; 65-35 Democratic-Republican odds.

Expand full comment

It may be possible for the AI Future to be both "Futurama" and "AI-Dystopia" depending on whether you're an Excelceite (a member of the neo-upper-class living in a vast pleasure-dome) or you're part of the displaced hordes of lower-class depth-grobblers who will live underground in tiered cities. I imagine, however, that most of the actual endless toiling for nuggets of neo-plasmin will be done by robots.

Expand full comment
Jan 30·edited Jan 30

I'm not personally a believer in Paperclipalypse, but in principle it could (a) eventually happen, (b) happen after 2050.

Might take until after 2050 for either of:

A) AI to become truly dangerous, e.g. if it's going to need another theoretical breakthrough

B) AI to become so embedded in society that it's just impossible to turn it off, even after it turns rogue.

This is "slow takeoff" paperclipalypse, versus fast takeoff where e.g. OpenAI creates something that kills us all tomorrow.

Expand full comment

Re B) if AI can spread virally, it will be impossible to turn off even in 2024, because the human race will never turn off all its smartphones at once.

Expand full comment

Or more realistically, smart fridges and smart thermostats. Cell phones at least make good attempts to avoid catching and spreading viruses.

Expand full comment
founding

Assuming AI can fit on a smartphone, or function through the latency and connectivity issues of being distributed across bignum smartphones.

Expand full comment

Weirder things have happened. Rome lasted a thousand years and cell phones will be 50 years old on this timetable.

Knocking out the cell towers, including knocking out the infrastructure that runs them, will end cell phones quickly with no need for individual humans to decide if they want to keep them. Knocking out the power grid will do the same. In a world where apocalypse is possible, these are minor alternatives.

Expand full comment

Re cell phone towers, that assumes cell phones will not be able to communicate peer to peer.

Re power grid, that doesn't help when solar chargers are ubiquitous.

Expand full comment

The AsteriskMag link at the very top is broken.

Expand full comment

For anyone wondering about the massive spike in "Will Bard overtake ChatGPT", it's because Bard just jumped to (almost) the top of Chatbot Arena

https://www.reddit.com/r/LocalLLaMA/comments/1abqzsj/chatbot_arena_leaderboard_updated_latest_bard/

Expand full comment

"All the things voters might blame Biden for - inflation, Ukraine, Gaza - happened either well before or well after the early-2023 period when his numbers began to decline."

...you forget the largest elephant in the room: The ongoing massive immigration under Biden. Deeply unpopular, in particular among unskilled US American workers - and among those who try to unionize unskilled US American workers. (Europeans would highly likely have voted for Orban-clones everywhere, if we had US levels of irregular migration to EU borders. You seem totally crazy to us. Strike a deal with Mexico to limit immigration from further South, the way we have done with Turkey and other transit-countries, for God's sake.)

...Add to this the medium-sized elephants: Trump will highly likely reverse the Green Shift and start fracking again, which is popular among those who feel the sting of high energy prices. Plus that he is seen as less woke-friendly. And that he is unlikely to start new wars, plus is likely to ramp up protectionism of US-based industries, in particular if China is the competitor. Biden does some of this, but Trump has more "ownership" of these issues.

The strongest argument against Trump is that he tried a coup when he lost. That hopefully matters to people when election day comes. However, if Biden does not get his act together on the substantial elephant issues, my hunch is that a majority of US voters will cross their fingers, hold their noses and take their chances with Trump II. How strong a hunch? 0.65.

Expand full comment

Betting on extinction seems like a good example of a niche where betting sites aren’t very useful. If you bet on the world ending, and it does, the people running the prediction market won’t be around to pay you out (and you probably won’t be around either).
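The loan-style workaround discussed upthread (receive x today, owe a multiple of x plus inflation in z years) does imply an extinction probability, though. A minimal sketch, with all numbers invented for illustration and `implied_doom_probability` a hypothetical helper name:

```python
# Loan-style doom bet: the doom-believer receives the principal today and
# owes payout_ratio * principal in z years if the world survives.
# The trade is fair to the lender when the expected repayment matches what
# safe assets would have returned over the same period:
#   survival_prob * payout_ratio == risk_free_growth
# so the implied probability of doom is 1 - risk_free_growth / payout_ratio.

def implied_doom_probability(payout_ratio: float, risk_free_growth: float) -> float:
    """Probability of doom implied by a fair loan-style bet.

    payout_ratio: repayment multiple owed if the world survives (e.g. 2.0).
    risk_free_growth: growth factor of a safe asset over the same horizon.
    """
    survival_prob = risk_free_growth / payout_ratio
    return 1.0 - survival_prob

# Example: doubling the stake over a period when safe assets return 1.2x
# implies the lender assigns a 40% chance the debt is never repaid.
print(round(implied_doom_probability(payout_ratio=2.0, risk_free_growth=1.2), 2))
```

Of course this treats "never repaid" as synonymous with extinction, which is exactly the practical problem raised above: ordinary default risk and counterparty collapse swamp the signal.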

Expand full comment

While PredictIt is hobbled by the $850 cap, Polymarket isn't. A potential explanatory factor for Polymarket's unreliability is that its resolutions are not determined by any reliable authority, but by a stake-weighted vote of anonymous stakeholders on a crypto blockchain. This has exactly the result you'd expect: when they can get away with it, a small group of wealthy token owners will conspire to resolve a market incorrectly in order to turn a profit. This has happened multiple times.

Expand full comment

I'm sorry, what actual evidential basis is there for Biden being even or ahead in some markets? The polls are overwhelmingly against him; history is against him (Trump has greatly overperformed his polls twice and Biden has underperformed once); several economic and foreign policy indicators are against him, and the "Keys to the White House" system is looking bad for him as well.

And what's in his favour? As Scott says, wishful thinking among left-leaning tech types, and...anything else?

It's hard to take seriously the claimed potential reliability of these markets when I can't see what hard/quantitative evidence exists that the Biden numbers could be based on.

Expand full comment

Biden already beat Trump once, would be the main point against Trump's potential victory. Polls this far out are unreliable, so it just defaults back towards 50-50.

Expand full comment

Would you be saying the same thing if Biden was ahead by 5 instead? I don't recall too many people saying that the default should be 50-50 in February of an election year, while the polls said differently.

Expand full comment
Feb 5·edited Feb 5

At this point in 2020 the primaries were still ongoing and the Democratic nominee had yet to be decided. At this point in 2016 the primaries were still ongoing and neither the Republican nor Democratic nominee had yet to be decided. Super Tuesday isn't until March. However, at this point in 2024, we know what we are headed towards.

If Biden were five points ahead then that would lean towards Biden winning, but I would discount for the time delay. e.g. if +5 suggests a 70% chance of winning the day before the election (keep in mind Biden won +5 in 2020), it would be more like a 55-60% chance now.

Expand full comment

>The real odds are 50-50.

Do you endorse this choice of words? Maybe I'm nitpicking, but I did get kinda stuck on the phrase "real odds".

Expand full comment

"a revolution that’s entirely comparable to the scientific, industrial, or information revolutions"

One of these things is not like the other, one of these things doesn't belong

https://lukemuehlhauser.com/there-was-only-one-industrial-revolution/

Expand full comment

Trump is doing well in polls vs Biden now, but that lead will predictably decline as the election gets closer and the media get around to aiming all their cannons at Trump

Expand full comment

This implies that there are media outlets who haven't been aiming all their cannons at him already. Can you name one or more groups that aren't now but would be later? I can't think of a candidate or potential candidate in my lifetime that ever got more press (mostly negative) than Trump.

Expand full comment

With the election so far away, there isn't as much coverage as there will be

Expand full comment