
> Average person can hail a self-driving car in at least one US city: 80%

> I think I nailed this.

Which city is this?


> The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic)

DeepMind and Google are both part of Alphabet Inc., OpenAI was heavily invested in by Microsoft, and Anthropic was heavily invested in by Google. How will this prediction work if the "leading big tech companies" are just using things from the "AI-only" companies?


I think you should probably add a prediction about robotics. There's going to be a lot of progress in the next 5 years.


"IE you give it $2, say "make a Star Wars / Star Trek crossover movie, 120 minutes" and (aside from copyright concerns) it can do that?"

J.J. Abrams already did this with the reboot, and while I can't speak for anyone else, it certainly was not what *I* wanted.

"AI can write poetry which I’m unable to distinguish from that of my favorite poets (Byron / Pope / Tennyson )"

Interesting selection! I wouldn't have classed Pope as a Romantic poet, but this gives me the excuse to shoehorn in a somewhat relevant joke, from a 1944 Tolkien letter:

"The conversation was pretty lively – though I cannot remember any of it now, except C.S.L.'s story of an elderly lady that he knows. (She was a student of English in the past days of Sir Walter Raleigh. At her viva she was asked: What period would you have liked to live in Miss B? In the 15th C. said she. Oh come, Miss B., wouldn't you have liked to meet the Lake poets? No, sir, I prefer the society of gentlemen. Collapse of viva.) "

The Walter Raleigh mentioned above is:

"Sir Walter Alexander Raleigh (5 September 1861 – 13 May 1922) was an English scholar, poet, and author. Raleigh was also a Cambridge Apostle.

... in 1904 [he] became the first holder of the Chair of English Literature at Oxford University and he was a fellow of Merton College, Oxford (1914–22).

...Raleigh is probably best known for the poem "Wishes of an Elderly Man, Wished at a Garden Party, June 1914":

I wish I loved the Human Race;

I wish I loved its silly face;

I wish I liked the way it walks;

I wish I liked the way it talks;

And when I'm introduced to one

I wish I thought What Jolly Fun!"


"in early 2018 the court was 5-4 Democrat"

No, in early 2018 the court had 5 Republicans: Roberts, Kennedy (soon to be replaced by Kavanaugh), Alito, Gorsuch, and Thomas.


> Looking back, in early 2018 the court was 5-4 Democrat, and one of the Republicans was John Roberts, who’s moderate and hates change.

Both of these claims are difficult to justify. The court in early 2018 had 5 justices appointed by Republican presidents (Anthony Kennedy, replaced by Kavanaugh, was appointed by Reagan; while he had a reputation as a swing justice, he went pretty far right in cases that didn't involve privacy).

Likewise, John Roberts is a moderate only in the context of the most conservative court in a century. This isn't a normative judgment, just a description of his voting record. He has consistently voted with the conservative bloc across a range of issues. The exceptions (Obamacare) spring readily to mind because they are rare.


> 6. Social justice movement appear less powerful/important in 2023 than currently: 60%

How do you figure? Cancel culture and social justice are IMO more powerful than ever, and still gaining in power -- especially as compared to 2018.


Mass adoption of self-driving cars was always a fantasy. And I’ve said that here, and elsewhere, before. The problem is that self-driving cars have to be perfect, not just very good, for legal reasons and for psychological reasons.


I think predictions about fertility rates in various countries would be of interest, as well as how technology such as AI girlfriends affects them. Similarly: what will the percentage of the population identifying as LGBT look like, will we start to see the beginning of religious/insular communities inheriting the earth, what about changes in IQ and the like, and will any of the fertility-increasing efforts have worked?


I remember thinking at the time that 1% for Roe was way too low and I'd make it closer to 50% (of course now I can't point to anything proving I thought that, though I could've sworn I made that prediction on the original thread from 2018).

In particular I'd say that even if Republicans had only gotten one of Kavanaugh and Barrett you'd still probably see Roe "substantially" overturned. Roberts didn't technically vote to overturn Roe, but I think with 5 conservatives (minus Kennedy if he were still around) on the court he wouldn't have voted to uphold any abortion restrictions. Whether you think his vote in Dobbs is consistent with "not substantially overturning Roe" is a matter of judgment - his decision would clearly allow abortions to be prohibited that were protected under Roe, but he also didn't say "and also Roe is overturned".

But even if you think Roberts's concurrence doesn't count as "substantially overturning" Roe, that wouldn't stop people from passing an even harsher law as a test case (which would have certainly happened if Kavanaugh had joined Roberts in our non-alternative timeline). To me one of the most likely versions of the "Roe isn't substantially overturned by 2023" possibility wasn't that Roe was protected but that they drag it out and it doesn't happen till 2024 or 2025.


> As far as I can tell, none of the space tourism stuff worked out and the whole field is stuck in the same annoying limbo as for the past decade and a half.

I agree that progress has been disappointingly slow, but your original prediction is more accurate than this assessment makes it seem. There are in fact two companies (infrequently) selling suborbital spaceflights (Virgin Galactic and Blue Origin), and SpaceX has launched multiple completely private orbital tourism missions.


> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion.

Depending on how you judge this, it could already be true. I'm assuming you're familiar with Replika? It's an "AI companion" app that claims 2 million active users. Until quite recently they were aggressively pushing a business model where you pay $70/month to sext with your Replika, but they recently changed course and apparently severely upset a fair number of users who were emotionally attached: https://unherd.com/thepost/replika-users-mourn-the-loss-of-their-chatbot-girlfriends/


>IDK, I don't expect a Taiwan invasion.

No number on that?

Also thanks, I needed this today in particular. Had a dream where GDP had gone up 30% in the past year and I figured we'd missed the boat on any chance to avoid AI doom.


>At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion

This seems to have already been true as of late 2022.

Replika seems to have had up to 1M DAUs, although this was before their recent changes removing a lot of romantic/NSFW functionality (which users very much did not like, and which likely led to >0 suicides and notable metric decreases). It's also notable that they use neither particularly good nor particularly large models, relying instead on a lot of hard-coding, quality UX, and continual anthropomorphic product iterations. Given what I've seen of their numbers, it's highly likely they already had >350,000 weekly active users.

Those who think AI partners will not take off strongly underestimate how lonely and isolated many people are, likely because they aren't friends with many such people (as those people have fewer friends and do not touch grass particularly often). The barriers are more that this is hard to do well, there is a bit of social stigma around it, and supporting NSFW is a huge pain across many sectors for many reasons. The last will remain true, but the other two will change pretty quickly.


It would have been interesting to see your percentages for "Trump gets impeached" and "Trump gets impeached twice" if you had included them.


> <...> I should have put this at more like 90% or at most 95%. I’m not sure I had enough information to go lower than that, <...>

Aren't these inverted? I.e. shouldn't this read "more like 10% or at least 5%. I’m not sure I had enough information to go higher than that"?


> 14. SpaceX has launched BFR to orbit: 50%

Almost? A launch looks likely in March of this year, and it was probably delayed that long by the pandemic, not just permitting and technological challenges.

> 16. SLS sends an Orion around the moon: 30%

They have! Just uncrewed.


I think you were wrong on every single thing.

A prediction is X will happen. Not there is an 80% chance that it will happen.

A prediction is Y will not happen. Not there is a 30% chance that it will not happen.


> only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them

Those standards keep changing.


> AGE OF MIRACLES AND WONDERS: We seem to be in the beginning of a slow takeoff. We should expect things to get very strange for however many years we have left before the singularity. So far the takeoff really is glacially slow (everyone talking about the blindingly fast pace of AI advances is anchored to different alternatives than I am) which just means more time to gawk at stuff. It’s going to be wild. That having been said, I don’t expect a singularity before 2028.

This prediction is so vague as to be horoscope-worthy. We are going to see something really strange and wonderful yet totally unspecified; or things will continue pretty much as usual until the Singularity; or perhaps something else will happen. Yep, that covers all the bases.

> Some big macroeconomic indicator (eg GDP, unemployment, inflation) shows a visible bump or dip as a direct effect of AI (“direct effect” excludes eg an AI-designed pandemic killing people)

Ok, so other than AI intentionally killing people (by contrast with e.g. exploding Teslas), how would we know whether any shift in a macroeconomic indicator is due to AI or not? This prediction is likewise pretty vague.

> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%

Does it have to be Gary Marcus specifically? 30% is ridiculously low if we expand the search space to all of humanity. Or just to ACX readers, even.

> AI can make a movie to your specifications: 40% short cartoon clip that kind of resembles what you want, 2% equal in quality to existing big-budget movies.

Depending on the definition of "short" and "clip", AI can already do it today ( https://www.cartoonbrew.com/tech/stable-diffusion-is-launching-an-ai-text-to-animation-tool-in-partnership-with-krikey-225919.html ). Anything of decent quality remains out of reach.

> but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them

This is worse than the current level of wokeness, so I'd argue that the current level is far from its peak.

> IDK, I don't expect a Taiwan invasion.

I do, by 2028, at about 55%. We will know more after the 2024 election.

> ECONOMICS: IDK, stocks went down a lot because of inflation, inflation seems solveable, it'll get solved, interest rates will go down, stocks will go up again?

I expect the growth of the actual productive output of the US to continue its decline. By 2028, I expect the US to be in a prolonged period of economic (and cultural) stagnation (if not decline), whether the pundits acknowledge it or not.


You write that woke has peaked and judge that social justice is weaker than it was in 2018.

I’m not sure I agree? 2020 set race related stuff on fire, and while it’s died down a bit from that it’s hardly at pre-2018 levels. DEI statements as a condition of academic employment are everywhere, the DEI industry is if anything still growing, and it feels like half of popular culture is just “rehash old IP but woker”.

I think maybe youth gender medicine is peaking, in the sense that you’re finally starting to see some mainstream not-conservative pushback on it, but Jesse Singal is making a career out of pointing out really crappy pro-gender medicine research so I wouldn’t say it’s on a downswing really yet.

I know you already noted that “peaked” doesn’t mean “not strong” and that these things take a long time to decay… but I’m still not sure what you’re looking at to judge 2023 as less “social justicy” than 2018. 2020 was a huge inflection point against that.


Wikipedia says that the Himalayas continue to rise at 5 mm/yr. Mount Everest has not peaked.


Including Manifold Markets links to your predictions was a very cool thing and I am glad you did it.


Not sure what it implies about wokeness, but Mount Everest doesn’t seem to have actually peaked yet. It’s still getting taller, if only slightly — and it might end up being outflanked on the left/west by faster-growing Nanga Parbat, which isn’t a coincidence because nothing is ever a coincidence.



Are there any specific posts on the singularity from Scott? He’s an intelligent guy but I am totally a sceptic on that. I’d like my (er) priors challenged.


> I think this is probably our best hope right now, although I usually say that about whatever I haven't yet heard Eliezer specifically explain why it will never work.

"What form does credulity take among rationalists?"


There are only two of these where I think you are wrong enough to put substantial Manifold-bucks behind it.

First, I do not expect substantially more people to be using AI as a romantic partner than using AI as a coach/therapist. The 5% chance for a serious AI therapist product seems so low I am concerned you made a typo.

Second, I think "AI can generate feature-length films" is 50-50 to happen by 2026, which is much higher than your "2% by 2028".


One thing you didn't discuss for the future, which I found to be an interesting oversight since you discussed the decline of wokeness and the Supreme Court's impact on democracy, is that it looks virtually certain that affirmative action is going to be prohibited federally over the next 5 years. (It looks like there's a >50% chance of that this summer, honestly). This is either going to make wokeness look very weak; make it look very countercultural and different; or perhaps add new fuel to the fire.

More generally, I think it's underestimated how right-wing the present Supreme Court is, and that over the long run (...IMO, absent some realignment) it is probably likelier to get more rather than less right-wing. Unless Biden replaces one of the right-wing judges, we're probably getting a Roe-tier blockbuster decision every summer for the foreseeable future. The ultimate -- which I'm not confident enough to predict -- would be a case strengthening the non-delegation doctrine, which would hugely limit the administrative state. In theory all six of the right-wing judges on the Court have been in favor of this at some point or another in their careers (and it's overlooked that we came within a hair's breadth of this happening in the Gundy case in 2019; Alito, normally one of the more-right judges, dissented because he thought that having the first effect be releasing sex offenders on a technicality was a bad look); nobody wants to predict this because nobody's confident they have the stones, but then that was the same logic as with Roe.


> anything that has to happen on a scale of seconds or minutes is fatal to AI training

Now extrapolate, and calm down


I feel like whatever is happening in your brain when you read Victorian poetry or Bing chatbot logs is radically different to what happens in my brain when I read these things.

What do you reckon the future of the housing market is? Do you think there's going to be any significant policy change in any direction? What happens if there's not? Also, are all these stories about how the American education system is collapsing and teens are basically illiterate now real or bullshit? How fucked are we if they're real? These are the future trends I'm interested in.


Disappointed there’s no calibration chart! I know the resolutions are fuzzier but would still be interested in seeing it.
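Even with fuzzy resolutions, a rough chart is easy to assemble once each prediction is reduced to a (stated probability, resolved outcome) pair. A minimal sketch, using made-up pairs rather than Scott's actual data:

```python
from collections import defaultdict

# Made-up (stated probability, resolved outcome) pairs -- illustrative only,
# not Scott's actual predictions.
predictions = [(0.6, True), (0.6, False), (0.8, True), (0.8, True), (0.95, True)]

# Group outcomes by stated probability; a calibration chart plots the
# fraction resolved true in each bucket against the stated probability.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"stated {p:.0%}: resolved true {sum(hits)}/{len(hits)}")
```

A well-calibrated predictor's 80% bucket resolves true about 80% of the time, and so on down the list.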


"Wokeness has peaked - but Mt. Everest has peaked, and that doesn't mean it's weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place)."

In other words, wokeness has won. Authoritarianism has won. If everyone with a cancellable opinion has been removed or not hired in the first place, and everyone refrains from saying cancellable opinions, in what sense do we still have a free and democratic society? Does a society where a large percentage of people, including its most influential, can't debate controversial ideas deserve to call itself a democracy?


On the culture war question. I feel like people have stopped talking about the alt right for the most part. But mostly because it's been absorbed or transfigured into Trumpism/MAGA. (Whether in continuity Trumpism-without-Trump flavor or MAGA Classic™). There's maybe some lesson in how parties integrate insurgent movements. The old Romney and Bush style GOP is dead, but instead of being replaced by the alt right as existed in 2018 we got something with the aesthetics and core politics of the alt right, but picking up traditional Republican positions on tax etc as well. Rather than the populist realignment people were predicting after Trump. (Arguably a similar thing happened to a lesser extent with the Dems, but reversed, Biden gave an old white working class face and aesthetic to some more left wing policies).

Looking back at it, I didn't expect how much the anti-trans strand would split off and become its own thing, rather than staying part of a wider alt right ideology. Though in retrospect it's obvious that a narrative focused around "groomers" and protecting children would be more palatable to median voters than "Jews will not replace us!".


On Trump: I disagree that the Republican Party has not moved on from Trump. They have moved on from him personally. They haven't shifted very far from him politically, but he was never as far from the center of gravity of the Republican Party as his opponents believed anyway. I think DeSantis fills the role you attributed to "someone like Ted Cruz" in your initial predictions: he finds a happy compromise between Trumpishness and establishment Republicanism that most people on the right are happy to shrug and go along with.

On AI science: A lot of papers in materials science theory basically come down to "We simulated X property of Y material using method M and parameters P. We got value Z, which can be compared to values Z', Z'' and Z''' computed by previous methods or measured experimentally. In conclusion, method M is pretty good." The ability to read the existing literature to figure out the most important X-Y-M combinations to simulate, and the ability to write the actual paper, are the only things stopping this whole process from being automated, but I think it could probably be done reasonably well right now.

On AI movies: 40% seems low for "a short cartoon that kind of resembles what you want", while 2% seems high for "as good as existing big-budget movies". Turning a short prompt into a short cartoon script is within GPT's reach, turning a script into a series of images is within Stable Diffusion's reach, and the only problem is replacing an unrelated series of images with a continuous and consistent animation. I'd give 95% probability someone is going to manage to pump something like this out since it's the obvious next step.


Scott seems to have left off the "just for fun" section:

> 1. I actually remember and grade these predictions publicly sometime in the year 2023: 90%

> 2. Whatever the most important trend of the next five years is, I totally miss it: 80%

> 3. At least one prediction here is horrendously wrong at the “only a market for five computers” level: 95%

1 happened, obviously. (Although the post in the subreddit probably made this a foregone conclusion)

2. If you think "the most important trend" is Covid, then I disagree that he "totally missed it". As he said, he was somewhat off in how it would play out, but just the fact of having that discussion is impressive enough to not be a total miss.

3, I think Roe overturned at 1% qualifies here.


Seems like you're good at predicting the state of tech but bad at predicting mass human behavior. Likely because your own circle of acquaintances is so unlike the median.


Not to pile on too much more when you've already admitted defeat on the politics front, but the phrase that stood out to me even though it didn't make it into a formal prediction was this:

"no, minorities are not going to start identifying as white and voting Republican en masse."

I'm a demographer who has done a little informal work on this and I disagree pretty strongly. I don't know that they're going to start voting Republican "en masse," but every prediction about the rise of minorities in the US turns heavily on Hispanics being a minority. At this point, 90% of Hispanics in Texas and Florida identify as white and their right-ward shift was a major bright spot for Republicans in 2020. In two generations, I don't believe we will see the history of Hispanics or Asians in the US much differently than we see Irish or Italians. They are simply new immigrant groups who are in the early stages of assimilation (a process Americans naively assume can happen overnight) and will be heavily relied-on conservative foot soldiers.

Blacks and Native Americans are the true minorities with staying power in the US and neither are substantially increasing in number. The really interesting thing to be seen, imho, is how new African immigrants integrate - how far they go in adopting or rejecting the identities of Generational African-Americans - and how much they obscure the distinction between immigrants and African-Americans. They've been able to hold both identities so far and take full advantage of the DEI push - to the point where the majority of Black students at Ivy League schools are not descendants of enslaved persons. But there is budding unrest over that fact.


Good on you. You and Paul Krugman are the only opinion givers who self-grade.


To help demonstrate Scott's difficult task, I will throw out a few bold predictions of my own, although with a ten-year timeframe.

1) California will not be a US state in 2033 - 40%

1A) California splits into multiple states - 15%

1B) California secedes - 20%

1C) United States completely dissolves - 5%

2) At least one country uses AI in some way to eliminate the concept of money by 2033 - 25%

3) A new religion started in the 2020s (more unique than just a new Pentecostal denomination) has at least 5 million adherents by 2033 - 25%


I love that you do this -- please keep doing it. Epistemically inspiring.


I noticed you didn't grade your meta-predictions!

"1. I actually remember and grade these predictions publicly sometime in the year 2023: 90%"

Clearly you did!

"2. Whatever the most important trend of the next five years is, I totally miss it: 80%"

I think almost everyone would agree that COVID was the most important trend of the 2018-2023 period. Most people would probably put the Ukraine War in second place. You didn't mention either of those: You brought up the possibilities of an artificial pandemic and a major Middle Eastern war, but the possibilities of a natural pandemic and a major Eastern European war didn't come up at all.

3. At least one prediction here is horrendously wrong at the “only a market for five computers” level: 95%

I think giving 1% odds to Roe v. Wade is wrong to the same extent as the "five computers" example.

That's 3/3 correct, you're very good at making predictions about your predictions!


I don't think you should have gone to 90-95% on Roe v Wade being overturned. Nine justices is too small for statistical inevitability; you've still got quirks of individual behavior at work. Even if you assume a 6-3 "Republican" SCOTUS, Republican-nominated justices have traditionally been only ~80% reliable at voting to overturn Roe v Wade. So if you take the 2018 court, add one new Trump nominee replacing a retiring conservative judge, and one new Trump nominee replacing a dead liberal judge, you're probably only at 65% for overturning Roe v. Wade. And there was no guarantee that we'd see a dead liberal judge before we got a Democratic president, so probably knock that down to 50% at most.

1% was way too low, but 90% would have been way too high.
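The ~65% figure falls out of a simple binomial count (my reconstruction of the arithmetic, not something the comment spells out): with six Republican-nominated justices each ~80% likely to vote to overturn, and the three liberal justices certain not to, overturning requires at least five of the six.

```python
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n independent events, each with
    probability p, occur (here: k of n justices voting to overturn)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 6 Republican-nominated justices at ~80% reliability, 5 votes needed.
print(round(p_at_least(5, 6, 0.8), 2))  # ≈ 0.66
```

Discounting further for the chance that no liberal seat actually opened up before a Democratic president could fill it brings this down toward the comment's 50%.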


Here’s a thought ... highly evangelistic chat bots, deployed for changing people’s opinions (most likely during an election year).

This is something that seems like it could happen, and also something that could potentially lead to a large freak out by whatever side is worse at using it?

Further thought: actively hacking/hijacking existing chat bots people have bonded with to deliver evangelistic payloads


Not bullish enough on AI. China will absolutely invade Taiwan, revanchism is what every boomer dictator does when they realize their country is floundering and hasn't reached escape velocity to become a global hegemon. 100% correct on crypto and everything else.


One quibble, perhaps addressed already: US Politics claim #12. It reads: At least one major (Brady Act level) federal gun control bill passed: 20%. You resolved it as "not having happened". I think it did, and the most important thing it did was showed that gun legislation (ANY gun legislation) can pass today's Congress - because the substantive stuff wasn't all that dramatic. See e.g.:


6/25/2022: "President Biden on Saturday signed into law the first major gun safety legislation passed by Congress in nearly 30 years..."


I see no predictions on embryo selection, genetic editing, and so on. Why?


>11. Psilocybin approved for general therapeutic use in at least one country: 30%

This did happen: Australia recently announced the laws are changing as of July. If the prediction had to come true before the beginning of 2023, though, you'd be correct.


"nobody being willing to say the spectacular achievements signify anything broader"

I may be quibbling, but this part seems clearly wrong. I don't believe such people are correct, but it's completely clear that they are out there, starting with Blake Lemoine.

Maybe they are few enough to be in the noise, and it certainly doesn't merit reducing the grade below A.

[ETA: I was too hot off the mark; I see that this is from a general paragraph without specific predictions and confidence levels. Never mind.]


>AI can make deepfake porn to your specifications (eg “so-and-so dressed in a cheerleading costume having sex on a four-poster bed with such-and-such”), 70% technically possible, 30% chance actually available to average person.

This is borderline already technically possible. If you go to the Stable Diffusion subreddit, you can see lots of high-quality images people have had the AI generate of whatever they want. With a few seconds of work, the AI will generate images to your specifications; it struggles a bit with complex specifics (but not nudity: I have to make sure I have "nude, nsfw, etc." in my negative prompts or many models will spit out nudes when you don't want them). For example, here's a screenshot of [Marilyn Monroe from when she was a Jedi in Star Wars](https://imgur.com/a/oZ5AXRn). And note, with some effort you can significantly improve images, e.g., highlight portions you want to change, try 20 different variations and see which one you like the best, etc., so this is an example of a couple of minutes of work with current AI, from someone who hasn't practiced that much yet.


I tried some poetry with Bing Chat. It helps that it can actually search for information on the poets and their style. This was my favorite, in the style of Lord Byron:

I saw her once among the festive crowd

That filled the hall with laughter and delight

She shone more bright than any star or cloud

That graced the splendid canopy of night

Her eyes were like two jewels of the deep

That sparkled with a thousand rays of fire

Her lips were like two roses in their sleep

That breathed a fragrance sweeter than a lyre

Her voice was like a music of the spheres

That charmed the ear with every word she spoke

Her smile was like a sunbeam that appears

To chase away the gloom with every stroke

I longed to speak to her, but dared not try

For fear that she would scorn my humble sigh.


V impressed by your glimpses of brilliance from 5 years ago.


You think the social justice movement is less powerful in 2023 than 2018? Was this posted from an alternate universe? (And can I apply to immigrate there?)

To pick one example out of ten million: Land Acknowledgements were the fringiest of fringy fringe ideas in 2018. Not so in 2023.

I suppose that a "movement" ceases to be a "movement" when it takes over everything, and just becomes "the way things are." But I don't think that's what you meant.

To balance out this comment a bit, I'll add something positive: your AI predictions were extremely impressive. I was skeptical, and I was wrong.


I'm a bit confused about the "AI can write poetry" point. If it's "some language model has produced at least one poem that wouldn't stand out among Romantic poems" I think my confidence in that happening is 99%. I wouldn't be surprised if there were already a couple of poems like that.

Conversely, if the claim is "there will be a reliable way to prompt AI for an original poem that wouldn't stand out among Romantic poems" then my confidence in that is fairly low, maybe 25% or less. At least for the current general-purpose machine learning models. Maybe a specialised AI could do it.

My criterion for "doesn't stand out" is something like: If you show 9 authentic, unfamiliar Romantic poems and the AI generated poem to college students of English or Literature (but who aren't specialised in poetry) then fewer than 1/4 will guess that the poem is AI generated or consider it noticeably worse than the others.
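The gap between the two confidences (99% vs. ~25%) comes down to sampling: even a model that rarely passes such a blind test will eventually produce one passing poem if you draw enough samples. A toy calculation (the 1% per-sample pass rate is my assumption, purely illustrative):

```python
# If a single generation passes the blind test with probability p, the chance
# that at least one of n independent samples passes is 1 - (1 - p)**n.
p = 0.01   # assumed per-sample pass rate (illustrative)
n = 500    # number of samples drawn
print(round(1 - (1 - p) ** n, 3))  # ≈ 0.993
```

So "at least one poem somewhere" is near-certain even for a weak model, while "reliably on demand" requires a much higher per-sample pass rate.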


I remembered you predicting that machine translation would be flawless by 2023. I went back to check, and it turns out you did predict it but edited it out shortly afterwards (adding an "if" at the beginning of the sentence).

I'm glad for the edit, but I was waiting 5 years for you to admit you were wrong about this just to see you get an "A" after all, so I'm a little frustrated. Still, your edit was early enough that this is mostly fair.

I don’t think you follow “right wing culture” very closely; if you did, you’d notice a growing fracture between Trump Republicans and DeSantis Republicans. The party has absolutely started to move on from Trump.

@Scott Alexander I am curious about the uninsured rate prediction for healthcare.

In the summer of 2022, the rate was 8%. Significantly lower than 13%.

Was the idea conditioned on Trump being able to significantly repeal Obamacare?

The way it's written "As Obamacare collapses" doesn't specify how it would collapse. Regulated access to health insurance is the norm in every other first world country and some of the not so first world ones... it seems odd to take it as a given that ours would collapse without giving an explanation why.

Was the prediction based on Republican control of the presidency and legislature? The inherent contradictions of creeping statism?

I was hoping to see more of an explanation on it than just "I was wrong about #1"

> I don’t know how I even came up with “AI can generate images and even stories to a prompt” as a possibility! I didn’t even think it was on the radar back then!

It was definitely on the radar back then! You're mis-remembering the state of AI in this period. Facebook came out with the bAbI test in 2015 and it was solved shortly afterwards. It demonstrated basic story comprehension that checked several types of mental skill. DeepMind published on an AI that could read the Daily Mail and answer questions about the stories in 2015.

By 2015 GANs were already generating random faces, albeit with obvious distortions and corruptions. By 2016 the faces were small but plausible, by 2017 celebrities had been mastered and by 2018 the tech was essentially done:


These GANs weren't generating images based on prompts, but that was a clear next direction researchers were already expressing interest in. So, sorry, I liked your predictions, but this one is not actually as impressive as you find it to be, which is interesting for what it says about our recall of the past.

> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%

Which human teenager? As in, would the cognitive reflection test questions count, if the AI answered them wrong? Certain human teenagers would answer them wrong and others wouldn't.

Also, a human teenager in which situation? I mean, I'd expect a human teenager asked how Dante Alighieri died to answer either "I dunno" or the correct answer if the question comes up in a regular conversation with their friends, but to try and pass a half-remembered guess as knowledge if taking an exam with no (or sufficiently small) penalty for wrong answers or reward for blank answers. (The last time I asked ChatGPT, it said something to the effect of "Nobody knows for sure, but probably either [the correct answer] or old age", never mind he was 56 years old.)

> At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

How seriously do they have to take this? Does it count if a large fraction of the 350,000 do it mostly just for fun, the way certain people read horoscopes in newspapers just for fun? You might want to specify "spend at least $10/month" if you want to only count people taking it at least somewhat seriously.

> Artificial biocatastrophe (worse than COVID): 5%

Do I understand correctly that "artificial" means *both* not-naturally-occurring *and* deliberate, so neither Mongols throwing plague victim corpses over the walls of Caffa nor a lab leak of something like COVID but worse would count because the former fails the "not-naturally-occurring" criterion and the latter fails the "deliberate" criterion?

"AI does philosophy: 65% chance writes a paper good enough to get accepted to a philosophy journal (doesn’t have to actually be accepted if everyone agrees this is true)"

I'd rate this much higher. I think it could have a pretty good shot at achieving that right now, and I wouldn't be very surprised if I learned it already had written a paper that had been accepted.

I think AI will be able to do a lot with research, including biological research. It may find new truths-- possibly needing to be confirmed by physical research--by finding connections that people haven't noticed. It will SHINE at detecting fraud and poor research design. People will presumably find new and better ways to commit fraud, but just going over what's already been published to find more of what's falsely believed will be important.

It seems reasonable that it might be able to run its own physical experiments. Expect a combination of successes and embarrassing failures.

> Jordan Peterson’s ability to not instantly get ostracized and destroyed signals a new era of basically decent people being able to speak out

I know it's not part of your explicit predictions, but JPete got less decent imo and is squarely in the business of fanning the culture war flames


"There will be two or three competing companies offering low-level space tourism by 2023. Prices will be in the $100,000 range for a few minutes in suborbit."

That's the one space prediction you absolutely nailed: Blue Origin and Virgin Galactic are two companies, and that's the right price range.

Love it. What do you think about AI writing (fiction/non-fiction) bestselling books? :)

You judge no. 2 in US Culture as not having happened? Why, it's happened more than just about anything else on your list.

I do think your AI prediction deserves grade A, but just so you know, to my reading, "Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking," seems strikingly false. There are many people panicking about artists potentially being out of jobs soon right now that weren't panicking about it before. That seems like something that changed people's way of thinking. (Whether it changed it in the right direction is a different question entirely.) But it wasn't something you explicitly listed as a point under your narrative and so not something that affects the grade anyway.

Thank you for sharing these!

"2. No “far-right” party in power (executive or legislative) in any of France, Germany, UK, Italy, Netherlands, Sweden, at any time: 50%"

"Far-right leader Giorgia Meloni sworn in as Italy's prime-minister" - Guardian, October 2022



Pretty decent results, and it's encouraging that even with the covid crisis the future was reasonably predictable, with the one big miss (Roe) having almost nothing to do with covid at all.

Also surprised to see "AI life coach" placed so low and so much lower than "AI parasocial romantic partner". It seems like a fairly easy thing to provide to elderly people living alone, for example. Universities might provide them to their students. Tens of millions of people have Alexa so I suspect this service would reach 350k very quickly after being introduced. Is the intent here that, in the case of therapy, the AI would be doing a role that requires a medical license?

"Wokeness has peaked" will be an interesting prediction to evaluate in the ~20% of futures where Harris/Haley/some other "woman of color" is President on 1/1/28. I suppose that's about where I would have put the five-years-out probability in 2018.

I'm proud of myself for agreeing pretty strongly with your crypto prediction back then - I remember thinking "it's money, there's NO WAY it won't end up regulated like money." I have the advantage of working with a lot of regulated companies, which gives me a pretty regulation-aware lens, I suppose.

I *did* think there would first be a huge scandal/disaster involving a crypto exchange and specifically direct harm to many consumers that prompted a wave of regulation, which is true-ish but not nearly at the profile I expected.

On your new predictions: I think of myself as a bit of an AI skeptic compared to some here, so I'm surprised to find some of your AI predictions pretty conservative. The porn generation concept seems inevitable unless AI progress slows dramatically, as does the poetry generation concept. A bit of good prompt engineering and trial and error can already meet some of the goals you have at 70% or less... It would really surprise me if we can't go from three tries to one in five years given the current pace of progress.

>At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

I'll bite on this one: At least 80% of major corporations will be using (specifically constrained) AI in lieu of deterministic chatbots for:

- explaining HR benefits to employees and assisting in enrollment

- assisting in filling out job applications

- dealing with first-line IT issues (e.g. "did you try replacing the wireless mouse's battery?")

- other HR-adjacent things (the more legal, the more human-in-the-loop)

Additionally, at least 80% of major corporations are offering a health / wellness / fitness benefit that uses a similarly constrained AI in lieu of a chatbot to tell people to lose weight / exercise more / drink more water / etc.

Probability of the above I'd put over 90% by Jan 1 2028.

I expect "Powered by OpenAI" or similar derived corporate HR / benefit / IT products to start rolling out in the next two years. We will all have a great time debating the difference between these super shackled not-chatbots and actual AI.

Intended as a reply to several “Scott is wrong about the Social Justice movement being less powerful now, it’s obviously more” comments:

I think whether you think of the social justice (or any other) movement as being more or less "powerful" than 5 years ago depends on whether you model power as "already realised changes to society" or "ability to realise future changes" - the former of which has obviously increased, but the latter of which has quite arguably decreased. SJ-style rhetoric has expanded explosively over the last decade because it has been an easy way for both organisations and broadly Blue-Tribe individuals to signal social virtue without actually sacrificing anything meaningful by making vacuous more-progressive-than-average statements.

Of course, when everybody wants to be ahead of the curve on something an arms race develops, which is how previously fringe statements like land acknowledgements rapidly become mainstream. I think what's changing now is that most of the ground that can be covered with mere words (ie. ones that do not necessarily precipitate meaningful action) has mostly been covered, and large numbers of superficially progressive organisations and individuals are starting to have to reckon with a situation where continuing to signal above-average-progressiveness has exponentially increasing costs, and tamping their enthusiasm down accordingly. How many organisations with a Land Acknowledgement will ever actually cede any of it to today's indigenous remnants? My money is on <10%.

Relatedly, many (myself included) see 2020 as the year the SJ 'wave' broke, primarily due to the summer's BLM protests/riots and how they were covered. Mainstream progressives continued their rhetorical advance into previously fringe territory by campaigning for police abolition, 'justified' mob violence, increasingly strong notions of racial identitarianism, and an inconsistent notion of what COVID controls were acceptable, and it finally became real enough that while there wasn't much visible backlash from within the Blue Tribe, a lot of moderates quietly said to themselves "oh shit, this stuff is actually real now" and decided not to advance any further. Now it's 2.5 years later and even watered down versions of police de-funding have gone nowhere, mainstream media corps have mostly stepped back from highlighting contentious racial identity issues (outside of some much safer pride-esque flag-waving in February), and more generally progressive rhetoric seems to be losing its ability to sweep previously fringe positions into the mainstream. The one arguable exception to this is on trans issues, but even that seems to be stalling out somewhat, and it’s worth noting that even if it does succeed in growing in mainstream appeal, it’s pretty niche for a civil rights fight and as such falls more into the ‘I can keep signalling on this and it won’t affect me’ category.

Some predictions I'd like to see discussed:

Will AI displace a significant number (>10%) of professional or menial office jobs in 5 years (e.g. in accounting, finance, IT, web development, therapy)?

Will Netflix be determined a loser in the streaming wars?

Will YIMBY zoning changes take effect in most major cities?

Will we see a major antitrust case in 5 years?

Will there be significant police reform?

> We definitely have the technology to do the polygenic score thing. I think impute.me might provide the service I predicted, but if so, it’s made exactly zero waves - not even at the same “somewhat known among tech-literate people” level as 23andMe. From a technical point of view this was a good prediction; from a social point of view I was completely off in thinking anyone would care.

I'm not surprised. You could always go and download your 23andMe data and run plink on it. So impute.me wasn't offering anything genuinely new: various services were doing that anyway. Nor does the history of tech show that there is always a backlash: the most relevant precedent, IVF, was accompanied by extreme dire warnings of doom, and attempts to ban it, and then someone went ahead and did it, and everyone shrugged. So it is not too surprising that the same thing happened with embryo selection: I found it hilarious how Aurea was announced and, after all that heavy breathing and talk of how Doudna was waking up from nightmares about Adolf Hitler, then no major media outlet could be bothered to report on it for like half a year (I think Bloomberg, of all places, did the first real article?).

> The polygenic embryo selection product exists and is available through LifeView. I can’t remember whether I knew about them in 2018 or whether this was a good prediction.

For those a little confused, 'LifeView' is the company formerly known as 'Genomic Prediction', co-founded by Steve Hsu. GenPred launched publicly around October 2018, so February 2018 isn't too long before and Scott might've been hearing about it before. Another startup, 'Orchid', began offering PGS embryo testing (possibly around mid-2021?) but I don't know how many they have done.

I think your economic predictions look somewhat worse than you think in hindsight, in the sense of what you chose to make predictions about and spend your energy thinking about, moreso than the specific predictions you made. You devoted a LOT of space in the econ section to cryptocurrency, a thing which ultimately didn't matter much (even though you did correctly predict that it wouldn't matter much!). This wouldn't look as weird if it weren't for the fact that the economic stories of the past five years have been far more interesting and dramatic than I think the version of you five years ago would have expected.

None of this is meant as criticism of you specifically or anything like that, of course. Knowing what sorts of stories will matter is even harder than making specific predictions about a particular thing. I think it's an interesting question about these types of predictions, though: how do you treat the lack of focus or missing predictions when you are attempting to calibrate?

For reference, some things that reasonably could have been in your predictions that were not include inflation, the economic decline of the UK, the USA's relatively strong performance relative to its allies, the return of protectionist economics, supply chain issues re: batteries, chips, etc., housing supply/zoning issues, etc.

Some other areas that you missed predictions on include climate change (what % probability would you have given the US passing major legislation aimed at reducing its emissions by more than 25%?), Trump attempting to reject election results showing he lost, just generally the US congress becoming far more legislatively productive than it had been previously (including significant bipartisan legislation!), Afghanistan withdrawal

Typo police: "Bitcoin will do find" -> "Bitcoin will do fine"

I think there's a poor framing on a pair of the predictions:

> At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 33%

Note that anyone who talks to Replika as a romantic companion would fulfil both of these; it IS advertised as a therapist or coach, and it does have that relationship option in addition to the friend/romance partner options. And Replika's website (only one way of interacting with the chatbot; can also be done through an app) currently has 1.1 million monthly unique visitors. I wouldn't be surprised if both of these should evaluate to true *right now* as written. Hard to judge what percent of that 1.1 million + app users view the bot as a romantic companion, but that really is most of their advertising.

In "Whither Tartaria", you wrote that you don't have taste (and I wondered what you meant by that), so I am surprised that you have favorite Romantic poets.

The amount of material to train on in order to imitate a specific Romantic poet isn't very much, so I am much more skeptical about that than you are.

"I think there will be more of a movement to ban or restrict AI. I think people worried about x-risks (like myself) will have to make weird decisions about how and whether to ally with communists and other people I would usually dislike"

Can someone explain this part? I'm not sure how communists (China?) are meant to help with x-risk. Does it mean "ally with China to restrict AI"?

250 comments in and nobody taking issue with

"4. Average person can buy a self-driving car for less than $100,000:"

Implying that the "average person" could drop $100,000 on anything smaller than a house says something about the demographics of the people/chatbots here. Especially considering the inflation rate in 2018.

Since others have addressed that your rating of your social justice prediction is at best questionable and at worst hilariously wrong, I'll take up this one:

>Religion will continue to retreat from US public life. As it becomes less important, mainstream society will treat it as less of an outgroup and more of a fargroup. Everyone will assume Christians have some sort of vague spiritual wisdom, much like Buddhists do.

While that's technically not a prediction, I have *no clue* how you get the idea that "mainstream society" is treating Christianity as more of a fargroup than an outgroup. They're still treated as hateful bigots, likely more so than at any point in a lifetime in light of Dobbs, and with fearmongering of "Christian nationalism" (which ultimately boils down to Christians having any political opinion at all except retreat), they are *most certainly not* treated as a fargroup. Christians are firmly still in the outgroup camp.

Maybe San Francisco atheists are slightly less hateful towards Christians than they were back in the Internet Atheist heyday, but they are not representative of the mainstream.

Spravato (esketamine) was approved by the FDA (my wife is a neuroscientist and said this meets your glutamatergic antidepressant prediction). She says there's lots of action in this space (Sage is working on several drugs) - so yeah!

> Last week was the tenth anniversary of my old blog (I accept your congratulations)

Congratulations! Your blog is one of the few places with almost-consistently good and well-paced statements (except for that "child prison" outburst when reviewing "The Cult of Smart", what the hell, seriously: if I wanted angry-but-probably-correct perspectives, I'd go to theangrygm.com; you weren't that visibly angry about Scott-Aaronson-picked-on-by-feminists, of all things).

>The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic) in the quality of their AI products: (25%/50%/25%)

Bracketing here is weird; I presume the probability variants refer to the second triplet of variants but it reads as if it could just as well refer to each of the other two or that they're in some weird conjoined relation (i.e. Google is clearly ahead DeepMind/Apple approximately caught up to OpenAI/Meta is clearly still behind Anthropic).

>I think Xi is a significant change towards traditional dictatorship which doesn't work as well

And you think this doesn't _increase_ chances of Taiwan invasion? Even after the last year's lesson that, basically, dictators don't always do the sensible-for-their-personal-goals thing even this century?

>I expect Ukraine and Russia to figure out some unsatisfying stalemate before 2028

How? Just… how? They both seem politically stuck in a situation that will not be, well, satisfied with an unsatisfying stalemate.

For "US Politics," is the 20% forecast of a state de facto decriminalizing hallucinogens "having happened" stating that a state did or didn't de facto decriminalize hallucinogens? That's poor wording.

Re. "Kamala Harris didn’t even get close to becoming president, although Biden made the extremely predictable mistake of making her VP." -- I think being the VP of an octogenarian should count as getting close to becoming President.

A prediction that's on my mind right now is "A Condorcet loser will win the 2024 US presidential election".

If someone builds a question bank of 100 questions where teenagers score 95% on average, and the LLM gets 97% correct, does that mean Gary Marcus wins? Also, does Gary Marcus get to choose the exact wording, adversarially against a specific LLM (so he can maybe exploit a SolidGoldMagikarp bug unknowingly, as long as he can find a decent number of variants)? Or does Gary Marcus fail if Scott re-words the questions and the LLM gets them right half the time?

I consider Gary Marcus likely to succeed at finding errors that he can replicate with several variants, but I also consider Scott likely to be able to re-word those queries to avoid the errors. I suspect that if some big academic group writes a list of questions and keeps them in an icebox, future LLMs will eventually outperform teenagers but not score 100%.

> What the unofficial version of health care will be remains to be seen.

That would be Gofundme.

I saw this opening paragraph:

> AI will be marked by various spectacular achievements, plus nobody being willing to say the spectacular achievements signify anything broader. AI will beat humans at progressively more complicated games, and we will hear how games are totally different from real life and this is just a cool parlor trick. If AI translation becomes flawless outstanding, we will hear how language is just a formal system that can be brute-forced without understanding. If AI can generate images and even stories to a prompt, everyone will agree this is totally different from real art or storytelling. Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking.

And I thought, "what a naive person from 2018, that's the total opposite of what happened!" and yet Scott graded it as correct! It seems to me that 2022 is the year that people finally agreed that AI is *not* just a cool parlor trick, and it really *does* signify something broader. There's of course still a lot of people claiming that it is (but you also still find people saying the same thing about electoral government nearly 250 years later). There's also not yet a consensus on *what* the broader thing it signifies is, but I think that a major transition just happened that Scott of 2018 had predicted wouldn't occur until late 2023 or later.

I looked up Auvelity, excited to learn about a new development in pharmacology, only to learn that it's wellbutrin + cough syrup (dextromethorphan).

> I think these were boring cowardly nothing-ever-happens predictions that mostly came true.

I can't really agree; saying there's a 40% chance over five years that the crown prince of Saudi Arabia will be deposed is most certainly not a boring, nothing-ever-happens prediction!

I'm confused by Scott's predictions for AI coach vs AI "romantic companion." Thought for sure it was a typo

The coach feels much more reasonable to me, and more socially acceptable. It also is the sort of thing which I can imagine conferring real advantages, like there's a chance such a thing could increase productivity and income so even skeptics have incentive to try it. Manifold rates both possibilities as about 62%.

Yet Scott rates AI coaches as incredibly unlikely (5%, 19:1 odds), and AI romantic partners as almost a coin flip (33%, 2:1 odds). If we use surprise = -log(p), Scott would find AI coaches 2.7 times as surprising. Why the discrepancy?
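The odds and surprise figures in this comment can be checked directly; a minimal sketch (the probabilities are the two quoted predictions, and the surprise ratio comes out the same whatever log base is used):

```python
import math

# Scott's stated probabilities for the two 2028 predictions
p_coach = 0.05    # AI therapist/coach reaches 350k monthly users
p_partner = 0.33  # AI romantic companion reaches 350k weekly users

# odds against: (1 - p) / p
print(f"{(1 - p_coach) / p_coach:.0f}:1 and {(1 - p_partner) / p_partner:.0f}:1")

# surprise (self-information) is -log(p); the base cancels in the ratio
ratio = math.log(p_coach) / math.log(p_partner)
print(f"surprise ratio: {ratio:.1f}")  # → 2.7
```

So 5% is indeed 19:1 against, 33% is roughly 2:1 against, and the 2.7x surprise figure checks out.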

Very interesting post...my favourite from this Substack for a long time! Overall, I think that Scott did quite well on most of his predictions in 2018, and I was quite impressed by both the AI and pandemic predictions (I think that it's still not clear if there was a lab-leak or not, right?). The political predictions were a bit more off, but overall I'd still argue that it's not clear if Republicans or Democrats are less "unified"...probably Democrats, though both are too big for their own good IMO (and the US, just like Canada or the UK, would benefit from proportional representation of some form)…

One thing I've noticed regarding the predictions for 2028 is that the Manifold Market predictions seem much more bullish than Scott's predictions regarding AI in the next 5 years...I would be even more careful than Scott here, even though I do realise that AI is now more advanced than most lay people (like me) would have thought just a few years ago...

Henry Kissinger, of all people, shares a byline with two other writers on an essay on ChatGPT in the February 25-26 edition of the WSJ that really caught my attention: 'Because ChatGPT is designed to answer questions, it sometimes makes up facts to provide a seemingly coherent answer. That phenomenon is known among AI researchers as "hallucination" or "stochastic parroting," in which an AI strings together phrases that look real to a human reader but have no basis in fact.'

So ChatGPT not only 'researches' its data from questionable material broadcast without qualification on the Internet, but it fabricates its own 'facts' to support a clever, pleasing conclusion for the lazy, gullible human. The only researchers or writers ChatGPT will be replacing are cable TV news producers and newspaper editors, and James Patterson. No worries; I doubt anyone will notice.
