483 Comments
Comment deleted

Metaculus has this: https://www.metaculus.com/organization/public-figures/ e.g. https://www.metaculus.com/public-figure/joe-biden/ (great site overall, by the way, and it gets better all the time). This feature is experimental, though, and there is no aggregated accuracy score: public figures don't usually make predictions with percentages, and they don't all predict the same questions, so assigning a score would not be meaningful.

Comment deleted
Author · Feb 21, 2023 (edited)

Major warning (50% of a ban) for this comment: it's irrelevant, controversial, and asserts something that requires an explanation without providing one.

Anyone who takes the bait will have their comments deleted.


> Average person can hail a self-driving car in at least one US city: 80%

> I think I nailed this.

which city is this?


New Orleans probably


San Francisco, Phoenix


Does anyone know why these cities in particular? I'm not really familiar with Phoenix, but driving in San Francisco doesn't seem like an easier problem to solve than driving in most other US cities, let alone on a freeway.


They have to be generally sunny and snow-free, with fairly tech-friendly regulations.


Phoenix is ideal testing conditions (no snow, no rain, no pedestrians), while San Francisco is where all the engineers tend to live anyway.


I'm gonna quibble with Phoenix. There are a couple of very limited programs that operate in small parts of the metro area that you can sign up for and occasionally use. I would not say that the average tech-savvy person can hail a self-driving car in the way that they can get an Uber or Lyft.


I tend to agree, although I think you're overstating it. Waymo One tells me that I can hail a vehicle 24/7 and that on most occasions there's no-one in the driver's seat, but it operates "within parts of the Phoenix metropolitan area, including Downtown Phoenix and parts of Chandler, Tempe, Mesa and Gilbert." This means it won't work for a large class of rides, e.g. someone who wants to travel home to the suburbs from the downtown area.

As written, the prediction is true, but as far as I can remember, I understood it to mean that one would be able to hail a self-driving car to go to most ordinary destinations within the city, in the same way that existing ride-hailing apps work.


That was my impression too - I've heard of people surprisedly getting in a self-driving car that they hailed, but I haven't heard of anyone actually having an ordinary option of hailing a self-driving car.


San Francisco is both near to a lot of the tech companies and is arguably some of the hardest driving in the country, ignoring weather. *Every* weird traffic thing happens somewhere in SF, it's an absolute nightmare. As someone who's driven in both SF and NYC, I honestly think SF is worse, and that makes it absolutely great for testing with someone sitting in the car who can hit an abort button.

If you can drive in SF you can drive anywhere in the US (unless it's snowing.)

Phoenix is the exact opposite, and is perfect for initial tests *without* an abort button.

Feb 21, 2023·edited Feb 21, 2023

I’m not entirely sure I’d grade this as true for San Francisco. Some people in certain parts of SF can hail a self-driving car, but the average person can’t, and the cars don’t serve the densest and busiest areas of the city, like downtown.


Within those limitations, are the cars now truly autonomous? Or is it still the case that there always needs to be a human "test driver" either inside the car or trailing behind it in a separate car, ready to intervene if necessary?


Within those limitations, the cars are truly autonomous. They can still dispatch help to the cars but the cars aren't literally being tailed.


As far as I know this prediction is false.


What makes Waymo's deployment in Phoenix not fulfill the terms of the prediction?


The language of the prediction is ambiguous, so maybe it should count. My reading of it is that Waymo serving only a tiny section of downtown shouldn't count.


I think this is one of those kinda edge cases where if something were on the line for whether or not the prediction was technically accurate, it'd be a hard call. But nothing is, and we don't need to worry about exactly what's on what side of which line.

What is clear is that artificial intelligence moved less quickly than Scott was anticipating in terms of autonomous vehicles. We maybe (depending largely on what you describe as "an average person") got just barely over the line of his 80% prediction. He thought there was a 30% chance that you'd be able to hail an autonomous vehicle in "at least 5 of the 10 largest US cities," which we aren't even *close* to. Presumably, if you had zoomed way in on this and said, "Okay, so you think there's an 80% chance that it's rolled out somewhere, and a 30% chance that it's reached half of all big cities, so there must be some intermediate set of probabilities for it being partially rolled out," all of those intermediate predictions would've come up false. And it's not as if we almost got there -- it won't be rolled out to 5 of 10 cities in 2024 or 2025 either.

And he thought there was a 10% chance that at least 5% of all US trucking had been replaced.

If you take that as an approximation of Scott's mental probability distribution of "how far along autonomous vehicles will be," then it seems clear we're well short of his average; the mean of his distribution was probably about one standard deviation off from the truth.
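
One crude way to put a number on that, purely as a back-of-the-envelope exercise (the outcome coding below is my assumption about how a generous grader would score 2023, not anything from the post):

```python
# Hedged sketch: treat the two published robotaxi numbers as cumulative
# probabilities P(hailable in at least k of the largest US cities by 2023)
# and Brier-score them against an assumed, generous reading of reality
# (the 1-city threshold met, the 5-city threshold missed).
predictions = {1: 0.80, 5: 0.30}
outcomes = {1: 1, 5: 0}  # assumption: k=1 resolved yes, k=5 resolved no

brier = sum((p - outcomes[k]) ** 2 for k, p in predictions.items()) / len(predictions)
print(f"Mean Brier score over the two thresholds: {brier:.3f}")
# ~0.065 here; 0.0 would be perfect foresight, 0.25 is what flat 50/50 guessing scores.
```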


It depends on your model for what the constraints on rolling out autonomous taxis are.

I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere, and if not, then it's limited in the way we see.

I also don't think that the tangle had a 30% chance of resolving, but I have the benefit of hindsight.


If he meant, "30% chance of autonomous taxis being available in 10/10 biggest US cities," or "9/10 with one inexplicable holdout for no reason," then I assume that's what he would've said.

I agree it's not a linear ramp where the next city is just as hard as the previous city, but this was only a 5-year prediction window! It is clearly the case that there are in fact logistical and local issues with rollout in each city.


“ I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere”

I don’t agree for two reasons. First, the current paradigm involves building a painstaking map of every drivable area within a geographic location, much more detailed than Google Maps. Even if you build a map covering all of Phoenix, you can’t make some minor tweaks and transfer the technology to Houston. You have to build a brand new map from scratch.

Second, the current locations are ideal conditions - well maintained roads with clear signage, good weather, low to medium traffic and well behaved drivers. Not generalizable to, say, Manhattan.

I agree that it’s going to be faster to expand the service to, say, San Bernardino than it was to create the first service in Phoenix. But it’s not going to be trivial to go from most of Phoenix to most of the US.

I would wager that in 8 years, a self driving car service will not be able to cover 50% of Manhattan during the winter.


I haven't seen any news report that I would count as "You can hail a self-driving car in city X." Now my criterion for a self-driving car is strict: "You can get from this point to 99% of the destinations people drive to from this point without touching the controls." To some degree, the limited jurisdictions that allow self-driving contribute to this. But my impression is that self-driving technology is at the "90% of the distance of 90% of the trips" level, not the "all of the distance of 99% of the trips" level that would e.g. give blind people the same mobility as the sighted.
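
As a rough illustration of why that gap matters (the trip counts and reliability levels here are assumed purely for illustration, not measured):

```python
# Assumed numbers for illustration only: a rider taking 2 trips/day, where
# "handled" is the fraction of trips the car completes without a human
# touching the controls, and failures are treated as independent.
trips_per_day = 2
for handled in (0.90, 0.99, 0.999):
    takeovers_per_year = trips_per_day * 365 * (1 - handled)
    print(f"{handled:.1%} of trips handled -> ~{takeovers_per_year:.0f} takeovers per year")
```

At the "90% of trips" level a daily rider hits a failure most weeks; only somewhere past 99.9% does it start to look like the mobility a sighted driver already has.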


> The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic)

DeepMind and Google are both part of Alphabet Inc., OpenAI was heavily invested in by Microsoft, and Anthropic was heavily invested in by Google. How will this prediction work if the "leading big tech companies" are just using things from the "AI-only" companies?


Buying the competition counts.


Counts as which - the big tech being ahead or the AI company being ahead?


Both!

The problem is that the question isn’t well framed—most big tech companies have the edge that they have not just because they innovate but because they can identify and buy out innovation and talent.

My point is that you really can’t, being intellectually honest, cut out Big Tech because they ‘only own’ or ‘only invest in’ the AI firms. That counts as Big Tech doing it. But, being fair, it also counts as the AI firm having the advantage. It just isn’t a question that yields insight; it betrays a misunderstanding of what investment is for and why it’s beneficial.


I think you should probably add a prediction about robotics. There's going to be a lot of progress in the next 5 years.


Interesting. What are the big challenges in robotics that you think we're on the verge of solving?


Hardware is enormously harder than people think. Software may improve robotics in a sense, but as soon as you need physical interaction you get crushed by physics.

Just think of Moore's law vs battery capacity. Transistor count (and software complexity) has increased in the range of 100,000-100,000,000x in the same amount of time we needed to double battery capacity (while losing lifespan and turning them into bombs).

I've thought about the topic a lot and I think there are reasonable odds that growing muscle in vats ends up being easier than building motors that can match biological performance.

Self-driving cars also reveal why robotics are so damn difficult: you need 99.99999999...% reliability and mistakes are expensive. I spent some time working on UAVs and ANY glitch will smash your hardware into pieces. Funny the first time, but imagine if your computer self-destructed every time the compiler detected a syntax error.
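
To unpack the scale of that comparison (the improvement factors are the ones quoted above; the ~30-year span for a single battery-capacity doubling is my rough assumption, used only to put both on the same footing):

```python
import math

# Arithmetic on the quoted claim, not measured data: how many doublings is a
# 100,000x-100,000,000x gain, and what annual rate does it imply over the same
# (assumed ~30-year) span in which battery capacity doubled once?
years = 30
for factor in (1e5, 1e8):
    doublings = math.log2(factor)
    annual = factor ** (1 / years) - 1
    print(f"{factor:.0e}x = {doublings:.0f} doublings, ~{annual:.0%}/year over {years} years")

battery_annual = 2 ** (1 / years) - 1
print(f"one doubling of battery capacity over {years} years = ~{battery_annual:.1%}/year")
```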


The massive improvements in AI, especially in the field of generative AI, can have direct effects on how a robot can think 'like a human'. So maybe afterwards, when the AI hype has settled down, we could have a new 'humanly robot' hype. What do you think?


The funny thing about the humanoid robot area is that it's really attractive from a PR and public awareness standpoint, but sort of dubious in terms of ROI. The point being that since we already have humans to do humanoid tasks, a bot in that form has to be competitive (cheaper, better, whatever) compared to the existing human pool.

Useful robots that *aren't at all humanoid* seem like a better investment, they just don't get as much attention...


That makes sense!


I think you'd do better with a mini-centaur form. Say something about the size of a dog. (There are reasons different dog breeds are different sizes.) You could argue for wheels or treads instead of legs, but unless speed is your main concern, legs have advantages. Handling stairs fairly well is one reason. Boston Dynamics already has the dog body, but attaching the arms is difficult, as you need to brace them to use any force. Now imagine trying to get that robot to thread a needle, or remove a frozen bolt. Last I heard there was lots of work needed on the hands.

Now consider what a robot nursing assistant should look like. You don't want to hit ANYBODY's uncanny valley. I think the face should look anime, and definitely non-human but friendly. But if the price were right, there'd be a big market for robot nursing assistants.


Kinda agree though. After reading the points made by Boris, I think miniature robots that favour utility and function over actually 'looking like a humanoid robot' could be the best place for AI to thrive.


Which of:

Boston Dynamics robots,

A CNC Mill,

A smart thermostat,

An autonomous forklift in a warehouse,

A DH-82B "Queen Bee" military UAV,

are we counting under 'robotics'?


The Boston Dynamics robot seems like the one, in my opinion.

While framing the question, I said 'humanly robot', which I think was a little misleading.


"IE you give it $2, say "make a Star Wars / Star Trek crossover movie, 120 minutes" and (aside from copyright concerns) it can do that?"

J.J. Abrams already did this with the reboot, and while I can't speak for anyone else, it certainly was not what *I* wanted.

"AI can write poetry which I’m unable to distinguish from that of my favorite poets (Byron / Pope / Tennyson )"

Interesting selection! I wouldn't have classed Pope as a Romantic poet, but this gives me the excuse to shoehorn in a somewhat relevant joke, from a 1944 Tolkien letter:

"The conversation was pretty lively – though I cannot remember any of it now, except C.S.L.'s story of an elderly lady that he knows. (She was a student of English in the past days of Sir Walter Raleigh. At her viva she was asked: What period would you have liked to live in Miss B? In the 15th C. said she. Oh come, Miss B., wouldn't you have liked to meet the Lake poets? No, sir, I prefer the society of gentlemen. Collapse of viva.) "

The Walter Raleigh mentioned above is:

"Sir Walter Alexander Raleigh (5 September 1861 – 13 May 1922) was an English scholar, poet, and author. Raleigh was also a Cambridge Apostle.

... in 1904 [he] became the first holder of the Chair of English Literature at Oxford University and he was a fellow of Merton College, Oxford (1914–22).

...Raleigh is probably best known for the poem "Wishes of an Elderly Man, Wished at a Garden Party, June 1914":

I wish I loved the Human Race;

I wish I loved its silly face;

I wish I liked the way it walks;

I wish I liked the way it talks;

And when I'm introduced to one

I wish I thought What Jolly Fun!"


I wonder if Professor Raleigh's knighthood was helped along by whoever was responsible for the honors list for George V liking the idea of creating another Sir Walter.


Is there any context for this, other than the films being bad? I assume it's a joke, but the comment is very deadpan so I wanted to check.

I remember "computer screenplay assistant" is a thing that was zeitgeisty for about a week, and probably misinterpreted by much of the public

Feb 23, 2023·edited Feb 23, 2023

"Star Trek reboot as Star Wars showreel by JJ Abrams" was a rant I did on a site that shall not be named back then, complete with side-by-side comparisons of things like the Starfleet Academy cadet uniforms and Star Wars generic troopers uniforms, but that was fourteen years ago and I haven't kept most of it (I do have the part where I defended James Tiberius Kirk as not the pop culture skirt-chaser Abrams interpreted him as, in one interview where he said "Nobody’s going to force Kirk to be a romantic and settle down. That would feel forced and silly. Kirk’s a player”. As we said at the time, "Abrams is not of the body!")

But yeah, there was so much in the visual design, not to say the reboot universe, that was moved to be more like Star Wars than Trek, including the phasers being revamped to be more like blasters. Abrams did himself no favours with me with his attitude to Trek, and it was no secret he was a Star Wars fan, not a Trekkie, and dearly longed to get the directing gig for the new Star Wars movies. That's why I think he did do the reboot as, literally, a showreel to demonstrate he could do a big-budget movie set in an established universe and reboot it successfully.

About the only time I am in agreement with John Stewart 😁

https://www.youtube.com/watch?v=-mSM5BCUhZ4


Thank you for the John Stewart clip; that was delightful. I would have loved to see the full version of your rant at the time.

Feb 25, 2023·edited Feb 25, 2023

I think there was a lot covered in it; the particular Tumblr (yes, that was the hellsite) discussion group that got going was mostly critical. Not that the reboot as such was a bad idea, but that they wasted a lot of opportunity.

How the reboot timeline got created was hooey, but Trek has always had a lot of hooey involved; that's how we got the term "technobabble" after all. The important difference is that Trek is science fiction (it does try to be grounded in 'vaguely plausible if we stretch it a lot and take the most out there speculative theories current') while Star Wars is science fantasy (the desert planet setting, the 'hokey religion versus blasters', the mystic order of the Jedi, the Force, midichlorians and the rest of it - it's a pulp skiffy influence at heart and none the worse for it).

The problem comes when (a) you're a bunch of untalented hacks and (b) try to force the contents of one universe into the mould of the other. Abrams went for Kewl Shots (the complaints about lens flare and how the bridge of the "Enterprise" looked like an Apple store) and very clearly modelled much of his reboot more along the lines of Star Wars (his love) than established Trek canon.

There were a lot of "left on the cutting room floor" scenes floating around at the time, both good and bad; a whole chunk of this Kirk's childhood backstory was cut, which would have contributed to understanding his character and why he was the way we see him later (the rebel without a clue). Other bits got cut and it was for the better, e.g. what Abrams and company seemingly thought was a *hilarious* bit about "they all look the same" when it came to Kirk and the Orion women cadets - he romances one named Gaila for ulterior motives, to get her to run the Kobayashi Maru hack when he's taking the test. Later, he goes to apologise to her for using her (since this got her into trouble, quite naturally) and - here's the joke, hold on to your sides! - he apologises to the *wrong* woman! Because all those green slave girls look the same, you know! And he doesn't really care about Gaila so he can't tell her apart from any other random Orion cadet!

That Jim Kirk, what a card 😐

Reboot McCoy was the best thing in it, thank you Karl Urban. I could go on about other things - oh why the heck not? The Spock/Uhura romance was unexpected, and the end result was that it looked like Abrams couldn't think of a way to fit Uhura in as anything other than a girlfriend (we have one scene where Uhura demands to know why she hasn't been assigned to the Enterprise, Spock reasonably says it would look like favouritism because they're in a relationship, and she bullies/nags him into reassigning her). There's also, in the second movie, the totally unprofessional scene she makes about their relationship while they're on a mission, in front of their commanding officer, and again looks more like "nagging shrew" than "equal professional and officer pulling her weight". There's the throwaway line dismissing Christine Chapel. The infamous underwear scene with Carol Marcus in the second movie, which echoes the underwear scene with Gaila in the first, and manages to be both unsexy *and* sexist. The terrible pseudoscience which doesn't even pretend to be technobabble - now we can warp between moving starships, the Klingon home planet is apparently on the doorstep of the Terran solar system because we can get there in a short trip, Starfleet Command can have every senior commanding officer killed by one guy in a scout ship because security, what that? and so on.

The second movie trying to persuade us "this is the engine room of a starship" when anyone who has had even the most cursory view around a chemical, food or other processing plant can identify "this is a brewery". The first movie BLOWING UP VULCAN (if they think that after all this time I've forgotten and forgiven, they have another think coming). The heavily militarised Starfleet Command, which again can be explained by the backstory of this timeline *if* they bothered to explain it, which they don't. Not one but *two* dogfights by starships over San Francisco, as the climactic moments of both movies. The second movie had me cheering on the evil admiral, because he at least was competent, and they finally remembered that hey, you build starships in orbit in space docks not from the ground up on earth. That first movie shot was another Kewl Shot with Kirk on his motorcycle pulling up to view the ship that he will eventually command, but didn't make much sense logistically (though I've read posts defending it):

https://townsquare.media/site/442/files/2013/05/Trek-Guide-Starfleet.jpg?w=980&q=75

I didn't even bother with the third movie, even though that was allegedly better. They burned up all my goodwill by then, and I've been a Trekkie since I was seven.


"in early 2018 the court was 5-4 Democrat"

No, in early 2018 the court had 5 Republicans: Roberts, Kennedy (soon to be replaced by Kavanaugh), Alito, Gorsuch, and Thomas


Either he’s forgetting that Kennedy was nominally a Republican or he’s just being plain about the fact that Kennedy was not actually a Republican in any meaningful sense.


Kennedy joined the Republican justices on most major divisive questions, such as Obamacare being unconstitutional. He also hand-picked Kavanaugh as his successor. Saying he was not actually a Republican is nuts, unless you're only looking at PP v. Casey and Obergefell.

Feb 21, 2023·edited Feb 21, 2023

I think Scott just made a mistake. I don't think he was intentionally trying to make a statement about Kennedy being insufficiently conservative.

Still, I think his broader point still stands if you split the Supreme Court into liberal, moderate, and conservative categories rather than merely Democrats and Republicans, with both Roberts and Kennedy falling into the moderate category. In 2018, there were 4 liberals, 2 moderates, and 3 conservatives. Neither the left nor the right could get a majority on their own, so this pushed the court towards making more moderate decisions overall, which made dramatic upheavals (like overturning Roe v. Wade) rather unlikely. Then Kennedy and Ginsburg were replaced by Kavanaugh and Barrett, so the balance shifted to 3 liberals, 1 moderate, and 5 conservatives - enough for the conservatives to bull rush their way past the compromise stage and force through any decision they wanted, without any need to temper or moderate it first.

Had the court's balance remained the same, I expect Roberts would have gotten his way with the Dobbs v. Jackson case: Mississippi's 15 week abortion ban would've still been upheld, but it would've been a narrower ruling that merely pushed back the viability line from 20 weeks to 15, rather than overturning the Roe decision completely and allowing states to ban abortion at any point in pregnancy. This would've been true even if we'd only gotten Kavanaugh or Barrett, but not both: The conservatives would've had to go along with Roberts' compromise, because they simply wouldn't have had the numbers to overturn Roe entirely.

Granted, giving 1% odds to Roe being overturned was still too low. But it wasn't a sure thing by any means. I'd have probably given it 1 in 3 odds of happening when Scott made these predictions, a 50/50 chance when Kavanaugh was appointed, and 2 in 3 odds of happening when Barrett was appointed.

Feb 21, 2023·edited Feb 21, 2023

I disagree. His judicial philosophy, while it concurred with the Republican/conservative wing of the Court at times, was not particularly conservative or Republican, and any sufficiently well-read student of the Court should know this.

Kavanaugh’s concerns are similar; he’s obsessed with how his legacy on the Court is seen and willing to strike pragmatic compromises in order to be viewed historically as not a partisan—in my view that makes him fairly unprincipled.

Justices should be evaluated less by quantifying how much they agreed with others than by how they arrived at those conclusions, whether they are willing to stand by their principles, and which principles they will stand by. It’s pretty plain from the careers and records of, say, Kennedy, Kavanaugh, and Roberts that they come from a completely different school than Gorsuch, who is very different from Scalia, Alito, or Thomas (who have the best claim to being called the Republicans).

That said, I don’t think Scott believes the above. My comment was tongue firmly in cheek.


The more professional they are, the harder it is to stereotype them. I didn't know Scalia was supposed to be a 'conservative' until after he was dead; Ginsburg was absolutely partisan and unprofessional.


Spot on. Any justice clearly identifiable as liberal or conservative shouldn't (in an ideal world) have the job.


Picky disagreement: Both the liberal and the conservative sides have points on which they are clearly correct. So if you just judge by adherence to those points, it's quite reasonable to say a liberal or conservative judge is properly doing their job. The problem is all the other stuff.

E.g., abortion should clearly be a state level issue. I may think many of the states aren't living up to their constitutional obligations (though I usually don't know their constitutions well enough to really comment), but it should clearly be a state level decision. There are many such issues, where the matter SHOULD rest with the states, but the states have defaulted, so it ended up with the feds. Then there's that idiotic Supreme Court decision that cities couldn't have a residency requirement for provision of general assistance. I see NO valid basis for that decision, and the result has been a "race to the bottom" in support of social services at the city level. But that SHOULD have been a city level decision. (I don't remember whether it was the conservative or the liberal agenda that inspired that idiocy, but I suspect conservative. But an honest conservative should diligently oppose it, and a liberal should find no reason to support it.)


"E.g., abortion should clearly be a state level issue."

Abortion (like other forms of birth control) has been an activity partaken of by individuals, on an individual basis, throughout history. Because this right has historically been held by the people, not government, it firmly belongs among the rights held by the people (not the states or the federal government), as indicated in the 9th and 10th amendments. Or so I say. So at the very least your "clearly" is not as clear as you believe it to be.

The US Constitution is not a system meant to allow totalitarian control of individuals' ways of life at the federal or state levels (excluding only those rights embodied in the 1st through 8th amendments).


While I largely agree with you, I feel this is a matter that should be addressed by the constitutions of the individual states. And that they should defer to the rights of individuals. Well, at least unless one wants to undertake rewriting the entire constitutional system. The expansion of federal powers that has happened has been necessary, but I believe that much of it is clearly illegal. What should have happened is various constitutional amendments, but that was so difficult that those in power generally just ignored the clear words of the constitution, and made "workable decisions".

When I said "E.g., abortion should clearly be a state level issue." I meant that it should not be decided at the federal level. Once you get away from the federal level, the different constitutions of the various states make things quickly too complex for any simple answer. Well, if you're arguing on legal grounds. If you're arguing on moral grounds the problem is that there's no consensus on what the proper morality is. Everybody's arguing for their own point of view, often with the same words meaning different things.


If you didn't know Scalia was a conservative you weren't paying attention. He wanted to uphold anti-sodomy laws and claimed the majority in "Lawrence v. Heller" which struck them down was a "product of a Court, which is the product of a law-profession culture, that has largely signed on to the so-called homosexual agenda".


While I may or may not have reservations about Scalia's decisions, my point was that I regard Scalia more professional than Ginsburg. For the same reason, I regard Scalia more professional than Thomas.


I assume because you are a conservative?


I think you are conflating the names of Lawrence v. Texas (the sodomy ruling) with Heller vs. District of Columbia (a Second Amendment ruling)


I'm sorry, but this is not professional: “This Court has never held that the Constitution forbids the execution of a convicted defendant who has had a full and fair trial but is later able to convince a habeas court that he is ‘actually’ innocent.”

To say that procedure trumps actual innocence is to undermine the very foundation of criminal law. Such a statement is neither conservative nor liberal, but anti-law itself.


> Looking back, in early 2018 the court was 5-4 Democrat, and one of the Republicans was John Roberts, who’s moderate and hates change.

Both of these claims are difficult to justify. The court in early 2018 had 5 justices appointed by Republican presidents (Anthony Kennedy, replaced by Kavanaugh, was appointed by Reagan; while he had a reputation as a swing justice, he went pretty far right in cases that didn't involve privacy).

Likewise, John Roberts is a moderate only in the context of the most conservative court in a century. This isn't a normative judgment, just a description of his voting record. He has consistently voted with the conservative bloc across a range of issues. The exceptions (Obamacare) spring readily to mind because they are rare.


> 6. Social justice movement appear less powerful/important in 2023 than currently: 60%

How do you figure? Cancel culture and social justice are IMO more powerful than ever, and still gaining in power -- especially as compared to 2018.


I think the wave has crested.


Even if it has, you have to think 2020 was the crest, and I don’t think we’re at “pre 2018” levels.


Sure. I was responding to your “more powerful than ever”. I agree that Scott got that wrong; to be fair, though, 2020 was an unusual year.


It seems like a strange omission from this post, given that Scott had to take his blog down in 2020.

Feb 21, 2023·edited Feb 21, 2023

Does the crest look anything like what Scott described, in your opinion? "Wokeness has peaked - but Mt. Everest has peaked, and that doesn't mean it's weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place). "

If so, that's not the decline of wokeness. That's the decisive victory of wokeness. If it's cresting because it won and can now get rid of all opponents, that shouldn't reassure anyone. Not even wokists, who can easily be the next targets of the cultural revolution.

Comment deleted

It can, and eventually it will; but probably not by 2028. I'd be happy to be proven wrong, of course...


"You can't just gain power, you have to maintain it, and for all its facade of strength wokeness has nothing behind it: no reason, no joy, not even improved living standards, just a mass hallucination enforced by raw power and bullying on quietly resentful subjects. Such a system never lasts."

The same was true of Christianity in the 5th century, or Islam in the 7th. Look at where they are now. The quietly resentful subjects became less resentful, then converted to Christianity/Islam for a mixture of self-interested and genuine reasons. There's no reason that a mass hallucination enforced by raw power and bullying can't last millennia and cause millions of deaths.


Christianity survived the empire. It has something that wokeness doesn’t -

Redemption.


Has it? Did it even survive incorporation into the empire? The name is not the thing.


Christianity, Islam, etc all had charismatic prophet-founders and divine-level foundations. They have fascinating and deep scriptures and stories underlying them.

Wokeism has none of this. No cosmological level claims, no powerful stories. And there's an even more fundamental problem than that - unlike traditional religions, it doesn't even have demographics on its side! It has produced incredibly low birthrates. Based on all of that, I just don't see it having any traction in the medium or long term.

Don't get me wrong, it will undoubtedly still be a thing in 2028. But by the end of the century? I highly doubt it. It may well end up retreating to the fringes of society within a couple of decades.


Institutional Wokeness has a whole lot of people, and more in 2023 than 2018, whose salaries are dependent upon not understanding your arguments.


The question is: Will they be the first to get the boot at the next recession, and will they be rehired afterwards?


Agreed.


I don’t want to talk specific cultural issues, but:

1) Hogwarts Legacy was not cancelled.

2) Nicola Sturgeon is gone.

3) There’s a weak attempt at “BAFTAs too white” but it’s gaining no traction.

4) The reaction to the sensitivity edits of Roald Dahl has been universal derision.


Hogwarts Legacy was absolutely canceled in my social circles, including giving it up being considered part of “discomfort is needed for progress”. I'm not sure what reaction the Roald Dahl edits got, I think mildly positive with some weak pushback on cultural-history grounds.


Yeah. In my social circle, I witnessed a live version where a slightly out-of-touch old liberal, who initially tried to say "I don't like the look of censoring and boycotting books and by extension games because the author has wrong opinions," got a spontaneous peer-review struggle session. By the end of the discussion, he was loudly proclaiming various nasty things about Rowling and agreeing that everyone should boycott her computer game because it is like giving money to North Korea.

Dahl: I notice that the public pushback comes from the old liberals (as in, retirement age) like Salman Rushdie or the usual right-wing adjacents.


https://www.forbes.com/sites/paultassi/2023/02/14/hogwarts-legacy-is-the-top-four-best-selling-games-on-steam-hits-new-peak-playercount/amp/

It says Hogwarts Legacy is on track to be one of the leading games this year.


I can believe it. I wouldn't be surprised if both my social circles were especially wokeness-eaten in places and there were a bunch of https://slatestarcodex.com/2018/05/23/can-things-be-both-popular-and-silenced/ sorts of phenomena going on.


Given that it was the top-selling game on Steam for the last month, and the most-streamed game ever on Twitch, we can conclude that your social circles are unrepresentative of gaming humanity as a whole.


The Dahl edit reaction has been universal derision, but Puffin hasn't walked back the idea, either. Rather like with Seuss, which was met with nearly-universal derision, but eBay policy is still that you can't even sell the old copies.

eBay hasn't gone that far with Dahl yet, as far as I'm aware.


Sorry, I'm a bit out of the loop. What was wrong with Dr. Seuss books? (Well, "One fish, two fish" was immensely boring, but I mean outside of things like that.)


No need to be sorry!

In early 2021 Dr. Seuss' estate pulled six books (https://www.nytimes.com/2021/03/04/books/dr-seuss-books.html) from publication for insensitivities like a Chinese character eating a bowl of rice using chopsticks (that's one of the examples; I *think* "If I Ran The Zoo" had some things that would be mildly offensive to even a sane person, but most of the sources don't explicitly say what was wrong with the books).

Because of this, amid cries of censorship and 1984 etc., prices of the affected books spiked to several hundred dollars on eBay, followed by eBay delisting all of them and forbidding further listings under their Offensive Materials policy (https://www.ebay.com/help/policies/prohibited-restricted-items/offensive-material-policy?id=4324).


Also Mt Everest hasn’t peaked I don’t think. It is still growing.


Considering the DEI bureaucracies in institutions of higher learning and in corporate HR, it's getting baked into everything.

Mandatory ESG is still a possibility.

I vote "not peaked," at least in the real world. Twitter SJ might be on the decline.


"6. Social justice movement appear less powerful/important in 2023 than currently: 60%"

No. Giving that to yourself is an error because you are overly focusing on the slight receding of the tide in 2022-2023 and forgetting the inundation of 2020-2021. The pre-George Floyd world of 2018 was a lot different than 2023.


Yeah, that's my impression as well. It looks like SJ has firmly and formally entrenched itself in the universities now, with mandatory DEI statements as de-facto ideological purity tests for new hires. It might take decades to undo that.


I think I know why everyone *assumes* that mandatory DEI statements are de-facto ideological purity tests - but my understanding of how these statements are used is that they are just collected and sent to the hiring department as part of the hiring package, and in practice, ideological purity is just as likely to sink your candidacy with some significant fraction of a hiring committee as wrongthink is. The real problem with these statements is that they create a culture war minefield for candidates to navigate, with no indication of what is actually going to be judged as good.

If these statements are used by *administrators* as a pre-filter before files get to the department level (and I've heard some claims that some UC schools might do this for some applications) *then* these can be de-facto ideological purity tests. But when they just get sent to the committee that includes both a 30 year old radical assistant professor and a 70 year old curmudgeon, it's really unclear what kind of statement you need to avoid getting nixed by someone. (Probably the kind of statement that makes people glaze over and look back at your academic work instead.)


You are being VERY optimistic in your assessment.

I am willing to bet that >99% of applicants will grit their teeth and do their best to pronounce the shibboleths properly, rather than hope that their application goes straight to the desk of some contrarian professor who is on board with "fuck that PC bullshit".

Especially since some reputable universities are very explicit about how they judge the diversity statements and that, yes, they do use them to pre-filter applications before they go to the departments (and again at later stages). See e.g. https://whyevolutionistrue.com/2019/12/31/life-science-jobs-at-berkeley-with-hiring-giving-precedence-to-diversity-and-inclusion-statements/ .

I hope that explains why I, for one, have trouble seeing these statements as anything else than purity tests.


Yes, it sounds like some hires at Berkeley have used it that way. That does not seem likely to be much more of a precursor of wider trends than Hamline College with the Islamic art fiasco is.

But the bigger point is that most academics want people who will say nice stuff about minorities in their statements, but will get very worried about hiring someone who said they actually tried to change something about how their previous department worked.


I'm not sure how "ideological purity is just as likely to sink your candidacy" is supposed to work. Yes, there are hiring managers who don't want overzealous ideological purity in their departments. Those managers aren't asking for DEI statements. If there's a DEI statement, that's coming from HR or the administration or somewhere. And maybe they just put it in there because all the cool people are doing it and they don't have any systematic way of doing anything with it. Or maybe they are using it as a pre-filter or other disqualifier.

But from the point of view of anyone filling out a DEI statement, the possibilities are "this is a waste of my time" or "this will roundfile my application if I don't at least feign ideological purity". Some of them won't waste their time and won't complete the application, the rest will feign ideological purity to some extent.

So "I will cleverly ask for a DEI statement, then rule out the candidates who show too much ideological purity", screens people out for being diligent and capable in doing what they think you just asked them to do. Nobody does that, so nobody expects anyone else to do that, so the expected value of a DEI statement remains between "waste of time" and "necessary proclamation of ideological purity to get this job".


Why are you talking about "hiring managers"? This is about university hiring. The way that occurs is that some big committee composed of faculty members asks for a bunch of materials, and they evaluate it, and discuss the candidates until they can reach some sort of consensus about who the few finalists will be. They usually ask for materials about research, teaching, and service, and some universities now require them to ask for a DEI statement as well. But the evaluation is entirely in-house. In general, anything that isn't research doesn't matter all that much (at least at R1 universities) and people even say things like "a teaching award is the kiss of death" - it shows that you care too much about non-research things. If any part of the application sets off a red flag for one or more committee members, it's very easy for them to keep that candidate out of the finalists, given the strength of the pools. A bunch of faculty reading an application aren't going to use a candidate's strong DEI statement as very much reason to raise or lower their estimation of the candidate. But if the DEI statement triggers someone on the committee in one direction or another, it's going to sink your application. You really don't want your DEI statement to stand out as an indication that you're on the vanguard, because that's going to make a lot of people worried about having you in their department. Instead, the best strategy is usually making a relatively bland statement about how you've been nice to women and minorities in grad school or supported them as a faculty member, and maybe taking the opportunity to show how you would diversify the faculty. But ideological purity is going to be very scary to at least some members of a hiring committee, and unless your research record is very strong, it's going to make things difficult, just as much as expressing reactionary views will.

Feb 27, 2023·edited Feb 27, 2023

As someone posted on Hacker News about this topic, mandatory university DEI statements are less about wokeness than they are about the survival of the universities. There is a population time bomb from the Great Recession that is just about to hit the universities (search for a US population pyramid). These DEI statements are less about wokeness, and more about how good a candidate is at getting non-traditional students (including older students) to enroll and complete.


Yeah, I found the way Scott graded that section really weird.


I find the social justice predictions really suspect. It's judging prevailing opinion about opinions about opinions. How do you measure "cancel culture"? Cancellations per annum? Number of cancellable offenses per the Board of Cancellation? Number of words dedicated to social justice in Atlantic op-eds?

IDK. All fluff to me. The answer will always depend on who you ask.


Number of black people in ads in countries with few black people? That’s dropped a bit.


I could just say that Pacific Islanders are underrepresented in ads so social justice is declining because it's not inclusive enough of non-black minorities. It's turtles all the way down.


The U.K. is about 3% black, compared to the 12-13% African American population in the US, and yet has a black history month, not a black history week.

There is no specific month, or week, or day, or hour, for other minorities. I think there should be a Polish history hour.


As long as it involves pierogis, I'm in.


Pierogis, Enigma and a pronunciation guide could bring you all the way to lunchtime.


Pierogis and mead!


I know there's a Gypsy History Month at least...


The British Council's website specifically says there's a South Asian Heritage Month.

These months seem to be obviously about British colonialism, and the affected peoples.

Feb 21, 2023·edited Feb 21, 2023

That whole section seemed off. I read it, thinking "oof, Scott's going to have to give himself a bad grade on this one", and then was shocked that he said that "all of this is basically true" and gave himself a B+. Especially the "Christianity culturally becomes Buddhism" thing - that almost seemed *more* true in 2018; I remember all the "real Christianity is socialist" posts back then - and I don't think I've heard of *anyone* suggesting a black lesbian pope.

I think the only reason he was able to claim to score so well is that his discrete predictions didn't actually test the predictions in the prose.

Side note, George Santos' clearly-intentional use of the white power "okay sign"[1] (probably just him trolling for his own amusement because he knows his days are numbered, but still) comes close to making Scott directionally incorrect on (2) and (3), but I know it's not explicit enough for Scott to count it.

[1] https://www.snopes.com/fact-check/george-santos-white-power-sign-mccarthy/


The okay sign does not signify white power for anyone not terminally online.


What evidence do you have that congressman Santos is *not* terminally online? He's 34, it's not that uncommon, especially for politically-minded people. Also, he's a compulsive liar, which fits right in on the internet.


I'm very interested in this comment thread, but the evidence for and against "social justice movement appears less important in [time A] than [time B]" seems to be a collection of gut feelings and anecdotes. Anybody have any good ideas on how to measure/quantify the relative cultural strength of an ideology? I'm guessing the answer is no, but I'd love to find out I'm wrong.


I think it's not impossible to get an objective sense of these things in hindsight, but it's nigh impossible at the time. We'll know in 10 years whether "woke" was a passing thing or the new normal.


I have never understood the ostensible relationship between "social justice" and "cancel culture". What do these things have to do with each other at all?

That "social justice" can have any meaning without being tied to Rerum Novarum boggles my mind. That there are ostensible proponents and critics of "social justice" who don't even know what Rerum is is a pretty sad state of thought.


What is called Social Justice these days should probably be referred to as "Critical Social Justice". It shares nothing with the Catholic concept that you're referring to except the name, and is instead a blend of cultural marxism and postmodernism. The closest relative in Catholic theology might be Liberation Theology (but I'm saying that based on hearsay).


Kids.

The idea of "Cultural Marxism" makes me laugh out loud. Recently, a pre-boomer tried to tell me that ✌️ doesn't mean "peace".

I'm reclaiming social justice and ✌️.

"If you want peace work for justice." Pope Paul VI

And reclaiming "progressive " for Henry George (Progress and Poverty 1879) and the Church (Populorum Progressio, 1967)

Feb 22, 2023·edited Feb 22, 2023

The concept of Cultural Marxism explains so, so much of what's going on. The shared idea: society is best viewed as a struggle between groups of people - the oppressors (bad) and the oppressed (good). Oppressors erect systems of power to maintain the status quo, which must be torn down to achieve liberation.

Old-fashioned Marxism: capitalists oppress the proletariat.

Flavors of cultural Marxism:

Feminism: men oppress women.

Queer studies: cis-hetero people oppress people with queer sexual identities.

Critical Race Theory: whites oppress Blacks.

Fat Studies: normal-sized people oppress fat people.

Post-Colonialism: white people oppress people of color.

Disability studies: healthy people oppress disabled people.

And on and on and on. Same shit, lots of different piles.

Good luck with your reclaiming. I'm not even Catholic, but any sincere attempt to actually do good in the world and not just play power games would be very much appreciated.

Feb 22, 2023·edited Feb 22, 2023

The term "cultural Marxism" has a lot of baggage[1] that goes far beyond what you're using it to mean.

That said, this *is* a valid way of describing idpol and the culture war through a Marxist lens, *IF* you replace the term "oppressor" with "privileged".

i.e. there's a capitalist/bourgeoisie/privileged class that has a comparative advantage in society, and while not inherently evil, they are incentivized to maintain that power at the expense of the proletariat/unprivileged/oppressed class. Sometimes this takes the form of direct oppression, but it's not always that simple (especially as progress makes the oppressed class less oppressed).

You can even work in the concept of petit-bourgeoisie to apply to e.g. poor whites that cling to race as the reason they're better than brown poors, and why they need to cling on to what little privilege that brings them.

[1] https://en.wikipedia.org/wiki/Cultural_Marxism_conspiracy_theory


Mass adoption of self-driving cars was always a fantasy. And I’ve said that here, and elsewhere, before. The problem is that self-driving cars have to be perfect, not just very good, for legal reasons and for psychological reasons.


Counterpoint: self driving elevators only had to be very good to disemploy elevator operators.

I agree they'll have to be much better than human drivers.


Counter-counterpoint - self-driving cars still aren’t where self-flying planes were 20+ years ago, and support for them has, if anything, gone down after the MCAS fiasco.

Actually, I think airplanes are illustrative because they demonstrate that there is a whole new class of deadly accidents that will start to occur due to human interaction with automation, and the automation will generally get blamed by a wary public (we see this a bit already in some of the Tesla crashes where, yes, the automation screwed up but clearly the human wasn’t doing their duty to monitor it either).


Neither self-driving cars nor self-flying planes, in the sense of "attentive on-board human operator not required", are generally trusted to carry human passengers. Or to operate in close proximity to human bystanders. If there are even a few specialized markets where self-driving cars do carry human passengers, that puts them well ahead of the planes.

The planes seem to have the edge because there are more high-profile applications that don't involve transporting humans or operating dangerously close to (friendly) humans such that "and, yeah, it will crash and burn one time out of a thousand" is acceptable.


Flying is easier to automate than driving, and airliners are already quite capable of flying themselves almost the entire way in most conditions with a very low error rate. Indeed most airliner flights are mostly flown by autopilot. And most “manual” airline flying involves automated pilot guidance and envelope protections substantially more sophisticated than all but the highest level of available driver assist functions. Even some single pilot private aircraft now have certified emergency auto land systems. All these automated systems “crash and burn” at a much lower rate than one time in a thousand, even if we take “crash and burn” to mean “human intervention required to prevent catastrophic result of automation failure (human told autopilot to do wrong thing doesn’t really count)”

Self-driving is quite a bit harder, but the stakes are lower and “come to a stop and wait for the constantly available human help to pull up and fix it” is viable in a way that it isn’t for an airplane. So yes, there are a couple of limited areas where recently truly self driving cars have begun to operate (although anecdotally the human intervention rate is still fairly high).

I guess it’s hard to say which is “ahead” given the different problem spaces. My main point though is that the general public isn’t necessarily going to understand the nuances of difficulty, and they aren’t going to accept “very good” for cars given that they are nowhere near accepting an automated passenger aircraft even though those are arguably already “very good”.


"Mostly flown by autopilot" is a very different thing to "can fly without human supervision".


Sure, but my point was basically that any semi-frequent traveler has almost certainly been on an aircraft that was flown almost entirely hands off between just after takeoff and the very end of final approach (and you may have even been auto landed, although that’s a rarely used capability). And even the parts that were “hand flown” were subject to significant automated assists and protections.

Meanwhile very few have been on a truly “auto piloted” car on the open road and any driver assists are generally limited to lane keeping and cruise control (with a few cars having more advanced features like park assist and auto braking).

Airliners are a bit weird too because there’s a different trade off when you’ve already got a pair of highly paid and trained individuals on board plus many more on the ground operating in a highly regimented system. Bluntly sometimes we just want the pilots to not get bored or lose their skills, plus it’s nice to have the flexibility to change the programmed plan en route. The autopilot is generally CAPABLE of more than it is typically ALLOWED to do. Then again programming an autopilot is a lot more complicated than “select destination in Google Maps and go” and requires skilled pilots and controllers.

Expand full comment

But the economics of self-[flying/driving] mass transport are very different.

Trains would be almost trivial to automate but the payoff is negligible (how much time does the average person spend driving a train per day?).

Expand full comment

An investigation into the economics of London's Docklands Light Railway (fully automated) might be instructive. Not that I am offering to do it, mind you.

Expand full comment

An investigation into why TfL has not fully automated their other networks despite government pressure would also be instructive.

Expand full comment

Call me a cynic, but I imagine unions play a substantial role, with the staff that can't be automated yet exerting their bargaining power to protect the ones that can

Expand full comment

At least in the 21st Century, self-driving elevators are about as perfect as a human-made machine is going to get safety wise.

Expand full comment

Sure. Per the CDC it looks like excluding escalators and people working on elevators to leave only passengers, there are maybe 10-15 deaths a year in the US, which is well under the "struck by lightning" threshold. Even counting all those it's 30.

But while I don't know the history, I'd expect that like most things they had a development curve. This NPR piece says it took 50 years from their invention to widespread adoption, against widespread public distrust (and of course labor resistance). https://www.npr.org/2015/07/31/427990392/remembering-when-driverless-elevators-drew-skepticism

And anecdotally, Rex Stout had Nero Wolfe's investigator Archie Goodwin constantly noting when an elevator was self-service for decades after they became common, though he didn't seem particularly bothered by them. Except that in that interval between human operators and ubiquitous surveillance cameras, they didn't offer convenient witnesses to quiz.

Expand full comment

No. They'll need to be much cheaper than human drivers, and nearly as convenient. I expect self driving cars to be rented by the hour or minute. That won't handle all use cases, but it will handle all the common ones that 90% of people experience. And you won't need to worry about parking or maintenance. But it *WILL* need to be a dependable service.

Expand full comment

Another counter-point: Self-driving cars can sneak in step by step.

Germany has already passed a law allowing autonomous cars of level 4. This means that the car companies may define circumstances in which the car drives autonomously. There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.

Right now, there is only one such system, and that is allowed (by company decision) to operate on highways below 60km/h (i.e., only in congested traffic or traffic jams). But it can increase gradually: perhaps in two years they go for up to 120, then for general roads outside of cities, and eventually everywhere.

Expand full comment

“ There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.”

In practice, how do you enforce this? 10 seconds is enough for any actual emergency to be over, so you’re either going to have to be essentially fully autonomous or produce a ton of nuisance alarms any time things get remotely sketchy.

Expand full comment

Yeh.

The AI works until it’s about to kill you.

Then it’s “wake up mate. You are dead in 10 seconds. Thank you for using CheapAssCarAI, even if this is your last ever use of the service, we appreciate your custom”.

Expand full comment

Sounds like the car is meant to handle any emergencies automatically while it's autonomous. The 10 seconds is so that there's a person that can drive when the car leaves the environment where it's allowed to drive autonomously (eg when it exits the highway).

Expand full comment

I don't understand the comment. Category 4 means that the car must be able to deal autonomously with any emergencies. That's the definition of category 4. It's the highest category below 5 (5 = no driver). The alert is for example when the car is going to leave the highway, or when it starts snowing (or whatever other environments are not covered by the car).

You are probably thinking about categories 2 and 3, which are about assisted driving. Tesla is between 2 and 3, more advanced companies around 3, and the first models with (very limited) 4 are just coming out.

Expand full comment

I think predictions about fertility rates in various countries would be of interest, along with how technology such as AI girlfriends affects them. Similarly: what will the percentage of the population identifying as LGBT look like, will we start to see the beginning of religious/insular communities inheriting the earth, what about changes in IQ and such, and will any of the fertility-increasing efforts have worked?

Expand full comment

> start to see the beginning of the religious/insular communities inheriting the earth

If that's relating to differing birth rates, another five years isn't enough to affect anything (partly because kids have to age up, partly because the effect is less than people think even if meaningful in the long run because lots of religious people become not-religious).

Expand full comment

That's all true, but my guess is that even in 5 years' time you would have a much better sense of whether such trends are going to continue long term; the LGBT trend might be the most noticeable over a 5-year period, and I think the same is also true of efforts to increase fertility. Maybe you would also see concerns about underpopulation or about an aging population become more common; I can very easily imagine a significant shift in culture/media/academia and such on those issues within 5 years.

Expand full comment

I think the big deal for fertility rates isn't sexual orientation, it's the cost of having children.

Let's have predictions about the cost of housing and education.

Or how about predictions about work from home. A lot of managers hate it, but maybe some of them are aging out. Is it too early to be thinking about major companies (possibly new companies) that are entirely work from home?

Expand full comment

Those already exist in tech; I've worked for some. Searching 'remote-first' will get you to info.

Expand full comment

I'm not just talking about work-from-home companies, I'm talking about large ones. How big are the biggest?

Expand full comment

Fair enough. The largest I've worked for was a few hundred people. I'd be shocked if there weren't any in the thousands, and mildly surprised if there weren't any in the tens of thousands. But I certainly don't know about any huge companies that are 100% remote.

Expand full comment

Yes, I agree. Having children through fertility treatments or adoption isn't cheap but it's relatively minor compared to the costs of one parent being out of the workplace for 5-6 years or full-time childcare during that time.

I'm not sure I can sum it up in a one-sentence numerical prediction, but I think we'll see a peak in college tuition (in real dollars) and in college enrollment, which will help bring down the total cost of raising a child. AI, automation, and outsourcing will further cut into the benefits of a generic college degree. Non-outsourceable blue collar jobs like plumber, electrician, etc. will start to be a little more attractive again to a generation looking at the massive student debt problem their elders are dealing with.

Expand full comment

In my own circles, trans people now greatly outnumber lesbians and gays combined. (Unsure if T > L+G+B.) So perhaps we'll soon start seeing LGBT activist groups dropping the T, as trans activism has a very different flavor than LGB activism.

Expand full comment

I remember thinking at the time that 1% for Roe was way too low and I'd make it closer to 50% (of course now I can't point to anything proving I thought that, though I could've sworn I made that prediction on the original thread from 2018).

In particular I'd say that even if Republicans had only gotten one of Kavanaugh and Barrett you'd still probably see Roe "substantially" overturned. Roberts didn't technically vote to overturn Roe, but I think with 5 conservatives (minus Kennedy if he were still around) on the court he wouldn't have voted to uphold any abortion restrictions. Whether you think his vote in Dobbs is consistent with "not substantially overturning Roe" is a matter of judgment - his decision would clearly allow abortions to be prohibited that were protected under Roe, but he also didn't say "and also Roe is overturned".

But even if you think Roberts's concurrence doesn't count as "substantially overturning" Roe, that wouldn't stop people from passing an even harsher law as a test case (which would have certainly happened if Kavanaugh had joined Roberts in our non-alternative timeline). To me one of the most likely versions of the "Roe isn't substantially overturned by 2023" possibility wasn't that Roe was protected but that they drag it out and it doesn't happen till 2024 or 2025.

Expand full comment

Agreed, and even Scott's "what I was thinking at the time" justification is pretty poor, tbh. While Kennedy was a fairly reliable pro-Roe vote, he was also old and likely to retire under a republican President and Senate before the midterms (just as Breyer's retirement under Biden rightly surprised no one). Roberts is more moderate than Thomas or Alito, but he was not in any sense a locked in pro-Roe vote. Breyer and Ginsburg were both old in 2018, and there was surely a decent chance that one of them kicked the bucket or was forced to retire before they wanted to - Ginsburg's death was treated as a tragedy by pro abortion advocates, but not exactly a shock. And of course, let's not make the mistake of relying on hindsight to know that she only had to make it to 2021 - from the perspective of 2018 there was a perfectly plausible world where Trump gets re-elected in 2020, Ginsburg dies/retires in 2021, and Roe gets overturned in 2022.

There were plenty of plausible pathways for Roe to survive past 2023 too, but something in the 40-60% range would have been reasonable rather than the 5-10% Scott revises to in this post. Even in retrospect, he substantially underestimates how likely this outcome was.

Expand full comment

Not that I called it at the time, but the risk of Roe being overturned should always have been closer to the statistical probability of RBG dying or retiring due to health issues, which was certainly more than 1% - by 87 I think the annual likelihood of death is something like 6 or 7%.
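
For a rough sense of scale (a back-of-the-envelope sketch that reuses the ~6-7% annual figure above as an assumption, not an actuarial citation), the cumulative chance over the five years from 2018 to 2023 comes out closer to 30% than to 1%:

```python
# Sketch: probability of at least one death/health-forced-retirement event
# across 5 years, assuming a flat ~6.5% annual risk (the figure from the
# comment above, not a sourced actuarial number).
annual_risk = 0.065
p_at_least_once = 1 - (1 - annual_risk) ** 5
print(f"{p_at_least_once:.1%}")  # ~28.5%
```

So even ignoring every other path to overturning Roe, that one mechanism alone argues for something far above 1%.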

Expand full comment

Seems to me that the Roe prediction could be sort of right, even though it was wrong, in that it was really based on the recognition of an underlying truth - i.e that the American people probably won't put up with a total ban on abortion.

I hope that it's not just wishful thinking that this decision might be decisive in the 2024 election in returning a President and/or State legislatures that will pass pro-choice laws.

As a remainer Brit I'm also hopeful that Brexit will eventually be seen the same way. i.e. something that we just had to endure for a while in order to reveal a truth to the reactionary non-political rump of the population that nevertheless decides who governs us.

One of the reasons that so many predictions end up so wrong isn't because the underlying thought is wrong, but because the polarisation we have in society makes so many outcomes a simple coin-toss with a slightly (52%-48%) weighted coin.

Expand full comment

> As far as I can tell, none of the space tourism stuff worked out and the whole field is stuck in the same annoying limbo as for the past decade and a half.

I agree that progress has been disappointingly slow, but your original prediction is more accurate than this assessment makes it seem. There are in fact two companies (infrequently) selling suborbital spaceflights (Virgin Galactic and Blue Origin), and SpaceX has launched multiple completely private orbital tourism missions.

Expand full comment

And didn't SLS put an Orion capsule around the moon and back in November 2022? Or did it need to be manned?

Expand full comment

Yes, SLS put Orion around the moon late last year.

In addition, Starship is damn close to going into orbit. It's expected to happen within the next month (for real this time). It will very likely turn out that Scott missed that prediction by less than a month.

Expand full comment

Expected by whom? I wouldn't give that better than 50-50 odds. Obviously Elon either expects it or wants people to believe it, and obviously Elon has lots of fanboys who will believe anything he says. But do you know anyone who (correctly) predicted that Starship probably wouldn't make orbit in 2022, who is presently predicting an orbital launch in March 2023?

Expand full comment

> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion.

Depending on how you judge this, it could already be true. I'm assuming you're familiar with Replika? It's an "AI companion" app that claims 2 million active users. Until quite recently they were aggressively pushing a business model where you pay $70/month to sext with your Replika, but they recently changed course and apparently severely upset a fair number of users who were emotionally attached: https://unherd.com/thepost/replika-users-mourn-the-loss-of-their-chatbot-girlfriends/

Expand full comment

>IDK, I don't expect a Taiwan invasion.

No number on that?

Also thanks, I needed this today in particular. Had a dream where GDP had gone up 30% in the past year and I figured we'd missed the boat on any chance to avoid AI doom.

Expand full comment

Agreed, I'd like a % on that prediction. I personally agree it's unlikely, but more 10% unlikely than 1% unlikely.

Expand full comment

Yeah, anything a country actively wants to do should be higher than 1% over 5 years.

Expand full comment

>At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion

This seems to have already been true as of late 2022.

Replika seems to have had up to 1M DAUs, although this was before their recent changes removing a lot of romantic/NSFW functionality (which users very much did not like, and which likely led to >0 suicides and notable metric decreases). It's also notable that they do not use particularly good or large models, but rely on a lot of hard-coding, quality UX, and continual anthropomorphic product iterations. Given what I've seen of their numbers, it's highly likely they had >350,000 weekly active users already.

Those that think AI partners will not take off strongly underestimate how lonely and isolated many people are, likely because they aren't friends with many such people (as those people have fewer friends and do not touch grass particularly often). The barriers are more so that this is hard to do well, there is a bit of social stigma around it, and supporting NSFW is a huge pain across many sectors for many reasons. The latter will remain true, but the other two will change pretty quickly.

Expand full comment

Even setting aside Replika's user numbers, Scott seems to massively underestimate the extent to which people will form emotional bonds with just about anything with a smiling face. Genshin Impact alone might have enough players who meet this criterion.

Expand full comment

True restraint is when you only get *one* constellation of your favorite 5-star cutie :P

Expand full comment

It would have been interesting to see your percentages for "Trump gets impeached" and "Trump gets impeached twice" if you had included them.

Expand full comment

Even better would have been “at least 5 Republican Senators vote to convict”.

Expand full comment

> <...> I should have put this at more like 90% or at most 95%. I’m not sure I had enough information to go lower than that, <...>

Aren't these inverted? I.e. shouldn't this read "more like 10% or at least 5%. I’m not sure I had enough information to go higher than that"?

Expand full comment

> 14. SpaceX has launched BFR to orbit: 50%

Almost? Likely in March of this year, it was likely delayed that long by the pandemic, not just permitting and technological challenges.

> 16. SLS sends an Orion around the moon: 30%

They have! Just uncrewed.

Expand full comment

BFR is much closer than it was 5 years ago, certainly, but a successful launch to orbit this year is still something I wouldn’t bet more than even money on.

I think there is a general tendency to underestimate how long space technology can look really really close to ready to go before it actually is. The devil is always in the details (consider Boeing Starliner - yeah, haha Boeing, but it’s looked “basically done” for years. The same could be said for Virgin and New Shepard)

Expand full comment

SpaceX just did a static fire for a test launch in March that seems planned to be orbital or "barely suborbital". I don't see how they could fail to hit 2023 unless that launch fails catastrophically.

Expand full comment

And last year they were claiming they’d launch in late summer or early fall of 2022. They’ve done a lower stage static fire and a few low altitude hops with Starship, most of which exploded on landing. Huge step from that to successful orbit. Shit happens, space is hard.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

Sure, let's split the predictions:

SpaceX attempts a BFS/Starship "orbital or barely suborbital" (where barely = within 100m/s) test launch in 2023

SpaceX succeeds at it.

I'd say 90%, 70%. Full first launch success chance like 60%, but they'll probably get off the pad enough that they have some odds of getting a second attempt in 23.

Expand full comment

Update: Polymarket has a "success by March 31" market at 80% against https://polymarket.com/event/will-spacexs-starship-successfully-reach-outer-space-by-march-31-2023 . Very tempted to make an account.

Expand full comment

They don’t even have FAA approval to fly out of Boca Chica yet, and March 31st is less than 6 weeks away. On top of that the static fire wasn’t even particularly successful if it was a dress rehearsal for launch - one of the engines never lit and another shut itself down, and the test was only 15 seconds. 80% odds against by March 31 seems frankly optimistic for their chances.

I’d say it’s probably 70% they get off one full stack launch this year and 50/50 it reaches orbit if it does launch. Very low probability they launch multiple times this year.

This is a huge, expensive test, even for SpaceX. If they launch and it doesn’t go perfectly, they aren’t going to throw away that many Raptors just to YOLO it until they understand very well what went wrong and feel like they have a high probability of success, and that takes time.

Expand full comment

It's not just the raptors that are the issue, right? I mean, if the launch goes spectacularly wrong, couldn't the entirety of Boca Chica literally go down in flames? After all, it's some 5000 tons of fuel, not that far from some more massive fuel tanks...

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

Static fire was successful in the sense that if it'd been a launch and nothing else had gone wrong, it would have gotten to orbit on that number of engines.

As you scale up engine count, eventually you will just need to handle failures gracefully.

Twitter source, but: https://twitter.com/wapodavenport/status/1626790650921291777 "From what I hear, everything is on track for a March launch attempt as far as the FAA is concerned."

Expand full comment

The first four launches of the full "Starship" upper stage failed catastrophically. So the odds of the first launch of the "Super Heavy" booster failing catastrophically are not small. The Starship program has been a lot more aggressive than that of the Falcon 9; more like that of the Falcon 1 (which failed catastrophically the first three times).

And as noted below, some catastrophic failure modes of Starship would leave SpaceX without a Starship launch site for a year or more.

Expand full comment

Are you also shorting Tesla? They've got to send it someday, and apparently *their* perception is that they've retired enough risk. We'll find out soon if the FAA agrees.

Expand full comment

Gah missed the edit window - Fanboy reporting for duty. IANARS and I know you are, I'll sign the waiver. :-)

Not "full Starship upper stage", aero demonstrators with janky v1 Raptors, which retired a *lot* of risk--belly flop works! SN8 (the first attempt!) almost made it, and probably would have with Raptor 2 (and no slosh, and no helium, and... see "risks retired").

So they should... spend a year building out another Stage 0, *then* blow up this one, and be right where they would have been (except they lost a year to fix the hypothetical issue)? They've already got 200 Raptors in the barn, and likely a few hundred Starlink v2's sitting on the books. They're *too* hardware rich at this point.

Expand full comment

Yeah, I think Scott should have counted #16 as Yes.

Expand full comment

I think you were wrong on every single thing.

A prediction is "X will happen." Not "there is an 80% chance that it will happen."

A prediction is "Y will not happen." Not "there is a 30% chance that it will not happen."

Expand full comment

Well that’s totally wrong.

Expand full comment

I will disagree.

Expand full comment

This reminds me of a joke about the lottery. The odds are 50/50 because you either win or you lose.

Probabilistic statements are obviously acceptable in predicting the future. If someone draws a card from a deck and puts it face down on the table, then asks me if the card is the ace of hearts, is diamonds, or is black I would say the odds are 1/52, 1/4, and 1/2.

If I were to take a bet on any of these I would expect the odds to reflect those probabilities. Scott bets on the future as well.

What does it mean to say the odds are 1/52? If a card is drawn from the deck 52 times (reshuffling each time), I would expect the ace of hearts to appear once on average. Draw it 520 times and, on average, the ace will appear approximately 10 times.

And that’s what probabilistic guesses of the future are. A 60% chance means that the predictor expects that in 60% of the potential futures the prediction will happen.
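
To make that concrete, here is a minimal simulation sketch of the frequency reading of those card odds (illustrative code, not from the comment above):

```python
# Minimal sketch of the frequency interpretation: draw one card from a fresh,
# shuffled deck many times and count how often it is the ace of hearts.
import random

ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(rank, suit) for suit in suits for rank in ranks]  # 52 cards

draws = 520
hits = sum(random.choice(deck) == ("A", "hearts") for _ in range(draws))
print(hits, "aces of hearts in", draws, "draws")  # expectation: 520/52 = 10
```

Run it a few times and the count hovers around 10, which is all a well-calibrated 1/52 claim promises.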

Expand full comment

I'm confused about what you're saying. Is this a semantic argument about the word "prediction"? A philosophical argument about the nature of knowledge not being probabilistic? A gripe that Scott is failing to follow some standard format that unspecified other parties are using?

And I've just got no idea at all about how to interpret your first sentence.

Expand full comment

X happens.

Alice predicted X 80%.

Bob predicted X 51%.

Charlie predicted X 50.1%

Daniele predicted X 50.0001%

Who predicted accurately?

Expand full comment

Alice predicted much more accurately than Bob who predicted slightly more accurately than Charlie who predicted slightly more accurately than Daniele.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

No. Alice was just more confident in her prediction. She did not "predict" better.

If your so-called "prediction" is that X will happen (80%) (of the time? with 80% confidence?) and your error estimation is plus or minus 10%, then you are not actually predicting X will happen, because your confidence interval is less than 100%.

I flip a coin 10 times.

You predict that it will be 5 heads. (add your probability figure. what will it be?)

I predict it will NOT be 5 heads, without a probability figure.

Who has really made the better prediction?

It is likely to come up exactly 5 heads about 24.6% of the time, IF and only IF we have a "fair coin" and we throw it an infinite number of times.
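
(That 24.6% is just the binomial count; a one-line check, assuming a fair coin:)

```python
from math import comb
print(comb(10, 5) / 2 ** 10)  # 0.24609375, i.e. about 24.6%
```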

It comes up 4 heads, who has made the better prediction?

It comes up 5 heads, but your "confidence probability" was not 24.6%, who has made the better prediction?

When an expert gives an opinion in court, for it to be admissible she must state that her opinion/(prediction?) is to a reasonable degree of medical or expert certainty or probability. What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck.

Do we make predictions about taking risks? Sure: when we drive, we might say that the trip will be safe because we know that there is about one micromort for every 250 miles driven. One micromort = a 1 in a million chance of death. If driving 250 miles carried a 20% risk (i.e., we were only 80% confident of safety), no one in their right mind would drive (and the roads would be so full of accidents and ambulances that driving would probably be nearly impossible due to traffic). So when SA predicts X will happen at 80%, is he really making a strong prediction? Is he even really being that confident in his so-called prediction in comparison to the decision to drive 250 miles?

Expand full comment

Just semantics.

Expand full comment

Not semantics.

Expand full comment

It's pretty close to gibberish, honestly.

Also this statement -- "What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck." -- is flat wrong at least in U.S. courts. (Actual courts I mean, not the Hollywood versions onscreen.)

Expand full comment

A, B, C, D, and E don't happen. F - Z do.

Alice predicted "80% of A-Z will happen."

Bob predicted "50% of A-Z will happen."

Charlie predicted "100% of A-Z will happen."

Alice predicted more accurately than Bob or Charlie.

Expand full comment

That didn't clear up any of my questions. Could you please explicitly affirm or deny that you are saying each of the three possibilities that I listed?

As for your question, obviously higher confidence is good on a prediction that comes true and bad on a prediction that didn't come true.

There are formal mathematical scoring rules you can use to grade predictions that are designed to reward accurate probabilities; i.e. where if an event actually happens 75% of the time, then a prediction of 75% will get the highest expected score (or the highest average score after a large number of tests), beating out any competitors who gave other probabilities, either higher or lower. This lets you give a quantitative answer to how good each prediction was.

Because if a coin actually lands heads 75% of the time, you'd obviously rather have the actual number 75% than just know "heads is more likely than tails". The number gives you more information and lets you make better plans.
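
A minimal sketch of one such rule, the Brier score (a squared-error penalty, so lower is better), showing that if the true frequency is 75%, forecasting 0.75 gets the best expected score of any number you could name:

```python
# Brier score: a "proper" scoring rule, meaning the expected penalty is
# minimized by reporting the true probability. Illustration only.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

def expected_brier(forecast: float, true_rate: float) -> float:
    return true_rate * brier(forecast, 1) + (1 - true_rate) * brier(forecast, 0)

for p in (0.50, 0.60, 0.75, 0.90, 1.00):
    print(f"forecast {p:.2f}: expected penalty {expected_brier(p, 0.75):.4f}")
# forecast 0.75 comes out lowest (0.1875); both hedging down to 0.50 and
# rounding up to 1.00 do strictly worse (0.25).
```

This is the sense in which Alice's 80% can be scored as a better prediction than Bob's 51%, rather than merely a more confident one.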

Expand full comment

Call it a "probability estimate" instead of a "prediction", if you want. It's a prediction plus an estimate of confidence; you can always translate it to just a prediction by looking at whether something is thought to be more likely than not, as Scott in fact does in this post.

Well-calibrated estimates are more valuable than the same estimates stated as just yes or no, since an optimal betting strategy requires estimates of expected returns. Otherwise, you bet just as much on a 55% as a 99%, and get blown out 45% of the time on the former.

Expand full comment

Yeah a probability estimate is not the same thing as a prediction.

A point estimate alone is useless without knowing +/- e. And we must know the distribution of e. Is e Gaussian or Paretian? Is e unimodal or U shaped?

If SA had made his so-called predictions as offered bets, sized as a percent of annual income above a living wage or as a percent of total wealth, then we might know something, because then we would know whether SA ends up financially ruined or not.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

So if you predict it will not rain when you go outside, you never bring your umbrella? So you get wet 49% of the time?

Or are you just disagreeing on terms?

Expand full comment

> only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them

Those standards keep changing.

Expand full comment

> AGE OF MIRACLES AND WONDERS: We seem to be in the beginning of a slow takeoff. We should expect things to get very strange for however many years we have left before the singularity. So far the takeoff really is glacially slow (everyone talking about the blindingly fast pace of AI advances is anchored to different alternatives than I am) which just means more time to gawk at stuff. It’s going to be wild. That having been said, I don’t expect a singularity before 2028.

This prediction is so vague as to be horoscope-worthy. We are going to see something really strange and wonderful yet totally unspecified; or things will continue pretty much as usual until the Singularity; or perhaps something else will happen. Yep, that covers all the bases.

> Some big macroeconomic indicator (eg GDP, unemployment, inflation) shows a visible bump or dip as a direct effect of AI (“direct effect” excludes eg an AI-designed pandemic killing people)

Ok, so other than AI intentionally killing people (by contrast with e.g. exploding Teslas), how would we know whether any macroeconomic indicator is due to AI or not? This prediction is likewise pretty vague.

> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%

Does it have to be Gary Marcus specifically? 30% is ridiculously low if we expand the search space to all of humanity. Or just to ACX readers, even.

> AI can make a movie to your specifications: 40% short cartoon clip that kind of resembles what you want, 2% equal in quality to existing big-budget movies.

Depending on the definition of "short" and "clip", AI can already do it today ( https://www.cartoonbrew.com/tech/stable-diffusion-is-launching-an-ai-text-to-animation-tool-in-partnership-with-krikey-225919.html ). Anything of decent quality remains out of reach.

> but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them

This is worse than the current level of wokeness, so I'd argue that the current level is far from its peak.

> IDK, I don't expect a Taiwan invasion.

I do, by 2028, at about 55%. We will know more after the 2024 election.

> ECONOMICS: IDK, stocks went down a lot because of inflation, inflation seems solveable, it'll get solved, interest rates will go down, stocks will go up again?

I expect the growth of the actual productive output of the US to continue its decline. By 2028, I expect the US to be in a prolonged period of economic (and cultural) stagnation (if not decline), whether the pundits acknowledge it or not.

Expand full comment

>I do, by 2028, at about 55%. We will know more after the 2024 election.

I mostly agree with you, but I still feel a bit of an urge to interrogate you about how seriously you're taking that.

Expand full comment

Well, about 55% seriously at present :-)

Expand full comment

:V

No, I mean in the sense of Tyler Cowen's responding to people saying "X will collapse" with "Are you short X?". I avoided moving back to Melbourne and bought 20L of bottled water based on a weaker prediction than that; I'm feeling an urge* to check whether you're actually acting as though you believe that number.

*Not entirely an urge I'm proud of; part of it's coming from gatekeep-y sorts of motivations. I'm giving in because, well, if you weren't taking it seriously and start taking it seriously because I probed, that's still a good outcome assuming that our predictions are anywhere remotely close to accurate.

Expand full comment

I mean, I believe in that number with approximately 55% strength. I'm not sure what I should be doing differently given this information, though. I cannot prevent the invasion of Taiwan. I can do a few very minor things to influence the 2024 election, and I'm doing them.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

>I'm not sure what I should be doing differently given this information, though.

I'm not sure either, since I don't know where you live and what preparations you already have for nuclear war.

I'm basically saying here that there are people who say that nuclear war is super likely but who also live in Manhattan without a clear exit plan or some worth-dying-for reason to be there, and those people are obviously not very serious about their "nuclear war is super likely" belief.

Expand full comment

I think the jump from "invasion of Taiwan" to "nuclear war" that you are implicitly making in this comment is unfounded. I suspect the US and China will be perfectly capable of conducting a gentlemanly conventional war wherein they only use honorable bombs that level city blocks but don't fill the air with radioactive fallout.

Expand full comment

The 2024 election is an American election. Who do you think is going to invade Taiwan? The US or China?

Expand full comment

There's a Taiwanese election in 2024 as well.

Even talking about the American one, there's a case that the specifics of the campaign/result might cause the PRC to smell weakness/opportunity. This could happen *before* the actual election day, though.

Expand full comment

The Chinese can wait. I think that the idea that the Chinese need to act now is probably planted by US ideologues, the end game of which is perhaps the US provoking a conflict in Taiwan.

Expand full comment

I think there's probably some assumption that the US is occupied, financially and organizationally, with supporting Ukraine right now, and trying to support Taiwan at the same time will make us more likely to try to settle both conflicts peacefully.

I'm not sure this is accurate; I think it's about 50/50 that opening up a second proxy war either causes a demoralized, economically exhausted American populace to become increasingly isolationist or causes us to kick over into a full-blown 1941-style patriotic war machine. There's a lot of pent-up anti-Chinese sentiment in middle America.

Expand full comment

I think the first outcome (exhausted populace becoming isolationist) is way more likely. Withdrawing support from Ukraine is practically the Republican party platform at this point...

Expand full comment

China's population is in decline. How long do you think they can wait? Also, the US is very focused on arming Ukraine, so they're not in a great spot to counter China at this time either.

Expand full comment

China’s population decline is a giant misdirection. Forget the crazy estimates; they will have a billion or so people by the end of the century at worst. And there’s no reason to assume that GDP per capita won’t keep increasing. Taiwan’s TFR is, by the way, the worst in the world.

Expand full comment

Sure, a billion people whose demographics are heavily shifted to older people, and with social norms skewed heavily towards having fewer children. I hope you can see the problem with this if you're planning to wage a war. The near future is their best time to invade from a boots-on-the-ground perspective. They might wait a bit until their tech sector is better developed and they have a more solid supply chain for chips, but then their chances are only heading downhill.

Expand full comment

I don't see what the US gains by provoking China to invade Taiwan?

(If your answer is "to draw China into a quagmire" I'm going to reply that there are better places to have a quagmire than the country that supplies most of your semiconductor needs.)

Expand full comment

>I expect the growth of the actual productive output of the US to continue its decline. By 2028, I expect the US to be in a prolonged period of economic (and cultural) stagnation (if not decline), whether the pundits acknowledge it or not.

What is "actual productive output"? You clearly disagree with the definition of real GDP growth, as the US has not seen any prolonged period of real GDP decline. Ideally such a prediction would be specific, otherwise you could just count X declining industries as being 'real' and Y growing industries as 'fake/illusionary'.

Expand full comment

No, what I meant was something like GDP growth as expressed in the buying power of the median citizen, not in dollars (even inflation-adjusted dollars). This would also mean that Jeff Bezos getting 10x richer won't skew the metric.

Expand full comment
Feb 20, 2023·edited Feb 20, 2023

You write that woke has peaked and judge that social justice is weaker than it was in 2018.

I’m not sure I agree? 2020 set race related stuff on fire, and while it’s died down a bit from that it’s hardly at pre-2018 levels. DEI statements as a condition of academic employment are everywhere, the DEI industry is if anything still growing, and it feels like half of popular culture is just “rehash old IP but woker”.

I think maybe youth gender medicine is peaking, in the sense that you’re finally starting to see some mainstream not-conservative pushback on it, but Jesse Singal is making a career out of pointing out really crappy pro-gender medicine research so I wouldn’t say it’s on a downswing really yet.

I know you already noted that “peaked” doesn’t mean “not strong” and that these things take a long time to decay… but I’m still not sure what you’re looking at to judge 2023 as less “social justicy” than 2018. 2020 was a huge inflection point against that.

Expand full comment

A potentially really interesting prediction - you predicted (correctly) that LGBTQ identification would be higher in 2023 than 2018. What would your prediction be for 2028? What if we limit it to explicitly transgender? Or transgender and non-binary? Or limit it to under 30 year olds (or another convenient demographic number)?

Expand full comment

T/NB is the interesting/controversial set. Maybe B, but polling can be performative. L has been famously shrinking; I'm not aware of significant changes to G.

But also, Scott's done a pretty good job of avoiding the fun/controversial parts of this topic for 9 years now; he doesn't seem interested in anything more specific than what he's already done.

Expand full comment

Living and working deep in the heart of Woke America, I agree with the "peaked" argument.

Some people here seem to be taking "peaked" to mean "is now in decline"; I don't think that's how Scott meant it nor is it what I'm seeing and experiencing. Maybe a clearer word to use would have been "plateaued".

It also does now, for the first time in several years, seem at least possible that wokeness will actually start to decline in its general social influence. Not certain but possible. If that does happen it will be a slow slide not a sudden big reversal.

Expand full comment

It’s not so much the “peaked” I object to, it’s the “social justice weaker in 2023 than in 2018”.

Expand full comment

DEI statements and the DEI industry actually represent in many ways a waning of social justice - these are now just boxes to be ticked, and hours to be spent filling out online forms in between your hours filling out forms about waste/fraud/abuse of employer photocopier privileges and e-mail password security. It's no longer about substance.

Expand full comment

I'm not at all convinced that it ever really has been about substance. Reading Kendi and DiAngelo and Coates and some others (I've completed several DEI workshops the last few years), it has always seemed pretty obviously to be mostly about status-seeking.

That said, I agree with you that the bureaucratization of it, e.g. DEI statements and etc, represents a waning in a way.

Expand full comment

Kendi is a weird case - I think his official statements describe a fully consequentialist view of racism that I would be on board with, where what matters is actually improving results for groups that currently have bad results. But his statements about actual people and actual cases seem to be very much about status as you suggest. (I know less about DiAngelo, but she seems like she might be less substance-oriented. Coates I think is more so.)

Expand full comment

I'm not up on Kendi's public speaking, just his most recent book and his op-eds based on it.

With DiAngelo, I've had the painful experiences of both reading her book and hearing her speak. Some hours I'd love to have back, frankly.

Expand full comment

I'll mention again something I read on Hacker News. University DEI statements are more about economics than wokeism. They're about getting non-trad students in seats and completing. The Great Recession caused a population time bomb that, ceteris paribus, will mean more schools going economically bust unless they can get a larger slice of the non-trad student population than they currently have.

Expand full comment
Comment deleted
Expand full comment

"how do DEI statements get more non-trad students?"

Emphasis on the diversity. Especially emphasis on the diversity of students from non-traditional backgrounds (such as various non-East Asian minorities).

And it's not necessarily in terms of hiring diverse candidates (in which case a DEI statement wouldn't be asked for, one's group membership would be asked for), it's about what the candidates are doing toward DEI among students.

Diversity = new student populations

Equity = scholarships, grants, other initiatives that get non-trads the financial resources to complete

Inclusion = engagement of non-trads, which promotes completion (marginalized students drop out more than students who view themselves as part of the community)

That's not all, and a bunch of woke stuff is likely present as well. But this is definitely a part of it.

The school I'm currently attending recently sent out something explicitly about getting more older students (those looking to gain credentials to help them advance in their current careers). So I believed it when the Hacker News person posted this rationale behind the DEI statement requirements.

Expand full comment
Comment deleted
Expand full comment

If it was meant sincerely, and they had a plan to do so, probably.

Expand full comment

I'm also not convinced that the alleged problem even exists. Yes, there's a modest reduction in the number of new college-age young adults coming. But the fraction of college-age young adults who actually attend college is increasing faster, and there's no sign of that changing. Nothing even remotely resembling an elite university is going to need to do anything to meet its recruitment target beyond saying "we are an elite university, and we think you might be good enough to attend".

Harvard and MIT could fill their ranks with nothing but White and Asian people, if they wanted to. So could the University of California system.

Expand full comment

Wikipedia says that the Himalayas continue to rise at 5 mm/yr. Mount Everest has not peaked.

Expand full comment

Indeed!

Expand full comment

Including Manifold Markets links to your predictions was a very cool thing and I am glad you did it.

Expand full comment

These Manifold Markets links will still work in 2028 when it comes time to grade these predictions: 5%

Expand full comment
Feb 20, 2023·edited Feb 26, 2023

Not sure what it implies about wokeness, but Mount Everest doesn’t seem to have actually peaked yet. It’s still getting taller, if only slightly — and it might end up being outflanked on the left/west by faster-growing Nanga Parbat, which isn’t a coincidence because nothing is ever a coincidence.

https://www.bbc.com/future/article/20220407-how-tall-will-mount-everest-get-before-it-stops-growing

Expand full comment

+1! Unfortunately there's no upvoting here...

Expand full comment

Are there any specific posts on the singularity from Scott? He’s an intelligent guy but I am totally a sceptic on that. I’d like my (er) priors challenged.

Expand full comment

> I think this is probably our best hope right now, although I usually say that about whatever I haven't yet heard Eliezer specifically explain why it will never work.

"What form does credulity take among rationalists?"

Expand full comment

What is the probability that, by 2028, someone replaces Eliezer Yudkowsky as Scott's community's go-to person on the high risk of AI catastrophe? (By 'full-time', I mean to exclude people like Elon Musk.)

Expand full comment

There are only two of these where I think you are wrong enough to put substantial Manifold-bucks behind it.

First, I do not expect substantially more people to be using AI as a romantic partner than using AI as a coach/therapist. The 5% chance for a serious AI therapist product seems so low I am concerned you made a typo.

Second, I think "AI can generate feature-length films" is 50-50 to happen by 2026, which is much higher than your "2% by 2028".

Expand full comment

I would agree the AI therapist seems more likely to work well & have wide appeal than the AI significant other. You just need the therapist to be a sounding board, remember your history, and spit out some psychobabble; the fact that it's not a real person forming real opinions on you is if anything a plus.

Expand full comment

right now the AI can’t do faces, perhaps by design. I don’t think AI will ever be able to do this, except as a gimmick.

Expand full comment

AIs have no trouble doing realistic faces on their own (e.g. thispersondoesnotexist). The faces generated by most general-purpose image models are often janky, but I suspect all that's really needed is to plug in a special-purpose human face generator. Same deal with hands.

Expand full comment

I get why faces don’t work so well on public facing models (they are worried about deep fakes) but I don’t know why they can’t plug in the hand generator.

Expand full comment

Hands are tricky even for human artists, to be fair.

My uninformed speculation is that it's because hands are long, thin, and detailed shapes. Diffusion is built on a denoising algorithm - it looks at each pixel and asks "how do I make this fit the image better?" But with fingers it's easy to get into a local maximum - each pixel seems to blend in pretty well with the surrounding shape, the overall look fits the prompt, there's no obvious linear improvement to make - but when you take a step back and look at the big picture you have six fingers instead of five. To fix it you'd need to erase the hand and start over rather than make small adjustments.

I've noticed Stable Diffusion also has trouble with hair - very thin strands end up losing their edges and melting into other features. I think this is a similar thing - hair has both very precise small-scale details and a larger structure that needs to stay consistent over a longer distance.

Expand full comment

Some AI can do faces.

And I don't think this will be a $2 thing, more like $1000. Which is still cheap compared to a $50 million feature film.

Expand full comment

I continue to be impressed by how people will confidently give hostages to fortune by narrowly basing their expectations on a few old janky samples from the weakest (free, public) model like Stable Diffusion, and say that AI models can't do things that the SOTA proprietary models like Imagen, Phenaki, eDiff-I, or Muse almost certainly can do, and which have many technical fixes like face-specific losses as demonstrated by Make-A-Scene - even assuming that the excellent StyleGAN faces from several years ago didn't already demonstrate how absurd it would be to claim that 'AI will *never* be able to do faces'...

Expand full comment

I feel that if you had some teaching to do, and maybe you do, then your post could have been more impressive if it weren’t full of snark.

Expand full comment

You were snarkily provided relevant topics to Google to learn why he's so snarky. And it wasn't that snarky.

Expand full comment

You're simply mistaken. I've made lots of realistic faces with StableDiffusion using models specialized in human-realism.

The most easily available, freely hosted models struggle a bit with humans because they're in a war against porn, which compromises their understanding of anatomy. Models that include some porn/nudity in their training sets do a great job on human faces, and a much better job on things like hands than the freely hosted ones. They're just also scandal bait.

Expand full comment

Judging by the 57% market, I too would guess it is a typo.

Expand full comment

>First, I do not expect substantially more people to be using AI as a romantic partner than using AI as a coach/therapist. The 5% chance for a serious AI therapist product seems so low I am concerned you made a typo.

At a wild guess, I think Scott's predicting issues on the supply side rather than on the demand side. Explicitly selling an AI as a therapist would engage the various laws and codes of practice around being a therapist, which means you would suffer All The World's Lawsuits. Offering AI as a romantic partner would be much less vulnerable to this because while the AI would probably emotionally manipulate some of the users, exploiting your boyfriend for money isn't actually illegal.

Expand full comment

But what about using your AI romantic partner as a therapist? My point is, as long as you have a realistic chat bot, you can probably use it for anything. So there is the question of whether there will be AI therapists that are branded as such (with all the legal consequences), but also the separate question of whether people will use AI as a therapist, regardless of how it is branded.

Expand full comment

He said therapist *or* coach. I don't think coaches are especially regulated.

(I'm another who thinks the "therapist or coach" figure should be significantly higher than the "romantic partner" one.)

Expand full comment

No idea how many people are using it, but there's already a life coach on Character AI. I found her annoying, but in a similar way that I've sometimes found human therapists annoying.

Expand full comment

Meh, I'll just set up my AI therapist server farm in Elbonia.

Expand full comment

"First, I do not expect substantially more people to be using AI as a romantic partner than using AI as a coach/therapist."

This is the dichotomy between the sexual-variant-last people and the sexual-variant-first people, with the sexual-middle people caught in the middle. This may ultimately come down to the relative population sizes of these groups. I think in general people estimate that sexual variants are rarer in the primary and middle positions than their sexual-last counterparts. (Look up "Instinctual variants" if you don't understand what I've typed.)

Expand full comment

One thing you didn't discuss for the future, which I found to be an interesting oversight since you discussed the decline of wokeness and the Supreme Court's impact on democracy, is that it looks virtually certain that affirmative action is going to be prohibited federally over the next 5 years. (It looks like there's a >50% chance of that this summer, honestly). This is either going to make wokeness look very weak; make it look very countercultural and different; or perhaps add new fuel to the fire.

More generally, I think it's underestimated how right-wing the present Supreme Court is, and that over the long run (...IMO, absent some realignment) it is probably likelier to get more rather than less right-wing. Unless Biden replaces one of the right-wing judges, we're probably getting a Roe-tier blockbuster decision every summer for the foreseeable future. The ultimate -- which I'm not confident enough to predict -- would be a case strengthening the non-delegation doctrine, which would hugely limit the administrative state. In theory all six of the right-wing judges on the Court have been in favor of this at some point or another in their careers (and it's overlooked that we came within a hair's breadth of this happening in the Gundy case in 2019; Alito, normally one of the more-right judges, dissented because he thought that having the first effect be releasing sex offenders on a technicality was a bad look); nobody wants to predict this because nobody's confident they have the stones, but then that was the same logic as with Roe.

Expand full comment

"Roe-tier" is one hell of a standard. A not-insignificant portion of the political right had built their entire political identity on overturning Roe. Affirmative action isn't even close in importance. Right-wing rulings I would consider equivalent would be, say, overturning the Civil Rights Act in toto, or overturning Griswold, or possibly ruling all gun restrictions unconstitutional. I just can't see anything on that scale coming to pass.

Expand full comment

'Roe-tier' here meaning 'a substantial shift in policy on a decades-old goal of the political right'. Affirmative action is just the most obvious one (and the one which might be of interest to Scott, since it would put a large amount of wokism in the position of their policy goals being considered unconstitutional), and the non-delegation doctrine the biggest one in the sense of having the most impact on how the government runs.

Nearly every 5-4 left-wing decision from about 1991 onwards -- so, dozens if not hundreds of cases -- is at risk of getting overturned. (Some are less likely than others; I don't see a challenge to Obergefell or Lawrence reaching SCOTUS, for example). This is a process that's going to take significantly longer than 5 years to finish, especially since the Supreme Court works pretty slowly these days (https://empiricalscotus.com/2022/12/05/supremely-slow-out-of-the-gates/).

Also, I expect the Supreme Court to *get more conservative* over time. Right now, the norm seems to be that conservative Senates don't even consider nominees from liberal Presidents, and vice versa. Although the Senate is currently controlled by Democrats, this relies on maintaining legacy Senators from very red states like West Virginia and Montana. In 2020 (a D+4 year), Republicans and Democrats each won 25 states (for a 50-50 Senate) in the presidential election; in 2016 (a D+2 year), Republicans beat Democrats 30-20 by number of states. In the 21st century, the median national outlook is around D+1 -- so pretty clearly a Republican majority, and perhaps even a very large one. Over time, the composition of the Supreme Court should match that of the Senate, and the Senate seems to obviously favor the GOP. (That said, because there are 6 conservatives and 3 liberals, a conservative seat is likelier to come up than a liberal one, and it is true that *for now* Democrats continue to control the Senate, so Biden still has a window to push the court leftward.)

Expand full comment

Interesting logic. I don't know that the court can really get much more conservative in the near term, however. The next two justices up for replacement are Thomas and Alito, neither of whom is getting replaced by anyone more conservative than themselves regardless of Senate makeup. Assuming we get lucky and no ground is lost on those two, we're then waiting for Roberts, Sotomayor, or Kagan, none of whom is due in the particularly near future. Unless Republicans get a hard lock on the Senate for long enough to kill off S or K (so, like, a couple decades) I can't see there being much of a conservative shift from where they are right now. And I don't trust any political trends to hold for more than a couple decades.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

>Right now, the norm seems to be that conservative Senates don't even consider nominees from liberal Presidents, and vice versa.

Wait, is "liberal Senates don't even consider nominees from conservative Presidents" a norm there's evidence for? I thought as of yet this was still strictly one-party foul play. For clearly partisan justices, sure, but have the Dems stonewalled any Republican "compromise" candidates yet (or verbally committed to doing so) the way Republicans did when the roles were reversed?

Expand full comment

Robert Bork waves "hello!"

I'm guessing your idea of what "compromise" and "clearly partisan" entails varies dramatically between the parties.

Expand full comment

Bork was 36 years ago.

For plenty of years after that instance, SCOTUS nomination votes were mostly not strictly partisan. Bush41's nomination of Souter was a near-unanimous Senate vote, as were Clinton's nominations of Breyer and Ginsburg. Clarence Thomas, before Anita Hill's accusations, was going to be confirmed by a similar vote; even after those headlines he still got the votes of several Democratic Senators, which he needed in order to be confirmed.

In 2005, 22 Democratic Senators voted to confirm Roberts as Chief Justice. Alito got a few Dem votes; Sotomayor and Kagan each got some GOP votes.

Only after the Merrick Garland thing, when one party literally just refused to let a nominee even be considered and voted on, did the process become partisan beyond redemption. Now, yes, both parties are basically like groups of angry children standing in their respective Senate corners just screaming "NO!" over and over.

(I'm an independent who's voted for candidates of each major party, at least when there isn't a decent independent on my local ballot. For my money what McConnell did was a much more serious violation of the Constitution's intentions and spirit than what the Dems had done to Bork, and has done far more damage to our system of governance.)

Expand full comment

Thought experiment: how do you see the confirmation votes for the Trump nominees going if Garland had been considered and voted down? Because I really can't see them being any less acrimonious or any more bipartisan than they were. Which suggests that, however pissed people may have been about Garland, it didn't really change much.

Expand full comment

Paul mostly covered it, but let me be explicit that your choice of counterexample being from before most of this blog's readers (including myself) were born serves as further evidence to me that it is *not* the norm when the sides are reversed. It remains to be seen what will happen in the future, but calling it a "both sides" norm is currently counterfactual.

Expand full comment

Bork is the breaking point. We can argue all day about who defected first and to what degree (spoiler: it was those guys over there) but Bork is the prototype. If you want a more recent example, look into how many Dems voted to confirm Trump's SCOTUS nominees. (Didn't wind up mattering because Republicans had control of the Senate, but you have to assume they would have voted similarly if they were in the majority).

Expand full comment

First of all, SCOTUS appointments are a rare enough event that to get any kind of sample size you're going to need to go back decades.

Second of all, this "that's too old" is a supremely isolated demand for recency that somehow gets invoked to say things like "Trump is the most -ist president ever, no LBJ doesn't count that's too long ago also how 'bout them Tulsa race riots?"

Third of all, saying a refusal to vote on a candidate is a "strictly one-party foul play" is belied by freaking Kavanaugh. After his confirmation testimony was finished then DiFi dropped the Blasey-Ford accusations in an attempt to delay the vote past the midterms. It didn't work, but that was also an attempt to deny a vote on a SCOTUS candidate.

Fourth of all, if refusing to allow a vote is a violation of norms, how do you classify bringing in a mob to scream at/intimidate senators of the opposing party, or do you deny that happened?

Expand full comment
Feb 27, 2023·edited Feb 27, 2023

Bork was considered, as is required of the Senate by the Constitution, and rejected, as is allowed to the Senate by the Constitution. Garland wasn't even brought up for a vote, let alone rejected.

Expand full comment

Again, you'll need to cite your source that a vote is required.

Bork was the first one explicitly rejected for political reasons. This created a new norm of "playing politics with SCOTUS nominations is A-OK." That tactics would escalate is entirely predictable.

Expand full comment

I agree with your overall predictions about the SCOTUS these coming few years, while also agreeing with MasteringTheClassics' objection that overturning Roe was in a different category than most other things.

At any rate the chances of the SCOTUS ruling against affirmative action this year are now much greater than 50%. Gorsuch made that pretty clear during oral argument:

https://reason.com/2023/02/07/a-gorsuch-lgbt-decision-may-doom-affirmative-action/

Also though a correction: "in the position of their policy goals being considered unconstitutional" is not correct. When the Court rules against Harvard it will be a statutory ruling not a constitutional one. The issue before the Court is not whether AA violates the Constitution but whether it violates federal law as currently written.

(Which is not just a semantic difference, since updating a federal law requires a lot less effort and political will than amending the Constitution.)

Expand full comment

Ah, fair point. I know Thomas has advocated ruling that affirmative action is prohibited under his reading of the Fourteenth Amendment, but I didn't know that the Court had narrowed the scope of the issue in the ongoing case just to existing federal law. I think creating new federal laws allowing affirmative action in the near future is very unlikely, but it is something that I could imagine in a medium-term future wokeness resurgence (like, not the next few years but the next few decades). Constitutional amendments look off the table for the foreseeable future.

Expand full comment

Thomas is thus far the only justice to advocate that at least in public. It will be interesting to see whether he takes this new opportunity to again go on record along those lines.

(The Court didn't narrow the scope of the issue in the current case, that simply is the issue in the case. The two lawsuits that have reached the Court each argued that AA violates the Civil Rights Act.)

Expand full comment

ADA?

Expand full comment

> anything that has to happen on a scale of seconds or minutes is fatal to AI training

Now extrapolate, and calm down

Expand full comment

I feel like whatever is happening in your brain when you read Victorian poetry or Bing chatbot logs is radically different to what happens in my brain when I read these things.

What do you reckon the future of the housing market is? Do you think there's going to be any significant policy change in any direction? What happens if there's not? Also, are all these stories about how the American education system is collapsing and teens are basically illiterate now real or bullshit? How fucked are we if they're real? These are the future trends I'm interested in.

Expand full comment

"Also, are all these stories about how the American education system is collapsing and teens are basically illiterate now real or bullshit?"

What's the most compelling example you've seen? I want to look into it and see if there's anything there.

Expand full comment

Most posts on r/teachers are about this now. Makes it a very powerful doom scroll.

Expand full comment

Single anecdotal data point: A few months back I had to teach a recent high school graduate coworker what fractions were. Like, as a concept. This is in an area considered to have decent government schools, too.

Expand full comment

No child left behind! Or educated apparently.

Expand full comment

That's not a decline from anything. My mother's friend's husband is a professor of mathematics at Kansas State University. He has repeatedly complained about incoming students being unable to work with fractions.

Expand full comment

My students seem at most only modestly less prepared than those of ten years ago. My peers would complain of a sharper decline, but I think teachers of all generations feel like the new generations are hopelessly vapid so my bar for genuine regression is pretty high.

The most salient issue warping public perception here is America's ongoing push to keep kids in school to the bitter end. That 11th grader who doesn't know fractions and reads at a 1st grade level would have been a dropout twenty years ago. Today, he gets a medical diagnosis and a certificate mandating all sorts of special treatment and bar-lowering that helps him pass all critical classes with a D because the alternative is too much work for everyone and not aligned with the interests of anyone who has any real say.

Expand full comment

Well said. I mean I am not normal, but my nine year old public school kid can add/multiply fractions and rattle off a bunch of classical history (though he didn’t learn that in school).

The biggest problem with current American education is the attempt to overeducate the idiots and poorly raised.

Expand full comment

>Also, are all these stories about how the American education system is collapsing and teens are basically illiterate now real or bullshit? How fucked are we if they're real?

Speaking entirely anecdotally from my experience as an English and then CS teacher at a relatively normal low-income public high school, the answers are "yes, it's very real" and "not particularly fucked". They lack skills prior generations cultivated because those skills are easily augmented via virtual prosthesis and are not very important. It's true that if I had assigned students even a 2-page article, none but the highest achievers would actually read the entire thing. It's also true that they would not have had an issue discussing the topic and thinking about it deeply if I had just assigned a youtube video instead.

Expand full comment

Disappointed there’s no calibration chart! I know the resolutions are fuzzier but would still be interested in seeing it.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

"Wokeness has peaked - but Mt. Everest has peaked, and that doesn't mean it's weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place)."

In other words, wokeness has won. Authoritarianism has won. If everyone with a cancellable opinion has been removed or not hired in the first place, and everyone refrains from saying cancellable opinions, in what sense do we still have a free and democratic society? Does a society where a large percentage of people, including its most influential, can't debate controversial ideas deserve to call itself a democracy?

Expand full comment

The Social Justice army has conquered and controls just about all the territory they had charted on their famous Long March—almost every Humanities Dept, almost every cultural institution (from the Met to the Guggenheim down to your local knitting club), academic hiring, Hollywood and TV, legacy journalism, philanthropy, most of the Dem Party, Blue states and cities, corporate boards, even sports and the military.

All that's left to them now are various mopping-up operations, and the occasional (virtual) execution of a stray heretic, pour encourager les autres...

One of their best weapons has been the inability of American liberals to comprehend what was happening, how it could happen here—a Maoist Lite takeover of our cultural and educational institutions. They have been whistling past the graveyard for almost a decade now: oh, it's just a few kids on campus, it's just Fox News, there's no such thing as cancel culture, and now the inevitable: it's peaked, it'll fade out, Father Time will rescue us eventually...

People with no beliefs (except money and status) always get rolled by people with strong beliefs, no matter how stupid or dangerous. Once America was down to its last two gods, Mammon and The Self, it was inevitable that a new cult would appear to erect their temples atop our ruins.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

I would say that American liberals understood what was happening, and they liked it. The woke are in their coalition. Why would they say no to useful allies who did the dirty work of bullying and harassing opponents, while they got to share in the fruits of victory? Sure, there's always the fear that the woke will come for "me" one day, but if "I" just agree with them 100% on everything, they'll leave me alone...right?

Expand full comment

On the culture war question. I feel like people have stopped talking about the alt right for the most part. But mostly because it's been absorbed or transfigured into Trumpism/MAGA. (Whether in continuity Trumpism-without-Trump flavor or MAGA Classic™). There's maybe some lesson in how parties integrate insurgent movements. The old Romney and Bush style GOP is dead, but instead of being replaced by the alt right as existed in 2018 we got something with the aesthetics and core politics of the alt right, but picking up traditional Republican positions on tax etc as well. Rather than the populist realignment people were predicting after Trump. (Arguably a similar thing happened to a lesser extent with the Dems, but reversed, Biden gave an old white working class face and aesthetic to some more left wing policies).

Looking back at it I didn't expect how much the anti trans strand would split off and become its own thing, rather than staying part of a wider alt right ideology. Though in retrospect it's obvious that a narrative focused around "groomers" and protecting children would be more palatable to median voters than "Jews will not replace us!".

Expand full comment

I wouldn't call "anti-Trans" an alt-right thing, it's more of a mainstream thing.

And I think we would have mostly missed that it would become a big thing, but that's mostly because we mostly missed how big the pro-trans thing would become. The anti-trans thing is a predictable reaction to the overreach of the pro-trans thing.

Expand full comment

Sorry, what "pro-trans thing"? You mean the "treat trans people like people" thing? Maybe the "let's reduce the chance of teen suicide among trans teens by maybe giving them some hope that their own bodies won't develop into wrong-gender personal nightmares" thing? Or perhaps the "allow professional sports regulators to decide on medical grounds when a female-bodied athlete can compete against born-female-bodied athletes"?

Expand full comment

Possibly he meant the "lock female prisoners up with male rapists who totally insist they are now women" thing.

Expand full comment

Thanks for pointing this out. There’s also the very perceptible shift in focus, in retrospect, from the Normal Gays being the major social political football 5ish-10ish years ago, to us (transgender folks) increasingly becoming the political football. Kinda sucks most of the time, I don’t particularly like my existence being politicized, and I’m looking forward to this phase passing us by... in another 5 years? Heh, maybe, a girl can dream.

Expand full comment

"Normal gays," as the name suggests, became perceived as *normal*. Lots of reasons for that; I suspect an underrated one is that a lot of them simply aged out of activism and edginess (and so did their most hardcore opponents).

"Transgender" (which, you may agree, is a big umbrella with a lot of factors and not terribly useful to discuss as one phenomenon, but so goes mainstream discourse) hasn't hit that, yet. There's enough supply of Tiktoks, Tumblrs, and whatever's going on in Canada and Scotland to keep the public perception firmly planted in weird/crazy/dangerous instead of "normal."

If you could find a way to reduce the volume of activists that live for the fight, and increase the representation of normal people *that happen to be trans*, I think the same process would occur: you can be peaceful and something else becomes the football.

Expand full comment

This is an interesting tangent that I’m surprised didn’t crop up during the recent conversations on r/theschism… you might be onto something there.

Expand full comment

"giving them some hope that their own bodies won't develop into wrong-gender personal nightmares"

Hope in the form of surgery, sterilization, lifetime use of off-label meds, the inability to experience orgasm, and lord knows what other side effects?!

Wouldn't the wiser path be NOT to tell anxious children that if they find puberty uncomfortable they "may have been born in the wrong body"? (the gnostic creationist myth of our time)

Wouldn't it be wiser and more hopeful to be honest and say we all hate our bodies at times, but that self-acceptance is the true path to peace and maturity, that needing strangers to "validate our identities" is another form of servitude, and that "changing sex" is as impossible as a lion becoming a mouse?

Also, the whole "do you want a dead son or a live daughter?" emotional blackmail thing is really shameless manipulation and has caused so much unnecessary damage, as we are now learning through the growing ranks of de-transitioners.

Expand full comment

I have trans relatives. What you're saying is not consistent with their reality. It's not about "anxious children" finding "puberty uncomfortable." In fact my sister didn't realize until quite a bit after puberty - but in hindsight it was obvious, just not even thinkable at the time.

And it's not about "changing sex." Some people simply were socially assigned a gender different than their brain is wired for, and it needs to be corrected as best as possible.

https://en.wikipedia.org/wiki/Detransition#Reasons says that only a small percentage of transitioned trans people de-transition, and very often it's due to external pressure, and very often they re-transition.

When you talk about sterility and orgasm you're making assumptions about the surgical choices made by individual trans people. In many cases your assumption will be incorrect.

Expand full comment

sorry i knew i shouldn't have taken the bait, bad judgment on my part.

i just don't believe in the concept of "gender" (and i think transmania is the frontal lobotomy of our time), so any discussion bw us is like an atheist debating a christian fundamentalist, mostly pointless.

my bad, all apologies

Expand full comment

None of what you just wrote invalidates my sister's truth. She was raised as a man and - in this society - she was and is a far better fit for a woman.

"Gender" is a social construct and can't be rigorously pinned down. I'm genuinely curious: Are you equally comfortable being called "ma'am" or "sir"? My sister is not; for her, gender is real. As long as (some/many) social interactions are gender-flavored - which, surely you don't deny - then the concept of "misgendered" is meaningful.

Expand full comment

Not believing in the concept of gender sounds to me like the typical mind fallacy. Some people don't have much of an internal sense of gender. I'm one of them, and apparently you are too. Other people do have a sense of gender, and for many of them it's hugely important that their bodies look a certain way, or that other people call them by the right names and pronouns. I can't say I understand it, but I don't have to, because it's not my life.

Expand full comment

"treat trans people like people" - I think everyone's on board with that.

"let's reduce the chance of teen suicide..." - citation needed. It's a nice talking point, though.

"allow professional sports regulators..." ...who happen to be under tremendous pressure from loony activists? Also, claiming that trans women are "female-bodied" is begging the question, and AFAICT, the realistic answer to where they can fairly compete is "dressage, sailing, curling, shooting... and that's about it".

Also, you've forgotten the parts where terms like "mother" and "woman" are to be struck from the vocabulary of medical texts, where women are supposed to accept dudes in women's dressing rooms and toilets, where male sex offenders can demand to be placed in women's prisons etc.

Expand full comment

Re suicide, https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00399/full talks mainly about homelessness but cites others showing that homelessness is extra-bad for LGBTQ kids.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7578185/ studies supportive mothers vs. suicidal ideation - 3X improvement.

https://transpulseproject.ca/wp-content/uploads/2012/10/Impacts-of-Strong-Parental-Support-for-Trans-Youth-vFINAL.pdf shows parental support vs. lots of wellbeing factors (but not suicide).

https://www.reuters.com/article/us-health-transgender-suicide-rejection/for-trans-people-family-rejection-tied-to-suicide-attempts-substance-abuse-idUSKCN0YI22T points to a paywalled article but says that high family rejection is 3X as likely to report a suicide attempt vs. low family rejection.

"loony" activists - citation needed. In any political wedge issue, the other side will seem loony - that's the manufactured point.

Re sports, I was undecided until I saw that there were actual numbers and medical-based restrictions on participation rules. At that point I figured that political pressure on the sports regulators in either direction was suboptimal, and I've seen way too much political pressure that is anti-trans. I meant "female-bodied" from the point of view of the sport - e.g. muscle strength for most sports, so not begging the question at all.

"dudes" in women's dressing rooms and toilets - isn't that begging the question?

Male sex offenders in women's prisons - stats/citations? Also, compare and contrast with the harm from an effeminate person with female breasts being forced into a men's prison.

Reality is that trans people exist, and we can either treat them in destructive ways, or we can try to find ways to treat them like people rather than problems. I recognize that it's a real values issue - how far should society go to treat a small percentage of people like people rather than problems? Real legitimate arguments could be made against the ADA, for example (and probably were at the time). But I prefer societies that are willing to change and adjust in order to avoid harm to a small percent of their members. All of us are in the small-percent category for one category or another, and eventually we'll age and need extra accommodations in that way at least.

It seems like you're focusing mainly on problems and costs to society, with some talking points that might have originated in moderate-conservative media. If you haven't talked with actual trans people about these issues, I recommend it - you might get a new perspective on some things.

Expand full comment

Don't know that the stats have caught up to it yet, but regarding "male sex offenders in women's prisons", my understanding is that the Scottish First Minister resigned over basically exactly this happening, at approximately the same time she introduced a bill to allow your gender to be "whatever you legally declare it is". Last month, apparently; key name to search if you're curious is "Isla Bryson".

As for comparing and contrasting the harm with sending them to a men's prison, it seems to me there's a certain amount of "best of a set of bad options" here:

Man in women's prison -> lots of additionally-traumatized women.

(Trans-)Woman in men's prison -> one very-additionally-traumatized woman.

(Trans-)Woman in solitary confinement -> one very-additionally-traumatized woman, and significantly more prison resources required.

Expand full comment

"Man in women's prison -> lots of additionally-traumatized women.

(Trans-)Woman in men's prison -> one very-additionally-traumatized woman."

That's assuming trans women are more likely than cis women to be rapists. Because there are already plenty of rapists in women's prisons, and most of them are cis.

Expand full comment
Feb 23, 2023·edited Feb 23, 2023

The thing where most of the people talking about trans issues are total loons, and where there's a real issue with it replacing tattoos and being goth as a thing for kids to rebel with, except some of them end up with their tits or dick cut off instead of a Chinese character on their wrist.

It is well beyond silly and harmful.

Expand full comment

(For some data: Support for the idea that one's gender can be different from one's sex assigned at birth is at 38% (vs 60% opposed) in the US, and slowly dropping for at least five years, per Pew. It is neither a majority position, nor a fringe position. Numbers vary for related issues (eg, more support for things like anti-discrimination legislation and using new preferred pronouns, less support for things like allowing minors to transition and trans participation in athletics of the gender they identify as).)

Expand full comment

On Trump: I disagree that the Republican Party has not moved on from Trump. They have moved on from him personally. They haven't shifted very far from him politically, but he was never nearly as far as his opponents believed from the center of gravity of the Republican Party anyway. I think DeSantis fills the role you attributed to "someone like Ted Cruz" in your initial predictions: he finds a happy compromise between Trumpishness and establishment Republicanism that most people on the right are happy to shrug and go along with.

On AI science: A lot of papers in materials science theory basically come down to "We simulated X property of Y material using method M and parameters P. We got value Z, which can be compared to values Z', Z'' and Z''' computed by previous methods or measured experimentally. In conclusion, method M is pretty good." The ability to read the existing literature to figure out the most important X-Y-M combinations to simulate, and the ability to write the actual paper, are the only things stopping this whole process from being automated, but I think it could probably be done reasonably well right now.

On AI movies: 40% seems low for "a short cartoon that kind of resembles what you want", while 2% seems high for "as good as existing big-budget movies". Turning a short prompt into a short cartoon script is within GPT's reach, turning a script into a series of images is within Stable Diffusion's reach, and the only problem is replacing an unrelated series of images with a continuous and consistent animation. I'd give 95% probability someone is going to manage to pump something like this out since it's the obvious next step.
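
For concreteness, here's a rough sketch of what that prompt-to-storyboard pipeline might look like (illustrative only: it assumes the openai and diffusers Python packages and API access, the model names are just common examples, and stitching the stills into a consistent animation is left as the hard part):

```python
# Illustrative prompt -> script -> storyboard sketch; not a working product.
# Assumes the `openai` (legacy ChatCompletion API) and `diffusers` packages.
import openai
from diffusers import StableDiffusionPipeline

openai.api_key = "YOUR_API_KEY"  # placeholder

def write_script(premise: str) -> list[str]:
    """Ask a chat model for a short scene-by-scene script, one scene per line."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write a six-scene cartoon script for: {premise}. "
                       "Give one visual scene description per line.",
        }],
    )
    return [ln for ln in resp.choices[0].message.content.splitlines() if ln.strip()]

def storyboard(scenes: list[str]):
    """Render one still per scene; making the frames consistent is the unsolved part."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return [pipe(f"cartoon style, {scene}").images[0] for scene in scenes]

stills = storyboard(write_script("a Star Trek / Star Wars crossover"))
```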

Expand full comment
author

I don't think they've moved on from Trump; the prediction markets still have him tied for most likely 2024 nominee, and I think most of the Republicans who ever liked him still do.

Maybe we're disagreeing on what would qualify as this short cartoon; I'm imagining a short cartoon version of the Star Trek / Star Wars crossover as being ~5 minutes and having animation quality the same as a mediocre cartoon on YouTube right now.

Expand full comment

I thought DeSantis had overtaken Trump in the prediction markets, but I just checked metaculus and you're right, they're currently tied at 45% each (and I don't care to check any others). I think that Trump's chances are being overestimated by people who tend to live in non-Republican bubbles; I think his chances are very low. The feeling I get is that people who liked Trump still like Trump but think that it's time for someone with vaguely Trumpish views but without Trump's idiosyncratic personal flaws. The only argument I ever see in favour of Trump as a nominee is "revenge". But since my argument comes down to "screw prediction markets, trust my personal feelings" feel free to disregard it.

I don't think we're disagreeing what would qualify. Given the crappy quality of a mediocre YouTube cartoon (say, Wolfoo or Cocomelon) I think that might be even more achievable.

The current state of the art is that infinite Seinfeld episode on Twitch. But that's not really the state of the art; the animation is deliberately crappy and the scripts don't seem to be as good as what you can easily generate with ChatGPT. Plug ChatGPT into some kind of modern character animation engine and you're 80% of the way there already.

Expand full comment
Feb 22, 2023·edited Feb 22, 2023

The issue for the GOP is that the DeSantis/Trump split is basically a college/non-college split. So yes, on Twitter, every smart conservative who was pro-Trump before 2020/2022 is making reasonable arguments about moving past Trump.

OTOH, in rural Wisconsin or the exurbs of Texas, where a majority still think Trump won the 2020 election, how are they going to react when Trump asks in a debate, "raise your hand if you think I won in 2020," and DeSantis can't raise his hand, because if he does, why is he running against the rightful POTUS?

I also think national primary polling is not only quite non-predictive, it's likely missing Trump voters just like national polls did in 2016 & 2020.

Expand full comment

Should be noted that while Trump was indeed underestimated in national *general* election polling in 2016 and 2020, he was actually overestimated in the 2016 primary polling pretty consistently; it was Ted Cruz who over-performed then. (Trump's victory has been misremembered as a landslide, because the polls thought that there would be a landslide and because it looked impressive on the map; in reality Trump v. Cruz was the closest Republican primary, by pretty much any metric you want to use, since Ford v. Reagan went to a convention in 1976).

More generally, in national Republican primaries, the candidate who out-performs their polls is always the hyper-socially-conservative, evangelical-religious candidate (Huckabee '08, Santorum '12, Cruz '16). Neither Trump nor DeSantis seems like a very natural fit for these guys.

(Also -- while I think you're right generally and there's still a lot of support for Trump -- your specific examples are pretty bad. Trump lost rural Wisconsin and the Texas exurbs in 2016, and while Trumpy primary candidates do well in the former nowadays, the Texas exurbs are still electing Dan Crenshaw and Wesley Hunt. *Also*, candidates who denied Trump won the 2020 election did fine in Republican primaries in 2022...in head-to-heads -- but 'hard MAGA' was almost always the largest faction in multi-candidate races, which were more common. But *also*, turnout is always higher in presidential primaries, which might be advantageous to Trump. But *also* when one side has a contested primary and the other doesn't the electorate is much more moderate because it attracts people who always vote no matter what's on their ballot -- these are always highly educated centrists, and they're the reason Sanders crashed so hard between 2016 and 2020, even as his beliefs seemingly became more popular among the wider public.)

Expand full comment

My bet is that Trump is likely to age out of being a plausible candidate. Not dead, not incapacitated, just no longer energetic enough to attract new voters. I don't have a strong opinion about whether he'll be running.

Expand full comment

While this wouldn't make me unhappy, it's hard to see him or his supporters accepting that idea as long as Biden remains a viable candidate.

Democrats, of course, have been muttering about Biden's age with respect to 2024 more or less constantly, but so far without a lot of success in naming a consensus alternative.

(538 had a podcast that was particularly hilarious: they went round the table naming obvious and uninspiring alternatives, and Bernie was like the third or fourth name suggested. Leaving aside one's politics, Sanders isn't who you go to if the problem is Biden being too old.)

Expand full comment

Would Trump need new supporters to win? Could he get them if he's largely coasting on momentum?

Expand full comment

I don't really know, but something like 2016 where his opponents can't agree on an alternative to unite behind at least seems possible.

DeSantis is obviously trying to position himself as that alternative, but I don't know if he'll succeed or if his chosen lane overlaps too much with Trump's own.

Expand full comment

Scott seems to have left off the "just for fun" section:

> 1. I actually remember and grade these predictions publicly sometime in the year 2023: 90%

> 2. Whatever the most important trend of the next five years is, I totally miss it: 80%

> 3. At least one prediction here is horrendously wrong at the “only a market for five computers” level: 95%

1 happened, obviously. (Although the post in the subreddit probably made this a foregone conclusion)

2. If you think "the most important trend" is Covid, then I disagree that he "totally missed it". As he said, he was somewhat off in how it would play out, but just the fact of having that discussion is impressive enough to not be a total miss.

3, I think Roe overturned at 1% qualifies here.

Expand full comment

Seems like you're good at predicting the state of tech but bad at predicting mass human behavior. Likely because your own circle of acquaintances is so unlike the median.

Expand full comment

Not to pile on too much more when you've already admitted defeat on the politics front, but the phrase that stood out to me even though it didn't make it into a formal prediction was this:

"no, minorities are not going to start identifying as white and voting Republican en masse."

I'm a demographer who has done a little informal work on this and I disagree pretty strongly. I don't know that they're going to start voting Republican "en masse," but every prediction about the rise of minorities in the US turns heavily on Hispanics being a minority. At this point, 90% of Hispanics in Texas and Florida identify as white and their right-ward shift was a major bright spot for Republicans in 2020. In two generations, I don't believe we will see the history of Hispanics or Asians in the US much differently than we see Irish or Italians. They are simply new immigrant groups who are in the early stages of assimilation (a process Americans naively assume can happen overnight) and will be heavily relied-on conservative foot soldiers.

Blacks and Native Americans are the true minorities with staying power in the US and neither are substantially increasing in number. The really interesting thing to be seen, imho, is how new African immigrants integrate - how far they go in adopting or rejecting the identities of Generational African-Americans - and how much they obscure the distinction between immigrants and African-Americans. They've been able to hold both identities so far and take full advantage of the DEI push - to the point where the majority of Black students at Ivy League schools are not descendants of enslaved persons. But there is budding unrest over that fact.

Expand full comment

Should be noted that what percentage of Hispanics are "white Hispanics" strongly depends on exactly how you phrase the question -- the Census Bureau recorded 20% of Hispanics identifying 'white alone' as their race, while the American Community Survey recorded 65% of Hispanics calling themselves white: https://en.wikipedia.org/wiki/White_Hispanic_and_Latino_Americans#:~:text=As%20of%202020%2C%2062%20million,Latinos%20self%2Didentified%20as%20white.

In the 2022 midterm, Democrats improved on 2020 outright with white voters (very slightly, but still), while Republicans made large gains with Asian and Hispanic voters, and black turnout crashed. (The source for these claims is the Edison Research exit polling, which goes back to the early 1990s and can be used to study a number of fascinating changes to the American electorate; there are other demographic surveys, like Pew, but all of these suffer from being uncheckable.) But all of these groups are still very strongly, >60% Democratic. In general, in the aftermath of Obama no longer being a prominent public figure and memory of the Civil Rights Movement fading as those politically active in the 1960s have been dying, minority turnout has been declining relative to white turnout in the post-2016 world. (Only relative to white turnout, though: 2020 smashed turnout records among every demographic. 2012 remains the only cycle in American history at which black turnout was *higher* than white turnout, though.)

Expand full comment

Hispanics in two states (FL & TX) where they already had a foothold in state-level GOP politics (because of the Bush machine in TX and Cuban voters in FL) are more Republican. Meanwhile, exit polls still have Democrats winning around 60-65% of Hispanic voters and, I think, an even larger share of Asian voters in 2022.

Yes, there was some rightward movement in some large cities and in Florida. Outside of that, not really.

Expand full comment

Good on you. You and Paul Krugman are the only opinion givers who self-grade.

Expand full comment

The FiveThirtyEight people do it.

And a big reason that I subscribe to the Athletic, despite being interested in only one sport of the many that they cover obsessively, is that their writers and analysts make lots of predictions and always circle back to self-grade.

(Which is a wholly new thing in the world of sportswriting, in fact it is pretty close to an inversion of the entire previous concept and practice of sportswriting.)

Expand full comment

Vox does it too...

Expand full comment

To help demonstrate Scott's difficult task, I will throw out a few bold predictions of my own, although with a ten-year timeframe.

1) California will not be a US state in 2023 - 40%

1A) California splits into multiple states - 15%

1B) California secedes - 20%

1C) United States completely dissolves - 5%

2) At least one country uses AI in some way to eliminate the concept of money by 2033 - 25%

3) A new religion started in the 2020s (more unique than just a new Pentecostal denomination) has at least 5 million adherents by 2033 - 25%

Expand full comment

"1) California will not be a US state in 2023 - 40%" - I assume you meant 2033, not 2023 here?

Expand full comment

... yes. of course I would make a typo (and I can't fix it now). The rest of the numbers do look like what I meant.

Expand full comment

There is an edit button under the three dots under your comment.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

These are really interesting ideas, but I'd place the odds for all of them *much* lower than you.

1. The U.S. completely dissolves: 1%. Maybe over a 50-100 year timeframe, but not in just a decade. This only happens by 2033 if there's a hot war with China and/or Russia, multiple 9/11-scale terrorist attacks within a very short timeframe, an enormous and well-coordinated insurgency movement with the overt support of >25% of the populace and multiple state governments, or some kind of truly apocalyptic natural disaster (e.g. an asteroid impact, a supervolcanic eruption, a plague on par with the Black Death).

California splits into multiple states: 0.3%. This idea's been around for decades, and despite a fringe minority of *very* vocal supporters, it's never really gotten off the ground. I really don't see that changing in the next decade. Still, I suppose it's always a possibility, and the Maine example does prove that there's some (VERY distant) precedent for regions seceding from states while remaining part of the Union.

California secedes: 0.2%. There's an extremely slim chance of this happening if the far-right takes control of Congress and the White House *and* tries to push through draconian reactionary policies at the federal level, rather than sticking with the tried-and-true "leave it up to the states" approach (for instance, federal bans on abortion and gay marriage). Even then, it would be an uphill battle for secessionists that would probably fail.

That brings the overall odds of California not being a U.S. state to a grand total of 1.5%, about 25x lower than your estimate. I also think it's funny that my order of likelihood for the three possibilities was the reverse of yours. While I think it's very unlikely that the U.S. will dissolve altogether, it seems even less likely for a single state to break apart or secede from an otherwise intact and unified country.

2. Not sure how to assess this one, because I'm not sure what you mean by "eliminate the concept of money." If you mean that modern government-backed cash is replaced by some kind of A.I.-coordinated cryptocurrency, then I'd say 2%. It's unlikely that such a huge change will happen within the next decade, but it's technologically feasible. Also, two countries (El Salvador and the Central African Republic) have already adopted Bitcoin as official government-sanctioned legal tender, so I could see a similarly small and poor country adopting an A.I.-based currency.

If you literally mean that the entire concept of currency will be replaced by a command economy controlled by a Kantorovich-style central planning A.I., then I'd say 0.001% if I'm being generous. I'm not sure that such a system would even theoretically be possible, given the Economic Calculation Problem. And even if it is possible in theory, it would require technology that's probably centuries ahead of what we have now. Back in the 80s, economist Alexander Nove showed that it would take the world's best supercomputer millions of years to perform the calculations necessary to fully plan out even a small nation's economy. Computers have gotten much better since then, but they'd have to get several orders of magnitude better than they are now to make a Kantorovich system feasible. (And Nove was a socialist; capitalist economists argued that he was too generous and that it would simply be impossible regardless of how much time and how much processing power a computer had.)

3. I'd put this one at 0.1%. Even Islam and Mormonism didn't spread that much within their first decade of existence! If this happened, it would be an unprecedented historical and sociological anomaly. For reference, Wicca's had about 70 years, and it still only has around 800,000 adherents. Still, I suppose the modern world is a weird place, and the sheer size of the overall global population (as well as the interconnectivity provided by modern transportation and communications tech) means that getting 5 million adherents isn't *quite* as impossible of a feat as it would've been for growing religions in earlier eras.

Expand full comment

I'd put "California splits into multiple states" way ahead of the other possibilities. There are actual people who want this; there are billionaires who keep trying to make it a referendum. They get slapped down by the courts, but they exist and they're serious-ish political actors. But every other thing here seems like something no serious actors support.

I think the odds of California splitting into multiple states over 10 years is maybe around 1%-5% or so, but the rest of this has probabilities of under 0.1%.

Expand full comment

Very briefly:

1. Scott thinks the era of political unrest in America has peaked. I disagree. I think there will continue to be significant political stresses. I agree the most likely scenario for secession in the short term is "President DeSantis tries to ban abortion nationwide, and the opposition reaches critical mass before he backs down". Marjorie Taylor Greene is promoting a "National Divorce" on Twitter again, so it might even be a bipartisan movement to split.

2. A Kantorovich-style system of the AI allocating resources would count, as would a Gesell-style system of decaying money (I think that is too complicated to implement without AI). Just using Bitcoin would not.

3. Depending on how you count, Falun Gong might have met that threshold of growth in the 1990s.

Expand full comment
Feb 22, 2023·edited Feb 22, 2023

I'm old enough to remember America & JesusLand memes in 2004. There's always a group of people wanting to split.

I say this as a hardcore pro-choice person. Even if DeSantis passes a national abortion ban and it gets upheld, the future likely isn't California seceding, but basically making itself a sanctuary state. Sure, if the federal government wants to spend time and money raiding abortion providers, they can do it, but they'll get no help or backup from the state or local government.

Remember, right now, multiple states have massive cannabis businesses, despite marijuana still being a Schedule I drug nationally.

Expand full comment

Re religion, social media and chatbots could help spread a new religion a lot faster than before. Specifically, chatbots on social media programmed to convince humans of a certain belief system - humans are easier to hack than we mostly think we are, and I suspect even today's chatbot technologies could be adapted to be effective.

Re your mention of insurgency possibly leading to US dissolution - serious question: would a coordinated attempt in 2024, 2028, or 2032 to steal a presidential election count as insurgency in your book? 1) if significant (e.g. including state election officials and/or governors) but unsuccessful; 2) if successful to the point that presidential succession was disrupted?

If it would count, how would you rate the likelihood?

If the likelihood is non-zero, would this kind of bureaucratic insurgency be the kind of thing that might lead to dissolving the US, and with what probability?

Expand full comment

If California is still a state in 2033, how would you grade this prediction? On the one hand, you think it'll probably (60%) still be a state. But your hot take is that there's a 40%(!) chance that it won't be, which I imagine is much higher than most people would estimate.

Expand full comment

I love that you do this -- please keep doing it. Epistemically inspiring.

Expand full comment

I noticed you didn't grade your meta-predictions!

"1. I actually remember and grade these predictions publicly sometime in the year 2023: 90%"

Clearly you did!

"2. Whatever the most important trend of the next five years is, I totally miss it: 80%"

I think almost everyone would agree that COVID was the most important trend of the 2018-2023 period. Most people would probably put the Ukraine War in second place. You didn't mention either of those: You brought up the possibilities of an artificial pandemic and a major Middle Eastern war, but the possibilities of a natural pandemic and a major Eastern European war didn't come up at all.

3. At least one prediction here is horrendously wrong at the “only a market for five computers” level: 95%

I think giving 1% odds to Roe v. Wade is wrong to the same extent as the "five computers" example.

That's 3/3 correct, you're very good at making predictions about your predictions!

Expand full comment

Neither COVID nor the Ukraine War are trends. They are events. Whether they will even lead to any trends is debatable, since neither is over yet.

Expand full comment

I can see the event vs trend distinction being important since trends are more predictable than events.

Expand full comment
founding

I don't think you should have gone to 90-95% on Roe v Wade being overturned. Nine justices is too small a group for statistical inevitability; you've still got quirks of individual behavior at work. Even if you assume a 6-3 "Republican" SCOTUS, Republican-nominated justices have traditionally been only ~80% reliable at voting to overturn Roe v Wade. So if you take the 2018 court, add one new Trump nominee replacing a retiring conservative judge, and one new Trump nominee replacing a dead liberal judge, you're probably only at 65% for overturning Roe v. Wade. And there was no guarantee that we'd see a dead liberal judge before we got a Democratic president, so probably knock that down to 50% at most.
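
For what it's worth, that ~65% figure is what a simple binomial calculation gives (a sketch, assuming each of the six Republican-appointed justices independently votes to overturn with probability 0.8 and the three liberals all vote no, so at least five of the six are needed):

```python
from math import comb

p = 0.8    # assumed per-justice probability of voting to overturn
n = 6      # Republican-appointed justices on the post-2020 court
need = 5   # votes needed for a majority, assuming the three liberals vote no

prob = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))
print(round(prob, 3))  # 0.655 -- roughly the 65% quoted above
```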

1% was way too low, but 90% would have been way too high.

Expand full comment

I interpreted that section as Scott inverting the sign. E.g., he's saying he should have given a 90-95% chance that Roe stood instead of 99%.

I think your numbers are much better.

Expand full comment

Here’s a thought ... highly evangelistic chat bots, deployed for changing people’s opinions (most likely during an election year).

This is something that seems like it could happen, and also something that could potentially lead to a large freak out by whatever side is worse at using it?

Further thought: actively hacking/hijacking existing chat bots people have bonded with to deliver evangelistic payloads

Expand full comment

Not bullish enough on AI. China will absolutely invade Taiwan; revanchism is what every boomer dictator does when they realize their country is floundering and hasn't reached escape velocity to become a global hegemon. 100% correct on crypto and everything else.

Expand full comment

One quibble, perhaps addressed already: US Politics claim #12. It reads: At least one major (Brady Act level) federal gun control bill passed: 20%. You resolved it as "not having happened". I think it did, and the most important thing it did was show that gun legislation (ANY gun legislation) can pass today's Congress - because the substantive stuff wasn't all that dramatic. See e.g.:

https://www.npr.org/2022/06/25/1107626030/biden-signs-gun-safety-law

6/25/2022: "President Biden on Saturday signed into law the first major gun safety legislation passed by Congress in nearly 30 years..."

Expand full comment
founding

The Brady Act applied to basically all firearms purchases. A law that applies only to a small minority of firearms purchases is hard to claim as "Brady Level".

Expand full comment

I see no predictions on embryo selection, genetic editing..., why?

Expand full comment

>11. Psilocybin approved for general therapeutic use in at least one country: 30%

This did happen: Australia recently announced the laws are changing as of July. If the prediction had to come true before the beginning of 2023, though, you'd be correct.

Expand full comment

To be pedantic, the *law* isn’t changing, the *regulation* is.

Expand full comment

My mistake - you're correct.

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

"nobody being willing to say the spectacular achievements signify anything broader"

I may be quibbling, but this part seems clearly wrong. I don't believe such people are correct, but it's completely clear that they are out there, starting with Blake Lemoine.

Maybe they are few enough to be in the noise, and it certainly doesn't merit reducing the grade below A.

[ETA: I was too hot off the mark; I see that this is from a general paragraph without specific predictions and confidence levels. Never mind.]

Expand full comment
Feb 21, 2023·edited Feb 21, 2023

>AI can make deepfake porn to your specifications (eg “so-and-so dressed in a cheerleading costume having sex on a four-poster bed with such-and-such”), 70% technically possible, 30% chance actually available to average person.

This is borderline already technically possible. If you go to the Stable Diffusion subreddit, you can see lots of high quality images people have had the AI generate of whatever they want. With a few seconds of work, the AI will generate images to your specifications; it struggles a bit with complex specifics (but not nudity - I have to make sure I have "nude, nsfw, etc." in my negative prompts or many models will spit out nudes when you don't want them). For example, here's a screenshot of [Marilyn Monroe from when she was a Jedi in Star Wars](https://imgur.com/a/oZ5AXRn). And note, with some effort you can significantly improve images, e.g., highlight portions you want to change, try 20 different variations and see which one you like the best, etc., so this is an example of a couple of minutes of work with current AI, from someone who hasn't practiced that much yet.
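
For anyone curious what that workflow looks like in code, here's a minimal sketch using the diffusers library (illustrative only; the model name, prompts, and settings are just common examples, and it assumes a CUDA GPU):

```python
# Minimal text-to-image sketch with a negative prompt, as described above.
# Assumes the `diffusers` package and a CUDA GPU; model and prompts are examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="Marilyn Monroe as a Jedi, Star Wars, film still, detailed, cinematic lighting",
    negative_prompt="nude, nsfw, deformed hands, blurry, low quality",  # steers output away from unwanted content
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("monroe_jedi.png")
```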

Expand full comment

I think he's saying videos? But agree, I don't think even that's going to need 5 years. Also, that image is hot!

Expand full comment

Ah yes, you're probably right about him meaning video. 70% confident in high-quality video generation in 5 years might be a bit low, but seems like a reasonable prediction. For images though I'd go in at 90-95% for technically possible in 5 years.

Expand full comment
Feb 22, 2023·edited Feb 22, 2023

I don't think he can mean videos, because the previous prediction gives only a 40% chance of an AI being able to make a "short cartoon clip that kind of resembles what you want", which would be inconsistent with a 70% chance of an AI being able to make realistic pornographic videos to the user's specifications.

Expand full comment

Stable Diffusion seems pretty bad at sex though (and four-poster beds).

Expand full comment

Yeah, if images count this is already a thing. There are communities making textual inversions for hundreds of celebrities which let you specify them (and very popular celebrities work without the inversions.) 'Having sex' is a bit complicated, but again there are LoRAs and hypernetworks dedicated to pictures of various sex acts (including far-fetched things like tentacles) which seem to do fairly well if their sample outputs are anything to go by. There's probably a lot of cherrypicking involved there, but it clearly works.

Amusingly, 'four-poster bed' is the least reliable part of this prompt, since there hasn't been a community effort of guys dedicated to 'creative horniness' optimizing on the problem. It would be straightforward to solve (beds are an inherently easier problem than, say, hands), but you'd have to do it yourself or just accept some variability in bed type.

Expand full comment

I tried some poetry with Bing Chat. It helps that it can actually search for information on the poets and their style. This was my favorite, in the style of Lord Byron:

I saw her once among the festive crowd

That filled the hall with laughter and delight

She shone more bright than any star or cloud

That graced the splendid canopy of night

Her eyes were like two jewels of the deep

That sparkled with a thousand rays of fire

Her lips were like two roses in their sleep

That breathed a fragrance sweeter than a lyre

Her voice was like a music of the spheres

That charmed the ear with every word she spoke

Her smile was like a sunbeam that appears

To chase away the gloom with every stroke

I longed to speak to her, but dared not try

For fear that she would scorn my humble sigh.

Expand full comment

Even on a basic semantic level, that isn't very good. Clouds don't shine (and especially not "bright"), a lyre isn't known for its fragrance, smiles don't have "strokes", neither do sunbeams, etc.

Expand full comment

I agree. I think, though, that a lot of people enjoy (human-written) poems that sound poetic, flowery, or deep, but don't actually have much substance on further analysis.

I definitely do this with music lyrics. I couldn't even tell you what the verses of most songs I enjoy say, but I often remember short phrases from them that happened to sound nice.

Expand full comment

Yes, it's a bit like the mistakes people make when they try to imitate old grammar, they mix up verb forms and say stuff like "I art" because it sounds old timey, even though it's like saying "I are".

Expand full comment

I think this criticism is a bit literal for poetry, which is often metaphorical. A lyre's melody may not be fragrant, but it can certainly be "sweet". One might enjoy the sensation of a sunbeam as they would a person's caress, and get a similar sensation simply seeing a smile.

I could give similar criticism of genuine Romantic poetry:

And the midnight moon is weaving

Her bright chain o'er the deep;

Whose breast is gently heaving,

As an infant's asleep:

A moon can't weave! And certainly not with a chain. An ocean doesn't have a breast!

https://www.poetryfoundation.org/poems/43846/stanzas-for-music

Expand full comment

The word choices pointed out in the Bing Chat poem seem more random. The moon's long reflection on water looks woven: https://qph.cf2.quoracdn.net/main-qimg-0cc559868e0f27c2f3feb44d193e76f4-lq

Expand full comment

Metaphors shouldn't override the literal meanings of words (or betray ignorance of what they actually mean), because the literal meanings are what metaphors derive their force from. Sometimes writers make mistakes because they use words they don't understand, just because they sound poetical. But there aren't any errors like that in the poem you cite.

In this case the mistakes the chatbot makes are similar to mistakes it makes in other contexts, where it shows that it doesn't have any knowledge of the outside world (for it, "lyre" is just a word it pushes around, not a thing).

Expand full comment

Very impressed by your glimpses of brilliance from 5 years ago.

Expand full comment

You think the social justice movement is less powerful in 2023 than 2018? Was this posted from an alternate universe? (And can I apply to immigrate there?)

To pick one example out of ten million: Land Acknowledgements were the fringiest of fringy fringe ideas in 2018. Not so in 2023.

I suppose that a "movement" ceases to be a "movement" when it takes over everything, and just becomes "the way things are." But I don't think that's what you meant.

To balance out this comment a bit, I'll add something positive: your AI predictions were extremely impressive. I was skeptical, and I was wrong.

Expand full comment

I'm a bit confused about the "AI can write poetry" point. If it's "some language model has produced at least one poem that wouldn't stand out among Romantic poems" I think my confidence in that happening is 99%. I wouldn't be surprised if there were already a couple of poems like that.

Conversely, if the claim is "there will be a reliable way to prompt AI for an original poem that wouldn't stand out among Romantic poems" then my confidence in that is fairly low, maybe 25% or less. At least for the current general-purpose machine learning models. Maybe a specialised AI could do it.

My criterion for "doesn't stand out" is something like: If you show 9 authentic, unfamiliar Romantic poems and the AI generated poem to college students of English or Literature (but who aren't specialised in poetry) then fewer than 1/4 will guess that the poem is AI generated or consider it noticeably worse than the others.
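
(For concreteness, here is a minimal sketch of how that criterion could be scored; the rater and flag counts are made-up numbers for illustration, not part of the criterion itself.)

```python
# Minimal sketch of scoring the "doesn't stand out" criterion above.
# The numbers below are illustrative assumptions, not real survey data.

def ai_poem_passes(flagged_ai: int, total_raters: int, threshold: float = 0.25) -> bool:
    """True if fewer than `threshold` of raters singled out the AI poem.

    flagged_ai: raters who guessed the AI poem was AI-generated, or rated it
                noticeably worse than the nine authentic Romantic poems.
    """
    return flagged_ai / total_raters < threshold

# Example: 40 raters, 7 flag the AI poem. Pure chance among 10 poems would
# flag it ~4 times, so 7/40 = 17.5% still passes the < 25% bar.
print(ai_poem_passes(flagged_ai=7, total_raters=40))  # True
```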


I remembered you predicting that machine translation will be flawless by 2023. I went back to check, and it turns out you did predict it but edited it out shortly afterwards (adding an "if" at the beginning of the sentence).

I'm glad for the edit, but I was waiting 5 years for you to admit you were wrong about this just to see you get an "A" after all, so I'm a little frustrated. Still, your edit was early enough that this is mostly fair.


As I've indicated previously (https://alakasa.substack.com/p/ai-development-and-translators-work), machine translation is *pretty pretty good*; it's not flawless, but then again human translation is not flawless either. The problems with machine translation are often something other than strictly interpreted quality problems. We'll see how more generalist AIs handle them, of course; currently, specialized translation engines still beat ChatGPT (at least) at translation. (Are there any tests of the Bing AI as a translation engine?)

However, in my practical experience as a translator, the improved quality of machine translation still shows up in my work surprisingly little. For some reason (ignorance? cost?) even many firms that offer MTPE jobs (machine translation post-editing; it tells you something that the translation industry has had a specialized term for this job for over a decade now, at least) still don't use the best machine translation engines.

Moreover, I've seen an increasing number of jobs where the companies not only don't run MT beforehand before sending out the jobs but explicitly tell translators not to use any machine translation engines themselves; sometimes the given reason is quality (i.e. machine translation might still lead to subtle errors that a lazy post-editor might be driven to ignore more easily, knowing that it's the machine's fault), but other times they explicitly mention in their instructions that they just plain don't want to give their business secrets to corpora owned by Google and other such large companies (which use them for AI development), presumably because that would give those other companies a business advantage.

Perhaps this is just an indication of what sort of issues are going to be in the way of getting AI to "field use" where it really makes an effect on our lives in other fields of work. As such, one question where it might have been interesting to see Scott's predictions is; how are copyright issues and/or corporate territorial issues vis-a-vis the use of their information going to affect the utilization and development of LLM-based AI? I'm not sure how that would be best expressed as a prediction, though.


> currently specialized translation engines still beat ChatGPT (at least) in translation.

Which ones? I'm looking at the ChatGPT ones in https://www.reddit.com/r/MachineLearning/comments/1135tir/d_glm_130b_chineseenglish_bilingual_model/ and they sound like they beat the pants off the specialized DeepL/Google Translate/NLLB translation models


DeepL, with Finnish. (Full disclosure, I’ve done a few jobs testing/grading their translations.)

I ran a part of Scott’s sperm count post through both just now, and while both were far from perfect, ChatGPT made a bunch of elementary mistakes that immediately scream MT to a Finn: spelling and grammar errors. DeepL avoided those, at least.

I can provide a fuller analysis in a few hours.
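
(In case anyone wants to reproduce this kind of side-by-side test, here is a rough sketch using the official `deepl` package and the pre-1.0 `openai` package; the API keys, model name, and sample sentence are placeholders, and the exact calls may differ with your package versions.)

```python
# Rough sketch: run the same English passage through DeepL and ChatGPT and
# compare the Finnish output by eye. Keys and passage are placeholders.
import deepl
import openai

passage = "An average ejaculation is 3 ml, so total sperm count is 3x sperm/ml."

# DeepL: a specialised neural translation engine
deepl_out = deepl.Translator("DEEPL_AUTH_KEY").translate_text(
    passage, source_lang="EN", target_lang="FI"
).text

# ChatGPT: a general-purpose model prompted to translate
openai.api_key = "OPENAI_API_KEY"
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": f"Translate the following into Finnish:\n\n{passage}"}],
)
chatgpt_out = chat["choices"][0]["message"]["content"]

print("DeepL:  ", deepl_out)
print("ChatGPT:", chatgpt_out)
```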


Scott's blog:

"An average ejaculation is 3 ml, so total sperm count is 3x sperm/ml. Since sperm/ml has gone down from 99 million to 47 million, total count has gone down from ~300 million to ~150 million.

150 million is still much more than 30 million, but sperm count seems to have a wide distribution, so it’s possible that some of the bottom end of the distribution is being pushed over the line where it has fertility implications.

But Willy Chertman has a long analysis of fertility trends here, and concludes that there’s no sign of a biological decline. Either the sperm count distribution isn’t wide enough to push a substantial number of people below the 30 million bar, or something else is wrong with the theory."

Translation to Finnish by ChatGPT:

"Keskimääräinen siemensyöksy on 3 ml, joten kokonais siittiömäärä on 3x siittiöt/ml. Koska siittiöt/ml on laskenut 99 miljoonasta 47 miljoonaan, kokonaismäärä on laskenut noin 300 miljoonasta noin 150 miljoonaan.

150 miljoonaa on silti paljon enemmän kuin 30 miljoonaa, mutta siittiömäärällä näyttää olevan laaja jakauma, joten on mahdollista, että joillakin jakauman alapäässä olevilla on hedelmällisyysvaikutuksia.

Mutta Willy Chertmanilla on pitkä analyysi hedelmällisyys-trendeistä täällä, ja hän päättelee, että biologista laskua ei ole merkkejä. Joko siittiömäärän jakauma ei ole tarpeeksi laaja saadakseen merkittävän määrän ihmisiä alle 30 miljoonan rajan, tai teoriassa on jotain muuta vikaa."

Obvious errors:

It writes "kokonais siittiömäärä" (total sperm count), when it should be "kokonaissiittiömäärä". Not writing Finnish compound words together is a classic MT error, makes text instantly look goofy.

"hedelmällisyys-trendeistä" (fertility trends) should be "hedelmällisyystrendeistä". Not as egregious as above, but still looks silly, kind of like writing auto-mobile or tele-vision.

"biologista laskua ei ole merkkejä" (there’s no sign of a biological decline) would translate literally to "biological decline is no marks". Google Translate 10 years ago level error.

There are also other less obvious weirdnesses. "some of the bottom end of the distribution is being pushed over the line where it has fertility implications" is not really the same as "joillakin jakauman alapäässä olevilla on hedelmällisyysvaikutuksia", which would literally translate to "some [implication: people] on the bottom end of the distribution have fertility effects". A very odd sentence. Note that DeepL grappled with this one, too, though. In general, it's quite literal and does not work too well as a translation.

ChatGPT is quite a bit better at translating into English, which is unsurprising, since its job is *writing* in English in general, and translation is basically a subset of writing. However, it can still produce errors. I translated a few Finnish news stories to English as a test, and in one, there was a description of Putin's speech saying "Instead, Putin thanked Russian entrepreneurs who support his war. He equated the forgotten oligarchs with the 1990s, when Western economic advisors swarmed in Russia. State property was wildly privatized. According to Putin, this is also the West's fault." This gives the impression that Putin referred to the Russian entrepreneurs supporting him as "the forgotten oligarchs", which of course might happen if he just started being very cynical and tried to reclaim the word "oligarch", but that was, as far as I know, not what happened.

(The actual news story states, essentially, that Putin thanked the Russian entrepreneurs and equated unpatriotic oligarchs with the 90s and the Western advisors: "Sen sijaan Putin kiitti venäläisiä yrittäjiä, jotka tukevat hänen sotaansa. Isänmaansa unohtaneet oligarkit hän yhdisti 1990-lukuun, jolloin Venäjällä vilisi länsimaisia talousneuvonantajia.")

Regarding the Reddit thread on Chinese translations, unless I missed it, no-one seems to actually mention *what* the translated text is and where it comes from. Is it some classic text or well-known book that was a part of the training materials, for example?


I now can access Bing, which apparently just uses the pre-existing Bing translator. It produced a fairly awkward literal translation without obvious errors like with ChatGPT.

Feb 21, 2023·edited Feb 21, 2023

That's not the impression I'm getting from that post. ChatGPT seems to be an equal rather than clearly superior - it's a bit better in some places, but a bit worse in others.

(I also suspect it suffers from the same problem DeepL has of dropping words and sentences, which Google Translate is much better about).


I don’t think you follow “right wing culture” very closely, as if you did, you’d notice a growing fracture between Trump Republicans and DeSantis Republicans. The party has absolutely started to move on from Trump.


@Scott Alexander I am curious about the uninsured rate prediction for healthcare.

In the summer of 2022, the rate was 8%. Significantly lower than 13%.

Was the idea conditioned on Trump being able to significantly repeal Obamacare?

The way it's written "As Obamacare collapses" doesn't specify how it would collapse. Regulated access to health insurance is the norm in every other first world country and some of the not so first world ones... it seems odd to take it as a given that ours would collapse without giving an explanation why.

Was the prediction based on Republican control of the presidency and legislature? The inherent contradictions of creeping statism?

I was hoping to see more of an explanation on it than just "I was wrong about #1"


> I don’t know how I even came up with “AI can generate images and even stories to a prompt” as a possibility! I didn’t even think it was on the radar back then!

It was definitely on the radar back then! You're mis-remembering the state of AI in this period. Facebook came out with the bAbI test in 2015 and it was solved shortly afterwards. It demonstrated basic story comprehension that checked several types of mental skill. DeepMind published on an AI that could read the Daily Mail and answer questions about the stories in 2015.

By 2015 GANs were already generating random faces, albeit with obvious distortions and corruptions. By 2016 the faces were small but plausible, by 2017 celebrities had been mastered and by 2018 the tech was essentially done:

https://www.researchgate.net/figure/45-years-of-GAN-progress-on-face-generation-20147-201510-201611-201712_fig2_341699736

These GANs weren't generating images based on prompts but that was a clear next direction researchers were already expressing interest in. So, sorry, I liked your predictions, but this one is not actually as impressive as you find it to be, which is interesting for what it says about our recall of the past.


> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%

Which human teenager? As in, would the cognitive reflection test questions count, if the AI answered them wrong? Certain human teenagers would answer them wrong and others wouldn't.

Also, a human teenager in which situation? I mean, I'd expect a human teenager asked how Dante Alighieri died to answer either "I dunno" or the correct answer if the question comes up in a regular conversation with their friends, but to try and pass a half-remembered guess as knowledge if taking an exam with no (or sufficiently small) penalty for wrong answers or reward for blank answers. (The last time I asked ChatGPT, it said something to the effect of "Nobody knows for sure, but probably either [the correct answer] or old age", never mind he was 56 years old.)

> At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

How seriously do they have to take this? Does it count if a large fraction of the 350,000 do it mostly just for fun, the way certain people read horoscopes in newspapers just for fun? You might want to specify "spend at least $10/month" if you want to only count people taking it at least somewhat seriously.

> Artificial biocatastrophe (worse than COVID): 5%

Do I understand correctly that "artificial" means *both* not-naturally-occurring *and* deliberate, so neither Mongols throwing plague victim corpses over the walls of Caffa nor a lab leak of something like COVID but worse would count because the former fails the "not-naturally-occurring" criterion and the latter fails the "deliberate" criterion?


"AI does philosophy: 65% chance writes a paper good enough to get accepted to a philosophy journal (doesn’t have to actually be accepted if everyone agrees this is true)"

I'd rate this much higher. I think it could have a pretty good shot at achieving that right now, and I wouldn't be very surprised if I learned it already had written a paper that had been accepted.


There are journals in most fields which will publish any rubbish at all as long as you pay the publication fee. I'd be surprised if there's none in philosophy.


Probably, SJR Q1 or similar would capture this


Oh, it for sure could already get published. It is very easy to get any nonsense published in many journals if it hits the right ideological feelies, and like you said, many are explicitly pay-to-play.

It would just need to make sure it identified itself as a minority woman.


I think AI will be able to do a lot with research, including biological research. It may find new truths-- possibly needing to be confirmed by physical research--by finding connections that people haven't noticed. It will SHINE at detecting fraud and poor research design. People will presumably find new and better ways to commit fraud, but just going over what's already been published to find more of what's falsely believed will be important.

It seems reasonable that it might be able to run its own physical experiments. Expect a combination of successes and embarrassing failures.


> Jordan Peterson’s ability to not instantly get ostracized and destroyed signals a new era of basically decent people being able to speak out

I know it's not part of your explicit predictions, but JPete got less decent imo and is squarely in the business of fanning the culture war flames

https://twitter.com/puppygranate/status/1627263463646191616


That’s funny because my thoughts on JP have shifted in the opposite direction to what you’re saying. I didn’t like his first “12 Rules” at all and thought it was meandering and mostly trite, but (and I’m being honest here) what I mostly hated was that it was targeted to young men mostly, who (I thought) would misconstrue it to justify misogyny. I think I was wrong.

I don’t think he’s fanning the culture wars in the way others do at all ... nor do I think the twitter clip is actually misogynistic at all. It’s actually quite reasonable and sensible ... and progressive/liberal in that he strongly advocates for seeing and experiencing people as individuals AND not stopping with the superficial externals.

The stereotypical gender signs and roles are contextual clues. They can change over time and vary in different cultures.

What I appreciate about JP now (and really, I kind of hated him) is how reasonable he really is and pretty respectful in his interactions with people with whom he both agrees and disagrees.

In addition he also appears to be quite willing to admit when he himself doesn’t behave in a way consistent with his stated values.


From the Twitter video:

> But if I don't know whether you're male or female, what the hell should I do with you? [...] The simplest thing for me to do is go find someone else who's a hell of a lot less trouble and who's willing to abide by the social norms enough so that they don't present a mass of indeterminate confusion

I think the veneer of JP's respectfulness is pretty thin. He's making his low-resolution stereotyping someone else's problem with a great air of authority.

It would be different if he dropped the daddy-professor voice and instead innocently said "I'm an old man and blue hair confuses me", but he only goes into full-confidence authority mode when wandering from his religious ballpark into things he is not knowledgeable about, like climate change or modernity.

I mean yes, he is not using insults, but the bar for respectfulness is higher in my opinion.


"There will be two or three competing companies offering low-level space tourism by 2023. Prices will be in the $100,000 range for a few minutes in suborbit."

That's the one space prediction you absolutely nailed: Blue Origin and Virgin Galactic are two companies, and that's the right price range.


Love it. What do you think about AI writing (fiction/non-fiction) bestselling books? :)


You judge no. 2 in US Culture as not having happened? Why, it's happened more than just about anything else on your list.


Which politician are you thinking of?


I do think your AI prediction deserves grade A, but just so you know, to my reading, "Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking," seems strikingly false. There are many people panicking about artists potentially being out of jobs soon right now that weren't panicking about it before. That seems like something that changed people's way of thinking. (Whether it changed it in the right direction is a different question entirely.) But it wasn't something you explicitly listed as a point under your narrative and so not something that affects the grade anyway.

Thank you for sharing these!


"2. No “far-right” party in power (executive or legislative) in any of France, Germany, UK, Italy, Netherlands, Sweden, at any time: 50%"

"Far-right leader Giorgia Meloni sworn in as Italy's prime-minister" - Guardian, October 2022

https://www.theguardian.com/world/2022/oct/22/far-right-leader-giorgia-meloni-sworn-in-as-italys-prime-minister

B-


Being called far-right by the Guardian doesn't count.


Well, firstly, if a European broadsheet is outside your definition, you should set up your prediction conditions more carefully.

And secondly, you could try The Atlantic instead ...

"Italy’s first far-right leader since World War II"

https://www.theatlantic.com/ideas/archive/2023/02/giorgia-meloni-brothers-of-italy-fascism/673000/

Or those wild-eyed radicals at the Atlantic Council ...

"Italy recently elected a far-right leader"

https://www.atlanticcouncil.org/blogs/menasource/italy-recently-elected-a-far-right-leader-heres-how-the-arab-world-reacted-to-the-news/


Right, but being too far right to be called "center-right" does.


Scott grades this prediction as false, which means that he agrees with you.


On re-reading, you are correct, and I am not. Thanks.


Which is weird, because it seems clearly true that Italy has a far-right government! And Sweden has a far-right party in government, though it is not the governing party.


Pretty decent results, and it's encouraging that even with the covid crisis the future was reasonably predictable, with the one big miss (Roe) having almost nothing to do with covid at all.

Also surprised to see "AI life coach" placed so low and so much lower than "AI parasocial romantic partner". It seems like a fairly easy thing to provide to elderly people living alone, for example. Universities might provide them to their students. Tens of millions of people have Alexa so I suspect this service would reach 350k very quickly after being introduced. Is the intent here that, in the case of therapy, the AI would be doing a role that requires a medical license?

"Wokeness has peaked" will be an interesting prediction to evaluate in the ~20% of futures where Harris/Haley/some other "woman of color" is President on 1/1/28. I suppose that's about where I would have put the five-years-out probability in 2018.


I'm proud of myself for agreeing pretty strongly with your crypto prediction back then - I remember thinking "it's money, there's NO WAY it won't end up regulated like money." I have the advantage of working with a lot of regulated companies, which gives me a pretty regulation-aware lens, I suppose.

I *did* think there would first be a huge scandal/disaster involving a crypto exchange and specifically direct harm to many consumers that prompted a wave of regulation, which is true-ish but not nearly at the profile I expected.

On your new predictions: I think of myself as a bit of an AI skeptic compared to some here, so I'm surprised to find some of your AI predictions pretty conservative. The porn generation concept seems inevitable unless AI progress slows dramatically, as does the poetry generation concept. A bit of good prompt engineering and trial and error can already meet some of the goals you have at 70% or less... It would really surprise me if we can't go from three tries to one in five years given the current pace of progress.


>At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

I'll bite on this one: At least 80% of major corporations will be using (specifically constrained) AI in lieu of deterministic chatbots for:

- explaining HR benefits to employees and assisting in enrollment

- assisting in filling out job applications

- dealing with first-line IT issues (e.g. "did you try replacing the wireless mouse's battery?")

- other HR-adjacent things (the more legal, the more human-in-the-loop)

Additionally, at least 80% of major corporations are offering a health / wellness / fitness benefit that uses a similarly constrained AI in lieu of a chatbot to tell people to lose weight / exercise more / drink more water / etc.

Probability of the above I'd put over 90% by Jan 1 2028.

I expect "Powered by OpenAI" or similar derived corporate HR / benefit / IT products to start rolling out in the next two years. We will all have a great time debating the difference between these super shackled not-chatbots and actual AI.


Intended as a reply to several “Scott is wrong about the Social Justice movement being less powerful now, it’s obviously more” comments:

I think whether you think of the social justice (or any other) movement as being more or less "powerful" than 5 years ago depends on whether you model power as "already realised changes to society" or "ability to realise future changes" - the former of which has obviously increased, but the latter of which has quite arguably decreased. SJ-style rhetoric has expanded explosively over the last decade because it has been an easy way for both organisations and broadly Blue-Tribe individuals to signal social virtue without actually sacrificing anything meaningful by making vacuous more-progressive-than-average statements.

Of course, when everybody wants to be ahead of the curve on something an arms race develops, which is how previously fringe statements like land acknowledgements rapidly become mainstream. I think what's changing now is that most of the ground that can be covered with mere words (ie. ones that do not necessarily precipitate meaningful action) has mostly been covered, and large numbers of superficially progressive organisations and individuals are starting to have to reckon with a situation where continuing to signal above-average-progressiveness has exponentially increasing costs, and tamping their enthusiasm down accordingly. How many organisations with a Land Acknowledgement will ever actually cede any of it to today's indigenous remnants? My money is on <10%.

Relatedly, many (myself included) see 2020 as the year the SJ 'wave' broke, primarily due to the summer's BLM protests/riots and how they were covered. Mainstream progressives continued their rhetorical advance into previously fringe territory by campaigning for police abolition, 'justified' mob violence, increasingly strong notions of racial identitarianism, and an inconsistent notion of what COVID controls were acceptable, and it finally became real enough that while there wasn't much visible backlash from within the Blue Tribe, a lot of moderates quietly said to themselves "oh shit, this stuff is actually real now" and decided not to advance any further. Now it's 2.5 years later and even watered down versions of police de-funding have gone nowhere, mainstream media corps have mostly stepped back from highlighting contentious racial identity issues (outside of some much safer pride-esque flag-waving in February), and more generally progressive rhetoric seems to be losing its ability to sweep previously fringe positions into the mainstream. The one arguable exception to this is on trans issues, but even that seems to be stalling out somewhat, and it’s worth noting that even if it does succeed in growing in mainstream appeal, it’s pretty niche for a civil rights fight and as such falls more into the ‘I can keep signalling on this and it won’t affect me’ category.


Some predictions I'd like to see discussed:

Will AI displace a significant number (>10%) of professional or menial office jobs in 5 years (e.g. in accounting, finance, IT, web development, therapy)?

Will Netflix be deemed a loser in the streaming wars?

Will YIMBY zoning changes take effect in most major cities?

Will we see a major antitrust case in 5 years?

Will there be significant police reform?


> We definitely have the technology to do the polygenic score thing. I think impute.me might provide the service I predicted, but if so, it’s made exactly zero waves - not even at the same “somewhat known among tech-literate people” level as 23andMe. From a technical point of view this was a good prediction; from a social point of view I was completely off in thinking anyone would care.

I'm not surprised. You could always go and download your 23andMe data and run plink on it. So impute.me wasn't offering anything genuinely new: various services were doing that anyway. Nor does the history of tech show that there is always a backlash: the most relevant precedent, IVF, was accompanied by extreme dire warnings of doom, and attempts to ban it, and then someone went ahead and did it, and everyone shrugged. So it is not too surprising that the same thing happened with embryo selection: I found it hilarious how Aurea was announced and, after all that heavy breathing and talk of how Doudna was waking up from nightmares about Adolf Hitler, then no major media outlet could be bothered to report on it for like half a year (I think Bloomberg, of all places, did the first real article?).

> The polygenic embyro selection product exists and is available through LifeView. I can’t remember whether I knew about them in 2018 or whether this was a good prediction.

For those a little confused, 'LifeView' is the company formerly known as 'Genomic Prediction', co-founded by Steve Hsu. GenPred launched publicly around October 2018, so February 2018 isn't too long before and Scott might've been hearing about it before. Another startup, 'Orchid', began offering PGS embryo testing (possibly around mid-2021?) but I don't know how many they have done.


I think your economic predictions look somewhat worse than you think in hindsight, in the sense of what you chose to make predictions about and spend your energy thinking about, moreso than the specific predictions you made. You devoted a LOT of space in the econ section to cryptocurrency, a thing which ultimately didn't matter much (even though you did correctly predict that it wouldn't matter much!). This wouldn't look as weird if it weren't for the fact that the economic stories of the past five years have been far more interesting and dramatic than I think the version of you five years ago would have expected.

None of this is meant as criticism of you specifically or anything like that, of course. Knowing what sorts of stories will matter is even harder than making specific predictions about a particular thing. I think it's an interesting question about these types of predictions, though: how do you treat the lack of focus or missing predictions when you are attempting to calibrate?

For reference, some things that reasonably could have been in your predictions that were not include inflation, the economic decline of the UK, the USA's relatively strong performance relative to its allies, the return of protectionist economics, supply chain issues re: batteries, chips, etc., housing supply/zoning issues, etc.

Some other areas that you missed predictions on include climate change (what % probability would you have given the US passing major legislation aimed at reducing its emissions by more than 25%?), Trump attempting to reject election results showing he lost, just generally the US congress becoming far more legislatively productive than it had been previously (including significant bipartisan legislation!), and the Afghanistan withdrawal.


Typo police: "Bitcoin will do find" -> "Bitcoin will do fine"


Also, the sentence containing "Bitcoin will do find [sic]" is a comma splice and should be split up with semicolons instead.


Ain’t nobody got time for semi colons.


I think there's a poor framing on a pair of the predictions:

> At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%

> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 33%

Note that anyone who talks to Replika as a romantic companion would fulfil both of these; it IS advertised as a therapist or coach, and it does have that relationship option in addition to the friend/romance partner options. And Replika's website (only one way of interacting with the chatbot; can also be done through an app) currently has 1.1 million monthly unique visitors. I wouldn't be surprised if both of these should evaluate to true *right now* as written. Hard to judge what percent of that 1.1 million + app users view the bot as a romantic companion, but that really is most of their advertising.
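
(Back-of-envelope on that last point, under the purely illustrative assumption that the web-visitor figure stands in for the whole user base: roughly a third of those visitors would have to treat the bot romantically to clear the 350,000 bar, and counting app-only users would lower that required fraction further.)

```python
# Back-of-envelope: what share of ~1.1M monthly web visitors would need to
# treat Replika as a romantic companion to reach the 350,000 threshold?
threshold = 350_000
monthly_web_visitors = 1_100_000
print(threshold / monthly_web_visitors)  # ~0.32, i.e. about a third
```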


In "Whither Tartaria", you wrote that you don't have taste (and I wondered what you meant by that), so I am surprised that you have favorite Romantic poets.

The amount of material to train on in order to imitate a specific Romantic poet isn't very much, so I am much more skeptical about that than you are.


"I think there will be more of a movement to ban or restrict AI. I think people worried about x-risks (like myself) will have to make weird decisions about how and whether to ally with communists and other people I would usually dislike"

Can someone explain this part? I'm not sure how communists (China?) are meant to help with x-risk. Does it mean "ally with China to restrict AI"?


It means “people worried about workers being alienated from their work might object to AI in particular applications or jobs, and the futurists might support them for mitigating-x-risk reasons”.


I assume it means that the US government might have to take concrete action to ban AI, and that rationalists might have to team up with some pro-regulation faction on the left to make this happen. Probably not communists, since communists have no power in America and teaming up with them could never benefit any political project.

I think it will be funny if the reason we fail to get an AI revolution is because tech-positive libertarians get so worried about being I Have No Mouth And I Must Screamed that they decide to set aside all their principles for just this one thing. Like how 70s environmentalists opposed nuclear power, thus doing more damage to the environment than anyone else in the history of the world.


250 comments in and nobody taking issue with

"4. Average person can buy a self-driving car for less than $100,000:"

Implying that the "average person" could drop $100,000 on anything smaller than a house says something about the demographics of the people/chatbots here. Especially considering inflation since 2018.


“Average person” here is not meant to refer to middle class people, but rather people without special connections to people at a company, people who aren’t in some particularly permissive country, etc.

I, a broke student, “could” drop $300 on an extremely fancy dinner in my area. I “could not” interview Rob Wiblin, even if he didn’t charge me for it.

Basically, he’s just trying to say that the technology exists, faces no unusually strenuous regulatory hurdles to access, is on the market, and is supported by sufficient infrastructure that anybody with the money and inclination could get one.


That does make sense.


Since others have addressed that your rating of your social justice prediction is at best questionable and at worst hilariously wrong, I'll take up this one:

>Religion will continue to retreat from US public life. As it becomes less important, mainstream society will treat it as less of an outgroup and more of a fargroup. Everyone will assume Christians have some sort of vague spiritual wisdom, much like Buddhists do.

While that's technically not a prediction, I have *no clue* how you get the idea that "mainstream society" is treating Christianity as more of a fargroup than an outgroup. They're still treated as hateful bigots, likely more so than at any point in living memory in light of Dobbs, and with the fearmongering about "Christian nationalism" (which ultimately boils down to Christians having any political opinion at all except retreat), they are *most certainly not* treated as a fargroup. Christians are firmly still in the outgroup camp.

Maybe San Francisco atheists are slightly less hateful towards Christians than they were back in the Internet Atheist heyday, but they are not representative of the mainstream.


Presumably by https://slatestarcodex.com/2019/10/30/new-atheism-the-godlessness-that-failed/, as your last paragraph already indicates. Also, such shifts are gradual, with very different effects in different subgroups (which Scott has also written about); that said, it does seem that on some sites where I hang out, telling Christians that their perspectives are wrong is no longer considered OK (when it formerly was!).


Spravato (esketamine) was approved by the FDA (my wife is a neuroscientist and said this meets your glutamatergic antidepressant prediction). She says there's lots of action in this space (Sage is working on several drugs) - so yeah!


> Last week was the tenth anniversary of my old blog (I accept your congratulations)

Congratulations! Your blog is one of the few places with almost-consistently good and well-paced statements (except for that "child prison" outburst when reviewing "The Cult of Smart", what the hell, seriously: if I wanted angry-but-probably-correct perspectives, I'd go to theangrygm.com; you weren't that visibly angry about Scott-Aaronson-picked-on-by-feminists, of all things).

>The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic) in the quality of their AI products: (25%/50%/25%)

Bracketing here is weird; I presume the probability variants refer to the second triplet of variants, but it reads as if it could just as well refer to each of the other two, or that they're in some weird conjoined relation (i.e. Google is clearly ahead of DeepMind / Apple has approximately caught up to OpenAI / Meta is clearly still behind Anthropic).

>I think Xi is a significant change towards traditional dictatorship which doesn't work as well

And you think this doesn't _increase_ chances of Taiwan invasion? Even after the last year's lesson that, basically, dictators don't always do the sensible-for-their-personal-goals thing even this century?

>I expect Ukraine and Russia to figure out some unsatisfying stalemate before 2028

How? Just… how? They both seem politically stuck in a situation that will not be, well, satisfied with an unsatisfying stalemate.

Feb 21, 2023·edited Feb 21, 2023

>They both seem politically stuck in a situation that will not be, well, satisfied with an unsatisfying stalemate.

Yep, I agree. Putin seems hell-bent on crushing Ukraine, whatever the cost, and there doesn't seem to be a nuke-free way to get rid of his regime in the medium term. He certainly won't allow any sort of economic boom to occur in Ukraine, even if that's the last decision that he makes.

Feb 22, 2023·edited Feb 22, 2023

> Putin seems hell-bent on crushing Ukraine, whatever the cost

If that were the case, then the following would probably have happened already:

- no natural gas transit through Ukraine

- all government buildings in Kiev destroyed, and several high-ranking Ukrainian officials killed by missile strikes

- Russia drafted 1M+ men (and closed the borders to make sure they won't flee)


>no natural gas transit through Ukraine

It's not there for Ukraine's benefit.

>several Ukraine high-ranking official have been killed by missile strikes

Given that Zelenskyy was able to visit Bakhmut unharmed, Russia doesn't have the ability to target them a few kilometers from the front line, never mind hundreds. Putin also considers them a puppet government of no consequence, and that all the important decisions are made in Washington.

>Russia drafted 1M+ men (and closed the borders to make sure they won't flee)

Will happen eventually. Putin wants Ukraine crushed effortlessly, but if that's not on the cards, he'll do it the hard way. Also, seeing how abjectly unprepared the Russian military was for the influx of even 300k, a million would've simply led to a much worse clusterfuck, without any significant increase in combat power.


>>no natural gas transit through Ukraine

> It's not there for Ukraine's benefit.

Well, but it benefits Ukraine a lot - transit fees are still being paid (even after a hike initiated by Ukraine!). If the 'whatever the cost' part were true, these payments would have stopped long ago...

>>several Ukraine high-ranking official have been killed by missile strikes

> Given that Zelenskyy was able to visit Bakhmut unharmed, Russia doesn't have the ability to target them a few kilometers from the front line, never mind hundreds.

Or lack of desire to target them (see recent interview by Israeli ex-PM Bennett).

>>Russia drafted 1M+ men (and closed the borders to make sure they won't flee)

> Will happen eventually.

Actually agree.

> Putin wants Ukraine crushed effortlessly, but if that's not on the cards, he'll do it the hard way.

Putin _wanted_ to change the regime in Ukraine effortlessly. After spectacularly failing - it seems - he wanted (and still wants!) to roll things back roughly to where they were in January 2022. But he can't get that (primarily due to the US stance), and so (very reluctantly) follows the military escalation path.

> Also, seeing how abjectly unprepared the Russian military was for the influx of even 300k, a million would've simply led to a much worse clusterfuck, without any significant increase in combat power.

A horrible clusterfuck with 1M drafted in 2 months - yes, of course; but 200k every 2 months would have been sustainable, and the draft could have been started in May, after the initial plan failed.

Feb 22, 2023·edited Feb 22, 2023

>If the 'whatever the cost' part would have been true, these payments would have stopped long ago...

By cost I mean long-term consequences for Russia, from becoming (even more of) a decaying pariah with somewhat expanded borders in the best case, to a radioactive wasteland in the worst. But achieving the above "best" case still costs money, and it's plausible that operating that pipeline has benefited Russia's war effort more than Ukraine's, which so far has an unlimited bankroll from the West anyway.

>he wanted (and still wants!) to roll things roughly to where they were in January 2022

Nah, to me the absurd gunpoint referendums and annexations showed that he's all in. There's no way to backtrack from that while remotely saving face.

>but 200k every 2 months would have been sustainable

It seems that the relevant bottleneck right now is the lack of adequate training capacity and usable less-than-ancient materiel, not the lack of cannon fodder. With China apparently considering finally throwing some of its weight behind Putin, this may soon change.


> it's plausible that operating that pipeline has benefited Russia's war effort more

Perhaps; still kind of weird to pay money to your enemy in a hot war

> Nah, to me the absurd gunpoint referendums and annexations shown that he's all in. There's no way to backtrack from that while remotely saving face.

That is what a lot of people thought right after referendums. But then there was a retreat from Kherson, which - according to some sources - was planned before (!) referendums. So a lot of face was lost over that retreat; not sure if there is still anything worth saving.

> It seems that the relevant bottleneck right now is the lack of adequate training capacity and usable less-than-ancient materiel, not the lack of cannon fodder.

But training capacity and materiel can be produced by drafted workers!


For "US Politics," is the 20% forecast of a state de facto decriminalizing hallucinogens "having happened" stating that a state did or didn't de facto decriminalize hallucinogens? That's poor wording.


I think in all of his cases, he gives a statement, followed by a probability, and in the evaluation section determines whether the statement is true or not. The probability itself is neither correct nor incorrect, and neither happened nor failed to happen; rather, how far off the probability is from the eventual truth value is a way to evaluate how good or bad the probability was.
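
(One standard way to turn "how far off the probability was" into a single number is the Brier score; a minimal sketch follows, with made-up forecasts rather than Scott's actual list.)

```python
# Brier score: mean squared distance between the assigned probability and the
# outcome (1 = happened, 0 = didn't). Lower is better; always guessing 0.5
# scores 0.25 regardless of outcomes.
def brier_score(forecasts):
    """forecasts: list of (probability, outcome) pairs, outcome in {0, 1}."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

example = [(0.8, 1), (0.5, 0), (0.95, 1), (0.2, 0)]
print(round(brier_score(example), 3))  # 0.083
```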


Re. "Kamala Harris didn’t even get close to becoming president, although Biden made the extremely predictable mistake of making her VP." -- I think being the VP of an octogenarian should count as getting close to becoming President.

A prediction that's on my mind right now is "A Condorcet loser will win the 2024 US presidential election".


LBJ made it explicit: "When Clare Boothe Luce later asked him why he would accept the nomination to be number two, he answered: 'Clare, I looked it up: one out of every four Presidents has died in office. I’m a gamblin’ man, darlin’, and this is the only chance I got.'” (https://www.econlib.org/archives/2012/05/great_moments_i_4.html)


If someone builds a question bank of 100 questions where teenagers score 95% on average, and the LLM gets 97% correct, does that mean Gary Marcus wins? Also, does Gary Marcus get to choose the exact wording, adversarially against a specific LLM (so he can maybe exploit a SolidGoldMagikarp bug unknowingly, as long as he can find a decent number of variants)? Or does Gary Marcus fail if Scott re-words the questions and the LLM gets them right half the time?

I consider Gary Marcus likely to succeed at finding errors that he can replicate with several variants, but I also consider Scott likely to be able to re-word those queries to avoid the errors. I suspect that if some big academic group writes a list of questions and keeps them in an icebox, future LLMs will eventually outperform teenagers but not score 100%.


> What the unofficial version of health care will be remains to be seen.

That would be Gofundme.


I saw this opening paragraph:

> AI will be marked by various spectacular achievements, plus nobody being willing to say the spectacular achievements signify anything broader. AI will beat humans at progressively more complicated games, and we will hear how games are totally different from real life and this is just a cool parlor trick. If AI translation becomes ~~flawless~~ outstanding, we will hear how language is just a formal system that can be brute-forced without understanding. If AI can generate images and even stories to a prompt, everyone will agree this is totally different from real art or storytelling. Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking.

And I thought, "what a naive person from 2018, that's the total opposite of what happened!" and yet Scott graded it as correct! It seems to me that 2022 is the year that people finally agreed that AI is *not* just a cool parlor trick, and it really *does* signify something broader. There's of course still a lot of people claiming that it is (but you also still find people saying the same thing about electoral government nearly 250 years later). There's also not yet a consensus on *what* the broader thing it signifies is, but I think that a major transition just happened that Scott of 2018 had predicted wouldn't occur until late 2023 or later.


I looked up Auvelity, excited to learn about a new development in pharmacology, only to learn that it's wellbutrin + cough syrup (dextromethorphan).


> I think these were boring cowardly nothing-ever-happens predictions that mostly came true.

I can't really agree; saying there's a 40% chance over five years that the crown prince of Saudi Arabia will be deposed is most certainly not a boring, nothing-ever-happens prediction!


I'm confused by Scott's predictions for AI coach vs AI "romantic companion." Thought for sure it was a typo

The coach feels much more reasonable to me, and more socially acceptable. It also is the sort of thing which I can imagine conferring real advantages, like there's a chance such a thing could increase productivity and income so even skeptics have incentive to try it. Manifold rates both possibilities as about 62%.

Yet Scott rates AI coaches as incredibly unlikely (5%, 19:1 odds), and AI romantic partners as almost a coin flip (33%, 2:1 odds). If we use surprise = -log(p), Scott would find AI coaches about 2.7 times as surprising. Why the discrepancy?
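
(Checking that arithmetic with a quick sketch; "surprise" here is self-information in nats.)

```python
# surprise = -log(p): how surprised Scott would be if each event happened,
# given the probabilities he assigned.
from math import log

surprise_coach   = -log(0.05)   # ~3.00 nats (5% on AI coaches)
surprise_partner = -log(0.33)   # ~1.11 nats (33% on AI romantic partners)
print(surprise_coach / surprise_partner)  # ~2.7
```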


Very interesting post...my favourite from this Substack for a long time! Overall, I think that Scott did quite well on most of his predictions in 2018, and I was quite impressed by both the AI and pandemic predictions (I think that it's still not clear if there was a lab-leak or not, right?). The political predictions were a bit more off, but overall I'd still argue that it's not clear if Republicans or Democrats are less "unified"...probably Democrats, though both are too big for their own good IMO (and the US, just like Canada or the UK, would benefit from proportional representation of some form)…

One thing I've noticed regarding the predictions for 2028 is that the Manifold Markets predictions seem much more bullish than Scott's regarding AI in the next 5 years...I would be even more careful than Scott here, even though I do realise that AI is now more advanced than most lay people (like me) would have thought just a few years ago...


As for my predictions for the next five years, here is my best try (just FYI, I am really not an expert on AI or any "Tech Stuff", so I will focus on other areas more)…(probabilities in % in brackets).

US Politics:

- I don't think that the Democrats (most likely Biden, but still could be someone else (Mark Kelly?)) will lose re-election in 2024. It's pretty unusual for a party to lose the presidency after 4 years in power...Trump was the exception of course, as was Carter, but their circumstances were very different from Biden's currently...(>70%).

- In terms of the "realignment" in US politics, the Democrats are now favoured more than the GOP in most elections since they are now the party of the "high-information/high-propensity-to vote" voters and the GOP is now the party of the "low-information/low-propensity-to-vote" voters...so the GOP will do better in presidential election years and worse in the midterms...though the US Senate is still structurally biased against Democrats in the US, so I don't think that the Democrats will keep the senate beyond 2024 (60-70% for the first part, 50-60% for the latter).

- Political polarization in the US might have peaked, though I am less sure about this after hearing the comments from MTG etc. Still, I think that the US system will adapt (e.g. more power to the states or something)…but at least along racial lines, polarisation seems to have dropped and I would expect this to continue, with the major "fault lines" in US politics now seemingly being education and gender...(50-60% for the first part, >70% for the latter part).

International Politics:

- The EU will not break-up, but until 2029 I don't really see any new members joining...maybe Albania or Bosnia, but that seems far-fetched considering the issues/problems on the Balkans currently...(>70% for the first part, 50-60% for the latter).

- Overall, the EU (at the EU-level) will move more to the right politically, along with most member countries...this would mean a higher focus on preventing illegal immigration, energy security and economic competitiveness. An interesting effect of this could be that more left-wing parties will become Eurosceptic and the Brexit/rejoin debate in the UK will not be as clear-cut anymore (left-wingers in the UK maybe not being so pro-EU anymore?). (50-60% for the first part, 50% for the latter).

- Climate Change and how to respond to it will become the most polarising political issue in Western Democracies. You see this already with the "last generation" and on the other side, climate change deniers like many GOPers or Canadian Conservatives. Interestingly, this could also be the issue that drives a major conflict between the EU+UK and the US/Canada, especially if the latter have conservative administrations at the same time (probably only after 2028 though)…(>60% for the first part, >50% for the latter part).

Geopolitics:

- Chinese military power will still grow, but probably not to a level that would seriously challenge US supremacy worldwide (>80%).

- EU countries will invest more in their militaries, but with a few exceptions (Poland, the Baltics, maybe the Nordics and France) not enough to keep up with the NATO goals...(>60%).

- The US might become less willing to support European defence, though with a Democrat in the White House, this should not become a major issue...so only after 2028 IMO...

- Russia might fragment, though I put the percentage here at below 30%. Still, calls for autonomy in the Russian regions might grow, and thus Russia might focus itself inward (even militarily)…

- The Russo-Ukrainian War will probably (>70%) end...not sure how, IMO some kind of settlement where Russia keeps Crimea and maybe the Eastern Regions is still most likely...though it won't be a permanent solution for sure.

Economics (Global):

- The US will still be the major economic power worldwide in 5 Years. China's difference with the US in terms of nominal GDP will be less than 10 Trillion, but more than 5 Trillion (confidence: >60%).

- The EU will fall further behind the US economically. I also expect GINI to rise in most Western European countries, though not nearly to the level of the US...overall, the feeling in much of Europe will be economic stagnation (stagflation - at least until 2025?), while in the US it will be of economic prosperity (for most people at least), though Western European countries will still be considered to have a higher quality of life than the US by most people (so similar to the late 90s/early 2000s maybe?) (50-60%).

- Inflation will continue in the next 5 years, though should abate by 2024/25 (combined with a minor recession in the next few years maybe?). Inflation will continue to be worse in the EU than the US, and the recession will also be worst in the EU than in the US (60-70%).

- Countries that will have (relative to other countries) good economic conditions in the next 5 years: The US, Canada, Vietnam, India, Indonesia, Mexico, Poland, eventually Ukraine as Scott wrote (50-60%).

- Countries that will have (relative to other countries) worse economic conditions in the next 5 years: Germany, the UK, the V4 countries (except Poland), Iran, Russia of course...(50-60%).

Technology (various):

- Electric Cars will be more widespread by 2028, but not nearly as much as to completely replace all ICE cars anytime soon...my predictions for EVs as percentage of new vehicle sales in 2028: between 40-60% in the Western EU countries and China (>70%), between 20-40% in the US, Japan and Canada (>60%), less than 20% in most (currently) developing countries (>50%).

- There will be more talk about alternatives to EVs, such as Hydrogen fuel cell or ICE cars by 2028...(>80%).

- Public Transit use will continue to stagnate in most Western Countries, as project overruns and rising energy costs make them less enticing for most users...

- Neither HS 2 in the UK nor California HSR will be finished by 2028 (>80%).

- AI will become more prominent in terms of being used successfully in various applications, but there will be no serious claims of a "AGI" in use by 2028 (>70%).

- Robotics will become the next major focus of automation technology by 2028, as someone wrote above...though robotics seems much more difficult than AI to be implemented successfully...

- Solar and Wind Energy might hit some capacity constraints (minerals, land etc.), so I think that the major boom here is over (at least in Western Europe, and possibly by the end of the decade in the rest of the industrialised world)…the debate will shift more towards nuclear energy (which already happened TBF), and modular reactors will become more popular, though still not widely adopted in most regions (>65%).

- Nuclear Fusion will be somewhat closer to operability, but still not quite there yet...(>60%).

- Carbon capture technology will have advanced and will be in (small-scale) use widely (50-60%).

Science/Nature:

- Climate Change will continue, with no major change in the yearly temperature rise from the 2010s (i.e. a global 0.2C temperature rise each decade on average) (>70%).

- The regions affected by climate change might be different though, e.g. the 2010s saw a lot of warming in Europe, but less in large parts of Canada and the US...maybe this will change related to the changes in AMOC or "Hale Cycle" solar minimum (i.e. colder Winters in Europe vs warmer in NA again?).

- Geoengineering will become more seriously discussed, though no serious plans will be provided for it (>70%).

- The rise in CO2 PPM will drop somewhat, but not nearly enough to get us to a 2C-maximum-warming goal by the end of the 21st century (>70%).

- Gene editing will become more widely available, but also more regulated...only in cases where it will seriously impact the health of people will it be "acceptable", both legally and ethically, in most Western Countries (though the range of what is "acceptable" will vary between jurisdictions)…(>60%).

Culture/other:

- Popular Music will become more upbeat again after sounding "depressed" for the last 5-10 years. Thus, Music will sound more like the late 90s/early 2000s in style/attitude (more "Nu-Disco", House, pop-R&B and Hip-Hop, less Trap, Emo-Rap and "Whisper Pop") (>60%).

- The UK and US charts might diverge again, like they did in the 90s. IMO this is less likely now than back then, because of the internet, but maybe it actually could be the opposite of the 90s (Happy US, sad UK?) .

- There will be more AI-written, produced-and-performed songs, but the vast (80+%) majority of songs will still be written, produced and sung by humans (>70%).

- "Wokeism" will have peaked in terms of culture and debate, and while some concepts will survive, most areas of popular culture (comedy, TV, music etc.) will be less "woke" than they have been in the last few years by 2028...(50-60%).

- Movie theatres/cinemas won't be seeing a renaissance, though some might still survive as niche products, just like "normal" theatres have survived...(60-70%).

- The idea that each generation is distinct from another and can be clearly defined by a certain age cohort (i.e. born during any 20-year period) will be less popular in 5 years than it is now, though ironically, the idea of a "fourth turning" might gain more mainstream popularity...


Henry Kissinger, of all people, shares a byline with two other writers on an essay on ChatGPT in the February 25-26 edition of the WSJ that really caught my attention: 'Because ChatGPT is designed to answer questions, it sometimes makes up facts to provide a seemingly coherent answer. That phenomenon is known among AI researchers as "hallucination" or "stochastic parroting," in which an AI strings together phrases that look real to a human reader but have no basis in fact.'

So ChatGPT not only 'researches' its data from questionable material broadcast without qualification on the Internet, but it fabricates its own 'facts' to support a clever, pleasing conclusion for the lazy, gullible human. The only researchers or writers ChatGPT will be replacing are cable TV news producers and newspaper editors, and James Patterson. No worries; I doubt anyone will notice.
