Metaculus has this: https://www.metaculus.com/organization/public-figures/ e.g. https://www.metaculus.com/public-figure/joe-biden/ (great site overall btw, and it gets better all the time). This feature is experimental, though, and there is no aggregated accuracy score: public figures don't usually make predictions with percentages, and they don't all predict the same questions, so assigning a score would not be meaningful.
Does anyone know why these cities in particular? I'm not really familiar with Phoenix, but driving in San Francisco doesn't seem like an easier problem to solve than driving in most other US cities, let alone on a freeway.
I'm gonna quibble with Phoenix. There are a couple of very limited programs that operate in small parts of the metro area that you can sign up for and occasionally use. I would not say that the average tech-savvy person can hail a self-driving car in the way that they can get an Uber or Lyft.
I tend to agree, although I think you're overstating it. Waymo One tells me that I can hail a vehicle 24/7 and that on most occasions there's no-one in the driver's seat, but it operates "within parts of the Phoenix metropolitan area, including Downtown Phoenix and parts of Chandler, Tempe, Mesa and Gilbert." This means it won't work for a large class of rides, e.g. someone who wants to travel home to the suburbs from the downtown area.
As written, the prediction is true, but as far as I can remember, I understood it to mean that one would be able to hail a self-driving car to go to most ordinary destinations within the city, in the same way that existing ride-hailing apps work.
That was my impression too - I've heard of people being surprised to find themselves in a self-driving car they hailed, but I haven't heard of anyone actually having an ordinary option of hailing a self-driving car.
San Francisco is both near to a lot of the tech companies and is arguably some of the hardest driving in the country, ignoring weather. *Every* weird traffic thing happens somewhere in SF, it's an absolute nightmare. As someone who's driven in both SF and NYC, I honestly think SF is worse, and that makes it absolutely great for testing with someone sitting in the car who can hit an abort button.
If you can drive in SF you can drive anywhere in the US (unless it's snowing).
Phoenix is the exact opposite, and is perfect for initial tests *without* an abort button.
I’m not entirely sure I’d grade this as true for San Francisco. Some average people in certain parts of SF can hail a self-driving car, but the average person can’t, and the cars don’t serve the densest and busiest areas of the city, like downtown.
Within those limitations, are the cars now truly autonomous? Or is it still the case that there always needs to be a human "test driver" either inside the car or trailing behind it in a separate car, ready to intervene if necessary?
The language of the prediction is ambiguous, so maybe it should count. My reading of it is that Waymo only serving a tiny section of downtown shouldn't count.
I think this is one of those kinda edge cases where if something were on the line for whether or not the prediction was technically accurate, it'd be a hard call. But nothing is, and we don't need to worry about exactly what's on what side of which line.
What is clear is that artificial intelligence moved less quickly than Scott was anticipating in terms of autonomous vehicles. We maybe (depending largely on what you describe as "an average person") got just barely over the line of his 80% prediction. He thought there was a 30% chance that you'd be able to hail an autonomous vehicle in "at least 5 of the 10 largest US cities," which we aren't even *close* to. Presumably if you had zoomed way in on this and said, "Okay, so if you think there's an 80% chance that it's rolled out somewhere, and a 30% chance that it's reached half of all big cities, then there's some intermediate set of probabilities of it being intermediately rolled out" -- all of those would've been false. It's also not the case that we got, like, almost there -- it won't be rolled out to 5 of 10 cities in 2024, or 2025.
And he thought there was a 10% chance that at least 5% of all US trucking had been replaced.
If you kind of come up with that as an approximation of Scott's mental probability distribution of "how far along autonomous vehicles will be," then it seems clear we're way short of his average; the mean of his probability distribution was probably about one standard deviation off from the truth.
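To make that concrete, here's a minimal sketch (entirely my own construction, not anything from Scott's post) that interpolates the two stated anchors -- 80% for at least one city, 30% for at least five -- into a full set of implied intermediate probabilities, assuming the survival function decays geometrically between them:

```python
# Hypothetical interpolation of Scott's implied distribution over N, the
# number of the 10 largest US cities where you can hail a self-driving car.
# The geometric-decay assumption is mine; Scott only stated the two anchors.
p1, p5 = 0.80, 0.30          # stated: P(N >= 1) = 80%, P(N >= 5) = 30%
r = (p5 / p1) ** (1 / 4)     # per-city decay ratio fitted to the anchors

for k in range(1, 11):
    print(f"P(N >= {k:2d}) ~= {p1 * r ** (k - 1):.2f}")
# Prints e.g. P(N >= 2) ~= 0.63 and P(N >= 3) ~= 0.49; with the realized
# outcome arguably N = 1, every one of those intermediate claims was false.
```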
It depends on your model for what the constraints on rolling out autonomous taxis are.
I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere, and if not, then it's limited in the way we see.
I also don't think that the tangle had a 30% chance of resolving, but I have the benefit of hindsight.
If he meant, "30% chance of autonomous taxis being available in 10/10 biggest US cities," or "9/10 with one inexplicable holdout for no reason," then I assume that's what he would've said.
I agree it's not a linear ramp where the next city is just as hard as the previous city, but this was only a 5-year prediction! It is clearly the case that there are in fact logistical and local issues with rollout in each city.
“ I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere”
I don’t agree, for two reasons. First, the current paradigm involves building a painstaking map of every drivable area within a geographic location, much more detailed than Google Maps. Even if you build a map covering all of Phoenix, you can’t make some minor tweaks and transfer the technology to Houston. You have to build a brand new map from scratch.
Second, the current locations are ideal conditions: well-maintained roads with clear signage, good weather, low to medium traffic, and well-behaved drivers. Not generalizable to, say, Manhattan.
I agree that it’s going to be faster to expand the service to, say, San Bernardino than it was to create the first service in Phoenix. But it’s not going to be trivial to go from most of Phoenix to most of the US.
I would wager that in 8 years, a self-driving car service will not be able to cover 50% of Manhattan during the winter.
I haven't seen any news report that I would count as "You can hail a self-driving car in city X." Now my criterion for a self-driving car is strict, "You can get from this point to 99% of the destinations people drive to from this point without touching the controls." To some degree, the limited jurisdictions that allow self-driving contribute to this. But my impression is that self-driving technology is at the "90% of the distance of 90% of the trips" level, not the "all of the distance of 99% of the trips" that would e.g. give blind people the same mobility as the sighted.
> The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic)
DeepMind and Google are both part of Alphabet Inc., OpenAI was heavily invested in by Microsoft, and Anthropic was heavily invested in by Google. How will this prediction work if the "leading big tech companies" are just using things from the "AI-only" companies?
The problem is that the question isn’t well framed—most big tech companies have the edge that they have not just because they innovate but because they can identify and buy out innovation and talent.
My point is that you really can’t, being intellectually honest, cut out Big Tech because they ‘only own’ or ‘only invest in’ the AI firms. That counts as Big Tech doing it. But being fair, it also counts as the AI firm having the advantage. It just isn’t a question that yields insight; it betrays misunderstanding of what investment is for and why it’s beneficial.
Hardware is enormously harder than people think. Software may improve robotics in a sense, but as soon as you need physical interaction you get crushed by physics.
Just think of Moore's law vs battery capacity. Transistor count (and software complexity) has increased in the range of 100,000-100,000,000x in the same amount of time we needed to double battery capacity (while losing lifespan and turning them into bombs).
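For the arithmetic behind that comparison, a quick sketch (taking the 100,000-100,000,000x range from the comment above at face value):

```python
import math

# How many transistor-count doublings happened in the time it took battery
# energy density to manage roughly one doubling?
for factor in (1e5, 1e8):
    print(f"{factor:.0e}x improvement = {math.log2(factor):.1f} doublings")
# ~16.6 to ~26.6 doublings of transistor count vs. ~1 doubling for batteries.
```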
I've thought about the topic a lot, and I think there are reasonable odds that growing muscle in vats ends up being easier than building motors that can match biological performance.
Self-driving cars also reveal why robotics is so damn difficult: you need 99.99999999...% reliability, and mistakes are expensive. I spent some time working on UAVs, and ANY glitch will smash your hardware into pieces. Funny the first time, but imagine if your computer self-destructed every time the compiler detected a syntax error.
The massive improvements in AI, especially in the field of Generative AI, can have direct effects on how a robot can think 'like a human'. So maybe afterwards, when the AI hype has settled down, we could have a new 'humanoid robot' hype. What do you think?
The funny thing about the humanoid robot area is that it's really attractive from a PR and public awareness standpoint, but sort of dubious in terms of ROI. The point being that since we already have humans to do humanoid tasks, a bot in that form has to be competitive (cheaper, better, whatever) compared to the existing human pool.
Useful robots that *aren't at all humanoid* seem like a better investment, they just don't get as much attention...
I think you'd do better with a mini-centaur form. Say something about the size of a dog. (There are reasons different dog breeds are different sizes.) You could argue for wheels or treads instead of legs, but unless speed is your main concern, legs have advantages. Handling stairs fairly well is one reason. Boston Dynamics already has the dog body, but attaching the arms is difficult, as you need to brace them to use any force. Now imagine trying to get that robot to thread a needle, or remove a frozen bolt. Last I heard there was lots of work needed on the hands.
Now consider what a robot nursing assistant should look like. You don't want to hit ANYBODY's uncanny valley. I think the face should look anime, and definitely non-human but friendly. But if the price were right, there'd be a big market for robot nursing assistants.
Kinda agree, though. After reading the points made by Boris, I think miniature robots that favour utility and function over actually looking like a humanoid robot could be the best place for AI to thrive.
"IE you give it $2, say "make a Star Wars / Star Trek crossover movie, 120 minutes" and (aside from copyright concerns) it can do that?"
J.J. Abrams already did this with the reboot, and while I can't speak for anyone else, it certainly was not what *I* wanted.
"AI can write poetry which I’m unable to distinguish from that of my favorite poets (Byron / Pope / Tennyson )"
Interesting selection! I wouldn't have classed Pope as a Romantic poet, but this gives me the excuse to shoehorn in a somewhat relevant joke, from a 1944 Tolkien letter:
"The conversation was pretty lively – though I cannot remember any of it now, except C.S.L.'s story of an elderly lady that he knows. (She was a student of English in the past days of Sir Walter Raleigh. At her viva she was asked: What period would you have liked to live in Miss B? In the 15th C. said she. Oh come, Miss B., wouldn't you have liked to meet the Lake poets? No, sir, I prefer the society of gentlemen. Collapse of viva.) "
The Walter Raleigh mentioned above is:
"Sir Walter Alexander Raleigh (5 September 1861 – 13 May 1922) was an English scholar, poet, and author. Raleigh was also a Cambridge Apostle.
... in 1904 [he] became the first holder of the Chair of English Literature at Oxford University and he was a fellow of Merton College, Oxford (1914–22).
...Raleigh is probably best known for the poem "Wishes of an Elderly Man, Wished at a Garden Party, June 1914":
I wonder if Professor Raleigh's knighthood was helped along by whoever was responsible for the honors list for George V liking the idea of creating another Sir Walter.
"Star Trek reboot as Star Wars showreel by JJ Abrams" was a rant I did on a site that shall not be named back then, complete with side-by-side comparisons of things like the Starfleet Academy cadet uniforms and Star Wars generic troopers uniforms, but that was fourteen years ago and I haven't kept most of it (I do have the part where I defended James Tiberius Kirk as not the pop culture skirt-chaser Abrams interpreted him as, in one interview where he said "Nobody’s going to force Kirk to be a romantic and settle down. That would feel forced and silly. Kirk’s a player”. As we said at the time, "Abrams is not of the body!")
But yeah, there was so much in the visual design, not to mention the reboot universe, that was moved to be more like Star Wars than Trek, including the phasers being revamped to be more like blasters. Abrams did himself no favours with me with his attitude to Trek; it was no secret he was a Star Wars fan, not a Trekkie, and dearly longed to get the directing gig for the new Star Wars movies. That's why I think he did the reboot as, literally, a showreel to demonstrate he could do a big-budget movie set in an established universe and reboot it successfully.
About the only time I am in agreement with Jon Stewart 😁
I think there was a lot covered in it; the particular Tumblr (yes, that was the hellsite) discussion group that got going was mostly critical. Not that the reboot as such was a bad idea, but that they wasted a lot of opportunity.
How the reboot timeline got created was hooey, but Trek has always had a lot of hooey involved; that's how we got the term "technobabble", after all. The important difference is that Trek is science fiction (it does try to be grounded in 'vaguely plausible if we stretch it a lot and take the most out-there speculative theories current') while Star Wars is science fantasy (the desert planet setting, the 'hokey religion versus blasters', the mystic order of the Jedi, the Force, midichlorians and the rest of it - it's a pulp skiffy influence at heart and none the worse for it).
The problem comes when (a) you're a bunch of untalented hacks and (b) try to force the contents of one universe into the mould of the other. Abrams went for Kewl Shots (the complaints about lens flare and how the bridge of the "Enterprise" looked like an Apple store) and very clearly modelled much of his reboot along the lines of Star Wars (his love) than established Trek canon.
There were a lot of "left on the cutting room floor" scenes floating around at the time, both good and bad; a whole chunk of this Kirk's childhood backstory was cut, which would have contributed to understanding his character and why he was the way we see him later (the rebel without a clue). Other bits got cut and it was for the better, e.g. what Abrams and company seemingly thought was a *hilarious* bit about "they all look the same" when it came to Kirk and the Orion women cadets - he romances one named Gaila for ulterior motives, to get her to run the Kobayashi Maru hack when he's taking the test. Later, he goes to apologise to her for using her (since this got her into trouble, quite naturally) and - here's the joke, hold on to your sides! - he apologises to the *wrong* woman! Because all those green slave girls look the same, you know! And he doesn't really care about Gaila so he can't tell her apart from any other random Orion cadet!
That Jim Kirk, what a card 😐
Reboot McCoy was the best thing in it, thank you Karl Urban. I could go on about other things - oh why the heck not? The Spock/Uhura romance was unexpected, and the end result was that it looked like Abrams couldn't think of a way to fit Uhura in as anything other than a girlfriend (we have one scene where Uhura demands to know why she hasn't been assigned to the Enterprise, Spock reasonably says it would look like favouritism because they're in a relationship, and she bullies/nags him into reassigning her). There's also, in the second movie, the totally unprofessional scene she makes about their relationship while they're on a mission, in front of their commanding officer, and again looks more like "nagging shrew" than "equal professional and officer pulling her weight". There's the throwaway line dismissing Christine Chapel. The infamous underwear scene with Carol Marcus in the second movie, which echoes the underwear scene with Gaila in the first, and manages to be both unsexy *and* sexist. The terrible pseudoscience which doesn't even pretend to be technobabble - now we can warp between moving starships, the Klingon home planet is apparently on the doorstep of the Terran solar system because we can get there in a short trip, Starfleet Command can have every senior commanding officer killed by one guy in a scout ship because security, what that? and so on.
The second movie trying to persuade us "this is the engine room of a starship" when anyone who has had even the most cursory look around a chemical, food or other processing plant can identify "this is a brewery". The first movie BLOWING UP VULCAN (if they think that after all this time I've forgotten and forgiven, they have another think coming). The heavily militarised Starfleet Command, which again can be explained by the backstory of this timeline *if* they bothered to explain it, which they don't. Not one but *two* dogfights by starships over San Francisco, as the climactic moments of both movies. The second movie had me cheering on the evil admiral, because he at least was competent, and they finally remembered that hey, you build starships in orbit in space docks, not from the ground up on earth. That first movie shot was another Kewl Shot with Kirk on his motorcycle pulling up to view the ship that he will eventually command, but it didn't make much sense logistically (though I've read posts defending it):
I didn't even bother with the third movie, even though that was allegedly better. They burned up all my goodwill by then, and I've been a Trekkie since I was seven.
Either he’s forgetting that Kennedy was nominally a Republican or he’s just being plain about the fact that Kennedy was not actually a Republican in any meaningful sense.
Kennedy joined the Republican justices on most major divisive questions, such as Obamacare being unconstitutional. He also hand-picked Kavanaugh as his successor. Saying he was not actually a Republican is nuts, unless you're only looking at PP v. Casey and Obergefell.
I think Scott just made a mistake. I don't think he was intentionally trying to make a statement about Kennedy being insufficiently conservative.
Still, I think his broader point still stands if you split the Supreme Court into liberal, moderate, and conservative categories rather than merely Democrats and Republicans, with both Roberts and Kennedy falling into the moderate category. In 2018, there were 4 liberals, 2 moderates, and 3 conservatives. Neither the left nor the right could get a majority on their own, so this pushed the court towards making more moderate decisions overall, which made dramatic upheavals (like overturning Roe v. Wade) rather unlikely. Then Kennedy and Ginsburg were replaced by Kavanaugh and Barrett, so the balance shifted to 3 liberals, 1 moderate, and 5 conservatives - enough for the conservatives to bull-rush their way past the compromise stage and force through any decision they wanted, without any need to temper or moderate it first.
Had the court's balance remained the same, I expect Roberts would have gotten his way with the Dobbs v. Jackson case: Mississippi's 15 week abortion ban would've still been upheld, but it would've been a narrower ruling that merely pushed back the viability line from 20 weeks to 15, rather than overturning the Roe decision completely and allowing states to ban abortion at any point in pregnancy. This would've been true even if we'd only gotten Kavanaugh or Barrett, but not both: The conservatives would've had to go along with Roberts' compromise, because they simply wouldn't have had the numbers to overturn Roe entirely.
Granted, giving 1% odds to Roe being overturned was still too low. But it wasn't a sure thing by any means. I'd have probably given it 1 in 3 odds of happening when Scott made these predictions, a 50/50 chance when Kavanaugh was appointed, and 2 in 3 odds of happening when Barrett was appointed.
I disagree. His judicial philosophy, while it concurred with the Republican/conservative wing of the Court at times, was not particularly conservative or Republican, and any sufficiently well-read student of the Court should know this.
Kavanaugh’s concerns are similar, he’s obsessed with how his legacy on the Court is seen and willing to strike pragmatic compromises in order to be viewed historically as not a partisan—in my view that makes him fairly unprincipled.
Justices should be evaluated less by quantifying how much they agreed with others than by how they arrived at those conclusions, whether they are willing to stand by their principles, and which principles they will stand by. It’s pretty plain from the careers and records of, say, Kennedy, Kavanaugh, and Roberts that they come from a completely different school than Gorsuch, who is very different from Scalia, Alito, or Thomas (who have the best claim to being called the Republicans).
That said, I don’t think Scott believes the above. My comment was tongue firmly in cheek.
The more professional they are, the harder it is to stereotype them. I didn't know Scalia was supposed to be a 'conservative' until after he was dead; Ginsburg was absolutely partisan and unprofessional.
Picky disagreement: Both the liberal and the conservative sides have points on which they are clearly correct. So if you just judge by adherence to those points, it's quite reasonable to say a liberal or conservative judge is properly doing their job. The problem is all the other stuff.
E.g., abortion should clearly be a state level issue. I may think many of the states aren't living up to their constitutional obligations (though I usually don't know their constitutions well enough to really comment), but it should clearly be a state level decision. There are many such issues, where the matter SHOULD rest with the states, but the states have defaulted, so it ended up with the feds. Then there's that idiotic Supreme Court decision that cities couldn't have a residency requirement for provision of general assistance. I see NO valid basis for that decision, and the result has been a "race to the bottom" in support of social services at the city level. But that SHOULD have been a city level decision. (I don't remember whether it was the conservative or the liberal agenda that inspired that idiocy, but I suspect conservative. But an honest conservative should diligently oppose it, and a liberal should find no reason to support it.)
"E.g., abortion should clearly be a state level issue."
Abortion (like other forms of birth control) has been an activity partaken of by individuals, on an individual basis, throughout history. Because this right has historically been held by the people, not government, it firmly belongs among the rights retained by the people (not the states or the federal government) as indicated in the 9th and 10th amendments. Or so I say. So at the very least your "clearly" is not as clear as you believe it to be.
The US Constitution is not a system meant to allow totalitarian control of individuals' ways of life at the federal or state levels (excluding only those rights embodied in the 1st through 8th amendments).
While I largely agree with you, I feel this is a matter that should be addressed by the constitutions of the individual states. And that they should defer to the rights of individuals. Well, at least unless one wants to undertake rewriting the entire constitutional system. The expansion of federal powers that has happened has been necessary, but I believe that much of it is clearly illegal. What should have happened is various constitutional amendments, but that was so difficult that those in power generally just ignored the clear words of the constitution, and made "workable decisions".
When I said "E.g., abortion should clearly be a state level issue." I meant that it should not be decided at the federal level. Once you get away from the federal level, the different constitutions of the various states make things quickly too complex for any simple answer. Well, if you're arguing on legal grounds. If you're arguing on moral grounds the problem is that there's no consensus on what the proper morality is. Everybody's arguing for their own point of view, often with the same words meaning different things.
If you didn't know Scalia was a conservative you weren't paying attention. He wanted to uphold anti-sodomy laws and claimed the majority in "Lawrence v. Texas", which struck them down, was a "product of a Court, which is the product of a law-profession culture, that has largely signed on to the so-called homosexual agenda".
While I may or may not have reservations about Scalia's decisions, my point was that I regard Scalia more professional than Ginsburg. For the same reason, I regard Scalia more professional than Thomas.
I'm sorry, but this is not professional: “This Court has never held that the Constitution forbids the execution of a convicted defendant who has had a full and fair trial but is later able to convince a habeas court that he is ‘actually’ innocent.”
To say that procedure trumps actual innocence is to undermine the very foundation of criminal law. Such a statement is neither conservative nor liberal, but anti-law itself.
> Looking back, in early 2018 the court was 5-4 Democrat, and one of the Republicans was John Roberts, who’s moderate and hates change.
Both of these claims are difficult to justify. The court in early 2018 had 5 justices appointed by Republican presidents (Anthony Kennedy, replaced by Kavanaugh, was appointed by Reagan; while he had a reputation as a swing justice, he went pretty far right in cases that didn't involve privacy).
Likewise, John Roberts is a moderate only in the context of the most conservative court in a century. This isn't a normative judgment, just a description of his voting record. He has consistently voted with the conservative bloc across a range of issues. The exceptions (Obamacare) spring readily to mind because they are rare.
Does the crest look anything like what Scott described, in your opinion? "Wokeness has peaked - but Mt. Everest has peaked, and that doesn't mean it's weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place). "
If so, that's not the decline of wokeness. That's the decisive victory of wokeness. If it's cresting because it won and can now get rid of all opponents, that shouldn't reassure anyone. Not even wokists, who can easily be the next targets of the cultural revolution.
"You can't just gain power, you have to maintain it, and for all its facade of strength wokeness has nothing behind it: no reason, no joy, not even improved living standards, just a mass hallucination enforced by raw power and bullying on quietly resentful subjects. Such a system never lasts."
The same was true of Christianity in the 5th century, or Islam in the 7th. Look at where they are now. The quietly resentful subjects became less resentful, then converted to Christianity/Islam for a mixture of self-interested and genuine reasons. There's no reason that a mass hallucination enforced by raw power and bullying can't last millennia and cause millions of deaths.
Christianity, Islam, etc all had charismatic prophet-founders and divine-level foundations. They have fascinating and deep scriptures and stories underlying them.
Wokeism has none of this. No cosmological level claims, no powerful stories. And there's an even more fundamental problem than that - unlike traditional religions, it doesn't even have demographics on its side! It has produced incredibly low birthrates. Based on all of that, I just don't see it having any traction in the medium or long term.
Don't get me wrong, it will undoubtedly still be a thing in 2028. But by the end of the century? I highly doubt it. It may well end up retreating to the fringes of society within a couple of decades.
Hogwarts Legacy was absolutely canceled in my social circles, including giving it up being treated as part of "discomfort is needed for progress". I'm not sure what reaction the Roald Dahl edits got; I think mildly positive, with some weak pushback on cultural-history grounds.
Yeah. In my social circle, I witnessed a live version where a slightly out-of-touch old liberal, who initially tried to say "I don't like the look of censoring and boycotting books, and by extension games, because the author has wrong opinions," got a spontaneous peer struggle session. By the end of the discussion, he was loudly proclaiming various nasty things about Rowling and agreeing that everyone should boycott her computer game because it is like giving money to North Korea.
Dahl: I notice that the public pushback comes from the old liberals (as in, retirement age) like Salman Rushdie or the usual right-wing adjacents.
Given that it was the top-selling game on Steam for the last month, and the most-streamed game ever on Twitch, we can conclude that your social circles are unrepresentative of gaming humanity as a whole.
The Dahl edit reaction has been universal derision, but Puffin hasn't walked back the idea, either. Rather like with Seuss, which was met with nearly-universal derision, but eBay policy is still that you can't even sell the old copies.
eBay hasn't gone that far with Dahl yet, as far as I'm aware.
Sorry, I'm a bit out of the loop. What was wrong with Dr. Seuss books? (Well, "One fish, two fish" was immensely boring, but I mean outside of things like that.)
In early 2021 Dr. Seuss' estate pulled six books (https://www.nytimes.com/2021/03/04/books/dr-seuss-books.html) from publication for insensitivities like a Chinese character eating a bowl of rice using chopsticks (that's one of the examples; I *think* "If I Ran The Zoo" had some things that would be mildly offensive to even a sane person, but most of the sources don't explicitly say what was wrong with the books).
"6. Social justice movement appear less powerful/important in 2023 than currently: 60%"
No. Giving that to yourself is an error, because you are overly focusing on the slight receding of the tide in 2022-2023 and forgetting the inundation of 2020-2021. The pre-George Floyd world of 2018 was a lot different from the world of 2023.
Yeah, that's my impression as well. It looks like SJ has firmly and formally entrenched itself in the universities now, with mandatory DEI statements as de-facto ideological purity tests for new hires. It might take decades to undo that.
I think I know why everyone *assumes* that mandatory DEI statements are de-facto ideological purity tests - but my understanding of how these statements are used is that they are just collected and sent to the hiring department as part of the hiring package, and in practice, ideological purity is just as likely to sink your candidacy with some significant fraction of a hiring committee as wrongthink is. The real problem with these statements is that they create a culture war minefield for candidates to navigate, with no indication of what is actually going to be judged as good.
If these statements are used by *administrators* as a pre-filter before files get to the department level (and I've heard some claims that some UC schools might do this for some applications) *then* these can be de-facto ideological purity tests. But when they just get sent to the committee that includes both a 30 year old radical assistant professor and a 70 year old curmudgeon, it's really unclear what kind of statement you need to avoid getting nixed by someone. (Probably the kind of statement that makes people glaze over and look back at your academic work instead.)
I am willing to bet that >99% of applicants will grit their teeth and do their best to pronounce the shibboleths properly, rather than hope that their application goes straight to the desk of some contrarian professor who is on board with "fuck that PC bullshit".
Yes, it sounds like some hires at Berkeley have used it that way. That does not seem likely to be much more of a precursor of wider trends than Hamline University with the Islamic art fiasco is.
But the bigger point is that most academics want people who will say nice stuff about minorities in their statements, but will get very worried about hiring someone who said they actually tried to change something about how their previous department worked.
I'm not sure how "ideological purity is just as likely to sink your candidacy" is supposed to work. Yes, there are hiring managers who don't want overzealous ideological purity in their departments. Those managers aren't asking for DEI statements. If there's a DEI statement, that's coming from HR or the administration or somewhere. And maybe they just put it in there because all the cool people are doing it and they don't have any systematic way of doing anything with it. Or maybe they are using it as a pre-filter or other disqualifier.
But from the point of view of anyone filling out a DEI statement, the possibilities are "this is a waste of my time" or "this will roundfile my application if I don't at least feign ideological purity". Some of them won't waste their time and won't complete the application, the rest will feign ideological purity to some extent.
So "I will cleverly ask for a DEI statement, then rule out the candidates who show too much ideological purity", screens people out for being diligent and capable in doing what they think you just asked them to do. Nobody does that, so nobody expects anyone else to do that, so the expected value of a DEI statement remains between "waste of time" and "necessary proclamation of ideological purity to get this job".
Why are you talking about "hiring managers"? This is about university hiring. The way that occurs is that some big committee composed of faculty members asks for a bunch of materials, evaluates them, and discusses the candidates until they can reach some sort of consensus about who the few finalists will be. They usually ask for materials about research, teaching, and service, and some universities now require them to ask for a DEI statement as well. But the evaluation is entirely in-house.

In general, anything that isn't research doesn't matter all that much (at least at R1 universities), and people even say things like "a teaching award is the kiss of death" - it shows that you care too much about non-research things. If any part of the application sets off a red flag for one or more committee members, it's very easy for them to keep that candidate out of the finalists, given the strength of the pools.

A bunch of faculty reading an application aren't going to use a candidate's strong DEI statement as much reason to raise or lower their estimation of the candidate. But if the DEI statement triggers someone on the committee in one direction or another, it's going to sink your application. You really don't want your DEI statement to stand out as an indication that you're on the vanguard, because that's going to make a lot of people worried about having you in their department. Instead, the best strategy is usually making a relatively bland statement about how you've been nice to women and minorities in grad school or supported them as a faculty member, and maybe taking the opportunity to show how you would diversify the faculty. But ideological purity is going to be very scary to at least some members of a hiring committee, and unless your research record is very strong, it's going to make things difficult, just as much as expressing reactionary views will.
As someone posted on Hacker News about this topic, university mandatory DEI statements are less about wokeness than about the survival of the universities. There is a population time bomb from the Great Recession that is just about to hit the universities (search for a US population pyramid). These DEI statements are less about wokeness and more about how good a candidate is at getting non-traditional students (including older students) to enroll and complete.
I find the social justice predictions really suspect. It's judging prevailing opinion about opinions about opinions. How do you measure "cancel culture"? Cancellations per annum? Number of cancellable offenses per the Board of Cancellation? Number of words dedicated to social justice in Atlantic op-eds?
IDK. All fluff to me. The answer will always depend on who you ask.
I could just say that Pacific Islanders are underrepresented in ads so social justice is declining because it's not inclusive enough of non-black minorities. It's turtles all the way down.
That whole section seemed off. I read it, thinking "oof, Scott's going to have to give himself a bad grade on this one", and then was shocked that he said that "all of this is basically true" and gave himself a B+. Especially the "Christianity culturally becomes Buddhism" thing - that almost seemed *more* true in 2018, I remember all the "real Christianity is socialist" posts back then - and I don't think I've heard *anyone* suggest a black lesbian pope.
I think the only reason he was able to claim to score so well is that his discrete predictions didn't actually test the predictions in the prose.
Side note, George Santos' clearly-intentional use of the white power "okay sign"[1] (probably just him trolling for his own amusement because he knows his days are numbered, but still) comes close to making Scott directionally incorrect on (2) and (3), but I know it's not explicit enough for Scott to count it.
What evidence do you have that congressman Santos is *not* terminally online? He's 34, it's not that uncommon, especially for politically-minded people. Also, he's a compulsive liar, which fits right in on the internet.
I'm very interested in this comment thread but it seems like the evidence for and against "social justice movement appears less important in [time A] than [time B]" seems to be a collection of gut feelings and anecdotes. Anybody have any good ideas on how to measure/quantify relative cultural strength of an ideology? I'm guessing the answer is no but I'd love to find out I'm wrong.
I think it's not impossible to get an objective sense of these things in hindsight, but it's nigh impossible at the time. We'll know in 10 years whether "woke" was a passing thing or the new normal.
I have never understood the ostensible relationship between "social justice" and "cancel culture". What do these things have to do with each other at all?
That "social justice" can have any meaning with being tired Rerum Novarem boggles my mind. That there are ostensible proponents and critics of "social justice that don't even know what Rerum is is pretty sad state of thought.
What is called Social Justice these days should probably be referred to as "Critical Social Justice". It shares nothing with the Catholic concept that you're referring to except the name, and is instead a blend of cultural Marxism and postmodernism. The closest relative in Catholic theology might be Liberation Theology (but I'm saying that based on hearsay).
The concept of Cultural Marxism explains so, so much of what's going on. The shared idea: society is best viewed as a struggle between groups of people - the oppressors (bad) and the oppressed (good). Oppressors erect systems of power to maintain the status quo, which must be torn down to achieve liberation.
Old-fashioned Marxism: capitalists oppress the proletariat.
Flavors of cultural Marxism:
Feminism: men oppress women.
Queer studies: cis-hetero people oppress people with queer sexual identities.
Critical Race Theory: whites oppress Blacks.
Fat Studies: normal-sized people oppress fat people.
Post-Colonialism: white people oppress people of color.
Disability studies: healthy people oppress disabled people.
And on and on and on. Same shit, lots of different piles.
Good luck with your reclaiming. I'm not even Catholic, but any sincere attempt to actually do good in the world and not just play power games would be very much appreciated.
The term "cultural Marxism" has a lot of baggage[1] that goes far beyond what you're using it to mean.
That said, this *is* a valid way of describing idpol and the culture war through a Marxist lens, *IF* you replace the term "oppressor" with "privileged".
i.e. there's a capitalist/bourgeoisie/privileged class that has a comparative advantage in society, and while not inherently evil, they are incentivized to maintain that power at the expense of the proletariat/unprivileged/oppressed class. Sometimes this takes the form of direct oppression, but it's not always that simple (especially as progress makes the oppressed class less oppressed).
You can even work in the concept of petit-bourgeoisie to apply to e.g. poor whites that cling to race as the reason they're better than brown poors, and why they need to cling on to what little privilege that brings them.
Mass adoption of self-driving cars was always a fantasy. And I've said that here, and elsewhere, before. The problem is that self-driving cars have to be perfect, not just very good, for legal reasons and for psychological reasons.
Counter-counterpoint: self-driving cars still aren't where self-flying planes were 20+ years ago, and support for them has, if anything, gone down after the MCAS fiasco.
Actually I think airplanes are illustrative because they demonstrate that there is a whole new class of deadly accidents that will start to occur due to human interaction with automation, and the automation will generally get blamed by a wary public (we see this a bit already in some of the Tesla crashes where yes, the automation screwed up, but clearly the human wasn't doing their duty to monitor it either).
Neither self-driving cars nor self-flying planes, in the sense of "attentive on-board human operator not required", are generally trusted to carry human passengers. Or to operate in close proximity to human bystanders. If there are even a few specialized markets where self-driving cars do carry human passengers, that puts them well ahead of the planes.
The planes seem to have the edge because there are more high-profile applications that don't involve transporting humans or operating dangerously close to (friendly) humans such that "and, yeah, it will crash and burn one time out of a thousand" is acceptable.
Flying is easier to automate than driving, and airliners are already quite capable of flying themselves almost the entire way in most conditions with a very low error rate. Indeed most airliner flights are mostly flown by autopilot. And most “manual” airline flying involves automated pilot guidance and envelope protections substantially more sophisticated than all but the highest level of available driver assist functions. Even some single pilot private aircraft now have certified emergency auto land systems. All these automated systems “crash and burn” at a much lower rate than one time in a thousand, even if we take “crash and burn” to mean “human intervention required to prevent catastrophic result of automation failure (human told autopilot to do wrong thing doesn’t really count)”
Self-driving is quite a bit harder, but the stakes are lower and “come to a stop and wait for the constantly available human help to pull up and fix it” is viable in a way that it isn’t for an airplane. So yes, there are a couple of limited areas where recently truly self driving cars have begun to operate (although anecdotally the human intervention rate is still fairly high).
I guess it’s hard to say which is “ahead” given the different problem spaces. My main point, though, is that the general public isn’t necessarily going to understand the nuances of difficulty, and they aren’t going to accept “very good” for cars given that they are nowhere near accepting an automated passenger aircraft even though they are arguably already “very good”.
Sure, but my point was basically that any semi-frequent traveler has almost certainly been on an aircraft that was flown almost entirely hands off between just after takeoff and the very end of final approach (and you may have even been auto landed, although that’s a rarely used capability). And even the parts that were “hand flown” were subject to significant automated assists and protections.
Meanwhile very few have been on a truly “auto piloted” car on the open road and any driver assists are generally limited to lane keeping and cruise control (with a few cars having more advanced features like park assist and auto braking).
Airliners are a bit weird too because there’s a different trade off when you’ve already got a pair of highly paid and trained individuals on board plus many more on the ground operating in a highly regimented system. Bluntly sometimes we just want the pilots to not get bored or lose their skills, plus it’s nice to have the flexibility to change the programmed plan en route. The autopilot is generally CAPABLE of more than it is typically ALLOWED to do. Then again programming an autopilot is a lot more complicated than “select destination in Google Maps and go” and requires skilled pilots and controllers.
An investigation into the economics of London's Docklands Light Railway (fully automated) might be instructive. Not that I am offering to do it, mind you.
Call me a cynic, but I imagine unions play a substantial role, with the staff that can't be automated yet exerting their bargaining power to protect the ones that can
Sure. Per the CDC it looks like excluding escalators and people working on elevators to leave only passengers, there are maybe 10-15 deaths a year in the US, which is well under the "struck by lightning" threshold. Even counting all those it's 30.
And anecdotally, Rex Stout had Nero Wolfe's investigator Archie Goodwin constantly noting when an elevator was self-service for decades after they became common, though he didn't seem particularly bothered by them. Except that in that interval between human operators and ubiquitous surveillance cameras, they didn't offer convenient witnesses to quiz.
No. They'll need to be much cheaper than human drivers, and nearly as convenient. I expect self-driving cars to be rented by the hour or minute. That won't handle all use cases, but it will handle all the common ones that 90% of people experience. And you won't need to worry about parking or maintenance. But it *WILL* need to be a dependable service.
Another counter-point: Self-driving cars can sneak in step by step.
Germany has already passed a law allowing autonomous cars of level 4. This means that the car companies may define circumstances in which the car drives autonomously. There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.
Right now, there is only one such system, and that is allowed (by company decision) to operate on highways below 60km/h (i.e., only in congested traffic or traffic jams). But it can increase gradually: perhaps in two years they go for up to 120, then for general roads outside of cities, and eventually everywhere.
“ There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.”
In practice, how do you enforce this? 10 seconds is enough for any actual emergency to be over, so you’re either going to have to be essentially fully autonomous or produce a ton of nuisance alarms any time things get remotely sketchy.
Then it’s “wake up mate. You are dead in 10 seconds. Thank you for using CheapAssCarAI, even if this is your last ever use of the service, we appreciate your custom”.
Sounds like the car is meant to handle any emergencies automatically while it's autonomous. The 10 seconds is so that there's a person that can drive when the car leaves the environment where it's allowed to drive autonomously (eg when it exits the highway).
I don't understand the comment. Category 4 means that the car must be able to deal autonomously with any emergencies. That's the definition of category 4. It's the highest category below 5 (5 = no driver). The alert is for example when the car is going to leave the highway, or when it starts snowing (or whatever other environments are not covered by the car).
You are probably thinking about categories 2 and 3, which are about assisted driving. Tesla is between 2 and 3, more advanced companies around 3, and the first models with (very limited) 4 are just coming out.
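For anyone who finds the level definitions confusing, here's a toy sketch of the handover logic being described (my own illustration, not the text of the German law or any real system): at Level 4 the car handles emergencies itself, the 10-second alert only fires for a predictable exit from the approved operating domain, and the fallback if the human never responds is a safe stop rather than disaster:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()   # Level 4: car handles everything, incl. emergencies
    ALERTING = auto()     # leaving the approved domain; human has ~10 s
    MANUAL = auto()       # human driving
    SAFE_STOP = auto()    # human never took over: pull over and stop

def next_mode(mode: Mode, leaving_domain: bool, human_ready: bool,
              seconds_since_alert: float) -> Mode:
    # Emergencies never go to the human at Level 4; only a predictable
    # exit from the operating domain triggers the 10-second alert.
    if mode is Mode.AUTONOMOUS and leaving_domain:
        return Mode.ALERTING
    if mode is Mode.ALERTING:
        if human_ready:
            return Mode.MANUAL
        if seconds_since_alert >= 10.0:
            return Mode.SAFE_STOP   # not "you're dead in 10 seconds"
    return mode
```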
I think predictions about fertility rates in various countries should be of interest, and also how technology such as AI girlfriends affects them. Similarly: what the percentage of the population identifying as LGBT will look like, whether we'd start to see the beginning of the religious/insular communities inheriting the earth, what changes in IQ and such there'll be, and whether any of the fertility-increasing efforts will have worked.
> start to see the beginning of the religious/insular communities inheriting the earth
If that's relating to differing birth rates, another five years isn't enough to affect anything (partly because kids have to age up, partly because the effect is less than people think even if meaningful in the long run because lots of religious people become not-religious).
That's all true, but my guess is that even in 5 years' time you'd have a much better sense of whether such trends are going to continue long term. The LGBT trend might be the most noticeable over a 5-year period; I think the same is also true of efforts to increase fertility. Maybe you'd also see concerns about underpopulation or concerns about an aging population become more common; I can very easily imagine a significant shift in culture/media/academia and such on those issues within 5 years.
I think the big deal for fertility rates isn't sexual orientation; it's the cost of having children.
Let's have predictions about the cost of housing and education.
Or how about predictions about work from home? A lot of managers hate it, but maybe some of them are aging out. Is it too early to be thinking about major companies (possibly new companies) that are entirely work from home?
Fair enough. The largest I've worked for was a few hundred people. I'd be shocked if there weren't any in the thousands, and mildly surprised if there weren't any in the tens of thousands. But I certainly don't know about any huge companies that are 100% remote.
Yes, I agree. Having children through fertility treatments or adoption isn't cheap but it's relatively minor compared to the costs of one parent being out of the workplace for 5-6 years or full-time childcare during that time.
I'm not sure I can sum it up in a one-sentence numerical prediction, but I think we'll see a peak in college tuition (in real dollars) and in college enrollment, which will help bring down the total cost of raising a child. AI, automation, and outsourcing will further cut into the benefits of a generic college degree. Non-outsourceable blue collar jobs like plumber, electrician, etc. will start to be a little more attractive again to a generation looking at the massive student debt problem their elders are dealing with.
In my own circles, trans people now greatly outnumber lesbians and gays combined. (Unsure if T > L+G+B.) So perhaps we'll soon start seeing LGBT activist groups dropping the T, as trans activism has a very different flavor than LGB activism.
I remember thinking at the time that 1% for Roe was way too low and I'd make it closer to 50% (of course now I can't point to anything proving I thought that, though I could've sworn I made that prediction on the original thread from 2018).
In particular I'd say that even if Republicans had only gotten one of Kavanaugh and Barrett you'd still probably see Roe "substantially" overturned. Roberts didn't technically vote to overturn Roe, but I think with 5 conservatives (minus Kennedy if he were still around) on the court he wouldn't have voted to uphold any abortion restrictions. Whether you think his vote in Dobbs is consistent with "not substantially overturning Roe" is a matter of judgment - his decision would clearly allow abortions to be prohibited that were protected under Roe, but he also didn't say "and also Roe is overturned".
But even if you think Roberts's concurrence doesn't count as "substantially overturning" Roe, that wouldn't stop people from passing an even harsher law as a test case (which would have certainly happened if Kavanaugh had joined Roberts in our non-alternative timeline). To me one of the most likely versions of the "Roe isn't substantially overturned by 2023" possibility wasn't that Roe was protected but that they drag it out and it doesn't happen till 2024 or 2025.
Agreed, and even Scott's "what I was thinking at the time" justification is pretty poor, tbh. While Kennedy was a fairly reliable pro-Roe vote, he was also old and likely to retire under a Republican President and Senate before the midterms (just as Breyer's retirement under Biden rightly surprised no one). Roberts is more moderate than Thomas or Alito, but he was not in any sense a locked-in pro-Roe vote. Breyer and Ginsburg were both old in 2018, and there was surely a decent chance that one of them kicked the bucket or was forced to retire before they wanted to - Ginsburg's death was treated as a tragedy by pro-abortion advocates, but not exactly a shock. And of course, let's not make the mistake of relying on hindsight to know that she only had to make it to 2021 - from the perspective of 2018 there was a perfectly plausible world where Trump gets re-elected in 2020, Ginsburg dies/retires in 2021, and Roe gets overturned in 2022.
There were plenty of plausible pathways for Roe to survive past 2023 too, but something in the 40-60% range would have been reasonable rather than the 5-10% Scott revises to in this post. Even in retrospect, he substantially underestimates how likely this outcome was.
Not that I called it at the time, but the risk of Roe being overturned should always have been closer to the statistical probability of RBG dying or retiring due to health issues, which was certainly more than 1% - by 87 I think the annual likelihood of death is something like 6 or 7%.
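As a back-of-envelope check on that (using the 6-7% annual figure from the comment, not a real actuarial table):

```python
# Probability of at least one death in 5 years given a constant annual
# mortality rate p: 1 - (1 - p)^5. The 6-7% figure is the comment's estimate.
for p in (0.06, 0.07):
    print(f"annual p = {p:.0%} -> P(death within 5 years) ~= {1 - (1 - p) ** 5:.0%}")
# ~27-30%, before even counting health-driven retirement, which suggests a
# 1% estimate for "Roe substantially overturned" was far too low.
```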
Seems to me that the Roe prediction could be sort of right, even though it was wrong, in that it was really based on the recognition of an underlying truth - i.e that the American people probably won't put up with a total ban on abortion.
I hope that it's not just wishful thinking that this decision might be decisive in the 2024 election in returning a President and/or State legislatures that will pass pro-choice laws.
As a remainer Brit, I'm also hopeful that Brexit will eventually be seen the same way, i.e. something that we just had to endure for a while in order to reveal a truth to the reactionary non-political rump of the population that nevertheless decides who governs us.
One of the reasons that so many predictions end up so wrong isn't because the underlying thought is wrong, but because the polarisation we have in society makes so many outcomes a simple coin-toss with a slightly (52%-48%) weighted coin.
> As far as I can tell, none of the space tourism stuff worked out and the whole field is stuck in the same annoying limbo as for the past decade and a half.
I agree that progress has been disappointingly slow, but your original prediction is more accurate than this assessment makes it seem. There are in fact two companies (infrequently) selling suborbital spaceflights (Virgin Galactic and Blue Origin), and SpaceX has launched multiple completely private orbital tourism missions.
Yes, SLS put Orion around the moon late last year.
In addition, Starship is damn close to going into orbit. It's expected to happen within the next month (for real this time). It will very likely turn out that Scott missed that prediction by less than a month.
Expected by whom? I wouldn't give that better than 50-50 odds. Obviously Elon either expects it or wants people to believe it, and obviously Elon has lots of fanboys who will believe anything he says. But do you know anyone who (correctly) predicted that Starship probably wouldn't make orbit in 2022, who is presently predicting an orbital launch in March 2023?
> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion.
Depending on how you judge this, it could already be true. I'm assuming you're familiar with Replika? It's an "AI companion" app that claims 2 million active users. Until quite recently they were aggressively pushing a business model where you pay $70/month to sext with your Replika, but they recently changed course and apparently severely upset a fair number of users who were emotionally attached: https://unherd.com/thepost/replika-users-mourn-the-loss-of-their-chatbot-girlfriends/
Also thanks, I needed this today in particular. Had a dream where GDP had gone up 30% in the past year and I figured we'd missed the boat on any chance to avoid AI doom.
>At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion
This seems to have already been true as of late 2022.
Replika seems to have had up to 1M DAUs, although this was before their recent changes of removing a lot of romantic/nsfw functionality (which users very much did not like, and likely led to >0 suicides and notable metric decreases). It's also notable that they do not use particularly good or large models, but rely on a lot of hard-coding, quality UX, and continual anthropomorphic product iterations. Given what I've seen of their numbers, it's highly likely they had >350,000 weekly active users already.
Those who think AI partners will not take off strongly underestimate how lonely and isolated many people are, likely because they aren't friends with many such people (as those people have fewer friends and do not touch grass particularly often). The barriers are more that this is hard to do well, there is a bit of social stigma around it, and supporting NSFW is a huge pain across many sectors for many reasons. The last will remain true, but the other two will change pretty quickly.
Even setting aside Replika's user numbers, Scott seems to massively underestimate the extent to which people will form emotional bonds with just about anything with a smiling face. Genshin Impact alone might have enough players who meet this criterion.
BFR is much closer than it was 5 years ago, certainly, but a successful launch to orbit this year is still something I wouldn’t bet more than even money on.
I think there is a general tendency to underestimate how long space technology can look really, really close to ready before it actually is. The devil is always in the details (consider Boeing Starliner - yeah, haha Boeing, but it’s looked “basically done” for years. The same could be said of Virgin Galactic and Blue Origin’s New Shepard).
SpaceX just did a static fire for a test launch in March that seems planned to be orbital or "barely suborbital". I don't see how they could fail to hit 2023 unless that launch fails catastrophically.
And last year they were claiming they’d launch in late summer or early fall of 2022. They’ve done a lower stage static fire and a few low altitude hops with Starship, most of which exploded on landing. Huge step from that to successful orbit. Shit happens, space is hard.
SpaceX attempts a BFS/Starship "orbital or barely suborbital" (where barely = within 100m/s) test launch in 2023
SpaceX succeeds at it.
I'd say 90%, 70%. Full first launch success chance like 60%, but they'll probably get off the pad enough that they have some odds of getting a second attempt in 23.
They don’t even have FAA approval to fly out of Boca Chica yet, and March 31st is less than 6 weeks away. On top of that the static fire wasn’t even particularly successful if it was a dress rehearsal for launch - one of the engines never lit and another shut itself down, and the test was only 15 seconds. Even 80% odds against a launch by March 31 seems frankly optimistic about their chances.
I’d say it’s probably 70% they get off one full stack launch this year and 50/50 it reaches orbit if it does launch. Very low probability they launch multiple times this year.
This is a huge, expensive test, even for SpaceX. If they launch and it doesn’t go perfectly, they aren’t going to throw away that many Raptors just to YOLO it until they understand very well what went wrong and feel like they have a high probability of success, and that takes time.
It's not just the raptors that are the issue, right? I mean, if the launch goes spectacularly wrong, couldn't the entirety of Boca Chica literally go down in flames? After all, it's some 5000 tons of fuel, not that far from some more massive fuel tanks...
Static fire was successful in the sense that if it'd been a launch and nothing else went wrong, it would have gotten to orbit with that number of engines.
As you scale up engine count, eventually you will just need to handle failures gracefully.
The first four launches of the full "Starship" upper stage failed catastrophically. So the odds of the first launch of the "Super Heavy" booster failing catastrophically are not small. The Starship program has been a lot more aggressive than that of the Falcon 9; more like that of the Falcon 1 (which failed catastrophically the first three times).
And as noted below, some catastrophic failure modes of Starship would leave SpaceX without a Starship launch site for a year or more.
Are you also shorting Tesla? They've got to send it someday, and apparently *their* perception is that they've retired enough risk. We'll find out soon if the FAA agrees.
Gah missed the edit window - Fanboy reporting for duty. IANARS and I know you are, I'll sign the waiver. :-)
Not "full Starship upper stage", aero demonstrators with janky v1 Raptors, which retired a *lot* of risk--belly flop works! SN8 (the first attempt!) almost made it, and probably would have with Raptor 2 (and no slosh, and no helium, and... see "risks retired").
So they should... spend a year building out another Stage 0, *then* blow up this one, and be right where they would have been (except they lost a year to fix the hypothetical issue)? They've already got 200 Raptors in the barn, and likely a few hundred Starlink v2's sitting on the books. They're *too* hardware rich at this point.
This reminds me of a joke about the lottery. The odds are 50/50 because you either win or you lose.
Probabilistic statements are obviously acceptable in predicting the future. If someone draws a card from a deck and puts it face down on the table, then asks me if the card is the ace of hearts, is diamonds, or is black I would say the odds are 1/52, 1/4, and 1/2.
If I were to take a bet on any of these I would expect the odds to reflect those probabilities. Scott bets on the future as well.
What does it mean to say the odds are 1/52? If a card is drawn from the deck 52 times, I would expect the ace of hearts to appear once. Draw 520 times and, on average, the ace will appear approximately 10 times.
And that’s what probabilistic guesses of the future are. A 60% chance means that the predictor expects that in 60% of the potential futures the prediction will happen.
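A quick simulation bears out this frequency reading of the numbers (just an illustration of the card example above):

```python
# Simulate the card-drawing example: with enough draws, the observed
# frequencies should approach 1/52, 1/4, and 1/2 respectively.
import random

SUITS = ["hearts", "diamonds", "clubs", "spades"]
RANKS = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
DECK = [(rank, suit) for suit in SUITS for rank in RANKS]  # 52 cards

trials = 100_000
ace_of_hearts = diamonds = black = 0
for _ in range(trials):
    rank, suit = random.choice(DECK)
    ace_of_hearts += (rank == "A" and suit == "hearts")
    diamonds += suit == "diamonds"
    black += suit in ("clubs", "spades")

print(ace_of_hearts / trials)  # ~0.019 (1/52)
print(diamonds / trials)       # ~0.25  (1/4)
print(black / trials)          # ~0.50  (1/2)
```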
I'm confused about what you're saying. Is this a semantic argument about the word "prediction"? A philosophical argument about the nature of knowledge not being probabilistic? A gripe that Scott is failing to follow some standard format that unspecified other parties are using?
And I've just got no idea at all about how to interpret your first sentence.
No. Alice was just more confident in her prediction. She did not "predict" better.
If your so-called "prediction" X will happen (80%) (of the time? with 80% confidence?) and your error estimate is plus or minus 10%, then you are not actually predicting X will happen because your confidence interval is less than 100%.
I flip a coin 10 times.
You predict that it will be 5 heads. (Add your probability figure - what will it be?)
I predict it will NOT be 5 heads, without a probability figure.
Who has really made the better prediction?
It is likely to come up exactly 5 heads about 24.6% of the time, IF and only IF we have a "fair coin" and we throw it an infinite number of times.
It comes up 4 heads, who has made the better prediction?
It comes up 5 heads, but your "confidence probability" was not 24.6%, who has made the better prediction?
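For what it's worth, the 24.6% figure above checks out; it's the binomial probability of exactly 5 heads in 10 fair flips:

```python
# Exactly 5 heads in 10 fair flips: C(10,5) / 2^10.
from math import comb

print(comb(10, 5) / 2**10)  # 0.24609375, i.e. about 24.6%
```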
When an expert gives an opinion in court, for it to be admissible she must state that her opinion/(prediction?) is to a reasonable degree of medical or expert certainty or probability. What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck.
Do we make predictions about taking risks? Sure: when we drive, we might say that the trip will be safe because we know that there is about one micromort for every 250 miles driven. One micromort = 1 in a million chance of death. If driving 250 miles instead carried a 20% risk of death (i.e. you were only 80% confident of safety), no one in their right mind would drive (and the roads would be so full of accidents and ambulances that driving would probably be nearly impossible due to traffic). So when SA predicts X will happen at 80%, is he really making a strong prediction? Is he even really being that confident in his so-called prediction in comparison to the decision to drive 250 miles?
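To put numbers on that comparison (the 20% figure here is the hypothetical from above, not a real driving risk):

```python
# Back-of-envelope comparison of the real driving risk with the
# hypothetical 20%-per-trip risk from the comment above.
micromort = 1e-6

real_risk_per_trip = 1 * micromort   # ~1 micromort per 250 miles driven
hypothetical_risk = 0.20             # the imagined "20% risk" trip

print(f"Ratio: {hypothetical_risk / real_risk_per_trip:,.0f}x")  # 200,000x riskier
```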
Also this statement -- "What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck." -- is flat wrong at least in U.S. courts. (Actual courts I mean, not the Hollywood versions onscreen.)
That didn't clear up any of my questions. Could you please explicitly affirm or deny that you are saying each of the three possibilities that I listed?
As for your question, obviously higher confidence is good on a prediction that comes true and bad on a prediction that didn't come true.
There are formal mathematical scoring rules you can use to grade predictions that are designed to reward accurate probabilities; i.e. where if an event actually happens 75% of the time, then a prediction of 75% will get the highest expected score (or the highest average score after a large number of tests), beating out any competitors who gave other probabilities, either higher or lower. This lets you give a quantitative answer to how good each prediction was.
Because if a coin actually lands heads 75% of the time, you'd obviously rather have the actual number 75% than just know "heads is more likely than tails". The number gives you more information and lets you make better plans.
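As a concrete example, here's a minimal sketch of one standard proper scoring rule, the Brier score (lower is better), applied to an event that happens 3 times out of 4 - reporting the true 75% beats both a hedged 55% and an overconfident 95%:

```python
# Brier score: squared error between the stated probability and the
# outcome (1 if it happened, 0 if not). Lower average scores are better,
# and reporting the true frequency minimizes the expected score.
def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

outcomes = [1, 1, 1, 0]  # an event that happens 3 times out of 4
for forecast in (0.55, 0.75, 0.95):
    avg = sum(brier(forecast, o) for o in outcomes) / len(outcomes)
    print(f"forecast {forecast:.2f} -> average Brier score {avg:.4f}")
# forecast 0.55 -> average Brier score 0.2275
# forecast 0.75 -> average Brier score 0.1875  (the best of the three)
# forecast 0.95 -> average Brier score 0.2275
```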
Call it a "probability estimate" instead of a "prediction", if you want. It's a prediction plus an estimate of confidence; you can always translate it to a bare prediction by looking at whether something is thought to be more likely than not, as Scott in fact does in this post.
Well-calibrated estimates are more valuable than the same estimates stated as just yes or no, since an optimal betting strategy requires estimates of expected returns. Otherwise, you bet just as much on a 55% as a 99%, and get blown out 45% of the time on the former.
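To illustrate why, here's a toy sketch assuming even-money bets and Kelly-style sizing (which stakes 2p - 1 of the bankroll); the optimal bet size depends heavily on the stated probability, not just on which side is favored:

```python
# Kelly criterion for an even-money bet: stake the fraction f* = 2p - 1
# of your bankroll when you win with probability p.
def kelly_fraction(p: float) -> float:
    return max(0.0, 2 * p - 1)

for p in (0.55, 0.75, 0.99):
    print(f"p = {p:.2f} -> bet {kelly_fraction(p):.0%} of bankroll")
# p = 0.55 -> bet 10% of bankroll
# p = 0.75 -> bet 50% of bankroll
# p = 0.99 -> bet 98% of bankroll
```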
Yeah a probability estimate is not the same thing as a prediction.
A point estimate alone is useless without knowing +/- e. And we must know the distribution of e. Is e Gaussian or Paretian? Is e unimodal or U-shaped?
If SA made his so-called predictions as offered bets of a percentage of annual income above a living wage, or bets sized as a percentage of total wealth, then we might know something, because then we would know whether SA ends up financially ruined or not.
> AGE OF MIRACLES AND WONDERS: We seem to be in the beginning of a slow takeoff. We should expect things to get very strange for however many years we have left before the singularity. So far the takeoff really is glacially slow (everyone talking about the blindingly fast pace of AI advances is anchored to different alternatives than I am) which just means more time to gawk at stuff. It’s going to be wild. That having been said, I don’t expect a singularity before 2028.
This prediction is so vague as to be horoscope-worthy. We are going to see something really strange and wonderful yet totally unspecified; or things will continue pretty much as usual until the Singularity; or perhaps something else will happen. Yep, that covers all the bases.
> Some big macroeconomic indicator (eg GDP, unemployment, inflation) shows a visible bump or dip as a direct effect of AI (“direct effect” excludes eg an AI-designed pandemic killing people)
Ok, so other than AI intentionally killing people (by contrast with e.g. exploding Teslas), how would we know whether any change in a macroeconomic indicator is due to AI or not? This prediction is likewise pretty vague.
> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%
Does it have to be Gary Marcus specifically? 30% is ridiculously low if we expand the search space to all of humanity. Or just to ACX readers, even.
> AI can make a movie to your specifications: 40% short cartoon clip that kind of resembles what you want, 2% equal in quality to existing big-budget movies.
> but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them
This is worse than the current level of wokeness, so I'd argue that the current level is far from its peak.
> IDK, I don't expect a Taiwan invasion.
I do, by 2028, at about 55%. We will know more after the 2024 election.
> ECONOMICS: IDK, stocks went down a lot because of inflation, inflation seems solveable, it'll get solved, interest rates will go down, stocks will go up again?
I expect the growth of the actual productive output of the US to continue its decline. By 2028, I expect the US to be in a prolonged period of economic (and cultural) stagnation (if not decline), whether the pundits acknowledge it or not.
No, I mean in the sense of Tyler Cowen's responding to people saying "X will collapse" with "Are you short X?". I avoided moving back to Melbourne and bought 20L of bottled water based on a weaker prediction than that; I'm feeling an urge* to check whether you're actually acting as though you believe that number.
*Not entirely an urge I'm proud of; part of it's coming from gatekeep-y sorts of motivations. I'm giving in because, well, if you weren't taking it seriously and start taking it seriously because I probed, that's still a good outcome assuming that our predictions are anywhere remotely close to accurate.
I mean, I believe in that number with approximately 55% strength. I'm not sure what I should be doing differently given this information, though. I cannot prevent the invasion of Taiwan. I can do a few very minor things to influence the 2024 election, and I'm doing them.
>I'm not sure what I should be doing differently given this information, though.
I'm not sure either, since I don't know where you live and what preparations you already have for nuclear war.
I'm basically saying here that there are people who say that nuclear war is super likely but who also live in Manhattan without a clear exit plan or some worth-dying-for reason to be there, and those people are obviously not very serious about their "nuclear war is super likely" belief.
I think the jump from "invasion of Taiwan" to "nuclear war" that you are implicitly making in this comment is unfounded. I suspect the US and China will be perfectly capable of conducting a gentlemanly conventional war wherein they only use honorable bombs that level city blocks but don't fill the air with radioactive fallout.
Even talking about the American one, there's a case that the specifics of the campaign/result might cause the PRC to smell weakness/opportunity. This could happen *before* the actual election day, though.
The Chinese can wait. I think the idea that the Chinese need to act now is probably planted by US ideologues, the end game of which is perhaps the US provoking a conflict over Taiwan.
Metaculus has this: https://www.metaculus.com/organization/public-figures/ eg: https://www.metaculus.com/public-figure/joe-biden/ (great site overall btw and it gets better all the time) This feature is experimental though and there is no aggregated accuracy score as public figures don't usually make predictions with percentages and they don't all predict the same questions, so assigning a score would not be meaningful.
Major warning (50% of ban) for this comment, it's irrelevant, controversial, and says a thing requiring an explanation without providing one.
Anyone who takes the bait will have their comments deleted.
> Average person can hail a self-driving car in at least one US city: 80%
> I think I nailed this.
which city is this?
New Orleans probably
San Francisco, Phoenix
Does anyone know why these cities in particular? I'm not really familiar with Phoenix, but driving in San Francisco doesn't seem like an easier problem to solve than driving in most other US cities, let alone on a freeway.
They have to be generally sunny and snow-free, with fairly tech-friendly regulations.
Phoenix is ideal testing conditions (no snow, no rain, no pedestrians), while San Francisco is where all the engineers tend to live anyway.
I'm gonna quibble with Phoenix. There are a couple of very limited programs that operate in small parts of the metro area that you can sign up for and occasionally use. I would not say that the average tech-savvy person can hail a self-driving car in the way that they can get an Uber or Lyft.
I tend to agree, although I think you're overstating it. Waymo One tells me that I can hail a vehicle 24/7 and that on most occasions there's no-one in the driver's seat, but it operates "within parts of the Phoenix metropolitan area, including Downtown Phoenix and parts of Chandler, Tempe, Mesa and Gilbert." This means it won't work for a large class of rides, e.g. someone who wants to travel home to the suburbs from the downtown area.
As written, the prediction is true, but as far as I can remember, I understood it to mean that one would be able to hail a self-driving car to go to most ordinary destinations within the city, in the same way that existing ride-hailing apps work.
That was my impression too - I've heard of people surprisedly getting in a self-driving car that they hailed, but I haven't heard of anyone actually having an ordinary option of hailing a self-driving car.
San Francisco is both near to a lot of the tech companies and is arguably some of the hardest driving in the country, ignoring weather. *Every* weird traffic thing happens somewhere in SF, it's an absolute nightmare. As someone who's driven in both SF and NYC, I honestly think SF is worse, and that makes it absolutely great for testing with someone sitting in the car who can hit an abort button.
If you can drive in SF you can drive anywhere in the US (unless it's snowing.)
Phoenix is the exact opposite, and is perfect for initial tests *without* an abort button.
I’m not entirely sure if I’d grade this as true for San Francisco. Some average people in certain parts of SF can hail a self driving car in SF but the average person can’t and the cars don’t serve the densest and busiest areas of the city, like downtown.
Within those limitations, are the cars now truly autonomous? Or is it still the case that there always needs to be a human "test driver" either inside the car or trailing behind it in a separate car, ready to intervene if necessary?
Within those limitations, the cars are truly autonomous. They can still dispatch help to the cars but the cars aren't literally being tailed.
As far as I know this prediction is false.
What makes Waymo's deployment in Phoenix not fulfill the terms of the prediction?
The language of the prediction is ambiguous so maybe it should count. My reading of it is that waymo only serving a tiny section of downtown shouldn't count.
I think this is one of those kinda edge cases where if something were on the line for whether or not the prediction was technically accurate, it'd be a hard call. But nothing is, and we don't need to worry about exactly what's on what side of which line.
What is clear is that artificial intelligence moved less quickly than Scott was anticipating in terms of autonomous vehicles. We maybe (depending largely on what you describe as "an average person") got just barely over the line of his 80% prediction. He thought there was a 30% chance that you'd be able to hail an autonomous vehicle in "at least 5 of the 10 largest US cities," which we aren't even *close* to. Presumably if you had zoomed way in on this and said, "Okay, so you think there's an 80% chance that it's rolled out somewhere, and a 30% chance that it's reached half of all big cities, then there's some set of intermediate probabilities of it being partially rolled out," and all of those would've been false. It's also not the case that we got, like, almost there -- it won't be rolled out to 5 of 10 cities in 2024, or 2025.
And he thought there was a 10% chance that at least 5% of all US trucking had been replaced.
If you kind of come up with that as an approximation of Scott's mental probability distribution of "how far along autonomous vehicles will be," then it seems clear we're way short of his average; the mean of his probability distribution was probably about one standard deviation off from the truth.
It depends on your model for what the constraints on rolling out autonomous taxis are.
I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere, and if not, then it's limited in the way we see.
I also don't think that the tangle had a 30% chance of resolving, but I have the benefit of hindsight.
If he meant, "30% chance of autonomous taxis being available in 10/10 biggest US cities," or "9/10 with one inexplicable holdout for no reason," then I assume that's what he would've said.
I agree it's not a linear ramp where the next city is just as hard as the previous city, but this was only a 5-year prediction window! It is clearly the case that there are logistical and local issues to work through for each city's rollout.
“ I don't think it's a situation where you make a little bit of progress in one city, and then you make a bit more progress and it's in two cities, and then you slowly creep up. I think that if the technical/regulatory/PR tangle gets resolved, then suddenly you can roll them out almost everywhere”
I don’t agree for two reasons. First, the current paradigm involves building a painstaking map of every drivable area within a geographic location, much more detailed than google maps. Even if you build a map covering all of phoenix, you can’t make some minor tweaks and transfer the technology to Houston. You have to build a brand new map from scratch.
Second, the current locations are ideal conditions - well-maintained roads with clear signage, good weather, low to medium traffic, and well-behaved drivers. Not generalizable to, say, Manhattan.
I agree that it’s going to be faster to expand the service to, say, San Bernardino than it was to create the first service in Phoenix. But it’s not going to be trivial to go from most of Phoenix to most of the US.
I would wager that in 8 years, a self driving car service will not be able to cover 50% of Manhattan during the winter.
A news article answering this question: https://techcrunch.com/2022/11/10/now-anyone-can-hail-a-waymo-robotaxi-in-downtown-phoenix/
I haven't seen any news report that I would count as "You can hail a self-driving car in city X." Now my criterion for a self-driving car is strict, "You can get from this point to 99% of the destinations people drive to from this point without touching the controls." To some degree, the limited jurisdictions that allow self-driving contribute to this. But my impression is that self-driving technology is at the "90% of the distance of 90% of the trips" level, not the "all of the distance of 99% of the trips" that would e.g. give blind people the same mobility as the sighted.
> The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic)
DeepMind and Google are both part of Alphabet Inc., OpenAI was heavily invested in by Microsoft, and Anthropic was heavily invested in by Google. How will this prediction work if the "leading big tech companies" are just using things from the "AI-only" companies?
Buying the competition counts.
Counts as which - the big tech being ahead or the AI company being ahead?
Both!
The problem is that the question isn’t well framed—most big tech companies have the edge that they have not just because they innovate but because they can identify and buy out innovation and talent.
My point is that you really can’t, being intellectually honest, cut out Big Tech because they ‘only own’ or ‘only invest in’ the AI firms. That counts as Big Tech doing it. But being fair, it also counts as the AI firm having the advantage. It just isn’t a question that yields insight; it betrays misunderstanding of what investment is for and why it’s beneficial.
I think you should probably add a prediction about robotics. There's going to be a lot of progress in the next 5 years.
Interesting. What are the big challenges in robotics that you think we're on the verge of solving?
Hardware is enormously harder than people think. Software may improve robotics in a sense, but as soon as you need physical interaction you get crushed by physics.
Just think of Moore's law vs battery capacity. Transistor count (and software complexity) has increased in the range of 100,000-100,000,000x in the same amount of time we needed to double battery capacity (while losing lifespan and turning them into bombs).
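To make the comparison concrete, here's a quick conversion of those improvement factors into doublings (so they're on the same scale as battery capacity's single doubling):

```python
# How many doublings correspond to a 100,000x or 100,000,000x improvement?
# Battery capacity managed roughly one doubling over the same period.
from math import log2

for factor in (100_000, 100_000_000):
    print(f"{factor:,}x improvement = {log2(factor):.0f} doublings")
# 100,000x improvement = 17 doublings
# 100,000,000x improvement = 27 doublings
```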
I've thought about the topic a lot and I think there are reasonable odds that growing muscle in vats ends up being easier than building motors that can match biological performance.
Self-driving cars also reveal why robotics is so damn difficult: you need 99.99999999...% reliability, and mistakes are expensive. I spent some time working on UAVs and ANY glitch will smash your hardware into pieces. Funny the first time, but imagine if your computer self-destructed every time the compiler detected a syntax error.
The massive improvements in AI, especially in the field of Generative AI, can have direct effects on how a robot can think 'like a human'. So maybe afterwards, when the AI hype has settled down, we could have a new 'humanly robot' hype. What do you think?
The funny thing about the humanoid robot area is that it's really attractive from a PR and public awareness standpoint, but sort of dubious in terms of ROI. The point being that since we already have humans to do humanoid tasks, a bot in that form has to be competitive (cheaper, better, whatever) compared to the existing human pool.
Useful robots that *aren't at all humanoid* seem like a better investment, they just don't get as much attention...
That makes sense!
I think you'd do better with a mini-centaur form. Say something about the size of a dog. (There are reasons different dog breeds are different sizes.) You could argue for wheels or treads instead of legs, but unless speed is your main concern, legs have advantages. Handling stairs fairly well is one reason. Boston Dynamics already has the dog body, but attaching the arms is difficult, as you need to brace them to use any force. Now imagine trying to get that robot to thread a needle, or remove a frozen bolt. Last I heard there was lots of work needed on the hands.
Now consider what a robot nursing assistant should look like. You don't want to hit ANYBODY's uncanny valley. I think the face should look anime, and definitely non-human but friendly. But if the price were right, there'd be a big market for robot nursing assistants.
Kinda agree though. After reading the points made by Boris, I think miniature robots that favour utility and function more than the actual 'looking like a humanoid robot' segment could be the best place where AI can thrive.
Which of:
BostonDynamics robots,
A CNC Mill,
A smart thermostat,
An autonomous forklift in a warehouse,
A DH-82B "Queen Bee" military UAV,
are we counting under 'robotics'?
The Boston Dynamics robot seems like the one, in my opinion.
While framing the question, I said 'humanly robot' which I think was a little misleading.
"IE you give it $2, say "make a Star Wars / Star Trek crossover movie, 120 minutes" and (aside from copyright concerns) it can do that?"
J.J. Abrams already did this with the reboot, and while I can't speak for anyone else, it certainly was not what *I* wanted.
"AI can write poetry which I’m unable to distinguish from that of my favorite poets (Byron / Pope / Tennyson )"
Interesting selection! I wouldn't have classed Pope as a Romantic poet, but this gives me the excuse to shoehorn in a somewhat relevant joke, from a 1944 Tolkien letter:
"The conversation was pretty lively – though I cannot remember any of it now, except C.S.L.'s story of an elderly lady that he knows. (She was a student of English in the past days of Sir Walter Raleigh. At her viva she was asked: What period would you have liked to live in Miss B? In the 15th C. said she. Oh come, Miss B., wouldn't you have liked to meet the Lake poets? No, sir, I prefer the society of gentlemen. Collapse of viva.) "
The Walter Raleigh mentioned above is:
"Sir Walter Alexander Raleigh (5 September 1861 – 13 May 1922) was an English scholar, poet, and author. Raleigh was also a Cambridge Apostle.
... in 1904 [he] became the first holder of the Chair of English Literature at Oxford University and he was a fellow of Merton College, Oxford (1914–22).
...Raleigh is probably best known for the poem "Wishes of an Elderly Man, Wished at a Garden Party, June 1914":
I wish I loved the Human Race;
I wish I loved its silly face;
I wish I liked the way it walks;
I wish I liked the way it talks;
And when I'm introduced to one
I wish I thought What Jolly Fun!"
I wonder if Professor Raleigh's knighthood was helped along by whoever was responsible for the honors list for George V liking the idea of creating another Sir Walter.
Is there any context for this, other than the films being bad? I assume it's a joke, but the comment is very deadpan so I wanted to check.
I remember "computer screenplay assistant" was a thing that was zeitgeisty for about a week, and probably misinterpreted by much of the public.
"Star Trek reboot as Star Wars showreel by JJ Abrams" was a rant I did on a site that shall not be named back then, complete with side-by-side comparisons of things like the Starfleet Academy cadet uniforms and Star Wars generic troopers uniforms, but that was fourteen years ago and I haven't kept most of it (I do have the part where I defended James Tiberius Kirk as not the pop culture skirt-chaser Abrams interpreted him as, in one interview where he said "Nobody’s going to force Kirk to be a romantic and settle down. That would feel forced and silly. Kirk’s a player”. As we said at the time, "Abrams is not of the body!")
But yeah, there was so much in the visual design, not to say the reboot universe, that was moved to be more like Star Wars than Trek, including the phasers being revamped to be more like blasters. Abrams did himself no favours with me with his attitude to Trek, and it was no secret he was a Star Wars fan, not a Trekkie, and dearly longed to get the directing gig for the new Star Wars movies. That's why I think he did do the reboot as, literally, a showreel to demonstrate he could do a big-budget movie set in an established universe and reboot it successfully.
About the only time I am in agreement with Jon Stewart 😁
https://www.youtube.com/watch?v=-mSM5BCUhZ4
Thank you for the Jon Stewart clip; that was delightful. I would have loved to see the full version of your rant at the time.
I think there was a lot covered in it, the particular Tumblr (yes, that was the hellsite) discussion group that got going was mostly critical. Not that the reboot as such was a bad idea, but that they wasted a lot of opportunity.
How the reboot timeline got created was hooey, but Trek has always had a lot of hooey involved; that's how we got the term "technobabble" after all. The important difference is that Trek is science fiction (it does try to be grounded in 'vaguely plausible if we stretch it a lot and take the most out there speculative theories current') while Star Wars is science fantasy (the desert planet setting, the 'hokey religion versus blasters', the mystic order of the Jedi, the Force, midichlorians and the rest of it - it's a pulp skiffy influence at heart and none the worse for it).
The problem comes when (a) you're a bunch of untalented hacks and (b) try to force the contents of one universe into the mould of the other. Abrams went for Kewl Shots (the complaints about lens flare and how the bridge of the "Enterprise" looked like an Apple store) and very clearly modelled much of his reboot along the lines of Star Wars (his love) than established Trek canon.
There were a lot of "left on the cutting room floor" scenes floating around at the time, both good and bad; a whole chunk of this Kirk's childhood backstory was cut, which would have contributed to understanding his character and why he was the way we see him later (the rebel without a clue). Other bits got cut and it was for the better, e.g. what Abrams and company seemingly thought was a *hilarious* bit about "they all look the same" when it came to Kirk and the Orion women cadets - he romances one named Gaila for ulterior motives, to get her to run the Kobayashi Maru hack when he's taking the test. Later, he goes to apologise to her for using her (since this got her into trouble, quite naturally) and - here's the joke, hold on to your sides! - he apologises to the *wrong* woman! Because all those green slave girls look the same, you know! And he doesn't really care about Gaila so he can't tell her apart from any other random Orion cadet!
That Jim Kirk, what a card 😐
Reboot McCoy was the best thing in it, thank you Karl Urban. I could go on about other things - oh why the heck not? The Spock/Uhura romance was unexpected, and the end result was that it looked like Abrams couldn't think of a way to fit Uhura in as anything other than a girlfriend (we have one scene where Uhura demands to know why she hasn't been assigned to the Enterprise, Spock reasonably says it would look like favouritism because they're in a relationship, and she bullies/nags him into reassigning her). There's also, in the second movie, the totally unprofessional scene she makes about their relationship while they're on a mission, in front of their commanding officer, where she again looks more like "nagging shrew" than "equal professional and officer pulling her weight". There's the throwaway line dismissing Christine Chapel. The infamous underwear scene with Carol Marcus in the second movie, which echoes the underwear scene with Gaila in the first, and manages to be both unsexy *and* sexist. The terrible pseudoscience which doesn't even pretend to be technobabble - now we can warp between moving starships, the Klingon home planet is apparently on the doorstep of the Terran solar system because we can get there in a short trip, Starfleet Command can have every senior commanding officer killed by one guy in a scout ship because security, what's that? and so on.
The second movie trying to persuade us "this is the engine room of a starship" when anyone who has had even the most cursory view around a chemical, food or other processing plant can identify "this is a brewery". The first movie BLOWING UP VULCAN (if they think that after all this time I've forgotten and forgiven, they have another think coming). The heavily militarised Starfleet Command, which again can be explained by the backstory of this timeline *if* they bothered to explain it, which they don't. Not one but *two* dogfights by starships over San Francisco, as the climactic moments of both movies. The second movie had me cheering on the evil admiral, because he at least was competent, and they finally remembered that hey, you build starships in orbit in space docks not from the ground up on earth. That first movie shot was another Kewl Shot with Kirk on his motorcycle pulling up to view the ship that he will eventually command, but didn't make much sense logistically (though I've read posts defending it):
https://townsquare.media/site/442/files/2013/05/Trek-Guide-Starfleet.jpg?w=980&q=75
I didn't even bother with the third movie, even though that was allegedly better. They burned up all my goodwill by then, and I've been a Trekkie since I was seven.
"in early 2018 the court was 5-4 Democrat"
No, in early 2018 the court had 5 Republicans: Roberts, Kennedy (soon to be replaced by Kavanaugh), Alito, Gorsuch, and Thomas
Either he’s forgetting that Kennedy was nominally a Republican or he’s just being plain about the fact that Kennedy was not actually a Republican in any meaningful sense.
Kennedy joined the Republican justices on most major divisive questions, such as Obamacare being unconstitutional. He also hand-picked Kavanaugh as his successor. Saying he was not actually a Republican is nuts, unless you're only looking at PP v. Casey and Obergefell.
I think Scott just made a mistake. I don't think he was intentionally trying to make a statement about Kennedy being insufficiently conservative.
Still, I think his broader point still stands if you split the Supreme Court into liberal, moderate, and conservative categories rather than merely Democrats and Republicans, with both Roberts and Kennedy falling into the moderate category. In 2018, there were 4 liberals, 2 moderates, and 3 conservatives. Neither the left nor the right could get a majority on their own, so this pushed the court towards making more moderate decisions overall, which made dramatic upheavals (like overturning Roe v. Wade) rather unlikely. Then Kennedy and Ginsburg were replaced by Kavanaugh and Barrett, so the balance shifted to 3 liberals, 1 moderate, and 5 conservatives - enough for the conservatives to bull rush their way past the compromise stage and force through any decision they wanted, without any need to temper or moderate them first.
Had the court's balance remained the same, I expect Roberts would have gotten his way with the Dobbs v. Jackson case: Mississippi's 15 week abortion ban would've still been upheld, but it would've been a narrower ruling that merely pushed back the viability line from 20 weeks to 15, rather than overturning the Roe decision completely and allowing states to ban abortion at any point in pregnancy. This would've been true even if we'd only gotten Kavanaugh or Barrett, but not both: The conservatives would've had to go along with Roberts' compromise, because they simply wouldn't have had the numbers to overturn Roe entirely.
Granted, giving 1% odds to Roe being overturned was still too low. But it wasn't a sure thing by any means. I'd have probably given it 1 in 3 odds of happening when Scott made these predictions, a 50/50 chance when Kavanaugh was appointed, and 2 in 3 odds of happening when Barrett was appointed.
I disagree. His judicial philosophy, while it concurred with the Republican/conservative wing of the Court at times, was not particularly conservative or Republican, and any sufficiently well-read student of the Court should know this.
Kavanaugh’s concerns are similar, he’s obsessed with how his legacy on the Court is seen and willing to strike pragmatic compromises in order to be viewed historically as not a partisan—in my view that makes him fairly unprincipled.
Justices should less be evaluated by quantifying how much they agreed with others than how they arrived at those conclusions and whether they are willing to stand by their principles, and which principles they will stand by. It’s pretty plain from the careers and records of say Kennedy, Kavanaugh, and Roberts that they come from a completely different school than Gorsuch who is very different from Scalia, Alito, or Thomas (who have the best claim to being called the Republicans).
That said, I don’t think Scott believes the above. My comment was tongue firmly in cheek.
The more professional they are, the harder it is to stereotype them. I didn't know Scalia was supposed to be a 'conservative' until after he was dead; Ginsburg was absolutely partisan and unprofessional.
Spot on. Any justice clearly identifiable as liberal or conservative shouldn't (in an ideal world) have the job.
Picky disagreement: Both the liberal and the conservative sides have points on which they are clearly correct. So if you judge just by adherence to those points, it's quite reasonable to say a liberal or conservative judge is properly doing their job. The problem is all the other stuff.
E.g., abortion should clearly be a state level issue. I may think many of the states aren't living up to their constitutional obligations (though I usually don't know their constitutions well enough to really comment), but it should clearly be a state level decision. There are many such issues, where the matter SHOULD rest with the states, but the states have defaulted, so it ended up with the feds. Then there's that idiotic Supreme Court decision that cities couldn't have a residency requirement for provision of general assistance. I see NO valid basis for that decision, and the result has been a "race to the bottom" in support of social services at the city level. But that SHOULD have been a city level decision. (I don't remember whether it was the conservative or the liberal agenda that inspired that idiocy, but I suspect conservative. But an honest conservative should diligently oppose it, and a liberal should find no reason to support it.)
"E.g., abortion should clearly be a state level issue."
Abortion (like other forms of birth control) has been an activity partaken of by individuals, on an individual basis, throughout history. Because of this history of the right being held by the people, not government, it firmly belongs among the rights retained by the people (not the states or the federal government), as indicated in the 9th and 10th amendments. Or so I say. So at the very least your "clearly" is not as clear as you believe it to be.
The US Constitution is not a system meant to allow totalitarian control of individual's ways of life at the federal or state levels (excluding only those rights embodied in the 1st through 8th amendments).
While I largely agree with you, I feel this is a matter that should be addressed by the constitutions of the individual states. And that they should defer to the rights of individuals. Well, at least unless one wants to undertake rewriting the entire constitutional system. The expansion of federal powers that has happened has been necessary, but I believe that much of it is clearly illegal. What should have happened is various constitutional amendments, but that was so difficult that those in power generally just ignored the clear words of the constitution, and made "workable decisions".
When I said "E.g., abortion should clearly be a state level issue." I meant that it should not be decided at the federal level. Once you get away from the federal level, the different constitutions of the various states make things quickly too complex for any simple answer. Well, if you're arguing on legal grounds. If you're arguing on moral grounds the problem is that there's no consensus on what the proper morality is. Everybody's arguing for their own point of view, often with the same words meaning different things.
If you didn't know Scalia was a conservative you weren't paying attention. He wanted to uphold anti-sodomy laws and claimed the majority in "Lawrence v. Heller" which struck them down was a "product of a Court, which is the product of a law-profession culture, that has largely signed on to the so-called homosexual agenda".
While I may or may not have reservations about Scalia's decisions, my point was that I regard Scalia more professional than Ginsburg. For the same reason, I regard Scalia more professional than Thomas.
I assume because you are a conservative?
I think you are conflating the names of Lawrence v. Texas (the sodomy ruling) with Heller vs. District of Columbia (a Second Amendment ruling)
I'm sorry, but this is not professional: “This Court has never held that the Constitution forbids the execution of a convicted defendant who has had a full and fair trial but is later able to convince a habeas court that he is ‘actually’ innocent.”
To say that procedure trumps actual innocence is to undermine the very foundation of criminal law. Such a statement is neither conservative nor liberal, but anti-law itself.
> Looking back, in early 2018 the court was 5-4 Democrat, and one of the Republicans was John Roberts, who’s moderate and hates change.
Both of these claims are difficult to justify. The court in early 2018 had 5 justices appointed by Republican presidents (Anthony Kennedy, replaced by Kavanaugh, was appointed by Reagan; while he had a reputation as a swing justice, he went pretty far right in cases that didn't involve privacy).
Likewise, John Roberts is a moderate only in the context of the most conservative court in a century. This isn't a normative judgment, just a description of his voting record. He has consistently voted with the conservative bloc across a range of issues. The exceptions (Obamacare) spring readily to mind because they are rare.
> 6. Social justice movement appear less powerful/important in 2023 than currently: 60%
How do you figure? Cancel culture and social justice are IMO more powerful than ever, and still gaining in power -- especially as compared to 2018.
I think the wave has crested.
Even if it has, you have to think 2020 was the crest, and I don’t think we’re at “pre 2018” levels.
Sure. I was responding to your “more powerful than ever”. I agree that Scott got that wrong, to be fair though, 2020 was an unusual year.
It seems like a strange omission from this post, given that Scott had to take his blog down in 2020.
Does the crest look anything like what Scott described, in your opinion? "Wokeness has peaked - but Mt. Everest has peaked, and that doesn't mean it's weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place). "
If so, that's not the decline of wokeness. That's the decisive victory of wokeness. If it's cresting because it won and can now get rid of all opponents, that shouldn't reassure anyone. Not even wokists, who can easily be the next targets of the cultural revolution.
It can, and eventually it will; but probably not by 2028. I'd be happy to be proven wrong, of course...
"You can't just gain power, you have to maintain it, and for all its facade of strength wokeness has nothing behind it: no reason, no joy, not even improved living standards, just a mass hallucination enforced by raw power and bullying on quietly resentful subjects. Such a system never lasts."
The same was true of Christianity in the 5th century, or Islam in the 7th. Look at where they are now. The quietly resentful subjects became less resentful, then converted to Christianity/Islam for a mixture of self-interested and genuine reasons. There's no reason that a mass hallucination enforced by raw power and bullying can't last millennia and cause millions of deaths.
Christianity survived the empire. It has something that wokeness doesn’t -
Redemption.
Has it? Did it even survive incorporation into the empire? The name is not the thing.
Christianity, Islam, etc all had charismatic prophet-founders and divine-level foundations. They have fascinating and deep scriptures and stories underlying them.
Wokeism has none of this. No cosmological level claims, no powerful stories. And there's an even more fundamental problem than that - unlike traditional religions, it doesn't even have demographics on its side! It has produced incredibly low birthrates. Based on all of that, I just don't see it having any traction in the medium or long term.
Don't get me wrong, it will undoubtedly still be a thing in 2028. But by the end of the century? I highly doubt it. It may well end up retreating to the fringes of society within a couple of decades.
Institutional Wokeness has a whole lot of people, and more in 2023 than 2018, whose salaries are dependent upon not understanding your arguments.
The question is: Will they be the first to get the boot at the next recession, and will they be rehired afterwards?
Agreed.
I don’t want to talk specific cultural issues but.
1) Hogwarts legacy was not cancelled.
2) Nicola Sturgeon is gone.
3) there’s a weak attempt at “BAFTAs too white” but it’s gaining no traction
4) the reaction to the sensitivity edits of Roald Dahl has been universal derision
Hogwarts Legacy was absolutely canceled in my social circles, including giving it up being considered part of “discomfort is needed for progress”. I'm not sure what reaction the Roald Dahl edits got, I think mildly positive with some weak pushback on cultural-history grounds.
Yeah. In my social circle, I witnessed a live version where a slightly out-of-touch old liberal who initially tried to say "I don't like the look of censoring and boycotting books, and by extension games, because the author has wrong opinions" got a spontaneous struggle session from his peers. By the end of the discussion, he was loudly proclaiming various nasty things about Rowling and agreeing that everyone should boycott her computer game because it is like giving money to North Korea.
Dahl: I notice that the public pushback comes from the old liberals (as in, retirement age) like Salman Rushdie or the usual right-wing adjacents.
https://www.forbes.com/sites/paultassi/2023/02/14/hogwarts-legacy-is-the-top-four-best-selling-games-on-steam-hits-new-peak-playercount/amp/
Says Hogwarts Legacy is on track to be one of the leading games this year.
I can believe it. I wouldn't be surprised if both my social circles were especially wokeness-eaten in places and there were a bunch of https://slatestarcodex.com/2018/05/23/can-things-be-both-popular-and-silenced/ sorts of phenomena going on.
Given that it was the top selling game on Steam for the last month, and the most streamed game ever on Twitch we can conclude that your social circles are unrepresentative of gaming humanity as a whole.
The Dahl edit reaction has been universal derision, but Puffin hasn't walked back the idea, either. Rather like with Seuss, which was met with nearly-universal derision, but eBay policy is still that you can't even sell the old copies.
eBay hasn't gone that far with Dahl yet, as far as I'm aware.
Sorry, I'm a bit out of the loop. What was wrong with Dr. Seuss books? (Well, "One fish, two fish" was immensely boring, but I mean outside of things like that.)
No need to be sorry!
In early 2021 Dr. Seuss' estate pulled six books (https://www.nytimes.com/2021/03/04/books/dr-seuss-books.html) from publication for insensitivities like a Chinese character eating a bowl of rice using chopsticks (that's one of the examples; I *think* "If I Ran The Zoo" had some things that would be mildly offensive to even a sane person, but most of the sources don't explicitly say what was wrong with the books).
Because of this, cries of censorship and 1984 etc, prices of the affected books spiked to several hundred dollars on eBay, followed by eBay delisting all of them and forbidding further listings under their Offensive Materials policy (https://www.ebay.com/help/policies/prohibited-restricted-items/offensive-material-policy?id=4324).
Also Mt Everest hasn’t peaked I don’t think. It is still growing.
Considering the DEI bureaucracies in institutions of higher learning and in corporate HR, It's getting baked into everything.
Mandatory ESG is still a possibility.
I vote "not peaked," at least in the real world. Twitter SJ might be on the decline.
"6. Social justice movement appear less powerful/important in 2023 than currently: 60%"
No. Giving that to yourself is an error because you are overly focusing on the slight receding of the tide in 2022-2023 and forgetting the inundation of 2020-2021. The pre-George Floyd world of 2018 was a lot different than 2023.
Yeah, that's my impression as well. It looks like SJ has firmly and formally entrenched itself in the universities now, with mandatory DEI statements as de-facto ideological purity tests for new hires. It might take decades to undo that.
I think I know why everyone *assumes* that mandatory DEI statements are de-facto ideological purity tests - but my understanding of how these statements are used is that they are just collected and sent to the hiring department as part of the hiring package, and in practice, ideological purity is just as likely to sink your candidacy with some significant fraction of a hiring committee as wrongthink is. The real problem with these statements is that they create a culture war minefield for candidates to navigate, with no indication of what is actually going to be judged as good.
If these statements are used by *administrators* as a pre-filter before files get to the department level (and I've heard some claims that some UC schools might do this for some applications) *then* these can be de-facto ideological purity tests. But when they just get sent to the committee that includes both a 30 year old radical assistant professor and a 70 year old curmudgeon, it's really unclear what kind of statement you need to avoid getting nixed by someone. (Probably the kind of statement that makes people glaze over and look back at your academic work instead.)
You are being VERY optimistic in your assessment.
I am willing to bet that >99% of applicants will grit their teeth and do their best to pronounce the shibboleths properly, rather than hope that their application goes straight to the desk of some contrarian professor who is on board with "fuck that PC bullshit".
Especially since some reputable universities are very explicit about how they judge the diversity statements and that, yes, they do use them to pre-filter applications before they go to the departments (and again at later stages). See e.g. https://whyevolutionistrue.com/2019/12/31/life-science-jobs-at-berkeley-with-hiring-giving-precedence-to-diversity-and-inclusion-statements/ .
I hope that explains why I, for one, have trouble seeing these statements as anything else than purity tests.
Yes, it sounds like some hires at Berkeley have used it that way. That does not seem likely to be much more of a precursor of wider trends than Hamline University with the Islamic art fiasco is.
But the bigger point is that most academics want people who will say nice stuff about minorities in their statements, but will get very worried about hiring someone who said they actually tried to change something about how their previous department worked.
I'm not sure how "ideological purity is just as likely to sink your candidacy" is supposed to work. Yes, there are hiring managers who don't want overzealous ideological purity in their departments. Those managers aren't asking for DEI statements. If there's a DEI statement, that's coming from HR or the administration or somewhere. And maybe they just put it in there because all the cool people are doing it and they don't have any systematic way of doing anything with it. Or maybe they are using it as a pre-filter or other disqualifier.
But from the point of view of anyone filling out a DEI statement, the possibilities are "this is a waste of my time" or "this will roundfile my application if I don't at least feign ideological purity". Some of them won't waste their time and won't complete the application, the rest will feign ideological purity to some extent.
So "I will cleverly ask for a DEI statement, then rule out the candidates who show too much ideological purity", screens people out for being diligent and capable in doing what they think you just asked them to do. Nobody does that, so nobody expects anyone else to do that, so the expected value of a DEI statement remains between "waste of time" and "necessary proclamation of ideological purity to get this job".
Why are you talking about "hiring managers"? This is about university hiring. The way that occurs is that some big committee composed of faculty members asks for a bunch of materials, and they evaluate it, and discuss the candidates until they can reach some sort of consensus about who the few finalists will be. They usually ask for materials about research, teaching, and service, and some universities now require them to ask for a DEI statement as well. But the evaluation is entirely in-house.

In general, anything that isn't research doesn't matter all that much (at least at R1 universities) and people even say things like "a teaching award is the kiss of death" - it shows that you care too much about non-research things. If any part of the application sets off a red flag for one or more committee members, it's very easy for them to keep that candidate out of the finalists, given the strength of the pools.

A bunch of faculty reading an application aren't going to use a candidate's strong DEI statement as very much reason to raise or lower their estimation of the candidate. But if the DEI statement triggers someone on the committee in one direction or another, it's going to sink your application. You really don't want your DEI statement to stand out as an indication that you're on the vanguard, because that's going to make a lot of people worried about having you in their department. Instead, the best strategy is usually making a relatively bland statement about how you've been nice to women and minorities in grad school or supported them as a faculty member, and maybe taking the opportunity to show how you would diversify the faculty. But ideological purity is going to be very scary to at least some members of a hiring committee, and unless your research record is very strong, it's going to make things difficult, just as much as expressing reactionary views will.
As someone posted on Hacker News about this topic, mandatory university DEI statements are less about wokeness than about the survival of the universities. There is a population time bomb from the Great Recession that is just about to hit them (search for a US population pyramid). The statements are really about how good a candidate is at getting non-traditional students (including older students) to enroll and complete.
Yeah, I found the way Scott graded that section really weird.
I find the social justice predictions really suspect. It's judging prevailing opinion about opinions about opinions. How do you measure "cancel culture"? Cancellations per annum? Number of cancellable offenses per the Board of Cancellation? Number of words dedicated to social justice in Atlantic op-eds?
IDK. All fluff to me. The answer will always depend on who you ask.
Number of black people in Ads in countries with few black people? That’s dropped a bit.
I could just say that Pacific Islanders are underrepresented in ads so social justice is declining because it's not inclusive enough of non-black minorities. It's turtles all the way down.
The U.K. is about 3% black, compared to the 12-13% African American population in the US, and yet it has a Black History Month, not a black history week.
There is no specific month, or week, or day, or hour, for other minorities. I think there should be a Polish history hour.
As long as it involves pierogis, I'm in.
Pierogis, Enigma and a pronunciation guide could bring you all the way to lunchtime.
Pierogis and mead!
I know there's a Gypsy History Month at least...
The British Council's website specifically says there's a South Asian Heritage Month.
These months seem to obviously be about British colonialism, and the affected peoples.
That whole section seemed off. I read it, thinking "oof, Scott's going to have to give himself a bad grade on this one", and then was shocked that he said that "all of this is basically true" and gave himself a B+. Especially the "Christianity culturally becomes Buddhism" thing - that almost seemed *more* true in 2018, I remember all the "real Christianity is socialist" posts back then - and I don't think I've heard of *anyone* suggesting a black lesbian pope.
I think the only reason he was able to claim to score so well is that his discrete predictions didn't actually test the predictions in the prose.
Side note, George Santos' clearly-intentional use of the white power "okay sign"[1] (probably just him trolling for his own amusement because he knows his days are numbered, but still) comes close to making Scott directionally incorrect on (2) and (3), but I know it's not explicit enough for Scott to count it.
[1] https://www.snopes.com/fact-check/george-santos-white-power-sign-mccarthy/
The okay sign does not signify white power for anyone not terminally online.
What evidence do you have that congressman Santos is *not* terminally online? He's 34, it's not that uncommon, especially for politically-minded people. Also, he's a compulsive liar, which fits right in on the internet.
I'm very interested in this comment thread but it seems like the evidence for and against "social justice movement appears less important in [time A] than [time B]" seems to be a collection of gut feelings and anecdotes. Anybody have any good ideas on how to measure/quantify relative cultural strength of an ideology? I'm guessing the answer is no but I'd love to find out I'm wrong.
I think it's not impossible to get an objective sense of these things in hindsight, but it's nigh impossible at the time. We'll know in 10 years whether "woke" was a passing thing or the new normal.
I have never understood the ostensible relationship between "social justice" and "cancel culture". What do these things have to do with each other at all?
That "social justice" can have any meaning without being tied to Rerum Novarum boggles my mind. That there are ostensible proponents and critics of "social justice" who don't even know what Rerum Novarum is is a pretty sad state of thought.
What is called Social Justice these days should probably be referred to as "Critical Social Justice". It shares nothing with the Catholic concept that you're referring to except the name, and is instead a blend of cultural Marxism and postmodernism. The closest relative in Catholic theology might be Liberation Theology (but I'm saying that based on hearsay).
Kids.
The idea of "Cultural Marxism" makes me laugh out loud. Recently, a pre-boomer tried to tell me that ✌️ doesn't mean "peace".
I'm reclaiming social justice and ✌️.
"If you want peace work for justice." Pope Paul VI
And reclaiming "progressive" for Henry George (Progress and Poverty, 1879) and the Church (Populorum Progressio, 1967).
The concept of Cultural Marxism explains so, so much of what's going on. The shared idea: society is best viewed as a struggle between groups of people - the oppressors (bad) and the oppressed (good). Oppressors erect systems of power to maintain the status quo, which must be torn down to achieve liberation.
Old-fashioned Marxism: capitalists oppress the proletariat.
Flavors of cultural Marxism:
Feminism: men oppress women.
Queer studies: cis-hetero people oppress people with queer sexual identities.
Critical Race Theory: whites oppress Blacks.
Fat Studies: normal-sized people oppress fat people.
Post-Colonialism: white people oppress people of color.
Disability studies: healthy people oppress disabled people.
And on and on and on. Same shit, lots of different piles.
Good luck with your reclaiming. I'm not even Catholic, but any sincere attempt to actually do good in the world and not just play power games would be very much appreciated.
The term "cultural Marxism" has a lot of baggage[1] that goes far beyond what you're using it to mean.
That said, this *is* a valid way of describing idpol and the culture war through a Marxist lens, *IF* you replace the term "oppressor" with "privileged".
i.e. there's a capitalist/bourgeoisie/privileged class that has a comparative advantage in society, and while not inherently evil, they are incentivized to maintain that power at the expense of the proletariat/unprivileged/oppressed class. Sometimes this takes the form of direct oppression, but it's not always that simple (especially as progress makes the oppressed class less oppressed).
You can even work in the concept of the petit-bourgeoisie to apply to e.g. poor whites that cling to race as the reason they're better than brown poors, and why they need to cling on to what little privilege that brings them.
[1] https://en.wikipedia.org/wiki/Cultural_Marxism_conspiracy_theory
Mass adoption of self-driving cars was always a fantasy. And I've said that here, and elsewhere, before. The problem is that self-driving cars have to be perfect, not just very good, for legal reasons and for psychological reasons.
Counterpoint: self driving elevators only had to be very good to disemploy elevator operators.
I agree they'll have to be much better than human drivers.
Counter counterpoint - self driving cars still aren’t where self-flying planes were 20+ years ago, and support for them has if anything gone down after the MCAS fiasco.
Actually I think airplanes are illustrative because they demonstrate that there is a whole new class of deadly accidents that will start to occur due to human interaction with automation, and the automation will generally get blamed by a wary public (we see this a bit already in some of the Tesla crashes where yes, the automation screwed up, but clearly the human wasn't doing their duty to monitor it either).
Neither self-driving cars nor self-flying planes, in the sense of "attentive on-board human operator not required", are generally trusted to carry human passengers. Or to operate in close proximity to human bystanders. If there are even a few specialized markets where self-driving cars do carry human passengers, that puts them well ahead of the planes.
The planes seem to have the edge because there are more high-profile applications that don't involve transporting humans or operating dangerously close to (friendly) humans such that "and, yeah, it will crash and burn one time out of a thousand" is acceptable.
Flying is easier to automate than driving, and airliners are already quite capable of flying themselves almost the entire way in most conditions with a very low error rate. Indeed most airliner flights are mostly flown by autopilot. And most “manual” airline flying involves automated pilot guidance and envelope protections substantially more sophisticated than all but the highest level of available driver assist functions. Even some single pilot private aircraft now have certified emergency auto land systems. All these automated systems “crash and burn” at a much lower rate than one time in a thousand, even if we take “crash and burn” to mean “human intervention required to prevent catastrophic result of automation failure (human told autopilot to do wrong thing doesn’t really count)”
Self-driving is quite a bit harder, but the stakes are lower and "come to a stop and wait for the constantly available human help to pull up and fix it" is viable in a way that it isn't for an airplane. So yes, there are a couple of limited areas where truly self-driving cars have recently begun to operate (although anecdotally the human intervention rate is still fairly high).
I guess it's hard to say which is "ahead" given the different problem spaces. My main point though is that the general public isn't necessarily going to understand the nuances of difficulty, and they aren't going to accept "very good" for cars given that they are nowhere near accepting an automated passenger aircraft even though those are arguably already "very good".
"Mostly flown by autopilot" is a very different thing to "can fly without human supervision".
Sure, but my point was basically that any semi-frequent traveler has almost certainly been on an aircraft that was flown almost entirely hands off between just after takeoff and the very end of final approach (and you may have even been auto landed, although that’s a rarely used capability). And even the parts that were “hand flown” were subject to significant automated assists and protections.
Meanwhile very few have been on a truly “auto piloted” car on the open road and any driver assists are generally limited to lane keeping and cruise control (with a few cars having more advanced features like park assist and auto braking).
Airliners are a bit weird too because there’s a different trade off when you’ve already got a pair of highly paid and trained individuals on board plus many more on the ground operating in a highly regimented system. Bluntly sometimes we just want the pilots to not get bored or lose their skills, plus it’s nice to have the flexibility to change the programmed plan en route. The autopilot is generally CAPABLE of more than it is typically ALLOWED to do. Then again programming an autopilot is a lot more complicated than “select destination in Google Maps and go” and requires skilled pilots and controllers.
But the economics of self-[flying/driving] mass transport are very different.
Trains would be almost trivial to automate but the payoff is negligible (how much time does the average person spend driving a train per day?).
An investigation into the economics of London's Docklands Light Railway (fully automated) might be instructive. Not that I am offering to do it, mind you.
An investigation into why TfL has not fully automated their other networks despite government pressure would also be instructive.
Call me a cynic, but I imagine unions play a substantial role, with the staff that can't be automated yet exerting their bargaining power to protect the ones that can
At least in the 21st Century, self-driving elevators are about as perfect as a human-made machine is going to get safety wise.
Sure. Per the CDC, it looks like after excluding escalators and people working on elevators (leaving only passengers), there are maybe 10-15 deaths a year in the US, which is well under the "struck by lightning" threshold. Even counting all of those, it's 30.
But while I don't know the history, I'd expect that like most things they had a development curve. This NPR piece says it took 50 years from their invention to widespread adoption, against widespread public distrust (and of course labor resistance). https://www.npr.org/2015/07/31/427990392/remembering-when-driverless-elevators-drew-skepticism
And anecdotally, Rex Stout had Nero Wolfe's investigator Archie Goodwin constantly noting when an elevator was self-service for decades after they became common, though he didn't seem particularly bothered by them - except that in the interval between human operators and ubiquitous surveillance cameras, they didn't offer convenient witnesses to quiz.
No. They'll need to be much cheaper than human drivers, and nearly as convenient. I expect self driving cars to be rented by the hour or minute. That won't handle all use cases, but it will handle all the common ones that 90% of people experience. And you won't need to worry about parking or maintenance. But it *WILL* need to be a dependable service.
Another counter-point: Self-driving cars can sneak in step by step.
Germany has already passed a law allowing autonomous cars of level 4. This means that the car companies may define circumstances in which the car drives autonomously. There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.
Right now, there is only one such system, and that is allowed (by company decision) to operate on highways below 60km/h (i.e., only in congested traffic or traffic jams). But it can increase gradually: perhaps in two years they go for up to 120, then for general roads outside of cities, and eventually everywhere.
“There must be a person on board, and on alert they must be ready to take back control within 10(?) seconds.”
In practice, how do you enforce this? 10 seconds is enough for any actual emergency to be over, so you’re either going to have to be essentially fully autonomous or produce a ton of nuisance alarms any time things get remotely sketchy.
Yeh.
The AI works until it's about to kill you.
Then it’s “wake up mate. You are dead in 10 seconds. Thank you for using CheapAssCarAI, even if this is your last ever use of the service, we appreciate your custom”.
Sounds like the car is meant to handle any emergencies automatically while it's autonomous. The 10 seconds is so that there's a person that can drive when the car leaves the environment where it's allowed to drive autonomously (eg when it exits the highway).
I don't understand the comment. Category 4 means that the car must be able to deal autonomously with any emergencies. That's the definition of category 4. It's the highest category below 5 (5 = no driver). The alert is for example when the car is going to leave the highway, or when it starts snowing (or whatever other environments are not covered by the car).
You are probably thinking about categories 2 and 3, which are about assisted driving. Tesla is between 2 and 3, more advanced companies around 3, and the first models with (very limited) 4 are just coming out.
I think predictions about fertility rates in various countries would be of interest, and also how technology such as AI girlfriends affects them. Similarly: what would the percentage of the population identifying as LGBT look like, would we start to see the beginning of the religious/insular communities inheriting the earth, what about changes in IQ and such, and would any of the fertility-increasing efforts have worked?
> start to see the beginning of the religious/insular communities inheriting the earth
If that's relating to differing birth rates, another five years isn't enough to affect anything (partly because kids have to age up, partly because the effect is less than people think even if meaningful in the long run because lots of religious people become not-religious).
That's all true, but my guess is that even in 5 years' time you would have a much better sense of whether such trends are going to continue long term. The LGBT trend might be the most noticeable over a 5 year period, and I think the same is also true of efforts to increase fertility. Maybe you would also see concerns about underpopulation or an aging population become more common; I can very easily imagine a significant shift in culture/media/academia and such on those issues within 5 years.
I think the big deal for fertility rates isn't sexual orientation, it's the cost of having children.
Let's have predictions about the cost of housing and education.
Or how about predictions about work from home? A lot of managers hate it, but maybe some of them are aging out. Is it too early to be thinking about major companies (possibly new companies) that are entirely work from home?
Those already exist in tech; I've worked for some. Searching 'remote-first' will get you to info.
I'm not just talking about work-from-home companies, I'm talking about large ones. How big are the biggest?
Fair enough. The largest I've worked for was a few hundred people. I'd be shocked if there weren't any in the thousands, and mildly surprised if there weren't any in the tens of thousands. But I certainly don't know about any huge companies that are 100% remote.
Yes, I agree. Having children through fertility treatments or adoption isn't cheap but it's relatively minor compared to the costs of one parent being out of the workplace for 5-6 years or full-time childcare during that time.
I'm not sure I can sum it up in a one-sentence numerical prediction, but I think we'll see a peak in college tuition (in real dollars) and in college enrollment, which will help bring down the total cost of raising a child. AI, automation, and outsourcing will further cut into the benefits of a generic college degree. Non-outsourceable blue collar jobs like plumber, electrician, etc. will start to be a little more attractive again to a generation looking at the massive student debt problem their elders are dealing with.
In my own circles, trans people now greatly outnumber lesbians and gays combined. (Unsure if T > L+G+B.) So perhaps we'll soon start seeing LGBT activist groups dropping the T, as trans activism has a very different flavor than LGB activism.
I remember thinking at the time that 1% for Roe was way too low and I'd make it closer to 50% (of course now I can't point to anything proving I thought that, though I could've sworn I made that prediction on the original thread from 2018).
In particular I'd say that even if Republicans had only gotten one of Kavanaugh and Barrett you'd still probably see Roe "substantially" overturned. Roberts didn't technically vote to overturn Roe, but I think with 5 conservatives (minus Kennedy if he were still around) on the court he wouldn't have voted to uphold any abortion restrictions. Whether you think his vote in Dobbs is consistent with "not substantially overturning Roe" is a matter of judgment - his decision would clearly allow abortions to be prohibited that were protected under Roe, but he also didn't say "and also Roe is overturned".
But even if you think Roberts's concurrence doesn't count as "substantially overturning" Roe, that wouldn't stop people from passing an even harsher law as a test case (which would have certainly happened if Kavanaugh had joined Roberts in our non-alternative timeline). To me one of the most likely versions of the "Roe isn't substantially overturned by 2023" possibility wasn't that Roe was protected but that they drag it out and it doesn't happen till 2024 or 2025.
Agreed, and even Scott's "what I was thinking at the time" justification is pretty poor, tbh. While Kennedy was a fairly reliable pro-Roe vote, he was also old and likely to retire under a Republican President and Senate before the midterms (just as Breyer's retirement under Biden rightly surprised no one). Roberts is more moderate than Thomas or Alito, but he was not in any sense a locked-in pro-Roe vote. Breyer and Ginsburg were both old in 2018, and there was surely a decent chance that one of them kicked the bucket or was forced to retire before they wanted to - Ginsburg's death was treated as a tragedy by pro-abortion advocates, but not exactly a shock. And of course, let's not make the mistake of relying on hindsight to know that she only had to make it to 2021 - from the perspective of 2018 there was a perfectly plausible world where Trump gets re-elected in 2020, Ginsburg dies/retires in 2021, and Roe gets overturned in 2022.
There were plenty of plausible pathways for Roe to survive past 2023 too, but something in the 40-60% range would have been reasonable rather than the 5-10% Scott revises to in this post. Even in retrospect, he substantially underestimates how likely this outcome was.
Not that I called it at the time, but the risk of Roe being overturned should always have been closer to the statistical probability of RBG dying or retiring due to health issues, which was certainly more than 1% - by age 87 I think the annual likelihood of death is something like 6 or 7%.
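A rough sketch of how that compounds (assuming a flat 6.5% annual hazard purely for illustration; real actuarial hazards rise with age):

```python
# Cumulative probability of death over n years at a constant annual
# hazard p_annual (a hypothetical flat rate, not a real actuarial table).
p_annual = 0.065
for years in (1, 3, 5):
    p_cum = 1 - (1 - p_annual) ** years
    print(f"{years} year(s): {p_cum:.1%}")
# 1 year(s): 6.5%; 3 year(s): 18.3%; 5 year(s): 28.5%
```

So over the five-year window of the prediction, the chance of a vacancy from that seat alone was plausibly in the 25-30% range, not 1%.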
Seems to me that the Roe prediction could be sort of right, even though it was wrong, in that it was really based on the recognition of an underlying truth - i.e that the American people probably won't put up with a total ban on abortion.
I hope that it's not just wishful thinking that this decision might be decisive in the 2024 election in returning a President and/or State legislatures that will pass pro-choice laws.
As a remainer Brit I'm also hopeful that Brexit will eventually be seen the same way. i.e. something that we just had to endure for a while in order to reveal a truth to the reactionary non-political rump of the population that nevertheless decides who governs us.
One of the reasons that so many predictions end up so wrong isn't because the underlying thought is wrong, but because the polarisation we have in society makes so many outcomes a simple coin-toss with a slightly (52%-48%) weighted coin.
> As far as I can tell, none of the space tourism stuff worked out and the whole field is stuck in the same annoying limbo as for the past decade and a half.
I agree that progress has been disappointingly slow, but your original prediction is more accurate than this assessment makes it seem. There are in fact two companies (infrequently) selling suborbital spaceflights (Virgin Galactic and Blue Origin), and SpaceX has launched multiple completely private orbital tourism missions.
And didn't SLS put an Orion capsule around the moon and back in November 2022? Or did it need to be manned?
Yes, SLS put Orion around the moon late last year.
In addition, Starship is damn close to going into orbit. It's expected to happen within the next month (for real this time). It will very likely turn out that Scott missed that prediction by less than a month.
Expected by whom? I wouldn't give that better than 50-50 odds. Obviously Elon either expects it or wants people to believe it, and obviously Elon has lots of fanboys who will believe anything he says. But do you know anyone who (correctly) predicted that Starship probably wouldn't make orbit in 2022, who is presently predicting an orbital launch in March 2023?
> At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion.
Depending on how you judge this, it could already be true. I'm assuming you're familiar with Replika? It's an "AI companion" app that claims 2 million active users. Until quite recently they were aggressively pushing a business model where you pay $70/month to sext with your Replika, but they recently changed course and apparently severely upset a fair number of users who were emotionally attached: https://unherd.com/thepost/replika-users-mourn-the-loss-of-their-chatbot-girlfriends/
>IDK, I don't expect a Taiwan invasion.
No number on that?
Also thanks, I needed this today in particular. Had a dream where GDP had gone up 30% in the past year and I figured we'd missed the boat on any chance to avoid AI doom.
Agreed, I'd like a % on that prediction. I personally agree it's unlikely, but more 10% unlikely than 1% unlikely.
Yeah, anything a country actively wants to do should be higher than 1% over 5 years.
>At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion
This seems to have already been true as of late 2022.
Replika seems to have had up to 1M DAUs, although this was before their recent changes removing a lot of romantic/NSFW functionality (which users very much did not like, and which likely led to >0 suicides and notable metric decreases). It's also notable that they do not use particularly good or large models, but rely on a lot of hard-coding, quality UX, and continual anthropomorphic product iterations. Given what I've seen of their numbers, it's highly likely they had >350,000 weekly active users already.
Those who think AI partners will not take off strongly underestimate how lonely and isolated many people are, likely because they aren't friends with many such people (as those people have fewer friends and do not touch grass particularly often). The barriers are more that this is hard to do well, there is a bit of social stigma around it, and supporting NSFW is a huge pain across many sectors for many reasons. The last will remain true, but the other two will change pretty quickly.
Even setting aside Replika's user numbers, Scott seems to massively underestimate the extent to which people will form emotional bonds with just about anything with a smiling face. Genshin Impact alone might have enough players who meet this criterion.
True restraint is when you only get *one* constellation of your favorite 5-star cutie :P
It would have been interesting to see your percentages for "Trump gets impeached" and "Trump gets impeached twice" if you had included them.
Even better would have been “at least 5 Republican Senators vote to convict”.
> <...> I should have put this at more like 90% or at most 95%. I’m not sure I had enough information to go lower than that, <...>
Aren't these inverted? I.e. shouldn't this read "more like 10% or at least 5%. I’m not sure I had enough information to go higher than that"?
> 14. SpaceX has launched BFR to orbit: 50%
Almost? Likely in March of this year; it was likely delayed that long by the pandemic, not just by permitting and technological challenges.
> 16. SLS sends an Orion around the moon: 30%
They have! Just uncrewed.
BFR is much closer than it was 5 years ago, certainly, but a successful launch to orbit this year is still something I wouldn’t bet more than even money on.
I think there is a general tendency to underestimate how long space technology can look really really close to ready to go before it actually is. The devil is always in the details (consider Boeing Starliner - yeah, haha Boeing, but it’s looked “basically done” for years. The same could be said for Virgin and New Shepard)
SpaceX just did a static fire for a test launch in March that seems planned to be orbital or "barely suborbital". I don't see how they could fail to hit 2023 unless that launch fails catastrophically.
And last year they were claiming they’d launch in late summer or early fall of 2022. They’ve done a lower stage static fire and a few low altitude hops with Starship, most of which exploded on landing. Huge step from that to successful orbit. Shit happens, space is hard.
Sure, let's split the predictions:
SpaceX attempts a BFS/Starship "orbital or barely suborbital" (where barely = within 100m/s) test launch in 2023
SpaceX succeeds at it.
I'd say 90%, 70%. Full first launch success chance like 60%, but they'll probably get off the pad enough that they have some odds of getting a second attempt in 23.
Update: Polymarket has a "success by March 31" market at 80% against https://polymarket.com/event/will-spacexs-starship-successfully-reach-outer-space-by-march-31-2023 . Very tempted to make an account.
They don’t even have FAA approval to fly out of Boca Chica yet, and March 31st is less than 6 weeks away. On top of that the static fire wasn’t even particularly successful if it was a dress rehearsal for launch - one of the engines never lit and another shut itself down, and the test was only 15 seconds. 80% odds against by March 31 seems frankly optimistic for their chances.
I’d say it’s probably 70% they get off one full stack launch this year and 50/50 it reaches orbit if it does launch. Very low probability they launch multiple times this year.
This is a huge, expensive test, even for SpaceX. If they launch and it doesn’t go perfectly, they aren’t going to throw away that many Raptors just to YOLO it until they understand very well what went wrong and feel like they have a high probability of success, and that takes time.
It's not just the raptors that are the issue, right? I mean, if the launch goes spectacularly wrong, couldn't the entirety of Boca Chica literally go down in flames? After all, it's some 5000 tons of fuel, not that far from some more massive fuel tanks...
Static fire was successful in the sense that if it had been a launch and nothing else went wrong, it would have gotten to orbit on that number of engines.
As you scale up engine count, eventually you will just need to handle failures gracefully.
Twitter source, but: https://twitter.com/wapodavenport/status/1626790650921291777 "From what I hear, everything is on track for a March launch attempt as far as the FAA is concerned."
The first four launches of the full "Starship" upper stage failed catastrophically. So the odds of the first launch of the "Super Heavy" booster failing catastrophically are not small. The Starship program has been a lot more aggressive than that of the Falcon 9; more like that of the Falcon 1 (which failed catastrophically the first three times).
And as noted below, some catastrophic failure modes of Starship would leave SpaceX without a Starship launch site for a year or more.
Are you also shorting Tesla? They've got to send it someday, and apparently *their* perception is that they've retired enough risk. We'll find out soon if the FAA agrees.
Gah missed the edit window - Fanboy reporting for duty. IANARS and I know you are, I'll sign the waiver. :-)
Not "full Starship upper stage", aero demonstrators with janky v1 Raptors, which retired a *lot* of risk--belly flop works! SN8 (the first attempt!) almost made it, and probably would have with Raptor 2 (and no slosh, and no helium, and... see "risks retired").
So they should... spend a year building out another Stage 0, *then* blow up this one, and be right where they would have been (except they lost a year to fix the hypothetical issue)? They've already got 200 Raptors in the barn, and likely a few hundred Starlink v2's sitting on the books. They're *too* hardware rich at this point.
Yeah, I think Scott should have counted #16 as Yes.
I think you were wrong on every single thing.
A prediction is X will happen. Not there is an 80% chance that it will happen.
A prediction is Y will not happen. Not there is a 30% chance that it will not happen.
Well that’s totally wrong.
I will disagree.
This reminds me of a joke about the lottery. The odds are 50/50 because you either win or you lose.
Probabilistic statements are obviously acceptable in predicting the future. If someone draws a card from a deck and puts it face down on the table, then asks me whether the card is the ace of hearts, a diamond, or black, I would say the odds are 1/52, 1/4, and 1/2.
If I were to take a bet on any of these I would expect the odds to reflect those probabilities. Scott bets on the future as well.
What does it mean to say the odds are 1/52? If a card is drawn 52 times, I would expect the ace of hearts to appear once. Draw it 520 times and, on average, the ace will appear approximately 10 times.
And that’s what probabilistic guesses of the future are. A 60% chance means that the predictor expects that in 60% of the potential futures the prediction will happen.
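A minimal simulation sketch of that frequency reading (a standard 52-card deck; the numbers are purely for illustration):

```python
import random

# Frequency reading of "the odds are 1/52": over many draws from a
# 52-card deck, the ace of hearts shows up about one time in 52.
deck = [(rank, suit) for rank in range(1, 14)
        for suit in ("hearts", "diamonds", "clubs", "spades")]
draws = 520_000
hits = sum(random.choice(deck) == (1, "hearts") for _ in range(draws))
print(hits / draws)  # ~0.0192, i.e. roughly 1/52
```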
I'm confused about what you're saying. Is this a semantic argument about the word "prediction"? A philosophical argument about the nature of knowledge not being probabilistic? A gripe that Scott is failing to follow some standard format that unspecified other parties are using?
And I've just got no idea at all about how to interpret your first sentence.
X happens.
Alice predicted X 80%.
Bob predicted X 51%.
Charlie predicted X 50.1%
Daniele predicted X 50.0001%
Who predicted accurately?
Alice predicted much more accurately than Bob who predicted slightly more accurately than Charlie who predicted slightly more accurately than Daniele.
No. Alice was just more confident in her prediction. She did not "predict" better.
If your so-called "prediction" is that X will happen (80% of the time? with 80% confidence?) and your error estimation is plus or minus 10%, then you are not actually predicting X will happen, because your confidence interval is less than 100%.
I flip a coin 10 times.
You predict that it will be 5 heads. (Add your probability figure - what will it be?)
I predict it will NOT be 5 heads, without a probability figure.
Who has really made the better prediction?
It is likely to come up exactly 5 heads about 24.6% of the time, if and only if we have a "fair coin" and throw it an infinite number of times.
It comes up 4 heads: who has made the better prediction?
It comes up 5 heads, but your "confidence probability" was not 24.6%: who has made the better prediction?
When an expert gives an opinion in court, for it to be admissible she must state that her opinion/(prediction?) is to a reasonable degree of medical or expert certainty or probability. What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck.
Do we make predictions about taking risks? Sure. When we drive, we might say that the trip will be safe because we know that there is about one micromort for every 250 miles driven (one micromort = a 1 in a million chance of death). If driving 250 miles instead carried a 20% risk (i.e., we were only 80% confident of safety), no one in their right mind would drive (and the roads would be so full of accidents and ambulances that driving would probably be nearly impossible due to traffic). So when SA predicts X will happen at 80%, is he really making a strong prediction? Is he even really being that confident in his so-called prediction, in comparison to the decision to drive 250 miles?
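As a check on the 24.6% figure above: it is just the binomial term C(10,5)/2^10 for a fair coin, computable exactly, with no infinite sequence of throws required:

```python
from math import comb

# P(exactly 5 heads in 10 flips of a fair coin) = C(10, 5) / 2^10
p = comb(10, 5) / 2 ** 10
print(f"{p:.4f}")  # 0.2461, i.e. about 24.6%
```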
Just semantics.
Not semantics.
It's pretty close to gibberish, honestly.
Also this statement -- "What that has come to mean is that the opinion is "more likely than not", i.e. 50% plus a speck." -- is flat wrong at least in U.S. courts. (Actual courts I mean, not the Hollywood versions onscreen.)
A, B, C, D, and E don't happen. F - Z do.
Alice predicted "80% of A-Z will happen."
Bob predicted "50% of A-Z will happen."
Charlie predicted "100% of A-Z will happen."
Alice predicted more accurately than Bob or Charlie.
That didn't clear up any of my questions. Could you please explicitly affirm or deny that you are saying each of the three possibilities that I listed?
As for your question, obviously higher confidence is good on a prediction that comes true and bad on a prediction that didn't come true.
There are formal mathematical scoring rules you can use to grade predictions that are designed to reward accurate probabilities; i.e. where if an event actually happens 75% of the time, then a prediction of 75% will get the highest expected score (or the highest average score after a large number of tests), beating out any competitors who gave other probabilities, either higher or lower. This lets you give a quantitative answer to how good each prediction was.
Because if a coin actually lands heads 75% of the time, you'd obviously rather have the actual number 75% than just know "heads is more likely than tails". The number gives you more information and lets you make better plans.
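For the curious, here's a minimal sketch of one such rule, the Brier score, using that 75% coin (the simulation setup is made up purely for illustration):

```python
import random

def brier(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

# A coin that actually lands heads 75% of the time, flipped many times:
random.seed(0)
flips = [1 if random.random() < 0.75 else 0 for _ in range(100_000)]

for claimed in (0.5, 0.75, 1.0):
    print(f"always saying {claimed:.2f}: Brier = {brier([claimed] * len(flips), flips):.4f}")
# The honest 0.75 gets the lowest (best) score; both hedging down to 0.50
# and rounding up to 1.00 do worse in the long run.
```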
Call it a "probability estimate" instead of a "prediction", if you want. It's a prediction plus an estimate of confidence, you can always translate it to just a prediction by just looking at whether something is thought to be more likely than not, as Scott in fact does in this post.
Well-calibrated estimates are more valuable than the same estimates stated as just yes or no, since an optimal betting strategy requires estimates of expected returns. Otherwise, you bet just as much on a 55% as a 99%, and get blown out 45% of the time on the former.
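To make the bet-sizing point concrete, here's a minimal sketch using the Kelly criterion for an even-odds bet (an idealized setting, assumed purely for illustration), where the stated probability directly sets the stake:

```python
# Kelly stake for an even-odds bet: f* = 2p - 1, where p is your win
# probability. A bare "it will happen" gives no way to size the bet;
# a calibrated probability estimate does.
def kelly_even_odds(p: float) -> float:
    return max(0.0, 2 * p - 1)

for p in (0.501, 0.55, 0.75, 0.99):
    print(f"p = {p:.3f} -> stake {kelly_even_odds(p):.1%} of bankroll")
# 0.501 -> 0.2%; 0.55 -> 10.0%; 0.75 -> 50.0%; 0.99 -> 98.0%
```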
Yeah a probability estimate is not the same thing as a prediction.
A point estimate alone is useless without knowing +/- e. And we must know the distribution of e. Is e Gaussian or Paretian? Is e unimodal or U shaped?
If SA made his so-called predictions as offered bets sized as a percent of annual income above a living wage, or as a percent of total wealth, then we might know something, because then we would know whether SA is financially ruined or not.
So if you predict it will not rain when you go outside, you never bring your umbrella? So you get wet 49% of the time?
Or are you just disagreeing on terms?
> only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them
Those standards keep changing.
> AGE OF MIRACLES AND WONDERS: We seem to be in the beginning of a slow takeoff. We should expect things to get very strange for however many years we have left before the singularity. So far the takeoff really is glacially slow (everyone talking about the blindingly fast pace of AI advances is anchored to different alternatives than I am) which just means more time to gawk at stuff. It’s going to be wild. That having been said, I don’t expect a singularity before 2028.
This prediction is so vague as to be horoscope-worthy. We are going to see something really strange and wonderful yet totally unspecified; or things will continue pretty much as usual until the Singularity; or perhaps something else will happen. Yep, that covers all the bases.
> Some big macroeconomic indicator (eg GDP, unemployment, inflation) shows a visible bump or dip as a direct effect of AI (“direct effect” excludes eg an AI-designed pandemic killing people)
Ok, so other than AI intentionally killing people (by contrast with e.g. exploding Teslas), how would we know whether any macroeconomic indicator is due to AI or not? This prediction is likewise pretty vague.
> Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%
Does it have to be Gary Marcus specifically? 30% is ridiculously low if we expand the search space to all of humanity. Or just to ACX readers, even.
> AI can make a movie to your specifications: 40% short cartoon clip that kind of resembles what you want, 2% equal in quality to existing big-budget movies.
Depending on the definition of "short" and "clip", AI can already do it today ( https://www.cartoonbrew.com/tech/stable-diffusion-is-launching-an-ai-text-to-animation-tool-in-partnership-with-krikey-225919.html ). Anything of decent quality remains out of reach.
> but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don't say them
This is worse than the current level of wokeness, so I'd argue that the current level is far from its peak.
> IDK, I don't expect a Taiwan invasion.
I do, by 2028, at about 55%. We will know more after the 2024 election.
> ECONOMICS: IDK, stocks went down a lot because of inflation, inflation seems solvable, it'll get solved, interest rates will go down, stocks will go up again?
I expect the growth of the actual productive output of the US to continue its decline. By 2028, I expect the US to be in a prolonged period of economic (and cultural) stagnation (if not decline), whether the pundits acknowledge it or not.
>I do, by 2028, at about 55%. We will know more after the 2024 election.
I mostly agree with you, but I still feel a bit of an urge to interrogate you about how seriously you're taking that.
Well, about 55% seriously at present :-)
:V
No, I mean in the sense of Tyler Cowen's responding to people saying "X will collapse" with "Are you short X?". I avoided moving back to Melbourne and bought 20L of bottled water based on a weaker prediction than that; I'm feeling an urge* to check whether you're actually acting as though you believe that number.
*Not entirely an urge I'm proud of; part of it's coming from gatekeep-y sorts of motivations. I'm giving in because, well, if you weren't taking it seriously and start taking it seriously because I probed, that's still a good outcome assuming that our predictions are anywhere remotely close to accurate.
I mean, I believe in that number with approximately 55% strength. I'm not sure what I should be doing differently given this information, though. I cannot prevent the invasion of Taiwan. I can do a few very minor things to influence the 2024 election, and I'm doing them.
>I'm not sure what I should be doing differently given this information, though.
I'm not sure either, since I don't know where you live and what preparations you already have for nuclear war.
I'm basically saying here that there are people who say that nuclear war is super likely but who also live in Manhattan without a clear exit plan or some worth-dying-for reason to be there, and those people are obviously not very serious about their "nuclear war is super likely" belief.
I think the jump from "invasion of Taiwan" to "nuclear war" that you are implicitly making in this comment is unfounded. I suspect the US and China will be perfectly capable of conducting a gentlemanly conventional war wherein they only use honorable bombs that level city blocks but don't fill the air with radioactive fallout.
The 2024 election is an American election. Who do you think is going to invade Taiwan? The US or China?
There's a Taiwanese election in 2024 as well.
Even talking about the American one, there's a case that the specifics of the campaign/result might cause the PRC to smell weakness/opportunity. This could happen *before* the actual election day, though.
The Chinese can wait. I think the idea that the Chinese need to act now is probably planted by US ideologues, the end game of which is perhaps the US provoking a conflict over Taiwan.