Did you actually read the paragraph? He's basically saying exactly that: overall they'll have massively overpredicted AI progress, but the progress that does get made will be used to "prove" Gary was wrong.
When self driving cars were really big in the news, it was driving in nice weather with a human there to correct issues. Many sources were saying the human was not really needed (only there because of regulations) and that the number of human interventions was going down. A bit contradictory there, but the trend was mostly right. Basically, we did/do have self-driving cars, if you want them to only drive on certain roads with certain conditions. A general self-driving car that can drive in the rain and snow, or other tricky situations (especially including other drivers) may never happen. Even human-level AI may not be able to do it, and it may not be an AI's fault. Those systems are highly dependent on sensory input, particularly cameras. Cameras can get dirty, break, or struggle to see in low-visibility environments.
Are cameras any more sensitive to those conditions than human eyes? Will they continue to be?
For one thing, a person can blink and clear local obstructions. For an externally mounted camera, road dust and other debris would be a constant issue and cleaning it off far less convenient.
Anecdote: I recently got the FSD beta on my Model 3, and it proactively sprayed washer fluid & wiped the windshield when its camera was obscured.
Sounds like it can blink!
I would guess that just as human eyes are inside the vehicle, and can blink, it would be possible to have cameras inside the vehicle, with a blink mechanism.
I was being overly facetious with the blink comment, but the underlying concern is still there. It takes a lot of engineering to put a camera system inside that can cover all the views a human can get just by adjusting their head and eyes. It takes just as much or more engineering to mount external systems that can do the same, and I doubt there's a system that will clean off six-plus inches of snow.
I have a backup camera that beeps when it thinks I'm too close to something. It's frequently wrong, usually beeping when there's nothing very close (false positive) but sometimes fails to beep when something is there (false negative). Either one could result in significant driving issues.
Human eyes are just *better* than cameras, by several orders of magnitude. There are a small handful of exceptions (for instance, https://www.smithsonianmag.com/smart-news/camera-bound-space-telescope-takes-3200-megapixel-photos-180975758/) but they're all incredibly bulky. The one in my example is a 13 ft. x 5 ft. cube, not exactly the sort of thing you can easily build into a vehicle.
Short answer: yes.
I am not interested in where Marcus' predictions beat some outgroup's. I am interested in where he makes more meaningful predictions that might reasonably be surprising, and I don't think I've seen one yet. Do you know of any?
The outgroup are those you're willing to derisively dismiss. Has Marcus made a prediction in contradiction to, say, Scott Alexander? What predictions has he made that we're waiting on more evidence to evaluate?
> I believe he has made anti predictions.
I am once again asking for specifics. I have seen him push back on Elon Musk's timelines, for one, but at this point that's just picking on the chronologically challenged.
If Marcus only engages with "certain groups" in the first place, his predictions aren't interesting. He isn't "doomed" with *anyone*, he's just generally irrelevant.
>be charismatic and charming enough to gather legions of fans
Up there with "draw the rest of the fucking owl".
Hehehe I love that one. Applies to so many work situations too.
Interesting commentary. I’ve always believed Nostradamus was full of shit lol
I haven't encountered (personally or virtually) anyone who didn't, or mentioned Nostradamus in any other context than "for lolz".
You’re very fortunate then. I’ve come across several people that seem to believe in him verbatim lol! Of course they also thought the earth would end in 2012 - the Mayan calendar thing. And the earth is only 7-8 thousand years old because that’s when god created it. Anddd that’s enough for tonight lol
Haha, those irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.
Lol obviously
https://en.wikipedia.org/wiki/Wronger_than_wrong
I notice you’re not arguing any of those things are actually true, though. Irrational people who believe all those things do, in fact, exist, and aren’t even very rare.
No, this was a reference to https://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/
I don't like this general practice of saying "Haha, look at those stupid people believing stupid things. We are so much smarter than them." Stigmatizing non-mainstream non-institutional belief is corrosive to all forms of knowledge discovery, and kills millions.
> Of course they also thought the earth would end in 2012 - the Mayan calendar thing. And the earth is only 7-8 thousand years old because that’s when god created it.
That sounds like something hard to reconcile. I mean if you believe the Bible literally, shouldn't there be angels and trumpets and beasts coming out of the sea and all that jazz having nothing to do with some pagan Mayans?
Oh shit if that happened I would be running for my camera!! I have a friend that reads “Angel cards.” I’m like no thank you. My Angel left long ago.
Dunno about "Angel Cards", but the Tarot, properly interpreted, always applies. Of course, what it's doing is saying "pay attention to this option", sort of like the I Ching. The trick is in the interpretation, and you've got to do that for yourself. Nobody else can do it for you. (It usually isn't worth the effort if you're rather introspective anyway.)
Tarot and I Ching are one thing but these “Angel cards” are little more than cute little sayings to make you feel good. There isn’t a bad card in the deck lol! The last time I had my (Tarot) cards read she told me a man would come that would be bad news and he would try to cheat me. Not a month later I met someone that tried to cheat me on some antiques I had for sale.
The thing about Nostradamus is that at least 95% of people who have been presented with one of his quatrains have never actually sat down and just started reading the quatrains for themselves. They've only ever heard the most cherry-picked and massaged versions of his predictions, from someone trying to create entertainment or controversy. If they're on the older side, I'll particularly point fingers at that extremely credulous Orson Welles TV special.
And for the most part these people say, "Wow, this Nostradamus guy sure did predict a bunch of things, pretty eerie," and then move on and don't think again about him until the next time someone brings him up.
Those people are a hoot lol.
I thought he was a decent subplot for Alias, but that's about it.
I think the post was taking that as read, but making an interesting point contrasting the psychology of reactions to his positive(*) predictions and reactions to Fukuyama's, Pinker's, or Scott's own negative(*) ones.
(*) Positive and negative in the sense of "things will happen" and "things won't happen", not good and bad
AS HE PREDICTED
An excellent introduction into how to perform a tarot reading.
Tarot cards serve as a medium for psychological projection to connect with one's deeper thoughts. It's a Rorschach test with a mild spiritual component.
Tarot cards can be used for prediction, like a crystal ball, and that is where I would say they lose their value.
" Fukuyama said some (no offense) kind of vapid stuff" -- I'm afraid that has to be filed under "Tell me you haven't read The End of History and The Last Man." It can be described in many ways, but "vapid" is not one of them.
+1
Yeah, it's kind of a shame, Fukuyama gets mentioned as if he were just a fancier version of, I dunno, Thomas Friedman.
I feel like 99% of the problem for Fukuyama is the title. If he'd titled it something dull like "Speculations on the reduction in ideologically fueled conflict" there would be far fewer complaints, but also far fewer people would have read it. Something of a monkey's paw situation.
I think it’s actually worked out pretty well for Fukuyama. His reputation these days is back to where it was in the late ‘80s, plus he gets the advantage of having had his name in media discussion for decades.
Possibly worth it, but he keeps getting nagged about the end of history and he's tired of it.
He would still be wrong, ideologically-fueled conflict is very alive and well. But maybe the sheer boringness of the title will make anybody who tries to look at it to make fun of him fall asleep and wake up with amnesia.
International ideological conflict seems to have peaked though, at least for the time being. Ideological opposition to liberal democracy might see occasional resurgences, if we think of Islamic fundamentalism as a resurgence, but so far each resurgence has been weaker than the last one. Even most countries that aren't liberal democracies still more or less pretend to be. China runs a dual act where it pretends to be both a multiparty democracy and a Marxist-Leninist state instead of what it really is, a basically non-ideological republican autocracy.
Prior to the dominance of liberal democracy, autocrats used to openly eschew it and condemn it, instead of half-ashamedly wearing it like a skinsuit as they do nowadays.
I dunno, I thought The End of History and the Last Man was kinda vapid. Especially the overall premise that we need a grand overarching theory of history: Marx and Hegel proposed them, so one of them must be right; it wasn't Marx, so it must be Hegel. His later books, but especially The Origins of Political Order, were so much better.
Well, we really *do* need a "grand overarching theory of history". We just don't have one that works, or any clear guide as to how to develop one. Psychohistory is a plausible candidate, but nobody knows how to build it. Even a really good sociology would help, but we don't seem to have that, either. (Perhaps the folks who developed it needed to keep it secret, ala Hari Seldon, but that seems quite unlikely.)
For me, Toynbee, modified by Carroll Quigley. Under-rated.
Why do we need an overarching theory of history? When I studied history, it seemed what mattered most was gaining little insights that add up to wisdom over time. No theory can provide a full-resolution picture of reality, just as no map can provide a perfect picture of reality. It is for this reason that we do not have "one grand map," but many maps for different purposes.
The reason that you didn't get an understanding of history when you studied it is that we don't have a good framework to fit the data into. We've got pieces here and there that sort of usually work.
E.g., "Do democracies and republics always turn into dictatorships?" Justify either a yes or a no answer. Or, from Frank Herbert, "Do civil services always turn into aristocracies?" We've got several examples where they did, but there were always complicating factors. (Actually, I think there were some that didn't survive an invading army, so in a strict sense the answer should be no, but that's a rather unpleasant way to avoid the problem, as the invading army turned into the aristocracy.)
We need a good theory of history to predict the results of actions that we are taking in present time. (Of course, we might not pay attention to it anyway.) E.g., I thought the US civil rights movement was too abrupt, and needed to be more gradual. But on consideration I realized that there's no way such a movement could be continued over a sufficient period of time to allow a gradual transition. But perhaps my basic idea was just wrong. There's no way to know.
Theories, sure. But one overarching theory? You would need to justify that specifically.
Note: I did not say I failed to get an understanding of history.
For this to work you need a theory that allows you to handle the interactions of the various segments, i.e., an overarching theory. It doesn't need to be complete, but it needs to allow the parts that it doesn't handle to be considered "noise" and still produce the correct answers.
As for your note, perhaps we have different definitions of "understanding". To me the best understanding of history that I've seen resembles the understanding of an accurate astrologer. Any result can be explained afterwards, but not predicted.
I think if a superintelligent alien species (or AI) were to simply hand us such a guide, we would just immediately reject it, because it would undoubtedly start with an unflattering assessment of our nature as an animal species, and put us within the same framework of known instincts, limitations, and tendencies as we use when we describe the behavior of crocodiles or donkeys. (For example, I imagine a great deal of what we do can easily be explained by wired-in instincts for sexual competition, familial and tribal preference, fear of mortality.) We would absolutely hate that. We long to believe we behave the way we do from conscious choice arrived at after long and brilliant thought. To believe we are 85% biological robots enslaved by our base drives like so many dogs baying at the moon or a bitch in heat would be unbearable.
I recently listened to Origins of Political Order, Political Decay, and End of History. They work well together: End of History lays out this "soft determinist" framework of historical evolution through well-observed trends in economic growth and technological development. The Order series then comes in and highlights all the historical and social contingencies that put the "soft" in soft determinism.
And granted it's very much on my brain now, but it's hard to look at our identity-obsessed culture war and not see "the struggle for recognition" as an incredibly useful lens to observe politics with.
I admit I haven't read it, but the thesis statement is pretty vapid. The government we have now is as good as it is going to get? Forever? Just seems obviously silly (and I was thinking this in the mid '90s too, when I first encountered him). Certainly young me thought a technology-aided radical democracy or some sort of smaller-scale stakeholder-run polyarchy seemed like avenues for possibility, not to mention all the ones I couldn't imagine.
That really, REALLY isn't the thesis statement, though.
I actually summarized the book here if you're interested - https://bookreview.substack.com/p/the-end-of-history-and-the-last-man
It isn't? That is pretty much how literally everyone describes it: "Liberal democracy won, and it will not be supplanted because there are no superior competitors." You are saying this isn't what the book is about?
No.
Why does everyone say that is what the book is about, if that isn't true?
Because almost nobody actually read the book, they just repeat what they've heard about it.
> Why does everyone say that is what the book is about, if that isn't true?
The same reason you are... people like to repeat what others say if it makes them feel some type of way, even if they haven't come to their own understanding.
I haven't read the book, but if it is so, I wouldn't be particularly surprised, considering the title, and how I've seen Malthus and the 1972 Limits to Growth being strawmanned...
Looking back at reviews from major publications from the early '90s, that certainly is what they all think the book is about. I am confused that such an influential book is supposedly about some totally different thing than what society claims it is about.
I suggest you just read it for yourself and make up your own mind.
Why is it obviously silly? Do you expect horses to invent calculus, eventually? Or honeybees to some day come up with much more efficient organizations of hive labor? All species have limitations, because no species is infinitely capable or infinitely smart. We have ours, too. It could be the government models we have now are the best *we* are capable of inventing, or comprehending, or working with. Perhaps only a much smarter species could do better.
Basically I think our technology is moving too quickly, and our circumstances changing too radically, for us not to find some alternate options to a model that is already 200 years old and quite bad.
But why would technology change our models of government? You'd think the latter derive from stuff like our inherent psychology and social psychology ("trust people who look like you more, or at least handsome/pretty people more") or our inherent mental limitations (Dunbar Number). How would having much more powerful smartphones change anything? It feels like we'd just do what we already do, only perhaps faster and harder.
Indeed, isn't that a lesson one could draw from social media? Rather than free us to develop more nuanced and complex models of social interaction, as Berners-Lee might have mused in 1989, it has enabled us to be gossipy cliquey 1958 high-school seniors on a much grander scale.
Because technology changes what is possible. Radical democracy (which I don't necessarily think is good) is absolutely feasible today in a way it never would have been 100-200 years ago.
You could have the whole populace making decisions themselves, without representatives. Technology can also lead to problems or issues so big that democratic/representative government cannot be trusted with them; you might need a leviathan to make society work. All sorts of possibilities.
Ethics and politics are VERY situational, and our situation is changing.
I mean, even social media provides a good example: yes, it has turned general politics into a gossipy mess, which might mean social media necessitates a move away from representative politics in an era when the mob is so "self-frothing".
How is it possible? Are you suggesting voting machines? Have you forgotten how every. single. expert. says they cannot be relied on enough? (I guess they could still be used for non-important decisions... but why do we need direct democracy for those?)
Personally I thought the Origins of Political Order was great. I only skipped the End of History because it seemed less interesting from 2nd-hand accounts of what it's about. I might have to revisit.
Political Order and Political Decay is also good and continues on from origins
Second this. An excellent survey.
agree
Well the biggest issue is that a lot of people predict a lot of things and so someone has probably predicted any given event. Probably lots of people predicted a peaceful fall of the Soviet Union. Many of them also predicted many other things but generally you only hear about successful predictions.
That book that predicted the peaceful fall of the Soviet Union was just some guy getting lucky. Predicting the future is very much 1 million monkeys on typewriters.
Politics is a great example. Every cycle we get a dozen new guys who predicted the shocking thing and a dozen guys from the previous cycle who predicted the shocking thing but were wrong this time.
Maybe 10% of the population, if not 5%, understands statistics and also applies it effectively in their actual life outside of academia. So predicting stuff is always gonna be a shitshow, and the media, both sincerely and cynically for profit, is going to misunderstand predictions, take the one-lucky-guess guys too seriously, and take the guys with high calibration on important predictions not seriously at all.
Neither the average reporter nor the average cashier is gonna check the calibration record of someone who predicts a high-intensity issue wrong or right.
>>>Probably lots of people predicted a peaceful fall of the Soviet Union.
No. Nobody did. No one.
That's the thing - no one, including a bunch of experts in the field, thought the USSR would fold peacefully, if at all.
Reagan was convinced it was doomed (“will be consigned to the ash heap of history”) - though he didn’t specify the means - but was dismissed for his naïveté by his betters.
Thing is: IF you find one, people will argue about the NON-peacefulness of that fold (+ that it was not a "fold"). Not me, but if one paid me ... "January 1991 - Troops crush pro-independence demonstrations in the Baltics, killing 14 people in Lithuania and five in Latvia." 'Peaceful?! Gotcha! BTFO!' Add Nagorno-Karabakh, Chechnya, Putin's wars (really 'direct after-shocks' of this 'active conspiracy or whatever', just as WWII was 'clearly caused' by WWI).
"For hope no room, Be all gloom; all the world loves a doom. Give 'Em Threats, Play G. Thun., And They'll Pay To Say You're A-1."
When you say “no one” do you mean that literally not one person made such a prediction, or that out of the millions of people who made predictions about the Soviet Union, only a few tens of thousands predicted this?
Or that the rate at which people made such predictions increased nearer to the time when the Soviet Union collapsed?
Okay well I wasn't specifying experts, though I also don't believe no experts predicted it. That's just very unlikely.
Of course many people who had a salary depending on the USSR not falling would have argued heavily that it wouldn't. Military industrial complex and so forth.
https://www.amazon.com/Will-Soviet-Union-Survive-until/dp/B0006CPGLA/
(I read this book when it was first published, in 1970.)
Well I mean, this guy did. Presumably at least one other guy did. Probably more than one, in fact!
The point isn’t that this was a particularly popular opinion, it’s that there were 5 billion people alive around that time; of course at least a few thousand would have guessed that.
See the willful misinterpretation of Nate Silver's 2016 prediction.
He put a 30-something % chance on a Trump victory right before the election, based on the idea that "if the polls are undercounting Republicans, it means these states are actually R states and Trump would win, but we have no idea, right now, how much, if any, the polls are undercounting Republicans."
That later got spun into "Nate Silver was totally wrong about the 2016 election!"
Particularly since others were saying that it was 99/1 and mocking Silver for his prediction at the time
I liked Andre Cooper's writeup here: https://goodreason.substack.com/p/nate-silvers-finest-hour-part-1-of
There's definitely a level of willful stupidity and innumeracy that continues to this day, but there's also an element of how most of the election predictors were so off that 2016 results killed them outright, and 538 was left as the only target for folks in the "anger" stage of grief to vent at despite being one of the least deserving. Simple age is a worthwhile first metric for a prediction shop even before you look at their record, because the failures get selected away in anything that actually needs a budget and expertise (but not before poisoning the discourse).
This reminds me of an old scam. You send letters to 1024 people (or a much larger number), advising half that "this stock will go up big time" and the other half that it will go down. You do that about 4-5 times. Add in traditional fraudster techniques, such as explaining a secret/new approach to investing, or how it is risky so you don't want them to invest, you could be wrong after all. In fact, you can even keep some of the cohorts where you were wrong, to help maintain credibility. Then you make an offer to invest. Some fraction of those who have seen repeated successes demonstrated (and starting with 2^n recipients guarantees a surviving cohort after n correct "predictions") will be crawling over themselves to give you money.
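To make the cohort arithmetic concrete, here's a minimal Python sketch of the mechanics. The 1024-recipient, 5-round numbers come from the comment above; the function name and the rest are purely illustrative.

```python
# Minimal sketch of the "guaranteed correct predictions" letter scam.
# Whatever the stock actually does is irrelevant to the survivor count:
# the scam only ever follows up with the half that saw a correct call.

def guaranteed_survivors(recipients: int = 1024, rounds: int = 5) -> int:
    """Return how many recipients have seen every call come true, by construction."""
    cohort = recipients
    for _ in range(rounds):
        # Tell half "the stock will go up", the other half "it will go down".
        # One half has now seen another correct prediction; keep only them.
        cohort //= 2
    return cohort

print(guaranteed_survivors())  # 1024 // 2**5 == 32 primed marks
```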
Yeah, like 11 years ago in one of my social science classes we actually had a lecture on that for sports betting. I can't remember the details after all this time, but it certainly maps amazingly well onto, say, political polling in the last few elections.
No love at all for Tetlock's Superforecasting?
https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136696/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1664333430&sr=8-1
I feel like tetlock has been already discussed to death on this blog, going back like, 10 years
He is definitely not feeling a lack of love from Scott
Got it-thanks!
But still...seems particularly relevant for this topic, no?
Yes, but I think his point is that this time he’s not talking about actually making accurate predictions - he’s talking about making predictions that people say good things about. He specifically doesn’t want to bring in the Tetlock stuff here.
Yes. But it's fucking difficult enough to make good predictions, without also trying to make predictions that won't make people want to rip your face off. Social media's fostering of face-ripping is probably making it even harder for all of us to think well.
Why not just quit social media? It isn't very hard to give up Twitter/FB/etc.
I only do Twitter now, & on Twitter only follow doctors & scientists plus a few civilians whom I've found to be smart and civilized. I get way more detailed and up-to-date info about topics of interest on Twitter than I could get anywhere else. I'm not willing to give that up. Problem is, I'm so interested in these topics I can't refrain from commenting or asking questions, & my tweets sometimes get savaged. Also, I see attempts to discredit and dismantle good ideas in the replies that follow them, and sometimes argue back. It is just not possible for me to be deeply interested in these subjects yet have no interest in the responses to them, mine and others. Doubt that I'm unusual in that respect.
Scott specifically said, "This post is... not about how to make good predictions. It’s about how to make predictions that don’t make you miserable and cost you lots of credibility."
He wrote a book review for Superforecasting several years ago: https://slatestarcodex.com/2016/02/04/book-review-superforecasting/. But this post has a different purpose.
Thanks! I missed that-much appreciated.
I think making bets with knowledgeable and honest people is a good way to make predictions.
This way you can point to you winning the bet as evidence enough that you were right (or wrong).
The betting mechanism tends to nail down definitions and appoint arbiters etc. So that's not purely down to public opinion. (Of course, no one can force other people to defer to the judgement of your arbiter. But it's a strong Schelling point.)
The outside discussion of Scott's AI image generation bet is illuminating: the properties discussed above seem to dominate the discourse.
Yes. But it's not something Scott has to defend.
(I believe his betting partner was too confident and agreed to terms that were too soft.)
Absolutely. AI met Scott’s terms of the bet, but I’m not convinced it is actually doing any better than the original round of image generation that prompted the forecast.
AFAICT, most people think the current "AI artists" are a huge improvement over the prior generation.
And I am personally convinced that the "give me a bit of text and I'll give you a few pictures" form of control is not sufficient for a decent product. It should first iterate around rough sketches for a few generations and then only fill in the final choice. The current controls wouldn't give you what you want with a human artist, unless you were just collecting works by that artist.
On the contrary, Scott was overconfident in choosing terms that would be impossible to fulfill due to content filters.
There are three hard parts to predicting things:
1) hypothesis generation
2) calibration
3) phrasing things so that resolution is unambiguous
The problem with hypothesis generation being hard is that it can legitimately be a great prediction even if it’s only 10% likely to happen—if I predict 10 really surprising things 100 years out, and 1 of them comes true, that’s AMAZING, not a failure.
The problem with calibration being hard is that without stating actual probabilities and averaging over a long period of time, people will interpret all of your predictions as 100% confident. And even if you do, you’ll get bad press every time you make a 70% prediction and it’s wrong (cf. FiveThirtyEight in 2016).
The problem with ambiguous resolution is that companies write 100 page contracts and then spend 100 million in court trying to solve this problem and it’s still a shitshow.
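On the calibration point (2), here's a minimal sketch of what judging calibration actually requires: bucket forecasts by stated probability and compare each bucket to its realized frequency, rather than scoring any single call. The forecasts below are made up purely for illustration.

```python
# Minimal calibration check over hypothetical forecasts: (stated probability, outcome).
from collections import defaultdict

forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

for p in sorted(buckets):
    hits = buckets[p]
    realized = sum(hits) / len(hits)
    print(f"said {p:.0%} -> happened {realized:.0%} over {len(hits)} forecasts")

# A well-calibrated 70% forecaster *should* miss about 3 times in 10;
# a single wrong 70% call says nothing by itself, only the long-run average does.
```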
I mean, predicting 10 10% things 100 years out and getting one right is "amazing" for data nerds or w/e but is it valuable practically?
Sorta like people are super impressed by a trivia nerd but that knowledge is still "trivial".
Depends on whether there's a way to get economic value from your predictions. In an extreme case, if you could predict the winning lottery numbers 10% of the time, that would be very valuable.
Sure. But generally these types of predictions are not financially useful. Also lottery numbers are much harder to predict.
You could take the pulp science media portrayals of advanced technology and just predict each of them coming true on a really long timetable. Fusion, AI, flying cars, whatever. You're pretty much guaranteed to be okay doing this, because you are either right (could be entirely by chance, it doesn't matter) or everyone forgets you made the prediction by the time it doesn't pan out.
I've complained about the "30 years from now" predictions before. That's sufficiently long from now that the people making the claims will be retired or close to it by the time their reputation is at stake, and they also have plenty of time to adjust their timeline as more information is known - commonly by saying "30 years from now" again, even if we're 10 years later.
5 and 10 year predictions at least require that most predictors will still be relevant and remembered, and we can at least see if some progress has been made.
If you care primarily about reputations, I see that. But if you care about truth, well some truths are just hard to adjudicate, but they may still be worth predicting.
In that case, I guess my position would be to properly discount anything too far out. 30 years away, to me, just means that people are still studying it and expect that it's theoretically possible.
Put it this way: if I predicted 100 lottery numbers and one of them won the jackpot, that would be very useful.
Writing something unambiguous is hard, but not that hard if you have a few iterations (which is part of the problem with both contracts and laws - there's no reward for finding an ambiguity in a contract/law the way there is for finding a vulnerability in software, so the iterations take decades rather than weeks).
But also, most contract and law writing isn't even trying to resolve ambiguities; it's trying to write something that both parties (in a contract) or a big enough coalition (in legislation) can agree to. Instead of trying to avoid ambiguities, the process positively incentivises them.
The result is: very powerful judges, as they are the ones that end up interpreting this stuff. Sometimes the intent is obvious and they will abide by it; other times the intent is obvious and they will choose to go with the literal text; other times it really isn't obvious, but they still have to pick one option.
For a really classic example, consider the Second Amendment to the United States Constitution:
"A well regulated Militia, being necessary to the security of a free State, the right of the People to keep and bear Arms, shall not be infringed."
So:
What does "well regulated" mean in this context?
How are the two clauses connected - is the first clause explanatory (the reason why the right exists, but without effect), limiting (ie the People only have the right to keep and bear arms to the extent necessary for the militia), or conditional (ie the right exists only if the militia is necessary, and the necessity of the militia is a justiciable question)?
Is the right collective to the People or individual to each person? If collective, how big of a collective, can people set up their own collectives? If individual, are there people excepted, e.g. felons? If so, on what basis can we determine who those exceptions are?
What are "Arms" in this context? Is this all weapons, or is it only a subset of weapons?
You could run an iterative process until you got rid of ambiguities - ie keep going until the people asking questions are clearly just being awkward. But that would mean answering these questions - and that might deprive you of the support necessary to get the law passed or the contract signed.
A common contract example is that you might contract someone to "make reasonable efforts to" build something. You and they might have very different ideas of how much effort is reasonable, and if you actually talked it out, the negotiations would break down. If there is a problem, you'll go to court and the judge will decide what is and isn't reasonable.
As someone who works in federal regulation, I can second that the ambiguities are absolutely a "feature not a bug" in a lot of cases, and that a lot of what I get paid for is helping people navigate through those dangerous reefs.
OK, not the topic, but... WTF is a "homo sapiens supremacist"??
"homo sapiens" means humans. A supremacist is someone who thinks that a group is much better than the others. In this context, that tweeter is saying that Stephen pinker, who wrote "the better angels of our nature" which is a book about how humanity is solving problems like war, is letting his pro-human bias make him blind to the fact that that his claims are wrong.
Humans may be pretty mediocre at solving these problems, but is there evidence that some other entities are better? Supremacy doesn't require perfection, just being better than all the competition.