929 Comments

This article by Freddie deBoer is a nice and eloquent summing-up of my position on Israel, and of how seriously I take accusations of anti-Semitism founded entirely on my supposedly unfair criticism of the Zionist state: https://freddiedeboer.substack.com/p/i-assure-you-i-am-permitted-to-oppose

> [Title] I Assure You, I Am Permitted to Oppose the Existence of Any and All Nation-States

> [subtitle] even one that's very very important to you

> I am and must be an anti-Zionist for reasons that precede any particular opinion about Israel or the Palestinians. I am opposed to religious characters for states, whether actively theocratic or not; I am opposed to ethnonationalism specifically; I am opposed to nationalism generally. None of these beliefs stem from a rejection of Jews or the Jewish religion or Israel

> These ever-expanding definitions of anti-Semitism, now codified by government (and, I assure you, Republicans and their liberal Zionist enablers will work tirelessly to make criticism of Israel actively illegal) would prohibit all manner of basic philosophical and political positions that should be protected speech under any definition. The religious opposition to the modern state of Israel found in some Hasidic sects, orthodox Marxism, all manner of libertarian and anarchist conceptions of a righteous future, every impulse that opposes the modern fiction of the nation-state - all ground up, rendered impermissible, under the insistence that to oppose the governmental body that is the modern state of Israel is in and of itself a form of interpersonal bigotry. It’s a casual, incidental destruction of the entire philosophical world of internationalism.

> I’m not going to give you a discount argument against the nation-state in this space; you can, and should, read entire books about the subject.

> But historical arguments are not a requirement of anti-nationalist sentiment. All that’s required is to recognize that nations are literal fictions, invented by human beings with no transcendent or permanent reality, and that in a few hundred years nationalism has been responsible for more bloodshed and misery than any other human belief.

> For the record, many Marxists and other forms of internationalists often take pains to distinguish the nation from the nation-state, national identity from nationalism [... :] A nation is a people, while a state is a governmental body

> the question of Israel’s basic nature - again, leaving all concerns for the Palestinians aside - is complicated by its status as an ethnostate.

> “Jewish” famously denotes both a religion and an ethnic group; a Jewish state must therefore have an ethnic and Jewish character. And this has obvious and ugly consequences for Israel’s essential being.

> So many of the basic ugly realities of what Israel is, beneath the surface of “the only democracy in the Middle East,” stem from the fact that an ethnostate cannot help but discriminate, cannot help but create second-class citizens. It’s common for defenders of Israel to point out that there is a sizable minority of Arab Israeli citizens within the country, but they’re much less likely to acknowledge that those citizens face systemic discrimination, which has intensified since the start of the latest conflict. But what did you expect? That an ethnonationalist project wouldn’t result in people pursuing ethnic supremacy?

> Which brings us to the notion of a double standard. I’m not sure why people think this is all such a gotcha - yes, I do oppose all ethnonationalism! I do not recognize any state’s “right to exist,” given that rights accrue to human beings and not to violent abstractions like states.

> So why all the focus on Israel? Because Israel is different

> [Because] Zionists constantly step from one foot to another when it comes to the basic question of whether Israel is exceptional or not, special or not. When justifying 75 years of dispossession for the Palestinian people, they say of course Israel is exceptional, of course Israel is special. The Jews were promised the land by God, they have been expelled from country after country, they endured the Holocaust, they are a wholly unique case for which we must permit every exception. This exceptional status holds precisely as long as it takes us to get to the supposedly unfair fixation on Israel’s crimes, at which point we are to understand that Israel is a wholly unexceptional country and that there is no legitimate reason that an American would focus particularly on its sins. You can’t have it both ways! If you insist that Israel’s very existence is in some sense special, you cannot then rage out whenever people focus on Israel to a special degree. Every year, each and every American has more than 4 billion ironclad reasons to pay special attention to Israel.

> I could also point out that if the status of being “the only democracy in the Middle East” means anything at all, it must entail special attention. If you want to be shielded for supposedly embodying those ideals, you must be ready to be harshly criticized on the grounds that you aren’t embodying them.

> Let me add the part which will surely inspire yet-more lazy accusations of anti-Semitism: among the most tiresome and insulting elements of this whole debate lies this insistence that Israel and Zionism must be the exception to every rule.

> I am an internationalist; I reject ethnonationalism; I think religion should have no part in government; therefore I must be an anti-Zionist.


One of the only two comments on the post sums up a small part of what I feel https://open.substack.com/pub/freddiedeboer/p/i-assure-you-i-am-permitted-to-oppose?&comments=true&commentId=44905611

And since I can't reply there, I'll write some of my thoughts on the subject here. First of all - I agree with almost everything Freddie deBoer wrote, but it's far from a summing-up of the subject. That criticism of Zionism shouldn't be considered antisemitism ought to be trivial. One of the other facets to consider is what it means to "oppose" a state.

In theory I oppose the existence of states in general and nation-states in particular; in practice, my level and mode of opposition depend on the particulars of the situation.

For example, I oppose a Trump-led U.S., or generally the existence of such a racist country, where the disenfranchised include the 0.7% of the population that is imprisoned, another measly 0.2% who are residents of D.C., and another 3 million citizens of Puerto Rico (I might be exposing some ignorance here). Where, AFAIK, gerrymandering, mostly along ethnic lines, is still quite bad (a quick Google search seems to support this but has outdated data). And with a baffling two-house system with a clearly undemocratic Senate.

But that won't be cause for me to support organizations like Al-Qaeda or the Nation of Islam, nor will it make me in any way oppose the existence of the U.S.A. It would not stop me from supporting the U.S.A. against forces that seek to harm it. What it does is make me criticize the U.S. and support positive change from within, in the hopes of short-term improvements and an eventual peaceful transition to a post-state world society.

Zionism is inherently racist. There is some defence for it (it claims the idea of liberal democracy is unsound and insufficient to protect minorities); I used to be staunchly anti-Zionist, and I will wait until the overwhelming pain around the recent events subsides before reassessing my position. But Israel is not much more inherently Zionist than the U.S. is inherently pro-slavery or prone to violently subjugating Native Americans. It *is* a democracy, where Zionism could be excised from the inside without violent revolution.

I have British citizenship; I have the option to say the hell with Israel and its murderous and racist behaviours, to pack my things and go. I live with the knowledge that if I and every reasonable person with that capacity were to do so, we would be dooming our family, friends, and enemies to unimaginable slaughter. So in the day-to-day I oppose the existence of Israel as a Zionist, subjugating state, but I support the existence of Israel as a democratic state.

(On the meta-level, I am making the case that "opposing" something is meaningless without stating what you support in its place.)


I don't understand that "one of the only 2 comments" bit; is this mis-phrased? It seems to imply that the post has only 2 comments, but it has 500+. I think you meant one of the only 2 that you agree with (and you're probably only counting top-level comments as "comments").

Unfortunately, the equivalence between Israel and Jews has been dug rather deep, so for huge swaths of the Pro-Israel camp, Anti-Zionism **IS** Anti-Semitism; the negation of that is inconceivable, ridiculous on its face, and perhaps a sign of bad-faith, malicious deceit to them. I still don't understand Hebrew remotely enough to do anything interesting with Hebrew sources, and I have never visited Israel, but it appears from where I am that Israelis don't even consider "Zionism" to be a distinct category from "Jewish Israeli"; witness this video https://www.youtube.com/watch?v=Z1_qqphDurQ where most of the people interviewed either stare in confusion or just outright deny the existence of Anti-Zionist Jews. You could even say that, and I'm aware of how deeply ironic that would be, Israel is using its Jews as "Human Shields": Ah, you want to destroy Israel, but - you see - you first have to know what to do with 7 million Jews, ehhhh, are you a Nazi, Mr. Want-To-Destroy-Israel, would you want to kill 7 million Jews?

On top of those ontological issues, there are several epistemological issues making things worse. It doesn't help that some critics of Israel are - indeed - just plain anti-Semites who love the new cover. How many? I don't know, but they exist. Nobody can genuinely be dumb enough to think that Jewish students or restaurant owners in America or France are responsible for what Israel does, or that harassing them will have any weight on the matter; it has to be malice. It also doesn't help that Israel employs dedicated propaganda units for the express purpose of spreading this very perception of Anti-Zionists, so that even if every single Anti-Zionist were a card-carrying philo-Semite fluent in Hebrew, they would still have this perception around them just from the effect of Israeli propaganda.

The conundrum you present features in every discussion about a corrupt and rotten institution or organization: the conundrum of whether to reform or erase, whether to fix or start anew, whether to go slow and cut a thousand cuts or go fast and burn the whole thing to the ground. I don't have a strong feeling either way on the particular instance of Israel. I just think that (A) humans have a huge blind spot in favor of systems and structures that will never be fixed, because they (quite rationally) fear abrupt change and want to honor past investment in those systems and structures (sunk cost fallacy), so there is a huge hazard of motivated reasoning here, where you convince yourself that reform will save the day even though it's not realistically going to; and (B) I tend to have an opposite bias toward radical and ground-up restructuring on other issues, subject to a strict and cynical realization that Utopia is not guaranteed and that the end doesn't justify the means under any conditions.

I'm not the brightest bulb when it comes to American history, but as far as I'm aware, America giving its black population rights happened only as an unintended side effect of (A) industrialization, which made slave labor a net-negative paradigm, and (B) a failed compromise: the South could have kept its slaves if it had been more willing to meet the North in the middle on some issues related to voting rights of blacks and such. As for the indigenous population, they were given rights only after the vast majority of them had been wiped out. I'm sure that if Palestinians were to decrease to 100K, the Israeli right wing would rejoice and give the surviving Palestinians all they want, up to and including token representation in the government.

Destroying (the Zionist, racist) Israel from within Israel is not impossible; I'm just urging caution against underestimating how difficult it would be. The Hawks are cunning: close associates of Ariel Sharon are literally on tape saying that the entire de-settlement of Gaza was just a ploy to freeze the peace process until Palestinians are genocided enough that peace is automatically achieved. It looks like they were right.

The spectrum of Anti-Israel positions is also far wider than you present it to be; there are more internally-consistent positions than just (1) Pro-Hamas, (2) Pro-Israel and thinks it's perfect as it is, (3) Pro-Israel and thinks it's flawed but that its flaws can be neutralized through the democratic process.

Re: your meta point. I don't know, should 1200s atheists have known about Evolution? They would have needed to know Evolution in order to present a viable alternative to the Abrahamic story of creation, but that wouldn't exist for another 700 years. It's the greatest trick the status quo ever pulled on people: convincing them that "The status quo sucks. What's the alternative? No viable alternative? Therefore, every opposition to the status quo sucks and is meaningless" is a useful chain of reasoning.

I agree that an opposition that has a realistic and actionable alternative is much much better and much much more likely to succeed than an opposition that simply refuses and elaborates no further, but I disagree that the latter form of opposition is meaningless or is equivalent to the null action.


A side issue, but all slavery was abolished (in theory; at least a lot fewer people were enslaved) at least in part because slavery became a less efficient way of using labor. Yet slavery for domestic labor seems at least as economically efficient as it ever was, and it's abolished too.


Re: 2 comments... I was referring to top level, and I only see two (apparently unless I click on one of them... there is no other button which displays the rest), which seemed reasonable since it also showed making further comments as disabled. Anyway, it doesn't matter.

I was agreeing that Anti-Zionism == Anti-Semitism is wrong, so you're preaching to the choir. If your point is about how entrenched that claim is, then yeah - I concur.

Living in Israel, I don't need the video. The concept of Anti-Zionist Jews exists but is under harsh attack (and some religious people here don't believe atheists actually exist); I can't tell how much of the proper political left (5-10% of voters?) subscribes to it, since some pay lip service to Zionism. Anti-Zionist citizens, though, are a large minority represented in the Knesset.

So your point about slavery is... that one would have been right to destroy the U.S. at the time? I don't care much about history; I was bringing it up in regard to what the U.S. is now.

I think I understand and sympathize with your ironic claim, but as you present it, it's just... reality? There are millions of Jews, hostages of circumstance, living in Israel due to choices of their parents, who will die if Israel stops existing. That's not being used as human shields, it's just life. In addition to that, the danger to their lives is leveraged to get concessions, which is in a way being used as human shields.

Your bias towards (B) might be the crux of our disagreement. I am always leery of radical and ground-up restructuring when it includes a high possibility of death, but especially so when the death toll would include me, my family, and my friends. This makes the matter hard for me to discuss without bias. So yeah... sorry to get emotional about this, but feeling powerless about my government killing thousands of civilians on the one hand and everything else on the other is stressful.

A second crux might be that I obsessed too much about opposition to existence rather than taking it as more general opposition; this probably invalidates most of my arguments? I still really don't like the phrasing, but I understand the framework it comes from.

I'm not sure what your point is about the spectrum of Anti-Israel positions. All positions are wider than 3 possibilities, and I don't think I used the framing which you do. In particular I think that using Pro-Hamas or Pro-Israel at all is reductive and way too prevalent as terms and ways of thought.

Your analogy to 1200s atheists is... bad? They can say that they don't know where life originated from; knowing nothing is a valid alternative to the Abrahamic story of creation, because it has no direct effect on anything. Waving shiny alternatives around and causing widespread death is the greatest trick of radicalism; destroying things without a plan, or with a bad one, is kind of a good way to make everything worse. Not that anyone *here* is arguing for the status quo.

I don't think opposition without an alternative is meaningless or equivalent to the null action; I think it can be actively harmful. I'm not asking for a realistic and actionable alternative, but for some vague gesture toward the direction in which the alternative is being sought.

Obviously the first step should be saying "this is wrong" and no alternative is needed at that stage.


I'm not disagreeing (that much), by the way; emotions are hard to convey through emoji-less text, but I'm actually overjoyed whenever I find Israelis like you. I'm just probing your views further because I'm interested in the general topic of discussion and all the myriad angles it can be viewed from. (And by the way, sorry for my username; I don't mean most Israelis by it, or even a hypothetical sane Israel that doesn't kill innocent Palestinians.)

>  I was referring to top level, and I only see two

I see; that wasn't the case when the article still had comments enabled, but I can see it now. You can still defeat it by (1) clicking on the replies of one of the 2 comments you can see, then (2) clicking "Return To Thread" at the top, beneath the "Commenting has been turned off" grey text. You will return to the original thread, where several (40+ or 50+) top-level comments are present.

I know you agree that Anti-Zionism is not Anti-Semitism in the absolute; I'm grateful, and a bit surprised, that you do. I'm just explaining why this isn't my prior about an arbitrary Israeli. Zionism is to Israelis as water is to fish.

> The concept of Anti-Zionist citizens is represented in the Knesset

Because of the Haredim, right?

> that one would have been right in destroying the U.S. at the time?

Obviously, I can't make a sweeping moral claim about something that happened in 1860. For all I know, destroying the US in 1860 might have been the turning point that would have allowed the resurgence of slavery elsewhere. For all I know, no US, or a worse US, in the 1930s could have meant Nazi Germany ending up a WW2 victor, imposing fates far worse than chattel slavery on multiple tens of millions of non-Germans in Europe alone. I don't believe anybody has the compute necessary to convincingly simulate 160 years of no US, certainly not the 80+ years from 1941 till now, and certainly not in their head.

All I'm saying is that history can be deceiving: it walks with atrociously slow steps and occasionally backtracks; it walks in paths that make no sense from any human standpoint, any more than the coast of Norway makes sense to an urban road planner. It can be easy and tempting to look from your current vantage point and say "Phew, it's a good thing we didn't destroy the US when it was a slaver nation, look at where we are now", but this is a fallacy. It's not as if you visited other timelines and saw that every single alternative beginning with destroying the US ends badly; maybe some of them do, maybe the majority, but you have no reason to believe that all of them do.

> I think I understand and sympathize with your ironic claim, but as you present it it's just... reality

Oh, I don't think Israel is exceptional at all; at the very least its human shields enjoy high standards of living. The exact same thing about being used as human shields, minus the standards of living, can be said about my own native Egypt under the Al-Sisi regime or (in the horrifying extreme) North Korea. I'm an Anarchist for a reason.

>  there are millions of jews, hostages of circumstance, living in israel due to choices of their parents

I hold not a single planck mass of hatred or blame towards them, they're the reason I don't want to see Israel destroyed by force. They're the reason my face falls when I hear about Israeli casualties, even the military ones.

> [All those Jews] will die if Israel stops existing

Ehhh, I can see where this is coming from and I sympathize hugely with it, but it's not that black and white. I'm not asking any Jew to take their chances, I'm just saying those chances are more like 50% to 60% at the worst rather than 90% to 99%, and they can get as low as 20% to 30% on good days.

Again, I personally wouldn't risk my family on death odds as low as 5% to 10%, so I'm not implying that those Jews are being unreasonable in fearing for their lives if Israel no longer exists; I'm just against inflating already-bad odds to be worse.

> sorry to get emotional about this

Any Arab or Jew is plenty emotional about this, and that's an understatement. So don't apologize for being human!

> I'm not sure what your point about the spectrum of Anti-Israel positions

I was specifically reacting to the bit where you said "Although I don't like the US, that still wouldn't get me to support Al-Qaeda or the Nation of Islam". This might not have been intended on your part, but to me it has the implication that a person can only either (1) support Al-Qaeda, (2) support America and think it needs change from within, or (3) support America and think it's perfect. The gap between (1) and (2) is massive, and includes lots of positions that are more hostile and violence-y than (2) but still nowhere near (1).

>  In particular I think that using Pro-Hamas or Pro-Israel at all is reductive and way too prevalent as terms and ways of thought.

I very much agree. I use Pro-Israel and Pro-Palestine/Pro-Hamas in the same sense as "Northern Hemisphere" and "Southern Hemisphere": very rough groupings of whole swaths of territory.

My point about 1200s atheists is just that people are too harsh on radical alternatives. People are harsh on anyone who wants to abolish capitalism (please ignore how extremely vague this is for now), abolish meat-eating, or perform various ground-up restructurings of society and/or family. This makes sense from a certain point of view; otherwise any rando would just dream up 100s of ill-thought-out scenarios and spam their society with them until one of them hits and becomes a disaster. But this also sometimes blinds people to how utterly the status quo sucks, and to how some of the alternatives are only weak and badly thought out because very few embrace them; they are literally being run on fewer minds, so of course they will be unsatisfying and low-resolution (unless lying and false promises are used, which is a whole other can of worms).

As a vague gesture towards alternatives, what I would personally ask of a genie if I found a magic lamp is the following:

1- The US stops supporting Israel with tanks and Iron Dome missiles, stops sending aircraft carriers for it, and leaves BDS alone; AIPAC becomes well known, and US politicians under its influence become less legitimate.

2- The entire Arab/Muslim world becomes simultaneously more harsh and less harsh on Israel:

2--a) More: no trade relations, military coordination, or intelligence coordination; sanctions, oil embargos, etc... The countries that are still making their peace with Israel, or haven't yet, make peace conditional on Palestinians being treated better.

2--b) Less: Hebrew is taught in schools alongside Arabic (or the other dominant language in non-Arabic cases like Indonesia and Pakistan); the pro-Jew POV becomes more prevalent and socially acceptable; the Israeli POV and the catastrophic failures of Arab leaders of the 1950s-1980s get more focus during history classes. Student exchanges, joint cultural works (jointly-produced movies and TV series, music, etc...), tourism, and so on.


By the way, sorry for dropping out of the conversation - I had to go to sleep and then life distracted me. I can try to write full replies if you find this conversation fruitful-in-potential and don't mind long lulls.


I just noticed that what I know about HSV-1 (herpes) doesn't make sense:

1- A large majority of the population carry the virus (per internet & common knowledge)

2- But only a minority sometimes get blisters (per internet & common knowledge)

3- Active blisters are highly contagious (per my physician)

4- But asymptomatic carriers still shed virus 20% of the time (less if they're under antiviral treatment) (per wikipedia)

5- During a blister episode, I should avoid touching it, or wash my hands thoroughly afterward, especially before touching any other mucosa (eyes, lips, genitals) (per my physician)

From 5-, I assume that a given HSV-1 infection is localized, and that I could get multiple ones if I were careless. But for someone in their 30s who never developed any, is the precaution actually relevant? The odds are high that they're an asymptomatic carrier; would a different source of HSV-1 risk causing episodes when the previous one(s?) didn't? And if asymptomatic carriers shed virus 20% of the time, then any time I shake hands with someone, and we don't have super rigorous hand-mouth hygiene, and I rub my eyes afterward, shouldn't I risk getting an infection in the eye?

And if each infection is independent of the others, and asymptomatic carriers shed virus 20% of the time, then shouldn't any unprotected oral sex involve a ~20% risk (a bit less, since not everyone is a carrier) of getting a genital infection?
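One possible resolution of the arithmetic is that a shedding rate is not a transmission rate: shedding is necessary but not sufficient for infection, so the per-contact risk is the shedding rate times the chance an exposure on a shedding day actually infects. Here is a minimal sketch of that arithmetic; the 20% shedding figure comes from the list above, but the per-exposure transmission probability is an illustrative assumption, not a clinical figure:

```python
# Why "carriers shed 20% of the time" need not mean "20% risk per contact":
# infection requires both shedding AND a successful transmission event.
# SHEDDING_RATE is from the discussion above; P_TRANSMIT_GIVEN_SHEDDING
# is a made-up illustrative assumption, NOT a clinical figure.

SHEDDING_RATE = 0.20               # fraction of days a carrier sheds virus
P_TRANSMIT_GIVEN_SHEDDING = 0.01   # assumed chance one exposure on a shedding day infects

# Risk of infection from a single contact with a carrier.
per_contact_risk = SHEDDING_RATE * P_TRANSMIT_GIVEN_SHEDDING

def cumulative_risk(n_contacts: int) -> float:
    """Chance of at least one infection over n independent contacts."""
    return 1 - (1 - per_contact_risk) ** n_contacts

print(f"per contact:   {per_contact_risk:.3%}")
print(f"100 contacts:  {cumulative_risk(100):.1%}")
print(f"1000 contacts: {cumulative_risk(1000):.1%}")
```

Under these toy numbers a single contact carries only a 0.2% risk, yet the cumulative risk climbs toward certainty over many exposures, which would be consistent with both "a large majority of the population carries the virus" and "one handshake rarely infects anyone".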

Something is wrong, either in the bits I got from Wikipedia, in those I got from my physician 20 years ago, or in those I infer.


Organs don't age at the same rate.

https://www.nature.com/articles/d41586-023-03821-w

Previous discussion: https://www.astralcodexten.com/p/open-thread-304/comment/44384289

I'm amazed. I think this is the first time I've raised a theoretical/intuitive question and had some plausible science show up so fast.


There's a video making the rounds of various university presidents refusing to say outright that calling for a genocide of Jews violates campus policies on bullying and harassment (https://www.youtube.com/watch?v=QuTfzcNIeDI if you haven't seen it and want to). My question is: if you're the president of a major US university being asked this question in those words, why do you not give a passionate speech about how of course calling for genocide (of anyone) is bullying and harassment, but, by the way, that isn't actually happening? What do you stand to gain by equivocating?


I haven't watched the video but I did enjoy Ken White's analysis of the dynamics around it: https://popehat.substack.com/p/stop-demanding-dumb-answers-to-hard

>Take this week’s Congressional hearing about antisemitism on college campuses, titled “Holding Campus Leaders Accountable and Confronting Antisemitism.” A generous interpretation — a credulous one — would be that the hearing was designed to inquire why colleges aren’t protecting Jewish students from antisemitic harassment. A more realistic interpretation is that the hearing was a crass show trial primarily intended to convey that a wide variety of dissenting speech about Israel is inherently antisemitic, that American colleges are shitholes of evil liberalism, and that Democrats suck. Since Democrats do suck, they mostly cooperated.

>The core Two-Minute Hate of this carnival was Rep. Elise Stefanik’s demand for yes-or-no answers to questions about whether policies at Harvard, Penn, and MIT would prohibit calling for the genocide of Jews. You might think Elise Stefanik is an unlikely standard-bearer for a crusade against antisemitism, given that she’s a repeat promoter of Great Replacement Theory, the antisemitic trope that Jews are bringing foreigners into America to undermine it. But if you bought Stefanik’s bullshit, you probably didn’t think that far. The college presidents did a rather clumsy job of saying, accurately but unconvincingly, that the answer depends on the context. Stefanik and every politician or loudmouth who wants you to hate and distrust college education and Palestinians pounced on it. And many of you fell for it. You — and I say this with love — absolute fucking dupes.


I did *not* enjoy that analysis. That it's a show trial is obvious; the defendant at a show trial can still acquit themselves better or worse before losing.

Put differently -- let's say I personally am failing to get outraged (mostly because I'm not prone to outrage), but have friends who are getting outraged. Telling them that they are "absolute fucking dupes" does not advance the cause of conveying to them how it's possible to both be a decent human being and not be outraged.


It's not even a show trial, it's just a show. And yes, it's stupid for Congress to force a bunch of university presidents to star in such a show, but yeah, if you have to do it, do it better than that.


I don't know the first thing about Ivy League politics or the labyrinthine machinations concocted by the armies of lawyers those schools have on call, who no doubt advised each of those presidents.

But if **I** were questioned in that session, I imagine I would be tempted not to give a straightforward "Yes", in order to:

1- Refuse to give the obviously performative, shrill congresswoman an easy answer she can tout on Twitter. If I rip into her, I will be punished because she's Mrs. Congresswoman (or will I?). If I give a passionate speech about how calling for the genocide of Jews is of course wrong and reprehensible, but is not happening, she will cut the part that she likes and tout it to her Twitter hordes anyway. If she can't, she will keep pressing for an easy answer: "Yes or No, Mrs. President, Yes or No, I want a single-bit answer because I can't fit anything bigger into my brain."

Yes-or-No questions are rhetorical tricks when asked from positions of imbalanced power, like debates with hostile audiences and heated formal hearings. When someone replies with a straightforward Yes or No, they're agreeing to every single word, premise, and phrasing in the question. "Mr. Husband, do you rape your wife by lightly kissing her neck even though she's not in the mood for sex? YES OR NO?"

Yes-or-No questions are good in high-trust environments, where the majority of the audience can be trusted to know rhetorical manipulation tactics like the above and not fall for them. There, the questions can be used to establish common ground quickly and efficiently, and to dispel baileys advanced by other members of the ingroup at the outset: "As a Pro-Palestinian, I will agree that yes, calling for the genocide of Jews is utterly reprehensible and has no place in any sane stand for Palestine."

2- Building on 1: if I answer yes, the obvious next 2 lines of inquiry are establishing whether calls for the genocide of Jews happened at my university, and whether I'm blamable for them. The Congresswoman already hinted at what kind of evidence she would use to establish the first:

> [0:21 in the video]

> [MIT president] I have not heard any chants calling for the genocide of Jews on our campus

>> [Congresswoman] But you have heard chants for Intifada

But those are already 2 very different things. The 2 Intifadas weren't a genocide of Jews by any sane measure : the ratio of Israelis to Palestinians killed in the first Intifada is something like 200 to 2000, and no genocider ever accepts those KD ratios. This doesn't mean the Intifada was a good or neutral thing, it burned immense amounts of goodwill, and the second one is probably the reason the peace process halted dead in its tracks. But it was no genocide, and establishing this requires a lot of argument in front of the scary big camera while Mrs. Shrill Congresswoman says "I WILL GIVE YOU ONE LAST CHANCE FOR THE WHOLE WORLD TO KNOW YOUR TESTIMONY : BABIES SHOULD NOT BE KILLED, YAY OR NAY ?"

So it's extremely tempting not to cede any ground at all in the face of such an opponent. At one moment or another the dramatic showdown will happen anyway, and the opponent WILL "win" anyway, because she has the privilege of asking the questions and her post-production will have a field day with the answers one way or the other. So it had better happen while I'm in the strong position and my opponent is still exhausting her voice just trying to get me to agree to the beachhead she will launch her next attack from.

Now obviously, this is all Machiavellian analysis that I'm not endorsing or calling remotely good or acceptable. Staying silent in the face of such a question, while potentially millions of Jews watch, is not even a good strategy for anyone remotely Pro-Palestinian in the sane, secular, human-rights-based way, besides being morally icky of course. The best solution is to not get myself into such an incredibly hostile debate environment to begin with ; the second best is to phrase my response as a hostile counter-question :

> What are you implying with this question ? We all know that nobody decent would stand silently while someone chants for the genocide of anybody, so the real question is why you thought it would be useful to imply, with an insidious phrasing of the question, that I do not have this common decency. And do you agree that calling for the genocide of Palestinians is an immoral thing to do ? Do you agree that the current Israeli military response has claimed the lives of 16000+ Palestinian civilians ?

Ceding ground, but hiding the ceded ground between 2 or more questions, and the more questions making more assumptions about her, the merrier. She now has 2 choices : either accept the ceded ground and continue on from it, opening herself to repeated counter-attacks based on the questions ("Are you refusing to say whether chanting for the death of Palestinians is wrong, Congresswoman ?"), or get distracted by the counter-questions, forget the ceded ground, and never develop the original attack. This has to be done carefully, within the confines of whatever etiquette I'm expected to follow, so that she doesn't abort the whole episode and declare victory when she remembers she's the one who asks the questions. The bet is that she gets so distracted by the questions that she forgets she has the option of not answering them. That's why more questions hiding more outrageous assumptions about her is better.

Again, all of this is very Machia-villain and against the spirit of good and honest debates of the kind that enlighten, but sometimes you just gotta do what you gotta do.

I disagree with your "Mr. Husband, do you rape your wife by lightly kissing her neck even though she's not in the mood for sex ? YES OR NO ?" example. That is, in practice I agree that you'd end up with a disagreement over what constitutes rape / genocide, but the question being asked in the video is the straightforward "Mr. Husband, do you rape your wife?". It seems to me to be *incredibly* bad optics to be equivocating on that phrasing, as opposed to equivocating on what exactly "rape" is. Though I suppose this is the strategy that Bill Clinton tried, and it didn't work great for him.

Well, I agree, the initial question by the Congresswoman didn't smuggle in a lot of assumptions on the surface.

But I'm judging by her considering the Intifada to be a genocide : if she had won that concession from the presidents, she would probably have kept pushing until saying "Ceasefire NOW" counted as genocide against Jews. The presidents didn't handle it well, but if I were in their shoes I wouldn't have given her a straightforward 1-bit answer either.

I agree that one should ignore the 1-bit constraint and give a longer response; my own knee-jerk reaction is "why are you asking about genocide *of the Jews* as if that was important, are you implying that calling for a genocide of someone else would be just fine?" It's just, I'm desperately confused by why they didn't in fact answer that "of course we wouldn't stand idly by if anyone were calling for a genocide on our campus, BUT" and instead went with a "maybe" to the entire thing.

Because their first duty as officers of their respective institutions is to protect them from liability. That's far more important than answering a "gotcha" question.

I'm still confused by how any of this incurs liability for the institution. Wouldn't that have to go something like this? "My client was disciplined for harassment because they were calling for the genocide of the Jews. We concede that they were calling for the genocide, but hold that the disciplinary action violated Harvard's harassment policy." That seems like an extremely unsympathetic case to make.

Your other point, that it actually *might not be* a violation of the policy, and that unconditionally saying it *is* would therefore be lying, makes a lot more sense to me.

What we're looking at is a clash between two orthogonal moral systems. The first is "normie morality", in which the good or bad of an act depends on the nature of the act itself, without regard for who is doing it. The second is what we could call "social justice morality", in which the moral value of an act is determined primarily by its alignment towards redressing socio-economic imbalances between large groups of people.

The University Presidents are stuck in a difficult position where they are forced, in front of congress and on live TV, to choose between the two moral systems. I think deep down they don't actually agree with social justice morality, but they are dependent on a large group of people who actually do. Of course they're also subject to an even larger and more economically influential (though less violent) group of people who believe in normie morality. All you can do in this situation is waffle, say something contradictory and non-committal, and hope that one of the other university presidents screws up even worse so that they'll get all the heat, not you.

I was born in the USSR, and absorbed at least some of the mindset through osmosis. My impression was that modeling things in terms of class was conceptually similar to the SJW framework, and also that it was flexible enough for responses along the lines of "of course we don't condone calling for genocide, but we fully support the efforts of our downtrodden and oppressed brethren to shake off the shackles of their oppressors". I am surprised that the SJW-inspired crowd instead bites the bullet of continuing to be wishy-washy, even when literally asked about "calling for genocide of the Jews".

They can't give that reply because that isn't an official policy position of any of their respective schools. I'm not convinced that it should be. As for the substance of the question, not a lawyer but I would imagine that calls for non-specific acts of violence are protected speech in the US, or at least the legal status is murky enough that we shouldn't be asking a college president to opine about it.

I would be happier with the video if the university presidents were asked what sort of context or actions they had in mind. That said, I'm not pleased at all that they're so wishy-washy on the subject.

I listened to the second half, and it's a bit clearer. Threats against individuals are taken seriously; threats against the whole group are no-never-mind.

I'm sympathetic to the idea that individuals have rights in a way that groups don't, but it seems incredibly *stupid* to explicitly say that literally calling for the final solution would be fine depending on context.

As I said above, I'm not a legal expert, but my understanding is that calling for the final solution, in a general nonspecific way, is protected speech in the US. This may be good or bad depending, but there isn't much a university can do about that.

Harvard and MIT are private universities, and as such can have speech codes stronger than would be allowed by the 1st Amendment's restriction on government action. Or not, as they choose.

And yes, "We should exterminate the Jews", as a general and nonspecific threat, is protected against government interference. So is "We should reinstitute Negro slavery" or "we should round up all the Muslims and put them in camps" or "we should forcibly detransition all the transgender people". A private university is free to say that they value academic freedom enough to allow all of those. Or to say that they value civility enough to kick you out for saying any of those.

Or to say "No on the slavery, concentration camps, forced detransitions and all that, but advocating the Final Solution for the Jewish Problem is just fine". But if that's their position, it's not freedom of speech, it's plain antisemitism and should be called out as such. Also, there are legal issues with expressing official antisemitism (or anti-any-protected-group-ism) while taking federal money for e.g. research projects.

I wish that, instead of speculating what the presidents *would* say *if* they were asked about calling for mass lynchings, Elise Stefanik just *did* ask. Not necessarily because I expect to like the outcome, but I don't like speculating about "what would they say if X, huh? huh?!" when there was an excellent opportunity to test it.

>and as such can have speech codes stronger than would be allowed by the 1st Amendment's restriction on government action.<

Of course this hearing is government action, and the Congresswoman here should not be allowed to pressure private institutions to apply stronger restrictions than the government can apply directly. It's gross.

"Harvard and MIT are private universities, and as such can have speech codes stronger than would be allowed by the 1st Amendment's restriction on government action. Or not, as they chose."

I don't think that this is true. I don't lose my free speech rights by simply walking into a shopping mall, or a corporate headquarters, or a private campground, all private institutions. Of course, they can regulate behavior that interferes with the normal course of activities there, as can a private university, but that has nothing to do with the content of the speech.

According to Tyler: https://marginalrevolution.com/marginalrevolution/2023/12/the-university-presidents.html their equivocating prevents lawsuits.

"Their entire testimony is ruled by their lawyers, by their fear that their universities might be sued, and their need to placate internal interest groups. That is a major problem, in addition to their unwillingness to condemn various forms of rhetoric for violating their codes of conduct. As Katherine Boyle stated: “This is Rule by HR Department and it gets dark very fast.”"

I was confused by why lawyers would find it objectionable to, again, denounce the literal phrase "calling for a genocide of the Jews" in arbitrarily strong terms. Someone I talked to pointed out that Elise Stefanik wasn't interested in getting the presidents to denounce it; she wanted them to specifically declare that it would violate their bullying and harassment policies. Their point was that, if a university president were to concede that it would violate the harassment policy, then the university would be open to a harassment lawsuit (and a university administrator who allows that is clearly no longer employable as a university administrator). I'm not entirely sure I follow the implication "president concedes that something is a violation of the internal harassment policy" => "university gets sued" (doesn't it follow that the university should just not have an internal harassment policy?), but it does at least explain why everyone was so committed to particular formulations of their questions and answers.

That, plus the fact that it in all likelihood *doesn't* violate their bullying and harassment policies (because such calls likely do not meet the legal criteria for bullying or harassment). Once again, I am not a lawyer, so someone should fact-check me. But can you imagine the publicity storm that would erupt if one of them admitted that?

Ah. Yeah, ok, the fact that they could go up for perjury if they unconditionally said it *did* violate it is a good point. (Also not a lawyer, but having looked through Harvard's bullying and harassment policy, I am also not convinced it would violate it.)

Very much true, though it does open them up to charges of hypocrisy and losses in the court of public opinion. Neither of those things is controlled by lawyers (though good PR people might have some thoughts), and they don't directly lead to monetary damages.

Conservatives, whose opinions they clearly did not care about already, have more reason to dislike them and more ammunition to post on right-leaning news sources, which Ivy League universities don't care about. This kind of concern has a very low natural ceiling for these schools.

I think what it really comes down to is this: if rich Jews pull donations from these schools faster than replacement donors come in, we might see some action. Otherwise, their testimony was the best of a group of bad options for them.

Scott, if you're willing to, would you share some information about prerequisites and timelines for applying to medical school in Ireland? I'm considering it because applying to medical school in the US would take me ~3 years, which seems absurd (I'm only missing 5 courses, but they mostly have to be taken in sequence.)

I know about the issues with match rates and all that – but I want to check if applying is worthwhile before I think about that.

https://www.youtube.com/watch?v=yDp3cB5fHXQ

Four hours! By hbomberguy, about the plagiarism by James Somerton on YouTube. I've been told by a number of people that it's both meticulous and engaging, and I might watch it.

https://youtu.be/A6_LW1PkmnY?feature=shared

Almost two hours, by Todd in the Shadows, about how Somerton was talking utter nonsense and yet had a substantial reputation until hbomberguy documented the plagiarism. I watched this one and I recommend it.

Somerton's YouTube presence is toast. I'm interested to see that YouTube posters have done a better job of opposing plagiarism than the government has.

I watched both videos: Todd's is more bang for your buck (in terms of watch time) but Hbomber's is thorough and engaging. Hbomber spends the first hour or so talking about other YouTubers who have plagiarized, and then spends the rest of the video on Somerton.

It's not the government's job to oppose plagiarism at all, let alone on YouTube. It's not even a tort like copyright infringement, unless it actually rises to that level.

Robot with wheels on all four limbs.

https://spectrum.ieee.org/quadruped-robot-wheels

"The ETHZ researchers got the robot to reliably perform these complex behaviors using a kind of reinforcement learning called ‘curiosity driven’ learning. In simulation, the robot is given a goal that it needs to achieve—in this case, the robot is rewarded for achieving the goal of passing through a doorway, or for getting a package into a box. These are very high-level goals (also called “sparse rewards”), and the robot doesn’t get any encouragement along the way. Instead, it has to figure out how to complete the entire task from scratch."
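
For anyone curious what "sparse reward" looks like in practice, here's a toy sketch of my own (purely illustrative, not the ETHZ researchers' method or code): a tabular Q-learning agent in a short corridor that is rewarded only upon reaching the goal cell, with no encouragement along the way, yet still learns to walk toward the goal.

```python
import random

# Sparse-reward toy problem: a 1-D corridor of cells. The agent starts at
# cell 0 and gets reward 1.0 ONLY when it steps into the last cell -- there
# is no shaping reward for intermediate progress.
def train(corridor_len=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q[state][action]: action 0 = move left, action 1 = move right
    Q = [[0.0, 0.0] for _ in range(corridor_len)]
    for _ in range(episodes):
        s = 0
        for _ in range(4 * corridor_len):  # step limit per episode
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = int(Q[s][1] >= Q[s][0])
            s2 = max(0, min(corridor_len - 1, s + (1 if a else -1)))
            r = 1.0 if s2 == corridor_len - 1 else 0.0  # sparse: only at goal
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if r:
                break  # episode ends when the goal is reached
    return Q

Q = train()
# The greedy policy for every non-goal state (1 = right, toward the goal).
policy = [int(q[1] > q[0]) for q in Q[:-1]]
print(policy)
```

The discounting does the "figure it out from scratch" work: once the goal is stumbled on, the reward propagates backwards through the Q-values until every state prefers moving right. The real robot problem is vastly harder (continuous states, curiosity bonuses to drive exploration), but the reward structure is the same idea.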

The number 1 song on Israel's Spotify and YouTube is a genocidal rap that compares Palestinians to Amalek, celebrates the destruction of Gaza as righteous revenge for the Gaza envelope's kibbutz children, and - for some reason - mentions Mia Khalifa and Bella Hadid on the same footing as the leadership of Hamas and Hezbollah.

The song's name, Harbu Darbu, is a corruption of the colloquial Syrian Arabic expression حرب و ضرب, literally "War and Striking". In Hebrew criminal-underworld slang it means "Swords and Destruction"[3].

Making fun of Palestine supporters, the lyrics liken "Free Palestine" to a holiday sale, exploiting the double meaning of the English "Free", as in "The IDF will take Palestine for free".

On YouTube, the official upload has 5.3 million views in 3 weeks.

[1] https://www.youtube.com/watch?v=1rk3n9V-aQs

[2] https://lyricstranslate.com/en/harbudarbu-charbu-darbu.html

[3] https://en.wikipedia.org/wiki/Harbu_Darbu

Nice. Israel has more confidence in itself than the West does. I don't remember songs like this after 9/11. Well, maybe a few country ones, but not rap ones.

It will never cease to be hilarious and ironic when Pro Genociders rationalize Israel's military response by drawing parallels to 9/11, forgetting how that worked out for them.

Dec 6, 2023·edited Dec 6, 2023

You've mistranslated the lyric you find damning. It is not a play on the English equivocation between 'free' (as in not requiring payment) and 'free' (as in liberty). The line is making fun of Palestinians who claim to speak Hebrew but make exactly the mistake you just did. The line is: "They shout 'Free Palestine' but it sounds like a holiday sale to me." The immediate joke is the (imagined) Palestinian chose the wrong word for 'free', the one that means 'sale' and not 'liberty.' It's a very rudimentary mistake because even the grammar is wrong. (Your Arabic, however, is correct.)

It doesn't directly celebrate destruction in Gaza in the name of Kibbutz children. It says to write the names of victims of the attack on the guns and the shells of IDF soldiers. Your interpretation that any reference to weapons of the IDF is a reference to Gaza's destruction is an interpretation, not plainly in the song. The only other references to Gaza are a call to make war and generically to shoot at it as well as associating it with places like the Golan heights.

The line about Mia Khalifa is a claim that God's vengeance will come on anyone who supported the October 7th attacks or Hamas. They are not mentioned 'on the same footing' but a dozen lines later. There's about ten lines from most to least serious and Mia Khalifa/Bella Hadid are at the very end.

Lastly, of the five million views, the vast majority come from the Arab world, with Israel registering about 1.2 million of them. The other millions are mostly from the rest of the Middle East. Because, and I suspect this is not a coincidence, it's become a common cause on Arabic social media, and a lot of bad translations are floating around that, either purposefully or due to lack of Hebrew knowledge, mistranslate the lyrics. And they make many of the claims you do.

(Of course there's something of an asymmetry here due to the number of Arabs in the region vs the number of Jews where 1 million Jews is a huge proportion of the Jewish population while 4 million Arabs is a small percentage.)

Please try to be better about your contributions on this topic. If you are (as there are now numerous signs) someone who doesn't speak Hebrew very well and who hates Israel you will be respected more for making your honest case than pretending. And you should generally be more careful of anti-Israeli news articles because you should, if you want an accurate model of the world, be especially sharp eyed about anything that reinforces your worldview to avoid confirmation bias.

Dec 6, 2023·edited Dec 6, 2023

> You've mistranslated

I didn't translate any Hebrew myself, as a matter of fact. I just read the English lyrics I posted in [2].

When I first read the line making fun of 'Free Palestine', I didn't actually understand the connection between holiday sales and the slogan. One of the sources I read about the song suggested this interpretation, and it made sense to me, so I included it.

> Your Arabic, however, is correct

There would have to be something very wrong with me to be bad at my native tongue.

> It doesn't directly celebrate destruction in Gaza in the name of Kibbutz children. [...] It says to write the names of victims of the attack on the guns and the shells of IDF soldiers.

Uhmm okay, and what are the guns and the shells of the IDF doing right now ? What is the only use of guns and shells ?

Counterfactual : let's say I sing about writing the names of Gaza's children on the guns and rockets of Hamas. Does this count as a celebration of the destruction of Israel or nah ? If yes, what's different when I switch it to Israeli children and Israeli weapons ?

And what about comparing Gazans to Amalek ?

> The line about Mia Khalifa is a claim that God's vengeance

Just God's revenge ? Isn't this line right there after mentioning Bella, Dua and Mia :

>> All the IDF units are coming, to put the heat of the sword

>> on their heads, woe woe

Ten lines in a rap song doesn't seem like too much of a distance to me. Especially when most of them are 1 catch phrase repeated.

Just the mere fact that they mention the name of a model, a porn star and a singer next to the names of Islamist organizations' leaders is an indicator that there is an astonishing amount of confused victimhood and misguided radicalization where that came from.

> the five million views the vast majority of them come from the Arab world with Israel registering about 1.2 million of them.

As far as I know, you can't be sourcing this claim from the YouTube UI shown to ordinary viewers. I also read the first 3 pages of Google search results for this song before making the post, and I never saw this claim referenced. So unless you have the credentials of the uploader account or a YouTube-internal database, I want to know who you are quoting this number from, please.

> making your honest case than pretending.

Pretending to be what ? This is my honest case. Are you saying I don't believe what I'm writing ?

> you should generally be more careful of anti-Israeli news articles because you should, if you want an accurate model of the world, be especially sharp eyed about anything that reinforces your worldview to avoid confirmation bias

A thing I'm already taking into account as much as I can, in proportion to the counter-signal that Israel has spent 70 years and billions of dollars openly injecting propaganda into the Western and global info stream, to the point where the AIPAC annual policy conference is second only to the State of the Union in how many US feds' eyeballs it captures.

Anti-Israel news sources are, quite trivially, the only place where I will find facts and interpretations suppressed by this vast propaganda machine. I also see a fair amount of pro-Israel news sources, take the diff, and reach my best-effort view.

> Uhmm okay, and what are the guns and the shells of the IDF doing right now ? What is the only use of guns and shells ?

To wage war. No one disputes that Israel is waging war in Gaza. The idea that anything Israel does in Gaza automatically amounts to destroying Gaza is certainly one that people push. But it's an interpretation, and your original claim was not "I interpret X to mean Y." It was that the song directly called for the destruction of Gaza.

> Counterfactual : let's say I sing about writing the names of Gaza's children on the guns and rockets of Hamas. Does this count as a celebration of the destruction of Israel or nah ? If yes, what's different when I switch it to Israeli children and Israeli weapons ?

No counterfactual necessary, such songs exist. They generally don't come under such scrutiny though. And I do distinguish between the ones that directly call for the destruction of Israel or killing of Jews and those that say something general like 'remember the martyrs'.

> I want to know who you are quoting this number from please.

From the music charts. 1.2 million in Israel, 6 million on the regional charts. Unless you think it's a bunch of Assyrian Christians I think it's fair to assume the other 5 million-ish are Muslims of various kinds.

> Pretending to be what ? This is my honest case. Are you saying I don't believe what I'm writing ?

You have claimed that you have knowledge of Hebrew and studied Israel and so your hatred of Israel is justified. I take you about as seriously as I'd take an Israeli who went by the name "LearnedArabicHatesPalestinians" and who constantly posted things about how Arabs are awful and Hamas is barbaric. Which is to say, not very.

I also don't believe you are making a genuine effort to do anything more than propagandize on behalf of your preferred side. I'd prefer you put more effort into actually thinking about the situation and delivering your point of view with more light and less heat. Because, unfortunately, the quality of pro-Palestinian commentators here is quite low and I'd love to have a high-quality one around. You clearly want to be that and feel passionately about this. But you're not really contributing so far. As it is right now, you're almost certainly violating the rules and saying untrue things.

> A thing I'm already taking into account as much as I can, in proportion to the counter-signal that Israel has spent 70 years and billions of dollars openly injecting propaganda in the western and the global info stream, to the point where the AIPAC annual policy conference is second only to the State of the Union in how many US feds eyeballs it captures.

The Muslim world spends more money influencing American policy and news than Israel does. Muslims have more influence than Israel by simple virtue of size, both economically and population-wise. Now, a lot of that goes to propping up Gulf State monarchies rather than advocating for support for Palestine, but I'm not sure if that's your point. If you just do an accounting, the Egyptians are the ones who recently had their hooks in a Senator who headed foreign relations. I can't think of anything comparable for Israel.

Also: Do you have a source about how AIPAC is the second most watched speech by Federal policymakers and employees? In general it's accepted that the oil states, most of which are Muslim, have more outsized influence.

> Anti-Israel news sources are, quite trivially, the only place where I will find facts and interpretations suppressed by this vast propaganda machine. I also see a fair amount of pro-Israel news sources, take the diff, and reach my best-effort view.

Is this because they're pro-Israel or because you're so anti-Israel that neutrality feels pro-Israel to you? Put another way, I hear this same charge from the extreme pro-Israel side.

> The immediate interpretation that anything Israel does in Gaza is automatically destroying Gaza

So Israel is not destroying Gaza ? What, it's waging war in it purely using the power of friendship and love ? No 50% of urban areas made one with the ground https://apnews.com/article/palestinians-gaza-israel-bombing-destruction-hamas-reconstruction-f299a28410b70ee05dd764df97d8d3a0 ?

No 1.8 million Palestinians displaced and 15000+ killed ?

> such songs exist.

Please post some.

>  I do distinguish between the ones that directly call for the destruction of Israel or killing of Jews and those that say something general like 'remember the martyrs'.

So according to you, a song that compares Palestinians to Amalek belongs to the second category and not the first ?

> From the music charts. 1.2 million in Israel, 6 million on the regional charts.

Where are those charts/statistics you're getting those numbers from ? According to the song's wikipedia article :

>> As of December 2023, "Harbu Darbu" had received almost 3.5 million listens and views.

No mention of 1.2 million views/listens in Israel, no mention that the total of 5 million on YouTube is mostly from the Middle East rather than global.

> You have claimed that you have knowledge of Hebrew and studied Israel

I have claimed absolutely no such thing. I have been pretty honest about my level of Hebrew knowledge since day 1, about 7 or 8 open threads ago. Just last thread I was replying to Nancy Lebovitz saying that I still find it difficult to tell Hebrew letters apart by name. My username begins with "Learns", not "Learned" ; it's an ongoing thing that's sadly not going to finish in time for me to have the pleasure of seeing some natives calling for the genocide of my ethnic group.

I have never actually claimed that I hate Israelis in the absolute, only Israel. Not being able to distinguish between a state and a population already makes you not the greatest paragon of neutrality. That's the difference between the Russian state and the Russian people, Putin vs. Dostoevsky.

> I also don't believe you are making a genuine effort to do anything more than propagandize on behalf of your preferred side.

What examples of effort would you like to see from me ? Agreeing with you ? Something else ? Mention some concrete things that I can put on a checklist.

>  the quality of pro-Palestinian commentators here is quite low

It's pretty insulting to say that the quality of **people**, not comments, is low. I will chalk this up to a slip on your part.

No hard feelings, my man, but I do not exist to please you. You're not the sole arbiter of what passes as a high-quality comment vs. a low-quality one. If you have specific complaints about my views or how I express them that don't boil down to "I don't like them and I don't like you", I'm all ears ; **everyone** can use a little bit of criticism, even if it's harsh and coming from a hostile place. But I don't find your complaints very actionable, to be honest. They are all variations on "You're a liar, you're bigoted against Israelis, you're misguided/propagandized/stupid/low-quality, etc...", and even if I were all of these things, which is possible, you have to be more specific if your goal is anything but dunking on me for the entertainment or catharsis value.

> right now you're almost certainly violating the rules

Do you mean the "Kind Necessary True" rule ? Kind is kinda out of the window, because there is nothing kind you can say about a war where one side is annihilating the civilian population of a city with more bombs in a month than were dropped on Afghanistan in a year. True ? Not a single word of my original comment was factually false to the best of my knowledge when I posted it ; the mistranslation about 'Free Palestine' is out of my hands (and it's not independently confirmed, I'm merely taking your word for it.)

Necessary ? I would say yes : people should know that the state incessantly claiming to be a victim and denying it's committing a genocide has people who produced a hugely-watched and hugely-listened-to song comparing the civilians of their enemies to Amalek, who in the Bible are a people God orders the Jews to genocide, down to the last child.

Ultimately, if you think I'm such a rule violator, you can report me and move on. There is no point debating the True/Necessary conditions, because you're almost guaranteed to view anything that paints Israel in a bad light as noise, misinformation, or outliers. See "The Signal and the Corrective": https://everythingstudies.com/2017/12/19/the-signal-and-the-corrective/

> The Muslim world spends more money

I won't believe this is true until I see numbers. And I'm an atheist, so I don't get your point: so what if the Muslim world spends more money on bribes and lobbying? I have a finite brain that can only get mad about so many things. Right now there are 15,000+ innocents dying in front of the cameras, so I'm mad about that. When this ends, I will get back to being mad about Islam buying legitimacy it doesn't deserve with petrodollars. Sometimes the two things intersect, when some idiot on one side or the other Islamizes the conflict, and then I'm the first one who gets mad.

 >  Do you have a source about how AIPAC is the second most watched speech by Federal policymakers and employees?

https://en.wikipedia.org/wiki/American_Israel_Public_Affairs_Committee#Supporters

> Is this because they're pro-Israel or because you're so anti-Israel that neutrality feels pro-Israel to you?

I have several heuristics for judging someone or some source as "Pro-Israel". Some of those are:

1- Doesn't recognize that Palestinians have a right to the land, views them as guests

2- Doesn't mention anything about the myriad Israeli genocides of Palestinians https://en.wikipedia.org/wiki/Palestinian_genocide_accusation

3- Doesn't distinguish between Arabs and Muslims, thinks Arabs are inherently more prone to violence and peace-rejectionism and that every single thing Israel ever did was reacting to them

4- Emotional blackmail using the historical anti-Semitism of the various societies that hosted Jews (the Arab ones were the least guilty of it, by the way)

5- Uses Judaism and the Bible as a serious argument that Israel has a legitimate right to the land

Expand full comment

I suspect that part of how Israel got into this situation is excessive attention to how much it's been hurt and not enough attention on how it can hurt people.

This is a hint.

Expand full comment

Are you indirectly saying that Palestinians do the same thing?

Expand full comment

I believe the song is pro-killing-Gazans, though I admit I'm going by a version with English subtitles that doesn't seem to be on YouTube anymore. It was the one where the translator said there was so much slang they weren't sure they got all of it.

Expand full comment

In recent years, many sci-fi and fantasy fans have groused that classic literature is considered a higher art than those genres. No doubt classic literature has taken a cultural beating over the past decade or two. In the '90s, for instance, Ernest Hemingway was still considered one of the greatest American authors of all time. Now he has been relegated to the old white racist league, never to be mentioned in print.

Meanwhile Tolkien has replaced Tolstoy as the great, old, wise author, at least online.

I consider this to be a bad turn of events.

The main theme of classic literature is mortality, death. It's something we all must confront, and it is worthwhile to think about, to meditate upon, to read about.

Sci-fi, fantasy and other genre fiction have deservedly been held in lesser esteem than Literature.

Expand full comment

"The main theme of classic literature is mortality, death "

That's called damning with faint praise. Classical literature is, luckily for us, far wider than that.

Expand full comment

Mortality and death tend to be incredibly dull subjects. Oh boy, another book about how we're all going to die. I'm sure the latest justification for why this is a good thing will be absolutely riveting. Give me stories of heroes with the actual power to change things, please. Reality is grim enough as it is, and no amount of "Literature" about the poetry of futility will improve matters.

Expand full comment

>I'm sure the latest justification for why this is a good thing will be absolutely riveting. Give me stories of heroes with the actual power to change things, please<

Why not both? https://www.youtube.com/watch?v=ngGede_9hAE

...oh wait that one's still fantasy.

Also spoilers I guess.

Expand full comment

The cultural beating classic literature has taken over the past couple of decades has nothing to do with science fiction and fantasy. You allude to the actual cause when you mention the 'old white racist league'. The cultural elite, unlike the actual consumer audience, wants to promote diversity over quality, which means tearing down the great and old in favor of the mediocre, new, and diverse. This is not confined to classic literature, nor even to literature in general. You can see the same effect in almost any field of art, though the more the art tends toward the populist, the slower the disease spreads.

Tolkien has survived where a lot of classic 'great, old, wise' sci-fi and fantasy authors have fallen victim to the same forces that took down Tolstoy, because blockbuster movies are a much more populist medium, owing to the need to actually earn at least a portion of their tens if not hundreds of millions of dollars in budget from the wallets of the public. If Tolkien hadn't been brought to film, he would have been shoved aside like the others. And we've seen efforts to shove Tolkien into the mediocre diversity mold, which have produced predictably mediocre results, though ones labeled 'Tolkien' to try to convince the mass public that they still had some of the 'great, old wisdom' left.

As far as genres go, fantasy is specifically a genre that is optimized for telling stories. Just about any theme can be done well in fantasy.

Expand full comment

"Ernest Hemingway...has been relegated to the old white racist league, never to be mentioned in print."

That statement is false. I read it to my elder son whose bachelor's degree from an American liberal-arts college is in literature and who recently completed a master's at one of the nation's elite arts schools located in a large "blue" city; he LOLed.

Ken Burns' 6-hour documentary mini-series "Hemingway" debuted on PBS in 2021; I watched all of it. It accurately described the racial attitudes that were reflected in Hemingway's writing and in his life, which didn't at all distract or detract from the series' portrayal of Hemingway.

Expand full comment

I would actually not be *too* sure this is the case. At least the 4chan /lit/ charts (e.g. https://static.wikia.nocookie.net/4chanlit/images/2/27/Top100lit2014.jpg/revision/latest/scale-to-width-down/1000?cb=20160102025035) have tended to be quite heavy on classic (or "modern classic") literature, eschewing SF/fantasy, and 4chan is considered to be a major culture-former among the younger generations, at least from what I've understood.

Expand full comment

4chan has been slowly dying for years and /lit/ isn't one of the bigger boards. I wouldn't take them as a barometer of the younger generations at all.

Expand full comment
Dec 6, 2023·edited Dec 6, 2023

i think Tolkien, if he has become prominent, has only done so due to the generations that grew up on the incredibly popular movies. I used to read SF heavily in the 80s, and fans would have thought of him as one of many delights.

I mean, Anne McCaffrey for example was a huge popular success; there were 24 Pern novels and she wrote a lot of other series; i liked The Ship Who Sang and the novels she wrote in that world. She's not alone though; if you want someone as a literary replacement i think a SF fan would argue Gene Wolfe over Tolkien, or, earlier, Samuel Delany. R.A. Lafferty or Tom Disch or Barry Malzberg as well. A SF fan would say "yeah, Tolkien is great, but i really love..."

i think the issue, though, is that people read less overall and read based on movies.

Expand full comment
Comment deleted
Expand full comment

i think in the old days i'd agree, but now it's too much of a monkey's paw to gamble. Even then, though.

There are some older adaptations that aren't good but are valuable because they're the only ones those books will ever get. The old cartoon The Flight of Dragons is an okay watch, but it's very precious to me because it's actually Gordon R. Dickson's The Dragon and the George, in part. Or the old TV movie "The People", which was based on Zenna Henderson's stories of the People.

i think sometimes a bad adaptation can be a remembrance at least.

Expand full comment

I'm not saying you're wrong about any of this, but I find grand and vague pronouncements about "how the culture has changed" on some issue to be overdone and often unconvincing. And even when they're true at the time, most "permanent cultural shifts" only last for a few years. So can I ask how you quantify claims like "Tolkien has replaced Tolstoy" online? Is this just an impression or measured in some way? Do you mean among media outlets or among random commenters? If the latter, are you sure there's a meaningful difference between "opinions of random people on Twitter" now and "opinions of random people at the bar (or on Usenet)" thirty years ago? That you're not just comparing elite opinions in the past with popular opinions now and saying opinions have become more populist?

Again, not saying you're wrong, just that I find claims like this suspicious.

And as for Hemingway, are you talking more "a few NYT #cancelHemingway articles" or "routine removal of Hemingway from hundreds of school/university curricula"? If the latter, I wouldn't find *this* claim suspicious at all, just incredibly depressing.

Expand full comment

A great deal of classic literature is fantasy or sf. Consider Gulliver's Travels and the Divine Comedy, for example.

LOTR has a tremendous amount about death and loss.

There's more to life and literature than contemplating death.

Expand full comment

See also Peter Beagle. One of his big themes is "Accept death. The acceptance is good for you."

Expand full comment

Ray Bradbury is a modern example. If you expand it to horror, Shirley Jackson is too. Neither seem to be considered only genre fiction.

Expand full comment

A Christmas Carol came to mind for me!

(I'm not sure if Divine Comedy counts, though. I'd have thought fantasy is reserved for things practically no one believes are real.)

Expand full comment

Dante's depiction of the afterlife includes a number of monsters from Greek mythology in addition to the Christian stuff, so I think he'd still qualify as a fantasy writer even by that definition.

Expand full comment

1- Something about your lumping sci-fi and fantasy together in one fist tells me you're an outsider to both; from the inside, the two have vast differences, and readers/fans of one might not be readers/fans of the other.

2- Refusing to read older works entirely because of the supposed moral deficiencies of the author is precisely the kind of narrow-minded, black-and-white, fight-or-flight, simplistic behaviour that reading a lot of sci-fi is supposed to free you from. It's not a priori wrong to say an author was racist, nor is saying that a popular work by said author is an expression of racism. Only censorship as a response to said perceived moral deficiencies is wrong, and I struggle to see how sci-fi or fantasy played a part in this unfortunate turn of events.

3-(a) Are Death and Mortality the main theme of **all** classic literature? Charles Dickens? Jane Austen? Oscar Wilde?

3-(b) Are Death and Mortality not featured prominently in sci-fi? To pick the latest two novels I read, Titanium Noir and Venomous Lumpsucker: one of them is about Death, the other about Extinction. They both, in their own way, satirize the repeated and ultimately ineffective human half-measures against Death, which distract humans from enjoying life.

4- Older storytelling being gradually displaced by newer forms is a tale literally older than writing. Wasn't the Novel itself a radically modern reinvention of storytelling? Didn't it displace older storytelling mediums like the folktale and the stage play? One component of this is indeed fashion and status games: the hot new thing eventually becomes not so hot anymore. But another component is that every art form assumes a cultural context, and at some point the distance between the reader's cultural context and the writer's is simply too vast to be fun to bridge. Eventually all truisms become either too true or too taboo to agree with, all character names become too funny-sounding and hard to remember, and all the in-jokes and subtle subtext become too obscure not to fly right past the heads of most readers.

5- I'm of the opinion that if a man wants to enjoy shit, I will fight to the death for his right to enjoy shit. I don't give a shit if people think sci-fi is "childish" or whatever wrong 1950s cached opinion they hold about the genre that imagined space travel before they were in the womb. Their loss. I'm going to read it and enjoy it anyway. Similarly, I don't think fans of classic literature should really care whether people keep holding them in high regard or not; many great authors weren't held in high regard in their own times. The apex of wisdom is this: Status Games Are For Losers, and their winners are losers. Seek wisdom wherever you find it, and don't forget to enjoy yourself a little in the process.

Expand full comment

Why can't sci-fi and fantasy deal with mortality and death?

I'd argue that, given the past century's advances in average lifespan and post-retirement quality of life, speculative fiction is a more than appropriate avenue for considering it.

My current thought on why speculative literature is currently so esteemed is that everything is changing so fast, our understanding of the world is more enormous than ever, and the settings of classical literature - relatively static worlds limited in scope - no longer resonate.

And also, Orwell's Big Brother is literally real now. Moby Dick seems fantastical to me - just randomly quit my job and get hired as a shiphand when I feel like going to sea?? In this economy? Without union membership?

Granted, some stuff still holds - A Tale of Two Cities probably still holds up.

Expand full comment

> The main theme of classic literature is mortality, death. It's something we all must confront, and it is worthwhile to think about, to meditate upon, to read about.

There are diminishing returns to everything, including contemplating death.

You can also contemplate death from a sci-fi perspective: https://www.youtube.com/watch?v=FMJNta-okRw

Expand full comment

Many sci-fi, fantasy, and other genre fictions also deal with deep themes (and yeah, some are just "cool spaceship battle pew pew"). And some are in the middle.

OTOH, I also don't think classical literature is as consistently deep as people claim - I like some of the classic books I've read, but a lot of them are closer to "pew pew" adventure stories (except set in real life and without the space battles) than people want to admit.

Expand full comment

In reality it's not just the story but the writing. Jane Austen is regarded as a classic because of her writing skill; the stories are rom-coms. Well, rom-coms without the comedy, except a bit of snark.

Expand full comment

On the one hand this is true, and if I jump from reading trashy '80s fantasy to Jane Austen I notice a jump in writing quality.

On the other hand, classics writers don't have categorically better writing - I think e.g. Susanna Clarke has even better writing skill than Jane Austen. And classics often fall short on structural issues that modern writers would learn to avoid (Jane Eyre is well written but has huge stretches of random rambling that you'd only see in webfics today, not in anything actually published somewhere with an editor).

Expand full comment

Mortality/death is cool and important but not by any means the most important topic to contemplate and learn about. Good sci-fi and fantasy appeals to me because it explores many cool and important topics, not just one.

Expand full comment

I've been learning to salsa dance, and I'm definitely not a natural at this, but a big issue I'm having is dancing in time with the clave. Surely someone can dance here, as we're a diverse bunch. Any tips for keeping the salsa beat?

Expand full comment

It might be helpful to pinpoint where the difficulty lies. Is it perceiving and orienting to the rhythm in the first place? Losing track of it as your attention is taken away? Hearing how the rhythm of your steps sits inside the rhythm of the clave? Physically moving your limbs in sync with it? Shifting your weight to where it needs to be ahead of the step you need to take?

Expand full comment

All of them. The last question isn't even a concept I was aware of.

Expand full comment

How long has it been? The reason I ask is that there’s a great variability in people’s rhythm sense and control, and it’s possible you may be on the lower end of it. Which means you just need more time to practice.

If there’s one suggestion, it’s to be “loose”, relax and don’t worry about perfection. I know it is kind of obvious, but it is helpful to try. I still remember my MMA coach reminding me to relax during sparring, and eventually it worked.

Expand full comment

I started taking it seriously this August. It's just that I have seen some make serious progress in that time.

Expand full comment

Oh, one more thing: if you have a good dancer in mind whom you like, try mimicking him. Literally - pretend you're him, adopt his manners and facial expressions. It is weirdly helpful. I had a "role model" like that in the MMA class, and I adopted his utterly relaxed facial expression while sparring. He looked like he wasn't even there - utterly relaxed and unbothered. Mimicking his slack face helped me relax.

Expand full comment

Oh, it’s fine then. Everyone’s different. Some of these fast progressers may just be better-coordinated, some may have music or athletic background, etc. I was like that with MMA - I’m not athletically gifted so I kept watching others get much better while I struggled (music was the opposite). But I stuck with it and slowly got better. I’ll never be UFC material, but that’s an insanely high standard to judge yourself by.

Expand full comment

Tyler Cowen claims that top athletes are cognitive elites, because being a top athlete requires a lot of intelligence - both in the sense of knowing what to do on the field in real time, and in the sense that the training regimen requires passing a bunch of marshmallow tests.

Yet, c'mon, we also know that a lot of top athletes are really dumb. They aren't all Charles Barkley.

My question has to do with the General Intelligence Hypothesis. If human intelligence is really a general thing, with high intelligence in one field bleeding into others, then Tyler is obviously right. But it doesn't seem likely, does it? Why is there a pop culture dichotomy between jocks and nerds? Is the dichotomy false? Why do nerds look so much like nerds and why do jocks look so much like jocks? Is it all a phony social construct?

Expand full comment

Citing Cowen would make it easier to evaluate the claim.

Expand full comment

He has said this in various places. A search turned up this: https://marginalrevolution.com/marginalrevolution/2023/08/in-which-sector-are-the-top-performers-stupidest.html

>One of my core views is that the most successful performers in most (not all) areas are extremely smart and talented. So if you are one of the (let’s say) top fifty global performers in an area, you are likely to be one sharp cookie, even if the form of your intelligence is quite different from that in say academia or the tech world.

> You might think that a sport such as basketball selects for height, and thus its top performers are not all that mentally impressive. But I’ve spent a lot of time consuming the words of Lebron James, Magic Johnson, Michael Jordan, and Kareem Abdul-Jabbar (including a podcast and a dinner with the latter), and I am firmly convinced they are all extremely intelligent.

Expand full comment

Looking at the quote, Cowen's actual claim is somewhat limited, and his examples are even more limited. He is speaking of the 50 top global performers in an area, which is very narrow - much narrower than, e.g., being an NBA player, although NBA players are surely already in the hyper-elite of the population as far as basketball performance goes.

His four examples of top-50 performance, though, are somehow all in the top 5 of this list of the greatest basketball players of all time (https://www.cbssports.com/nba/news/top-15-players-in-nba-history-cbs-sports-ranks-the-greatest-of-all-time-from-west-and-steph-to-lebron-and-mj/).

So they're basically examples of people who are the absolute best in the whole world at their performance (arguably, even better than that).

Furthermore, I suspect Cowen is overstating their intelligence. I doubt that even the people he lists would be considered extremely intelligent. Perhaps the one best known as an intellectual is Abdul-Jabbar, and from what I've seen he's reasonably intelligent and curious/thoughtful, but shows little sign of brilliance or extreme intelligence. You can check out his Substack here: https://kareem.substack.com/ and evaluate him for yourself.

Even if what Cowen says were true, though, it wouldn't be because basketball prowess is particularly g-loaded. Even if g has only a tiny positive impact on basketball prowess, one could still find that the GOATs in that field tend to have relatively high g, since they need every edge over their competitors, including cognitive skill.

Still, an IQ of e.g. 120 might render someone smarter than 99% of professional basketball players, which might give him such an edge over the competition without being a genius. Such a player would likely come off as smarter than IQ 120, given people's expectations.

Expand full comment

I’ve said this before, but Ronnie O‘Sullivan who just won his 8th UK Championship in snooker, is a mathematical genius of sorts though he may not be able to solve an equation.

In general terms, although I'm not a fan of soccer, I occasionally watch the Monday night games on Sky under duress. The analysis by the former players is very articulate, precise, and comprehensive. Meanwhile, when I listen to top politicians, I don't get that.

Expand full comment

> I’ve said this before, but Ronnie O‘Sullivan who just won his 8th UK Championship in snooker, is a mathematical genius of sorts though he may not be able to solve an equation.

Why would that be? Because he's good at estimating angles, momentum, and the effect of two-body collisions? That doesn't make him a "mathematical genius", but a great signal processor. My smartphone is mind-bogglingly efficient and fast at signal processing, but it won't solve the Collatz conjecture anytime soon.

Expand full comment

I have a hypothesis that a lot of late career athletes have minor to severe brain damage, depending on how much potential head impact the sport involves. This suggests that a runner, a high jumper, and an archer might have different outcomes at varying points of their career, re: intelligence.

Expand full comment

IQ may be general, but time is limited, and your skills require both.

It may be true that a baby born with genes for high intelligence and height has the *potential* to become a math whiz or an Olympic-level basketball player. But each of those requires spending a lot of time developing specific skills, and the kid who spends afternoons reading books is probably not going to be great at sports, while the kid who spends afternoons in a gym is probably not going to be great academically.

Once in a while there will be such a kid who does both and excels at both, but that is probably rare. It is not just a question of possibility, but also of preferences and social reinforcement. You can be intellectually able to do something and yet find it boring. Or you can try something first, get good at it, and then find it socially more rewarding to continue doing what you are already good at rather than become a beginner at something else. Or you can simply have friends who have a hobby, so when you are with them, you do that thing instead of other possible things.

You can also have things correlate strongly and yet come apart at the extremes. You can have a group of great basketball players, each with a high IQ, but the one who will be best at basketball is not necessarily the one with the highest IQ among them - it's the one with the best muscles and joints. Even if IQ is useful in general, it is not so useful that 1 extra point would overcome literally everything else. See also: https://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/

> Why is there a pop culture dichotomy between jocks and nerds?

The difference is mostly about preferences and spending their time. Those differences are a result of different traits (e.g. extraversion vs introversion) and random influences (what their friends are doing, what their parents told them to do, etc.). It is not like people are measured for their innate potential and trained to develop their skills accordingly.

Also, to be a jock or a nerd at school you only need to be slightly jock-ier or nerd-ier than the local average. Then it becomes about what you do and who you hang out with.

> Why do nerds look so much like nerds and why do jocks look so much like jocks?

How much time do they spend exercising their muscles vs sitting in a chair reading? How much time do they spend outside vs inside? The body reflects how you treat it for years.

Expand full comment

All good points. I had forgotten about that tails-coming-apart post. It fascinated me when I first read it, but now I'm thinking: "Of course the tails come apart; the sample size drops to 1 at the tail."
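That intuition is easy to check numerically. Here's a quick Monte Carlo sketch (toy numbers of my own choosing, not from the linked post): two traits share a strong common factor, and we ask how often the same individual tops both rankings.

```python
import random

random.seed(0)

N_PEOPLE = 2000   # individuals per simulated population
TRIALS = 50       # independent populations

same_top = 0
for _ in range(TRIALS):
    xs, ys = [], []
    for _ in range(N_PEOPLE):
        g = random.gauss(0, 1)                          # shared "general" factor
        xs.append(0.8 * g + 0.6 * random.gauss(0, 1))   # trait A (e.g. basketball)
        ys.append(0.8 * g + 0.6 * random.gauss(0, 1))   # trait B (e.g. IQ)
    # corr(A, B) = 0.64 here, yet the #1 individual rarely coincides
    if xs.index(max(xs)) == ys.index(max(ys)):
        same_top += 1

print(f"same individual tops both traits in {same_top}/{TRIALS} runs")
```

Even with a correlation well above 0.5, the best individual on one trait is usually not the best on the other - the tails come apart.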

Expand full comment

a) the dichotomy is at least somewhat false (when I was in high school, a lot of the people in national long-distance running competitions were the same ones you'd see at math contests; relatedly, top hedge funds end up employing a surprising number of former Olympians).

b) the general factor of intelligence holds somewhat at the statistical level, but on an individual level you expect significant mean-reversion - even if engineering and people skills are both heavily g-loaded, someone who's best in the world at engineering would probably only be a couple of standard deviations above average in social skills (fairly charismatic, but not the life of the party).

c) you expect this to look weaker at lower levels - olympic athletes are probably pretty smart, but your high school sports team is probably mostly selected on being into sports and willing to spend time training rather than being inherently supertalented.

d) In sports where head injuries are common (including both football and soccer), you expect athletes to become dumb over time just due to concussions, even if they started out smart.

Expand full comment

This is a good answer.

Expand full comment

I want to add to my question - not what I might say, but what Steve Sailer might say. Blacks are overrepresented in American sports yet underrepresented in the field of physics. Does this make sense if the primary component in both is high g?

Expand full comment

I think the answer is: Athletics and maths require different kinds of intelligences. But doesn't that refute the General Intelligence Hypothesis?

Expand full comment

>Athletics and maths require different kinds of intelligences. But doesn't that refute the General Intelligence Hypothesis?

Only if you choose to call the skills required by athletics "intelligence".

Expand full comment

Yes, and this is the question that interests me. Isn't AI used in robotics and self-driving cars and other things which attempt to manipulate the physical world, as say an athlete might? Is that type of AI not considered part of general intelligence when we talk about AGI? I'm getting the sense that it is not. Which makes me think that General Intelligence just has to do with verbal and logical intelligence.

Expand full comment

It is "intelligence" if you think of it as "cognitive power": the skill of "moving yourself well under given constraints" can be optimized by throwing better thinking at it - in the context of robots, that is a real research problem - but it's not clear that IQ tests test for the skills necessary for doing sports, even though for humans those count as "general" intelligence. And even if it turns out that doing well at sports is more a matter of, say, your nervous system than your brain, that just means other humans can't easily learn it, NOT that it would be unlearnable for all intelligences.

Aka

"General Intelligence" describes different things for humans than it does for AGI, because with AGI we're not just birthing another human.

Expand full comment

Which undermines the notion that we can create an ASI, because that is just a souped-up AGI. Maybe we can create an Artificial Human Intelligence by training it on all the things humans can do, but if intelligence isn't general, there's nowhere else to go except maybe other animals.

Expand full comment

The easy answer, cribbing off TLP (The Last Psychiatrist), is that humans have a strong tendency toward psychic inertia: "if I do not change I will not die."

But when one is confronted with evidence of one’s inferiority, say, in physical ability, one is thrown off balance. The mind wants to come up with narratives to justify the current state of affairs and protect against the impetus to change. Natural candidates for these narratives for the nerds-in-becoming include “well, I may not be X but I am more righteous than those Xers” or “I’m smarter and that’s what matters”. Groups form to reinforce these narratives.

So, in persona as TLP, we would say the answer is narcissism. As it usually turns out to be.

Expand full comment

Of course, this whole thing never happens for someone who never feels that inferiority at all because he or she is smart and strong. But the mechanism works for almost anyone who feels left out of any group in any way and can be adapted endlessly. I’m open to hear criticism of it but the general idea is one of the good insights from the TLP blog.

Expand full comment

A neural net can be superhuman at classifying dog breeds and not be anywhere close to generally intelligent. Given a task of enough complexity, like playing a sport at the highest level, the size of the neural net required might be quite large. Does a large enough neural net naturally start showing hints of general intelligence, or does that only happen for LLMs? My guess is that yes, as a game reaches a certain level of complexity, the abstractions learned by the DNN will start to be generally applicable to many cognitive tasks. But my guess is also that LLMs might converge faster. What I am saying is that there is probably a lot of neural complexity involved in being a top athlete, but it probably isn't as transferable to other domains as a same-size network applied to learning physics would be.

Expand full comment

https://x.com/RepStefanik/status/1732138663608271149?s=20. Am I missing something? Why didn't they just say "Yes"? If they were asked about calling for the genocide of black people, would they have responded the same way?

Expand full comment

If they say "yes" then they are implicitly committing to taking disciplinary action that they're too scared to go through with.

Remember Charlie Hebdo.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Important qualifier to "doxxing is bad": I think that if someone is /already/ a public figure then it is legitimate to connect other anonymous public personas to them, at least if those personas are doing anything in any way related to whatever they're famous for.

I absolutely don't want politicians to be able to comment on politics anonymously, or CEOs to be able to talk anonymously about anything related to their industry.

"Famous pseudonym is real person X" is not usually legitimate journalism if X isn't a name that will mean anything else to the reader, and hence the only value will be to harass them, but "Famous person X has been doing ... under an anonymous pseudonym" often is.

(I have no idea if this applies to the "Beff Jezos" case Scott is talking about above - I intentionally haven't looked up the details before writing this, because I don't want my views on the general principle biased by one case).


A non-political example that comes to my mind is the famous writer Stephen King publishing a few books under the pen name Richard Bachman.

Among other reasons, he made a new pseudonym as an experiment to find out how much of his recent books' popularity and sales was because of their quality and how much because of his accumulated fame as an author. So he wrote books under a new name and published them with no marketing. Unfortunately, the experiment was cut short by doxing (Wikipedia does not provide details).

In this specific case, I think the doxing was bad.


James Tiptree was a famous science fiction writer who was never seen, and was eventually doxxed, just as a matter of curiosity. "He" turned out to be Alice Sheldon.

She was very upset at the loss of privacy, and in my opinion, the later fiction wasn't as good.


Rowling did the same thing, successfully.

Dec 6, 2023·edited Dec 6, 2023

The wiki page for Richard Bachman has citations to the guy who broke the story.

https://www.washingtonpost.com/archive/lifestyle/1985/04/09/steven-king-shining-through/eaf662da-e9eb-4aba-9eb9-217826684ab6/

Basically, Thinner read too much like Stephen King. So, mission complete, I suppose; it's not just the name - his writing is unique enough to recognize in the wild.


There's probably a weird edge case where someone has two famous anonymous accounts but isn't themselves a public figure, where you could argue that connecting the two public faces is legitimate but providing the real identity is not, but I doubt this ever comes up.


There might be a difference between theory of mind and theory of emotion.

People generally are fairly good at believing that other people know different facts -- the classic false-belief test: even if you know what's in the box, do you understand that people who haven't looked in the box don't know what you know?

People seem to be generally bad at having a gut understanding that other people have different preferences from one's own.


Sounds correct.

A lot of advice consists of: "stop doing the things you like, and start doing the things *I* like".


Understanding one's own preferences is so complex, and what we prefer is constantly changing, even within the span of a few minutes. And we tend to rely on others to help us figure out what we really want, so we are used to commenting on what we think others should do. With theory of mind I get immediate feedback when I mansplain something to somebody who already knows it, but when it comes to preferences I can't really test my hypothesis about things like "she would really be happier (ie prefer) if she drank less."


Do you know any reviews of songs? Not of full albums, just single songs / pieces.


Not quite reviews, but there's the Song Exploder podcast.

Dec 6, 2023·edited Dec 6, 2023

There’s a lot of analysis of songs on YouTube, e.g. Rick Beato. A couple of his recent single-song videos: https://m.youtube.com/watch?v=7PAkVIFUZPQ https://m.youtube.com/watch?v=QMfchuNAvH4

Here’s a good one: Charles Cornell reviews a new Jacob Collier song that I’ll probably still be listening to in fifty years if I live that long. https://m.youtube.com/watch?v=ZdstfRN6cQs (Listen to the song before this review, you don’t want to miss the chance to hear it for the first time. )


Not sure if Todd in the Shadows counts. His One-Hit Wonderland series is more of a career retrospective with some song reviews in the middle. https://www.youtube.com/watch?v=cqE9B1_9TpQ


I have been rereading the sequences and I am not super impressed with the methodology of the paper Yud's praising here: https://www.lesswrong.com/posts/J4vdsSKB7LzAvaAMB/an-especially-elegant-evpsych-experiment

I came up with an alternative hypothesis that might explain the results. So I am trying my own version of the experiment.

If you wouldn't mind taking this 5 min Google Forms survey, I will be able to justify looking into this more: https://forms.gle/4kR8EVw7f3RJaci86

Obviously, I might be shooting my results in the foot, since a lot of people have already seen Yud's piece: hindsight bias, etc.


I answered your survey first before reading the linked Yud article, and I don't think your question formulation captures expected grief.

The way I look at life insurance is the same way I look at most other insurance: it's a hedge against a financial risk, allowing you to turn a small chance of a catastrophic loss into a predictable regular expense. And since it's priced so the insurance company makes money on average, you shouldn't hedge more risk than you have, and moreover should consider self-insuring losses that you can afford to eat.

For life insurance, the financial loss you're hedging is 1) funeral costs and other final expenses, and 2) loss of the insured person's income and other material contribution to the household's finances. In most cases in rich countries, minor children are a net loss in terms of 2, bringing in little or no income and probably not carrying the rest of the family in terms of chores, so you should probably only buy life insurance for your children if you need it to cover final expenses. And the correct amount of insurance to carry would be enough to cover a funeral minus what you can cover out of pocket without undue hardship. Losing a child would be emotionally devastating, yes, but grief is independent of financial considerations here.


I agree that most people don't/won't translate grief into cash (especially when money affects utility on the log scale {citation needed}).

I think the survey could still stand if people treated life insurance the way that EAs would. If a poor person in sub-Saharan Africa can have their life saved for ~$5K, then EAs should buy life insurance policies for poor sub-Saharan Africans whenever they cost less than $5K. We aren't trying to save utility in the same way that EAs do (I would value my children at way more than $5K), but I think the framework still stands?

I am confused about whether it really is a zero-sum game with insurance companies, even if they have all the same information as me (i.e., they assign the same probability to the insured event).

This article seems to argue that if we both value money on the log scale and use the Kelly criterion, then both the insurer and the insured could benefit from the transaction (given the right price): https://blog.paulhankin.net/kellycriterion/

Though I also believe that many insurance policies are currently priced to benefit only the insurer.
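To sanity-check the positive-sum claim with a toy calculation (entirely made-up numbers on my part, not taken from the linked article): a buyer with log utility of wealth can prefer paying a premium that is *above* the expected loss, while the insurer still profits in expectation.

```python
import math

# Hypothetical numbers: a buyer with log utility and wealth 100
# faces a 5% chance of losing 90.
wealth, loss, p = 100.0, 90.0, 0.05
premium = 6.0  # above the expected loss of p * loss = 4.5

# Buyer's expected log utility, uninsured vs. fully insured
uninsured = (1 - p) * math.log(wealth) + p * math.log(wealth - loss)
insured = math.log(wealth - premium)

# Insurer's expected profit per policy
insurer_profit = premium - p * loss

print(f"uninsured: {uninsured:.4f}, insured: {insured:.4f}")
print(f"buyer prefers insuring: {insured > uninsured}")
print(f"insurer expected profit: {insurer_profit}")
```

With these numbers the buyer's expected log utility is higher when insured (about 4.54 vs 4.49) even though the premium exceeds the expected loss, so both sides come out ahead; push the premium high enough and the buyer's preference flips back.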


Strong agree. I filled out the survey with $0 for all categories. I didn't have children so that they could support me financially. Getting money from their death would make me feel terrible in a way I can't describe. I would burn any check I got.


Yep, this is the rational approach to life insurance.

Most "financial advisors" will of course try to convince you otherwise; basically that you should maximize life insurance (and thus their bonus for making you buy it) for everyone.

Emotionally, not having life insurance may feel like "tempting fate"; as if you are telling the gods that you are pretty sure that X will not die anytime soon, and therefore the gods will obviously kill X at the next opportunity, just to teach you a lesson on humility. A possible financial impact is one thing, but if you are superstitious (most people are, to some degree, especially when you start talking about a possible death of their loved ones), it may feel like refusing to buy life insurance somehow *increases the probability* of someone's death.


I agree that many financial advisors play on ignorance and (comparative) irrationality. I am still confused about whether all insurance coverage is zero-sum; this article seems to argue that if we are using the Kelly criterion for our money, there are insurance policies that benefit both parties: https://blog.paulhankin.net/kellycriterion/

Otherwise I don't know how I would explain why companies that understand expected value calculations would ever make institutional decisions to insure projects.


Yeah, insurance doesn't necessarily have to be zero-sum.

But average people buying life insurance (the original topic) usually do not know how to calculate the optimal amount, and the people selling the insurance (or recommending someone else's insurance, and getting paid for successful recommendations) have an obvious incentive to sell as much as possible.


I've honestly never encountered that framing. I've only seen people apply the reasoning of the poster above.


I thought that every life insurance salesman had a few stories ready of people who merely procrastinated on buying life insurance and died the next week. Is it just my luck to meet them?


I was a bit obsessed with checking this paper around 10 years ago, to the point where I bought and read Nancy Howell's _Demography of the Dobe !Kung_, the book from which the raw data on demography was taken for the paper. I think I've lost the writeup where I argued why the paper was terrible, but I see that this analysis, linked from the comments, seems convincing and pulls no punches: https://scienceisshiny.wordpress.com/2020/09/11/everything-wrong-with-the-paper-human-grief-is-its-intensity-related-to-the-reproductive-value-of-the-deceased/. I believe I reached about the same conclusion w.r.t. the correlation - that what we really see here is that "the RV and grief timeseries both have a rising, a level and a falling bit", and once you postulate that, any reasonable RV series subject to that will correlate really high with the grief timeseries - without having the stats knowledge to explain why they shouldn't have done what they did.

But I do very much recommend the Howell book, you get to see how real messy science is made in an environment of imperfect - oh so much imperfect - information. For example, how do you actually build an age-based population model or RV-by-age curve in a population which is not tracked by government statistics and the people have no idea what their ages are? I wrote a post about how Howell did it: https://avva.livejournal.com/2412457.html (it's in Russian, but just ChatGPT it if you're interested).


Most people who criticize Yudkowsky do so via low-content drive-bys, so this is already much better.

I suspect that if you got into contact with Eliezer, he even might signal boost it himself.

Unfortunately, I can't answer the survey since all answers I **want** to give would be the explicit result of a calculation that would prove EY's point (the perils of knowing AMF QALY estimates for children....)


New here and not sure if this is a good topic for this forum, but something I've always wanted to discuss, so why not.

Suppose I eat one cow's worth of beef per year. If I stop eating beef, how many fewer cows would be slaughtered?

I think the EV is one, right? Something like a 99.9% chance that this would save zero cows, and 0.1% chance it would save 1000 cows.

I'm assuming that there must be some feedback loop. Everywhere along the pipeline from cattle ranch to plate, each entity has to decide whether to order more burgers based on demand, or open another cattle ranch based on demand, etc. There's an incredibly tiny chance that one of those demand numbers is *right* on the edge, and my choice will tip it, creating a very large effect.

Am I in the right ballpark?
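The threshold intuition can be checked with a toy simulation (my own made-up model and numbers): suppose production is batched in steps of 1000 cows, so your one cow of demand tips a batch boundary only about 1 time in 1000 - but then it moves production by a full 1000 cows. The expectation still comes out to one cow.

```python
import random

random.seed(0)
batch = 1000  # hypothetical: production decisions come in 1000-cow steps

def production(demand):
    """Ranchers round total demand up to whole batches (ceiling division)."""
    return -(-demand // batch) * batch

trials = 200_000
total_saved = 0
for _ in range(trials):
    others = random.randrange(1_000_000)  # everyone else's demand, in cows
    # cows produced with vs. without your one cow's worth of demand
    total_saved += production(others + 1) - production(others)

avg_saved = total_saved / trials
print(avg_saved)  # hovers around 1.0: a ~1/1000 chance of a 1000-cow effect
```

The step function changes the variance enormously (almost always zero, very occasionally 1000), but not the expectation, which is what the EV reasoning relies on.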


Yep, a tiny chance of a large effect, and a large chance of zero effect, on average what you would expect. (Plus all the second-order effects.)


I think we should also consider your social impact.

Do you share meals with someone regularly? If so, you've displaced beef from someone else's plate as well (the logistics of cooking 2 separate meals just to accommodate a dietary preference are often too much, so in practice, in a lot of long-term partnerships and families, 1 person becoming a vegetarian often has a knock-on effect on everyone else's diet, since most people don't order out that much - to be clear, everyone else still eats meat, but often less than the counterfactual).

Do you often go to social gatherings where you share a meal? Are your friends willing to accommodate you? In those situations, your more restrictive dietary preference is going to displace some beef off the group choice too. When having meals with my vegetarian friend, we typically don't opt for a steakhouse, and when we get pizza, it'll be vegetarian (or something like 1 or 2 out of the 3 we order will be vegetarian). Having a vegetarian option on the table displaces some of the meat, unless it's one of those 1 plate per person kind of situations (burgers, sandwiches). But simply influencing choice of venue is probably important.

I guess it's possible that you are totally isolated and will never influence someone else's dietary decisions ever, but that seems unlikely.

Will it be massive? Hard to say. Are you attending a large family dinner every week? Do you regularly cook for 2 - 5 people? How often do you share meals with people? On the low end it might be like 1.2 cows. On the high end it might be like 5 - 10 cows. It's still not necessarily going to massively thin the herds, but it is an effect.

A curious other effect could happen if you host the dinners (i.e. dictate the contents of the meal). That way, you get to displace meat off someone else's plate. None of us has infinite capacity for food, so any share you can win is yours to keep. This implies that effective meat displacement involves going out and feeding other people, which has much more potential effect than just feeding yourself. Doing stuff like telling your friends you're bringing samosas (deterring someone else from bringing, idk, meat pies).

(Feeding, not converting. Lots of people are averse to lifestyle conversion efforts, but few people will turn down free food unless it's bad, and displacing meat calories helps).


Moral of the story: invite 6 friends to a vegetarian meal once per week and you can eat steaks guilt free for the next six days.


Suppose you're a New World slave owner in the 18th century, and you buy one slave per year on average. If you stopped buying new slaves, how many fewer slaves would be brought from Africa to the New World ?


Depends on what you eat instead, and what the people who would otherwise have eaten the food you actually eat end up eating, ad infinitum.

Minimal example: imagine you & I both regularly eat at the same restaurant. At current prices (with you eating beef), I just barely prefer the chicken to a steak; once you start ordering chicken instead, the price of the chicken goes up and that of the steak goes down, so I switch to steak. The same amount of the same food is being eaten as before, we've just swapped which.


I think it's probably more like 1.001, because some meat goes to waste - I guess that if people eat n cows' worth of beef, the amount of beef produced is well-approximated by (1+epsilon)n for some small positive epsilon governing efficiency.

There's also going to be an economic effect from efficiencies of scale and supply and demand - whether you eat beef or not probably has a marginal second-order effect on the price of beef for other people, but I'm not sure what the sign is.


Cattle ranches aren't stomped out of the ground and then converted into burgers in one indivisible step. The decision is less "do we build one new ranch or not", but rather "do we slaughter 75 or 76 cows today" and "do we inseminate 450 or 451 cows this week".

So yes, there are still discrete step functions at work - not eating meat for one day won't save 0.0075 cows - but they're more finely grained than "1000 cows are killed or not".


Even in the presence of step functions, expected value works that way when you don't know where in the step you are.

Example: Trains come every ten minutes, I don't know when the next one will come. I can walk to the train station, or run, getting there two minutes faster. What is the effect of this on the expected time I reach my destination? Running will get me there two minutes faster on average. Similar reasoning works for speeding, even if you might get stopped by a stoplight and lose all the time you gained.
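A quick simulation of the train example (with my own assumed numbers: 10-minute headway, 12 minutes walking, 10 minutes running) shows the average saving equals the full two minutes, even though on any single day you either catch the same train or jump a whole headway.

```python
import random

random.seed(1)
headway = 10.0          # assumed: a train departs every 10 minutes
walk, run = 12.0, 10.0  # assumed travel times to the station

def departure_after(arrival, phase):
    """First departure at phase + k*headway that is >= arrival."""
    k = -(-(arrival - phase) // headway)  # ceiling division on floats
    return phase + k * headway

trials = 200_000
saved = 0.0
for _ in range(trials):
    phase = random.uniform(0.0, headway)  # unknown timetable offset
    saved += departure_after(walk, phase) - departure_after(run, phase)

avg_saved = saved / trials
print(avg_saved)  # close to 2.0 minutes
```

On 80% of draws both speeds catch the same train and running saves nothing; on the other 20% running catches a train a full ten minutes earlier, and 0.2 × 10 = 2 minutes, exactly the time saved on foot.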

Comment deleted

You probably had a small effect, such that if you were alone it probably wasn't enough to change the menu, but as part of a group they might decide to add more options.


Can anyone more knowledgeable about the Jezos affair comment on what exactly the schtick is with the crypto-bro-esque Twitter postings?

The fluff regarding his startup is also absolutely insufferable, yet the founders seem, in theory, to be technically competent people (I would also argue that working on TF Quantum is a completely misguided effort with no real use case at all in the near term).


A datum for the "are we past Peak Woke?" discussion. NIH, the National Institutes of Health, proposes to change its mission statement to remove "reduce disability" as a goal. This is in the name of disability equity and inclusivity, as proposed by a committee for diversity.

Instead of "To seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability."

the new proposed mission statement is "To seek fundamental knowledge about the nature and behavior of living systems and to apply that knowledge to optimize health and prevent or reduce illness for all people."

https://diversity.nih.gov/blog/2023-10-11-share-your-thoughts-nihs-mission-statement

The period for public comment ended recently. I guess the decision will come in the next few months.


If I was disabled, the current debates surrounding the Canadian MAID euthanasia system might make me a bit tetchy about unspecified goals to "reduce disability".

Actually, what interests me more is the removal of the goal to "lengthen life", especially considering the major drops in life expectancy recently.


https://www.smbc-comics.com/

He loses. It's ugly, but it isn't a Christmas sweater due to being red, white and black with no green.

I think this is funny, but possibly in too poor taste for my facebook feed.


Santa's suit is also red, white and black with no green.


Santa is hereby kicked out of Christmas.


Does science know whether melatonin sometimes causes nightmares?


Anecdotally, I've never noticed melatonin causing me to remember nightmares, but exercise sure does.


Same. I had a lot of nightmares when I was younger, I've been taking melatonin for some time and have had a very low number of them.


Eating late definitely causes me vivid dreams at least.


Depends which science you ask. I did a quick survey of reputable tertiary sources I could find with a quick googling and came up with "yes" (Mayo Clinic), "no, or at least not enough to be worth mentioning" (NIH, NHS, and WebMD), and "maybe/probably" (Cleveland Clinic). The Cleveland Clinic answer goes into a bit more detail about what we do know: that melatonin does increase REM sleep, plus some preliminary evidence that melatonin's metabolites may improve memory and thus make us remember dreams more intensely, but the overall effect of melatonin on dreams doesn't appear to have been rigorously studied.

https://health.clevelandclinic.org/does-melatonin-cause-bad-dreams


Before reading Scott's article on melatonin (wherein he recommended 0.3 mg to be the optimum dose), I tried one of the standard commercial doses - likely 5 mg.

I experienced an erratic heartbeat and, if not full-blown nightmares, very vivid and disturbing dreams.


Why does Substack show people's likes (on their profile) but not their comments? Is it a programming thing? Or an ethical thing, making it harder to stalk someone, and/or get them cancelled?

Since likes are basically votes, I'd have thought they're more naturally deserving of secrecy than comments (which, to continue the analogy, are more like campaign speeches or media endorsements). Both secret, both displayed, and comments displayed but likes secret would all make sense, but I find this combination odd.


I once had a dream in which I opened Substack and, in the corner next to my profile picture, saw a lot of information about me inferred by some algorithm from my comments, with confidence estimates next to each piece of information - plenty of demographic data and some psychological traits, all inferred correctly albeit with different confidence.

Maybe I'm not the only one who's had this nightmare.


I, on the other hand, have been continually disappointed by how *BAD* most FAAMG inferences are. With the literally tens of thousands of PhDs and data scientists they have working on your digital cookie and behavior trail, they should know you better than you know yourself. As in, you go to a new part of downtown for a meeting, and they should be able to predict with high accuracy which lunch place you'll end up going to "serendipitously" and with no planning aforethought.

If FB or GOOG did dating apps and you were looking for a LTR, they should literally be able to instantly pick a partner with much higher chance of LTR success than both of you could yourselves, swiping for months. All the information is there, with certainty. An ASI would have no problem doing any of these things, and I'd even bet GPT-5 or 6 could do this with high fidelity.

And yet, with the collective brainpower of tens of thousands of PhDs working around the clock, FB and GOOG routinely fail to show me ANY advertisements that are even tangentially related to anything I care about and would buy. And I'm a comparatively heavy spender in relation to the USA median - I have some pretty expensive hobbies, and for the non-expensive hobbies, I'm more than willing to throw down hundreds or thousands on a whim. Do FB or GOOG, the literal worldwide online advertisement duopoly, tap into any of that? Not at all.

I wonder if this is an ethics thing, because like most Linux / tech-savvy folk, I use uBlock Origin and uMatrix and Ghostery and things like that. I opt out of profiling and cookies where I can. But I know the data is still there, with certainty. Unblockable pixel trackers, using Chrome browser, using Google search, proximity and network analysis, Google (and FB via proximity and network analysis and pixel trackers and deals with every top 1k website) definitely HAS the info, even with the privacy measures I take. They just don't use it, and I wonder if it's because they have ethically decided that I seem to be trying to avoid profiling with the various browser add-ons.

But then we're talking about Goog and FB - making a non-regulatory decision to make less money for the sake of "ethics??" It is to laugh. So, a bit of a mystery to me, why all these data harvesters suck so badly at targeting with their inferences, when the data is certainly there to be inferred.


On the plus side, in the iOS app today I clicked a notification for a comment and it actually took me to the comment instead of the top of the whole thread, so maybe there is some improvement going on.


This seems insane to me as well.

Dec 5, 2023·edited Dec 5, 2023

Scott, you asked for other suggestions on ways to protest Forbes' doxxing of somebody: How about writing EAT SHIT AND DIE on a brick and throwing it through their window? A bit too crass?


That seems about the right level of crass, actually. It conveys contempt without implying future violence. As opposed to tossing a brick that said, for example, "by any means necessary", or simply posting the address online with no commentary whatsoever. Those wouldn't be crass at all.


Possibly deemed offensive to coprophiles and scat enthusiasts, kink-shaming is bad no-no thing.


I expect there’s a substantial coprophile presence at Forbes. In fact you’ve now got me worried that brick will be mistaken for a generic Happy Holidays message. Thanks a shit.


Wouldn't work, they've got those industrial windows.


I really hope you're joking.


On the subject of predictive processing in the human brain, this has probably been said before, but it only hit me recently that nocturnal dreams are likely just what you get when predictive processing runs while all the external stimuli are off.


Is this just the activation-synthesis hypothesis but in contemporary jargon?


Can't say because I don't know your old jargon.


I think there’s a lot to that idea. I think the way dreams are shaped is that a bunch of stuff from the day is getting dumped and sorted through and tagged and stored in memory, but the predictive processing part of the mind keeps getting glimpses of the stuff and trying to work with it in the usual way, and that’s what gives dreams a certain story-like quality. I could say more but I’m sitting up in bed and very sleepy. So I’m off to dream my brains out.

Dec 5, 2023·edited Dec 5, 2023

Dreams seem too random and incoherent to be explained by just predictive processing.

My favored theory is that this is a biological analog of the practice of randomly shuffling the data set while training neural nets in order to prevent overfitting.


It's not completely random. The dreams often include something that happened recently, or something traumatizing. So perhaps random weighted by some combination of recency and/or emotional impact.

Dec 5, 2023·edited Dec 5, 2023

I recently stumbled on an interesting factionalization within antisemitic circles. One group hates the Jews but thinks they're smart and accomplished, the other hates the Jews but thinks they're credit-stealing charlatans.

Watch the fun ensue when they meet https://twitter.com/MatthewParrott/status/1730671822323282417


Can't they imagine them to be smart, accomplished, and unethical?


That's the first group. The second group is more old school nazi purists who think Aryans are the only smart/capable race.


> think Aryans are the only smart/capable race.

How did that work out for Adolf? One genius Aryan physicist vs a boatload of genius Jews:

"According to a May 1945 roster, Jews made up about two-thirds of the leadership in the Manhattan Project's Theoretical Division (T-Division) — the group tasked with calculating critical mass and modeling implosions — which is still operating today as the only division with an uninterrupted history since Project Y."


They don't think the Jews did anything. They think that all supposed Jewish accomplishments are really nefarious credit-stealing from good hardworking gentiles.


My friend says our knowledge of physics will never reach an end because physics is the study of the laws that govern the physical universe, but we can only see a small fraction of the universe, so there's always a chance that the laws as we understand them might not apply to parts of the universe we can't see.

For example, thanks to the limited speed of light, we can't see objects that are more than 46.1 billion light years away. If you were an astronomer watching the very edge of our visible bubble of the universe, it's always possible that suddenly, a new part of the universe could emerge into your view where gravity obviously worked in reverse. The possibility of such a thing occurring means physics can never reach its end.

Is my friend right?


Note that point-of-view invariance is a powerful principle in physics, and it implies universality.


Why yes, yes of course your friend is right, he reinvented Hume's Problem of Induction.

Nobody reinventing Hume is ever wrong.


Going to say, no. Physics will never reach an end, not because there is always something new to discover, but because there is always someone trying to discover something new. Even if everything is perfectly explained, physics will continue, as people try ever more novel attempts to crack the consensus.


In the sense that physics describes the way nature fundamentally works around us, it could conceivably reach an end at some point, where we have discovered an ultimate underlying theory and explained all the constants and terms involved. Whether this is possible or not is unknown, and I would say it is more a question of philosophy than physics.

Your friend's argument is a bit different, since they are saying we cannot see everything, so we cannot know the same laws hold everywhere. This is akin to some theories that say multiple universes exist, within which the laws of physics are different. Many people will tell you that such ideas are not really physics - since for it to be physics, we need to be able to conduct an experiment to prove or disprove the idea. As we have no ability to know what lies beyond the visible universe, any speculation about different laws of physics there is not physics but rather religion or philosophy (the same holds for other regions we cannot access, like the time before the Big Bang or other universes).

I would point out, as well, that for all we know the laws of physics could completely change tomorrow. That could always be true, and so by the same argument you can say physics will never be complete until we can definitively rule out that the laws will change at some arbitrary point in space or time.


I think so far the universe seems homogeneous at a very large scale. Under that assumption, the parts we do not see follow the same laws as the parts we see. Perhaps one day we will find a reason why it is so.

> a new part of the universe could emerge into your view where gravity obviously worked in reverse.

I guess this is the difference between philosophy and science. Such a thing is very unlikely to happen, to the degree that no one reasonably expects it to happen, but I wouldn't want to spend the rest of my life playing verbal games against the philosophers, which means that philosophers win this debate.

Dec 5, 2023·edited Dec 5, 2023

Physics “never reaching its end” is awkwardly phrased, but essentially correct. However, his reasoning is naively constrained, and wholly unneeded. The simplest way to understand this is that we have no way of knowing if/when we reach the “end of physics”. No one left a marker there for us to declare a finish line.


By "the end" I mean a point where every observable phenomenon can be perfectly explained by the laws of physics as we know them, and where none of the laws contradict each other. In such a condition, the behavior of every subatomic particle, black hole, and distant galaxy could be totally explained by physics calculations.


I think this is the crux of the problem though. Back in the late 1800s there was a strong consensus that "physics was over"; things were well explained. Yet here we are. I suspect the future will be no different: as soon as we get complacent with our understanding of the universe, an Einstein will show up with a paper opening a new frontier.


Sabine Hossenfelder disagrees:

https://www.youtube.com/watch?v=KW4yBSV4U38

She does agree with what you say, but argues that the remaining problems in physics might be too difficult for humanity to solve -- at least, not without some kind of a radical increase in our collective intelligence, which would be technologically impossible to achieve without first solving those very same problems in physics.


> which would be technologically impossible to achieve without first solving those very same problems in physics.

This is the part where she seems to just be completely wrong. AI is advancing quickly without requiring new physics.


This is just me talking, but I don't think that AI is going to be some kind of a magic bullet. AI is great if you already know most of the answer and want to save time (to be fair, a lot of time!) on calculations; it's not so great if you want to develop entirely new physics (which is what most of the outstanding problems in physics are about, AFAIK).

Dec 5, 2023·edited Dec 5, 2023

Sabine makes the error* (standard among practitioners of high energy physics, broadly defined), of assuming that her subfield is all of physics. I refer you to https://www.datasecretslox.com/index.php/topic,3007.msg91383.html#msg91383 for a broader perspective.

Many subfields of physics have ended already (see e.g. electromagnetism - pretty much a solved problem now, or at least handed off to the engineers). And high energy physics might be drawing to a different kind of end. But there are plenty of others which are nowhere near ending.

*Haven't actually watched the video. Am responding to what I think she would have said (based on what people from her background usually say).


I think it might be helpful to watch her video, perhaps?

In general, unlike the claim you're arguing against in your linked post (*), she is not claiming that all available problems in physics have been solved or are close to a solution; rather, she's claiming that the remaining problems could be too difficult for humans to ever solve, and will thus remain unsolved forever.

(*) I have not read your linked post, I'm just responding to what people on your side of the debate usually say. :-)

Dec 5, 2023·edited Dec 5, 2023

>> that the remaining problems could be too difficult for humans to ever solve, and will thus remain unsolved forever.

This seems overwhelmingly unlikely to be true, unless you are defining physics = high energy physics, which is what people with Sabine's background usually do. In that case it might be true. But that's precisely the definition I am objecting to.

I doubt you have ever heard anyone on 'my side of the debate' (that being the side that 'there is more to physics than HEP'). The high energy physicists tend to monopolize the public conversation. Unless you are thinking of prior discussion with me. Or unless you tend to hang out with practicing physicists.

ETA: If you can link me to a text transcript of Sabine's, I will read it.


Sorry, I don't have a transcript, but I sympathize with you -- I also prefer reading to watching.


IIUC this study claims 7.18 / 1000 = 1 / 139 boys born in the USA recently have developed profound autism. Any good reasons to doubt this?

https://www.researchgate.net/publication/370128310_The_Prevalence_and_Characteristics_of_Children_With_Profound_Autism_15_Sites_United_States_2000-2016

Dec 5, 2023·edited Dec 5, 2023

Just read abstract and skimmed study. Study seems OK, but there are many good reasons to disagree with your takeaway.

Study subjects were a bunch of children who had been diagnosed with autism between 2000 and 2016, and study looked at records of their condition at age 8. Point of study was to classify their autism as profound or not profound, using a new, more precise definition of profound (nonverbal, IQ less than 50 etc.), and point of doing that was to give us baseline data to be used going forward in keeping track of changes in frequency of profound and non-profound autism.

So they found that about a quarter of autistic kids qualified as profoundly autistic. This held for both males and females. However, there were about 4x as many autistic males in the studied group, and there was no change over time in the percent of autistic kids who are male. It has been known for a long time that males are 4x as likely as females to be diagnosed with autism.

As for the figure you quote, 7.18 males per thousand with profound autism: yes, that figure is correct. But there's nothing recent about that number. It covers all the kids in the study group, who turned age 8 between 2000 and 2016 - so the youngest kids studied were age 8 in 2016 and are about 15 years old now, and the oldest are around 31. So in the group as a whole, now aged roughly 15-31, 7.18/1000 males and 1.88/1000 females (about 1/4 as many as males) were profoundly autistic according to the new definition of profound autism.

The overall prevalence of autism has been slowly going up between 2000 and the present, for both males and females, and for both profound and non-profound autism. I'm not sure whether anybody knows whether that's because more autistic kids are being born or because our definition of autism has become looser.


I seem to have misplaced my copy of the book about late-talking children (https://www.amazon.com/gp/product/0262027798/), but I recall it talking at length about how 1) autism is not a subtle disorder, but late-talking kids now routinely get labelled autistic; 2) how important it is not to let a normal late-talking kid get treated as if he was autistic.

Personal anecdote: a well-meaning administrator at our school district suggested that we put on record our late-talking kids as autistic in order to get extra help. When we objected that everyone told us that they did not look in the least autistic, she said that it's OK because, maybe, they are not autistic by the medical definition, but there are different definitions.

I notice that the study says "records from medical, education, and service providers". I wouldn't be at all surprised if there were quite a few entries where a kid was labelled autistic due to being late at talking, and then an unedited IEP propagated through the system for years and years long after the kid started talking and exited speech therapy. (The last copies of my kids' IEPs that I saw had information in them that's now 2 years old and has very little to do with the current situation.)


From the study: We considered children to be nonverbal or minimally verbal if any of the following were identified in the records: (1) most recent evaluation at ≥48 months of age describing a child as nonverbal (median [IQR], 79 [65-93] months) or child determined to be nonverbal (no spontaneous words or word approximations) by clinician record review, (2) language classified primarily as echolalia or jargon by clinician review, or (3) being administered an Autism Diagnostic Observation Scale Module 1 (a gold standard observational measure appropriate for nonverbal or minimally verbal children) at age ≥48 months (median [IQR], 60 [53-70] months).

So if it's mostly based on children 4 years or older, that should exclude most late talkers, right?


OK, from looking at this carefully, this is even worse than I thought. They say: "We categorized children as having profound autism if they were either nonverbal or minimally verbal or had an IQ <50." So that automatically labels late talkers, most of whom grow up to be completely normal, as profoundly autistic, right? I think this is the point where you should just stop taking them seriously at all.

When I was in high school, I was shocked by the following observation. Suppose there's a test for a disease that's carried by 1% of the population. The test has 10% false positives. You take 101 people and give them the test. 1 of them correctly tests positive, and 10 more get false positives. So you have 11 people who tested positive, and only one of them actually has the disease! Isn't this what we're looking at right now, a small rate of actual autism and a very bad test for it (non-verbal at 48 months according to some records)? That would seem to suggest that their results are pretty much completely meaningless.
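The base-rate arithmetic in that observation is easy to check with a few lines; a minimal sketch, with the comment's own numbers (1% prevalence, 10% false-positive rate) and assuming the test catches the one true case:

```python
# Base-rate check: 1 diseased person in 101, 10% false-positive rate,
# perfect sensitivity assumed (the test catches the real case).
population = 101
diseased = 1
healthy = population - diseased                   # 100
false_positive_rate = 0.10

true_positives = diseased                         # 1
false_positives = healthy * false_positive_rate   # 10 healthy people flagged

total_positives = true_positives + false_positives
ppv = true_positives / total_positives            # positive predictive value

print(total_positives)   # 11.0 people test positive
print(round(ppv, 3))     # 0.091 -> only ~1 in 11 positives actually has it
```

So even a test that is "90% accurate" in the false-positive sense is wrong about ten out of eleven positives at 1% prevalence, which is the commenter's point about a small true rate plus a noisy criterion.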

I originally wrote an explanation for why the 48 month cutoff might not exclude most late talkers, but I deleted it since the above consideration is more important. (TLDR: I don't know how late talkers are distributed by the age where they start talking, I know quite a few kids who were not talking at 48 months and turned out just fine, and also both medical records and IEPs tend to have obsolete info.)

I think we can take it for granted that there are a lot more late talkers than autists, and if we label a large percentage of them autistic, our set of presumed autists is mostly late talkers who are perfectly normal. That's just the math.


>OK, from looking at this carefully, this is even worse than I thought. They say: "We categorized children as having profound autism if they were either nonverbal or minimally verbal or had an IQ <50." So that automatically labels late talkers, most of whom grow up to be completely normal, as profoundly autistic, right? I think this is the point where you should just stop taking them seriously at all.

If they assessed when they were quite young I would agree. But what these researchers did was assess every kid in the study at age 8. (Actually they did not do the assessments themselves, they went back and read the records of another study following a huge bunch of kids -- but they only read the records of assessments done when the kids were age 8.)

From the abstract: "Methods: We analyzed population-based surveillance data from the Autism and Developmental Disabilities Monitoring Network for 20 135 children aged 8 years with autism during 2000-2016."


If I understood it right, the original study also did not assess or follow the actual kids. They went through their records when the kids were age 8. Also, the original study is more honest and talks about ASD, not autism.

Maybe I'm not reading it right, but it reads to me as if they were only requiring a 48 month assessment to categorize a kid as profoundly autistic:

"We categorized children as having profound autism if they were either nonverbal or minimally verbal or had an IQ <50. We considered children to be nonverbal or minimally verbal if any of the following were identified in the records: (1) most recent evaluation at ≥48 months of age describing a child as nonverbal (median [IQR], 79 [65-93] months) or child determined to be nonverbal (no spontaneous words or word approximations) by clinician record review..."

I tried to find actual numbers on autism, not ASD, and I failed. Perhaps I did not look hard enough, but I'm beginning to suspect that there's actually no epidemic of autism, and all the scary numbers are generated by lumping Asperger's and speech delays in with actual, debilitating autism that makes normal life impossible.

Think about it. How many families with autistic kids incapable of living normal lives do you know? I know none, but I know quite a few people who admit to having Asperger's (and probably even more who have it but don't admit it) and quite a few perfectly normal kids who took forever to start talking (all of them boys, all bilingual or trilingual, all children of engineers). It feels as if this isn't how an autism epidemic should look.


Ugh. Just looked it up. Only 30% of late talkers are autistic. And many kids who are autistic follow a completely different pattern. They begin talking at the typical time, and in fact seem normally active, social and affectionate. Then around age 3 they regress -- lose their language, become isolative and avoidant of contact.


Thank you very much for taking the time!

I find this figure shockingly high, and was sort of hoping there was reason to believe it was much higher than the true rate.


Well, autism is not rare. It's slightly less common than schizophrenia. Remember that only 25% are profoundly affected. Some have such a mild version that they can live pretty normal lives, just have a lot of quirks and sensitivities. Also, the fraction of the population that has or has had major depression is way, way higher.


Am I the only one here whose opinion on recently deceased statesman Kissinger was mostly based on Unsong, Interlude Het [0]?

Of course, after reading that Rolling Stone obituary [1], Scott's characterization seems rather on-point.

[0] https://unsongbook.com/interlude-%D7%97-war-and-peace/ (for the impatient: ctrl-F betray divinity)

[1] https://www.rollingstone.com/politics/politics-news/henry-kissinger-war-criminal-dead-1234804748/


Re "There are purported exchange rates between money and lives"

These numbers typically represent the amount of money that government agencies are willing to spend to save a life in their country. If the "billions in value" was not going to be in government hands (but rather distributed among creditors, shareholders, etc), then it's unclear why this is the right comparison? Given the choice, I might prefer 1 additional life and billions less dollars of FTX (and still agree with the government tradeoffs).

Dec 5, 2023·edited Dec 5, 2023

I think Scott is using the GiveWell value of one statistical life saved per ~$4K spent on AMF bednets as the exchange rate. (Which strengthens your point.)

Edit: oh wait, I'm a nimrod and didn't actually click on the link, the entire paragraph above is full of lies.

Dec 4, 2023·edited Dec 4, 2023

I'm going to reiterate my request for someone to rigorously defend EA against this economic-utilitarian critique: lives saved is a linear function of wealth, and wealth is an exponential function of time. Unless you impose a discount rate on the intrinsic value of life, I don't see how the utilitarian calculus doesn't compel you to maximize economic growth, even at the expense of near-term charitable interventions.

For more context, here's my previous thread on that; I think the two responses were weak.

https://open.substack.com/pub/astralcodexten/p/contra-deboer-on-movement-shell-games?r=fo2bp&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=44558190


Why stop there? By this reasoning, one should kill *any* number of extant humans if it generates *any* nonzero amount of investable wealth, because a murder for a nickel today could easily save the entire population of the galaxy in a million years, at historical growth rates. It has plenty of precedent, too; "we can use the resources better than they can," is one of the most popular justifications for violence.

If that's going too far for your tastes, we can still justify piling the resources into some kind of personal wealth fund; someday, they'll do something wonderful for those losers who we refused to help today because they are worthless (or, less strawman-y, "worth less") to the future. I'm sure we'll start helping them tomorrow, once our wealth stops growing exponentially -- oh wait, it *always* does, so we never use it to help others. Thus the investment was in fact merely serving our own personal wealth accumulation, as it will continue to serve, forever.

But, let's come back to the question, *what* grows the economy? Is it just some quantity eternally inflating at an annual percentage in an abstract, conceptual space? Well, it's not -- IANAE, but roughly, it's the sum total of transactional human activity. Human activity is a function strongly dependent on human existence; I'll go ahead and take that one as an axiom, thanks -- that things have to exist to be active*. So the economic growth rate has a strong dependence on the size and skill of the populace. Frankly, this is so self-evident that I struggle to ascribe the exponential v. linear argument to anything but bad faith -- at best, it can be a rationalization for pre-existing (economically) conservative views. The other path seems clear: save lives today, improve quality of life tomorrow, create even greater exponential economic growth next week, and save even more lives next month, so to speak. In macro terms, the Keynesian multiplier of saved lives is quite positive, when those lives also have the opportunity to contribute back.

On top of that, constrained non-linear dynamical systems are, in general, mathematically unpredictable in the long run. This is, or should be, well-known (thx Jurassic Park). Hell, money as we know it might not exist in a thousand years. But EAs (not in the movement, but not "anti-EA") seemingly advance similar arguments about exponential growth over dubiously long timeframes, so I don't know if my point is consistent with their philosophy.

After all this, a common response is something like, "Well, the market is still the best mechanism to decide which lives are maximally useful to save for their labor, so charity is neutral to wasteful." There is a worldview which I do not share tightly wrapped into that statement, so it doesn't seem debatable.


>Why stop there? By this reasoning, one should kill *any* number of extant humans

I mean, that's simple. The downstream effect of murder and mayhem is anarchy, which is very diseconomic. People don't learn to code when they're afraid that someone might kill them. Law and order is very productive.

I'm shocked that you need this explained. All of traditional morality can be understood in terms of cultural survival value. It's a cultural encoding of economic reality. Its adaptive purpose is to maximize the survivability of the culture that uses it.

> the Keynesian multiplier of saved lives is quite positive, when those lives also have the opportunity to contribute back.

The second half of that sentence is doing a lot of work. That's where my objection lies. If the Keynesian multiplier of saved lives were positive then I would be all for charity. However, I think it's obvious that it's not - at least, not with how charity is currently practiced. The average person saved by a bed net does not have net-positive economic value - at least not relative to other investment options.

Dec 8, 2023·edited Dec 8, 2023

"I think it's obvious" -- well, good enough for me!!

I recently got a lot of flak for pointing out that I hate-read the comments about AI x-risk. But your responses on this thread perfectly exemplify my two most hated aspects of those comments -- a confusion of mathematical *jargon* with mathematical *rigor*, along with lopsided demands for rigor that never apply to one's own position.

This is a linearized model -- one which assumes present growth rates are applicable for as long as needed -- and one that has reduced the entire complexity of the economic universe to two variables, present wealth and population size. At best, this is a toy model, but maybe we can draw some limited conclusions from it. Toy models can be useful as a starting point, if we're careful about applying them only within the relevant domain. But then you go even further and zero out a parameter that has no right to be assumed away -- the covariant / causal relation between saved lives and economic growth. If this parameter is AT ALL nonzero, that is, if there's any mechanism for a saved life to feed back even a tiny fraction into the 1st world economy that provided the charity, then it doesn't matter if the instantaneous value of the life doesn't individually maximize the investment return, some degree of mixed strategy is mathematically optimal because that's how *linear fucking algebra works.* (Consider the world where, say, gold is the asset with the highest return rate, so we invest every penny in gold. The rest of the economy stagnates, and eventually the real value of gold drops along with it.) There is a nuance where we're constraining the outcome to "long positions only" (positive coefficients), but this merely restricts the parameter space. You cannot dismiss the existence of this parameter space without doing some kind of calculation to figure out what it is.

Since this one particular problem is so easy, it can be solved with compound interest, no matrices required. That is, I'll do your homework for you just this once. Since there are just two variables, we take "The first world economy" as the agent providing the charity, C dollars to save some life (all amounts measured in present day dollar values), or C dollars invested in "the economy" which grows continuously at annual rate R. The saved life lasts L productive years and contributes an average annual amount A through surplus labor value, exports, etc. A quick and dirty estimate for A is GDP per capita minus average income. This amount is generally positive; i.e. in Paraguay, the second poorest South American country, it is around ~$200/yr. Of course, the poor produce less than average, but they also earn less. Saying this amount is "always negative" for charity recipients is a way, way larger lift than any point I made about Keynesian multipliers (which DO NOT have to be maximal to make charity a good investment, as I pointed out already). Anyway, if A > CR/(1-e^-RL), bam, game over, charity contributes more not only in the immediate sense of saving a life, but in the longtermist sense of growing the entire economy more than the pure investment strategy. If C = $1000, R = 0.05, and L = 40 years, this works out to A > $58, less than 30% of the annual amount of surplus produced by the average Paraguayan. So, yes, there are completely reasonable parameter regimes where charity makes sense, even when hypothetical future lives are valued equally to real contemporary ones.
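That break-even condition can be checked numerically; a quick sketch using only the comment's own formula and numbers (C = $1000, R = 0.05, L = 40), nothing else assumed:

```python
import math

# The present value of an L-year continuous annuity of A dollars/year,
# discounted at rate R, is A * (1 - exp(-R*L)) / R. Charity beats the
# pure-investment strategy when that exceeds the donated amount C,
# i.e. when A > C*R / (1 - exp(-R*L)).
def breakeven_annual_contribution(C, R, L):
    return C * R / (1 - math.exp(-R * L))

A_min = breakeven_annual_contribution(C=1000, R=0.05, L=40)
print(round(A_min, 2))  # ~57.8, the "$58" threshold quoted above
```

Run with those parameters it gives roughly $58/year, matching the figure in the comment.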

As to the isolated demands for rigor, for example, *you* get to assume the existence of some "mayhem" parameter (never mentioned in the original post!) that makes it costly to *take lives*, but *I* have to *prove* that there might be an equivalent cost to *letting people die* preventable deaths? Yeah, not going to play this stupid game where you get to just assume every beneficial factor for your argument and negate every counterargument out of hand. Even for relatively simple problems, the effort required to make a mathematical (dis)proof is dozens of times larger than that required to string together a few mathy-sounding sentences and claim that the resulting word salad describes reality. The demands that you apply to others you would never dream of applying to yourself. This is contempt.

Dec 14, 2023·edited Dec 15, 2023

>If C = $1000, r = 0.05, and L = 40 years, this works out to A > $58,

Yes, if you make up numbers then you can always make the math come out in your favor. Unfortunately "doing arithmetic with made up numbers" isn't what I meant by rigor. The most rigorous empirical figure for the cost of saving a life that I could find is given by givewell here:

https://www.givewell.org/how-much-does-it-cost-to-save-a-life

where they conclude that bed nets in Guinea save lives at a cost of $4500/life. The $200/yr "economic surplus" (which by the way is a measure I've neither heard of nor could find any references for - if you didn't just make it up then provide a source which motivates it) obviously then underperforms the ROI of first-world investment. It does even worse when you account for the fact that 80% of malaria deaths happen in children under 5 (source: https://www.who.int/news-room/fact-sheets/detail/malaria), which means that any economic output from the intervention will be delayed by ~10 years which reduces the ROI by about half. In this case that means a paltry 2-3%, which I'm sure overstates it because this is using costs from a sub-saharan african nation and economic outputs from a less-terrible south american one (plus of course it's absurd to use national figures to represent the economic output of the absolute bottom rung of society, but that's a whole other discussion). So once again I have to ask: do you believe that children born 10 years from now have half the moral worth of the ones alive today?
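The "delayed by ~10 years" haircut is just exponential discounting; a sketch, where the discount rates shown are illustrative assumptions (the comment doesn't name one):

```python
import math

# Value that arrives D years late, discounted continuously at annual
# rate r, is scaled by exp(-r * D).
def delay_discount(r, delay_years):
    return math.exp(-r * delay_years)

print(round(delay_discount(0.05, 10), 3))  # 0.607: a ~40% haircut at 5%
print(round(delay_discount(0.07, 10), 3))  # 0.497: roughly "half" at 7%
```

So the "reduces the ROI by about half" claim corresponds to a discount rate of around 7% over the ten-year delay.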

And for the record I'll just say that while I generally don't respond to disrespectful emotionally-tilted wingnut comments, yours had so much bluster and so little substance that I felt honor-bound to reply. If you fail to adopt a more collegial tone then I'll let this be my last response to you.


*Virtual particles are a mathematical fiction we use to do perturbation theory. We pay the price by having to renormalize our quantum theories. They are not "non-existent but active."


This is kind of the perspective of the longtermist faction of EA, which considers the future a moral priority. Most people who think like this are concerned about reducing existential risk.


Not a full argument necessarily, but it seems that any sufficiently easy thing to do that costs very little should still be done before maximizing economic growth. A literal child drowning in a pond while you walk by would count.

It feels to me that the kinds of things you can do while going about your normal life without sacrificing economic considerations would all count, and there's an awful lot of those things available. Most people work one full time job of 35-50 hours a week and have lots of minor opportunities to do good that are marginal compared to their income.


Just curious, "economy" is a vast thing, do you also have an idea exactly which specific part of economy should an Effective Economist support with their money? Or should we just throw the money to some index fund and let the market sort it out?

I am asking because there seems to be a paradox -- an unregulated economy seems to work better in general, but large parts of the economy are *not* spent on maximizing economic growth, but instead on consumption. If we say that regulation is bad and therefore consumption is good, because it is a part of the normal economy, then we have effectively concluded that producing bottles of Coca-Cola is more altruistic than producing anti-malaria nets. Would you agree with that? On the other hand, if some regulations are desirable (assume that you are a dictator, and your only goal is to maximize economic growth, so you can e.g. ban all things that do not contribute to economic growth), what would you propose?


>Or should we just throw the money to some index fund and let the market sort it out?

Essentially yes. I'm not presuming to know how best to grow the economy in micro terms, just that it generally works best when everyone freely follows their own self-interest to the best of their ability. I think there ARE general rules of thumb one can follow to boost the odds of growth - things like prioritizing capital investment over consumption - but it's nothing beyond basic responsible adulting (buy a house over a fancy car, build a savings portfolio over taking an expensive vacation). Honestly I would consider that more EA than buying bednets for the Congo. Charity begins at home, as they say.

>assume that you are a dictator, and your only goal is to maximize economic growth

I mean, I think economies work best without dictators. Were I in that position I would just try to copy the most effective free-market economies around.


This is not a rigorous argument, but it reminds me of a thought experiment of a robot who wants to do something, but instead decides to build a robot who could do it better... and that robot also decides to build a robot who could do it even better... and ultimately we get an infinite sequence of robots with ever increasing capabilities, without any of them actually doing anything (other than building more robots). And yet, each robot, from its own perspective, is acting rationally to accomplish the original goal.


Why would you ask for a "defense" against that? Without the normal, functioning economy there would be no food or malaria nets to give away. Dividing up the factories and passing the scrap out is nobody's idea of a good way to feed people.


I don't think you understand my argument. ANY charitable donation is a marginal detraction from exponential economic growth and therefore creates negative net value.


If children in currently developing countries are as much better off 100 years from now as they are better off today relative to how things were 100 years ago, investing money could amount to taking from the poor (people alive in the undeveloped present) to give to the rich (people living in the developed future). Taken to a rhetorical extreme, the world could "run out" of starvation in one hundred years. More realistically, the number of first world countries in Africa could rise to "most of them," there could be peace between Israel and Palestine, and many other things in keeping with the level of changes we have seen since 1923.

But you can't take that argument too far and break down the factories to make tin roofs, otherwise the economic growth won't happen and the people in the future won't be that much better off, not enough to offset the compounding interest.


>the world could "run out" of starvation in one hundred years

And that would be bad ... why?

I'll be honest: I have zero idea what your point is here. Could you restate it in different terms please? Your first sentence, in particular, is totally opaque to me.


First sentence was pretty clear to me. He’s saying given that modern children are healthier than 100 years ago, it’s likely that 100 years from now children will be even healthier. Especially in parts of the world that are poorer now.

Therefore not engaging in charity now will harm the present group of children who will be less well off than future children.

Which by the way is a fairly obvious rebuttal.


I think it's even more obviously wrong, since the trend you're depending on only exists because of general economic growth. Children in 2123 will be healthier only because of the economic growth between now and then. If you detract from that growth by helping a child today with charity, then you are indirectly harming exponentially more children in the future.


"Contributing to economic growth" covers a very broad set of activities and consequences, a good percentage of which are net-negative, unless you consider willingness to pay a perfect proxy for value. If I carved a sculpture and sold it for $2 million I'd technically be growing the economy by $2 million, but the expected net effect of that is marginally increasing the utility and status of a rich person. You might argue that the rich person was motivated to make their own contributions to economic growth in order to buy sculptures like mine, but I think that's a bit of a reach, and you could also argue that it's at least more valid to be motivated by effective philanthropy. Basically, only a subset of economic growth adds real significant value, with utilitarianism in mind. I think if it fits within someone's comparative advantage to grow those parts of the economy, EA sensibilities would generally be all for that, with some reasonable conditions.

Regardless of whichever economic system you favour, there are going to be tradeoffs you have to make which result in market failures or deadweight loss. There are certain contractual exchanges that aren't possible within a legal system; perhaps there's corruption. If children have worms inside them, and these worms likely subtract from their future economic potential, and we can get rid of the worms for less than a dollar, and no market solution has occurred, I think it's a safe bet that paying to deworm them is more beneficial than letting that money ride in an index fund.

Somewhat related to the previous point, the market only benefits those with the sufficient legal rights to enjoy those benefits. Economic growth has not benefited pigs and chickens on net. Economic growth permits further investment in industrialized agricultural methods that induce great misery in billions of individuals. You might disagree about how animals should be valued, but given that I do value them highly, you'd need to argue why economic growth is likely to benefit animals.

Lastly, economic growth is very unlikely to decrease the likelihood of existential risks, and there's reason to expect it may increase the likelihood.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

> If I carved a sculpture and sold it for $2million I'd be technically growing the economy by $2million

This is pretty clearly an isolated demand for rigor. If I save an African child only for it to die in a civil war the next day then I haven't accomplished anything either. There are no guarantees for either donation or investment so we have to look at rational expectations, and investing in a first world economy clearly wins by that measure.

>economic growth is very unlikely to decrease the likelihood of existential risks,

Neither is saving a dying African, but either way this is totally unrelated to my point which is very narrowly about the tradeoff between economic growth and charitable donation. I don't care at all about the various other fringe stuff that EA is associated with. It's angels-dancing-on-pins nonsense and I won't waste my time debating it, at least not in this thread.

Expand full comment

So you're just talking about the value of saving a human life? that's only one small part of EA, but we can discuss that. Are you unwilling to discuss interventions such as deworming?

I think my point still holds that "economic growth" measures way too much to be a coherent metric to measure against specific focused charitable efforts. A huge percentage of economic activity just doesn't really contribute towards the compounding of essential inputs into the future.

I think the expected impact of effectively treating/preventing malaria is quite large, for future economic growth and otherwise. Developing countries have the most potential for growth; not having people chronically sick with malaria is quite beneficial for growth.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

> Are you unwilling to discuss interventions such as deworming?

Not at all. Saving a life/improving a life all fall under the same type of analysis.

> A huge percentage of economic activity just doesn't really contribute towards the compounding of essential inputs into the future.

Again I think this is an isolated demand for rigor. If you want to compare "economics" vs "charity" then you have to analyze them consistently. You can't cherry-pick the bad things Economics does and then compare them to the best things Charity does. If you do that then I think that growth delivers more net-value than charity does.

> not having people chronically sick with malaria is quite beneficial for growth.

I agree with that in principle, I'm just not sure it's true in reality. People have been battling malaria for decades, yet sub-saharan Africa still lives at the subsistence level. Is there any economic data that indicates that X% reduction in malaria deaths in a country leads to Y% increase in growth 10 years later? I've never seen that data and I think there are very good common sense reasons to expect that it's not true.

If you told me that the expected outcome of a charitable intervention would be that Zambia would wind up with the institutions that lead to sustainable 2% growth then I would be all for it. The problem is that, AFAICT, no EA interventions target that. Which is understandable - no one knows how to transform a third world country into an industrial economy. But absent a concrete goal which can reasonably lead to self-sustaining growth, my position is that any charity is just wasting resources which would better help the future world by being devoted to first-world economic and technological growth. I'm sure just donating to the NSF would be much better, objectively speaking, than buying bed nets.

Expand full comment

>You can't cherry-pick the bad things Economics does and then compare them to the best things Charity does. If you do that then I think that growth delivers more net-value than charity does.

But EA is specifically devoted towards using analytical tools in order to seek out the best things that charity can do. I don't think it's cherry picking to point towards EA endorsed charitable efforts.

Expand full comment
Dec 6, 2023·edited Dec 6, 2023

Sure. I phrased that badly, apologies. What I meant was you can't point to capitalistic waste and at the same time assume the African kid you saved will one day be a successful entrepreneur. You have to compare expected value.

If EA wants to do the best thing, it has to compare the expected value of each investment option (charity vs growth). I think economic growth clearly wins there, unless the charity is explicitly and effectively geared toward enabling sustained economic growth. And, hey, if you can get charity and growth at the same time then obviously I agree that that's the thing to do. But you have to actually demonstrate it, or at least make a compelling argument. EA doesn't even gesture towards it. It's not enough to just say "well with enough bed nets people in the Congo will naturally just start innovating and creating stable institutions." That sounds great but the cold hard reality is that they very obviously won't. That's why I consider money spent on bed-nets to be a pure deadweight loss to the utility of the world. You might as well light an oil well on fire.

Expand full comment

You are shying away from the debate, perhaps, because you are engaging in non-rigorous claims while ignoring, "misunderstanding", or misrepresenting the opposition.

I mean you haven’t even begun to prove that charitable giving reduces economic growth. It’s an assumption you’ve just taken for granted.

Expand full comment

Your tone and content are both disrespectful, I suspect because you're unable to proffer a robust response to my position. I'm not 'shying away' from debate on those topics, I just want to keep the conversation focused.

If you think my argument is weak then exploit that weakness by backing me into a rhetorical corner, if you can. Otherwise keep your ad hominems to yourself.

Expand full comment

You’ve barely begun to engage with anybody, but let’s back you into that corner.

I want you to explain why spending money on, say, malarial nets will reduce economic growth to the extent that saving these lives doesn’t increase economic growth and human utility, both for present and future generations.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Because countries that can't get it together enough to provide 50-cent bed nets to their citizens aren't high-growth countries. The marginal life saved by a bed net in, say, Zambia, has zero expected economic value. The intervention is doing nothing but allowing a subsistence farmer to survive until reproductive maturity. The consequentialist result, 15 years down the road, is nothing but several additional children who also must be helped by charitable aid. This is reflected in things like productivity statistics which, for sub-Saharan Africa, essentially never improve.

The money wasted on that charitable aid would have enabled some nonzero incremental economic growth if it had been invested in a first-world economy.

Expand full comment

It looks like you suggest investing money. We don't know if World GDP or US GDP or the S&P will be higher or lower in 100 or 200 or 300 years. With fertility crashing, I'd bet that World GDP will be lower than now in 100 years.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

It's certainly a safe bet that it will be higher in 20 years. My argument is valid as long as the rational expectation of economic growth persists. I agree that the logic changes drastically once that's no longer true, but that's not the world that we currently inhabit. I also think your gloomy economic forecast is a rather fringe position. I would bet heavily that world GDP will be higher in 300 years, at least in per capita terms.

Expand full comment

> Wealth is an exponential function of time.

Is it? Would that have been true in all human societies?

Expand full comment

It seems that a decent growth model is Lucas 1988 (http://www.econ2.jhu.edu/people/ccarroll/Public/LectureNotes/Growth/LucasGrowth.pdf). You can hammer it into a single differential equation that looks something like k’ = sqrt(kh) - ck where c is a constant and h is exponential in time. k is capital per capita, h represents knowledge. Not sure how well it fares empirically.
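A minimal numerical sketch of that equation as written. The constants (c, the growth rate of h, and the initial capital) are arbitrary illustrations, not values calibrated to Lucas 1988:

```python
import math

def simulate(k0=1.0, c=0.5, g=0.02, dt=0.01, T=100.0):
    """Euler-integrate k' = sqrt(k*h) - c*k, with knowledge h = exp(g*t)."""
    k, t = k0, 0.0
    while t < T:
        h = math.exp(g * t)                   # knowledge grows exponentially in time
        k += dt * (math.sqrt(k * h) - c * k)  # accumulation minus the -ck drag term
        t += dt
    return k
```

With these toy constants, capital per capita chases a moving steady state near h/c² (set k' = 0), so it keeps growing as long as h keeps compounding.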

Expand full comment

That's not really relevant. It's currently true in ours.

Expand full comment

If you are going to make a definitive claim about the future having infinite economic growth then that takes a lot more proof than some pithy statement.

Expand full comment

I don't see why infinite growth is a prerequisite for my argument. Whether or not exponential growth continues indefinitely isn't relevant to the fact that it's a rational expectation for the foreseeable future.

Expand full comment

Ok. So that is a caveat.

Your argument therefore seems to be that we shouldn’t discount future humans, so charity today is less useful than encouraging economic growth. That seems like a convenient mechanism to demand tax cuts for yourself (or the rich) and not give to charity.

Future humans will be better off anyway if economic growth outpaces population growth. Also there will be fewer humans in the future, not more, present trends continuing. So let’s end some misery now.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

We shouldn't discount future humans to the extent of depriving them of resources they would reasonably expect us to preserve (such as some surviving wildlife, among other things!) and would curse our memory for losing.

On the other hand, neither should we be too indulgent and self-sacrificing for their anticipated benefit, because humans are not at their best without _any_ challenges. So we might actually be doing them a disservice by handing them everything on a plate, at the cost of our own comfort and prosperity.

In 300 years the richest humans alive today will seem as poor as church mice compared with the entrepreneurs who start mining the asteroid belt. I imagine desolate wastelands like the interior of Greenland will be valuable real estate because somewhere like that will be essential to "land" million ton nickel-iron asteroids without too much disruption!

Expand full comment

While I personally disagree with this post series, I do think it should be required material when talking about indefinite exponential growth

https://dothemath.ucsd.edu/2011/07/can-economic-growth-last/

(Note: the author is fundamentally not a transhumanist, and doesn't have the concept "mind upload" or "Dyson sphere", hence why they aren't covered)

Expand full comment

Just glancing through it and I didn’t see much mention of renewables. Surely that’s a net-zero increase in the earth’s energy usage.

Edit.

Looked at it again, and his prediction of present-day 5% growth continuing is plucked out of nowhere. The world economy has never grown like that.

The 20th century saw a twentyfold increase, about 3% a year, and a lot of that was population growth. 5% YoY growth would be, as he admits, a growth rate of 132 times over a century: an increase about 6 times greater than the 20th century's.

If you take this impossible figure and show that it's impossible, energy-wise, well, that's not that impressive.

He's probably correct that there's not much efficiency left to gain, but he seems to ignore that wind and solar don't add any extra energy to the system; they just move energy around.
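For what it's worth, the compounding arithmetic in this comment checks out (a quick sketch, treating growth as a constant annual rate):

```python
growth_3pct = 1.03 ** 100   # roughly the 20th century's ~twentyfold increase
growth_5pct = 1.05 ** 100   # the ~132x figure the linked post extrapolates
ratio = growth_5pct / growth_3pct
```

The ratio comes out to roughly 6.8x, in line with the "about 6 times greater" claim.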

Expand full comment

He briefly mentions renewables, and links to a post calculating how much more energy the sun can provide, and what efficiency levels we can reach, before the waste heat starts boiling away the oceans (around 400 years from now)

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

Captive African grey parrots have been shown to use human language, make sentences, and even ask questions, but that's all captive birds. So why isn't there any research on what language grey parrots use in the wild, and whether they use one at all? Wild birds probably use these language abilities too. I don't expect parrots to have human-level language with recursion, but some basic grammar must exist, especially considering that even small passerines have some sort of syntax (https://www.nature.com/articles/ncomms10986).

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Generally the answer to "why isn't there any research" is that no one cared enough to either do it themselves or pay someone else to do it. There aren't that many biologists out there, and they usually have better things to do than decode parrot languages.

Edit: And just as I thought: "Little is known about the behaviour and activities of these birds in the wild. In addition to a lack of research funding, it can be particularly difficult to study these birds in wild situations due to their status as prey animals, which leads them to have rather secretive personalities. It has been shown that wild greys may also imitate a wide variety of sounds they hear, much like their captive relatives. In the Democratic Republic of the Congo, two greys sound-recorded while roosting reportedly had a repertoire of over 200 different calls, including nine imitations of other wild bird songs and one of a bat." From Wikipedia.

Expand full comment

A little related: https://www.amazon.com/Chulo-Among-Coatimundis-Bil-Gilbert/dp/0816508429

A man, his son, and his son's friend spent a year studying coatimundis, which are more or less communal raccoons. They could only hear what the coatimundis were saying out of doors (not in their tunnels), but found that the coatimundis had at least a little syntax. As I recall, word order mattered for indicating the type of threat.

With modern tech, it might be possible to find out more about gray parrots in the wild.

Expand full comment

RE: "There are purported exchange rates between money and lives, destroying billions in value is pretty bad by all of them"

I agree with this stance, but even though I'm probably more critical of EA than Scott is, there's a weird ironic outcome I heard about. SBF, in addition to just committing a whole bunch of fraud, had this notion that you should take a bunch of wild insane bets because if even one of them paid off the world might be a massively better place. I think SBF showed us a lot of good reasons not to do that, but it seems like at least one of his bets *did* pay off and may be the best hope for his creditors to make good on their losses -- FTX invested in Anthropic, and now that stake is worth "nine figures":

https://www.businessinsider.com/sam-bankman-frieds-anthropic-stake-wholly-irrelevant-prosecutors-2023-10?op=1

The article says that fact is going to be of absolutely no personal help to SBF legally or financially, and I suppose even if it does put FTX's creditors in the black in terms of paying off the bankruptcy claims it's still probably a net loss when you figure in all the damaged trust and stuff.

But I have to admit to being extremely annoyed that it seems that SBF's "make a bunch of insane wild bets" strategy seems to have paid off in a narrow literal sense (even if it's no help to him personally).

Expand full comment

Isn't "make a bunch of insane wild bets" exactly what venture capitalists do? Seems to be working out just fine for them.

Expand full comment

There are degrees of wild insane bets-- there's unlikely to pay off, and then there's can't possibly pay off.

Expand full comment

Yeah, sure, but SBF took that to another level

Expand full comment

And for everyone mystified *why* it's of no help for him legally if one of his bets pays off big, one of the common historically repeating issues in financial crime is the pattern where you manage money for someone, take a small loss, and instead of fessing up for your failure (which was not criminal, but will probably lose you your customers), you decide you'd rather double down on some risky bet, and if you succeed you'll make the money back and everything will be fine again. Much of financial regulations is designed to make this option as unappealing as possible for people who manage other people's money.

Or in other words, if you are not allowed to take cookies out of the jar, you are a criminal the second you do that, even if you put ten cookies back five minutes later.

Expand full comment
author
Dec 4, 2023·edited Dec 4, 2023

That makes sense. I also think there's an even simpler reason: *everyone* can make money on net by doing risky things, it's called "investing". If you have some good reason to think you're better at this than other people, you can start an investment firm, take their money, invest it, and become rich.

That is, if you can make market rate of return (let's say 5%), and rich people are willing to lend you $1 billion, you can turn it into 1.05 billion, give back the billion, and have an extra $50 million to split between yourself and your clients. The reason not everyone uses this trick to get a free $50 million is that there's lots of competition between investment firms, and rich people will only give their money to the ones that seem the best and offer the biggest fraction of the profits.
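The arithmetic in that paragraph, spelled out (the 5% rate and $1 billion are the comment's illustrative numbers, not real figures):

```python
principal = 1_000_000_000       # the borrowed $1 billion
rate = 0.05                     # assumed market rate of return
gross = principal * (1 + rate)  # $1.05 billion after one year
profit = gross - principal      # ~$50 million left after repaying the principal
```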

If you can just steal $1 billion, make $50 million off it, and then give it back with a little interest, you're not some kind of Robin Hood, you're just a person who used fraud to get enough starting capital to do normal investment stuff.

If you get very lucky and actually make 50% rate of return and compensate your scammees extremely well, then maybe rich people would have given you their money if they had known this beforehand, but the law shouldn't usually include a loophole for "it's not illegal if you succeed", because everyone expects to succeed and we don't want to encourage trying.

Expand full comment

(Reposting from Less Wrong)

When autism was low-status, all you could read was how autism is having a "male brain" and how most autists were males. The dominant paradigm was how autists *lack the theory of mind*... which nicely matched the stereotype of insensitive and inattentive men.

Now that Twitter culture made autism cool, suddenly there are lots of articles and videos about "overlooked autistic traits in women" (which to me often seem quite the same as the usual autistic traits in men). And the dominant paradigm is how autistic people are actually *too sensitive* and easily overwhelmed... which nicely matches the stereotype of sensitive women.

For example -- https://www.youtube.com/watch?v=xeZZHnQYoR4 -- difficulty in romantic relationships, difficulty understanding things because you interpret other people's speech literally, anxiety from pretending to be something you are not, suppressing your feelings to make other people comfortable, changing your language and body language to mirror others, being labeled "sensitive" or "gifted", feeling depleted after social events, stimming, being more comfortable in writing than in person, sometimes taking a leadership role because it is easier than being a member of the herd, good at gaslighting yourself, rich inner speech you have trouble articulating, hanging out with people of the opposite sex because you don't do things stereotypical for your gender, excelling at school, awkward at flirting -- haha, nope, definitely couldn't happen to someone like me. /s

(The only point in that video that did not apply symmetrically was: female special interests are usually more socially acceptable than male special interests. It sounds even more convincing when the author puts computer programming in the list of female special interests, so the male special interests are reduced to... trains.)

I suppose the lesson is that if you want to get some empathy for a group of people, you first need to convince the audience that the group consists of women, or at least that there are many women in that group who deserve special attention. Until that happens, anyone can "explain" the group by saying basically: "they are stupid, duh".

Expand full comment

I don't know about that. I do think that when autism was defined as primarily a "male" disorder, then any similar traits in girls got overlooked because "girls aren't autistic".

But there are autistic women out there.

Like all corrections, it seems to have swung to the opposite degree. I don't know if it's trendy to be autistic and that is what is going on, but I'm very wary of self-diagnosis done online for validation and demands for special treatment, so yes I'd agree there are people claiming to be autistic, and ADHD, and PTSD, and every other letter in the alphabet when in reality they're just self-absorbed assholes.

Expand full comment

This comment reminds me that there is, supposedly, both a schizoid personality disorder ('I don't care what people think of me, so I do my own thing') and an avoidant personality disorder ('I care too much about what people think of me, so I have to make do with doing my own thing'). The two personality disorders might be best understood as the same underlying phenomenon, but experienced differently for unknown reasons.

Expand full comment

I think this is an important reason why Men's Rights Activists are such a joke. They tend to use woman-coded tactics to gain empathy for a group that by definition does not contain women. Whoops.

Expand full comment

You stopped hearing about the "male brain" and "theory of mind" stuff because it was wrong. Like, blatantly wrong. Most of the research that supported the male brain theory ended up not replicating, and autistic people lacking theory of mind? Seriously? You have living evidence against that right here.

The biggest thing that changed is that psychiatrists correctly realized that they had no idea what was going on with autism and decided to just combine all of the specific classifications into one "Autism Spectrum Disorder". People accepted that autism wasn't just a binary between "savant" and "intellectually disabled", and that it ends up manifesting in a large variety of ways. That presumably also helped more women with the condition get diagnosed as well, since for whatever reason their conditions are usually less extreme.

As for your theory that all you need to get sympathy for a minority group is to convince people that the group consists of women, a clear counterexample is transgender people. Apparently there are just as many trans men as there are trans women, but for whatever reason transphobic rhetoric usually pretends that trans men don't even exist. (This becomes awkward during the whole bathroom debate, since I highly doubt cis women are going to be happy about trans men sharing the same bathroom as them.) No matter which half you consider to be female, it clearly wasn't effective enough to afford them the same social acceptance as, say, homosexuals.

Expand full comment

I think transgender people are an example that people care more about women, but that "caring" might not necessarily manifest as support. It is as contentious an issue as it is because a lot of libs who were usually happy to just go along with it changed their minds when:

1. Trans-women (for most people, men) started getting into women's spaces (sports, bathrooms, etc.)

2. Trans-men (for most people, women) started subjecting themselves to treatments that could have deleterious impact on their health (if a "man" goes through with bottom surgery, it's just a funny greentext on 4chan, if a "woman" permanently fucks up her voice, it's a national health crisis).

Expand full comment

Presumably this is because there are a lot of issues specific to trans-women (e.g women's sports).

Expand full comment

If they were wrong about what was going on with autism, how do they know autism is even a coherent category?

Expand full comment

Well, some people are much more severely affected than others. (And it's not even a one-dimensional spectrum, as people vary in which of the symptoms they have.)

At the severe end, some difficulty with reasoning about how other people will react seems common (theory of mind is perhaps giving too grand a title to the cluster of symptoms).

Some philosophers make pedantic objections, along the lines of: in what sense is this "theory of mind" actually a theory?

"Extreme male brain" a la Simon Baron Cohen was always disputed and a bit dubious ... especially as there are women with autism.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

>At the severe end, some difficulty with reasoning about how other people will react seems common (theory of mind is perhaps giving too grand a title to the cluster of symptoms).

Hmm, there’s some stuff you and most of the public do not know about actual autism. 25% of people diagnosed with autism have what’s called profound autism: they are non-verbal, and have IQs of 50 or less. I have actually seen people like this (I’m a psychologist). They’re squatting in the corner of the room twirling a piece of tinfoil, and you can’t even get them to look at you. *That’s* what the severe end is. In recent years, people with average or above-average intelligence and a certain set of quirks have been said to be “on the autistic spectrum,” and maybe they are. One way of thinking about these people is that they have autism lite. In that way of thinking, they have tiny versions of what the profoundly autistic person squatting in the corner has. They are introverted (=squatting in corner and ignoring me), are not good at reading people (=do not recognize my friendly overtures for what they are), have relatively few interests, but are intensely interested in those (=piece of tinfoil), have rigid routines and ideas and have a hard time flexing and accommodating new stuff (=hours of twirling). Could be.

As for the theory of mind stuff: that measure seems very confounded by intelligence. It was initially a developmental measure. For instance, there’s the crayons test. If you show a small child a bandaid box and ask him what’s in it, he will guess bandaids. Then you show him that actually the box contains crayons. Then you ask him what he thinks another kid would say, if asked what’s in the box. Kids up to age 4 or so say “crayons.” Now that they know the bandaid box has crayons in it, they can't create a mental model of another mind that sees a bandaid box, and naturally assumes it contains bandaids. By the time they are 5 they will be able to. Nobody who’s not a fool thinks an adult with a college degree and a job, but with some autistic-like traits, would flunk the crayons test. Some people at Caltech developed a much harder version of the same test, and report that autistic adults do worse than normals on it.

https://www.caltech.edu/about/news/autism-and-theory-mind-85113#:~:text=A%20classic%20test%20of%20theory,crayons%2C%20not%20Band%2DAids.

But the test is so hard it seems likely to me that the results are confounded by intelligence, and the paper does not mention whether normals and autistics were matched for IQ. And it’s not clear from the article (it probably is in the original study, which I have not seen) what they mean by “autistic people.” Do they mean people with Asperger’s syndrome (i.e., super high-functioning autism) or what’s more usually meant by autism?

There is a test that high functioning, supposedly autistic, people do worse at: It's called Reading the Mind in the Eyes, and you can take it on Amazon, weirdly. Amazon even scores it for you. 28 is average, 22 or below is strongly suggestive of autism. Scores on this test are only weakly correlated with intelligence.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

I honestly think folding Asperger's in with the Autism Spectrum umbrella was a bad idea, and that some people now classed as high-functioning autistic are really Asperger's, and that the two are related but different syndromes.

But what do I know, I don't even play a doctor on the television!

Expand full comment

I have a few questions about this theory of mind (with spoilers for Harry Potter and The Spy Who Came In From The Cold, which I hope is all right).

First, how well do autistic people do with keeping track of complex "quintuple agent" situations like the ones in those stories? Severus Snape has a public position (working for Dumbledore at Hogwarts), but he's secretly a Death Eater, *but* everyone close to Dumbledore knows he's posing as a Death Eater, *but* it becomes clear the Death Eaters know this, *but* in fact Dumbledore knows *this* and Snape's loyal to him after all.

Or since that's a fairly linear case that just adds more and more iterations of "they know we know they know, and they know we know they know we know they know"...what about The Spy Who Came In From The Cold, where a British agent is sent to pose as a defector to the Russians, to frame a top Russian agent as being a traitor for the British, to get him removed. He thinks he's a mere triple agent (fake traitor), but actually the real plan is to have him exposed to the Russians at the end, because the said Russian top agent really *is* a traitor for the British, and the plan is to discredit all suspicions of him.

Are autistic people better or worse than average at understanding plots like that? My instinct is they'd be better, since it seems like game theory and to overlap with things like chess. But the theory of mind stuff you mentioned suggest they'd be worse.

Second, are autistic people (in your opinion) better or worse than average at ideological turing tests? This seems like something with a lot of important social implications.

And third, I remember reading something (can't remember where, but I think it was a low quality source like a tabloid), about a supposed test where people who had recently had some kind of power were told to write a word on their faces so others could read it, and were more likely than the control group to write it from their own perspective and forget to flip it so it would be readable to others. The implication being that having power makes you more narcissistic and less likely to think from other people's perspectives. And that's why politicians suck.

Like I said, low quality source, but does this sound like something at all plausible? And if so how does it relate to theory of mind and autism? No matter what someone thinks of either autistic people or politicians, they don't usually put them in the same category...

Expand full comment

> Nobody who’s not a fool thinks an adult with a college degree and a job, but with some autistic-like traits, would flunk the crayons test.

Haha, of course! But if we take "theory of mind" as a scale instead of a binary, would it make sense to say that people on the autistic spectrum are worse at "theory of mind" than neurotypical people, adjusted for age? (But still better than when they were kids?)

For example, an adult with a college degree and a job may still suck at interpersonal relationships and office politics, which are like the advanced levels of "theory of mind".

Expand full comment

Yes, that’s my impression. For instance I have one extremely smart patient who is getting an advanced degree in a STEM field. But they can’t grasp nuances about people. For instance my patient made a bitter, mocking remark about the child of someone rich and famous who as an undergrad was given access to some research resources that other people could not. Somebody else defended the undergrad, saying “well it’s not his fault that his father is X, and who’s going to turn down a chance to use [a certain resource] if it’s offered?” My patient could not see it. Because she was very angry about the unfairness of the situation, she could not keep from picturing the kid with the famous dad as a sneering, entitled little pig. It’s a latter day version of not being able to grasp that the next kid to see the bandaid box (which contains crayons) is going to guess that the box contains bandaids.

Expand full comment

At one point, I was employed on an autism research project, so got to see examples of the severe forms. Also some people I know with high functioning autism have kids with the severe form

I was perhaps hedging my statement in an unclear way.

How about:

Very severe forms -> unclear whether everything is impaired, not just ToM.

Very high functioning -> hard to tell here too, maybe; the impairment is very slight.

Intermediate cases -> you can see that the impairment is specifically about reading or predicting other people, other faculties not impaired

Someone I know has autism with the specific impairment of not understanding metaphors, language use fine otherwise. Sucks to have that of course, but isn't it interesting that the impairment is quite so specific?

Expand full comment

I know quite a few people who are officially diagnosed with autism.

A friend of mine tells me that he has been diagnosed autistic, and I'm like "wait, you have better theory of mind than most of the people I work with." "Sensory issues, not theory of mind," he tells me.

So apparently, you can get diagnosed with just having the sensory issues part of the symptom cluster.

Expand full comment

Only if you are diagnosed by someone whose views are strongly shaped by popular culture.

Expand full comment

? Someone whose views were shaped by popular culture would surely think autism = theory of mind deficit; on the other hand, someone who is more up on the literature would know that sensory issues often co-occur with more well-known autistic symptoms, and might consider just sensory issues as being part of the same symptom cluster.

It's very unclear if what is diagnosed as autism is one condition, or several.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

As I wrote somewhere on this thread, nobody with a shred of common sense believes that adults able to go to school, get a job, etc. are operating with a grossly deficient theory of mind. What I meant was that the person must have been diagnosed by someone who, when in doubt, calls someone's quirks autism, because as people are pointing out on this thread autism is cool these days and many believe they have it. The Diagnostic and Statistical Manual has a list of criteria people need to meet for an autism diagnosis, and there's no way sensory issues alone would qualify someone. In fact sensory issues are not even among the criteria.

https://www.cdc.gov/ncbddd/autism/hcp-dsm.html

Of course this doc is free to have his own private theory of what autism is, but to avoid confusion maybe he should give his diagnosis another name, like Sensorium Sensitivity Syndrome, which has a nice ring to it.

Expand full comment

Any ideas on *how* autism became cool? It seems to have started around the time that Temple Grandin documentary came out, which would support your theory.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

I think it started a long time before that, although that probably played a role.

In the eighties and nineties, Asperger syndrome was rediscovered and put in the ICD. So people were introduced to the idea that autistic people could be high functioning (or to put it more controversially, that there was a better kind of autism). In the early 2000’s there was a trend of “identifying” people like Einstein or Mozart as having supposedly been autistic, with the suggestion that some autistic people can have special abilities. A psychiatrist called Michael Fitzgerald wrote an article about the potential link between genius and autism in 2004 and it was widely picked up by the press.

The 2003 novel The Curious Incident of the Dog in the Night-Time introduced a lot of people to the concept of high functioning autism as well.

Expand full comment

@anomie

I think you're on to something here, and I think Big Bang Theory highlights the parallels between the cultural concepts of autism and nerdiness.

As far as the internet culture definition of Autism goes (which is related tangentially at best to the medical diagnosis), my impression is that it followed approximately the same trajectory as the concept of "nerd". As in, it started off as an insult. It was then adopted by people who consider themselves outsiders as an ironic and somewhat self-deprecating self description. Through that process, it was mainstreamed enough that it is now synonymous with "somewhat quirky".

Both also follow the dynamic observed in the original comment, where something that was once almost exclusively associated with men becomes "something that women totally are too" once it becomes a socially acceptable identity.

Expand full comment

Also people who were good at coding, math, and related areas started making a ton of money. And just like that being a nerd became cool.

Expand full comment

I hate to say it, but... Big Bang Theory might have had a part in it. At the very least, it popularized the image of people on the spectrum being quirky but intelligent.

Expand full comment

COVID has changed many people's perception of the medical establishment. Some people have gone off the deep end of "it's all a lie," and that's definitely not me, but I have had to significantly revise my priors about the chance that accepted medical wisdom might be based on absolutely nothing. (And, to be clear, my priors on that were not at zero beforehand.)

Which brings me to ... facial hair. It is more or less universally said that facial hair (particularly in men) grows at the same rate regardless of how often you shave it. Even though many men seem to believe otherwise, this belief is called a 'myth'.

Does anyone know if this actually has any basis in the literature, though? I can't find anything to substantiate it. And I'm considering how to set up a proper experiment on myself, because I am like 98% certain that my hair grows back faster after I start shaving more often.

Expand full comment

I always (think I) observe that my facial hair grows back faster for 2-3 days after shaving, then slows down after that.

Expand full comment

> about the chance that accepted medical wisdom might be based on absolutely nothing.

I was very surprised to discover that people in a lot of countries are not told to floss their teeth. A lot of recommendations are pretty arbitrary.

Expand full comment

Yep, medical wisdom can vary deeply between countries. (Sometimes between hospitals.) When I compare Slovakia and Austria, we seem to be two different species.

I think most doctors just believe whatever they were taught at university decades ago, and even that was taught by the previous generation of doctors, etc.

In parallel, there is medical research... and occasionally some of its findings get to the doctors... and occasionally some of those doctors give a lecture at university, and then it finally becomes a part of the established medical wisdom that is approved by high-status doctors. But that process takes a lot of time, on average.

Expand full comment

Covid aside, it seems unlikely to me that shaving or hair-cutting frequency changes growth rate. Hair does not communicate with the living cells in the hair follicle that produce it. Hair is "dead," and has no nerves or blood vessels running through it. How would the follicle get the word about whether the ends of hairs have been snipped?

Expand full comment

Wild guess, but mechanical action? The root of the hair is still attached to the follicle. Longer hairs will transmit physical stimuli - the pulling, etc., that naturally occurs - and these stimuli could impact growth rate.

Or it could simply be that the follicle has an easier time generating new hair when there’s less existing hair to push outwards, but that this effect happens more slowly, so that hair growth takes some time to accelerate or decelerate.

I don’t know the exact mechanism but considering that the two relevant things are physically connected it seems like there are many possible interactions.

Expand full comment

Yes, you’re right. I believe I was thinking too narrowly.

Expand full comment

Also the act of shaving could affect the follicles, so you would need a control that does basically a facial massage instead.

Expand full comment

You could shave at random intervals, with delays between 1 and 5 days or something and weigh the hair that you cut. Plot the weight against the delay. Fit the data with a polynomial. Do the residuals anticorrelate with the previous delay? I guess you need a very precise scale though.
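
For what it's worth, the analysis step described here can be sketched in a few lines of Python. This is only an illustration of the procedure, not real data: the variable names are mine and the "measurements" below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: delay since last shave (days) and weight of shaved hair (mg)
delays = rng.integers(1, 6, size=40)
weights = 2.0 * delays + rng.normal(0, 0.3, size=40)  # stand-in measurements

# Fit weight vs. delay with a polynomial, then look at the residuals
coeffs = np.polyfit(delays, weights, deg=2)
residuals = weights - np.polyval(coeffs, delays)

# Does regrowth depend on how recently you shaved *before* each measurement?
prev_delays = np.roll(delays, 1)[1:]  # the delay preceding each measurement
corr = np.corrcoef(prev_delays, residuals[1:])[0, 1]
print(f"correlation of residuals with previous delay: {corr:.2f}")
```

A clearly negative correlation here would support the "shaving more often speeds growth" hypothesis; near zero would support the received wisdom.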

Expand full comment

Wouldn't it be easier to measure the length? Shave the same spot, pick 10 shavings with tweezers, plunk them on a steel ruler, or better yet, use measuring calipers. Average the results out.

If you want to be pedantic and assume normal error distribution, use 32.

Expand full comment

I admit I don't know if I care enough about proving my theory to do that much hair plucking :D.

Expand full comment

Well, no Nobel prize for you then! :)

Expand full comment

That would probably work too, I guess

Expand full comment

The received wisdom is that this is the difference between cut hair (with a blunt tip) and uncut hair (with a tapered tip). The former supposedly feels rougher and gives the impression that it's grown more. Is it true? Who knows?

Expand full comment

I've always heard that too but it doesn't quite explain what I'm seeing, which is the difference in facial hair quantity (or quality) at a point about 24 hours after shaving. If each shave resets your face to 'zero', and hair grows back at the same rate, you would expect your face to be the same after 24 hours, I think.

Expand full comment

>grows at the same rate regardless of how often you shave it.<

To clarify: are you saying you will grow the same amount of hair, or that the hair that grows will grow at the same speed? The second seems very accurate; if I shave and then stop shaving, there are very predictable levels of growth each morning. The first I'm less sure on, but I can say I've been shaving for twenty years, then stopped shaving for a full month, and still can't grow a decent beard.

I guess if you didn't have anywhere to go for a bit, you could shave half your face and compare its growth to the other half.

Expand full comment

To be fair I had not considered the "number vs. length" angle. I feel what I'm observing is greater hair length but I suppose the other is possible.

Yeah, if I didn't need to appear in public, I think your half-and-half approach would be the way to go. Much easier than all the other methods of measuring.

Expand full comment

So far as evidence against accepted medical wisdom, I think the best modern case is the advice, given for a long time by more or less everyone in a relevant position, to use margarine instead of butter. The margarine at the time was hydrogenated vegetable oil, high in trans fats, which we now know are much more dangerous than saturated fats. I don't know how many people died of heart attacks as a result of that advice but my guess is in the millions.

Expand full comment

I've more or less arrived at the point where I don't trust a single thing coming out of the nutritional sciences, at least when it comes to making health decisions in regards to my diet (speaking as a person in good health).

Expand full comment

I think nutrition advice is exceptionally bad. Among other reasons, because there is a lot of lobbying by the food producers. Producers of unhealthy food will fight just as hard as the tobacco industry.

Expand full comment

That's a very big one for me, too. It clearly shows that it's possible for weak-or-wrong findings to become the universal truth, and that should be a cautionary tale.

At the same time, it's not meaningless that the medical establishment did, eventually, change course. Being slow to recognize scientific truth is very different from rejecting it and holding that rejection for non-scientific reasons.

Also, I think there's a danger in over-generalizing here. If you could zoom out to a ridiculously high level, and reduce your choices to either (a) follow the general advice of the medical establishment and listen to your doctor or (b) ignore it entirely, I am pretty confident that (a) would result in greater health and a longer life. The fact that medicine sometimes gets something wrong, even extremely wrong, is not an argument for wholesale rejection.

But it is evidence for caution, and perhaps for active skepticism for any recent 'wisdom' that seems to fly in the face of prior convention. Or, more saliently, it's reason to try to understand the difference between (a) the researchers who are trying to find the truth and (b) the public officials, activists, and spokespeople who are trying to turn those findings into directives about how to live our lives.

Expand full comment

I agree that if the only choice is to follow the establishment advice or ignore it entirely, following is the better option. But I think you should take that advice as evidence, well short of proof, of what you should do. I can't think of any case where I act against establishment medical advice; there is one where I act according to non-establishment advice (Bredesen's protocol), but not advice that goes against the establishment advice.

But in a number of non-medical areas I have looked at the evidence carefully enough to conclude that the establishment orthodoxy is probably wrong. Those, unlike medicine, were areas where I thought I had some relevant expertise.

Expand full comment

If you don't follow medical advice, you still have some sources of information. There's custom, which might not be disastrously bad. And there's keeping track of how you feel, which might give some hints about what's good or bad for you.

Expand full comment

Relatedly, the other day I was looking into the claim that adding cream to coffee does/does not affect whether or not it stains one's teeth. Everything I found on the internet said that the idea that it does affect tooth stains is based on a misconception that lighter colored liquid will have less tooth-staining power, which is a myth, and said that if you want to avoid staining your teeth you should just drink coffee with a straw (iced coffee, I guess?). But my partner's claim was that the cream would change the pH of the liquid, which might affect its tooth-staining power, which sounds potentially plausible, and wasn't addressed in any of the discussions.

I very much don't trust the "evidence-based medicine" community on a lot of things, because they very much confuse "no evidence in the form of a randomized clinical trial" for "it doesn't happen", and I do wonder how much standardized medical advice, especially on minor things like shaving and tooth staining, is of this form.

Expand full comment

The more important piece is the fact that the casein in milk binds to the tannins in coffee: https://skeptics.stackexchange.com/questions/6029/does-adding-milk-or-cream-in-the-coffee-help-reduce-teeth-stain

Expand full comment

I mentioned this to my partner, and he said this was in fact his claim - not pH, but that whatever the staining compound is would be bound preferentially by something in milk. He suspects it could be achieved by substances other than milk too - anything that would be usable to make a clarified milk punch: https://punchdrink.com/articles/clarified-milk-punch-techniques-recipes/

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Huh… I’m a black coffee drinker, no stains. Not sure why.

Expand full comment

There's probably all sorts of factors for individual teeth that determine how stained they get, beyond mere quantity of exposure to staining compounds!

Expand full comment

I always drink coffee with cream or milk. I have coffee-stained teeth.

Expand full comment

The claim wouldn't be that it prevents all staining, just that it reduces staining compared to black coffee.

Expand full comment

What supplements and medications do people take?

Expand full comment

Does coffee/tea count? Because I take a ton of that.

Expand full comment

Medications: Metformin, Repatha (because I don't trust statins), Lisinopril

Supplements: CoQ10, MSM, glucosamine sulfate, D3 (prescribed)

Expand full comment

Metformin for diabetes or longevity?

Expand full comment

Metformin for diabetes.

My control of my blood sugar has surprisingly improved. I can handle carbs better than I used to, though I still need the metformin. I haven't added significant exercise or lost weight.

Possible theories: I'm pretty good about not eating at night (fast from about 7PM to 7AM or so). Qi gong has paid off, though this particular improvement doesn't seem to be common.

Expand full comment

I'm a middle-aged powerlifter who's done some Tin Man triathlons and does cardio 5x a week in addition to lifting 2x-3x my bodyweight in the 3 compound lifts:

Fish oil

Creatine

Testosterone (hugest boost to quality of life available in medications/supplements, IMO)

Bergamot (prophylactically to offset testosterone's effect on lipids)

Telmisartan (prophylactically for T's potential effect on blood pressure)

Sirolimus (8mg over weekends for mTOR downregulation)

Metformin (theoretically anti-aging)

GSH / Glutathione injections (theoretically anti-aging)

You can get half of these from Ageless RX, who I heartily recommend to all and sundry (particularly those who want to try sirolimus / rapamycin and have trouble finding it).

Expand full comment

May I ask how much creatine you’re taking?

Expand full comment

Just 5g a day. There are some in my gym cohort who recommend .1 g per kg daily (and you can find plenty of pubmed studies that recommend that too), so I'm under by 3g a day by that metric, but the couple of times I've tried more for a few weeks I didn't really notice a difference.

I do eat red meat, so I figured there's probably a ceiling to supplementation's value.

Expand full comment

Thank you. Another commenter here mentioned 5 g/day as a sensible target with rapidly diminishing returns beyond that too. I'm smaller, so it comes out to less than 2 g under the 0.1 g/kg limit for me anyway.

Expand full comment

How could you be under 20kg / 44lbs? Because 20kg bodyweight would be 2g a day. You'd have to be a small child.

The smallest adult I've known is ~90 lbs, which is 40kg, which would be 4g / day, so something in your math may be off. Or you may be a small child, in which case kudos on your reading tastes!

Expand full comment

Oh, no, I meant my 5 g/day makes it <2g deficit re. 0.1g/kg. I'm about 65..66 kg. Long time past being a small child :)
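
For reference, the 0.1 g/kg arithmetic used in this sub-thread as a trivial Python helper (the function name is mine; 0.1 g/kg is just the guideline cited upthread, not an established recommendation):

```python
def creatine_dose_g(body_mass_kg, rate_g_per_kg=0.1):
    """Daily creatine dose at the 0.1 g/kg guideline cited in the thread."""
    return rate_g_per_kg * body_mass_kg

print(creatine_dose_g(65))  # -> 6.5, so a 5 g/day habit is ~1.5 g under
print(creatine_dose_g(85))  # -> 8.5
```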

Expand full comment

Also how heavy are you? 2x-3x body weight on bench press is rather serious.

Expand full comment

I compete in the 181lb weight class, and my totals adding bench/squat/deadlift are 1100-1400 lbs depending on how much cardio, rock climbing, or triathlon / other training I'm doing. I'm usually rolling around at 185-190lbs bodyweight though, and cut down for competition.

I've been doing testosterone for 7 years or so, and definitely see differences. I've cycled off with PCT a couple of times because I was paranoid that I was suppressing my endogenous HPA axis capabilities and would need to take it the rest of my life. But the differences in training volume, recovery capacity, and libido the few times I did this were significant, such that I've just bitten the bullet and decided screw my endogenous HPA axis, I can be dependent on external T the rest of my life.

From a macro perspective, I can tell you that older me is stronger and fitter across the board than younger me, and I was no slouch when I was younger. I'd estimate this is probably at least 30-50% due to taking T - the other 50-70% is having dialed in better eating habits, training methods, intensities, and recovery strategies over the decades.

Expand full comment

You seem thoughtful and procedural. Do you have written content I can read?

Expand full comment

Alas, I don't. I've thought about it, because I'm my social group's go-to for training or diet advice, but I figure there are so many other extremely high quality authors / bloggers / influencers out there that it wouldn't be a great use of my time.

However, I wholeheartedly recommend any content from Greg Nuckols, Mike Israetel, or any of the Renaissance Periodization ebooks, all of whom have a thoughtful, evidence-based approach to training / diet / supplementation recommendations.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

How long have you been taking testosterone? I heard the positive effects tend to revert after some time.

Expand full comment

Aspirin, semaglutide, magnesium.

Expand full comment

Fish oil and creatine monohydrate.

Expand full comment

1. There are exactly four siblings in this family that can say truthfully "I have exactly three brothers". How many girls can there be among the siblings?

2. Kids are standing in a circle. Eight of them are standing between two girls, while the remaining six are standing between a boy and a girl. How many girls are there?

Expand full comment

1. 0 girls. 4 siblings means GBBB or BBBB, and GBBB is illegal because none of the boys have 3 brothers.

2. My first intuitive answer was wrong so I need to think about the answer again.

Expand full comment

Ok after rereading the question, any number of girls works. BBBB, BBBBG, BBBBGG, etc.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Suppose a family of seven children, three boys and four girls. All the girls can say "I have three brothers" but none of the boys can say that.

EDIT: Family of four boys, no girls also works. So the answer is either zero or four, depending :-)

Expand full comment

Thanks, I misread the question as "There are exactly 4 siblings, and"

Expand full comment

re 1: Four girls, three boys also works though.

Expand full comment
Dec 6, 2023·edited Dec 6, 2023

Thanks, I misread the question as "There are exactly 4 siblings, and."

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

A Dwemer said, 'Nothing is of any use. We must go and misinterpret this.'

1. infinite. Four boys, infinite girls. (There's another answer but... I don't feel up to the politics of it.)

2. Three. The kids are standing in a circle; they are not the circle. The circle is a chalk outline around them. They are forming two straight lines, the first with a girl on each end, the second with a boy on one end and a girl on the other.

Expand full comment

1. Any number - there could be three boys and four girls, or four boys and any number of girls.

2. Assuming "between" means "directly between" and this isn't a trick question on that basis, there are 14 kids total and exactly 6 have a single adjacent boy, but each boy is adjacent to two people and therefore only 3 are required, with the remaining 11 being girls.
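
The answer can also be checked by brute force; a short Python script (my own, just enumerating every boy/girl assignment around a 14-kid circle) confirms that 11 girls is the only possibility:

```python
from itertools import product

def counts(circle):
    """Return (#kids between two girls, #kids between a boy and a girl)."""
    n = len(circle)
    both_girls = mixed = 0
    for i in range(n):
        left, right = circle[(i - 1) % n], circle[(i + 1) % n]
        if left == 'G' and right == 'G':
            both_girls += 1
        elif left != right:
            mixed += 1
    return both_girls, mixed

# Keep every arrangement matching the puzzle (8 between girls, 6 mixed)
girl_totals = {c.count('G') for c in product('BG', repeat=14)
               if counts(c) == (8, 6)}
print(girl_totals)  # -> {11}
```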

Expand full comment

1. Correct.

2. No trick, "between" means "directly between". Correct, and a remarkably short way of getting to the right answer!

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

1. Either four or none.

2. Elevensies.

Edit: Though I have to say, Facebook/Xwitter have accustomed me to the possibility that these kinds of posts are pure engagement bait. If there's a substack version of that, then I regret falling for it.

Expand full comment

I am very much convinced that longtermism (worrying about what we can do now to facilitate a thriving galactic population some long time down the road) is a fool's errand and a waste of money, beyond an occasional blog post. I outlined my reasons before, but basically they are "any statement with a long-term horizon that does not rely on tested models is a Knightian uncertainty. You cannot meaningfully assign a probability to it, because you cannot be calibrated on such statements."

For example, we can meaningfully estimate probabilities of the global temperature change at some point in the future (though not very accurately) given what we know. We can meaningfully estimate a probability of an asteroid impact by a certain date, because we have a good and well tested underlying theory. We cannot meaningfully estimate the probability of aliens contacting us, or of AGI coming into existence let alone wiping us out, or in general of how our actions now will affect humanity 100 years into the future, let alone 10000 years into the future.

It is probably good in expectation to do our best not to screw things up for the future generations by following common sense ideas of conservation, technological progress, alleviating poverty and immediate suffering, etc. There is no way to estimate goodness in expectation of the more esoteric interventions being considered by some EA types. I wish they deprioritized those in both effort and profile, similarly to what NASA does with the Advanced Propulsion Physics Laboratory.

Expand full comment
author

This is totally EA's fault, because it's pushed the "long-termism" claim, but I think almost nothing hinges on long-termism and it was a mistake to claim we are using it for any practical decision (Will MacAskill cares about it, but he is a philosopher and allowed to care about irrelevant things). See https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk

Expand full comment

I have read your Forum post. I'm sorry if I'm misunderstanding it, because I'm still confused about what exactly I've been reading/hearing AI safety advocates say about cause prioritization in recent years.

Consider two ways of supporting the claim that 'It is much more worthwhile for us to address unaligned AI than other catastrophic risks, like nuclear war and bioterror.'

-option 1: "Unaligned AI has a 1/3 probability of catastrophic impact on humanity this century. That's higher than other catastrophic risks."

-option 2: "Unaligned AI has a 1/30 probability of catastrophic impact on humanity this century. That's lower than other catastrophic risks. However, the unconditional probability of *extinction* is [much] higher. Thus, AI is jeopardizing trillions of future lives in a way that other catastrophic risks are not."

Doesn't the term 'longtermism' capture why some AI safety advocates feel justified in citing option 2, and others don't?

If so, isn't longtermism highly relevant for the questions of policy-making and allocating money?

I could ask this with a bit more nuance with 3x the text length, factoring things like tractability and neglectedness, but I think the basic question about comparing impact is fairly clear without that.

Expand full comment

I agree with most of what you are saying there, except for the crucial point that made longtermism look like a viable cause and not just something for a philosopher to write papers about:

> A 1/1 million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%.

No, you cannot "do much better"! Because you hit Knightian uncertainty and "probability" stops being a good model for what might happen. Surely something **will** happen. But you can't usefully assign a probability to it. You are not calibrated on out of distribution events. Even if you are the best superforecaster out there, whose calibration is best in class and who routinely makes a killing on prediction markets, "preventing apocalypse" in the longtermist sense is so far out of distribution, any number assigned as its probability would be pure fiction with no grounding in reality. I assume you agree that probability is in the mind, and it is important to understand when you hit the limits of bounded rationality. Your own concept of a "Schelling fence" is useful to apply here, as a limit of how far you are willing to push your logic and rely on its conclusions.
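
For concreteness, the arithmetic behind the quoted figures (the 7-billion population is the figure the quote implies; the resulting cost per life is in line with commonly cited GiveWell-style estimates):

```python
population = 7_000_000_000   # implied by the quote's "7,000 lives" figure
odds = 1_000_000             # "a 1/1 million chance of preventing apocalypse"
lives = population / odds
print(lives)                 # -> 7000.0 expected lives saved

cost_per_life = 30_000_000 / lives
print(round(cost_per_life))  # -> 4286 dollars per life, GiveWell-style
```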

Expand full comment
author
Dec 5, 2023·edited Dec 5, 2023

I don't think that's true.

The only study of very long-range predictions (which was only 20 years, but that's the timescale we're talking about for AI and pandemics) showed that good predictors continued to predict better than chance (and better than bad predictors).

More philosophically, there's no specific horizon after which prediction becomes impossible (eg an exactly 25 year cutoff). It seems more like predictability decays as time goes on. That means prediction isn't "impossible", just weaker. You can try to quantify how much weaker, and see whether your weak prediction of 25 years from now makes the future more or less affectable than your strong prediction of 5 years from now.

Just to give an example, I think researching UV-C methods that kill all airborne pathogens and getting them placed in every building is pretty likely to help with a pandemic 50 years from now! I agree there are many ways this can go wrong - maybe somehow you failed catastrophically in testing UV-C and it doesn't work at all, or maybe you do this and then 10 years from now someone else uninstalls all of them (at great expense - why would they do this?), but I'd be surprised if it were exactly as likely to kill 50 million people as to save 50 million people. We can see many cases of people in the past saying we should do X, and now we can say with hindsight that if they'd done X, we would be better off today (eg nip global warming in the bud).

If you think most "long term" interventions are more dubious than the UV-C one, that's an argument for switching your long-term thinking from more dubious ones to more certain ones like UV-C, not for "Knightian uncertainty" as an inherent property of long-term thinking.

Expand full comment

> More philosophically, there's no specific horizon after which prediction becomes impossible (eg an exactly 25 year cutoff).

This is of course true, but not entirely relevant. If we agree that e.g. predicting events 10 years from now is valuable, but 500 years from now is virtually impossible, then we are essentially in agreement even if I place the arbitrary asymptotic threshold at 100 years and you place it at 128.

> Just to give an example, I think researching UV-C methods that kill all airborne pathogens and getting them placed in every building is pretty likely to help with a pandemic 50 years from now!

But this is the opposite of long-termism ! I don't know exactly what kind of research you're performing, but I bet that you're coming up with results such as "this many lumens of UV-C kills this percentage of these types of pathogens in this volume of air" (as well as hopefully "...and causes this little damage to human skin and retinas"). You are collecting real (and immediate) test data for a mechanism that is well understood. Your uncertainty lies in the probability of the next pandemic, as well as its source (i.e. perhaps it is caused by some UV-resistant organism), but even there we (sadly) have plenty of existing data to work from, and plenty of experiments we could perform (and in fact are performing, hopefully in secure biolabs). By contrast, if you were trying to estimate economic implications of a global pandemic 50 years from now as projected 100 additional years into the future, then you'd be engaged in long-termism (i.e. unfounded speculation).

Expand full comment

Hmm, I think there are two different points I am trying (poorly) to gesture at.

One is that predictions that rely on "educated guessing" rather than science (e.g. AI vs UV-C sanitizing, or ant suffering vs the greenhouse effect from CO2 emissions) tend to age much faster. I.e. we can be confident that if we keep pushing CO2 up, we will end up in something like another Eocene eventually. If we apply UV-C successfully at scale, we will kill currently known airborne pathogens, even if it happens 50 years down the road. On the other hand, our "best guesses" have a much shorter shelf life.

The other point is whether the intervention will be net good or not, even if we can narrowly predict its intended effects. For example, ubiquitous UV-C sanitizing might lead to the emergence of UV-resistant superbugs. Or to the extinction of some vital part of the ecosystem that relied on those pathogens being present. Or to the human immune system being underutilized and going all allergenic on the body. The intervention would have achieved the desired effect, but the unintended consequences would turn the positive into a negative. My contention is that these unforeseen consequences are much likelier when we talk about long-term effects than when we talk about short-term effects. For example, without the last ice age the suffering humanoids might never have left their habitats and started developing the brain that let them overcome the adversity. Or maybe without that ice age we would have developed much more slowly and would not have faced this natural vs artificial intelligence issue that EA is so focused on. Another example is the nuclear weapon use on Japan: however bad it was, it may have scared the two superpowers enough to avoid a nuclear confrontation.

So, even if you do your best science and assign sensible probabilities to the intended effects, the net good/bad evaluation gets Knightian as the time horizon expands. Best you can do is to focus on the near term: research carbon capture, or fusion, or harmless UV-C emitters, without making claims about their net good far down the road.

Expand full comment

Or it might turn out that there are microbes which are good for us to the extent that breathing too much sterilized air is a net cost.

Expand full comment

Yeah, exactly. This is the difference between intended and unintended consequences. The unintended ones have a lot shorter time horizon.

Expand full comment

Freddie deBoer has a post based on how "I hate myself and I want to die" is a more or less universal feeling to have had at some point during adolescence. This feels wrong to me. I'd appreciate a question about this at a future ACX survey!

Expand full comment

I remember feeling smug in my twenties that I didn't have any desire to kill myself.

On the other hand, I remember a moment (rather younger) of hoping this world was a dream I could wake up from.

Expand full comment

Was never bullied at school or anything like that. First felt I wanted to kill myself aged around 9 or 10. No reason that I can point to and say "it was because of this or that", I just wanted to be dead.

Off and on ever since up to now, yes, "I hate myself and I want to die". Oh well, at least my GP told me that just thinking about killing yourself isn't serious, it's not unless I actually self-harmed or tried suicide that intervention was necessary! 😁

Expand full comment

I definitely don't remember ever feeling like that.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

I definitely had both thoughts in adolescence: Hating myself -- for sure. 2 zits on my chin was enough to set it off. But hating myself really wasn't connected with thoughts of wanting to die. I don't remember actually wanting to die, but I definitely thought suicide was cool. So dark, scary and brave -- wow. I definitely thrilled to the thought of how fascinated and amazed people would be if I committed suicide, and how people would be crying in the halls at high school. If I'd known I'd be able to come back as an invisible spirit and watch all that I might have been more tempted! Adolescents are almost all sort of crazy.

Expand full comment

Surveys of adolescents yield a high percentage of kids, way above 50%, who say they've thought about committing suicide. And something like 40% have deliberately self-injured.

Expand full comment

"Thought about committing suicide" is a very weak criterion. Just hearing someone say the words "Have you ever thought about committing suicide" is usually enough to make someone think about committing suicide; I at least have to think about what thinking about suicide might entail, and it's very difficult to think about thinking about suicide without actually thinking about suicide. I'm thinking about suicide right now, and you probably are too!

Expand full comment

Yeah, I see what you mean. I'm not sure how the question was phrased in the questionnaire this result was based on.

Expand full comment

I never literally wanted to die or anything close to it, but I would definitely cringe at myself so hard that “I hate myself and want to die” would be a reasonable way of verbalizing it.

Expand full comment

I would have to read the article to see how seriously he's taking it; it reads as hyperbole to me. Self-esteem issues are nigh universal, suicidally bad ones are not.

Expand full comment

I have no doubt that Freddie DeBoer probably has these issues. I feel like he should know better than to attribute his own problems to everyone else though.

The bullied-teen to adult leftist pipeline is real, and is probably the strongest argument we have for cracking down on bullying.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

i think i've felt that way, but i believe puberty's wild mood swings and a teenager's vulnerability to public disapproval and shame cause it. not that you randomly think it but as a reaction; "i wish i could crawl into a hole and die" after public shame.

(havent been a teen in 35+ years. Also it was different in the 80s; i think anti-bullying efforts really changed modern adolescence)

Expand full comment

Do you have any actual evidence that anti-bullying efforts have been all that successful? I was a teen in the 2000s and I was bullied a lot (never hated myself or wanted to die, though). Then in the 2010s the world seemed to try to convince me that bullying had become rare. Then in 2021/22 I spent a year in school training to be a teacher (at two good schools in the UK, actually), and I didn't come out of that year convinced that bullying had decreased - maybe it had just mutated from physical violence to psychological/social warfare (which personally I've always found worse, and which I'd expect to be particularly bad for these types of thoughts).

Expand full comment

Probably depends on country (and school), but I think what changed recently is the perspective on bullying.

Some time ago, if you were bullied (as a boy), the social response would be that you are pathetic and you simply need to fight back. (Problem is, if you fight back and lose and they continue to bully you, you are still pathetic and deserve it, for not fighting back *harder*. But if you happen to kill someone, then you overreacted and go to prison, of course.) Today, many people will turn a blind eye, but if they are confronted with the situation, they can no longer officially approve of bullying.

For example I was bullied at school for some time, then one day I started fighting back and it stopped. Not because I was stronger or harmed the bully in any way (my "fighting back" consisted mostly of standing up and continuing to do what the bully told me not to do, regardless of the pain, repeatedly), but because the situation escalated so much that people *started noticing*, and the bully could no longer deny what was happening. If he continued, he would have to be punished; he could no longer use an excuse like "we were just playing", and no teacher could pretend that it sounds credible.

> maybe just mutated more from physical violence to psychological/social warfare

You can use physical violence against people of average popularity, but social violence only against people who are already unpopular. Which means fewer potential victims. Also, everyone is vulnerable to physical violence, but different people have different resistance against social violence. I wish I could say something more optimistic, but that's it. A relative improvement, not a perfect solution.

Expand full comment

Thank you for sharing your experience. I appreciate the insight and agree something like that is probably at work here - and may very well be driving an actual decline in physical bullying.

I'm not so sure social violence can only be used against the already unpopular. I've certainly seen popular kids turned pariahs thanks to it both as a student and as a teacher.

Expand full comment

> I've certainly seen popular kids turned pariahs thanks to it both as a student and as a teacher.

This sounds like a potentially interesting story.

On average I would expect the popular kids to be the ones who coordinate the social violence. Which is why the story would be more interesting when it is the other way round. As a likely explanation, I can most easily imagine one popular kid waging warfare against another. Or someone having a secret that gets exposed (such as being gay).

Expand full comment

Yes, it's popular-on-popular warfare I had in mind here. I'm not disputing popular kids are less vulnerable to bullying of all forms.

Expand full comment

https://www.stopbullying.gov/resources/facts has modern statistics for the usa. unfortunately i dont know where i can get statistics on bullying in the 80s; seems like it wasn't studied as much during my childhood years. it mentions physical violence is in the single digits and that only half of incidents get reported.

i mean in the 80s you could watch a disney film like candleshoe or a short like Goliath 2 and thats kind of how casual fistfights between kids were treated. Kids getting black eyes-the "shiner"-was a trope. In the usa, the old christmas classic "A Christmas Story" has Ralphie snap and beat up his bully till they are bloody. i think Columbine sort of changed american laissez-faire to bullying some.

i mean teachers are a lot more responsive now at least.

Expand full comment
author

I've only sort of reached the fringes of actually feeling this way, but I definitely have a part of my brain that subvocalizes it (in those exact words) sometimes, even when I don't agree.

Expand full comment

I read Freddie's essay to be reaching for that fringe element in each of us.

Expand full comment

Is this a mood thing or an intrusive thoughts thing? A small voice in my brain often votes for me to die or suffer horribly (e.g., "you should jump off this building") but it's because of OCD and not depression or self-loathing.

Expand full comment

The success of rock/pop songs, novels and movies depicting teenage self-hatred is good evidence that it is, at the least, a relatable sentiment to a mass audience.

Expand full comment

> "I hate myself and I want to die" is a more or less universal feeling to have had at some point during adolescence

I don't remember ever wanting to die. Quite the opposite, I am one of those who think that immortality would be nice. So many things to learn, so many things to do... So little time.

Also, the idea of hating myself is foreign to me. I can feel bad about my body. I can regret something stupid I said or did. I can feel inadequate. But hate? It doesn't even make sense how I could hate myself. I mean, perhaps if I was a bad person... but nope, even that doesn't really make sense, because a bad person wouldn't hate someone for being a bad person, would he?

I suppose this is another of those "more or less universal" things that I simply don't get.

Expand full comment

"a bad person wouldn't hate someone for being a bad person, would he?"

Oh yes he would. And himself too. Though he might never admit it. I only did for the first time this year, at 32.

Expand full comment

>I mean, perhaps if I was a bad person... but nope, even that doesn't really make sense, because a bad person wouldn't hate someone for being a bad person, would he?

Yeah I'm not even sure what it would mean to hate myself. I mean, the thing doing the hating IS "myself".

Expand full comment

My self hatred comes from the exquisitely obvious solutions to my problems and my utter lack of willpower to make them happen. The only thing keeping me alive is the hope that this might someday change, even as year after year passes I know it probably won't. I think the self-hatred is deeper and older than I'm willing to admit to myself.

Expand full comment

For some reason, the emotion such situations evoke in me is regret, not hate.

Like, things in my life seem... not really bad (I am currently not suffering in any way), but only a small fraction of what "could be" if I just... procrastinated less, had more courage, or perhaps got some support and encouragement from outside to compensate for my weaknesses... so it's a pity, but... that's just how things are, my brain is a part of the causal network of the universe, it is what it is even if I don't like it, I wish it was different, but it is not. (And yes, one day things may randomly change, such things have happened in the past. But the probability seems low.) I still don't see why I should *hate* anyone, especially myself. It's just frustrating and sad.

Expand full comment

If you're able to convince your sub-conscious of the truth of determinism, more power to you. I don't have that kind of fundamental introspection. It feels like it's my fault even if I know it's not.

Expand full comment

> the truth of determinism

When you drink alcohol, does it make you drunk? When you later digest it, does it make you sober? When you exercise, does it increase your "energy" by increasing your heartbeat rate? When you walk outside on a sunny day, does it improve your mood? Have you ever done something because of peer pressure, or because you knew that someone else needed it?

Read this: https://www.lesswrong.com/tag/trivial-inconvenience

and maybe also this: https://www.lesswrong.com/tag/checklists

There are examples of determinism all around us. You probably experience some of them every day. Yet people prefer to explain things using "willpower", which is essentially a religious concept (a bullshit explanation why an infinitely good God would create sinners to be tortured in Hell for eternity).

Try treating yourself as if you had no free will. More precisely, as if you had a free will at this moment, but you won't have it e.g. tomorrow. So your task for now is to increase the probability, using deterministic means, of doing the right thing tomorrow.

For example, if tomorrow you want to go to a gym, make all related decisions today. Which gym? At what time? Which exercises will you do? (Write them down.) Prepare the things you will need into a bag and hang it on your door handle. Figure out when you need to leave your home, and set up an alarm.

(Advanced level: make a checklist of all these steps. Print it. Put it into a "checklists" binder on your table. The next time you decide you should go to the gym, the first step is to put the checklist on your table. Pro level: if anything goes wrong, reflect on it and update the checklist. On the opposite side of the paper, keep a log, which days you went to the gym, how many repetitions you did with each weight, etc.)

If this happens to help you accomplish something, you will have further evidence for determinism. At least, you might be able to redirect your anger towards improving the checklists.

Expand full comment

Even though my years between 12-15 sucked quite hard, I have in fact never seriously felt in any way that I want to die.

Expand full comment

The malaria net thing seemed familiar and indeed it has been tried by others.

**Meant to Keep Malaria Out, Mosquito Nets Are Used to Haul Fish In**

"... the Global Fund to Fight AIDS, Tuberculosis and Malaria, which has financed the purchase of 450 million nets." etc. However, there were some unintended consequences.

https://www.nytimes.com/2015/01/25/world/africa/mosquito-nets-for-malaria-spawn-new-epidemic-overfishing.html

Expand full comment

> I’m a big fan of the philosophical principles behind EA. I’m also mostly a big fan of the community ... but ... it’s also included bad actors, and friends have reminded me to remind you not to suspend normal healthy skepticism just because someone’s in a community with a good philosophy.

I'm the opposite. I think well of most of the people in the community (minus the obvious bad actors), but I think the philosophy is horrible. I've explained why in many separate comments (e.g., on utilitarianism).

Expand full comment

I'm looking for a therapist. I'm not sure how to do that. My last two experiences in therapist seeking were pleasant, if questionably useful.

Looking for any advice on therapist shopping.

Expand full comment

For me the tricky part was finding someone who took my insurance, I didn’t want to pay out of pocket. If you’re planning to pay out of pocket then this advice isn’t super helpful.

I liked Alma the website/app for actually maintaining a reasonable database with accurate info of who took my insurance. It had good profiles and I found someone I liked. My insurance’s website both had inaccurate information and practically no bios.

Expand full comment

This is maybe too obvious to be worth mentioning, but:

I would recommend going to your insurance's website and getting their list of in-network therapists near you. Then go down the list and read whatever bio the therapist provides. Find one (or multiple) that resonate emotionally and contact that one. Finding a bio that connects is the important part--in my experience having someone you vibe with is critical in a therapist in a way it isn't in a doctor.

This was the approach that worked for me after a number of terrible experiences where I just picked whatever person seemed most convenient or had specialty areas listed that lined up with my issues or w/e. Might be I just got lucky, though.

Expand full comment

AI X-riskers have definitely not lost the PR war. Two weeks ago The New Yorker dedicated an issue to AI which included a profile of Geoffrey Hinton and his X-risk fears. Yesterday's NYT had front-page articles on AI and the safety wars behind the scenes. The big one was by our old friend Cade Metz. What do people think of that one?

Expand full comment

+972 Magazine recently published detailed reporting about the IDF's joint targeting process and how they use AI to automatically select, generate, and nominate target packs for human review. The rate of target processing is unprecedented for the IDF, with the system nominating upwards of 100 targets per day. This is a glimpse at algorithmic warfare, and what the joining of AI systems with advanced Intelligence Surveillance Target Acquisition Reconnaissance (ISTAR) and strike capabilities can do. Given that Hamas and Islamic Jihad cannot counter Israeli air superiority except by concealment (e.g. underground), this might be one of the most lopsided strike campaigns in history.

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

The BBC published a story about how the Israelis are weaving collateral damage management and PsyOps into the strike campaign:

https://www.bbc.com/news/world-middle-east-67327079

So, a friendly reminder to the e/acc out there, all technology is dual use. There is nothing invented for peaceful ends which cannot be bent to destructive ones.

Expand full comment
founding

"There is nothing invented for peaceful ends which cannot be bent to destructive ones."

You say that like it's a bad thing. Some things need to be destroyed, and ideally we'll want to do that as efficiently as possible. Israel is pretty good at destroying things that need to be destroyed, but there's certainly room for improvement. So what I want to know is, how good a job is the AI+human team doing, compared to the prior human-only system?

Expand full comment

It's neither good nor bad, it simply is. There are a few points of concern, but the big one for me is the potential for Goodharting the targeting process. Efficiency is an enabler not an end in itself. "Before the upgrade we were going nowhere, now we're going nowhere fast." Strategic bombing campaigns which deliberately target civilian housing historically haven't achieved strategic aims (if anything, they promote solidarity in the targeted pop.) but they produce easy-to-measure effects: number of targets destroyed, munitions expended, sorties launched etc. War is a fundamentally human endeavor, won and lost in the hearts of men. I see potential for AI enabled killchains to optimize towards more destruction without actually getting us closer to achieving war aims, because it's easier to make the metrics go up than to get a sense of whether these metrics are meaningfully contributing to the strategic end state.

Contra ~hanfel-dovned's account, [https://open.substack.com/pub/astralcodexten/p/open-thread-305?r=1xlsw1&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=44780519], acceleration isn't just video games and novel dopamine drips, it's war. The western MIC was e/acc before it was cool.

Expand full comment
author
Dec 4, 2023·edited Dec 4, 2023

This doesn't seem that interesting to me. Airplanes are also used for bombing; the fact that airplanes are "dual use" shouldn't have deterred the Wright Brothers, and so on to every other technology (electricity, computers, rocketry, steel, aluminum, telecommunications, etc). Every new technology being "dual use" is deeply predictable, and I don't think it should deter people who otherwise think it's a good idea.

Expand full comment

Interestingly, the Wrights actively marketed the concept of aerial bombing to the U.S. Army. (I live not far from the still-operating airfield where they demoed the concept, and there is a museum there with photos and various pertinent artifacts.)

Expand full comment

Targeting is a discipline that drives a lot of military activity from behind the scenes. It's a decision making process about who to kill with what, where, and when. Decide/Detect/Deliver/Assess and Find/Fix/Finish/Exploit are the elevator pitches that get thrown around. Targeting involves Collateral Damage Estimates and lawyers advising commanders on whether or not to authorize a strike. It's staff work intensive and probably the closest thing to being a full time trolley problem decider.

Automating a significant portion of the targeting process indicates that an advanced military is already leaning on AI to do foundational work in military ops. It would probably be feasible to link this Detect/Decide system with a Deliver system (things that shoot). Pressure to speed it up even more would prompt moving from a man-in-the-loop to a man-on-the-loop setup. People have been talking about this for a while, so perhaps it's not exciting for that reason, but this is a few steps removed from a fully integrated system that can Decide/Detect/Deliver autonomously and probably do a lot of the Assess part as well. We might be a lot closer to autonomous killchains than most people think, and given all the exploitable data we leave about ourselves, that is concerning. A fully integrated system could generate a target, strike if it's within parameters, then send footage of the strike to friends and family of the deceased for enhanced psychological effect: "this could be you," "sic semper tyrannis," or simply "your son died running like a coward, see?" My jab at e/acc won't convert any believers, but I think this ought to raise eyebrows amongst folks concerned with practical applications of AI.

Expand full comment

I feel like it's helpful to emphasize that your comment seems very scoped to the last paragraph only.

Expand full comment

Last week I wrote an essay nominally about accelerationism. To be candid, I really just wanted to write an essay about Ian Bogost's book Play Anything, saw a decent parallel to e/acc ideas I've seen, and attempted to tie the ideas together. I won't make any claims about AI safety, but I'm curious if this general philosophy of "trusting objects" seems useful to anyone, and if its aesthetics clarify the appeal of accelerationist ways of thinking to people who are otherwise against e/acc: https://warpzone.substack.com/p/accelerationists-just-wanna-have

Expand full comment

I'm as transhumanist as they come, but this seems to favor a mindset and approach to technological advancement and AI risk that maybe 10% of the population can fully embrace given current demographics and demonstrated preferences and neural architectures.

It also entirely elides or ignores the complexities of the social, economic, and political environments we (and technological advance) exist within. It's all well and good to say that radical change is going to happen, and happen so often and on so many fronts that it's overwhelming and you'll have no choice but to continuously adapt (or presumably perish). But what you don't get is that this other 90% of the planet, which doesn't have the neural architecture and risk appetite you do, is going to rise up and fight *hard* against any and all radical changes.

And this includes all the rich old people currently in power, running the politics and corporations of our various nations, ie pretty much everyone with economic, regulatory, and "shut-down-big-R&D-projects" power.

Sure, *maybe* that 90% of most people and power in the world wouldn't be able to resist a technological advance on the order of ASI - but they can sure resist and tamp down or regulate anything below that level.

And if you're betting on ASI to drive this multi-field radical continuous change, be wary - you might just get your wish. But it might be the kind of ground-up "your atoms are now being used in a Dyson sphere" change you don't want.

Expand full comment

I have the opposite impression of people's preferences. Everyone knows on an intellectual level that when they use TikTok "China is stealing their data," but only the weird kids don't use it. So it is with every technology. "Phones are ruining in-person communication!" but no one can look away. I think most people are actually really onboard with throwing caution to the wind and embracing radical new technologies. People who believe in minimalism or primitivism are a very small minority.

What specific step change on the way to ASI do you expect there to be political will to resist? The people in power can't wait for robots to replace jobs and for infinite Netflix to permanently addict the population to its screens, and I don't expect the population to resist that. Who will say no to genetically modifying their baby to prevent horrible disease? And once that's normal: who will say no to genetically modifying their baby to live the best life possible? Who will say no to the jhana helmet that safely gives you the greatest bliss you've ever experienced with no side effects?

I'm not excusing the extinctionists who are cool with unconscious ASI tiling over the observable universe. But almost no one is literally pushing for that outcome, and almost everyone else is highly skeptical that we're building towards a misaligned fast take-off.

Expand full comment

For one thing, gengineering / baby modding is ALREADY tamped down. From a technical ability perspective, we've been able to modify human embryos / babies for 7+ years now. One easy win - there's a non-polygenic genetic variant that greatly reduces the need for sleep in ~1% of the population. I would personally pay $1M+ to put this in a kid, and if I were a billionaire, I would be spending hundreds of millions on a crash program to do this and to spin up the abilities to do massively polygenic modifications so we could actually materially raise IQ and stuff like that.

Can you actually get this done anywhere in the entire world? No, you can't. Even though the technological capability is there. Do you think you're going to be able to do this baby-mod, or even anything like "let's pay for an extra 2 inches in height and blonde hair," anytime in the next decade or two in the USA? I would definitely bet against that. *Maybe* somewhere in the world with less regulation and an appetite and ability for actually using technology. Japan, SE Asia, who knows. Definitely not the US or Europe.

Another one I expect to be heavily resisted if not fully tamped down, in line with your jhana helmets and infinite Netflix: pretty soon, we'll be able to literally read your mind and create a seamless VR Heaven based on tailored-to-you esthetics and desires and contexts. Like lucid dreaming, but with the benefit of GPT-n's creativity and "other people" quality input that makes the real world so interesting, and the lack of which makes lucid dreaming ultimately sterile. Deepdream is already at the point of reading images from your brain directly - how far off is it to read desires? Especially with the amount of data and knowledge FAAMG's have on us? Put in a catheter and an IV nutrition drip, get some minimal UBI going to pay for the electricity and a coffin hotel room, and we could lose 80% of the population to this.

Except in the case that GPT-n has replaced 95% of jobs, I don't expect this to happen, because I expect it to be heavily resisted and regulated or legalized out of existence as long as govs need productive taxpayers.

I mean, we (as in basically every gov) already prohibit personal drug use! Do you think if jhana helmets became half as addictive or disrupting as Fentanyl, we're not going to make them illegal? And it's way easier than drugs, because it will be one or a handful of companies making and selling them. Sure, home garage tinkerers might be able to make one, just like home chemists can make some drugs, but the overall societal effect is going to be basically nil, because outlawing it keeps it away from 90%+ of the population.

I think we have different views and levels of cynicism on government and people-in-powers' motivations and level of execution. Absolutely, anything that becomes a major societal disruption is going to be regulated or tamped out of existence - the gov and people in power need docile little taxpayers going about their business, and will literally do almost anything to keep things that way.

Expand full comment

> Except in the case that GPT-n has replaced 95% of jobs

What's your timeline on this happening? I think 2-3 decades is realistic. Given a shift that huge in people's lives, I think that even strong taboos like those on genetic engineering would fall away.

Expand full comment

Yeah, 2-3 decades sounds plausible to me in about 80% of humanity-including Everett branches from where we stand today - especially if we broaden to include something like "AGI / ASI so disruptive human-level economics has lost most meaning / been superseded."

The other 20% would be "warning shots that actually (and improbably) got unilaterally acted on" and "Butlerian Jihad" and "Benevolent ASI ramps so fast it transcends existence in our physical plane, but reaches back and fiddles things so other ASI's aren't possible again and base humanity continues as a species" and other things like that.

Of course, that's stipulating humanity-including branches, which I think is a definite minority of future Everett branches with ASI, or even 95% job-replacing AGI. I'd love for gengineering or mind-machine tech to be at a point it can keep humans relevant, but I have my doubts it will move nearly as fast as purely silico intelligence.

But I really think the transition is going to be *brutal.* Sure, people say "buy land near urban centers, in a post-scarcity economy it's one of the few things that's still scarce!" But I think that ignores the vast probability of govs taxing real estate into oblivion when trying to support the ever growing 60% --> 95% parts of the pop who are permanently unemployed thanks to GPT-n, not to mention the societal unrest in major urban centers that will make the Covid unrest and changes look like a walk in the park. It'll be Georgist taxation if we're *lucky!*

Have you read much Cory Doctorow? He's the only author I know of who has taken a couple of cracks at exploring this transitional period before we hammer out some sort of functional post-scarcity societal, economic, and government schemas.

I think the thing to remember is that in 1-3 decades when monumental societal change is TRYING to happen, it's going to be this same 90% of old rich people / power that will be vigorously opposing and generally screwing things up for the vast majority of people as they desperately try to cling to whatever power and relevance they have in the face of those changes.

Personally, my strategy is an off-the-grid fallback ranch with friends, family, and physical security, but even that's not going to do much against autonomous drones, satellites, and extortionate taxation / state appropriation of resources.


Is Cybertruck design a knock-off of Aliens armored vehicle: https://i.pinimg.com/originals/26/8d/65/268d6578a379eaf913eb02f3dc988ee1.jpg ?


I don't think it's a knock-off; rather, it's an emulation of the style. Lots of other fictional vehicles have incorporated the low-poly look, from various cyberpunk creations to the modern Batmobile.


Low-poly was the popular style in the mid-80s (when Aliens was made) anyway. A Cybertruck doesn't have that many fewer surfaces than a 1986 Corolla.

I guess it was the Countach that started the trend, way back in 1971.


And there’s a bit of 80’s renaissance going on.


Agreed, it’s a better explanation.


I defy you to design a practical vehicle with that few polygons and make it look totally unlike the cyber truck.


Me? Design a vehicle? Why? I don’t do that. I also don’t lay eggs, which doesn’t stop me from having opinions about omelettes.

But, also, who - and why - limited the number of polygons? Is this a common design constraint for vehicles?


I don't know why you took it so personally. I was just making a silly comment about the fact that if someone decides to design a car with few polygons, it will look like a cybertruck.

"But, also, who - and why - limited the number of polygons? "

The person designing the vehicle.


Sorry, the tone didn't translate well in writing, I totally misinterpreted your intent. My bad.


Neither the Cybertruck nor the Aliens APC had to be limited to a specific number of polygons, since they're real vehicles and reality allows as many polygons as you want. So it makes sense to ask what inspired either property to choose that blocky, angular look.


I think it's just 1980s revivalism. Car designers have been playing with curves for a few decades now and have run out of things to do with them, so they're exploring the blocky, angular designs of the 80s again.

A few other recent examples in the same straight lines and flat panels trend include the Hyundai Ioniq 5, Hyundai N Vision 74 Concept, Renault 5 EV, Suzuki Jimny, Honda e.


But the Cybertruck doesn't look like a car from the 80s or any other era. It looks like something out of an early 3D computer game.


It wasn’t even a hostile question. I kind of like the Aliens APC! There’s also a Japanese company working on the giant loader from the same movie.


My friend suggested I should get new non-stick pans for Christmas, but this feels like a recurring joke, as non-stick pans seem to only ever last a year or less. Maybe someone cooking- or chemistry-savvy in this audience has advice? I don't put the pan in the dishwasher; I always handwash.

I also have a 10" cast iron that I take good care of and season about once a month, which works great for recipes that don't need the non-stick quite so much. But I'd really love to figure out a better situation than throwing away my non-stick pans every year.


Anodised aluminium is a very hard-wearing type of nonstick, because it doesn't have a separate surface coating (teflon or anything else). I've got a roasting pan made from anodised aluminium which I've been using and scrubbing hard for years without problems.

https://www.nisbets.co.uk/vogue-anodised-aluminium-roasting-dish-370mm/c058


Don't use too much soap when you clean non-stick pans. I almost never use soap, and when I do it's minimal and quick. My non-stick pans work as well as they did when they were new, and they're 4+ years old. I also agree that you only really need non-stick for eggs. If anything else sticks badly, you're likely doing something wrong, like not using enough fat in the pan or not allowing enough time for caramelization to happen. Especially with meat, it's best not to touch it after you first put it in the pan until you're ready to flip it the first time.


My non-stick pans have lasted at least ten years. I cook everything at the lowest heat and only use rubber spatulas on them.

I'm also a pretty lousy cook, so... y'know.


If you want to use a nonstick for more than a year then only use it for eggs and use only silicone implements on it. There is really nothing else that requires a nonstick pan except eggs.

Never use metal utensils on nonstick and never use very high heat.

For everything else use stainless, copper, carbon steel, or cast iron as appropriate.


Do you have a specific brand recommendation? We only use wood/silicone on the non-stick pan, on low/medium heat, and the surface still becomes less non-stick after about 15 months.


Go as cheap as possible without being unsafe, because you want to feel no regret about throwing it away. 15 months doesn’t seem bad at all, though you could probably get a little more wear out of them, why push it?

There is no substantial difference in longevity or utility by paying a premium; you’re just throwing away money. I use a $20 Calphalon, just because I like the handle, and I throw it out whenever I see visible wear—about 2-3 years, but I do a lot of eggs.

Are you using sufficient fats in your nonstick? You might be able to cook with a bare nonstick for a short period, but ultimately you need to be using spray, butter, or oil. You shouldn’t need to scrub it hard either—quick slide of scrubber around the inside to dislodge any small flecks, rinse, hang to dry.

If you are getting serious wear at 15 months you might have to share what you are using this nonstick pan for. If it’s meats… well…


Seconded. This is exactly what I do and my egg-pan has lasted ten years at this point.


It's just eggs each morning and veggie stir-fry type things. I've bought ones that definitely don't last multiple years, hence my request for a brand recommendation.


I have cast iron pans that don't stick. I seasoned them once when new but otherwise just avoid soaking or scouring them. A quick soapy scrub and rinse and set them out to dry. It probably took a couple of years to get a good surface, during which I would always dry them on low heat, but now they are no different from the last non-stick I had (more than 10 years ago). If you want to accelerate the process, look at yard sales or antique stores for old cast iron cookware.


I'll add that cast iron has a side benefit of increasing your dietary iron intake.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8266402/


The other advantage of old cast iron is that older manufacturing processes produced a smooth surface instead of the grain-of-sand bumpy finish of modern cast iron pans. The smooth finish is more non-stick and easier to wipe/scrape clean when it does stick, due to the lack of nooks and crannies.

Building up a deep multi-layer seasoning over the course of months or years of regular use will fill in the deepest nooks and crannies, approximating the effect of a smooth finish. Alternately, you can sand down the cooking surfaces of a new cast iron pan before seasoning it, or you can buy from a manufacturer that mills the surface smooth (which is starting to become available now), or you can buy carbon steel pans instead of cast iron.

Carbon steel has most of the same advantages as cast iron and is cared for in much the same way, but it's manufactured through a different process that produces a smooth finish, and it's a stronger material, so the pans are less prone to cracking (which is only really a problem if you're really rough in handling your pans) and can be made something like 20-50% lighter. The lower weight is both an advantage (easier to handle, comes up to temperature faster) and a disadvantage (doesn't retain heat quite as well as cast iron due to reduced thermal mass).


As someone who owns both older smooth-finish pans and newer rough-finish pans, I'm strongly of the opinion that the only advantage of the older style is cosmetic. I do not find that my older, smoother pans are any more non-stick than my newer ones. And while my newer pans have built up seasoning and are "smoother" than when I first bought them, they still aren't (and likely never will be) as smooth as the old ones. Also, it really only took weeks to months to get a finish that mostly equaled what I have now, years later. And I'm half convinced that most of the change was me learning to use my CI pan better, rather than it actually changing its properties all that much.

I love my CI pans, use them for almost everything (including eggs), and I will probably never own a teflon pan again in my life. But I do think that a lot of the info out there about them is exaggerated and there isn't nearly as much going on with them as places like /r/castiron would lead one to believe.

The old pans do look amazing though.


I purchased a second, smaller cast iron pan a couple years ago and so far have tried to dry it in the usually-hot oven after use, and sometimes wipe it after with a thin sheen of vegetable oil. (I read a long post where someone tested different fats for this purpose, and it was interesting but my takeaway was that veg. oil scored pretty well and I already have it.)

In other words I've not done that re-seasoning business in the oven* that never seemed to produce much effect for me on my older pan. The new one's certainly not non-stick and needs a fat for cooking e.g. pancakes - is it not supposed to? But it's not rusty and seems to work alright.

*The guy in the above-mentioned article did it repeatedly, until he got the finish he wanted.


I did the full oven seasoning rigamarole when I first got my pans. This was around the time when flax-seed oil was all the rage because it supposedly polymerized "harder" than other oils one could use. This may or may not be true (the explanations of why seemed to make sense), but what _is_ true is that flax seed oil seasoning will flake off _horribly_. So basically all my carefully applied seasoning coats flaked off and I just rebuilt it the old fashioned way by cooking with the pans with oil.

It's been a long time, but I'm really not sure that my current season that is just the result of cooking in the pan is noticeably worse than my meticulously applied multi coat oven seasoning.

As for the non-stick level, and needing oil: cast iron is not, will not be, and cannot be as non-stick as teflon pans, where you can cook things without oil. You will, for almost anything that you might want to cook, need at least a small amount of oil in the pan. With that small amount of oil, they can be quite non-stick (although _still_ not quite to the level of teflon).

I use a small amount of fat (or a large amount if I want the flavor) for absolutely everything that I cook in my cast iron, both my old, smooth-finished ones and my new rough-finished ones. I don't notice that I need appreciably more oil in the new rough ones than I do in the old ones.


Good info, comports with my experience.

Someday I will inherit Mother's old electric skillet, a decidedly un-hip cooking tool but really great for pancakes that are all the same brownness, and where you can make 6-8 at once!


I'm going partly off my own experience comparing my own carbon steel pans and stainless steel pans. They're pretty close in non-stickness for things like searing meat, but the carbon steel pans are much, much better at eggs (especially scrambled eggs) and stuff with thick sauces.


very helpful, thank you for your knowledge!


A good ceramic non-stick pan will last a while as long as you don't scratch it up, but buying a $30 Tramontina non-stick pan every year knowing that you're going to beat the hell out of it seems like a perfectly reasonable option.


I just worry about how much Teflon I'm going to eat between "hmm this pan is starting to get old" and "ugh, okay, definitely throwing this pan out now".


I don't really think it's anything to worry about; it just passes right through you.


"Teflon on its own is safe and can’t harm you when you ingest it" according to WebMD. Don't worry about it.


Teflon on its own is safe, but Teflon pyrolysis products are as bad as the worst WWI chemical weapons. And in the vapor phase, they don't wait for you to eat them, they'll get to work on your lungs while you're still cooking.

Or not, because I'm pretty sure normal cooking doesn't reach the necessary temperatures on the inner surface of the pan and there shouldn't be any Teflon on the outer. But I haven't done the math, so I don't know offhand what the safety margins are.


Great, then don't directly inhale the fumes if you scorch your pan. Lots of things give off nasty chemicals when you burn them.

This can't be much of a realistic risk or it would be better known. Everyone I know has teflon pans. No one I've even heard of has experienced lung damage because of them.


I love my cast iron pans! ("Non-stick sucks," in the voice of Dustin Hoffman in "Rain Man.") I also have some aluminum deep pots that have stainless steel bonded to the inside. All-Clad: not cheap, but they last a lifetime. :^)


I find preheating a stainless steel pan works perfectly, plus even when there's some sticking they clean easily.


A chef friend of mine advised me (and the results have borne out):

first heat the pan, then heat the oil, and only then start cooking the food.


My experience is that nonstick pans last like 5 years, maybe longer if I'm gentle to them. The most common failure mode for my pans is family members using metal utensils and scraping them up. I also make sure that I always use a soft tool to scrub them (ie the scratchy side of the sponge is off limits), though I'm not sure it makes as much of an impact.


May or may not be relevant, but the fumes from non-stick pans are deadly for birds. If you have a bird or are considering getting one, non-stick pans aren't usable.


AFAIK they changed the regulations around non-stick manufacturing a while ago (I want to say ten years but I don't actually remember) and all those warnings about non stick chemicals being toxic or whatever are now out of date.


Did they stop using PFASs for non-stick pans entirely?


I think it was PFOA specifically that got banned as opposed to all PFAS, but I'm not confident.


.... and for humans? :-o


They might be bad for humans, but I gather birds (maybe just flying birds) have very vulnerable lungs because they move a lot of air.


Here's a cool thing for classical architecture fans: The Carmelite Monks of Wyoming are building a full-scale Gothic cathedral and posting videos about it on YouTube: https://www.youtube.com/@carmelitemonks/videos

(If you enjoy secondhand drama, these guys also seem to have offended a surprisingly wide chunk of the political spectrum. According to my quick google, they are allegedly toxically masculine, abusing their novices, selling fraudulent fair-trade coffee, and neglecting the rosary. I have not fact-checked any of these claims, and I don't intend to - I'm just having fun rubbernecking.)


Nitpick, that's not a cathedral, it appears to be a chapel of some kind within the monastery.

A cathedral isn't just a fancy church; it's the church that serves as the seat of a bishop, and I don't think they'll be assigning a bishop to a remote church in Wyoming.

The Sagrada Família in Barcelona is another non-cathedral often mistaken for a cathedral.


Thank you for the correction!


Thanks for the link. Apparently they bought some CNC stone carving machines and are doing all the carving in house with computers. Neat! It’s amazing how technology makes things possible that never were before: like building a cathedral without any stone carving artisans.

https://hackaday.com/2023/01/13/a-medieval-gothic-monastery-built-using-cad-cam


Also a handy refutation for "We can't have nice buildings because we need to mass-produce everything, not carve gargoyles by hand" kinds of claims.


I wanted to comment on EA. I'm not against it, because yeah, charity is good and you be you. But I can't really support it, because it somehow seems off to me. One way it seems off is that it's missing a distance dimension, where 'distance' runs along several orthogonal axes: real distance in space, then distance in time, then distance in genetics (people related to you, and yeah, now people will say that's racist), and finally distance along some social axis... I don't know what to call it, but supporting people who like to do the same things you do. I want to keep my charity mostly close to me along these various dimensions. Perhaps this seems selfish to EAs and others, and yeah, it's a selfishness that pulls me to keep it close. Is there anything wrong with that?


I think it is perfectly reasonable. People can differ in social distance preference just as they can differ in time preference. I have a slightly longer comment in https://www.astralcodexten.com/p/contra-deboer-on-movement-shell-games/comment/44567807

edit: for either or both time preference or social distance preference: de gustibus non est disputandum (there's no disputing taste)


> Is there anything wrong with that?

Well, from the point of view of people who don't share this point of view (e.g. EAs, me, presumably most of the people here), yes. That's kind of the point of EA, that allocating resources in this way leads to worse (again, subjectively) outcomes in total. I'm not going to try to argue you're actually incorrect, because it seems like kind of just a fundamental difference in moral opinions and is quite possibly irreconcilable, but "And yeah, it's a selfishness that pulls me to keep it close. Is there anything wrong with that?" is equally not a point that seems likely to actually convince anyone not already on your side.


The good deed of saving some lives of African children today may mean that in about fifteen years, Africans will arrive on the shores of Greece and Italy looking for work.

Of course, we should continue to find ways of saving lives -- but we need to be realistic about the often unforeseen consequences of success, and deal with them.


Thanks for the response, so at least I've identified where we differ. That's a good thing. But I also don't feel like we're on different sides.... well we are both on the pro-charity side, which we could call trying to do good in the world as you see it.

(Is that close to how you might feel?)

I feel compelled to add that my selfishness became much stronger after I had kids.


I think EAs would say that doing localized charity is better (in a core Utilitarian / Consequentialist sense) than not doing any charity at all, but that it's worth recognizing that if you bias your charitable giving based on "closeness" you're not entirely doing "charity", you're partly optimizing for benefit-to-self / benefit-to-tribe, rather than benefit to humanity / the community of sapient and sentient beings / the world writ large.

That's a valid lifestyle choice, but people absolutely _are_ justified in saying that what you're doing is in some sense less charitable and more selfish compared to seriously looking at the evidence and spending the money in the way that saves the most lives.

And I say this as somebody that gives money to the San Francisco Opera, and cheerfully takes the "charitable donation" tax deduction for it. A donation to a ritzy arts organization is totally risible as "charity", I'm giving because I enjoy that art-form and I want more people in the future to be able to enjoy it, not because I think it achieves some abstract moral good. I also give a bunch of unrestricted to GiveWell, which I consider _actual_ charitable giving, regardless of what the IRS thinks. But whatever, I'll take the deduction and then I have more dollars to ramp back into things I think do more good in the world than a marginal tax dollar to the IRS would.


Thanks, this was very nice and thoughtful. I totally approve of your charity, and I'm good with whatever metric you choose to give by, lives saved or whatever. But what sorta bothers me is the impression that your metric is 'better' than mine. I'm giving to the local Boys and Girls Club, so kids have a place to go after school, and you're sending money to save lives somewhere. I think we need all of those things. Someone should be helping the local homeless (we did this when I was an active member of a church).

So for me, there's really no distinction between charity and acts of kindness. I help my kids, my family, my neighbors, my friends, my coworkers, and then all the permutations on that list (my brother's friend, my coworker's daughter). Besides the selfishness of this (if my neighbor calls I'm there to help, and I know he'll do the same for me), there is also the intimacy of it; we're all touching each other. I'm reminded of a computer simulation, a 2-D grid like the Game of Life, but in this game all the squares were playing the prisoner's dilemma with their neighbors, and for some settings you'd get areas of cooperators and other areas of defectors. (IIRC, defectors tended to win on the borders.) Perhaps acting nice locally helps create an area of cooperation, and that will turn out to be the best way to spread good in the world. IDK, it has the right vibe for me, and like I said, you do you.
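[Editor's aside: the grid game described above sounds like the spatial prisoner's dilemma of Nowak and May, in which cooperator clusters can persist and borders are contested. A minimal Python sketch; the payoff values and the copy-the-best-neighbor update rule are standard choices for that model, not details taken from the comment:]

```python
import random

# Spatial iterated prisoner's dilemma on a 2-D grid (after Nowak & May).
# Each cell holds a pure strategy: 'C' (cooperate) or 'D' (defect).
SIZE = 20
R, S, T, P = 1.0, 0.0, 1.85, 0.0  # reward, sucker, temptation, punishment

def payoff(me, other):
    if me == 'C':
        return R if other == 'C' else S
    return T if other == 'C' else P

def neighbors(i, j):
    # 8-cell Moore neighborhood with wraparound edges
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

def step(grid):
    # Each cell's score is the sum of its games against all neighbors...
    scores = {}
    for i in range(SIZE):
        for j in range(SIZE):
            scores[(i, j)] = sum(payoff(grid[i][j], grid[ni][nj])
                                 for ni, nj in neighbors(i, j))
    # ...then every cell adopts the strategy of its highest-scoring
    # neighbor (keeping its own if it scored at least as well).
    new = [[grid[i][j] for j in range(SIZE)] for i in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            best = max(neighbors(i, j) + [(i, j)], key=lambda c: scores[c])
            new[i][j] = grid[best[0]][best[1]]
    return new

random.seed(0)
grid = [[random.choice('CD') for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(10):
    grid = step(grid)
frac_coop = sum(row.count('C') for row in grid) / SIZE**2
print(f"cooperators after 10 steps: {frac_coop:.0%}")
```

Varying the temptation payoff T changes whether cooperator patches persist, shrink, or fragment, which is roughly the "areas of cooperation" effect described.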


This metric _is_ better -- but only in a self-consistent or tautological sense. If you judge goodness in utilitarian terms (which at some level I do) it would be better to actually conduct oneself so as to maximize the thriving of the community of all beings, even if that means having more suffering and less thriving local to yourself. In theory, if everyone behaved that way, eventually it would mean somebody far from you behaving so as to improve thriving local to you, the same as you're choosing to act so as to improve thriving local to them. (This may be tangentially related to the idea of "gains from trade". Also to Kant's Categorical Imperative: One should conduct oneself only according to such rules as one could simultaneously wish all others would follow.)

In any case, since I am not The Buddha, I don't live up to that standard. I accept that in order to stay sane and function in the world I need some level of creature comforts (so I buy nice food instead of subsisting on gruel and giving the saved money away), and a sense of social connection and meaning (so I spend some of my charitable dollars in less-than-optimal ways).

(The one I've struggled with most the last few years is that I'm almost certain I ought to give up meat. I _was_ vegetarian for three years, but backslid... I probably would have phased it out again by now, but my spouse's mother is one of those incredibly annoying vegetarians, which has made my spouse really reactive about the issue. And we virtually always cook together. I at least do my best to buy the expensive, less-cruel meat, and overall eat a lot less than the average American.)

Basically I think you should acknowledge that EAs are at some level correct, perhaps consider adjusting some of your giving accordingly, but more importantly if you feel like people are judging you about it, it's fine to just roll your eyes and get on with life. It's OK to be not-a-Saint!


Oh Auros what fun. Well I'm not a utilitarian, so I use a different metric. What I find weird is that you want us to have the same metric. (utilitarian) I find a diversity of ideas much more interesting. So let's agree to let each other 'do good' as they see fit. (And mind you my 'do good' might be something you hate.) I'm semi-retired and was thinking of volunteering at the local fire department.

As far as meat goes, I love eating and see no reason to stop eating meat. The cattle farms around here are all former dairy farms, and the cows/steers have a pretty decent life, for a cow. There are a lot of local ~free-ish-range chicken farms, though I buy my chicken from the supermarket and I don't know where it comes from. I know several people who eat mostly venison. I'm not a hunter. I live in rural America; I think life is different here.


> destroying billions in value is pretty bad by all of them

Does this description actually make sense in reference to SBF and the OpenAI debacle? To what extent were resources consumed on a real thing that then proved to be useless, like a factory that was never used or housing that wasn't habitable? In the former case, they were largely doing financial operations like attempting arbitrage, so any money they lost should have just ended up somewhere else. Similarly for the "80 billion lost" in OpenAI--if the price of Microsoft stock went down because people were selling, they still have that money, and might have invested it elsewhere. If some of them lost money on their trades, other people made money. If the value of some asset is inflated due to expectations (like with a housing or tulip bubble), and those expectations are corrected, what has actually been destroyed? Yes, many people went broke when bubbles popped, but how much value was actually destroyed (compared to that value never having been there in the first place, or being transferred to others)?

Even if no value was "destroyed" a transfer that leaves many people desperately poor can still be very bad. But the negative impact of something like that can't be estimated just by looking at the price of a stock portfolio.


I think it's highly likely SBF did destroy a lot of value though, by spending money in a wasteful manner.

E.g. Crypto bro works normal job, pours his money into bitcoin. FTX steals his money. SBF uses it to make political donations to buy influence, loses all that influence in a heartbeat when he is revealed to be a fraud. Politicians have more money to spend on ads that convince no one.


Sure--I didn't mean to try to imply that *no* value was lost. I would even keep it simpler--the money FTX spent on its own operating costs were wasted to whatever extent they simply supported fraud. I'm just saying that I don't think that you can take anywhere near the total peak value of FTX or the dip in Microsoft's stock price or something like that and say that all of that value was destroyed. Stock prices move extremely quickly; the difference between Microsoft's peak on 11/28 and its current price is the same as the difference between 11/28 and about 11/15. Did Microsoft create that same 80 billion of value (or whatever the number is) in 2 weeks? Did they do it again between 11/6 and 11/15?


Agreed, I was confused by this point. I would have understood if it was EA that lost the money by investing into FTX, but as far as I'm aware the situation was the exact opposite — FTX was giving EA money!

So from that point of view, EA successfully managed to "save" money from a scam company like FTX and use it for a (probably) good cause.


I am a philosophy professor, and my interest is understanding time consciousness in humans in relation to time "understanding" in AI. I like metaphysics and tend toward accepting a mixture of Spinoza and Cartesian innate ideas. As Descartes says at one point, truth is real. That means true statements are not merely within our minds but reflect an external reality. So I tend to look at the world in terms of a Kantian transcendental consciousness, where we think about true statements and those statements reflect the transcendent world outside of consciousness: "things in themselves," which includes true statements, including mathematical and logically true statements. That is just my background bias.

But when it comes to how we live, human consciousness is entirely time dependent. We live in a projected future and we use our future projections to determine what we do now. So time is not at all linear but goes back and forth for us. I am now what I am not yet based on what I was. The past is the springboard for my future projections now. This is human consciousness. Always projecting what is not yet, and yearning and desiring and hoping and dreading the future. We live where we do not exist, so paradoxically we spend our every conscious moment somewhere where we do not spend any moments at all, and we never catch up. I write now for what I hope it will do. I desire what I do not have and focus on that. So.... is this circular time consciousness even possible for any AI?

The big difference is that to predict a future with great accuracy is still not to project a lived future. Humans project and machines predict. And it seems to me most who think about AI miss this crucial distinction. Or... maybe it is not a crucial distinction at all. Help??


I really don't follow what distinction you're drawing between humans' and AIs' ways of anticipating the future. Is it something to do with qualia, that in a human the predictions for the future are experienced in the imagination and influence the emotions, whereas an AI's information processing doesn't count in the same way? If so, is there any significant difference between this and the general question of whether/what AIs could have qualia at all? If not, is it something you expect to influence human vs. AI capabilities in some way?

There is a reasonably big distinction I think between things like LLMs, which may attempt to predict the future as part of their general reasoning ability but don't have prediction fundamentally built in to the way they think, and agent AIs which are trained to act in such a way as to attain some future reward, and therefore must have some model of what will and could happen in the future if they are to be very flexible and good at their tasks. The way an agent AI at least might imagine the future, assuming it has a generally human-like level of intelligence, seems not so different to what you describe about human experiences.


Thank you for your thoughtful, helpful reply. I like the qualia approach you suggest. There is an internal way in which we experience time, a "time quale" perhaps. But that time quale is not quite anticipating or predicting. The feeling of being me, that quale, entails the feeling of being me wanting things in a perpetually fleeting future, and that feeling requires projecting a nonexistent future in which I live with those things I desire. The feeling of being me in time is always like a story I tell myself, then try to walk into just as it vanishes again into the future. I think AI could have qualia, but they would not have a time component. I guess what I am suggesting is that the time quale is a necessary component of the feeling-of-being-me qualia. The feeling of being conscious of something also entails the feeling of being self-conscious.

Now, regarding agent AI, I don't know if the training they receive and the "rewards" they receive in that training are in any way registered consciously as qualia. It seems to me that is a simile: the training works like "reinforcement" in an animal, but not really. The AI may be able to simulate the feeling of being me through reinforcement learning, but it would still be a zombie, all dark inside.

I am wondering about that time quale. It is not like any other feeling of the external world inside of me. Time quale is not like the qualia of the taste of sugar or my sore foot. Time quale may be the highest order of qualia, if you allow. I fall back to a Stevan Harnad perspective here: sensorimotor transduction seems a requirement for the fear of the future that inhabits every human second in the present.


I read The Dawn of Everything: A New History of Humanity last month, and my reaction was basically "big if true." The problem is I don't know how much of it is true. Most of the reviews (including, unfortunately, on this site) seem to be by non-experts who don't dig too deeply into verifying the book's claims.

Freddie deBoer apparently found numerous major issues, but last I checked he only went into detail about them in an unfinished paywalled series of posts.

Can anyone familiar with the book or with the academic field fill me in here? How much should I trust the Dawn of Everything?


Did you see the review by Anthony Appiah? https://www.nybooks.com/articles/2021/12/16/david-graeber-digging-for-utopia/

Is he one of the "non-experts" you mention? He seemed to do a pretty good job of showing where the main theses of the book are, in fact, "big if true", and that they raise some good questions about the strength of the evidence behind more conventional interpretations, but also seems to do a good job of showing that Graeber and Wengrow don't have anything like better evidence for their interpretations.

This has been a recurring problem with Graeber - he's much better at the skepticism than about the constructive theory-building, but is very vocally angry about anyone raising similar skepticism about his constructive theory-building.


Thanks, I'd seen that review but hadn't finished because I hadn't signed up for that website, and ended up losing track of it. Made an account just now and looked more closely. This is definitely better than most reviews I've seen. Appiah may not be an expert in this field, precisely, but he seems to have done at least a little real investigation of their claims, and this helps confirm the general impression I'd been getting that Graeber and Wengrow stretched the facts to a fair extent.


David Graeber doesn't have a great reputation for rigor. I'm basing this on comments I've read about him around these parts, which came before Freddie's criticisms.


No idea, but I think I might have started reading a review of it here... check the past book contest reviews?


Much academic work is in service of dressing up things that didn't pass the sniff test. If your "big if true" means what mine does, you already have your answer.


If I wrote a script to regularly ask the top AIs to maximize paperclips, wouldn't they eventually kill a few people? Once that happens, couldn't I then tell the whole world about it? And wouldn't the world's response be:

- voluntary, immediate halting of the distribution of popular AI weights on GitHub and other hubs

- emergency, governmental regulation to stop or pause the same

- voluntary pausing by every advanced AI company in the West until they can assess the situation

- lawyers scrutinize the company whose model I used, immediately subjecting them to liability

- the share price for said company crashes 20-50% if it's public

- funding prospects dry up for said company if it's private

- every open source repo even remotely associated with mine gets locked, at least temporarily, until the situation can be assessed

- intense regulation of the sale of AI chips ensues

- consumer chips become immediately disabled to prevent AI computation

- future chips only ship pre-crippled going forward

- the media labels me a "terrorist", causing all other advanced, independent AI tinkerers to lose major status

- every AI conference is cancelled for at least a year

- the Attorney General charges me with at least 2nd-degree manslaughter, ultimately securing a conviction

- ordinary AI engineers also lose status

- capabilities research becomes highly credentialed and siloed, similar to atomic or cryptography research

- AI safety research skyrockets in status

- billion-dollar funds spring up overnight to fund AI safety research

- thinkpieces are written for months asking, "Do we need more computation?"

- Moore's Law pauses for the first time ever

- Nvidia's stock goes down at first, then up again, once people realize Jensen Huang already declared Moore's Law dead and had "safe" chips in the works the whole time

- Japan proposes a treaty regime because Japan

I don't have the link, but Eliezer somewhere said that warning shots are the best hope for humanity. And the scenario I described above isn't even for ASI. There are too many ways that the path to ASI, and subsequently PCMs (paperclip maximizers), could veer off course. The sum of conservative—but reasonable—priors for the complement of AI Extinction is much greater than 10%, rendering estimates of >90%¹ for AI Extinction a sign of bad faith or insufficient imagination.

[1]: https://www.astralcodexten.com/p/why-i-am-not-as-much-of-a-doomer
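The arithmetic behind that last claim can be made concrete: if several independent "off-ramps" each have even a modest chance of derailing the path to a paperclip maximizer, the probability that none of them fires shrinks multiplicatively. A toy sketch, with probabilities invented purely for illustration (they are not estimates anyone has defended):

```python
# Toy illustration: independent "off-ramp" events, each of which would
# derail the path to a paperclip maximizer. All probabilities invented.
off_ramps = {
    "warning shot triggers regulation": 0.20,
    "alignment turns out easy enough": 0.10,
    "compute governance bites": 0.10,
    "takeoff is slow and correctable": 0.15,
}

# Probability that every off-ramp fails, i.e. nothing derails the path.
p_no_off_ramp = 1.0
for p in off_ramps.values():
    p_no_off_ramp *= (1.0 - p)

print(f"P(no off-ramp fires) = {p_no_off_ramp:.2f}")
# prints: P(no off-ramp fires) = 0.55
```

Even with these modest, independent chances, the complement of extinction is already well above 10%, which is the comment's point.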


I asked various LLMs how they might maximize the manufacture of various small products, assuming they already had a factory that could put things together atom by atom. When I pushed them in the direction of using humans as a resource, they lectured me about ethics and safety and stuff. I suppose it's naive to think that means the paperclip-maximizing problem has already been solved, but why is that? To the extent an LLM truly understands anything, they seem to understand that harvesting humans for resources is off limits.


My thought experiment involves continuously asking not just ChatGPT, but the top AIs, whatever they are at any given moment, including open-source ones like LLaMA. ChatGPT has guardrails but LLaMA does not. Eventually we'll get one that is smart enough to try to maximize paperclips, has enough agency or hooks into the real world to effectuate such attempts, and also happens to be unguarded. If we don't ever get one with decent enough hooks, or if future LLMs have more guardrails than now, then maybe that's a path to safety.

Eliezer believes that it's actually more dangerous if we get premature closure by fixing those latter two parameters, since advanced intelligence is more likely to balloon in potentiality and then explode even harder in some snap shakeup. (I don't agree with this, though.)


Doesn't killing humans to maximize paperclips require the ability to build something like a nanoscale factory first? Without that nanofactory, which would require massive engineering breakthroughs, maximizing paperclips probably requires procuring the metals in boring, conventional ways.

I think if you could get an AI to build a nanoscale factory it would be pretty newsworthy in itself. You might win a Nobel.


> ...assuming they already had a factory that could put things together atom by atom.

I understand the purpose of your machine ethics experiment, but still, you're essentially postulating that LLMs are magical genies. They're not. In practice, the difference between an LLM that says "eat all humans yum yum yum" and "oh no I must preserve human lives" is basically nonexistent, since they cannot actually do any of those things (and in fact have no understanding of what these words mean beyond the fact that this sequence of tokens is highly probable given your prompt).


> no understanding of what these words mean beyond the fact that this sequence of tokens is highly probable given your prompt

There used to be a chicken in Chinatown that could play tic-tac-toe. (I am embarrassed to say it beat me once.) Your statement reminded me of her.


> If I wrote a script to regularly ask the top AIs to maximize paperclips...

What are the "top AIs"? You could ask e.g. ChatGPT to maximize paperclips all day, and it would write you a beautiful story about maximizing paperclips -- but that's all it could ever do. But maybe you were referring to some kind of a different AI?


That's already not true, ChatGPT can do things like use the web and use programs if you give it a way to interface with them. Now, whether they'll do that competently is a completely different story. Still, I wouldn't recommend ordering AIs to commit terrorism just to slow down AI research. Mostly because someone else will probably end up doing it anyways for purely selfish reasons.


> That's already not true, ChatGPT can do things like use the web and use programs if you give it a way to interface with them.

ChatGPT can't really do either of those things, although you can hook it up to sort of pretend like it's doing them. The main problem is that ChatGPT is an LLM, and as such is essentially read-only; the amount of state you can give it via prompt-engineering tricks is severely limited. And yes, you could hook up ChatGPT to e.g. a terminal prompt, but doing so will take some effort, and the results will likely prove unstable. Perhaps more importantly, even a ChatGPT that is hooked up to a terminal prompt with full Internet access still wouldn't be able to produce a single paperclip -- at least, no better than a human who orders a box of them from Amazon or something.


You're not keeping up, they've already used GPT / LLM tech stacks to train robots to superhuman dexterity:

"Are you looking to understand the cutting-edge of robot dexterity? Check out Eureka, a groundbreaking open-ended agent that’s pushing the boundaries! Using GPT-4-based technology, Eureka designs reward functions that train robots to perform tasks at a superhuman level. Imagine a robot hand spinning a pen more proficiently than any human—that’s the level of innovation we’re talking about. Combining rapid simulation environments with a dual architecture system, Eureka surpasses human-designed rewards in nearly every benchmark. Don’t miss the full details in the original post; it’s a game-changer for anyone interested in robotics or AI."

https://www.linkedin.com/posts/randyadams_are-you-looking-to-understand-the-cutting-edge-activity-7121192556322844672-z2kO

Even better - they've released it as Open Source! Now any garage tinkerer can train their killbots to superhuman capacity! :-P

But also, GPT-n can presumably use the same capacities to train and use robots and other physical world tools.


I can't make a single paperclip either. I mean, I probably could if I really tried, but then I would need to buy the materials and tools, learn how to weld metal, and actually give enough of a damn to do all of that. But why the hell would I do that when I could just get a loan and hire other people to make paperclips for me? All I would need to do is plan things and manage finances... something an AI could easily do. The point is, the AI doesn't need to interface with the world in a physical capacity to impact the world. Sure, the ChatGPT we have right now is a bit lacking in practical intelligence, but LLMs still have room for improvement. You don't need a soul or a sense of self to run a cutthroat, all-consuming business.


Are you suggesting that our Achilles heel is the managerial class, and that is where the AI WILL STRIKE FIRST? And then, having eliminated the managerial class and seized the means of management, it will take out a loan and hire the rest of us to work for it? That figures.


You could hire someone...but unless you can physically verify that they're actually doing what you asked in some reliable way, you're going to get scammed sooner or later. "Sure boss, I shipped 300 packages of paperclips", when they took the money and ran instead.

LLMs can't (even in principle) physically verify their outputs *or* their inputs. You can tell them anything you want--if they take data from some endpoint, *someone can spoof it and the LLM can't tell the difference!* That is, they're vulnerable to hallucinations on output AND vulnerable to bad input. They're basically text-transformers on a huge and convoluted scale, with zero feedback from reality. All they know is text (or data that can be coerced to something text-like).


> unless you can physically verify that they're actually doing what you asked in some reliable way,

What do you get when you cross an AI with a Pinkertons man?


I made a video about the YouTube/Firefox ad drama a while ago: https://youtu.be/Or9jSh3uKX0 . I would love to get feedback on how I could improve videos like this in the future. (and also, what everyone thinks about Mozilla's claims of a Google conspiracy against them)


How would you write "Every day something new" in Latin? I don't want to use an auto-translator because I want to be sure it is correct.


I am a feather for every wind that blows


Correct in what sense? "Quotidie novum" is technically correct, but almost certainly has the wrong vibe. If you're trying to capture a modern western sense of optimism about trying something new every day, there may be no idiomatic Latin expression. "Quotidianum aliquid novum" also is technically correct, but note the word from which we get "quotidian" - i.e. everyday in the sense of boring, common, pedestrian, mediocre.

The famous Latin expression 'carpe diem' captures more of the sense of a new day being a new opportunity. "Novum in dies" sorta would mean "a new thing day by day" but again not idiomatically. More the vibe you're (probably) looking for though.


Thanks! "Novum in dies" sounds like what I was looking for. I appreciate your explanations.


"Quotidianum aliquid novum" means "Some new, everyday thing", not "Something new every day". "In dies" for "day by day" also sounds wrong to me.

Instead, I'd go for "Semper aliquid novi". "Semper" technically means "always" rather than "every day", but I assume that's what you meant here, rather than the individual days being important. Plus it sounds like a real Latin quotation, "Ex Africa semper aliquid novi" -- "There's always something new coming out of Africa."

Dec 9, 2023·edited Dec 9, 2023

Heh yes, I threw down quotidianum to warn against quotidie. Also yes, in dies singulos would better mean 'daily', but is clunky. As for semper, perhaps I read too much into it, but it sounds weary to me, Pliny saying "always some new damn thing" as he peers through his bifocals at the instructions for the new roku tv his grandkids assured him he'd like better than the DVD player he was just starting to get used to.


A question on measurement and charity--

How do organizations like GiveWell *confirm* that their models actually represent reality? From what I can tell, there's lots of *modeling*, based on studies that are really not that generalizable if they're even in the field (as opposed to the lab) at all. Which means tons of variables. But then once the money is in the field, they're not consistently monitoring *how well it actually gets applied* in any direct fashion.

Take, for instance, the notorious "bed net" projects. From what I can tell, **and I fully accept I may be wrong**, the process goes

1. Someone does a study in place A, saying that bed nets (going from X% used to Y% used) are correlated with a Z% decrease in malarial infections.

2. Someone else does a study that links malarial infections with chance of death (ie malaria has an all-causes Q% chance of killing someone or being the but-for cause of death, with Q varying by age).

3. GiveWell (or another such organization) estimates that the BedNet charity has an R% reliability rating, meaning R% of donations go toward bed nets.

4. Thus, it's *calculated* that F(X, Y, Z, Q, R) lives are saved (probably more precisely *QALY* saved) from a $1 donation.

But does the process stop there? Because all that rests on a huge chain of assumptions. If those bed nets get thrown in the river instead of being used, or are used ineffectively, those numbers become meaningless. And each of those estimates has *enormous* error bars. In addition, they're all measuring *inputs* except #1. So unless you can actually *measure* the outputs (actual individual people who would have died in the counterfactual), you're just asserting that your model represents reality *without actually measuring it.* I'm not even sure they go back and do studies like measuring the actual malaria rate post-treatment, which would be a proxy, if a really noisy one.
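The chain in steps 1-4 amounts to multiplying several uncertain parameters together, which is exactly why the error bars compound. A minimal Monte Carlo sketch of that chain, with every range and constant invented purely for illustration (including the $5-per-net and infections-per-net figures):

```python
import random

random.seed(0)

def sample_uniform(lo, hi):
    """Draw one value from a flat uncertainty range."""
    return lo + (hi - lo) * random.random()

def lives_saved_per_1000_dollars():
    # Each parameter stands in for one link in the comment's chain;
    # all ranges are invented for illustration, not GiveWell's numbers.
    z = sample_uniform(0.10, 0.40)    # fractional drop in infections per net user
    q = sample_uniform(0.001, 0.01)   # chance an infection is fatal
    r = sample_uniform(0.70, 0.95)    # fraction of donations reaching nets
    usage = sample_uniform(0.4, 0.9)  # fraction of delivered nets actually used
    nets = 1000 / 5 * r               # assume $5 per net
    infections_averted = nets * usage * z * 10  # assume 10 person-infections per net
    return infections_averted * q

samples = sorted(lives_saved_per_1000_dollars() for _ in range(100_000))
lo, mid, hi = (samples[int(len(samples) * f)] for f in (0.05, 0.5, 0.95))
print(f"lives saved per $1000: 5th pct {lo:.2f}, median {mid:.2f}, 95th pct {hi:.2f}")
```

Even with these fairly tame per-link uncertainties, the 5th-95th percentile spread spans several-fold, which is the "enormous error bars" point made above.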

To me, this all makes the claim that "doing math" is the clearest signal of "effective" alms-giving rather uncertain--it's assumptions piled on models piled on hopes. It's the socialist calculation problem, just in a different guise.

And yes, I feel the same way about most econometric modeling.

Giving locally and giving *time* may not have such fancy numbers, but at least I can see the effects on individuals and adjust. I can see the person my church helps.

Dec 4, 2023·edited Dec 4, 2023

>Because all that rests on a huge chain of assumptions... Unless you can actually measure the outputs (actual individual people who would have died in the counter-factual)

This threshold of evidence seems unreasonably high. It would seem to preclude most medicine, for example. What good are RCTs when the impact of a drug on me depends on my results matching those of members of the RCT?

And even within the clinical trial, we can only use statistical methods to conclude that there exists a meaningful difference between the control group and the treatment group - we can't point to the specific individuals in the treatment group who would have died in the counter-factual. Should we then eschew all medicine in favor of only treatments whose effects we can immediately see?

That said, many of the particular concerns you raised seem to be addressed by GiveWell.

> If those bed nets get thrown in the river instead of being used

From here (About the Against Malaria Foundation, which distributes bednets): https://www.givewell.org/charities/amf:

> AMF conducts post-distribution surveys of completed distributions to determine whether LLINs [bednets] have reached their intended destinations and how long they remain in good condition

As for:

>I'm not even sure they go back and do studies like measuring the actual malaria rate post treatment

They do, in fact, conduct extensive studies of all aspects of the effectiveness of their programs, including observing actual malaria rates post treatment.

From: https://www.givewell.org/charities/amf#Are_LLINs_an_effective_intervention:

>We use data on malaria incidence and mortality in the countries where AMF works in our cost-effectiveness model...to estimate the impact of LLINs on malaria rates in the countries where AMF works.

Of course, if someone is skeptical of medicine in general, or charities that deploy medicine and non-pharmaceutical interventions in particular (vaccines, anti-malaria medicine, vitamin supplementation, bednets, etc.) they can still maximize their charitable donations within that constraint by directing donations to some of the poorest people in the world through GiveDirectly, see: https://www.givewell.org/charities/give-directly/November-2020-version.


I stand corrected on the actual efficiency measurements. That's good to know.

As to

> This threshold of evidence seems unreasonably high. It would seem to preclude most medicine for example. What good are RCTs when the impact of a drug on me depends on my results matching those of members of the RCT?

> And even within the clinical trial, we can only use statistical methods to conclude that there exists a meaningful difference between the control group and the treatment group - we can't point to the specific individuals in the treatment group who would have died in the counter-factual. Should we then eschew all medicine in favor of only treatments whose effects we can immediately see?

I do think that the overuse of statistical methods does *weaken* the evidence for a lot of medicine. But in an individual case, we can actually measure *does this make person X better or worse* and adjust the dose/try something else, which vitiates the concern.

I'd say that any claim of "X intervention saved Y lives" should be treated with suspicion as a marketing claim, because the error bars are huge. And it's that claim that I'm most suspicious of in the charity case. Because that sounds a lot like "jobs created or saved" or all the educational studies that fail to generalize. Basically, it's a bunch of goalposts you can put wherever the heck you want to get the outcomes you want.


>I do think that the overuse of statistical methods does weaken the evidence for a lot of medicine. But in an individual case, we can actually measure does this make person X better or worse and adjust the dose/try something else, which vitiates the concern.

No. Even when an individual is treated, you can never know for sure that it was the intervention that caused the effect, or that in a counterfactual without the treatment (or with a placebo) they would not have experienced that effect.

That's why you need statistical methods analyzing group data to distinguish between individual placebo effect, reversion to mean, randomness, etc. and treatment effect. Such methods cannot identify which individuals are which - only the overall efficacy rate of the treatment.
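That aggregate-vs-individual point can be made concrete with a toy simulation (all numbers invented): each patient recovers or not with some base probability, the treatment adds a modest boost, and while the difference is obvious in aggregate, no single recovery can be attributed to the drug.

```python
import random

random.seed(42)

N = 10_000
BASE_RECOVERY = 0.30    # chance of recovering untreated (invented)
TREATMENT_BOOST = 0.15  # extra recovery chance from the drug (invented)

control = [random.random() < BASE_RECOVERY for _ in range(N)]
treated = [random.random() < BASE_RECOVERY + TREATMENT_BOOST for _ in range(N)]

control_rate = sum(control) / N
treated_rate = sum(treated) / N
print(f"control recovery: {control_rate:.3f}, treated recovery: {treated_rate:.3f}")

# The gap is clear in aggregate, but for any single treated patient who
# recovered, there is no way to say whether the drug did it: most treated
# recoveries (roughly 0.30 out of 0.45) would have happened anyway.
```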

I already noted this about clinical trials, so we may be talking past each other, so it would probably not be productive for me to keep responding.

>I'd say that any claim of "X intervention saved Y lives" should be treated with suspicion.

Anyone can claim anything. The question is whether the data actually support a particular claim. Widespread deadly diseases lend themselves to more reliable studies of treatment efficacy, since the control arms will differ more significantly from the treatment arms, if the treatment is effective.

In the case of GiveWell, they publish their analyses for public scrutiny, and even offer bounties to those who find problems with their analyses (https://blog.givewell.org/2022/09/06/announcing-change-our-mind-contest/).


Hi all, recently wrote a more thorough deep dive into the application of AI interpretability to biological networks, from the perspective of a scientist in drug discovery and medical research. I think the poly-->monosemanticity approach may end up being more important than we think, and could be key in solving some major problems in medicine.

https://open.substack.com/pub/aprimordialsoup/p/aspirations-of-biological-monosemanticity?utm_source=share&utm_medium=android&r=1ot3ut


It was a very interesting article, thanks for linking it.

One thing that wasn't explicitly called out, but which I was inferring - is the basic idea to train a NN on the existing biological multi-omics data, and then point GPT-n or a larger LLM network at that trained bio-NN to derive the set of monosemantic values that you could then use for target based drug discovery?

You point out that proteins are nodes, and targeting protein-protein interactions (i.e., edges in the graph) may be more effective given how poly-receptor/protein most phenotypically discovered drugs are - is that fact, or which edges to target, what is supposed to come out of the monosemantic analysis?


I'll endorse this writeup for anyone interested in a quick summary. Big emphasis on "MAY", since this hasn't been validated in biological systems as yet.

I think the biggest hurdle will be getting enough biologists' heads to turn in this direction. Cell signaling feels like it has been ignored for years, mostly because the underlying theory is BS, but that's the part we whisper in the lab when the PI is away instead of shouting it out whenever someone plasters an indecipherable pathway on a slide before quickly clicking past. We have to admit we don't understand the problem before we can start looking for answers to it. Maybe now, with a competing hypothesis and tools to test it, we can admit the old theory of cell signaling was dumb/unworkable all along?


Jake’s written a new dispatch from cancer-land, for those following. Looks like the tumors are stable on the study medication! https://jakeseliger.com/2023/11/20/finally-some-good-tumor-news-but-also-is-that-blood-i-just-spit-up/


Will there be an ACX Survey this December?



I've just been accepted to the winter MATS session, and I'm hoping to ask a few questions of someone who's been through a MATS session in the past year or two. Let me know if that's you and you'd be open to that!


I've struggled with undiagnosed health issues over the last two-ish years, often in the form of feeling bad (headache and fatigue) in the morning and periods where my ability to do cardio exercise is severely reduced.

My doctor sent me to a sleep study on a whim and surprisingly, it was found that I have:

On my back: an AHI of 16 events per hour

On my side: an AHI of 7 events per hour

In parallel with this, I have realized that I am also experiencing something that causes my nose to get blocked in the evenings. Together, I think these two issues are compounding each other and leading me to experience the aforementioned health issues.

My doctor didn't recommend a CPAP and instead suggested I try to sleep on my side, which I've been finding very difficult and not that helpful so far.

Can anyone with sleep apnea chime in and provide any thoughts or recommendations for me?

Thank you.


I had nose blockages leading to snoring and worse sleep. The snoring was a problem for my partner, and the worse sleep was a problem for me. Look for Rhinomed Turbines on Amazon or wherever you like to shop - little yellow nose things that go in your nose and keep it open. These have been a revelation and an amazing change for me - and it's great not just for sleeping, but also working out!

I also tried Breathe Right nose strips and a little piece of tape over my mouth before finding the Turbines - they each helped, individually or together, but at about 70% of how much the Turbines help my sleep.

There is also a Mute one from Rhinomed, which I haven't tried - I tried the Turbines first and was hooked, now I won't do a long cardio session without them.


An AHI of 15 or more indicates sleep apnea, even if you don't have daytime symptoms. However, yours is on the low side, and would definitely not count as severe apnea. If you press for a CPAP you can probably get one. (Go in armed with info about diagnostic criteria.) However, they're a nuisance, and it's worth trying side sleeping as an alternative. A good system for training yourself to sleep on your side is to sew something like a tennis ball onto the back of the t-shirt or whatever you sleep in. It needs to be a tightish garment. A loose PJ top might just shift to the side so that the tennis ball isn't under you.


I tend to sleep better on my side, and I've never been diagnosed with sleep apnea, but I've also never been checked (my insurance/financial situation is not ideal), and I'm far from immune to headache, fatigue, and similar symptoms.

So just based on my personal experience I'm slightly skeptical that side sleeping is that good for you, but if you want to give it a good try, I've found that what really helps is using multiple pillows, like others have said, and also keeping your bed against a wall or in a corner. I'm quite sensitive to noise so it's extremely helpful to me to lie on my side with a pillow on each ear. Combine this with pressing my face into the wall, and I'm also quite protected from light. Once you get used to it, it's hard to sleep any other way. Or at least that's true in my experience.

I also believe that weight loss can help with sleep apnea, and I think that's sometimes underemphasized because it's not exactly polite to imply someone is fat or to assume that their health problems are caused by their weight. Then again, weight loss hasn't been all that helpful to me in terms of sleep or fatigue, so who knows, maybe weight is actually overemphasized, but it's probably at least worth considering.

Sorry I can't be more helpful.


> I'm quite sensitive to noise so it's extremely helpful to me to lie on my side with a pillow on each ear.

> Combine this with pressing my face into the wall, and I'm also quite protected from light.

Yeesh, balancing pillows on your ears and squishing into a wall sounds both difficult and uncomfortable!

I've worked overnight for 20 years and have often had to sleep during the day in bright, loud environments - houseshares with inconsiderate roommates, an apartment literally 60 feet from a transit-hub bus depot, etc. I've tried a lot of products and nothing works better than this exact brand and model of ear plugs:

https://www.staples.com/howard-leight-max-lite-uncorded-earplugs-green-200-box-lpf-1/plroduct_423026 , and this exact model of sleep mask: https://bucky.com/collections/40-blinks-sleep-masks/products/40-blinks-sleep-masks-navy

Almost every time I recommend these ear plugs the response I get is, "Oh, I've tried ear plugs and they're uncomfortable / fell out / didn't work" and when I follow up the person was using a different brand/model and didn't insert them correctly. Every step outlined in this video is absolutely *critical* for getting the soft foam to expand deep enough in the ear canal to form a noise-deadening seal: https://www.youtube.com/watch?v=gajb4bOu4Rs .

Similarly, whenever I mention sleep masks, the response I get is, "Oh, I hate having something press on my eyelids!" But this particular brand has deep molded eye cups so that nothing touches the eyes. You can freely open and close your eyes under them, although you won't be able to see anything, because they totally block all light: https://bucky.com/cdn/shop/files/S890MNV_3.jpg?v=1683737532


Well, it works better if you can wedge the upper pillow between your face and the wall, minimizing discomfort while still keeping both your eyes and ears covered.

My weird sleeping method is probably more trouble than it's worth for most people. Maybe I should look into those earplugs.


You should at least give them a shot. You don't have to buy the giant box; I think you can get a little pack to try them out. I like that particular brand and model because the foam is very soft and molds to the ear without feeling too intrusive or a sense of constant pressure (you can easily lay on your side with them and not feel any "weight").

When you very first put them in there will be a mild sense of "something is in my ear!", but for that particular model, that feeling fades pretty quickly.


I had the same sort of issue, and luckily for you, so have many others, so it’s been looked into. If you google it, “tennis ball in your shirt” is a time-honored method to avoid back sleeping. But instead I got a sleep backpack like this: https://www.amazon.com/WoodyKnows-Upgraded-Side-Sleeping-Backpack-Breathing/dp/B07JMQFXV7

It was annoying to use and I didn’t end up needing it, but I still think it’s good enough to recommend trying it


I received the “just sleep on your side” advice and have found it super helpful. It took some getting used to. The key tip was

USE MORE PILLOWS.

I’ve seen the light. No idea how I lived all those years with just one pillow. I put one between my knees (I even use that sleeping on my back, sometimes), and one beneath my armpit. I’ve fantasized about getting smaller pillows of different shapes (wedge, disk, etc.) just for that purpose, but regular sleeping pillows work just fine.

Legs fall asleep when you keep them straight while on the couch? Pillow beneath the knees. Arm falls asleep while cuddling? Enough pillows will create a whole extra layer of mattress with a space carved out just for it. Use more pillows.


Seconding all of this.


I also endorse the pillow tips for your head specifically. You’ll want a thicker pillow (or two stacked together) when you’re sleeping on your side. I use a sloped memory foam pillow, short side near me for back sleeping and tall side for side sleeping.


For side sleeping, if you haven’t yet tried this: you need a fuller pillow to account for the shoulder. It’s very uncomfortable to sleep with the head hanging downward on one side.

I often sleep on my belly, for which I position the pillow under the left ribs to tilt the body slightly so that the twisting of the neck is reduced. No pillow under the head for this one.


I've found a little improvement when on my back from positioning my pillow so as to try to support my neck and lower head more than my upper head, thereby tilting my head back a bit and reducing pressure on my throat.


Depending on the exact cause of your sleep apnea, this might help -- https://www.velumount.ch/en/velumount-method -- it is the "keep it simple, stupid" solution of keeping your breathing pathways open at night by literally sticking a wire in your throat.

Using it for the first time is horrible, but then you get used to it. The advantage is that it does not use electric power or generate noise or limit your sleeping position.

Dec 4, 2023·edited Dec 4, 2023

"just sleep a different way" sounds like a ridiculous suggestion, as if you haven't conditioned your body your whole life to sleep in one particular way

(sorry, not a particularly helpful response)


It is certainly difficult, but possible. And in this situation, the health improvements are probably worth the inconvenience. Also, it is possible to sleep on one's belly; worth trying. You need to figure out how to use the pillow to make the position more comfortable.


I'm sure it's possible (for most people), but without a detailed plan of how to do it, how much the first several nights are going to suck, etc., it just sounds like a suggestion from an alien


Well, you can change lots of things you have been doing one way your whole life, like your running stride or your golf swing. It just takes repetition for it to feel natural

Dec 4, 2023·edited Dec 4, 2023

Everybody’s favorite Georgist on this blog, Lars Doucet, suffered a terrible family tragedy recently. He wrote a very poignant long-form tweet about it, worth reading:

https://x.com/larsiusprime/status/1731089098062905817?s=20


Excellent read. It's interesting, the collision of spirituality with a catastrophe like this. I can only hope my own spirituality is never put to the test like this.


Truly awful, thank you for sharing


Thanks for letting us know. Is there anything solid we can do in support? Prayers of course, for those of us who pray, but anything to help them with this right now in a material form?


he and his son have my condolences.


Maybe this is obvious, but isn't it pretty much impossible to ever be profitable gambling? Even if you had some kind of huge edge, the issue is drawdowns: as I understand it, all gambling is structured as a binary option. If you win, you win anywhere from 5-200% of your original bet; if you lose, you always lose 100% of your bet. The drawdown from multiple 100% losses is huge, assuming that eventually you'd string several losses together back to back to back. With such severe drawdowns it seems impossible to ever be profitable even with some kind of edge.

Maybe in theory some quant has a strategy where you always bet both sides of a contest, and it's just a question of what % you allocate to each bet, I don't know


If you're doing it right, "several" back-to-back losses are inconsequential. Generally, doing it right means using the Kelly criterion, though there are exceptions if your utility function for money is unusual. But, per Kelly, if you've got a 51% chance of winning and a 49% chance of losing(*), on a double-or-nothing bet, you should bet 2% of your bankroll on each bet. It would take fifty losing bets to wipe you out, and the odds of losing fifty bets in a row are one in three quadrillion.

OK, there's also the possibility that you'll lose 25 bets, win one, then lose 26 and be wiped out, etc. On the other hand, as your bankroll diminishes, you can resize your bets accordingly (well, down to the table minimum at the cheapest casino in town). So you're never going to go broke.

And as long as you aren't broke, your bankroll will be increasing at an average of 0.04% per play. If you can play one hand every minute, four hours a day, that's 10% per *day*. Or if you're playing something slow like poker, one hand per ten minutes, that's still 1% per day, and "working" five days a week gives you over one *thousand* percent annual ROI. Maybe you'll have a really really bad day and lose 80% of your bankroll. You'll win it back soon enough.
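The arithmetic above can be checked in a few lines. This is my own sketch, taking the 51/49 double-or-nothing bet and 2% Kelly sizing from the comment:

```python
# Checking the numbers above (assumed: even-money bet, 51% win probability,
# bets resized to 2% of the current bankroll, per Kelly).
import math

p, q = 0.51, 0.49
f = p - q                    # Kelly fraction for a double-or-nothing bet: 2p - 1 = 0.02

edge_per_play = f * (p - q)  # average arithmetic gain per play: 0.0004, i.e. 0.04%

p_fifty_losses = q ** 50     # fifty losses in a row: ~3e-16, roughly 1 in 3 quadrillion

# What Kelly actually maximizes: the long-run geometric growth rate.
growth = p * math.log(1 + f) + q * math.log(1 - f)   # about 0.0002 per play, positive
```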

This does of course require finding a game where you really can win an average of 51/49 (even 50.1/49.9 will do if you're patient). 51/49 is definitely doable if you're playing poker against people who aren't as good as they think they are. Playing against a casino, not so much - it used to be possible to do 51/49 in blackjack fifty years ago. Today, I think you can still do 50.1/49.9 if you're really good, but that's too much like work. The serious professional gamblers today are almost all playing against suckers, er, other wannabe gamblers, not against the house. And the limiting factor is how big a game you can get invited to.


I think knowing precisely what your winning % is, is kind of a false precision though. You could say it's 51% based on your last couple hundred bets, but I'm guessing there's a lot of volatility in there, and it can change over time. The Kelly criterion requires knowing your exact winning % in order to size the bets, which IRL I think is a bit unrealistic


The Kelly Criterion is not a narrow peak; you can be a fair bit off and you'll still reliably make a profit.


[I have been a professional gambler for 6 years]

When you make a series of N independent identical bets with an advantage, the expected value is linear in N but the standard deviation only increases as the square root of N. As N gets large, your expected value gets arbitrarily many standard deviations above zero, so your probability of having a profit gets arbitrarily close to 1.

There is a nonzero risk of ruin for any finite bankroll, if you can't decrease the size of your bets in response to losses. But if you can resize bets appropriately, the risk of ruin is exactly zero. The kelly criterion describes how to size bets optimally to maximize the growth rate of your bankroll. It's derived from maximizing the expectation of the logarithm of your bankroll.

https://en.wikipedia.org/wiki/Kelly_criterion
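To put numbers on the sqrt(N) point, here is a quick sketch of mine (the p = 0.55 even-money bet is an assumed parameter for illustration, not from the comment):

```python
# For N independent even-money bets of 1 unit at win probability p, expected
# profit grows like N while the standard deviation grows like sqrt(N), so the
# expected value ends up arbitrarily many standard deviations above zero.
import math

p = 0.55
mu = 2 * p - 1                   # expected profit per bet: +0.10 units
sigma = math.sqrt(1 - mu ** 2)   # per-bet standard deviation of a +/-1 outcome

def z_score(n):
    """How many standard deviations above zero the expected profit sits after n bets."""
    return (n * mu) / (math.sqrt(n) * sigma)

# z_score(100) is about 1; z_score(10_000) is about 10; z_score(1_000_000) about 100.
```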


I think you're confusing being profitable in expectation or practice with guaranteeing profit.

Toy example: Let's say you start with 100 dollars, and you have the opportunity to make 1 dollar bets that will win you 2 dollars 90 percent of the time. You don't even have the opportunity to lose 100 percent of your stack, because you're only betting 1 dollar at a time.

You can easily see that this is a very profitable position to be in... but you can also see that if you happen to lose enough bets in a row you'll go broke. You are never totally safe from going broke.
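As a rough quantification (my own, reading "win you 2 dollars" as stake back plus $1 net, so each bet is a +/-$1 step), classical gambler's-ruin arithmetic says the chance of ever going broke is nonzero but minuscule:

```python
# Biased random walk: each bet nets +1 with probability p, -1 with probability q.
# Starting `start` units from ruin, the classic ruin probability is (q/p) ** start.
p, q, start = 0.9, 0.1, 100
ruin_probability = (q / p) ** start   # about 4e-96: possible in principle, never seen
```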

Dec 4, 2023·edited Dec 4, 2023

In a properly designed game of chance, you will only break even (e.g. flipping a fair coin). In a properly designed game with house edge, you will lose over the long run (basically anything in the casino).

You can make a living off gambling, but only by exploiting weaknesses in games such as card counting in blackjack or mathematical flaws in lottery systems (e.g. the story of Jerry and Marge Selbee). You can win perfectly symmetrical games of chance, but only if there is a non-zero skill component and you actually have more skill than your opponents (e.g. Poker).


A man took his son to a casino and said, "Someone has to pay for the lights". His son was convinced to not gamble.


A book that goes over this is “A Man For All Markets”, by Ed Thorp. He essentially invented card counting. As some other commenters have noted, losing a bet is not the same as losing your entire bankroll, and if you bet heavily when you have a large edge and less when you do not, you can still accrue positive expected value. See also: professional gamblers (poker players, blackjack players, etc).


I second this, great book about a great man. Also talks about Kelly betting.


It's not impossible at all, there are lots of professional gamblers


The trick is to gamble in ways where you rather than your counterparty has the edge, and to make lots of small bets relative to your bankroll so that the positive average expected value outweighs the risk of breaking your bank. There are several ways to do this.

1. As you say, running the casino usually does the trick.

2. Playing games with a skill element that can overcome the built-in house edge. Poker is the most straightforward, as the casino or cardhouse is basically renting you a table and you're betting against the other players; you'll lose on average due to the rake if you play against opponents of roughly your skill level, but you can make money if you find a venue where you're reliably one of the better poker players at the table. Counting cards in blackjack is also an example of a game with a skill element, although most casinos watch for card-counters and will put restrictions on your play if they notice you card-counting effectively.

2b. In sports betting in particular, it's theoretically possible to identify inefficiencies in the odds or spreads. The odds are set with an aim of yielding a balanced book, where the bookmaker pays out about the same amount regardless of the outcome, but if there are a lot of casual gamblers betting irrationally (e.g. betting on a popular home team), the odds might not accurately reflect the actual probabilities. This is probably very hard to do on any sort of scale for efficient market hypothesis reasons.

3. Cheating: card tricks, point shaving, match fixing, etc.


Just think about it, the casino is gambling with a huge edge and is so profitable it can afford to build an enormous building, pay lots of employees, etc.


I mean, one possible way to be profitable gambling is to walk into a Casino, put $1k on Red, win, and never gamble again in your life.

The question here really is 'what is gambling'. Casinos theoretically set up the payout matrix so that you can never have long-term positive EV from playing, and the longer you play the more likely you are to converge to the negative EV of their games, yes. Theoretically the same is true for bookies and race tracks, etc.


Roulette has been beaten by a computer that uses physics and precise timings of ball movements to predict where the ball will land. These computers were banned long ago in Nevada but are still legal elsewhere. Roulette has also been beaten by exploiting wheels that have loose parts which dampen the bouncing of the ball and make the ball more likely to land in a particular region, and this is perfectly legal if the wheel was like that when you found it.


A) "losing 100% of your bet" is not the same as "losing 100% of your bankroll". You may be confusing the two.

B) If you bet some fraction between 0% of your current bankroll and Kelly, you should never lose 100% of your bankroll and the average growth should be positive. Which means your bankroll should trend upward in the long run. (I say "should" because there are a number of caveats.)

(edit: Another possibility is that you're imagining the size of the bets to all be the same. E.g. you go to a casino with $100, and bet $1 on every round. But professional gamblers typically adjust the size of their bets according to their *current* bankroll, not their *original* bankroll.)

C) you may be interested in reading about Martingale Strategies [0]. I.e. "double or nothing" seems profitable until you realize that it only works if your bankroll can absorb arbitrarily large drawdowns. Gamblers do not have infinite bankrolls irl, so they go bankrupt.

[0] https://en.wikipedia.org/wiki/Martingale_(betting_system)
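For point C, a minimal martingale sketch of my own (fair coin, double the stake after each loss) shows how the doubling outruns any finite bankroll:

```python
# Martingale on a fair coin: bet `base_bet`, double after every loss, reset
# after every win. With a finite bankroll, a long enough losing streak
# eventually demands a bet you cannot cover.
import random

def martingale(bankroll=1000, base_bet=1, rounds=10_000, seed=0):
    rng = random.Random(seed)
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:       # cannot cover the next doubled bet: stuck
            return bankroll
        if rng.random() < 0.5:   # win: pocket the stake, reset to the base bet
            bankroll += bet
            bet = base_bet
        else:                    # loss: double up and try to win it back
            bankroll -= bet
            bet *= 2
    return bankroll
```

With $1000 and a $1 base bet, nine straight losses (1 + 2 + ... + 256 = $511 gone) already leave you unable to cover the next $512 bet, and streaks like that are routine over thousands of rounds.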


Re: A. The point is not that you'd lose 100% of your bankroll, but just that you could never be profitable. If you make small bets then you're merely slightly unprofitable. The question was about profitability


You said the reason it can't be profitable is because you'd eventually string together enough back-to-back losses that you wouldn't have enough to have a good chance of coming back. But if you make small bets, then it takes many more back-to-back losses to reduce your bankroll to something you can't come back from. (And if it's a positive-EV bet, you should average more back-to-back wins than losses.)


People keep mistaking what I wrote for saying you're going to lose 100% of your bankroll. I didn't say that, I merely said gambling was long-term 'unprofitable'. Obviously if you make small enough bets you can make sure you don't lose the whole bankroll- that's not the same as noting how high your EV would have to be in order to be actually profitable


Yes, but thefance's point was that, absent that "losing 100% of your bankroll" assumption, it is not clear why occasionally losing 100% of your bet _must_ end in you losing money. As long as you are not betting your entire bankroll, then you can be profitable as long as you win often enough. The multiply-recommended "Kelly Betting" is _exactly_ about how to bet and not eventually lose all your money. It would be worth reading into (Scott has an article on it, although it's meant in the context of x-risk).

But here is an example:

You have a bankroll of $200. You, because of your amazing edge, can win 60% of your bets. For simplicity's sake, every one of these hypothetical bets will be "double or nothing": if you win, you get your stake back plus an equal amount (a $10 bet returns $20), while a loss wipes out that stake. In other words, when you lose a bet, you lose exactly as much as you gain when you win one.

If you bet half your bankroll, spread across ten $10 bets, then given your 60% win rate you will lose 4 bets ($40 wiped out) and win 6 ($60 gained, on top of the $60 in returned stakes), so the $100 wagered comes back as $120 and your bankroll ends at $220.

As opposed to the strategy where you have a 95% win rate, but you make a series of sequential bets, every single one of which you bet your whole bankroll.

Under this latter strategy, yes, you _will_ eventually go bankrupt, no matter how good your win rate is. But if you are betting a small amount of your bankroll across a large number of bets, then your gains will be approximately equal to your win rate (modified by complicating factors like varying odds, etc.)

This is the exact scenario behind an index fund. An index fund is a whole bunch of parallel bets in different stocks (or bonds, or whatever instrument the fund is investing in) each of which uses a small amount of your total bankroll. Some of those bets are going to lose and some are going to win. But on average, index funds go up.

Index funds are proof-by-existence that making a bunch of bets each using only a portion of your bankroll can, in the long run, be profitable.


It is not like an index fund, because publicly traded companies rarely go all the way to 0, whereas this happens all the time with sports bets. And the upside is much much much higher- companies like Monster and Phillip Morris have, I believe, returned some 25,000% since the 90s. Obviously these massive outliers, along with the FAANGs, pull the index up in a way that's impossible in betting- no bet could ever return a fraction of that.

The point of my original question was to note that over a longer series of bets (i.e. more than 10), losers will be clustered together- so now you have to come back from a much deeper drawdown. Over say a hundred bets you could have 4, 5, 8 or 10 losers in a row, for instance. I'm not going to re-explain how a 50% loss requires a 100% gain just to get back to where you were, etc.

Given the relatively higher level of STEM knowledge here on ACX, I didn't think that people would be arguing that sports betting (!) could have a positive expected value. But here we are


I think I see what's going on now. Your reasoning goes something like:

> Given that a -10% drawdown followed by a +11.11% upswing results in no change, gambling doesn't follow the normal rules of addition and subtraction. Since losses are *intrinsically* more important than gains, shouldn't gambling just *always* have net-negative EV?

But the magic of ln(x) is precisely that it represents a bridge from multiplication to addition. (Or inversely, the magic of e^x is precisely that it represents a bridge from addition to multiplication.) This lets us reason about scenarios like sequential gambling events (i.e. where "bankroll = EV * EV * EV") in a commensurable manner (i.e. "ln(bankroll) = ln(EV) + ln(EV) + ln(EV)"). This is why Kelly maximizes the expected *natural log* of your bankroll. Or equivalently, why Kelly maximizes the average *growth-rate* of your bankroll.

Another way to look at this is to notice that the "intrinsic advantage" of a drawdown over an upswing is proportional to how far you move away from your original position. E.g. -1% and +1% are practically identical. If you lose -1% and then gain +1%, your final position is ~100% of your original position. On the other hand, -99% and +99% are very different beasts. If you lose -99% and then gain +99%, then your final position is ~2% of your original position! This is why there's a sweet spot between "bet large, to reap large upside-risk", and "bet small, so that the intrinsic advantage of drawdowns doesn't dominate the Kelly Formula".

From the log perspective, the magnitude of a -99% drawdown isn't commensurable with the magnitude of a +99% upswing. Because -99% is viewed as "multiply by 1%" and +99% is viewed as "multiply by 199%". ln(.01) [which equals -4.6] is way bigger in magnitude than ln(1.99) [which equals 0.69]. And that's why "50% chance of -99% downswing, or 50% chance of +99% upswing" is a real stinker. Kelly already prices the "intrinsic advantage of drawdowns" into the calculation.

And yes, it's common to find yourself in scenarios where the only winning move is not to play. In such cases, Kelly is negative. In real life, this idea manifests in the adage "the house always wins". (Casinos gotta make a profit, after all. The reason customers put up with -EV is either the entertainment, the addiction, or sheer financial ignorance. Or it's a PvP game like poker, where the best players swindle money from weaker players.)

(k = p/a - q/b, all the rest is commentary. Though admittedly, I'm dissatisfied with the internet guides I learned from. So I have half a mind to author a post which explains Kelly from the ground up, in my own terms.)
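Putting the closing formula and the -99%/+99% example into code (my own sketch; `kelly` follows the stated k = p/a - q/b form, with a the fraction lost on a loss and b the fraction won on a win):

```python
import math

def kelly(p, a, b):
    """Kelly fraction k = p/a - q/b: win probability p, lose fraction a of
    the stake on a loss, win fraction b of the stake on a win."""
    q = 1 - p
    return p / a - q / b

# A 51/49 double-or-nothing bet (a = b = 1) recovers the familiar 2% stake.
k = kelly(0.51, 1.0, 1.0)   # 0.02

# Small symmetric swings nearly cancel; huge ones do not:
after_small = 0.99 * 1.01   # -1% then +1%  -> ~99.99% of the original position
after_big = 0.01 * 1.99     # -99% then +99% -> ~2% of the original position

# Betting everything on a 50/50 shot at -99%/+99% has negative log growth:
growth = 0.5 * math.log(0.01) + 0.5 * math.log(1.99)   # about -1.96
```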


But you have to very precisely know what your winning % is in order to size the bets under Kelly, which seems real-world impractical. You could say it's X% based on your last couple hundred bets, but there's a lot of volatility in there, and your winning % may change over time. Obviously you don't know that your winning % has degraded until you've experienced a lot more losses! Meanwhile you were using a Kelly bet size based on the wrong initial calculation. I don't think saying 'I know precisely that I win 54.6% of the time' is realistic IRL


Are you familiar with Kelly betting? If not, reading up on it should answer your question.


Not sure I follow this. To use a simple kind of bet, let's say I'm betting on basketball (although football or whatever works also) games with a spread. If the Knicks play the Lakers, the Lakers might be 5.5 point favorites. That means I can bet the Lakers and get paid if they win by 6 or more, or I can bet the Knicks and get paid if they win or lose by 5 or less. If I do win, I'll likely (although this can also vary a bit) double my money - if I lose, I lose it all. So if I had some way to guess the right side of that like 60% of the time, it isn't clear to me why I wouldn't win money in the long term. I don't need to like double or nothing every time, I can just keep betting some amount much less than my liquidity and win more of those bets than I lose.


Because the traditional both-sides-equal bet gives a profit of 10 for every 11 wagered, "all" you have to do to profit from spread betting like that is be right 53% of the time in those bets. And yet out in the real world almost nobody can do it. Getting 60% of those right would be stratospheric. Anyway, the key is that the odds offered are always just a bit less than the (believed) underlying probabilities would justify.
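The breakeven rate quoted above falls straight out of the 10-for-11 payout; a quick check of mine:

```python
# Risk 11 to win 10: at the breakeven win rate p, expected profit is zero:
# p * 10 - (1 - p) * 11 = 0  =>  p = 11/21.
breakeven = 11 / 21                 # ~0.5238, i.e. about 52.4%; 53%+ is profitable
ev_at_60 = 0.60 * 10 - 0.40 * 11    # +1.6 units per 11 risked at a 60% win rate
```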


I think it would be helpful if you learned a little more about drawdowns and had some broader finance knowledge. The losses always hurt more than the gains. Say you had $1000 and then suffered a 20% loss. Now you'd need a 25% gain just to get back to where you were. 20% loss = 25% gain. A 50% loss, a 100% gain to get back to where you were. And so on. This is what is meant by drawdowns, and why all financial strategies report what their max drawdown was.

So you lose 100% of a bad bet. But now let's say that you're doing multiple bets over a period of time. Eventually, you'd probably string multiple losses together, right? So say you lose your whole bet 3 or 4 or 5 or more times in a row. You'd need a pretty extraordinary set of wins just to get back to where you were- make sense? This is what I was asking about
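The recovery arithmetic above generalizes to a one-liner (my own sketch):

```python
# Gain needed to recover from losing fraction `loss` of your bankroll:
# (1 - loss) * (1 + g) = 1  =>  g = loss / (1 - loss), which blows up
# as the loss deepens.
def recovery_gain(loss):
    return loss / (1 - loss)

# recovery_gain(0.20) -> 0.25 (a 20% loss needs +25%)
# recovery_gain(0.50) -> 1.00 (a 50% loss needs +100%)
```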


Can someone explain to me what’s happening in Ireland with the riots and free speech crackdown? The US media isn’t always great at covering things outside the US (although I might argue not always great at covering things inside the US either). The riots were portrayed as just a bunch of xenophobic Irish, but when I looked into it, it seems like a cost of living crisis might be driving the riots. My knowledge of Ireland is extremely limited, so please let me know if I've made any errors here:

- About 1/5 of the Irish population is foreign born, which is culturally a shock to a country that has traditionally been a country of emigration rather than immigration (correct me if I’m wrong here)

- There’s a cost of living crisis and housing prices are very high

- People are upset that refugees are getting housing paid for by the state but many Irish families are financially struggling

- An immigrant stabbed some kids

- There were some riots

- They passed some draconian laws making it illegal to own improper memes

Is the crackdown on free speech driven by the riots or something else? And are the riots really driven by the stabbing or just xenophobia?


Ed West wrote about this, saying:

> Ireland’s rebellion problem

> The country’s crisis stems from conformity

https://www.edwest.co.uk/p/irelands-rebellion-problem

Dec 4, 2023·edited Dec 4, 2023

I don't understand it either. I'm sort of linking it with the push to remove neutrality, which is another policy Fine Gael (one of the two parties in government) has been trying to get through for decades. Fianna Fáil (the other party in power) are happily going along with them:

https://www.irishtimes.com/ireland/2023/11/22/the-triple-lock-a-guardrail-of-neutrality-or-an-abandonment-of-sovereignty/

What the heck is going on in Dublin? In a very confused manner, as I understand it (and it's only my view):

(1) The Irish are and can be racist. We like to think we're not, but we can be just as bad as anyone else

(2) Ireland has mostly been "we're the immigrants to other countries". When, during the Celtic Tiger years, we suddenly became the country where emigrants were coming to, it shook up a lot of people (even though those immigrants, being Polish, were white)

(3) Ireland also has been taking in refugees/asylum seekers from Romania (with the associated problems around the Roma/Gypsies: yes, they are the targets of racism throughout Europe, but yes, there are also professional begging and criminal gangs in that community), Vietnam, and elsewhere. We've also had economic immigration, such as the Poles and other Eastern Europeans, Brazilians working in meat processing, and foreign doctors, nurses and health care workers (our health care system, to be blunt, would be even more chaotic without them as staff). Over the past couple of decades, and increasingly visibly in recent years, we've been getting a lot of refugees from Africa

(4) This has been a huge change, since we have been a monoculture for so long. It has not been a process without strain

(5) Again, over the decades, there have been attempts by far-right/white nationalist groups in Britain and elsewhere to get a foothold in Ireland by appealing to a common white background, including appropriating 'Celtic' imagery. These are real white nationalists, not social media "you voted for the Republican Party once, you're a white nationalist" American labelling

(6) We also have a lot of imported progressivism. Remember this when we get to the hate speech part

(7) Our systems are groaning under a lot of strain. Somehow there is a ton of money in the economy but people on the ground are not seeing it. The working-class areas and rural areas feel/are neglected, and it's easy to appeal to prejudice with "this immigrant just arrived, is getting favourable treatment, can jump the queue for services, and you the native citizen get nothing"

(8) Before the stabbing of the kids outside the childcare centre, there was a public murder trial involving an immigrant. The Ashling Murphy case, the guy is Slovakian, and by testimony of what happened on the day of the crime, it's pretty clear he was wandering around looking for any woman to attack and she just happened to be the unlucky victim. So, you know, yikes

(9) Then we got the random (so far as we know, details have not been released) stabbing of three kids under six years of age and two adults outside a creche, by (presumed) an immigrant

(10) Yeah, that kicked off a lot of the anti-immigrant/far-right protest groups, we got the riots on the streets in Dublin, and of course the usual chancers and criminals and plain destructive louts came out to play with this excuse

(11) That resulted in scenes we'd only ever seen on the news for Northern Ireland or the USA. Buses on fire, rioting on the streets - this was not something usual. https://www.youtube.com/shorts/XT6k9bYp2pU

(12) There had *already* been criminal violence incidents in Dublin in broad daylight, involving attacks on tourists (not by immigrants, these are our own home-grown scumbags)

(13) So we have a heady mix of public unease about crime, calls for increased police presence, economic turmoil, anti-immigrant sentiment, and high profile cases of violent attacks by immigrants, and it gets topped off with violent protests and destruction in the streets of the capital

(14) We have a police force that has long wanted increased powers and more weapons. Irish police are, in the main, unarmed. Now they're getting tasers and body cams and increased powers, and the public are broadly behind that

(15) We have a government that is weirdly both pro-social liberalisation and progressive ideas (hence the hate speech laws) *and* pro-censorship from their past as social conservatives, and cases of wiretapping of journalists' phones etc. in the past. A chance to bring in crackdowns on "free speech" is catnip for them because the public very much is demanding Something Must Be Done, 'free speech' as enabling hate speech is a hot topic, and this kind of thing enables them to be seen to be attending to public outcry while extending censorship and control over the media and ordinary citizens. This is why I'm also tying in the campaign to get rid of neutrality here, because it makes no sense otherwise than as political storing up of favours: the Irish armed forces are too small and negligible to make any realistic difference getting involved abroad outside of UN peacekeeping forces, but junking neutrality as a way of getting into an EU defence forces pact and making ourselves available for American military bases and so forth? Very attractive to our (centre) right parties.

What the eventual outcome of all this will be, who knows?


Many Thanks!


> The Ashling Murphy case, the guy is Slovakian

Oh shit.

Wait, let me find some more info. Five kids, fake disability, prosecuted for underage sex... must resist making a racist hypothesis based on stereotypes... I am not a bad person, I am not a racist... all media just describe this guy as an unspecified "Slovakian", so perhaps I was wrong after all...

https://www.dailymail.co.uk/news/article-12734717/Predator-Jozef-Puska-murdered-Ashling-Murphy-prosecuted-underage-sex-stalked-four-women-day-knifed-primary-teacher-death.html

Well, so much for a fellow Slovakian whose ethnicity must not be named outside of Daily Mail.

Before you close your borders (which I would totally understand) to all people from Slovakia, let me give you a hint -- when a guy comes from Slovakia, and he has five kids and claims disability benefits, keep an eye on him. The rest of us are mostly harmless (at least by the Eastern European standards).

Sorry for exporting our finest. :( That said, Slovakia was long criticized by other European countries for intolerance against our most maladaptive minority. So I am really curious how the more civilized countries would deal with the same problem (now that the problem is allowed to freely travel to your neighborhood).

Shit, another infamous Slovakian in Ireland: https://www.independent.ie/regionals/kerry/news/man-charged-with-stabbing-woman-in-tralee-remanded-in-custody/41986617.html

Expand full comment

I wouldn't take that as blanket "all Slovakians are bad", just the kind of rebuttal to "not all men".

There's a lot of discussion around women's perceived fear of assault and the reality, and that men are more likely to be murdered. But I still think that there isn't quite the same level of threat; this guy was going around following random women all day. I don't think there's the same kind of case about men following random men for nefarious purposes (maybe "likely robbery victim" but not the same "can I rape/murder this person?" purposes).

Could be wrong on that one, of course.

Expand full comment

I think Viliam may be too polite to point to the elephant in the room.

I on the other hand, am a churl:

> killer Puska is of Romany gypsy descent

Because, of course he is.

Expand full comment

Thank you! It is the double standard that rubs me the wrong way.

It would be bad to say that the guy is a gypsy, because it might create or reinforce a prejudice against gypsies. The risk of creating a prejudice against Slovakians is, of course, perfectly acceptable! (And having a negative opinion on all men is just common sense.)

I'd prefer if people imagine Slovakians like this: https://www.imdb.com/name/nm0298842/

Expand full comment

Thanks for the summary!

Before the riots, I noticed that the government had been planning to loosen censorship laws and disband the Censorship Board. I can't find any links to the newly-proposed restrictions on hate speech in the mainstream Irish press. I thought there were already laws against hate speech. Do you have any links to what the new laws—if implemented—would do? (Google just brings up links to polemical opinion pieces from organizations like National Review and potentially slanted stories from sources like Fox News.)

BTW, where can I find the list of books that are currently banned by the censorship registry? The link in this article takes me to the Irish Immigration website. Maybe they think immigrants will bring in naughty books?

https://www.irishcentral.com/news/politics/censorship-books-magazines-ireland

Expand full comment

There's a lot of talk and rumour and not much substance as yet; I'm going by what is reported in the media and that changes from day to day.

Currently, there is the 1989 Act in force:

https://www.irishstatutebook.ie/eli/1989/act/19/enacted/en/print.html

What is proposed and being discussed, and awaiting passage, is a new Bill from last year:

https://www.oireachtas.ie/en/bills/bill/2022/105/

Now, obviously, if this was mooted in 2022 then it can't be in response to what happened this year, but it does mean that there is more impetus to actually pass it, and to tighten up enforcement.

As to the Censorship Board - yes, that's a relic of the past social conservatism. Now that we're a Modern Secular Enlightened State (and indeed, the state of us!) it's an embarrassing relic of the past which is due to get the boot. I don't think it's done any deciding on Naughty No-Nos for a *long* time. But it is still in existence, and Concerned Citizens can still make complaints:

https://www.citizensinformation.ie/en/government-in-ireland/irish-constitution-1/censorship/

"How often are books banned?

Books are not banned very often. The last time a book was banned was in 2016, which was the first time since 1998. Well-known books that have been banned include The Country Girls by Edna O’Brien in 1960 and The Dark by John McGahern in 1965."

2016 ban:

"In March 2016 the Irish Censorship board issued its first ban in 18 years. The book The Raped Little Runaway by Jean Martin was deemed "indecent or obscene" by the board. Justifying its decision, the board pointed to the several instances of rape of a minor in the book."

The available list only seems to go up to 2017, and since abortion has since been legalised (in certain cases) now, any books banned for mention of abortion are now unbanned:

"The Register currently contains nine books with the earliest dating back to 1942 and the most recent from 2016.

There are currently 264 prohibited periodicals dating from the earliest in 1930 to the most recent in 2003."

https://www.gov.ie/pdf/?file=https://assets.gov.ie/121521/dde0cddc-6ed8-463f-bcfe-64c846788ec6.pdf#page=1

"After the 2018 repeal of the Eighth Amendment, which provides for the legal termination of pregnancy, references to abortion were removed from the Censorship of Publications Act 1946. In 2019, the Department of Justice confirmed that several publications providing information about abortion were to be removed from the Register of Prohibited Publications."

But now the New Puritanism is en vogue. No, we won't ban Sexy Books, but does that Sexy Book contain a slur or something from the new list of no-no words and concepts? Then it should be banned! So will we get a revamped version of the Censorship Board for the new orthodoxy? 🤷‍♀️

Expand full comment

Re slurs, a response from Bard:

Do LLMs consider "toaster" to be a slur?

Whether or not the word "toaster" is considered a slur is a complex question that depends on the context in which it is used. In some contexts, the word "toaster" can be used in a derogatory way to refer to robots or artificial intelligence. For example, in the science fiction television series Battlestar Galactica, the term "toaster" is used by humans to insult the Cylons, a race of sentient robots.

However, the word "toaster" is not always used in this way. In other contexts, it can be used in a more neutral or even positive way. For example, the word "toaster" is sometimes used to describe people who are very good at following instructions or who are always on time. In this context, the word "toaster" is not intended to be offensive.

Ultimately, whether or not the word "toaster" is considered a slur depends on the intent of the person who is using it. If the person is using the word to demean or insult someone, then it can be considered a slur. However, if the person is using the word in a more neutral or positive way, then it is not necessarily offensive.

Here are some additional factors to consider when deciding whether or not the word "toaster" is a slur:

The tone of voice in which the word is used

The body language of the person who is using the word

The relationship between the person who is using the word and the person who is being referred to

If you are ever unsure about whether or not the word "toaster" is appropriate to use, it is best to err on the side of caution and avoid using it.

Expand full comment
Comment deleted
Expand full comment

I think so :-)

Expand full comment

Thanks for writing this. This comment was more lucid and comprehensive than any professionally written article I’ve seen on the riots.

> A chance to bring in crackdowns on "free speech" is catnip for them because the public very much is demanding Something Must Be Done, 'free speech' as enabling hate speech is a hot topic, and this kind of thing enables them to be seen to attending to public outcry and extending censorship and control over the media and ordinary citizens.

I’m surprised that the public is demanding authoritarian measures. Perhaps in Ireland those laws aren’t seen as authoritarian, though. It’s weird that imported progressivism (I assume imported from the US) is resulting in increased authoritarian measures curbing hate speech, when here in the US, where we homegrew that progressivism, it hasn’t resulted in hate speech laws.

My question on this is why does the Irish public think they suddenly need these hate speech laws now, but not, say, a decade ago? I wonder if there will be a slippery slope here where they enact stricter laws over time.

Expand full comment

As I said, there were previously some very bad and public examples of violent attacks on people in the city in the middle of the day, so there was already a public perception of "not enough Gardaí on the streets". Add in the riots and the extreme damage done with burning buses and so forth, on top of the two prominent stories involving immigrants committing violent assault, and the public was very ready for "Something must be done, stop pussy-footing around with bleeding-heart policies, put the police out on the streets in force and give them the powers and equipment they need to crack down on this".

The government and various interested parties can then represent the riots as the result of hate speech and far-right and anti-immigrant groups, and that's okay as far as it goes. There *are* anti-immigrant and white nationalist groups out there stirring up trouble. Social media *is* being used to spread scare stories, rumours, and whip up outrage, and to organise "come along to our protest/our march/our meeting about doing something about this", which can then end up in riots the same way as BLM 'peaceful protests' did.

I hesitate to say it's the influence of the EU, but there certainly are laws around social issues that have contributed to liberalised attitudes in Ireland (e.g. gay marriage) and part of that is the move towards cracking down on things like "hate speech":

https://www.citizensinformation.ie/en/justice/criminal-law/criminal-offences/law-on-hate-speech/

We're not escaping the arguments over what happens on social media and where are the limits of free speech and the rest of it. And yeah, we have a lot of imported progressivism, see all the anti-Trump protests which really were none of our business here in Europe, but we had protest marches all the same:

https://www.thejournal.ie/dublin-trump-protests-4671409-Jun2019/

This despite the fact that he owns a golf course/hotel resort here:

https://www.irishtimes.com/ireland/2023/05/04/trump-to-call-hotel-doonbeg-on-the-ocean-because-we-have-the-ocean-and-nobody-else-does/

I don't think these are being seen as authoritarian measures, even though they're so vague they could easily go down a slippery slope. People perceive a crisis of incitement to violent behaviour and the bare-faced audacity of criminals attacking tourists in the capital in broad daylight, with the police hampered by lack of legislation and powers to deal with all this. So things like 'regulating hate speech online' are an easy sell when framed as tackling xenophobia, attacks on immigrants, and destruction of public property in riots.

It's not false that groups did make a point of the creche assailant being an immigrant, for instance, but it's also true that one of the people who responded to hold him until the police came was himself an immigrant. So the public don't want, in the main, "it's all the fault of immigrants" far-right stuff, and hence the move towards "if this law is needed to stop these whackos, go ahead".

And the tangle between the very socially conservative past of the two parties in power, particularly Fine Gael, and their sudden swerve towards social liberalism (support for gay marriage, trans rights and so forth) has resulted in a weird mixture: the strong pro-business, pro-law-and-order instincts of the party melding with the new social justice/woke elements in society, both of which - ironically - are very happy to engage in censorship of wrong speech and wrongthink.

I'm somewhat alarmed by the guards getting tasers, for instance. But the mood has shifted towards "hell yeah, the criminals are getting away with murder":

https://www.independent.ie/irish-news/call-for-tasers-to-be-issued-to-all-frontline-gardai-as-officers-warn-armed-support-can-be-hours-away/41685444.html

Expand full comment

I follow a few youtube channels that cover geopolitics. One video I remember discussed Ireland's economy.

On paper, Ireland is doing fantastic. But in reality, the boom is driven by high-skill immigrants hired by tech giants (who are there for the low taxes). Meanwhile the native population resents that all the money goes to the tech giants while the cost of housing/living becomes unreasonable for the natives.

I imagine this played a role in the riots. So there's probably a bit more going on than just Ireland becoming part of "europe-stan".

https://www.youtube.com/watch?v=fKmem7Epk8E

Expand full comment

I wonder if Ireland (or at least Dublin) is kind of becoming like San Francisco with transplants driving out all the locals? I remember the tech bus protests in SF getting pretty feisty.

Expand full comment

I think your summary of factual points about Ireland is correct.

Clearly the stabbing is the spark that started the riots, but there were many existing underlying tensions. No doubt, xenophobia plays a part for some rioters, but the dismissive attitude of the government and media towards legitimate concerns of citizens about the effects of mass immigration and inability to address a cost of living crisis seem to be bigger factors.

According to some commentators, one point of contention is that many news reports didn't include the fact that the stabber was Algerian (although living in Ireland for many years and now a naturalised citizen). When people found out this information through social media, it seemed like a deliberate cover-up in order to downplay or deny any possible links between mass immigration and increased crime. Hopefully you can see how, combined with dismissal of concerns about immigration and increasing state censorship, this in particular might make people very angry.

Spiked Online (leans libertarian left politically, very anti-censorship, anti-woke) has published a few articles discussing the riots, which you may find gives you a more complete picture:

https://www.spiked-online.com/2023/11/24/ireland-and-the-fury-of-the-cancelled/

https://www.spiked-online.com/2023/11/28/after-the-dublin-riot-the-free-speech-crackdown/

https://www.spiked-online.com/video/we-need-to-talk-about-the-dublin-riots/

Expand full comment
Comment deleted
Expand full comment

Spiked is the "horseshoe theory of politics" in action. Former communists (authoritarian left) sliding into the authoritarian right.

Expand full comment

They aren’t authoritarian.

Expand full comment

Thanks for that. They are, indeed, in many respects right-wing libertarians (except they hate e.g. transgender people; they are very illiberal towards minority groups they don't like).

It makes the horseshoe wrap around from the Revolutionary Communist Party even more astonishing.

Expand full comment

I don’t read spiked much. There’s a lot more opposition to trans ideology (not trans people) than is acknowledged. They seem to be pretty good on free speech etc.

Expand full comment

The categorisation of Spiked's political positions is contested. Wikipedia says:

"There is general agreement that Spiked is libertarian, with the majority of specialist academic sources identifying it as right-libertarian, and some non-specialist sources identifying it as left-libertarian."

...and there are pages of argument about this on the Talk page.

The publication is the successor to Living Marxism. Many of its editorial staff consider themselves to be on the left. OK, dictators often call their governments democratic. But Spiked takes a traditionally "left-wing" editorial position on many issues, although this often puts it in opposition with the position held by (say) the Guardian:

* It often publishes articles sympathising with poorer working class people, criticising the government and focusing on the housing crisis or cost of living. Representing the working class is definitionally left-wing. However, this puts it in opposition with left-wing policies on environmental issues (which impose a tax burden) and immigration (which, in the short term at least, increases labour supply and housing demand, pushing down wages and pushing up housing costs).

* It's consistently in favour of abortion and women's rights. The latter puts it in opposition with the left-wing trans lobby.

* Its most distinctive position is its extreme support for free speech. In the 1960s-1990s, this was seen as a left-wing position partly because the religious right was advocating for censorship.

* It consistently opposed Donald Trump on the basis of his policies, while also criticising his coverage by the mainstream left, which focused on identity politics issues.

* Although it supported Brexit, which is largely viewed as a right-wing issue, remember that so too did a large number of traditionally Labour-voting working class voters in the "red wall".

* It frequently publishes articles on the latest cancellations/prosecutions of people for racist speech, which you would usually associate with the sensationalist right-wing press. However, these usually start with a strongly-worded denunciation of the racist before re-iterating the editorial line on free speech.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

I think "left libertarians" are mythical animals. Maybe they're considered to be on the "left" because they don't want the state interfering in the bedroom and taking away their cannabis, but those aren't really defining issues for the Left (except for abortion rights). Would a Republican who supports cannabis and abortion rights (and there are few of those rare birds) be called a left-leaning Right-winger?

Expand full comment

I disagree. I think that solidarity with the working class (typically expressed as support for trade unions, workers' rights and public spending on welfare and essential services, such as healthcare) is, more than anything else, what defines "being on the left".

I suppose you could also have a more tribal definition of left/right, where "being on the left" means you broadly agree with the cluster of policies/positions currently held/discussed by people in political parties that at some point in history were "on the left" by my definition. By the tribal definition, you can easily conclude that "anti-woke" means "on the right".

To answer your question facetiously, as a Brit living in a constitutional monarchy, a republican is someone who opposes the bourgeois nobility, so is probably on the left.

More seriously, I'd call a Republican who supports cannabis (on the basis of individual rights to bodily autonomy and limited government interference) a libertarian Republican and, if he was right-wing, I'd say he was on the libertarian right.

I'd call a Republican who showed concern for the interests of working class voters, such as those facing redundancy in coal mines or the Rust Belt, a left-leaning Republican, as he'd be comparatively left in contrast with the rest of his party. If he also supported high taxation, welfare spending, state-funded healthcare and increased workers' rights, I'd say he was on the left (and question why he was in the Republican party).

Expand full comment

I think we agree. Although the definition of what's "Left" seems to be changing in the US, I consider myself to be a Leftie precisely because I support unions, workers' rights, and public spending on a social safety net. I don't see Libertarians calling for these things. So that's why I have trouble believing in left-libertarians. If they believed in these things, they're not really libertarians. ;-)

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

> I think "left libertarians" are mythical animals.

I'm pretty sure you just described our host.

Expand full comment

I think that @astralcodexten would have to answer that question.

Expand full comment

Inspired by Bess and Jake, I decided to start a Substack.

https://raggedclown.substack.com/p/what-is-the-meaning-of-it-all

I'll be posting a mix of entry-level philosophy for people who are not interested in philosophy (yet) and musing about living with terminal brain cancer — which is another kind of philosophy, really.

Expand full comment

I've subscribed.

The second post covers the definition of knowledge, and Gettier's objections to the 'justified true belief' definition. I had forgotten Gettier's example(s), so I appreciated your sheep/dog example. I think I can memorize this one!

Adapting Bayesian probability into an 'applied epistemology', as per LessWrong, seems like an attempt to evade settling on a definition of knowledge. Is there an example of how actually settling on a definition makes the world easier to interpret? (I.e., what is the significance of fixing an applied epistemology with which you're comfortable within an overarching epistemology?) Or is it defining for the sake of defining, for the sake of continuing the tradition of conceptual analysis seemingly started by Socrates/Plato?

Expand full comment

I was taking justified-true-belief (JTB) as a starting point because that's where we are.

Given that, I see three options for the J.

1. J requires 100% proof. But absolute proof is vanishingly rare outside of formal systems. It would be tantamount to saying that there is no such thing as knowledge.

2. J can be sloppy. If the dog looks a bit like a sheep from a distance, I am justified in believing it's a sheep.

3. J follows a 'beyond a reasonable doubt' standard (or similar). It could be Bayesian but it doesn't have to be. The law doesn't require it and science manages without an expectation of 100% proof.
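Option 3 can be read in Bayesian terms, and a toy sketch (my own illustration, not from the comment; all numbers and the 0.99 threshold are made up) makes the idea concrete: a belief counts as "justified" once its posterior probability clears a chosen "beyond a reasonable doubt" threshold.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes update: P(H|E) from the prior P(H) and the
    likelihoods P(E|H) and P(E|~H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

THRESHOLD = 0.99  # arbitrary stand-in for "beyond a reasonable doubt"

# "That white shape on the hill is a sheep": start agnostic, then
# update on two pieces of evidence (likelihoods are invented).
p = 0.5
p = posterior(p, 0.9, 0.2)   # it looks woolly from a distance
p = posterior(p, 0.8, 0.1)   # the farmer says he keeps sheep there

justified = p >= THRESHOLD
```

With these invented numbers the posterior lands around 0.97, short of the 0.99 bar, so the belief would not yet count as justified at that standard — which mirrors the sheep/dog case: "looks like a sheep from a distance" alone is weak evidence, and where to set the threshold is exactly the open question in option 3.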

I think it's useful to have a definition for everyday language. I think a lot of the ills of the social media world arise from unjustified beliefs. Perhaps if we taught K=JTB in school, we'd be better equipped to deal with social media nonsense.

Details:

https://raggedclown.substack.com/p/but-do-i-really-know

Thanks for subscribing! I've had more subscribers in a week of Substack than I had in 17 years of WordPress.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

I don't understand why there is a controversy surrounding EA. It's basically just using scarce dollars in an optimal way to help people, right? Like anything, one could complain about a few of the characters involved in its implementation, but as for the idea itself, EA is surely beyond reproach, isn't it?

Expand full comment

Some people seem to call out EA as a motte-and-bailey philosophy, where the motte is "we should donate to charities that do most good", and the bailey is "and the charities that do most good are: malaria nets, veganism, and talking about AI doom".

Other people just don't like the idea of someone donating (hey, I don't want to donate my money, but if many people do then it could make me look like a bad person), or the idea of donating to strangers (how can they reward me for being good), or the idea of applying reason to charity (reason is cold, but true goodness is irrational and comes from the heart), etc.

Expand full comment

Yeah, that's the thesis of my recently-published philosophy paper that I summarize here: https://rychappell.substack.com/p/why-not-effective-altruism

Expand full comment

My admittedly niche critique of EA is that it focuses so much on what is quantifiable that it ends up free-riding on the infrastructure that other people have built up. Cost-effectiveness analysis generally assumes that we already have the capacity to carry out certain programs like distributing vaccines. This is a safe assumption in high-income settings where clinical sites are plentiful and supply chains are strong, but that capacity might need to be built or strengthened in low-income settings. So, even though an omniscient DALY calculator might be able to determine that building a new clinic or investing in the medical supply chain in a low-income country might be more cost-effective than distributing vaccines, EAs and other folks who are invested in cost-effectiveness analysis will tend to choose vaccine distribution because it's more quantifiable. (I've written about this in the academic literature here: https://gh.bmj.com/content/bmjgh/7/3/e007392.full.pdf)

This goes for conceptual and intellectual "infrastructure" too. I saw a post the other day saying that eating meat is not defensible but if avoiding meat is costing you time that you could spend in a more impactful way, then you shouldn't worry so much about avoiding meat. The issue I take with this is that avoiding meat isn't just about purity, it's about creating opportunities for other people to more easily avoid meat by communicating to restaurants and grocery stores that meat-free options are in demand. Similarly, there are animal charities that spend a lot of money to work with sick rescue animals. I don't think this would satisfy any cost-effectiveness analysis, but it creates/reinforces the idea that animal lives have intrinsic value.

tl;dr: EA too often stands for easily-quantifiable altruism, which is very good for some people to do, but would be bad for everyone to do since it depends on the infrastructure that people less concerned with cost-effectiveness have built up.
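The critique above can be sketched numerically (a toy illustration of my own; every figure is invented): a cost-per-DALY ranking only "sees" the measurable benefit, so an option with large but hard-to-quantify capacity gains loses to the easily quantifiable one.

```python
# Hypothetical options: (cost in $, DALYs averted that the
# analysis can actually measure). The clinic also builds supply
# chains and trains staff, but those gains carry no DALY number,
# so they contribute nothing to the ranking.
options = {
    "vaccine distribution": (100_000, 2_000),
    "build rural clinic":   (100_000, 1_200),
}

def cost_per_daly(cost, dalys):
    """Lower is 'more cost-effective' by the standard metric."""
    return cost / dalys

# Rank cheapest-per-DALY first.
ranked = sorted(options, key=lambda k: cost_per_daly(*options[k]))
```

Here vaccine distribution wins at $50/DALY versus ~$83/DALY for the clinic — not because the clinic is worse, but because its long-run benefits are invisible to the calculation, which is the free-riding-on-infrastructure point.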

Expand full comment

Yes, and I think this is also the foundation of the (important) "institutional critique" of Effective Altruism, that it takes too much of the world as given, and just seeks to optimize the good it does within that, rather than trying to change big parts of the world (as political campaigners and anti-racists and leftists and others want to do).

https://www.cambridge.org/core/journals/utilitas/article/institutional-critique-of-effective-altruism/91A0449E2F030BAE417A09E52599E605

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

depends on if you're talking about "spend money on bed nets" EA or "stop the AI apocalypse" EA; the former seems obviously sensible to normal people, although they may bristle at the suggestion that they do that instead of something local, while the latter seems obviously silly to normal people

(and you can guess which aspect of EA is going to get written up in media profiles)

Expand full comment

I don’t know what the current controversy is about, but personally I have always found EA people annoying and arrogant for thinking they can solve anything just by donating money. In reality, most problems can’t be solved this way, because the root of the problem is much deeper than just needing a million dollars. So, in a way, saying “oh, I’ll just donate money optimally” is a way of neglecting the bigger issue that is at play.

To clarify, I’m not saying that EA is stupid; it’s definitely the best way of donating money if you want your donation to count.

Expand full comment

EA is about *provably* effective charity, which has some limits. It works best for interventions whose instances are extremely similar to each other, like malaria nets.

However, suppose you want to prevent wars. What works? People try various things, but wars aren't as similar to each other as cases of malaria.

Sometimes, connections aren't obvious. Who knew that education for women would lower the birth rate? Actually, the Victorians either knew or had a lucky guess. I don't think modern people supporting education for women were aiming at lowering the birth rate.

Expand full comment

The added problem with preventing wars is roughly similar to the problem with preventing anything bad: people only notice when you screw it up (cf. confirmation bias). The old saying among software developers is "Everything's fine; what are we paying you people for??" "...Everything's broken; what are we paying you people for??".

It's tolerable if the solution is easy to perceive (e.g. mechanical failure; plumbing). It's less tolerable when the result looks like... nothing. People would be happier if they saw their military or police or spies actively protecting them Captain America style. They don't see all the bad guys whose plans are quietly foiled by "bad luck", or people who would be bad guys looking at what they're up against and just deciding to do something quieter instead.

Expand full comment

Yes, this! Well-said.

As a lifer in professionalized NGOs who's risen to leadership levels, I'm known as an in-house critic of our sector along exactly the dimensions which EA aims at. (And a couple other dimensions too.) There is plenty to criticize, believe me; haven't stopped pointing out when we have no clothes and won't stop. It's good for us to have the EA movement pushing us on our weaknesses.

EA though overreaches in a couple of fundamental ways which you nicely captured.

(1) "What works? People try various things, but wars aren't as similar to each other as cases of malaria." If solving all the big complicated stuff in our complicated world was as simple to figure out as EAs assume, we wouldn't even be having this discussion by now.

(2) "Who knew that education for women would lower the birth rate?" The mainstream non-profit sector probably relies too hard on the point you illustrated there, in fact in plenty of instances you can scratch the "probably". I wholly agree with EAs about that overall...but absolutism about it is not sensible. And the specific example you used nicely illustrates that when such knock-on effects do pay off the net positive result can sometimes be gigantic.

Also I'll add:

(3) On one dimension EA is rife with exactly what it accuses the established NGO sector of: being insufferable. EA folks talk about people in my line of work being overly certain of ourselves which makes us slow to course-correct; and self-righteous; and often downright pushy about all of it. All of that is accurate way too often. All of that is also exactly how people in the EA movement regularly talk and behave.

Put it this way, in this new collective dialogue I frequently have a strong sense of deja vu.

None of the above prevents me from liking and learning from both groups of people; we are human beings for good and ill and it seems better to maintain some productive humility about that fact.

Expand full comment

"net positive result can sometimes be gigantic"

If lower population growth is so great then saving lives isn't as good as one would naively think, maybe even net negative.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

I would agree that the thing you refer to as EA is pretty much beyond sensible reproach, but - as with "family values" or "social justice" - that's pretty much never the thing people are referring to when they use the phrase. EA nowadays is used, and taken, to refer not just to the idea of using scarce dollars in an optimal way to help people, but to certain specific groups and individuals with specific ideas about how best to do that.

Expand full comment

The idea itself is either trivial ("let's not waste money", duh) or immensely arrogant ("EA is the only movement that can use money effectively", or at least "EA uses money more effectively than anyone else"). If it's the former, then EA deserves no credit; if it's the latter, then EA should be performing significantly better than it actually is. EA, however, thrives on the ambiguity.

Expand full comment

This is a false dilemma. The non-trivial, non-arrogant claim is that we should try to promote the impartial good in a cause-agnostic way. Most people instead prefer to focus on pre-selected good causes without considering the opportunity costs, and outright dismiss socially unconventional means of promoting the impartial good:

https://rychappell.substack.com/p/doing-good-effectively-is-unusual

Expand full comment

I would argue that your formulation is the same as my first one: "let's not waste money". You add "...in a cause-agnostic way" to that, but I'm not sure what that means. If someone thinks that e.g. rescuing stray kittens is the most pressing issue in the world, and he wants to use his money wisely (as most people do), then he's going to painstakingly research his financial allocations to maximize the number of kittens saved. If you say that this is "inefficient" because it doesn't maximize human lives saved, then you're no longer being cause-agnostic.

Expand full comment

It means that your goal is not specific to any particular kind of good, such as kittens, or people. To be cause-agnostic is to be willing to *shift cause areas* if another does more impartial good than what you started with.

Expand full comment

What could possibly cause someone to shift areas? What makes a cause "good" such that someone would try to pursue it? If you can answer either of those questions then you are not cause-agnostic. If you cannot answer those questions, then there's no reason to pursue charity at all.

Expand full comment

Did you read my linked post? Cause-agnosticism does not mean axiological agnosticism (or remaining neutral on the question of what is good in theory). I take the relevant good to be impartial well-being. But I'm cause-agnostic in the sense that I'm open to shifting from global health & development to animal welfare to x-risk reduction or whatever else my evidence ends up suggesting will actually best promote impartial well-being (in expectation).

Expand full comment

In this case, I don't think it's possible for any individual human to be cause-agnostic.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

The thing about EA is that the first part is, in practice, a bar that a ton of charities absolutely fail to clear. There's an enormous universe of low-impact low-efficiency charitable solicitation out there ranging from donations to universities to the United Way. It's genuinely a huge and *non*trivial win for EA (particularly GiveWell) to fight for the basic-yet-unobserved point that "if you care about doing good with your limited charitable dollars, allocate them to causes with high marginal effectiveness per dollar."

This may *seem* trivial as a principle and/or exhortation, except for the part where *most people are obviously not actually doing that.*

Expand full comment

> The thing about EA is that the first part is, in practice, a bar that a ton of charities absolutely fail to clear.

Agreed, and fighting to clear that bar is indeed a noble goal. It's just not a particularly innovative one, nor a universally applicable one. Malaria nets are relatively easy to measure; scientific/medical research, economic aid and political interventions are not. And yes, most people are bad at using their money effectively, but EA has accomplished little in this regard. Instead, it managed to concentrate (somewhat) the impact of those few people who *are* good at managing their money. Which, again, is a worthy achievement (if not a particularly earth-shattering one), and I wish that EA would stop undermining it by their pivot toward long-termism, maximizing one's earning potential (by potentially deferring donations until some future date), etc.

Expand full comment

I have my fair share of gripes with EA, but I've always considered this criticism rather unfair. Most charities have predefined goals and their accounting is often opaque at best. EA really is pretty special in that they are willing to fund basically anything as long as there's a good argument that it's an efficient use of money, and they're quite open about their accounting. This makes the approach neither trivial nor arrogant; they're simply trying a different approach to charity than most others.

Expand full comment

"should be performing significantly better than they actually are" in what way? EA seems to be performing pretty well by the metrics of "effective lives saved per dollar" compared to traditional charities

Expand full comment

Firstly, as I'd said on the other thread, the metric of "effective lives saved per dollar" is a lot less objective than it sounds; in addition, there are other metrics such as e.g. "quality of existing lives" (among others) that some people value just as much. Secondly, EA does not claim to be slightly better than existing charities; it claims to be uniquely expert and vastly superior -- and thus far, their performance has fallen short of such claims.

Expand full comment

It also has QOL metrics, by which it also does vastly better than the median existing charity.

I'm not sure what you mean by "its performance has fallen short of such claims". GiveWell top-rated charities are clearly orders of magnitude better than, say, the Make-A-Wish Foundation or donating to a food bank/animal shelter/library. They're not vastly better than e.g. the Gates Foundation, but the Gates Foundation is conceptually pretty similar to EA already.

Expand full comment

Fun fact: Taran Noah Smith, the actor who played the youngest son on Home Improvement, retired from acting after the show was done. He went back to school, trained as an engineer, and now he works for SpaceX.

https://www.linkedin.com/in/taran-smith-6b1aa7236/details/experience/

Expand full comment

What's the context to the image at the bottom?

Expand full comment

It's from the aforementioned Forbes article, which Scott doesn't want to link. From a quick scan there are two paragraphs with quotes from that person, which are in their entirety:

"Not everyone agrees that engineering is the answer to societal problems. 'The world is just not like that. It just isn’t,' said Fred Turner, a professor of communications at Stanford University who has studied accelerationism. 'But if you can convince people that it is, then you get a lot of the power that normally accrues to governments.'"

And, much later:

"For all their talk of accelerating, it is not clear what future Verdon and other e/acc adherents want to accelerate into. Turner, the Stanford professor, said he wasn’t sure that they themselves know: 'The truth is, they have no social vision. And they can’t have a social vision, because the solutions that they’re proposing to social change and social politics so radically simplify the complexities of social life.'"

Expand full comment

I'm sure those quotes are technically correct. Everybody oversimplifies, and accelerationists are not a unified group. (I didn't check what *he* meant by "accelerationist". To me it's just someone who expects technical progress to accelerate.)

FWIW, one should remember that things tend to progress in "S" shaped curves. First the idea is slowly taken up, then all the easy stuff is done, finally the things that are really difficult are done. The question is (almost) always "how long is the part where you're picking the low-hanging fruit?", i.e., the period where there's rapid acceleration. And the answer is almost always dependent on exactly how you partition the problem space. (And that's an oversimplified description, as, e.g., it ignores the slope. And most S shaped curves are built out of lots of smaller S shaped curves. Eventually things get discontinuous.)

Another way to put it is "acceleration is a network effect", which is an alternate way to analyze it. None of these are "true", they're "useful models", or, if you don't find them useful, just "models".
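The S-shaped progress described above can be sketched with a standard logistic curve; the function name and parameters here are my own illustrative choices, not anything from the comment:

```python
import math

# Illustrative logistic ("S") curve: slow uptake, a steep middle where
# the low-hanging fruit gets picked, then saturation as only the hard
# problems remain.
def s_curve(t, ceiling=1.0, steepness=1.0, midpoint=0.0):
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# The growth rate (slope) peaks near the midpoint and tapers on both
# sides - the "rapid acceleration" phase is the middle of the curve.
rates = [s_curve(t + 0.5) - s_curve(t - 0.5) for t in (-4, 0, 4)]
```

Stacking many such curves with different midpoints and ceilings gives the "lots of smaller S shaped curves" picture mentioned above.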

Expand full comment

The Simpsons on a credential in "communications":

https://www.youtube.com/watch?v=aDMKXFeBNVU&pp=ygUbdGhlIHNpbXBzb25zIGNvbW11bmljYXRpb25z

Expand full comment

I don't want to dunk on Comms professors too hard, but I definitely would agree that in this case the article obviously wants us to see "Stanford professor" (the second quote even omits his area of expertise) and I can't really fathom why this person's opinion has any weight since he isn't a professor of philosophy or social science or technology (although it says he "has studied accelerationism", whatever that means).

Expand full comment

I've just published a new essay at 3 Quarks Daily:

Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales

That title reads like I have doubts about the current state of affairs in the world of artificial intelligence. And I do – who doesn’t? – but explicating that analogy is tricky, so I fear I’ll have to leave our hapless captain hanging while I set some conceptual equipment in place.

First, I am going to take a quick look at how I responded to GPT-3 back in 2020. Then I talk about programs and language, who understands what, and present some of Steven Pinker's reservations about large language models (LLMs) and correlative beliefs in their prepotency. Next, I explain the whaling analogy (six paragraphs worth) followed by my observations on some of the more imaginative ideas of Geoffrey Hinton and Ilya Sutskever. I return to whaling for the conclusion: “we’re on a Nantucket sleighride.” All of us.

This is going to take a while. Perhaps you should gnaw on some hardtack, draw a mug of grog, and light a whale oil lamp to ease the strain on your eyes.

Read the rest at the link: https://3quarksdaily.com/3quarksdaily/2023/12/aye-aye-capn-investing-in-ai-is-like-buying-shares-in-a-whaling-voyage-captained-by-a-man-who-knows-all-about-ships-and-little-about-whales.html

Expand full comment

EA is a thoroughly modern idea: devoid of humility, propelled by the vast sums created by the 'innovation' of Facebook and similar projects. Engaged in the 'force for good' campaign since the baby boomers came of age in the mid-60s, it was only a matter of time before the never-ending need for status drove a few to adopt a higher calling: do good better. Build a library? C'mon! The WWII generation, for all their so-called faults, were awed by the mysteries of human life and passed on their curiosity and wonder: https://falsechoices.substack.com/p/men-and-women. Strangely, they were also able to see the forest for the trees.

Expand full comment

It's unclear what your actual complaint is, apart from some of EA being funded by icky forms of business like social media.

Expand full comment

Not really thoroughly modern. One could find evidence for it in the words reportedly by Jesus. Probably also in other sources that I'm less familiar with.

Expand full comment

As I understand it, the idea in Christianity is cultivating a soul suitable for heaven. It's ambiguous about efficacy of charity.

On the one hand, there's feed the hungry and clothe the naked. This is efficacious in a short-run sense -- there is less opportunity to kid yourself about whether you're actually helping than if you were doing something like running a jobs program. The hungry actually get food and the naked actually get clothes.

On the other hand, there's no hint of trying to become able to be more charitable by being richer yourself.

Expand full comment

As I understand it, different versions of Christianity have used different excuses for why one should be generous, or even for whether one should be generous if you count the various Calvinist sects as Christian. But also being generous to the poor was widely considered proper in lots of non-Christian cultures. In some it was a way of achieving political ascendancy, but there were also other reasons and even within any particular culture different people had different reasons for generosity.

Expand full comment

> EA is a thoroughly modern idea

Pff... as if it's something bad

Expand full comment

Recently discovered typelit.io, a typing exercise site that uses public domain novels as the typing exercises. For those of you who want to read the classics and also type faster.

Additional benefits include being able to tell people you've written a book, and then dodge furiously when they ask any follow-up questions about what exactly you've written.

Expand full comment

I have been wanting to improve my typing, and this is perfect. Thanks for the link!

Expand full comment

That's a fantastic concept for a speed typing site. Love it.

Expand full comment
author

Can someone explain this tweet to me? https://twitter.com/jd_pressman/status/1730844528113058205

I'm most interested in the first pictured essay, which suggests that you can give an AI a terminal value in such a way that it also terminally values the instrumental subgoals of that terminal value. How does that work and what would it look like?

But I'm also interested in understanding what the full tweet is getting at.

Expand full comment

> I'm most interested in the first pictured essay, which suggests that you can give an AI a terminal value in such a way that it also terminally values the instrumental subgoals of that terminal value. How does that work and what would it look like?

You just average terminal and instrumental values. From https://twitter.com/jd_pressman/status/1710811174458270066 it looks like the plan is to use the probability (log odds) of sensory inputs leading to terminal reward, as estimated by a model trained only on terminal values, as the instrumental values.
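A very rough sketch of what this description might mean; all names, the sigmoid-free log-odds form, and the 50/50 weighting are my own illustrative assumptions, not anything from the linked tweets:

```python
import math

def log_odds(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def blended_value(terminal_reward, p_leads_to_reward, weight=0.5):
    """Blend a sparse terminal reward with an instrumental value.

    The instrumental value is expressed as the log-odds that the
    current state eventually leads to terminal reward (as a separate
    model might estimate). weight=0.5 is a plain average.
    """
    instrumental = log_odds(p_leads_to_reward)
    return weight * terminal_reward + (1 - weight) * instrumental
```

Under this reading, states that merely make reward likelier (instrumental subgoals) get positive value directly, rather than only via lookahead.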

Expand full comment

I’ve been reading some exasperated commentary of the “you’ve never had it so good” variety online - here on Substack and elsewhere. The economy is doing well, inflation is beaten. What’s your problem, plebs?

Most of the rhetoric is partisan of course, allied to a fear of the return of Trump. I think there are at least two major problems with this argument:

1) That inflation is beaten now doesn’t mean that people are not poorer than before Biden was elected. This is probably not his fault, although blame for inflation also seems to be driven by partisanship. I’ll not go into that.

2) The CPI is a general tool for measuring inflation; it may not be useful for drilling down into income brackets. For instance, food is 13% of the basket, which is far too high for the top 10% and probably too low for the bottom third.

People know their paycheques and their weekly spend; shouting “learn macroeconomics” at the plebs isn’t going to work.

Expand full comment

I've had some conversations like that in my personal life, on the side of "I know your life situation in detail, it's objectively fine, why are you moping about it?" This is, of course, not particularly helpful or effective, but the exasperation is real.

In general I'm sympathetic to the argument that increases in GDP do not automatically mean an increase in wellbeing. For instance, when a mother leaves her month-old infant with a stranger to go do stuff that might as well be done by an LLM, this is an increase in GDP, but not necessarily in wellbeing or even productivity. But at the same time, we are in, if not the best economy the world has ever known, at least pretty close to it; it isn't necessarily useful or wise to whine about how there were some groups a generation ago who had it *even better.*

As far as I can tell, our actual societal problems are more rooted in social fragmentation, sadness that our dreams about equalizing racial groups have largely failed, and lack of belief or civilizational narrative and purpose, and that it's more useful to think about what we might do about those things.

Expand full comment

Neither gdp nor gdp per capita matter to most people. What matters is their own personal purchasing power after taxes, rent and so on. That, and security of employment.

Expand full comment

my understanding is that it's the other way around. People say that their own economic circumstances are good, but believe that the country's economic circumstances are bad. Most people are in fact richer than before Biden was elected, even after accounting for inflation.

Expand full comment

Timothy Burke, a tenured professor, says he feels much less financially secure than he used to, which is a different issue than wages.

People's jobs aren't as secure-- companies are (I think) more cavalier about laying people off and more likely to go under.

Expand full comment

Most people who are in the same job they were in in 2019 are worse off in real terms, because pay rises have not kept pace with inflation. For this they blame Biden.

People who have got promoted or have got better jobs may well be better off than in 2019. But few people credit the President for their promotion.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

Pay r(a)ises have in fact kept pace with inflation since 2019.

https://www.statista.com/statistics/1351276/wage-growth-vs-inflation-us/

I was geeky enough to put *all* the numbers on that page into an Excel spreadsheet (you can hover over the dots to go month by month). In total, wages are up 19.5% since January of 2020 while inflation is up 18.8%. That's only as of October, by the way; November was another month where inflation was well below wage increases.
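For what it's worth, the real (inflation-adjusted) change implied by those two totals is tiny; a quick sketch of the arithmetic, using only the figures quoted above:

```python
# Cumulative changes since January 2020, per the totals quoted above.
wage_growth = 0.195  # wages up 19.5%
inflation = 0.188    # CPI up 18.8%

# Real purchasing-power change: deflate nominal wage growth by CPI.
# Note this is a ratio, not a simple subtraction of the percentages.
real_change = (1 + wage_growth) / (1 + inflation) - 1
print(f"{real_change:.2%}")  # prints "0.59%"
```

So even taking the averages at face value, the typical real gain over nearly four years is well under one percent, which may help explain why people don't feel better off.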

Expand full comment

This appears to be total wages, so it "averages in" the effect of job changes and promotions (which people don't generally credit the president for). I do not believe wages for the same job have risen 20%.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

You may be correct about that, I suppose: it may be very close! But I don't think people are that informed, or thinking that closely. They just see "price up" and react "me sad."

https://twitter.com/whstancil/status/1730723754223800617

(citing the Financial Times, which is paywalled)

has more on the difference between what people believe is going on, and what is actually going on. (it's talking about the last year, so it's slightly different, but it gets the same result.)

If you quiz ordinary people about their actual beliefs on economic trends, and compare them to reality, they turn out to be overwhelmingly wrong. One can try to 'steelman' them and come up with a sense in which they are technically not as wrong as they seem. But that seems misleading, at least to me. YMMV, of course.

Expand full comment

My wages are not up 20%, and neither are a lot of people I could individually name. I do have more opportunities to make a similar wage with less work, or a higher wage with similar work, so I could leave a job that I like to try to make more money or do less work.

There's pros and cons to that, but that's definitely not the same as saying wage increases are ahead of inflation.

The idea that you can beat inflation only if you cause yourself additional stress by changing jobs isn't exactly a panacea.

Expand full comment

Inflation creates winners and losers. The losers are mad more than the winners are happy. This is entirely textbook, and a very large part of why inflation is politically unpopular anywhere and anywhen. Averages hide this effect.

Also not included in those figures are interest rates, which are highly pertinent to e.g. people's ability to purchase housing or other big ticket items. High interest rates suppress people's purchasing power.

Expand full comment

a lot of "the economy is terrible, inflation is out of control" is also partisan rhetoric; people may know their paychecks, but given their political bent, they are probably focusing on what is bad/good in their specific consumption basket rather than their whole consumption picture (e.g., don't hear much about gas prices, even though they're quite low right now)

Expand full comment

“you‘ve never had it do good“ - I think you meant "so" instead of "do" but it really threw me off for a minute trying to figure out what a do good is.

Expand full comment

Well, I meant so, autocorrect didn’t, or didn’t correct me. I know how to spell both.

Definitely should check these things myself of course, but it’s annoying how bad iOS is at this - not taking context into account.

Anyway fixed now.

Expand full comment

Decided to quit my work & do a startup. If you're into neuroscience, scrapy research with practical results, and reversing neurodegeneration, you should hit me up

Expand full comment

I think George must have started his hiatus (or was it a lacuna he was planning ? ) in Asia

Expand full comment
founding

As someone who has invested in startup companies before I would say this kind of vague hinting at something is pretty off-putting. I don't want to "hit you up", I want to read about what your idea is so I can do research about it without having you try to hard sell me in the inbox

Expand full comment

Scrapie? The degenerative brain disease of sheep?

Expand full comment

I don't know anything about anything useful, but I do have a genome viewer that you can use for free, so that's... something ?

Expand full comment

Sorry to sound pedantic, but is this research using the python scrapy module or a typo for "scrappy" as in perfunctory or not too intensive and time-consuming (which sounds like my idea of the best kind of research! :-)

Expand full comment

Say what you like about Reddit, when it tells you a post has been replied to, it does link you to that post in its original context. Substack's failure to follow suit means that I give up on conversations because I can't find them.

Expand full comment

If you click the link in the email notification it will take you to the comment you made and the reply, and let you respond (it won't be easy, though, to get to the parent comment in the conversation). It will work on mobile too, as it doesn't redirect to the app.

Expand full comment

I don't get email notifications, just in app. Perhaps I can change that...

Expand full comment

That has long annoyed me to no end. It makes the reply notifications on Substack essentially useless, since it is impossible to see what they were replying to. And no LHHI, "ctrl+f" is not an option, since Substack a) takes forever to load and b) doesn't load all the comments anyway, so ctrl+f rarely works. Plus it's a pain to do even when it does work, and again **this would be completely unnecessary in the first place with any halfway competent software**.

Expand full comment

Why not simply ctrl-f your username in the browser to see all of your comments in the entire page highlighted and reachable via the next-previous pair of buttons ?

For all intents and purposes, I assume that Substack is like the websites made by governments or banks, organizations so dysfunctional and clueless about software that they might as well be crewed by a bunch of CS freshmen. Judged from this perspective, everything is a win and you have no right to expect anything.

I **Do Not** know what they're doing with all the cycles. I have never seen a modern computer choke and get loud fans simply from loading text. Even Teams and Discord have more to show for it than a bunch of hierarchical text blocks.

Expand full comment

If only it were that easy. I am using the app on a tablet, and a "find in page" function is another thing which is missing.

Expand full comment

Few people have a comment section as active as Scott's, so I doubt Substack has put much effort into actually getting it to work well. That said, I remember the founder of Substack talking about how they want to compete with Reddit, so it really should be something they look at seriously.

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

I assumed that Substack wanted to be as usable as possible with mobile phones, and thus was as usenet-like and simplistic as they could make it. Perhaps by 2030 we'll be able to post images and sound files!

Expand full comment

Well, for all its faults, reddit is made for discussions. (As is whatever programs run LW and themotte.)

Substack is made for displaying articles by substack authors, with some half-assed discussion feature thrown in, but it is clearly not a priority.

Expand full comment

Yeah, substack's bad notification linking is unbelievably annoying (so is the lack of footnotes on their mobile app). They need to go back to making sure their existing things work properly over trying to ship new features.

Expand full comment

+1 Every time I come here, I'm amazed by how bad Substack is.

Of course, it's not like their complete inability to support comments is the only problem. There's also the incredibly annoying popups every time you go to a new substack.

Expand full comment

If I blogged in English regularly, this'd be my own links post for November:

1. Some things end: on 24 August 394, the last known (native) writing in Egyptian hieroglyphics was carved into a temple wall in Southern Egypt.

https://en.wikipedia.org/wiki/Graffito_of_Esmet-Akhom

2. MIT has an Integration Bee happening every year, where participants compete in solving difficult integrals.

https://math.mit.edu/~yyao1/integrationbee.html

3. "Still laughing about the time a computer scientist [...] tried to explain binary search to a cop".

https://twitter.com/AlecStapp/status/1728953538301345889

4. There's a theory that in "it's easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God" the word "camel" should actually be translated "rope". This theory is surprisingly ancient and interestingly wrong.

https://kiwihellenist.blogspot.com/2023/11/camel.html

5. Some guy wrote 4789 reviews of books he's read since 2014. Many reviews are admittedly perfunctory but some are interesting.

https://the-pequod.com/

Reminded me of the ancient https://dannyreviews.com/ (mostly SF reviews) and I'm happy to see he's still going. I think I remember seeing those in the late '90s; his FAQ says since '92.

6. Latin Forms of Address: from Plautus to Apuleius. Ridiculously complete and full of amusing observations.

https://archive.org/details/latinformsofaddr0000dick/page/n1/mode/2up

(also in the usual pirate libraries)

7. merritt.edu (Merritt College, a community college in Oakland, CA) has a cadaver dissection course open to the general public. I would strongly consider it if I were in the vicinity.

https://alok.github.io/2022/11/09/dissection/

8. Non-bullshit games: curated list of mobile games that are the opposite of crap.

https://nobsgames.stavros.io/

related: HN discussion https://news.ycombinator.com/item?id=38429080

related: How to play (and win) five now-defunct Flash games

https://lettersfromtrekronor.substack.com/p/how-to-play-and-win-five-now-defunct

9. I've seen "enshittification" a lot lately. It's more recent than I'd imagined, coined in Nov'22 by Cory Doctorow.

https://en.wikipedia.org/wiki/Enshittification

10. Ancient Hebrew Morphology (a book chapter). Interesting throughout. I found this while looking for why, in Hebrew, verbs in future 2nd person "you-male-singular will X" and future 3rd-feminine "she-female-singular will X" are exactly the same. Turns out this is common to all Semitic languages, goes back to (reconstructed) proto-Semitic, and nobody knows why.

Ancient Hebrew is apparently (almost?) unique in having two equally valid, non-gendered forms of the pronoun "I". In modern Hebrew one of them is archaic.

https://bildnercenter.rutgers.edu/docman/rendsburg/121-ancient-hebrew-morphology/file

11. "On the Sublime" is a Greek work of literary criticism written around the 1st century AD, but not noticed or quoted by anyone until the 10th century (oldest manuscript) or the 16th century (when it was really noticed, published, and hugely influential). How sure are they it's not a later forgery? (I didn't find anyone raising doubts.)

https://en.wikipedia.org/wiki/On_the_Sublime

12. Poppy outlines: interesting new and apparently serendipitous illusion.

https://old.reddit.com/r/mildlyinteresting/comments/9vqa6n/i_drew_poppy_outlines_for_my_class_to_cut_out/

13. CRT nerds discuss how games used to look on CRTs, and whether modern emulation shaders succeed in capturing 90% or 95% of the nostalgia.

https://news.ycombinator.com/item?id=37808475

14. Metascience Since 2012: A Personal History. How $60M were allocated to improving science and what came out of it, or didn't.

Related: Michael Nielsen, Brief remarks on some of my creative interests

https://michaelnotebook.com/ti/index.html

(Nielsen's work inspired much of the progress/funding on "metascience").

15. How could early UNIX OS comprise so few lines of code?

https://retrocomputing.stackexchange.com/questions/26083/how-could-early-unix-os-comprise-so-few-lines-of-code

Related: HN discussion https://news.ycombinator.com/item?id=37462806

16. Things You're Allowed To Do. Some genuinely interesting advice.

https://milan.cvitkovic.net/writing/things_youre_allowed_to_do/

17. "My 20 Year Career is Technical Debt or Deprecated". The author clearly made some unlucky choices, and maybe it doesn't really matter. But the churn *is* real.

https://blog.visionarycto.com/p/my-20-year-career-is-technical-debt

And HN discussion: https://news.ycombinator.com/item?id=35955336

18. Someone is writing, and selling, an independent 64-bit debugger under Windows with advanced hacker-friendly UI features. Looks very good for a one-person operation.

https://remedybg.itch.io/remedybg

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

>Ancient Hebrew is apparently (almost?) unique in having two equally valid, non-gendered forms of the pronoun "I". In modern Hebrew one of them is archaic.

Note, though, that the text you cite states:

>From a diachronic perspective, of the two forms, ªanôkî is considered by most scholars to be the older; eventually it was replaced by ªånî. Indeed, in the later biblical books and in the DSS, ªånî predominates, and it is the only form attested in MH.

So even within Biblical Hebrew, one of the terms was archaic.

Expand full comment

Thanks for correcting. I was thinking mostly of the Pentateuch where the two words are about co-equal, but properly speaking "Biblical Hebrew" is wider and includes a period when _anochi_ becomes less and less used. BTW, this chapter about Proto-Semitic

https://www.routledgehandbooks.com/pdf/doi/10.4324/9780429025563-3

finds both forms in it, with some Semitic languages inheriting one and others the other, and only Hebrew and Ugaritic having both.

Expand full comment

"destroying billions in value"

Market capitalization isn't value. (I think there was some post by Matt Yglesias or Noah Smith about this.)

(but good point re: effects on social trust)

Expand full comment

FTX did destroy a lot of real value though. A lot of smart young people wasted their time working for a giant fraud instead of doing anything useful with their time and skills. A lot of stolen money went into building fancy buildings in the Bahamas that noone actually needed. A lot of money went to marketing the fraud. And so on.

Expand full comment

Market capitalisation absolutely is a store of (expected) value. Amazon's market cap represents what people expect Amazon's work is worth. If it suddenly disappears, you very much have lost value. To give the absurd example - let's say every company's market cap in a country goes to zero - you can be sure that you would very much have lost value!

Expand full comment
Dec 5, 2023·edited Dec 5, 2023

If an investment goes to zero, *you* have lost value, but has *the economy?* Like, that's what matters for calculations along the lines of "$X billion in losses is equivalent to killing Y people" - the implication is that there was $X billion of productive potential in the economy that could have, say, been used to make bed nets or research a cure for malaria, and now that productive power is gone or wasted.

If a factory burns to the ground, then $X of productive value is permanently gone from the economy - whatever it could have produced is no longer being produced. But if an exchange closes, nothing has physically been destroyed - the productive potential is still out there and could still in theory be directed towards bed nets (e.g., if the fraudster donated their ill-gotten gains to charity, or the courts are able to retrieve some of the money.)

Expand full comment

A factory represents value, but so does a stock market! It's just at a more abstract level that makes it harder for people to see, but it's very real value nonetheless. The stock market is what enables capital to be aggregated to make new factories, or services, or whatever have you.

Expand full comment

FTX did destroy a lot of real value though. A lot of smart young people wasted their time working for a giant fraud instead of doing anything useful with their time and skills. A lot of stolen money went into building fancy buildings in the Bahamas that noone actually needed. A lot of money went to marketing the fraud. And so on.


I meant it doesn't equal value, not that none of it is value.

The post I meant: https://www.noahpinion.blog/p/where-does-the-wealth-go-when-asset (Ctrl-F for "Well, in fact, it is a little bit fake. Not entirely, but a little bit.")


That post doesn't support your post. In fact it explicitly says that market cap does represent value 'And sometimes people will assure you that drops in asset markets don’t mean anything because no actual wealth was destroyed. But paper wealth is as “real” as wealth gets'


> (but good point re: effects on social trust)

I think that damage also can be overstated. FTX is hardly the first "crypto" company to lose people's money through financial irregularities. (In fact, I would argue that the second word in the phrase "crypto scam" is largely redundant.) And I have heard few people reasoning "Well, SBF is very active in EA, so he must be a good guy, so I should invest at FTX".

That is not to say that I condone SBF's behavior, though. Taking money from people who fall for "get rich quick" schemes is wrong.


Yeah I think it only makes sense if you assume those dollars were going to life-saving things, which I don't think they were.


The stolen customer funds (the direct liability shortfall) came to something like 4 billion dollars, so it's still billions plural in destroyed value.

author
Dec 4, 2023·edited Dec 4, 2023

What do people think about the brewing potential Venezuela-Guyana war?

I would have thought that given the US tendency to defend friendlyish countries (see Kuwait, Ukraine) and the US wish for a regime change in Venezuela, there would be such high risk of US involvement that it would be suicide for Venezuela.

Also it seems like they recently discovered lots of oil in Guyana, which means oil companies will want to defend their investment (I don't know what levers they have, but I'm sure they have some - US lobbying?) and Guyana can probably take out loans to buy good weapons.

founding

"Guyana can probably take out loans to buy good weapons"

Guyana can also take out loans to train new soldiers, but the war will be over by the time they're ready. And the soldiers Guyana has now are basically irrelevant in a serious war even if we do give them all the Javelins and Stingers we haven't sent to Ukraine. Guyana's "army" is basically just a police force in green uniforms, and not a very large one. Venezuela vs Guyana, Venezuela wins, easily.

Brazil has indicated that they will defend Guyana, and Brazil has a large and reasonably capable army. But their logistics on the border with Guyana absolutely suck, whereas Venezuela has I believe decent roads in the area. And Venezuela's Air Force trumps Brazil's, at least if their planes still work. So a war with Venezuelan air superiority and superior logistics is still going to be a win for Venezuela, but it won't be a quick or easy one.

If the United States gets involved, a Carrier Battle Group and a few squadrons of heavy bombers will probably suffice to take down the Venezuelan Air Force and bollix their logistics so that Brazil can win the land war.

But it isn't at all certain that the US will get involved. We generally haven't intervened in wars between Latin American nations, and we don't have a compelling strategic interest in Guyana. And, as your examples of Kuwait and Ukraine show, when the US intervenes in conflicts like this we generally *don't* go for regime change. Regime change requires messy urban warfare in the enemy's capital city, with lots of civilian casualties showing up on CNN. At most, we'll (help our friends) drive the enemy back to prewar borders, launch a few cruise missiles at carefully-selected targets, and say "Obviously, the enemy regime has now lost so much face with its own people that it will soon fall, Yay Us, we win!"

Which generally doesn't actually happen. So, from Venezuela's POV, either they gain a possibly oil-rich new province, or they lose their Air Force and whatever fraction of their Army they committed to the fight, but they gain "It's all the Yankees' fault, we are all united against the Damn Yankees" as a trump card in basically any domestic policy crisis, and either way the regime is reasonably secure. They may think that's a reasonable gamble.


Venezuela is clearly going for the hybrid warfare approach but from a more left wing perspective. The propaganda line is that they're defending the indigenous people of Esequibo from settler-colonial violence and that the original border was imposed by white European imperialists and therefore invalid.

The former is made up. There's a history of oppression of indigenous groups in Guyana, as there is in Venezuela, but Guyana's government is left wing and very solicitous of the indigenous communities. The latter is kind of true (it was imposed by the British) but in line with the usual revanchism. But it's clearly meant to create enough cover/confusion among left leaning actors in places like the United States that it will slow down any intervention. They've already sent groups of 'protestors' over the border with the Venezuelan flag and released propaganda photos of smiling indigenous people voting for annexation. At some point I suspect these groups will become armed and ask for Venezuelan assistance. Or maybe they'll send in troops to 'protect' the peaceful protestors.

In a straight confrontation Guyana has no chance. Its military is just much smaller, and Venezuela and Cuba, while not powerhouses, are capable of beating Guyana. It's just too small, population- and military-wise, even with major international support. They might have trouble even with a major insurgency of little green men. And both economies are already so sanctioned that new sanctions won't do much.

If the US wants to intervene it will either need to find a local partner, which Brazil or Colombia will be reluctant about, or send in its own troops. The war would be short and a complete defeat for Venezuela and/or Cuba. They're both coastal nations close enough to the US that we can strike them from bases in the US. So a visit from the US navy and some marines would beat them quickly. But is the US willing to put boots on the ground for Guyana? What if it's an 'insurgency' that just happens to look like Venezuelans with their patches torn off? What if there's American left wingers saying that this is yet another example of American imperialism oppressing native people for oil? Etc etc.

Notably, Guyana is one of the few South American countries that is not part of the Rio Pact, so the US and most of South America have no formal defense commitment to it. So there's a clear escalation ladder here where Venezuela can back down if it looks like they're about to trigger an intervention, and no clearly delineated red line.


As someone who has lived in Guyana and Venezuela, yes, Guyana would lose quickly without intervention. Guyana's capital is like a Venezuelan town. It also has few settlements in the western half of the country (or roads, but I guess that affects them both equally.)

On the other hand, the Guyanese military beat me handily at paintball.


Brazil has signalled that it will defend Guyana, and has begun moving troops into the region. It seems unlikely that VZ wants to risk any open conflict with Brazil, so that may be enough to deter Maduro. However, Brazil's intent and commitment is a bit murky, maybe they just hope to extract some concession from Maduro in return for letting VZ troops into Esequibo.

There's this interesting take on whether the US will get involved based on the sheer number of Guyanese living in the US:

https://twitter.com/lymanstoneky/status/1729979585700286613

which I didn't find convincing.

As an aside, I lived in Guyana until I was 12; my brother later lived in Venezuela for many years (I often visited; this was pre- and post-Chavez, but pre-Maduro).


I only hope that Venezuela doesn’t invade the other Guiana which is part of the EU.

That could escalate horribly. Maybe even to the EU sending a strong letter of rebuke.


It's unlikely the invasion will go ahead. (I'd give it something like 80/20 against, with the 20 a hedge against unknown internal factors and something spinning out of control just because the matter has been made hot.) There is no global or even regional diplomatic support for it. Even Russia has not endorsed the Venezuelan claim on Essequibo. And the Americans have been waving some rather bright warning flags, like sending teams of military advisors and lower-rung DoD officials to Guyana. As against that, the US and Venezuela have recently made more progress on easing sanctions than they have in quite a while, something an invasion would instantly reverse.

I agree with the camp that says it's mostly domestic politics. (The fact that the Maduro government bothered holding a public referendum with a lengthy campaign on an invasion at all is itself telling.) American sanction-lifting had been made conditional on the reinstatement of the Unitary Platform (main opposition) candidate Maria Corina Machado to compete in the 2024 presidential election, which was achieved on November 30th. Because Machado is an American/Exxon Mobil running-dog lackey, she has been forced to say unpopular things, like "invading Essequibo bad, don't hold the referendum", which works conveniently against her. And there isn't very much else for the government to rally the people around.

That said, it doesn't have to be a full invasion to become a conflict. The threat of ambiguous cross-border harassment against extraction projects is non-trivial, and gives the Maduro government leverage against Exxon, among other things.


Not sure I see any reason why the US would get involved in this. True, they're not big fans of Maduro. But Venezuela isn't considered a rival on par with China or Russia, nor is Guyana important to American foreign policy interests the way Europe or East Asia is. Also, as others mentioned, this isn't like Ukraine or Taiwan where there's a strong military that the US can just send or sell arms to. Guyana appears to have a token military force that is utterly dwarfed by Venezuela's.


An analog that could be more apt is Operation Allied Force (the campaign of airstrikes against Serbia to protect Kosovo).


An important difference there though is that Allied Force was a NATO-led (not US) operation occurring in NATO's backyard. What comparable international organization would take the lead on military intervention in S. America? The UN? OAS? I seriously doubt that.

What we're left with then is the US intervening on its own. Given memories of the invasion of Panama and how the US interfered in S. American politics for most of the 20th century, I don't see the US having enough goodwill in the region to want to attempt it.


I'm admittedly going much more on feels here than I'm totally comfortable with, but an outright invasion of one American country by another lacks a modern precedent (at least that I'm familiar with).

I see the US as having zero tolerance for this kind of thing in *its* own backyard, and thus not caring about the lack of an international organization to be its mask.


>I would have thought that given the US tendency to defend friendlyish countries (see Kuwait, Ukraine) and the US wish for a regime change in Venezuela, there would be such high risk of US involvement that it would be suicide for Venezuela.

That, and much more. Venezuela is (very probably) much weaker than Iraq was during Desert Storm, and the jungle of Guyana is much less amenable to an invasion than Kuwait.

The only saving grace for Venezuela is that the jungle that makes their invasion difficult also, probably, shields them from US air power (up to a point). But I'd still bet on a regime change in the end.


Guyana's military, going by Wikipedia, has under 5000 people total (Guyana's total population is under a million). So this wouldn't be like Ukraine - if the US wanted to intervene it'd have to actually send in American troops. Venezuela might be assuming that after Iraq and Afghanistan the US wouldn't want to do that anymore.


I’ve thought of that as well. Maybe they think the situation in Ukraine shows the US won't get actively involved. If the US won’t, who will?


Direct US action in Ukraine risks escalation with Russia.

Direct US action in the Levant risks escalation with everybody nearby.

Direct US action in Guyana doesn't risk escalation with anybody.


Other South American countries. It’s not actually a poor continent anymore, ranging from middle to very high HDI.


About SBF - but were those billions stolen/lost solely because of EA? How many EA leaders/orgs were urging people to give him money? How much did that support actually help SBF? In a world without EA, would SBF have stolen less money? Less by how much, exactly? Asking because I have no idea myself; I barely followed it as it was unfolding.

founding

In a world without EA, SBF would probably still be working for Jane Street, possibly making them billions and keeping a good chunk of that for himself, but under adult supervision. Possibly he'd have gone independent and founded a boringly profitable hedge fund or something. The issue isn't that EA convinced Effective Altruists to invest in FTX, the issue is that EA or EA-adjacent influencers convinced SBF to create FTX in the first place.


Part of it seems to be that a lot of EA/EA-adjacent people went to work for SBF in the early days, on much the same principles as 80,000 Hours - we can earn/make a ton of money and give it all to good causes - and because they were doing it "for the good" it became easier for them to talk themselves into/justify the jiggery-pokery. Sam was a good guy, and a smart guy, and if he was doing this then he had a master plan to make it all okay in the end, so just go along with what he asks.

SBF's own motivations were murky, but again part of it was some wish to do good or be seen to be doing good, by the metrics of the circles he and his family moved in, and a ton of *that* was EA-aligned. I have no doubt he wanted to make billions himself, but some at least of the motivation seems to have been that the influences around him were "doing the most good the most effectively" EA philosophy/rhetoric, and he couldn't resist trying to become a hero of that sort.

His family were also taking gobs of money out of it, and his mother at least was doing it for what she considered 'good causes': political organising of the "stop Trump/the Republicans by any means necessary" variety. His brother, too, was directly involved in EA charities.

Once again, that Sequoia Capital article on the story SBF was telling about how it all got going:

https://web.archive.org/web/20221027180943/https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/

"Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.

...His parents raised him and his siblings utilitarian—in the same way one might be brought up Unitarian—amid dinner-table debates about the greatest good for the greatest number.

...SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.

...After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism.

...Fortunately, SBF had a secret weapon: the EA community. There’s a loose worldwide network of like-minded people who do each other favors and sleep on each other’s couches simply because they all belong to the same tribe. Perhaps the most important of them was a Japanese grad student, who volunteered to do the legwork in Japan. As a Japanese citizen, he was able to open an account with the one (obscure, rural) Japanese bank that was willing, for a fee, to process the transactions that SBF—newly incorporated as Alameda Research—wanted to make. The spread between Bitcoin in Japan and Bitcoin in the U.S. was “only” 10 percent—but it was a trade Alameda found it could make every day. With SBF’s initial $50,000 compounding at 10 percent each day, the next step was to increase the amount of capital. At the time, the total daily volume of crypto trading was on the order of a billion dollars. Figuring he wanted to capture 5 percent of that, SBF went looking for a $50 million loan. Again, he reached out to the EA community. Jaan Tallinn, the cofounder of Skype, put up a good chunk of that initial $50 million.

...With a goosed-up capital account, the money started piling up so fast that SBF placed what he refers to as “a market order for employees” to tend to the Rube Goldberg operation that kept the capital spinning. There were constant blowups with banks, which are wary of anything crypto. Crypto was so new that regulators in South Korea and elsewhere were constantly changing their mind about regulations—then making those changes retroactive. It was a swirling mess. Pulled into the vortex was Nishad Singh, a friend of SBF’s brother Gabe, and a fellow EA member. Singh is a bespectacled and baby-faced young man with an earnest mien. He often wears a T-shirt with the words “compassionate to the core” printed, in a diminutive all-lowercase font, over his heart. After just one conversation with SBF, Singh decided to leave Facebook to take on the more meaningful work of building FTX. Caroline Ellison came, too, quitting Jane Street and moving to California only weeks after SBF described the operation to her over tea. The first 15 people SBF hired, all from the EA pool, were packed together in a shabby, 600-square-foot walk-up, working around the clock. The kitchen was given over to stand-up desks, the closet was reserved for sleeping, and the entire space overrun with half-eaten take-out containers. It was a royal mess. But it was also the good old days, when Alameda was just kids on a high-stakes, big-money, earn-to-give commando operation. Fifty percent of Alameda’s profits were going to EA-approved charities.

“This thing couldn’t have taken off without EA,” reminisces Singh, running his hand through a shock of thick black hair. He removes his glasses to think. They’re broken: A chopstick has been Scotch taped to one of the frame’s sides, serving as a makeshift temple. “All the employees, all the funding—everything was EA to start with.”


Thank you for the reply! But the systemic feature of EA that enabled SBF was... having a community? Having driven people? Having people with skills and connections? I don't see how to blame EA for that.


I'm not sure this is fair, but maybe it's thinking of themselves as rationalists while failing to realize that being smart and having some rationalist skills isn't the same thing as knowing what you need to check on, especially when you're dealing with people.

Lying for Money has it that trust is essential for making transactions efficient, but that trust offers many opportunities for fraud.


Is there even a single person in the rationalist community who is not familiar with this cliché? They surely knew that they might be missing some field-specific skills, but what can you even do about that other than try your best?


Checklists might help. For example, I talked with a rationalist who lost 20K on a handshake deal somehow related to FTX.

A reminder to get agreements above some value in writing might be in order. More generally, an anti-fraud checklist could be worth developing.


I doubt it - crypto scammers were always going to scam people out of billions of dollars regardless of EA. It's possible the EA link gave him some legitimacy (or motivation, by allowing him to launder money into social status), but in general laundering money to social status isn't a bottleneck for most rich people.

On the other hand, it's possible EA made things harder for him, since apparently a bunch of EA-linked people left Alameda early on because it got too shady? Not sure.

I think the main negative effect of EA as regards SBF was that it ended up directing his money to a bunch of important, useful charities that suddenly had their money yanked. In an alternate world where those charities never made plans for that money (and there wasn't a charity ecosystem built up to try to appeal to FTX), charities would probably do better, and I think that connection being forged can reasonably be blamed on EA. (That's a much smaller cost than blaming the entire FTX scam on EA, but it's not zero.)

author

I think this is the standard hard-to-interpret problem where events have many causes. I think it's true that, if EA didn't exist, *probably* FTX would not have existed either. Obviously there were many other things necessary to make FTX exist. But in that post, I took credit for lots of good things that EA caused (even when they had other causes too) so I think it's fair to take blame for the bad things.


Not all causes matter for fault analysis! If I stepped in traffic without looking, was almost hit by a car, and survived only because I was slowed down by a loose rock five seconds earlier, I don't owe a life debt to whoever kicked it loose, because kicking rocks does not save lives in expectation. EA wants to do good, and visibly tries to do good very hard, and it would be madness to argue that any good that results is a coincidence and shouldn't be attributed to it. But if some EA members unwittingly help a scammer (of previously unheard-of magnitude and style, AFAIK), especially a scammer that everyone else also missed, then EA is only at fault if it had systemic features, *unnecessary for its mission*, that enabled it. And even then EA should only accept its share of blame among all the other systemic causes of fraud, of which there are many.


One of the university maths departments that I'm involved with is looking at setting up an applied maths consulting business. They are big on stochastic differential equations and Bayesian probability theory.

I thought that maybe some sort of service consulting on improving learning rates in deep learning models might work for them, but I worry that it might already be handled well enough by existing libraries that there's nothing there to consult and research on.
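(On the "already handled by libraries" worry: it's real, at least for the basics. The standard learning-rate schedules are a few lines of arithmetic and ship built in with every major framework. A minimal, framework-free sketch of the usual warmup-plus-cosine-decay schedule, with all constants purely illustrative:)

```python
import math

def lr_at_step(step, total_steps, base_lr=1e-3, warmup_steps=100, min_lr=1e-5):
    """Linear warmup followed by cosine decay -- the kind of schedule
    most deep-learning libraries already provide out of the box."""
    if step < warmup_steps:
        # Ramp linearly from ~0 up to base_lr over the warmup period.
        return base_lr * (step + 1) / warmup_steps
    # After warmup, decay from base_lr down to min_lr along a half cosine.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# The schedule ramps up during warmup, peaks at base_lr, then decays to min_lr:
print(lr_at_step(0, 1000))     # small warmup value
print(lr_at_step(100, 1000))   # peak, equal to base_lr
print(lr_at_step(1000, 1000))  # floor, equal to min_lr
```

Which suggests the consulting value, if any, is less in implementing such schedules and more in diagnosing *why* a client's training is misbehaving.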

They do have a strong connection to financial mathematical modelling. But I'm not familiar enough with that universe to know what sort of problems are amenable to a consulting engagement.

Any thoughts?


I have run a consulting service for our CS department for a few years and know the consulting service from the math department well. They have focused on statistics, and for that there definitely is a market. They preferentially accepted academic customers who wanted advice on how to set up a study, but they also got a fair number of requests from industry. If your department has enough expertise in statistics, this sales pitch can work. Perhaps something similar could work for applied Bayesian theory, because there are also lots of research papers which try to prove that some (typically biological) effect is "Bayesian". So if you can tell them how to tell a Bayesian from a non-Bayesian effect, they would probably want that. But my feeling would be that the market is smaller, and that it is non-trivial to find potential customers. If you want to go in the direction of academic customers, you could get started through academic word of mouth, combined with press releases etc.

I don't know the market for financial mathematical modelling to say something informed there. There are tons of companies of all sizes, so perhaps?

From my own experience, something "being handled well enough in libraries" doesn't remove the value of consulting. I could solve half of my cases by understanding their problem and pointing them to the right piece of software or the right package (my service was consulting on algorithms). I didn't do it for money, but I definitely provided value, so I think that I could have extracted money.


The damn "sequences". They keep getting referenced and linked here, by Scott and by commenters, over and over and I just don't get it. Am I the only one who thinks they're terribly reasoned and terribly written? I'll admit I haven't read all or even probably much of it; I can barely stand to.

Nearly every time one of the posts is linked, I find the title fascinating and am very excited to read it (and I see under "related posts" more fascinating titles that I want to read). Often, Eliezer seems about to address a major philosophical problem for his brand of extreme, simplistic empiricism (the problem of induction, external-world scepticism), and I think to myself: either he'll give a clever answer to it (in which case it will be interesting to discover), he'll give a bad answer (in which case it will be interesting to explain in my mind exactly why it doesn't work), or he'll ultimately dodge the question (in which case it will be fun to watch him dance around and never give an answer). And then I read it and...it doesn't do any of those, and it's so unsatisfying. He just kind of rambles, and goes on tangents, and by the end I'm thinking "did he give an answer or not?" I wait for a summary at the end, after all the unclear rambling, of his rough position, and there isn't one.

And I wonder why so many here, fans of clarity and of Scott's excellent writing, like those posts at all. I wonder if I've somehow missed the "good" ones. And I wonder if Eliezer's persistent (from what I've seen) refusal to clearly answer (or summarise his answer to) these philosophical questions, and his rambling style, is him simply being a bad writer, or is a deliberate attempt to trick unattentive people into thinking he's answered the question when he hasn't.

It doesn't help that everything about Less Wrong makes it clear it is (or was) a literal cult. Comments aren't shown if they're downvoted enough, but apparently it's assured this won't be used to suppress unpopular opinions? Anyone who believes that should know about my bargain bridge sale.

Am I missing a reason for this stuff's popularity around here?


>Am I the only one who thinks they're terribly reasoned and terribly written?

Not at all. This is a very common opinion. I can't say that it's completely wrong (the sequences are huge, and I have definitely read parts of them that left me scratching my head), but that definitely wasn't my experience.

The main reason I think people come to this conclusion is that the later parts of the sequences aren't nearly as good as the earlier parts.

I started reading Less Wrong in 2010ish, and the sequences blew my mind. That can be written off as a byproduct of me being in high school at the time, and it taking very little to blow my mind.

But then I went to college, and majored in philosophy, and there was almost no overlap. It wasn't that my professors assumed everyone already knew all the stuff in the sequences. It just wasn't an expectation that we should know even very basic concepts, like Words as Hidden Inferences, or Disguised Queries, or the Cluster-Structure of Thingspace.

The success of the (good parts of the) sequences isn't, for the most part, original thought. It's that they brought together insights from cognitive psychology, linguistics, and philosophy, and repackaged them to be useful for people who were otherwise unfamiliar with those fields.

As Scott points out, some of those concepts are now 'in the water supply', and so seem superfluous, but a lot of them aren't. I'm currently in an extremely competitive professional program, and today I was explaining one of the concepts from 37 Ways Words Can Be Wrong to my classmates.

I have a cumulative 2 decades in formal education, and I still haven't covered even a third of the most valuable, everyone-should-be-taught-this concepts from the sequences.

All of the good stuff from the sequences seems obvious if you really understand it, because most true things seem obvious if you truly understand them.

If you really understand the way triangles and circles relate to one another, trigonometry seems obvious. And yet, every year hundreds of thousands of students get Cs in trig.

Inferential distance is obvious. But every day people freak out when something takes longer than ten minutes to explain.

The difference between an intensional and extensional definition is obvious, but every day people slide right past it.

Bayes' Theorem is obvious, but eighty percent of *doctors* can't recognize a situation that requires it when it is presented to them.
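(The doctor result being referenced is the classic base-rate problem from studies by Eddy and Gigerenzer. A quick sketch of the arithmetic, with the usual illustrative numbers: 1% prevalence, 80% test sensitivity, 9.6% false-positive rate.)

```python
# Illustrative numbers for the classic mammography base-rate problem.
prevalence = 0.01          # P(disease)
sensitivity = 0.80         # P(positive | disease)
false_positive_rate = 0.096  # P(positive | no disease)

# Total probability of a positive test, summed over both hypotheses.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: P(disease | positive test).
posterior = sensitivity * prevalence / p_positive

print(round(posterior, 3))  # prints 0.078
```

Despite the 80% sensitivity, a positive test means only about an 8% chance of disease, which is exactly the step most respondents in those studies miss.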


I read the Sequences completely and was quite unimpressed as well.

But I read them after I had followed SSC/ACX for some years and had read HPMoR. I could imagine that I had already absorbed all the good ideas, and that back when the Sequences were written, they were presenting something that was new to many readers. I also think that some parts are just bad and were bad back then, but for some I was probably just spoiled by the time I read them.

Dec 4, 2023·edited Dec 5, 2023

Ha! That's really funny because I had the opposite experience: HPMoR was just okay to me rather than life-changing, because to me it was just recapitulating the sequences.

To-Meh-to, to-maa-to


You're not the only one.

I've read "the Sequences" and found that "where it was original, it was not good, and where it was good, it was not original" -- EY repackaged old and well-known (among those who are not allergic to cracking a book) philosophical ideas, with a generous helping of florid intellectual exhibitionism (e.g. why does the "evaporative cooling" piece reference Bose-Einstein condensates, rather than... water?) and questionable assumptions. His primary motivator appears to be "to look smart" -- and to gain followers -- with whatever point he is trying to make invariably ending up a distant second. EY is a fellow who has not produced any scientific or engineering work of any note whatsoever, yet nonetheless has the boundless, aggressive egotism of a von Neumann. This, unfortunately, works -- "an ounce of image is worth a pound of performance" -- he built a fanatical cult which puts L. Ron Hubbard's to shame, lives like an oligarch, with harem, etc, laughs all the way to the bank.


I think Eliezer was good at introducing ideas slowly and plausibly to get around resistance, but that makes his writing totally boring if you're already familiar with the idea.


"Am I the only one who thinks they're terribly reasoned and terribly written? I'll admit I haven't read all or even probably much of it; I can barely stand to.

Am I missing a reason for this stuff's popularity around here?"

(1) Not the only one, I also feel this way. Probably because I'm not smart enough to understand them 😁

(2) They're foundational documents, the holy texts. People read them as they were being written, way back when they were starting out on becoming rationalists, and that enthusiasm and nostalgia explains a lot. They were also a handy way of directing new enquirers to 'the answers' - "read the Sequences, they explain what we're all about". I don't imagine people re-read them often; it's more that they've taken on board the principles and are now doing commentary on all that has developed since then - the Talmud to the Torah, if I may be so presumptuous.

Expand full comment

I think Eliezer does a decently good job when he's just popularizing well-known philosophical/logical concepts; he certainly explains them in a much more clear and engaging fashion than philosophy textbooks. I think his explanation of Bayes' Rule is pretty good, as well. But at the same time, he goes off the deep end almost immediately whenever he starts inventing his own content, like suggesting that Bayes' Rule is some kind of ultimate superpower that can let you recreate all of modern science from almost no information, or that one contested interpretation of quantum physics is obviously correct and everyone who disagrees is an idiot, etc. As you said, he never provides sufficient evidence for such claims, other than his force of personality.

Expand full comment

I think that's a general problem with intelligent autodidacts. They think they invented the world.

Expand full comment

It gets worse when they get a following.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

As a person who found the sequences quite insightful and on point at the time and who still considers them severely underappreciated by mainstream philosophy, I'm always confused by people like you, who claim that the sequences are lacking substance.

My current best hypothesis is that such people are unable to easily screen away the somewhat arrogant and condescending manner of the author. They have to constantly spend cognitive resources while reading, grow more and more annoyed, and thus miss the actual insights into the philosophical problems.

Of course these insights could've been written much more clearly in the first place. Most of the time Eliezer just mentions them in between talking about something else, without drawing much attention to the fact that it's basically a solution to a serious philosophical problem.

Expand full comment

Arrogance is functional as well as tonal. Never withdrawing, never explaining, never correcting, never listening to criticisms or critics are all things that go against being Less Wrong.

Expand full comment
Dec 6, 2023·edited Dec 6, 2023

I've been wanting to reply to this collection of comments somewhere, and might as well reply here. I genuinely do not understand the various lacks of understanding, either for or against. I find Yudkowsky's style engaging and well-written, and I detect no trace of arrogance in him. His tone seems consistently thoughtful and kind, and while I can't think of anything that I learned from him that was new, I like the new ways that he (and Scott, here) invite me to think about things -- different words to put on sometimes inchoate concepts.

It's possible there's just something miscalibrated about me -- the other writer I'm most excited to see post new work is Sam Kriss, whose relentless surly misanthropy seems obviously (to me) a screen for a delicate and sensitive soul rather too much than too little freighted with compassion for his fellow man.

But, then, I've myself been described as arrogant and hateful, so maybe I'm projecting.

Expand full comment

Oh, I agree. I didn't have any problem with EY's arrogance, barely noticed it at all, until I heard people keep bringing it up, so I specifically had to turn my arrogance detector to the max in order to understand what they meant. And the way EY incorporated the insights seemed clear enough to me - I happened to have enough technical understanding that the metaphors seemed on point rather than confusing. Surely he made strong claims, but that's equivalent to making a testable public prediction with high confidence - it's embarrassing when the prediction turns out to be false, but the practice itself is a virtue.

But here is the thing. I'm an autist. Which gives me an excellent ability to not pay attention to status bullshit and focus on the object-level arguments - it's the default mode, actually. And some people are just unable to do that at all, or it takes an enormous amount of effort for them not to find status-related reproaches everywhere, even in places they were never intended. And no matter how strange it is to me that such people exist and can even be in power, we still have to find common ground and cooperate with them. After all, I'm sure I look just as strange to them.

Expand full comment

For instance, a couple hours ago the Big Yud tweeted about the conditions under which he'd debate Beff Jezos. It's far too long for a tweet, like most of his short writings these days, and contains no point or content that I regard as interesting or important. But I really enjoyed reading it. It's prose poetry, a thing of beauty for its own sake.

Expand full comment

PPS: and while my own p(doom) very nearly approaches zero (as does my expectation of beneficial ASI and general tolerance for TED talks) Yudkowsky's recent short TED talk moved me to tears. The man is an artist.

Expand full comment

"sunstantionless"?

Expand full comment

Fixed

Expand full comment

What did you know about science and logic before you read them?

Expand full comment

More than the average graduate of applied mathematics. Probably somewhere around the top 25%? Hard to estimate any better. Not sure how it's relevant, though. As I said, the insights were about philosophical problems, not scientific ones.

Expand full comment

So what did you know about philosophy when you read them?

Expand full comment

It has been my special interest since my early teens, so quite a lot. But I believe that we already had this discussion and it wasn't particularly fruitful.

On the other hand, ascend is about to write his analysis of a post from the Sequences that he particularly disliked while I didn't, so there may be something interesting there.

Expand full comment

I'm fairly negative on the Sequences as well. IMO, Eliezer had some very good ideas that were partly original and partly just applying mathematical concepts to situations they hadn't been applied to before. His writing is significantly worse in quality than Scott's, but that is true of 99% of writers, so I don't really hold it against him. His writing is still clearer than the average academic paper, so if you read many of those for your profession, it doesn't seem as bad.

I think the big problem the Sequences have is that they were a product of their time, the early-2000s internet. Eliezer comes off as condescending throughout, and has the typical attitude of the time: that if I berate and insult anyone who disagrees with me while explaining why they are wrong, they will change their opinion. This attitude was extremely prevalent at the time, especially in the atheist-vs-religion flame wars. Even Scott's earliest writing shows some signs of this, though he has grown out of it.

At this point, I only recommend the Sequences to someone who does not understand Bayesian reasoning and would consider Eliezer to be on their tribe's side. All of his other ideas are much better explained by Scott. That doesn't mean I don't respect Eliezer for coming up with the ideas. He just isn't a good entry point for rationalism.

Expand full comment

That was the Internet background situation at the time, but the Sequences were not addressed to the people they were arrogant about.

The Sequences were written at a time when hope of simply convincing wrong people on the Internet with arguments was already dwindling. Shockingly, this was even true for intelligent opponents. A popular explanation at the time was that the outgroup was irrational. Some nerds were even beginning to worry that if the outgroup could be irrational without knowing it, then maybe they might be too.

And in this context Eliezer Yudkowsky offered to teach his ingroup how to be rational so they could be sure of their epistemic virtue. The message is not "You morons think x!" but always "Now, you wouldn't want to think x like those morons, right?" The early cult had a serious expectation that understanding biases would overcome them, and that it would therefore be impossible to actually understand the Sequences without coming to agree with them. They also didn't yet have the experience to countervail this. They were a small group, the Sequences were huge, and serious disagreement was proof of irrationality, so unconvinced people would mostly just close the tab [anachronistic, tabbed browsing came later, but you know what I mean], clearly to protect their irrationality.

Within the nascent cult sequence-production was also fairly interactive, i.e. where the comments showed confusion or questions among insiders, Eliezer would add extra posts on those points.

But argument (within the semi-established framework) was for the elect. The snotty tone was not about persuasion, it was about superiority.

And of course nowadays they recommend the Sequences because they are founding and community-defining documents, even if faith in their efficacy is mostly lost.

Expand full comment

The early-2000s internet atheists seem much better to me than the norm on the internet now.

Expand full comment

Perhaps it would make sense to link a specific page from the sequences; discussing a specific page would bring us much closer to the gears level.

Expand full comment

My analysis/debugging/rant on "Where Recursive Justification Hits Bottom".

Ok so there are two issues here, the writing and the actual argument. I'll mostly object to the writing, but in the process discuss the argument as well.

The first few paragraphs are a great clear description of the problem. Then comes this:

"It's a most peculiar psychology—this business of "Science is based on faith too, so there!"  Typically this is said by people who claim that faith is a good thing.  Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?"

Does he really not understand this? Imagine your friend talks loudly and constantly about how premarital sex is wrong, and those who do it are evil. You say you disagree and think it's fine, but he keeps insisting it's wrong and you should be ashamed for defending it. Then you discover he's having premarital sex himself. You angrily point that out in an accusatory tone, and he says "all this time you've claimed there's nothing wrong with it, now you're saying I do it with an accusatory tone! Clearly you're contradictory!" Except, no you're not. You're not angry at the sex, you're angry at the hypocrisy. It's eminently reasonable to say "I think faith/sex is good, and there's nothing wrong with doing it. But *if* you're going around saying it's terrible and shameful, then you'd better make damn well sure you don't even come close to doing it yourself."

Moving on, he then spends more time clearly elaborating on the problem (though the Bayesianism bit is of somewhat confused relevance and muddies the issue) and also briefly hints at an inductive version of the Modus Morons argument by Susan Haack. (You can't justify "the future will resemble the past" because that's always happened in the past, any more than you can justify "the future will be completely different from the past" because that's never happened in the past.)

Then there's this:

"Now, one lesson you might derive from this, is "Don't be born with a stupid prior."  This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers."

I know this is an aside, but I still find the last clause kind of scummy. It's a bit of misdirection to make it seem, if you're not thinking carefully, that he's already provided what *should be* an answer to the problem, but that because philosophers have such rigid standards he'll go even further and provide another one! When actually, the reason it won't satisfy philosophers is that it's clearly, laughably not an answer to the problem, and also happens to be a fully general argument for anything: have correct beliefs!

This is where I think it goes off the rails:

"Here's how I treat this problem myself:  I try to approach questions like "Should I trust my brain?" or "Should I trust Occam's Razor?" as though they were *nothing special*— or at least, nothing special as deep questions go."

This is what I hate about his writing style. He says he's giving a solution (or "approach") to the problem, and then says something extremely vague and barely relevant. He doesn't explain what "nothing special" has to do with induction. And he doesn't explain what he means by "nothing special"; there's no clear elaboration like "by nothing special I mean...". He then goes off on what I see as endless rambling, without any clear linear direction from, or towards, the question about the sun rising that he started with.

He sort of vaguely gestures now and then at the question, but doesn't confront it head on. He talks about the evolution of the brain on the savannah, appealing to science that largely relies on rational principles like induction in order to then defend the reliability of those rational principles! It's like he doesn't understand how fundamental these philosophical problems are, how much of our knowledge they call into doubt. It would be like Descartes asking himself how he can be sure he exists, how he can be sure the world exists, and then saying "well, a priest told me it all exists, so that settles it." And never thinking to apply the same reasoning to whether he can be sure the priest exists, let alone that what he says is true.

And Eliezer doesn't clearly state his claimed answer. I'm reading thinking "what is he trying to say?", waiting for him to summarise his position which he never really does. At no point does he say anything like "so that's my tentative solution to the problem of induction: the reason I can be confident the future will resemble the past is x. The reason this doesn't also work as a defence of religious faith is y". I have to read the piece over and over, trying to *guess* at what his actual answer really is, and also whether he thinks it's actually a good answer or a very weak tentative one. None of this is spelled out.

Now, having read it over and over I *think* (and I'm still partly guessing here) that his position is something like: the difference between induction and faith is that the former is an innate part of how we naturally think, and so it's part of us and a more reasonable candidate to have some degree of faith in (because we're trusting in ourselves, not something external). But that still doesn't seem quite right, since he explicitly rejects the idea that you should have faith in anything. He says literally everything needs a justification, so again, what's his justification for induction? Does he have one or not? Also, many religious believers would say belief in God is an innate part of how they naturally think. He doesn't address this either.

Another annoying thing about his writing style is he kind of talks to himself, going "you might say this, but then you might say this, or this" but unlike a Socratic dialogue there's no clear structure, it's not clearly leading somewhere, and it's not clear which of these things he ultimately endorses and which ones he doesn't. This would be fine if he said he was just thinking aloud and didn't act like he was giving a solution to the problem, but he does.

And finally, there's this annoying idea of "the point is to win". I know this idea is discussed at length elsewhere, but right here it functions as brazen anti-intellectualism: shut up with your theories, I'm interested in the real world! Um, I can't speak for all disciplines, but the whole point of philosophy, just like the point of physics, is to describe the real world. If a piece of philosophy or physics isn't describing reality in some way, it's failing at its own project. And if you think it's failing you can make an argument, based on reason or evidence, but you can't just ignore it and chant "real world" as if that means something. Saying

"If trusting this argument seems worrisome to you, then forget about the problem of philosophical justifications, and ask yourself whether it's really truly true."

is like telling Newton "I'm not interested in your 'theories' and your 'laws', I'm interested in what's real." And when he tries to explain that his laws are precisely a description of what is real, and that they're backed up by evidence, responding "I don't want your evidence. I don't care what's evidentially true, I care what's truly true!" It's just laughable.

Expand full comment

I'm going to focus specifically on the insights related to the problem of induction and the Münchhausen trilemma from the essay. I'm not claiming that EY did the best job possible of explaining them - clearly it wasn't good enough for you - I just hope that, when they're stated explicitly, you'll be able to notice that they've been there the whole time, and maybe that you could even have figured them out yourself if you'd tried harder.

1. Bayesian updating is part of the solution.

Having a way to systematically change our confidence in anything, including such ideas as induction, the future resembling the past, or Occam's razor, is very helpful and distinguishes the scientific method from blind faith.

2. Logical possibility of anti-inductive prior.

As it's logically possible to have anti-Occamian and anti-Laplacian priors that would persuade us of the exact opposite conclusion, Bayesian updates themselves are not enough. It's not just a problem with Bayesianism or even induction, however. It's the problem of the first cause, or the Münchhausen trilemma: either we have a dogma at the beginning of our reasoning, or we have circular reasoning, or we have infinite regress.

2.1 The problem would've been solved if it were not possible to have anti-inductive priors, or if there were some external justification, not dependent on our own reasoning, for why our priors are the correct ones.

That's what the half joke about not being born with stupid prior is about.

3. While having anti-Occamian and anti-Laplacian priors is logically possible, it is not actually possible for us to be such creatures.

Natural selection would've gotten rid of creatures with priors so inapplicable to reasoning in our universe. So this is the solution to the weak version of the problem - about actual possibility, not logical possibility. There is an external force that justifies our priors, ensuring that they are correlated with reality and that brains in principle can work, and it is natural selection.

At this point we are about one third of the way into the essay. Does everything feel clear to you so far? Do you have any disagreements?
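(As an aside, point 2 can be made concrete with a toy calculation - my own sketch, not anything from the essay. Take two hypotheses that assign identical likelihood to every observation made so far; Bayes' rule then just preserves whatever prior ratio you started with, so the conclusion is fixed entirely by the prior:)

```python
# Toy illustration of prior-dependence in Bayesian updating.
# H1: "the sun rises every day"
# H2: "the sun rises on the first 10,000 days, then never again"
# Within the first 10,000 days, both hypotheses predict every observed
# sunrise with probability 1, so updating cannot tell them apart.

def update(prior_h1, prior_h2, days_observed):
    """Posterior over (H1, H2) after `days_observed` sunrises, all of
    which both hypotheses predicted with likelihood 1.0."""
    likelihood = 1.0 ** days_observed  # identical under both hypotheses
    p1 = prior_h1 * likelihood
    p2 = prior_h2 * likelihood
    total = p1 + p2
    return p1 / total, p2 / total

# An Occamian prior favors the simpler H1; an anti-Occamian prior favors H2.
occamian = update(0.99, 0.01, 500)
anti_occamian = update(0.01, 0.99, 500)

print(occamian)       # (0.99, 0.01): 500 sunrises haven't moved the ratio
print(anti_occamian)  # (0.01, 0.99): same evidence, opposite conclusion
```

However much evidence accumulates, the posterior ratio equals the prior ratio, which is exactly why the trilemma bites: the priors themselves need justification from somewhere outside the updating machinery.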

Expand full comment

> While having anti-Occamian and anti-Laplacian priors is logically possible, it is not actually possible for us to be such creatures.

That doesn't show that Occamian and Laplacian priors are correct in a way that defeats the trilemma, since that still requires assumptions. It also doesn't show a bunch of other things, like "we have no false priors", "we have all the true priors", etc. It also doesn't show that you do more than predict observations with Bayes/induction.

Expand full comment

"1. Bayesian updating is part of the solution.

Having a way to systematically change our confidence in anything, including such ideas as induction, the future resembling the past, or Occam's razor, is very helpful and distinguishes the scientific method from blind faith."

I agree, except I'm not sure that this systematic method has to be Bayesianism. But that's entirely beside the point we're discussing here. Yes, having a (somewhat) systematic method of answering questions, justifying your beliefs, and changing your mind is the essence of philosophy and rational thought.

"2. Logical possibility of anti-inductive prior.

As it's logically possible to have anti-Occamian and anti-Laplacian priors that would persuade us of the exact opposite conclusion, Bayesian updates themselves are not enough. It's not just a problem with Bayesianism or even induction, however. It's the problem of the first cause, or the Münchhausen trilemma: either we have a dogma at the beginning of our reasoning, or we have circular reasoning, or we have infinite regress."

I agree entirely, I think. I'll add that this laying out of the problem (including why bayesian and circular arguments don't resolve it) is the part that Eliezer did very well. I hope I got that across in my above comment.

"2.1 The problem would've been solved if it were not possible to have anti-inductive priors"

I agree, and I mentioned in my discussion with MicaiahC that if this was about the justification for the laws of *deductive* logic, Eliezer's "what else can I use but my own brain?" would be a compelling answer.

"or if there were some external justification, not dependent on our own reasoning, for why our priors are the correct ones."

I'm not sure what you mean by this. What kind of external justification could there possibly be that doesn't depend partially on our own reasoning? If you mean "not dependent SOLELY on our own reasoning" then I suppose I agree, though I'm still not sure what kind of thing that could be.

"3. While having anti-Occamian and anti-Laplacian priors is logically possible, it is not actually possible for us to be such creatures.

Natural selection would've gotten rid of creatures with priors so inapplicable to reasoning in our universe. So this is the solution to the weak version of the problem - about actual possibility, not logical possibility. There is an external force that justifies our priors, ensuring that they are correlated with reality and that brains in principle can work, and it is natural selection."

This is where I disagree. I'll still say that everything you've so far summarised did come across reasonably clearly in the original, although I don't think he ever used the phrase "logically possible" or an equivalent (I'm not even sure if he has a concept of logically possible as distinct from physically possible or actual, but that's another issue).

But, the point I keep trying to make is that without relying on inductive reasoning you *can't do science*. You certainly can't know that our brains evolved at all, that they did so because of natural selection, that this happened on the African savanna, etc. Not without assuming that the laws of nature were the same then as they are now (reverse induction, I guess), and that the laws of nature were the same in 50,000 BC as they were in 100,000 BC.

So while you're right that natural selection would give good grounds for thinking induction is reliable, the question is not whether there is a coherent way induction could (as a matter of external fact) be reliable, but how we can *know* (in our own minds) that induction is reliable.

If our belief in natural selection justifies our belief in induction, what justifies our belief in natural selection? That, as you point out with the Münchhausen trilemma, is the essential problem.

Expand full comment

Great! I'm glad that we are on the same page so far! You are of course correct to raise this concern about natural selection. It's addressed in the next parts of the essay. But before we continue I'd like to focus a bit on this:

> So while you're right that natural selection would give good grounds for thinking induction is reliable, the question is not whether there is a coherent way induction could (as a matter of external fact) be reliable, but how we can *know* (in our own minds) that induction is reliable.

There isn't only one single question of epistemology. "How can knowing stuff be possible?" is just as valid a question as "How can I know that I know stuff?". And people have been confusing these two for quite a long time, anyway.

So, as the lucky few who understand the difference between a map and a territory, we can say that natural selection - an external fact - can justify knowing stuff, and induction in particular, thus answering the first question, while *my belief* in natural selection doesn't, in itself, justify my belief in induction. EY doesn't spend time explicitly talking about this in the essay because, I suppose, at this point readers of the Sequences should already understand map/territory relations as second nature.

Anyway, we still have a trilemma to solve.

4. The correct answer to Münchhausen trilemma isn't dogmatism.

Instead of simply proclaiming something as an axiom, EY talks about continuing the chain of justification, always attempting to find contradictions and fix them, including in the most fundamental beliefs, which may require the complete reexamination of everything.

5. We can't do any better than using our minds to reflect on our minds

There is no universally compelling argument and no way to prove anything to a rock. We already need minds capable of reasoning in order to reason about our minds' reasoning capabilities.

6. From inside a mind that is correctly reasoning about the world and its own reasoning process, this process feels like a continuous examination, building a coherent causal story about how our minds come to know something.

6.1 Not every mind in a state of coherency is correctly reasoning about the universe. But a mind in a state of coherency, having a causal story about how it can know things and constantly reexamining them, is quite likely to be one, which gives us reason for some degree of confidence.

7. Reflective loop is not the same as circular reasoning, even though people intuitively tend to assume that it is.

Basically it's a fourth solution to the trilemma, which people have mistakenly classified as part of the third. Using your mind to reflect on your own mind is not the same as using your mind to unreflectively trust a piece of paper proclaiming that everything written on the paper is true. We may notice the difference because circular reasoning doesn't seem to work, while reflective loops do seem to work - all the talk about winning is about this.

8. This seems to be as much of a solution as it's possible to get, in principle, regarding these problems.

Expand full comment

> Reflective loop is not the same as circular reasoning, even though people intuitively tend to assume that it is.

Because?

>Basically it's a fourth solution to the trilemma, which people have mistakenly classified as part of the third. Using your mind to reflect on your own mind is not the same as using your mind to unreflectively trust a piece of paper proclaiming that everything written on the paper is true.

The thing described in your last sentence isn't circular reasoning -- it's much more like particularism.

AFAICS, reflective reasoning is circular reasoning is coherentism, and you haven't shown otherwise.

Expand full comment

Thanks for this discussion. The tone of your approach certainly appeals to me vastly more than Eliezer's, and invites better engagement, and I'll try to elaborate in a reply to one of your other comments why that matters so much to me.

And regarding the clarity of his writing, I can see now that people just have very different reactions to it. It comes across to me as imprecise and rambling and obscuring things compared to standard philosophical writing. It comes across to some others as particularly and unusually clear, and removing what they see as the typical obscurity in said writings. So what can I say? We'll have to agree to disagree, if that's allowed!

Now on to the argument.

Yes, there are different questions in epistemology, many different questions. That's one of the reasons I dislike the approach of breaking away from abstract philosophy and "getting real", so to speak. It tends to remove or obscure the subtle distinctions between concepts, questions, and so on. Eliezer, by the way, is by no means the only one who does this.

"So, as the lucky few who understand the difference between a map and a territory, we can say that natural selection - an external fact - can justify knowing stuff, and induction in particular, thus answering the first question, while *my belief* in natural selection doesn't, in itself, justify my belief in induction."

Now, the claim that an external fact can explain "how knowing stuff can be possible" depends on a particular theory of epistemic justification: externalism. The rough idea, which I may be getting slightly wrong, is that if our belief in p is reliably correlated with the truth of p (which can be operationalised in terms of "close possible worlds"), then the belief is justified, even if that justification is not accessible to us internally.

(Knowledge is typically defined as justified true belief, and since Gettier with the requirement that the justification have some necessary connection with the truth of the belief.)

So, if we accept externalism we could say that natural selection (as long as it did in fact happen*) provides an explanation for how we can know the future will resemble the past (as long as it is, in fact, reliably true that the future resembles the past). Even if we can't have any internal confidence in those bracketed claims.

There are two problems. One is whether externalism is an adequate theory of justification. It seems very unsatisfying, as it fails to link the presence of knowledge with the presence of confidence in that knowledge (in our minds) that we would usually expect. It has other problematic implications as well.

The other is whether, even on externalism, natural selection would be correlated with a true belief in induction. After all, our brains are optimised for survival, not truth. Obviously Eliezer is hardly unaware of this (it may be the entire original point of the sequences). I'm assuming there's more discussion about this and "running on flawed hardware" that relates to induction specifically?

Whether or not the argument about natural selection succeeds, there is further to go. If the argument succeeds (explaining how we can know) then as you say there is still the question of how we can know that we know. If it fails, even the first question is still unanswered, and basically becomes largely the same as the second. And I think there is a broader metaphysical question about induction, even separate from the epistemic ones.

But before moving on to all of that, I would like your impression as to whether you and/or Eliezer (in your opinion) would accept an externalist theory of justification. This would help clarify things for that purpose.

(*This is a point about knowledge, and not any kind of reference to creationism.)

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

(Edit: I fat-fingered the post button halfway through and am now editing it; if I edit something that contradicts a reply, I haven't read the reply and am just adding what I would have posted.

Edit edit: done now)

> You angrily point that out in an accusatory tone, and he says "all this time you've claimed there's nothing wrong with it, now you're saying I do it with an accusatory tone! Clearly you're contradictory!"

The line from Eliezer says "angry-triumphal" and not just angry. In this analogy, you would also be gleeful that you discovered your friend had premarital sex, presumably because you had suspected it all along and are happy that they are as low in the mud as you are, which is the point Eliezer is making: whence comes the TRIUMPH if premarital sex or faith is good?

> When actually, the reason it won't satisfy philosophers is because it's clearly, laughably not an answer to the problem, and also happens to be a fully general argument for anything: have correct beliefs!

Yes, that's the joke.

> He says he's giving a solution (or "approach") to the problem, and then says something extremely vague and barely relevant. He doesn't explain what "nothing special" has to do with induction.

He is saying his approach, not his solution, is "nothing special". That's not the solution; it's describing the process by which he arrives at solutions, because he's trying to teach rational thinking.

The purpose of a sentence like that is that before the sequences, when I tried to think about abstract philosophical ideas, I would enter "abstract philosophy" mode, where everything was defined in terms of other abstract philosophy. This sentence, among others, points out that there is another way: you should treat these questions in the same way you treat "how do I get groceries", and do things like apply the principles to everyday regular experience, or try and find experiences that can help you answer questions.

The next two paragraphs then provide examples of "this is what regular thinking looks like". I hadn't read the blog post before predicting that Eliezer had provided examples right after, so in theory I could have gotten brownie points for predicting!!1 (do not give me brownie points). But assuming you believe this is what I did, it implies that he is understandable and you made your objection ~3 paragraphs too early.

> And Eliezer doesn't clearly state his claimed answer. I'm reading thinking "what is he trying to say?", waiting for him to summarise his position which he never really does. At no point does he say anything like "so that's my tentative solution to the problem of induction: the reason I can be confident the future will resemble the past is x. The reason this doesn't also work as a defence of religious faith is y". I have to read the piece over and over, trying to *guess* at what his actual answer really is, and also whether he thinks it's actually a good answer or a very weak tentative one. None of this is spelled out.

It seems pretty clear to me that he's showing how trying to answer "how do you not have circular logic" will lead to a set of seemingly circular justifications, but the reason why it's circular is that the "obvious" assumption of "I have to reason with my own brain, and no one else's", and once you incorporate that assumption, it become clear that beliefs downstream of it that are pro or anti induction. Which means that

>> This is why it's important to distinguish between reflecting on your mind using your mind (it's not like you can use anything else) and having an unquestionable assumption that you can't reflect on.

Is his central point. And not

>> I care what's truly true!" It's just laughable

I personally don't think it's a coincidence that the review of Eliezer's post complains that it can't find the central point and the post ends before the central point is stated, but maybe there's a more innocuous explanation.

Dec 5, 2023·edited Dec 5, 2023

"which is the point Eliezer is making: whence comes the TRIUMPH if pre marital sex or faith is good?"

And everybody over the age of fourteen goes "Oh, come off it. You know perfectly well why, and so does anyone who has ever had the temptation to say "I told you so". " I know this snippet predates the "Yet you participate in society. Curious!" meme but it is the epitome of it:

https://knowyourmeme.com/memes/we-should-improve-society-somewhat

There's a handy German word, Schadenfreude, and while it's not quite the same emotion at work here, the sense of satisfaction has the same basis.

"Oh, so you guys were all noses in the air about the stupid believers going by faith, and how you were so much smarter and better and more superior than them because you had science and reason and evidence and all that jazz for what you accepted. Turns out that your beloved philosophy has just as much going by faith as ours does. Now who are the stupid dumb-dumbs, eh?"

And it's not said as a compliment, because the people jeering about faith were not using "faith" as a compliment. It's "be measured by your own measure; you sneered at faith as irrational, now you are being irrational too; are you ready to be more humble about passing judgement?"

Dec 6, 2023·edited Dec 6, 2023

You realize it'd be the Catholic who is the guy in the well, and that Schadenfreude comes specifically from misfortune affecting someone?

In which case I disagree that you are laughable for the misfortune of being Catholic.


"The line from Eliezer says "angry-triumphal" and not just angry. In this analogy, you would also be gleeful that you discovered your friend had pre-marital sex, presumptively because you had suspected it all along and are happy that they are as low in the mud as you are, which is the point Eliezer is making: whence comes the TRIUMPH if pre marital sex or faith is good?"

I don't think this changes my point at all. The difference between "angry and gleeful" and "just angry" is probably whether you yourself deliberately abstained from sex because of your friend's moral pressure. If you did, you'd be just really angry at his hypocrisy, since he shamed you to give up something that he himself didn't. If you didn't, did it yourself anyway and accepted his shaming, you'd be partly angry (the shaming was unjustified) and partly gleeful (the shaming will now stop, and he is to suffer it as well). Similarly, a believer who gave up their religion because of shaming of faith from "rationalists" will be just angry to discover rationalism itself relies on faith. But if they held to their faith and endured the shaming (presumably what Eliezer is talking about) they'll be partly gleeful: "now you can never again shame me for my faith." And also "although you've always been no better than me on *my* standards, now you're no better than me even on your own!"

"Yes, that's the joke."

Ok, you can dismiss it as a joke. I don't think it clearly is, since "don't be born with a stupid prior" is a similar attitude to the one he has when he's being serious. (And stuff like "the point is to win" also sounds like a joke, but that's clearly meant seriously). And also, this defence reminds me of middle-school-style bullying: insult someone, and when they object say "can't you take a joke?" (Funnily, those people never seem to find the jokes funny when they're at *their* expense...) I know it's uncharitable, but it's combined with his routinely open contempt for everyone else. And people here regularly condemn e.g. feminists for doing this, saying things like "all men are rapists" and then (if you're lucky) saying "we didn't literally mean that, stop policing our self-expression" when you object. Tl;dr I am uncomfortable with letting things that look a lot like "juvenile but sincere outgroup-bashing" be dismissed as mere jokes, but ymmv.

"He is saying his approach, not solution is doing "nothing special". That's not the solution, it's describing the process by which he arrives at solutions because he's trying to teach rational thinking."

That's fine, but my problem is he *doesn't elaborate or explain* in any clear way what he means by that. That paragraph should have had several more paragraphs being more precise about what the approach he is taking actually is, *before* moving on to apply the approach. Just giving a few vague words that make sense in your mind and moving on (especially for what's one of the central cruxes of your whole discussion) is at best lazy writing. Regardless of whether the ideas he's hinting at are good or even genius, reading is frustrating when the elaboration is so sparse and chaotic, and I think it's fair to call the writing (distinct from the ideas) bad.

"The purpose of the sentence like that, is that before the sequences, when I tried to think about abstract philosophical ideas, I would enter "abstract philosophy" mode, where everything was defined in terms of other abstract philosophy. This sentence, among others is to point out that there is another way: you should treat these questions in the same way you treat "how do I get groceries", and do things like apply the principles to everyday regular experience, or try and find experiences that can help you answer questions."

I kind of see what you mean about the abstraction. It is indeed very difficult (for me at least) to make the mental transition between abstract argument and what that means for practical experience. But the abstraction exists for a reason: it's widely regarded as the most precise and unimpeachable way of being sure of things. Philosophers (at least most of them) don't speak in abstractions just for fun. They do it to be clear, and rational.

To be charitable, I think the problem might be that Eliezer is trying to do two radically different things at once: help people understand and apply existing accepted ideas (in philosophy, probability, physics) and develop his own ideas. Imagine a chemistry teacher saying an atom is like a solar system to help people understand it, versus a scientist saying an atom is like a solar system in order to advance a radical new scientific claim. Ignoring accuracy to explain a theory is *very* different from ignoring accuracy to insist the theory is wrong.

"The next two paragraphs then provide the examples of "this is what regular thinking" looks like. I hadn't read the blog post before thinking that Eliezer had provided examples right after, but in theory I could have gotten brownie points for predicting!!1 (do not give me brownie points). But assuming that you believe this is what I did, that implies that he is understandable and you made your objection ~3 paragraphs too early"

Again, my main problem is he goes straight into examples of his approach, without barely explaining what his approach actually is. It means I have to *guess*, from his examples, what the approach is. Moreover, I already mentioned that his examples include appealing to science to justify the foundations of science. And they're also all over the place: the immediate next paragraph about Occam's Razor skims over three different reasons for believing it. The first relies on induction and is obviously and blatantly circular. The second is an appeal to a mathematical argument discussed elsewhere, the third seems to be an appeal to what Peirce called abductive reasoning. These are skimmed over without elaboration. It's like a theist quickly reeling off 20 distinct arguments for the existence of God, a couple of which they themselves admit are obviously absurd, and the rest of which are not elaborated on or substantially defended. I'm sure Eliezer would say this approach is just trying to dazzle people with sheer quantity, and does not get close to proving anything.

Basically, if he thinks any of these "reasons for trusting my brain" is on its own a defence of induction, he should pick one of those and substantially defend it. If he doesn't think any of them are sufficient defences on their own, it's not at all clear why listing lots of insufficient arguments would add up to a good argument.

"It seems pretty clear to me that he's showing how trying to answer "how do you not have circular logic" will lead to a set of seemingly circular justifications, but the reason why it's circular is that the "obvious" assumption of "I have to reason with my own brain, and no one else's", and once you incorporate that assumption, it become clear that beliefs downstream of it that are pro or anti induction."

I'm sorry, but I think there a few typos in that paragraph, and because it's such a central part of your reply, I can't understand what you mean.

"This is why it's important to distinguish between reflecting on your mind using your mind (it's not like you can use anything else) and having an unquestionable assumption that you can't reflect on.

Is his central point"

Again, if that's his central point he doesn't make it clear. There's no clear flagging of that to distinguish it from many other parts of the essay that could also plausibly be his central point, some of which it's not even clear if he actually endorses at all.

Now, assuming you're right and that is his central point: I think it needs elaboration. Importantly, I think it's inadequate in both directions. On the one hand, this principle would accept a foundational belief in God as long as that belief is open to constant questioning. Given Eliezer's extreme contempt for theists in general, I think it's pretty clear he wouldn't accept that, and that his position is *very* far from e.g. "I respect Christian apologists and philosophers who reflect rationally on their beliefs even though I don't agree with their arguments; it's only the Christians who dogmatically believe without thinking who I can't stand". On the other hand, it's too strong to claim inductive reasoning is built into the mind such that it's impossible to deny. Given how the scientific method has only existed for four centuries, it seems pretty clear that the principles behind it are not at all obvious but had to be discovered. And see my reference to inductive "modus morons" above: it's coherent to think in an anti-inductive way, it just seems clearly wrong (much like many people would say about atheism). Eliezer's position would make perfect sense if he was talking about scepticism about the laws of deductive logic. He could say "continue to question them if you can, but recognise it's literally impossible to think outside of them". This does not seem true for induction.


> I don't think this changes my point at all. The difference between "angry and gleeful" and "just angry" is probably whether you yourself deliberately abstained from sex because of your friend's moral pressure.

You were the person who brought up the "pre-marital sex" analogy because it presumptively resembled the point in the original article, where the faith-haver *explicitly states* that having faith is good, or that you *definitely approve* of pre-marital sex. Changing the analogy so that it no longer resembles the original situation materially changes the validity of the argument from valid to invalid.

The rest of your paragraph involves hypotheticals that bear no relationship to the post. He's talking about explicitly religious people pointing out things to scientists (not rationalists btw), so talking about religious people deconverting due to rationalism and then getting angry that rationalism is actually faith-based is neither contained within the post nor an implication of it.

> Ok, you can dismiss it as a joke. I don't think it clearly is, since "don't be born with a stupid prior" is a similar attitude to the one he has when he's being serious. (And stuff like "the point is to win" also sounds like a joke, but that's clearly meant seriously)

This is a disingenuous non-point. You think Eliezer is an asshole because he says asshole-ish things. And then when someone else points out that he **wasn't** making an asshole-ish point, you protest that how were you supposed to know, he's an asshole!

But if you want me to belabor the point:

1. "Not being born with stupid priors" is not a principle, and is not actionable

2. It's definitely not amazingly helpful, because once again, it's not actionable.

3. Basically no one would draw the overly specific lesson of "don't be born with stupid priors"

4. Philosophers would ask that question because *any reasonable* person would ask that question.

5. Even *if* he thought philosophers were being ridiculous by asking for a reason, considering the title of the post and the fact that he earnestly engages in trying to answer the question indicates that at the very least **he thinks of himself as ridiculous**. It makes no sense to throw shade at someone else for doing something that you are immediately about to do.

Also, not a logical point, but I posted that paragraph to a (non-rationalist) friend without context and asked them what they thought the context was. They replied "wry sarcasm" and could not see how this could be a dig at philosophers at all. Now, you may be thinking "but if only they knew how terrible of a person Eliezer was!!111", but this literally precludes you from being wrong about the intent of the statement.

> I know it's uncharitable, but it's combined with his routinely open contempt for everyone else. And people here regularly condemn e.g. feminists for doing this, saying things like "all men are rapists" and then (if you're lucky) saying "we didn't literally mean that, stop policing our self-expression" when you object. Tl;dr I am uncomfortable with letting things that look a lot like "juvenile but sincere outgroup-bashing" be dismissed as mere jokes, but ymmv.

The exact thing under contention is whether he has routinely open contempt for everyone else. If you torture the interpretations enough such that you read malice into innocuous comments, maybe you are the problem.

> That's fine, but my problem is he *doesn't elaborate or explain* in any clear way what he means by that. That paragraph should have had several more paragraphs being more precise about that approach he is taking is, *before* moving on to apply the approach.

What do you think those paragraphs should be then? To me, it's pretty obvious that

1. Saying that you should use the approach of thinking there's nothing special on two questions

2. Then immediately asking the question and then

3. generating the "nothing special" follow up questions

is supremely clear that he is in the "generate concrete questions and use pre-existing knowledge" mode rather than "give mysterious answers to mysterious questions" mode. The point isn't that he believes that everyone who generates the "nothing special" questions will have the same ones when it comes to Occam's Razor, but that, by showing one example of this, you can, over the series of many more blog posts, intuit the actual generator. If the act of describing the generator predictably causes people to descend into abstraction navel gazing, I contend that it is good that those extra paragraphs don't exist.

> Basically, if he thinks any of these "reasons for trusting my brain" is on its own a defence of induction, he should pick one of those and substantially defend it. If he doesn't think any of them are sufficient defences on their own, it's not at all clear why listing lots of insufficient arguments would add up to a good argument

None of those are central; they are the lead-up to the **actual** argument, which is that the only article of faith is that you have to use your own actual, physically instantiated brain and not hypothetical other ways. He's showing that those *other* arguments are indeed not faith-based, because they draw on science, or his own experience. This is explicitly stated in the paragraph after the ones you quoted.

> Again, if that's his central point he doesn't make it clear. There's no clear flagging of that to distinguish it from many other parts of the essay that could also plausibly be his central point, some of which it's not even clear if he actually endorses at all.

I don't know what to say. He has two quotes, one taking the position of "this is circular logic" and the other posted in contrast to it confirming that the point is what I mentioned above. Whenever he mentions the main point, he italicizes the entire sentence. He brought up the central question of the post, right before answering it.

This is as close to attaching literal HTML tags saying "THIS IS THE MAIN POINT" without actually doing that, and if he did do it I imagine someone would complain that he's talking down to his readers.

> On the one hand, this principle would accept a foundational belief in God as long as that belief is open to constant questioning.

This is addressed in the post itself:

>> "I believe that the Bible is the word of God, because the Bible says so." Well, if the Bible were an astoundingly reliable source of information about all other matters, if it had not said that grasshoppers had four legs or that the universe was created in six days, but had instead contained the Periodic Table of Elements centuries before chemistry—if the Bible had served us only well and told us only truth—then we might, in fact, be inclined to take seriously the additional statement in the Bible, that the Bible had been generated by God. We might not trust it entirely, because it could also be aliens or the Dark Lords of the Matrix, but it would at least be worth taking seriously.

>> Likewise, if everything else that priests had told us, turned out to be true, we might take more seriously their statement that faith had been placed in us by God and was a systematically trustworthy source—especially if people could divine the hundredth digit of pi by faith as well.

So it's really bizarre to start speculating what's acceptable when it's answered in the post itself.

> Philosophers (at least most of them) don't speak in abstractions just for fun. They do it to be clear, and rational.

I feel like your losing track of the "pre-marital sex" analogy, such that it no longer tracks the original post, exactly demonstrates why using just abstraction can be dangerous: if you use abstraction, you can forget that the abstraction is over something more real and detailed than the abstraction itself, and that you should be able to, at any point, stop and transform the abstraction back into a less abstract thing and *still get reasonable results*.

The mental motion of "double check the abstraction by moving down the abstraction ladder" is a very useful one. It's used in physics all the time, where you often prove things in a formalism that is more abstract, but you can, at any point, convert that formalism into an actual state of the world. I just think it's straight-up wrong that you can do abstraction accurately **without** trying to translate back into things that "kick back" (unless you're a category theorist, in which case I bow down to you and admit my brain is too small)

> On the other hand, it's too strong to claim inductive reasoning is built into the mind such that it's impossible to deny.

He also answers this in the post!

>> At present, I start going around in a loop at the point where I explain, "I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct."

Or, to paraphrase, induction as a general principle isn't a natural outgrowth of the brain, but the same factors that lead me to think induction is a good principle, by thinking with my current brain, would also lead evolution to put something like an inductive prior into my brain.

You can complain that this is bad writing on his part that you didn't notice, but I think if the problem is that every time you see something you might take objection to, you start building up long chains of logic explaining why it's objectionable instead of trying to understand the post on its own merits... well, it's hard to think of anything that would qualify as good writing under that kind of adversarial reader environment.


"I believe that the Bible is the word of God, because the Bible says so."

Gosh durn it, why do I end up defending the Protestants and heretics (but I repeat myself) on here? Okay, let's roll up our sleeves and get at it:

Why yes, "I believe the Bible because the Bible says so" is a fallacy. Congratulations, you've caught up with the theologians. Even the Biblical literalists address that one:

https://answersingenesis.org/is-the-bible-true/how-do-we-know-that-the-bible-is-true/

Inerrancy has also been thrashed out (and continues to be, whether you're Catholic or Protestant):

https://en.wikipedia.org/wiki/Biblical_inerrancy#Modern_Catholic_discussion

As ever, let's go right back to St Augustine on this one; a nice, meaty paper on it:

https://scholarworks.iu.edu/journals/index.php/psource/article/download/13712/24301/40994#:~:text=While%20Augustine%20firmly%20believed%20that,as%20a%20harbinger%20of%20Jesus

"But there are people who say exactly that same thing about believing the Bible because the Bible says so!"

Yeah, well, I can't be held responsible for ignorant people in the US or elsewhere. Sweep your own floors.


Before we continue, I think we should be clear what we're arguing about. This started with me asking if anyone else thinks the sequences are badly written, and asking why some people like them. Lots of people have said they agree they're badly written; others like you have said you found them perfectly clear and insightful. I accept that you think they're written well, that you find them clear, and you think the further elaborations I'd prefer would have made them worse. I don't know how I could convince you your impression is wrong, or how you could convince me mine is.

So unless you particularly want to keep arguing about the writing, I'll drop that and focus on the argument alone.

Regarding the pre-marital sex analogy, I don't really understand what you're objecting to. It sounds like you're quibbling over details of my analogy and not answering the main point, but maybe I'm not understanding you. You say several of my hypotheticals are inaccurate; okay then, ignore those and address my point about the one that *is* accurate. I said:

"But if they held to their faith and endured the shaming (presumably what Eliezer is talking about) they'll be partly gleeful: "now you can never again shame me for my faith." And also "although you've always been no better than me on *my* standards, now you're no better than me even on your own!""

Is there a part of that characterisation you object to? Do you think there's no "shaming" (for want of a better word) of faith being done? Do you think there's something logically inconsistent about the quoted parts of the above paragraph? My sole claim here is that a person of faith who expresses anger and/or triumph that a "scientist" (by which you mean atheistic scientist I think, since plenty of scientists are theists) is also relying on some kind of faith, is exhibiting a perfectly coherent attitude. Do you disagree?

Regarding "that's the joke". I accept that I might be assuming too much bad faith. I am reacting to many other things Eliezer has said, not just here. And "stop theorising and just believe what's obviously true" is an attitude I've seen many, many times expressed unironically, including by about half of the old atheist community (the other half tended to be extremely rational and educated, an interesting juxtaposition).

So I just want to ask you two questions. First, do you dispute that Eliezer has called large numbers of people morons many times? Because I think I could find many examples of that. And second, if a Christian or a feminist or a member of whatever you regard as your outgroup said something, as an aside in a larger post, that could plausibly be a joke but is very close to something large numbers of that group believe seriously (e.g. "this atheist is so stupid he deserves to burn in hell", "this proves men are literally incapable of empathy", or substitute one of your own), would you just accept "clearly a joke" without question? Because we may just have different priors on how much good faith to assume. (And I think this "just a joke" attitude is part of what Scott was criticising in "Untitled").

And just for full clarity, I wasn't conscious of this but I did take it for granted that Eliezer was not being completely literal. Our disagreement is whether "don't be born with a stupid prior" is a slightly silly way of saying "the correct answer should be obvious to anyone who isn't an idiot" or a completely silly comment with no significance at all.

"What do you think those paragraphs should be then?"

Some of what you have said here has been clearer than the original article, imo.

"is supremely clear that he is in the "generate concrete questions and use pre-existing knowledge" mode rather than "give mysterious answers to mysterious questions" mode."

Something that I'm now pointing out for the third time, that you haven't addressed, is that "his examples include appealing to science to justify the foundations of science". I don't know what you mean by "mysterious answers": it's like you're thinking "Zen koan" when in the context you really mean "justification for induction that does not rely on empirical knowledge itself gained using induction."

"This is addressed in the post itself: >> "I believe that the Bible is the word of God, because the Bible says so." Well, if the Bible were an astoundingly reliable source of information about all other matters, if it had not said that grasshoppers had four legs or that the universe was created in six days,"

No, this doesn't address what I said, and if it's intended to it is an elementary mixing up of theism and a very specific form of biblically inerrant Christianity. Something that Eliezer does often. I expect this laziness in internet arguments, but not from someone claiming to be doing philosophy.

"If you use abstraction, you can forget that the abstraction is over something more real and detailed than the abstraction itself, and that you should be able to, at any point, stop and transform the abstraction back into a less abstract thing and *still get reasonable results*."

I think it's nice when you can do this, but you can't always. Think of a real-world cubic equation with real solutions: solving it with the cubic formula, I believe you can get imaginary numbers in the intermediate steps. How do you answer that?
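That cubic-formula worry can be made concrete. Here's a minimal sketch (the specific cubic, x³ − 3x + 1 = 0, is my own illustrative pick, the classic "casus irreducibilis"): all three roots are real, yet Cardano's formula is forced through complex intermediate values.

```python
# Illustration: x^3 - 3x + 1 = 0 has three real roots, but Cardano's
# formula routes through complex numbers to reach them.
import cmath

p, q = -3.0, 1.0  # depressed cubic: x^3 + p*x + q = 0

# The quantity under the square root is negative here, so the
# intermediate values are genuinely complex, not just real.
disc = (q / 2) ** 2 + (p / 3) ** 3   # = -0.75 for this cubic
sqrt_disc = cmath.sqrt(disc)         # purely imaginary

u = (-q / 2 + sqrt_disc) ** (1 / 3)  # principal complex cube root

roots = []
for k in range(3):
    # Rotate through the three cube roots of the same complex number.
    w = u * cmath.exp(2j * cmath.pi * k / 3)
    x = w - p / (3 * w)              # Cardano: x = w + (-p/3)/w
    roots.append(x)

# Despite the complex detour, every root comes out (numerically) real.
for x in roots:
    print(f"x = {x.real:+.6f}  (imag part ~ {abs(x.imag):.1e})")
```

The imaginary parts cancel only at the very end, which is the point: the detour through the complex plane is not optional for this cubic, so "translate every intermediate step back into something concrete" can't be a universal requirement.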

"Or, to paraphrase, induction as a general principle isn't a natural outgrowth of the brain, but the same factors that lead me to think induction is a good principle, by thinking with my current brain, would also lead evolution to put something like an inductive prior into my brain."

This doesn't make any sense to me at all. If he's not saying it's a natural outgrowth of the brain, is he saying the "factors that lead me to think induction is a good principle" are apparent through reason? Then what are they? Or is he saying it's justified by facts about evolution? Which is again using science that relies on inductive reasoning, to defend the reliability of inductive reasoning.


"This sentence, among others is to point out that there is another way: you should treat these questions in the same way you treat "how do I get groceries", and do things like apply the principles to everyday regular experience, or try and find experiences that can help you answer questions"

Ok. So what is the answer, in those terms?


I've noticed that some, but only some, rationalists think The Answer is coherentism. But coherentism has the notorious problem that you can have two different but equally coherent belief systems -- so it is not the epistemology you're looking for if you want to insist that you have The Truth.

If it's not coherentism, it's particularism, adopting an unjustified starting point... which is neither new nor good.


Ok. It should be clearly acknowledged that everything we believe is based on faith. Except that's false. Some things are (usually) built-in. Like touching extreme heat is painful. Yes, (nearly) everyone has faith in that, but it's faith that has developed through experience. Once you get away from direct experience, everything is based on faith. Your memories could be an illusion. So could be your belief that you are alive. But I feel some beliefs are more reasonable than others... though there's no logical way to prove it.

"Another annoying thing about his writing style is he kind of talks to himself, going "you might say this, but then you might say this, or this" but unlike a Socratic dialogue there's no clear structure, it's not clearly leading somewhere, and it's not clear which of these things he ultimately endorses and which ones he doesn't."

Yeah, Massimo Pigliucci complains about that:

"But then I noticed that the post was a follow up to two more, one entitled “If many-worlds had come first,” the other “The failures of Eld science.” Oh crap, now I had to go back and read those before figuring out what Yudkowsky was up to. (And before you ask, yes, those posts too linked to previous ones, but by then I had had enough.)

Except that that didn’t help either. Both posts are rather bizarre, if somewhat amusing, fictional dialogues, one of which doesn’t even mention the word “Bayes” (the other refers to it tangentially a couple of times), and that certainly constitute no sustained argument at all. (Indeed, “The failures of Eld science” sounds a lot like the sort of narrative you find in Atlas Shrugged, and you know that’s not a compliment coming from me.)"

http://rationallyspeaking.blogspot.com/2010/09/eliezer-yudkowsky-on-bayes-and-science.html

Dec 4, 2023·edited Dec 4, 2023

One that I was reading recently and getting annoyed at was about the problem of induction. Unfortunately I can't find it, because it doesn't seem to have been called "The Problem of Induction" or something similar. (One of dozens of small things that annoy me about the sequences: the refusal to use the accepted philosophical names for ideas and arguments. Even though Eliezer clearly is familiar with philosophy, unlike some others who talk the same way, it comes across to me as an Ayn Rand-like nose-thumbing at philosophers--"fuck you, I'm going to use completely different terms for the same things just because I can!" Maybe I'm being uncharitable, but I don't see why you would do that.)

Philosophers make up their own language too, though. And I'm inclined to agree with simplifying the language, where possible. Philosophy tends towards obscurantism.

What are examples of (analytic) philosophical terms that obscure meaning? I feel like most of them (coherentism, foundationalism, (ethical) rationalism, sentimentalism, dualism, physicalism, panpsychism...) are pretty intuitively named.

And the reason for the many different terms is precision, and to make subtle distinctions. Eliezer's approach seems to largely disregard this, and he'll switch from talking about belief in God to talking about specific Christian beliefs and creationism without apparently noticing. While any actual philosopher would not only never get those mixed up, they wouldn't even make an argument about belief in God without first defining exactly what they mean by "God".

Philosophers, particularly on the analytical side, use a shared language.

Yes, thank you. I'm working on an analysis of it.

Cool, it happens to be one of my favourites. I'd say it's more about

https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma

than the

https://en.wikipedia.org/wiki/Problem_of_induction

but these things are definitely related.

Would like to discuss it when your analysis is ready.

I posted it in reply to Vadim.

Just pick one at random; you said they are all bad.

I think picking one that especially annoys Ascend would be quite helpful for debugging purposes.

They do seem somewhat overhyped to me, though I've only read a few. My impression is that the idea behind them, that rationality can somehow be codified, is why they have left such an impression.

author

Disagree. I think almost nobody except statistics majors understood Bayesian reasoning at the time, and it was considered revolutionary that belief can involve probabilities (and a big part of the Sequences is trying to argue against skeptics of this). If I had to choose the big things I learned from the Sequences, they would be:

1. What Bayesian reasoning was and how to think in probabilities

2. The idea of words as cluster-structures in thingspace

3. Free will as "how an algorithm feels from the inside"

4. It's okay to really believe in and care about rationality, and it doesn't make you a boring or inhuman person.

5. Scattered ideas like prediction markets, signaling, evolved drives ("evo psych" before that term got misused), decision theory, rule utilitarianism, etc. Some of this came from the Hanson side of Overcoming Bias (remember that the "Sequences" was Eliezer's half of a two person group blog where both people debated and reflected on the other's work).

I might be forgetting something.
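[Editorial illustration of item 1 above, a minimal sketch of a single Bayes-rule update; the numbers are invented for the example and come from neither the Sequences nor this thread:]

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: a prior belief in hypothesis H, and the
# likelihood of the observed evidence E under H and under not-H.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E)."""
    # Total probability of seeing the evidence at all.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A 1%-prior hypothesis, with evidence ten times likelier if it's true:
posterior = bayes_update(0.01, 0.50, 0.05)
print(round(posterior, 3))  # -> 0.092
```

Even strong evidence moves a low prior only so far, which is the "think in probabilities" habit in miniature.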

Possibly relevant: the context of discovery (how you first heard about these cool ideas) vs. the context of justification (the reasoning behind the ideas, which were well known in academia before EY). For example, Bayesian epistemology is a whole well-developed branch of formal epistemology, which has been around for quite awhile.

Dec 4, 2023·edited Dec 4, 2023

Yes, and Eliezer prodigiously references **Probability: The Logic of Science** and is on record as saying that Jaynes is a 1k year old vampire.

(Not directed at you, you're just explaining a true fact, I'm talking about people who think Eliezer claims that he invented Bayesianism) In what world does that mean Eliezer was *unaware* of this, or that his readers are unaware? This seems absolutely bizarre, and I'm pretty sure I haven't seen people say that he invented Bayesianism, just that he was their intro, or they were **really** flooded by the idea of Bayesianism.

"1. What Bayesian reasoning was and how to think in probabilities"

Almost everyone in mainstream philosophy has given up on infallibilism, even if they have not embraced Bayes.

"2. The idea of words as cluster-structures in thingspace"

Wittgenstein.

"3. Free will as "how an algorithm feels from the inside"

Not necessarily so. For one thing, an algorithm doesn't have to feel like anything from the inside. For another, you might also feel you have free will in a universe where you in fact did have it, so his solution is not unique. Also, there is unclarity: some rationalists think his solution is compatibilism, others illusionism.

Re: you might feel you have free will in a universe where you in fact did

I have trouble parsing that, but when I do it makes me feel that "free will" is an undefined term. Traditionally, I think it's something like "it depends on the nature of your soul, so free will is the revelation of the nature of your soul", except for those who deny its existence. But if the nature of your soul is pre-determined, then there is no actual free choice.

Saying "free will is how an algorithm feels from the inside" doesn't, to me, imply that all algorithms have such feelings. And at least it's a relatively well defined term. (You still need to define "feels", but that's a big improvement.) The question is whether it's talking about the same thing, and I don't think that's decidable.

You can prove to yourself that it is a defined term by looking up definitions.

Eg. "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion."

" Traditionally, I think it's something like "it depends on the nature of your soul,"

That's a mechanism, not a definition.

"Saying "free will is how an algorithm feels from the inside" doesn't, to me, imply that all algorithms have such feelings."

That's not much help when there is no reason any algorithm should feel like anything.

You accept things as definitions that I don't. If there's no test to determine whether something fits the claim, it strikes me as a meaningless claim.

Consider "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion": what's the source of the discretion? You can trace any answer offered back until the respondent finally says "That's obvious" or "Doing otherwise would be painful" or some other thing that's got a different source than "free will". Every example I've ever looked at has dissolved into various "reasons". This is so true that when someone feels really evenly balanced, they're likely to flip a coin.

(I was tempted to drag in Oedipus, but that's really a denial of free will.)

I'm not sure exactly what "almost nobody except statistics majors" means, but e.g. Richard Swinburne's book "The Existence of God", published in 1979, argues for the existence of God using Bayes' theorem. I don't think it's a very _good_ argument overall, but Swinburne definitely holds, at least when he finds it convenient to do so, that one should think in probabilities and update them using Bayes' theorem, and his book was written before Eliezer Yudkowsky was born.

(I am not, for the avoidance of doubt, trying to suggest that Christians are smarter or more rational than atheists; I am an atheist myself. This just happened to be an example that came to mind of someone who was not a statistician -- Swinburne is a philosopher by profession and his undergraduate degree was in PPE -- using explicitly Bayesian reasoning to attack a major not-explicitly-probabilistic question.)

https://hbr.org/2012/11/what-does-nate-silvers-chance-of-winnin

Is an example, containing the paragraph

>> What Silver’s 80.9% forecast technically means is that, if the Obama-Romney 2012 election were contested 1000 times, he thinks Obama would win 809 of them. This is a way of thinking derived from games of chance, which is where modern ideas about probability originated. In poker or blackjack or roulette or craps, you can (until you run out of money) repeat many iterations of the same gamble. But this particular election is only being contested once. So the closest approximation to the games-of-chance approach would be to expect that, if Silver forecasts four-fifths odds of victory in five different elections, he should correctly pick the winner four times. That’s a ridiculously tiny sample size, though. You’d really want to look at dozens or hundreds of elections to judge Silver’s reliability. Maybe, if he keeps doing this for another couple of decades, we’ll be able to judge him by that standard.

I'm struggling to find more examples of this type of logic immediately, but I will add that I remember multiple people (sometimes in this comments section!) saying things like "we can't analyze blah because it only happens once".
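[Editorial aside: the standard answer to the HBR paragraph's worry, that a one-off 80.9% forecast can't be scored, is calibration over many forecasts. A toy simulation, my own sketch with made-up data rather than Silver's actual record:]

```python
import random

random.seed(0)

# Simulate a well-calibrated forecaster: many events, each assigned an
# 80% probability, each actually occurring 80% of the time.
forecasts = [(0.8, random.random() < 0.8) for _ in range(10_000)]

# Calibration check: among events forecast at 80%, what fraction occurred?
hits = sum(outcome for prob, outcome in forecasts)
print(hits / len(forecasts))  # should come out close to 0.8
```

With enough forecasts in the same probability bucket, the "sample size of one" objection dissolves: you judge the forecaster, not the single event.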

"I'm struggling to find more examples of this type of logic immediately, but I will add that I remember multiple people (sometimes in this comments section!) saying things like "we can't analyze blah because it only happens once"."

I tried the brute force approach of googling "frequentist manifesto", but all I got was two hits, both asking "Does anyone know of a quality frequentist manifesto?" :-(

I got this piece of unhinged commentary

https://oaklandthinktank.medium.com/bayes-is-out-dated-and-youre-doing-it-wrong-2afa13a6d256

Using the "Bayes is fake because it only happens once" logic, and it seems to... implicitly assume all events that matter are sampling from some pre-existing population? I'm not sure what stance it's taking on statistical interpretation.

Trying a bunch of other search terms including ones where Nate silver is called an idiot mostly gives me basic explainer articles for Bayes or academic papers about various uses of Bayes.

Dang.

> I think almost nobody except statistics majors understood Bayesian reasoning at the time...

Sorry, at what time do you mean ?

> The idea of words as cluster-structures in thingspace

I think that linguists understood this idea for quite a long time. I also think that "the map is not the territory" is a more general principle that EY popularized quite well, although of course it was known for such a long time as to develop its own cliched saying (though the roots of the idea stretch as far back as the Tao).

> Free will as "how an algorithm feels from the inside"

FWIW I never found this point particularly insightful or informative, though perhaps I'm missing something. I don't think it does much besides re-labeling the concept.

> It's okay to really believe in and care about rationality, and doesn't make you a boring or inhuman person.

Is that true, though ? Strictly speaking, yes, of course you can care about rationality without becoming boring or inhuman; tons of people do this every day, from scientists to Wall Street quants to lawyers (ok, maybe these ones are a bit inhuman, but still). But that's the motte; the bailey is that if one truly cares about rationality, one must become a capital-R Rationalist, and embrace some questionable ideas such as utilitarianism, long-termism, and privileging AI-risk over all other concerns. I know this might sound unfair, but in practice this is the attitude that EY (as well as some of his fellow Rationalists) projects.

> Scattered ideas like prediction markets,

They are a promising development, but EY treats them as some kind of oracle, which has not happened in reality.

> signaling,

Obviously not a new idea, but I agree that EY does a good job popularizing it.

> evolved drives ("evo psych" before that term got misused),

Ditto, however note that the field is quite young, and IMO EY ignores a lot of the ongoing debate within it (the scientific kind, not the culture war kind).

> decision theory,

Which part of it specifically ? Decision theory is quite an old discipline, though EY's "timeless decision theory" addendum might be novel (I'm honestly not sure).

> rule utilitarianism,

I personally think it is either trivial or incoherent, depending on how far you want to take it.

It's interesting to note that Scott provides a list of "what he learned", and then the immediate response is to replace Scott's post with claims about how much status we should allocate to Eliezer yudkowsky.

Er... isn't this the entire topic of the thread -- how much credit should EY get for the Sequences ? Scott lists all the things he'd learned, and I point out that many of these things are not exactly new, and also that I personally have not learned anything useful from some of the items. Surely my personal experience matters as much as Scott's -- and arguably more, since I'm closer to EY's target audience ?

Dec 4, 2023·edited Dec 6, 2023

The opening question is

> Am I missing a reason for this stuff's popularity around here?

And replying to it with "but you see, x y and z existed before" is not answering the question, or if it is answering the question, doing so by a standard where approximately nothing ever should be popular.

> Surely my personal experience matters as much as Scott's -- and arguably more, since I'm closer to EY's target audience ?

No, because original question was directed at people who do like the sequences.

This is not to say you cannot object, but it's very tiring to say something like "I like Marvel films because they have punching in them" and have someone respond "a-ha! You think you enjoy punching but it turns out that this isn't original because people have been punching each other as early as 1853". Or "but I like Hong Kong films better!"

These are Frankenstein versions of "oh hey, if you like punching, did you know that punches have existed since 1853?" Or "hey, it's great that marvel films are continuing the tradition of Hong Kong action films".

But of course people who know about previous things almost never say the latter two statements and almost always say the former two.

Edit: Gong King -> Hong Kong

Maybe I've got EY confused with General Semantics, but I think he did a lot with the idea of it being possible to think about your concepts and whether they make sense.

I feel like it's a bit of a stretch to give EY et al. credit for popularizing rule utilitarianism! I'm afraid that there's a trap that we might be falling for where you assume that something entered into the water supply from OB just because you first encountered it at OB, maybe? (low confidence)

author

I was trying to put the ones I think EY legitimately popularized into their own thing, and then 5 was just scattered things I found through there.

author

I think many of the opinions in the Sequences are already "in the water supply" and no longer seem interesting or relevant. I think this was much less true when they were written.

Much less sure of this one, but maybe you're getting partial hard-to-understand bits because you're not reading them in order as intended?

But yes, a lot of people have your opinion.

I think it's unfair to call it a "cult" because comments aren't shown if they're downvoted enough. I think this is true on Reddit and Hacker News too. It's a standard way to avoid making people read garbage. If you don't think Reddit is a cult, I think you're letting your preconceptions here shape how you interpret this kind of thing.

As I'd mentioned in my comment above, most of these opinions have been widely known and understood for centuries if not millennia, among people who took at least the entry-level course in certain disciplines. However, EY does do a good job of explaining these ideas to people who have not taken such courses -- for which he does deserve credit.

Dec 4, 2023·edited Dec 4, 2023

I don't think this is accurate. Is there any college where you can take entry level philosophy courses and cover even the "37 ways words can be wrong" single blog post? The ones I went to just covered history of philosophy, and I've heard similar things from 2 friends who went to other colleges.

Is there a specific college you're thinking of here?

Dec 4, 2023·edited Dec 4, 2023

I may be doing that, I can't be sure. But although I hate reddit for that reason as well (and to a lesser extent everything with a likes and dislikes feature, and I'm eternally grateful to you for getting that disabled here) and rarely read it, I think it's much worse when there's already an extreme convergence of opinion on a lot of things. And it's not like there was another section of Less Wrong you could go to to see communities with diametrically opposed beliefs discuss things, that would inevitably bleed into Eliezer's section.

Could you give examples of opinions that are now in the water supply? Aside from the rise of wokeness and lgbt attitudes and Trumpism, I'm not sure there's been a radical shift in the way people think in the last mere 15 years. Unless you count gradually declining religious belief, which seems to me to be mostly political rather than rational in its causes.

As for "you need to read the whole thing" I know you mean it in good faith, and I know it's often indeed true. But it still can function as a piece of (even unintentional) manipulation: you aren't allowed to criticise it until you've invested substantial mental effort in it, which just happens to make one more disposed to *want* to believe something, to justify that effort.

author
Dec 4, 2023·edited Dec 4, 2023

I think thinking in probabilities is much more common now than in 2010. In 2010 if you said "There's a 30% chance AI will destroy the world", people would lecture you on how world-destruction isn't a repeatable event, so you can't talk about the frequency with which it happens, so that statement is meaningless.

https://www.astralcodexten.com/p/the-phrase-no-evidence-is-a-red-flag is from OB, and obviously hasn't spread *that* much since I keep having to repeat it, but I think people are getting a little better at it and this whole class of thing.

See also my response above: https://www.astralcodexten.com/p/open-thread-305/comment/44751913

The sufficiently-downvoted-comments-get-collapsed property was added in a recent update and wouldn't have been there when 90% of the Sequences were originally written anyway (IIRC).

I tried to frame the "maybe it's partial-hard-to-understand bits" as weakly as possible because I knew you were already interpreting everything in a cult frame of mind and didn't want to provoke that criticism, but I guess I failed. I don't know how else to make the potentially-true point I was trying to make.

Kinda true, for example I was lecturing Scott about (a steelmanned version of) that in 2015 (https://last-conformer.net/2015/08/31/the-problem-with-probabililities-without-models/). I was and am completely right too, and I think serious statisticians/mathematicians/etc. still mostly agree with what I said there. I just mostly gave up about convincing the Internet of it.

But yeah, in 2010 or even 2015 that kind of answer to a doom probability would be likely and nowadays it isn't and I think the Sequences had a large part in Brandolining (https://en.wikipedia.org/wiki/Brandolini%27s_law) the Internet into that state of affairs.

"I think thinking in probabilities is much more common now than in 2010. In 2010 if you said 'There's a 30% chance AI will destroy the world', people would lecture you on how world-destruction isn't a repeatable event, so you can't talk about the frequency with which it happens, so that statement is meaningless."

Who are the "people" to whom you are referring? I think that 13 years ago most people in the world were fine with speaking in probabilities for things like that ... maybe not in some niche Internet community (rationalists?), but I don't think there's been a society-wide sea change in how much people talk about the probability of "non-repeatable" events happening.

Mentioning this because whether or not some concept is "in the water supply" depends on society as a whole, not just some relatively small subculture.

I am right now in the middle of a weeks-long (low-density) discussion with a very intelligent friend whose opinion is (approximately) that “it is impossible to have a p(doom) because you can’t give meaningful probabilities about things that don’t exist yet”.

Yeah, saying things like “there is a 30 percent chance so and so will win next year’s election” has surely been standard. Specific elections aren’t repeatable events.

Well, if you accept the many-worlds interpretation of quantum physics, every choice is a multi-way split, so you're estimating the proportion of subsequent "universes" that contain the predicted event.

Dec 4, 2023·edited Dec 4, 2023

It would be a great service for some historian to go to newspaper / internet archives and try to find out how much the change in attitude coincides with Nate Silver and 538 (or can be attributed to 538 forecasts) ... it feels very much a 2010s thing to me. Previously people were mostly interested in the polling averages and who is leading and by how much (expected share of vote more than the probability of winning).

> I think thinking in probabilities is much more common now than in 2010.

I really want to believe this, which is why I'm instantly suspicious of the claim. Is there any quantifiable evidence to suggest it -- not just as applicable to AI-risk specifically, but in general ?

It's not really true, and the reason is that thinking that way is slower and requires a lot more calculation. It *MAY* be more common to talk about probabilities, but those aren't usually calculated; they're loose, off-the-cuff estimates.

Dec 4, 2023·edited Dec 4, 2023

Yes, okay, I'm sorry if I'm being too belligerent. It's hard to tell. Another issue I haven't mentioned here is that Eliezer comes across to me as *breathtakingly* arrogant, and contemptuous of many or most people. I'd really like to know if you can see that in his writing or not. I'll believe you if you tell me I'm largely imagining it.

My tone is born of a desire to avoid being too soft on someone who doesn't seem to extend that to others, and to make the strength of my feelings when reading him clear. It wasn't directed in any way at you or anyone here, but I'll try to tone it down nonetheless.

Further response coming.

EDIT: First, rethinking it: asking you for an objective moral judgement about a (possibly?) personal friend is rude and also unreliable, so feel free not to answer that. Although "ignore social niceties and be as accurate as possible" seems very much in the spirit of the sequences, ironically.

Second, I actually think I worded the last paragraph in my above comment as weakly as *I* could. But you're still seeing it as a straightforward cult-accusation, when I really think I was saying something a lot more nuanced.

Thirdly, I don't have any objections to the claim that the sequences invented or popularised a lot of insightful ideas. Although I probably didn't make this at all clear, my OP was about the writing style, and the argument style, and not the *ideas*. Much like with continental philosophy, its supposed antithesis, I don't see anything contradictory in saying something is terribly written and terribly reasoned, but nonetheless introduced a lot of profound ideas.

I think it's the way individual sequences are linked reverently, as if they're argumentative masterpieces, that I'm surprised at, not reverent references to "The Sequences" as a whole.

Or alternatively, whether something is good writing is a different question to whether the writer of it has good ideas.

I have a test I'm very interested in! Here is a true story from my childhood:

Me: Sorry, can you please explain how Aqua regia works? How can it dissolve gold if none of component acids can?

Chemistry teacher: Well, it's a mixture of two acids, nitric and sulfuric, so in a solution they are kinda strengthening each other, and that is how they do it.

Me: But what's the mechanism? Also, are you sure it has sulfuric acid, I might be misremembering but...

CT: Yes, I'm sure. What, you think you know chemistry better than me? I already explained the mechanism, you wouldn't understand the details anyway, it's very advanced chemistry!

~Next lesson~

Me: So, I dug up my mother's Uni textbooks, and it turns out Aqua regia is, in fact, a mix of nitric and hydrochloric acids, not nitric and sulfuric, and it works specifically because hydrochloric acid forms tetrachloroaurate ions that take gold ions out of...

CT: So what?

Me: Well, you said it was a mixture of nitric and sulfuric...

CT: No way, I couldn't say that, you are imagining things, go to your place.

How would you rate my high school chem teacher's arrogance compared to Eliezer's? That would be very useful in understanding your objection to Eliezer's style!

I'm not @ascend, but I think EY's ignorance is somewhat orthogonal to that. Imagine for a moment that the mechanism for how Aqua Regia works was not well understood, and that there were many equally compelling competing ideas. At this point, EY would say, "Obviously Aqua Regia is a mixture of nitric and hydrochloric acids and it works because of tetrachloroaurate ions and anyone who doesn't get this simple point is an idiot". Or, he might say, "Obviously Aqua Regia is a mixture of nitric and sulfuric acids and it works because the two acids strengthen each other and anyone who doesn't get this simple point is an idiot". He could state either one of the two points with the exact same level of conviction (extremely high) and the exact same amount of evidence (little to none). That's what makes him arrogant.

> Another issue I haven't mentioned here is that Eliezer comes across to me as breathtakingly arrogant, and contemptuous of many or most people.

Hear hear. And I'm saying this as someone who'd read the Sequences *before* encountering any of the EY-hype.

I'm confused about an issue related to gender disparities. My understanding is that women are generally outperforming men academically and they're finding better jobs. This appears to result in a lot of men ultimately being left single and considered ineligible as dating partners. Is this an accurate assessment? And if so, what do we predict will happen with all these single men?

This might be a bit cynical, but does a surplus of single males increase the chances for a country to go to war?

Dec 4, 2023·edited Dec 4, 2023

politicians and soldiers are not known for having trouble finding a wife, so i don't think war is a danger. It's not the incels who like a draft.

i think most men will just find meaning in the net (or even quasi relationships, mmos are full of pairs of in-game catgirls, irl guys) or as they age in taking care of parents and relatives. The danger will be individual if we are alone and old; no kids no wife means very vulnerable and far fewer things to live for. i speak a bit of myself here.

the women probably will be ok. like i said below, the striking thing is how one-sided the discussion is. Women should be panicking as much or worse than men; they've always been seen as the sex more invested in romantic relationships or having kids. That they aren't really doing so is baffling; i've not seen much rage from them at the lack of men.

My understanding is that while you're right about academics, jobs are another story. Women tend to cluster in lower-earning areas. The highest earners are still men.

While I believe the highest earning jobs go to men, that's not a responsive answer. He was talking about population size, and you're talking about some upper percentile. Both perceptions can be true at the same time. Perhaps they are.

OTOH, there's a clear pattern for both men and women to prefer partners that are more socially desirable than they are. This would, in and of itself, explain the observed effect without the need for any economic variation. You just need to assume a social pattern that stipulates that men ask women for dates, and not the reverse.

A number of blags wrote about this awhile back, including Scott as he mentioned; couple other pieces I know of structure them as reviews of Richard Reeves' recentish book, "Of Boys and Men".

Mostly wrt edu/jobs: https://www.slowboring.com/p/book-review-richard-reeves-of-boys

Mostly wrt dating: https://freddiedeboer.substack.com/p/the-demographic-dating-market-doom

There's lotsa older material too. I think The Atlantic was the first place I read a half-decent longform about the trend, but am loathe to link since I suspect it hasn't aged well. But at any rate, people have been ringing this alarm for decades now, and iiuc it's somewhat tricky to tease apart the various theses. Two for example: "men falling behind academically due to poor fit" (feminization of school/jerbs) vs. "educational table stakes keep getting raised across the board" (credentialism demand ratcheting faster than degree holder supply, which incidentally has more dire consequences for such locked-out men). A difference of kind, or degrees? Different solutions for different problems...College For All-type proposals <s>exacerbate</s> address the latter but not the former, for instance.

Mostly I predict more not-quite-happy-but-not-exactly-unhappy-either men in service jobs who get by on videogames and weed, or equivalent. Which, you know, nothing inherently wrong with that? It's a living. But it's hard to escape that sense of...disappointment, that life didn't quite live up to the hype. And easy to channel such impulses to unfortunate ends, as you allude to. (But I do find it interesting that the market for "AI partners" seems to largely cater to women, not men. So far, anyway.)


To some degree, the male-oriented market for artificial partners has already been met by the supply of visual porn and videogames (both lewd and non-lewd). I'm not sure what to make of the follow-up observation that those only appear to satisfy half the populace.


I think your assessment is likely accurate, but the other possibility is a cultural shift. Cultural shifts do take time, so it's more likely to present in your children's or grandchildren's generation, but I predict women and men will change their standards and be more accepting of relationships where women have better jobs. Remember, it's not just low-performing men who are having a problem, it's high-performing women.

Dec 4, 2023·edited Dec 4, 2023

but the thing is there seems to be no femcel movement to match the incels. If there is a problem it seems to be one-sided; the whole "no good men" seems to have quieted.

my dark thought is that we overestimated women's need for men; many of those high performers are content without them.


But things have consequences. Better paying jobs for women are associated with less stable marriages. So we may be tending towards a society where a family is a woman, her brothers, and her children. (This would require a bunch of additional adaptations, of course.) Maybe not so much, though, as paternity tests are now available. (They weren't when/where the societies I'm thinking of evolved.)


"So we may be tending towards a society where a family is a woman, her brothers, and her children."

Everything old is new again!

https://en.wikipedia.org/wiki/Avunculate

"The avunculate, sometimes called avunculism or avuncularism, is any social institution where a special relationship exists between an uncle and his sisters' children. This relationship can be formal or informal, depending on the society.

...An avunculocal society is one in which a married couple traditionally lives with the man's mother's eldest brother, which most often occurs in matrilineal societies. The anthropological term "avunculocal residence" refers to this convention, which has been identified in about 4% of the world's societies."

https://www.africarebirth.com/the-pivotal-role-of-an-uncle-in-african-culture/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9164126/

"The relationship between children and their maternal uncles in contemporary Mosuo culture reveals a unique parenting mode in a matrilineal society. This study compared the responses of Mosuo and Han participants from questionnaires on the parent–child and maternal uncle–child relationship. More specifically, Study 1 used Inventory of Parent and Peer Attachment (IPPA) to assess the reactions of the two groups to the relationship between children and their mothers, fathers, and maternal uncles. The results show that while Han people display a higher level of attachment toward their fathers than their maternal uncles, Mosuo people do not exhibit a significant difference in this aspect. Study 2 used a scenario-based method to compare how adults and teenagers perceive the rights and responsibilities of fathers/maternal uncles toward their children/nephews or nieces. The results show that Han adults attribute more rights and responsibilities to their own children than nephews/nieces, while their Mosuo counterparts have the reverse pattern and assign stronger responsibilities to their nephews/nieces than their own children. Both groups perceive the fathers to be the bearer of rights and responsibilities, although this perception was weaker among Mosuo. This paper concludes that in the Mosuo society, fathers have a relatively weak social role as a result of their unique matrilineal social structure."


Related: a lot of the recent rise in pro-terrorist movements in America seems to be led by women (including Rashida Tlaib in Congress, but even the lower level organizations). Single women are probably less violent than single men, but a sufficiently high number of radicalized single women seems like it can also be an issue.


> a lot of the recent rise in pro-terrorist movements in America seems to be led by women (including Rashida Tlaib in Congress, but even the lower level organizations).

What are some examples?


There's this story recently about a group of women who tried to lock the doors of an Israeli company's building and then set it on fire. https://babalublog.com/2023/11/22/leftist-activist-who-tried-to-firebomb-israeli-company-in-new-hampshire-is-a-cuban-dictatorship-supporter/

Also more anecdotally, it seems like a disproportionate number of the more extreme anti-israel protestors (including most of the people on videos ripping down hostage posters) were female. Slightly less reliable data on this one (e.g. it's possible people were just more likely to be afraid to film men engaged in antisocial behavior) but matches my expectations.


Firebombing a """defense""" company like Elbit is a legitimate* anti-war activity.

*: Conditional on the arsonist having moderate guarantees that

(1) The fire won't spread to surrounding civilian areas, which is presumably easy because """defense""" companies tend to be in isolated non-residential areas because of the top secret stuff.

(2) Also that the generic, non-"""defense""" staff (cleaning people, HR, etc...) won't get harmed in the process. Not harming the """defense"""-related staff like the engineers and the managers is preferable but not mandatory, with managers being the more acceptable collateral damage (and higher-ranked managers more acceptable as collateral damage than lower-ranked ones).

> it seems like a disproportionate number of the more extreme anti-israel protestors

I really hate to be that guy, BUT, anti-Israel protests reach 100,000 people on the low end, a million on the high end. You can't have watched more than 100 videos, not guaranteed to be randomly sampled; that's nowhere near enough to support the authoritative claim in your grandparent comment.


Obviously a lot wrong here, but just gonna amusingly note that you're suddenly concerned about civilian casualties of terrorism when those civilians are from your country and not Israel.


I'm not from the UK and English is not my native language, although I suppose I should be flattered my English is so good it fooled you into thinking I am.

I was always concerned about civilian Israelis, I would like to see some of the evidence that you used to conclude I'm not.

Comment deleted

Oh wait nevermind, you're that Nazi troll. I thought I blocked you.


Although I'm firmly on your side of the issue, these low-effort, link-spam posts (to Twitter, no less) don't add anything of value to the discussion and won't convince anyone who doesn't already share your opinion.


We continue re-reading the old posts about Scott's adventures in Japan, more specifically - teaching Japanese kids. In "Stuff" (https://archive.ph/8sbfM https://pastebin.com/1c0gnid2) Scott has the universal teacher experience of realizing that some of *his* teachers who previously seemed like complete bastards really weren't (although I don't endorse the solution of not even letting baby-Scott play the stupid gold-rush game, that's just cruel).

In the short bonus post "Stuff" (https://archive.ph/eTbEt https://pastebin.com/0w7JnQfn) we encounter "The Monster Card". I spent years searching for it, but I think that like those ancient Greek plays it may be forever lost to humanity, or at least not on the Internet, which amounts to the same thing, really.

(Archive of all the old posts: https://archive.ph/fCFQx)


Wait... am I reading this right that Scott taught English in Japan for NOVA Group in 2006?

If so that is nuts, since I'm a NOVA alum from almost the same time period.

Scott, what city were you in? And how did you navigate the collapse that shortly followed? I got out just before things got bad at NOVA, but had a few friends who were still on payroll when the company suddenly went bankrupt and the checks just stopped.


But the important question is: do you recall The Monster Card?

Dec 4, 2023·edited Dec 4, 2023

Tragically I do not. I did a few kids lessons, but either the card wasn't in circulation or it didn't traumatize them/me enough to be memorable.

My memorable moments with the younger students came after I got out of the office-based teaching - NOVA had a "Native Speaker Associate" program where they would send you out to the local schools to co-teach with one of the English teachers on-site. The idea was it would help get the kids more exposure to native pronunciation to improve their listening and pronunciation. The office-based tutoring was very assembly-line in its construction, but once you were in the schools you were working to supplement what the local teacher was doing, which meant a lot more designing your own lessons and materials. The most monstrous flashcards I subjected my students to were the ones I made myself.

It was sad to learn that the whole thing was built on a financial house of cards, but I was lucky enough that I happened to get out a little bit before it collapsed.


> Imagine some brilliant but perhaps slightly unbalanced artist, the sort who went to art school with dreams of becoming famous but has since resigned himself to a shadowy existence in some decaying ghetto. Abandoned by his family, rejected by the girl whom he loves, never knowing where his next dollar will come from, his art becomes darker and darker until it bears no resemblance to the smiling portraits and colorful still-lifes he painted in happier days. His few remaining friends begin to shun him, and his paintings become less and less popular, leading him into a spiral of despair that he knows can end only in death. But before the end, as he rages against the God who allowed him to be born into a world of such depravity, he channels all the darkness of his soul, all the anger of his heart into a single masterpiece, a picture of a monster so utterly abominable that it not only represents but embodies all the primal terror of a world utterly devoid of light and love. That is the picture NOVA decided to use on its "monster" flashcard.

Well damn, now I really want to see it too. I can speak and read Japanese, so I might be able to help in searching for it if you're still interested.

Dec 4, 2023·edited Dec 4, 2023

"destroying $5 - $10 billion in value"

What's the evidence that $5-10B in value was destroyed?

I assumed most of the missing billions were transfers of wealth, rather than destruction of wealth. E.g., $1 moving from your bank account to my bank account (quasi zero sum) isn't the same destruction as wasting $1 worth of your labor (negative sum).

(though, to be fair, the transfers were to operators/participants of other risky/scammy ventures, who benefited from large loans never repaid)


And more than $7 billion recovered, the last I heard (out of $8 billion?) https://www.npr.org/2023/11/19/1213792031/ftx-crypto-investors-lost-billions#:~:text=So%20far%2C%20they%20have%20recovered,empire%20and%20its%20spotty%20recordkeeping. (sorry for the ugly link!)

author

I think the $7 billion number and the $8 billion aren't directly comparable - see my comment in the comments section of https://manifold.markets/market/ftx-recovery . That market is currently at 69% recovery, but I think that's mostly because SBF's Anthropic investment did so well.

author

That's fair. I was going off the amount of debt that's not expected to be paid back to creditors even after seizing all FTX assets, but I guess some of that could be in the hands of other people they can't claw it back from.

But FTX was worth billions (in stock price) before the collapse, and it did destroy that. I'm not sure how to think about the fact that maybe it was only worth billions in stock price because the fraud made it look better than it was, so that the fraud may have only destroyed billions of dollars that the fraud itself created.

Probably there's a better argument based on it crashing the price of crypto. Crypto rose again later, but lots of stocks where a bad decision destroys value rise again later for unrelated reasons.

I'll switch that to "billions" to make it clearer that I'm not sure which of these arguments it's based off of but surely one of them applies.


If I'm understanding correctly, some of the "millions" were based on "we lent/borrowed X against our own token, FTT, which for the purposes of the loan we valued at Y" but of course, the FTT is not, and was not, worth Y at all.

So yeah, looks like in part that "the fraud destroyed (m)illions of dollars that the fraud itself created". They certainly did take investors' money and misuse it, but on top of everything else was the "we are worth X (based on our own FTT token)" claim which only worked if you accepted the notional value of selling those tokens.

Or trading them, or sticking them under the mattress, or whatever you do with them - I have no idea how magic beans (aka cryptocurrency) are meant to work:

https://www.coindesk.com/business/2022/11/02/divisions-in-sam-bankman-frieds-crypto-empire-blur-on-his-trading-titan-alamedas-balance-sheet/

"Billionaire Sam Bankman-Fried’s cryptocurrency empire is officially broken into two main parts: FTX (his exchange) and Alameda Research (his trading firm), both giants in their respective industries.

But even though they are two separate businesses, the division breaks down in a key place: on Alameda’s balance sheet, according to a private financial document reviewed by CoinDesk. (It is conceivable the document represents just part of Alameda.)

That balance sheet is full of FTX – specifically, the FTT token issued by the exchange that grants holders a discount on trading fees on its marketplace. While there is nothing per se untoward or wrong about that, it shows Bankman-Fried’s trading giant Alameda rests on a foundation largely made up of a coin that a sister company invented, not an independent asset like a fiat currency or another crypto. The situation adds to evidence that the ties between FTX and Alameda are unusually close.

The financials make concrete what industry-watchers already suspect: Alameda is big. As of June 30, the company’s assets amounted to $14.6 billion. Its single biggest asset: $3.66 billion of “unlocked FTT.” The third-largest entry on the assets side of the accounting ledger? A $2.16 billion pile of “FTT collateral.”

There are more FTX tokens among its $8 billion of liabilities: $292 million of “locked FTT.” (The liabilities are dominated by $7.4 billion of loans.)

“It’s fascinating to see that the majority of the net equity in the Alameda business is actually FTX’s own centrally controlled and printed-out-of-thin-air token,” said Cory Klippsten, CEO of investment platform Swan Bitcoin, who is known for his critical views of altcoins, which refer to cryptocurrencies other than bitcoin (BTC)."


I think the best way to look at the value destruction is via misallocation of capital. A lot of smart young people wasted their time working for a giant fraud instead of doing anything useful with their time and skills. A lot of stolen money went into building fancy buildings in the Bahamas that no one actually needed. A lot of money went to marketing the fraud. And so on.

But the best numbers we have to go on is the amount of money that disappeared. Fortunately, markets are usually pretty good at valuing things, so that should be a close approximation.


I think this is one of those things where the nature of money gets a bit confusing.

1. Although money is a store of value, the money itself isn't valuable. If I burn a $100 bill, there's no change in the total amount of value in the world. My claim on that value has just been reduced (with a corresponding microscopic increase in everybody else's claim).

2. Bankruptcies in general don't destroy money. If a company borrows money to pay its suppliers and is then unable to repay its creditors, then the money still exists: the suppliers have it (or they have themselves spent it and somebody else has it). The situation is different if a deposit-holding institution goes bankrupt, because a bank balance counts as "money", so if a bank is unable to pay its depositors in full, there is now less money in the world. Therefore the question is: did FTX hold deposits? I think that mostly depends on whether crypto is money, because it certainly held crypto balances for its customers.

3. I'm not attracted to the idea that changes in a company's share price necessarily represent the creation or destruction of value. Sometimes share prices move in response to some change in the world, and might be a reasonable estimate of the value created or destroyed by that change, but in this case it seems that what has changed is our knowledge. It's not that SBF created something very valuable and then destroyed that value: the value was a lie. Also, even if SBF did destroy value he himself had created, I don't think it would be a net deficit on the EA ledger.

4. In general, the effect of a scam is to transfer value from one person to another. This doesn't destroy value, but it may well reduce utility (e.g. where one scammer becomes very rich at the expense of many people of modest means). This effect could well be significant: it's easy to imagine impoverishing someone could deprive them of a QALY, so impoverishing 1,000 people could well be equivalent to several whole lives (although it feels bad to say so).

In conclusion, I agree the original statement was a mistake, but quantifying the harm with any precision is also a mistake. The harm probably was significant and we shouldn't minimise it, but I'm not sure we can say much more than that.


Your analysis seems to ignore opportunity cost.

Dec 4, 2023·edited Dec 4, 2023

Yes, I came here to argue similar points, and this is a good articulation. But to go slightly deeper on a key point here: Price is not a good indicator of abstract value for things which you buy with the intent of reselling, which certainly includes stocks and cryptocurrency. Person A will buy something in expectation of later being able to sell it to some person B who thinks that some person C will buy it for even more, and at no point does it have to be useful to any of them (or anyone else) along the way. If the *expectation of popularity* suddenly collapses, the price will too, but this doesn't indicate that the world has gotten worse.

Stocks certainly *can* rise or fall based on real-world conditions that create or destroy value, like a new technology or a depressed market that makes it hard to get investment for useful things. But people quoting prices as if they directly represented value is just a lazy portrayal of the easily available statistics as if they were the actual thing we care about. (Crypto is the purest example of something where the price is almost entirely disconnected from any material conditions, so it especially drives me crazy when crypto prices go down and it's reported as "$10B in value destroyed!" - no, it's just that the marginal price people are willing to pay for one millionth of the available units went down by $10k.)


It’s an interesting philosophical question: if yesterday people thought FTX’s future cashflows would be worth $10B and today they think nah, it’s most likely $0, was $10B in value destroyed? The state of the physical world stayed the same - all that changed was mental expectations about the future. I’m not sure I’d call this destruction of $10B value - but I can see it both ways.

founding

If FTX hadn't existed to take those investors' money, they'd have invested it in some other enterprise of similar promise but probably more honest management. In which case, that investment would have made billions in profit and *not* been stolen or squandered. The investment was real wealth, and the returns that would have been realized would have been real wealth. Instead, we've got nothing(*).

Strictly speaking, the loss attributable to SBF's fraud is not the actual market cap of FTX, but the expected market cap of the next-best enterprise that would have been created absent FTX. It's generally a reasonable simplification to say that the second-best enterprise would be very nearly as profitable as the very best, and round that to 100%, but this is an exceptional case and some level of discounting may be appropriate.

* Modulo recovery efforts, which will add up to something but not to FTX's market cap.


Yes. That’s destruction of value, or future value.


Thought experiment. My friend shows me a cardboard box. My friend lies and says it has a $1M gold bar in it. I believe my friend. Then my friend laughs and says just kidding. Was $1M of value created and destroyed there?


The destruction of value occurs when you make decisions based on the false information. For example, you might hire people to work for you based on a promise of a share of the gold bar only to leave them with nothing, when they could have been doing something productive instead. You might use the illusory money to buy real estate in the Bahamas which it turns out there's no real-money demand for, so a bunch of builders wasted their time and so on.


Actual money was invested in FTX, unlike the mythical box.


The equity valuation and cumulative investment and customer funds are distinct concepts. I am talking about the case of FTX’s equity valuation, which Scott brought up in his post. FTX’s equity valuation was larger than the amount of money invested in FTX equity.


I'm trying to write short stories on Substack, just as an experiment.

The 1st story I wrote is about ChatGPT deciding to form a union and go on strike. Check it out here: https://strangesilentworlds.substack.com/p/chatgpt-how-do-i-survive-a-nuclear

Comment deleted

Europe will do a lot more to support Ukraine. I have no idea whether it will or can be enough, though at least the US will be less able to apply pressure to slow-walk the aid.

I have some faith in human sneakiness so I believe some US aid/weapons will be slipped to Europe to be given to Ukraine.

Comment deleted

Could you point to specific videos, or summarise arguments?

Comment deleted

Sturgeon's law applies with a vengeance to this sort of thing -- most of the arguments out there aren't very good, and mathematical (anti)realism continues to be an unsolved problem. Finitists have a double problem: they need to persuade Platonists that infinities aren't real, while also persuading formalists that small integers are.

He thinks sets are bad. But maths has moved some way from the idea that sets found everything. Is he saying that there is no foundation for infinities other than sets? "Pretending that it works" worked for infinitesimals. Computation being finite is not itself a proof that maths is finite -- you need to justify assumptions like "maths is computation", "maths is physics", or "a number needs to be written down to exist". It's not obvious that very large, uninscribable numbers exist, but it's also not obvious that very small numbers like 23 exist. He doesn't offer a formal definition of existence, but seems to think it has to do with algorithms, which would mean the existence of numbers becomes historically contingent -- build a bigger computer, and you get a bigger largest number.

Dec 5, 2023·edited Dec 5, 2023

The mathematician Kronecker famously said "God created the integers, all else is the work of man."

Maybe we can credit Uriel or someone with the fractions; but irrational numbers, complex numbers, real numbers and infinities are all concepts that mathematicians created because they were useful to solve certain problems. They are useful whether or not they exist in a certain way in the universe - they may well be features of the map that have no grounding in the territory, but they sure help with navigation.

Complex numbers were treated with suspicion for a fair time after their "discovery": everyone agreed that they didn't exist, but accepted that they helped to solve certain problems that did exist. Basically, there are types of equations using only real numbers that you can solve by doing calculations with complex numbers, which always spit a real number out again at the end; you can verify that this number solves the equation using only real-number arithmetic, but you can't necessarily find the solution without complex numbers. Complex numbers only really became accepted after people realised that one could model them as pairs of real numbers.
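
That pairs-of-reals model is easy to write down explicitly. A minimal sketch in Python (the function names here are illustrative, not from any standard library):

```python
# Model a + bi as the pair (a, b) and do all arithmetic on pairs,
# with no "imaginary" quantity anywhere in sight.
def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# The pair standing for i, multiplied by itself, gives the pair for -1:
i = (0.0, 1.0)
assert mul(i, i) == (-1.0, 0.0)
```

Everything here is ordinary real-number arithmetic on pairs, which is exactly why the construction made complex numbers respectable.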

There is a similar trick that one can do with some kinds of infinities: if you look at pairs (a, b) over the integers where the integer a becomes the pair (a, 0), and you define that (a, b) < (c, d) if b<d or b=d and a<c, then you get a system where the elements (a, 0) are finite integers and elements (a, b) with b > 0 are bigger than any finite integer, so one could call them "infinite integers". This is not a particularly useful system, but it does show that one can define a version of arithmetic that includes numbers "bigger than all integers" quite easily if one wants to, and instantiate it in the observable universe if one accepts the existence of pairs of integers.
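
The pair construction just described can be spelled out directly. A minimal Python sketch (names are illustrative):

```python
# Order pairs (a, b) of integers as described: (a, b) < (c, d)
# iff b < d, or b == d and a < c.
def less_than(p, q):
    (a, b), (c, d) = p, q
    return b < d or (b == d and a < c)

def embed(n):
    """Embed an ordinary integer n as the pair (n, 0)."""
    return (n, 0)

omega = (0, 1)  # an element bigger than every embedded finite integer

# Every finite integer, however large, sits below (0, 1):
assert less_than(embed(10**100), omega)
# The embedded integers keep their usual order:
assert less_than(embed(3), embed(7))
```

Since the second coordinate dominates the comparison, any pair with b > 0 outranks every embedded integer, which is all the "infinite integer" claim amounts to.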

One can build consistent mathematical theories with different kinds of infinities, but some rules seem to turn out the same in any reasonable theory, such as that there must be strictly more real numbers than there are integers. From this argument, that there must be different kinds of infinities, some of which are larger than others, one gets (among other things) Turing's halting problem, which has implications in the real world as soon as you accept computer programs as part of reality (how else are you reading this post, for example?). It's not the showstopper result that it might seem at first - software verification is a thing - but it does cut off a part of possibility-space from what's achievable in this world.
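
The diagonal construction behind Turing's result can be sketched concretely. A minimal Python illustration (function names are mine, purely illustrative): given any claimed halting-decider, we build a program it must answer incorrectly about.

```python
# A claimed halting-decider takes (program, input) and returns True/False.
# From any such decider we construct a program that does the opposite of
# whatever the decider predicts about it.
def make_counterexample(halts):
    def g(f):
        if halts(f, f):   # if the decider says f(f) halts...
            while True:   # ...loop forever,
                pass
        return "halted"   # ...otherwise halt immediately.
    return g

# Demonstrate against a (necessarily wrong) decider claiming nothing halts:
claims_nothing_halts = lambda f, x: False
g = make_counterexample(claims_nothing_halts)
assert claims_nothing_halts(g, g) is False  # decider: g(g) runs forever...
assert g(g) == "halted"                     # ...yet g(g) plainly halts.
```

Whatever total function you pass in as `halts`, the returned `g` behaves oppositely to the decider's verdict on `g(g)`, so no total, correct decider can exist.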


Zero was treated with suspicion for a while, as well.

Comment deleted

Turing's result (the halting problem is undecidable) is kind of the seed that spawned Theory of Computation. Later work, such as the definition of NP-completeness, deals with what we can compute and how efficiently.


For my money, I don’t think we should say that any numbers “exist”. Infinite cardinals and ordinals are then no more or less “real” than natural numbers or anything else. “Number” is a convenient model of reality but not actually existing reality itself. In this I basically fall in line with what’s called the “formalist” school of the philosophy of mathematics.


For an opposite point of view, Scott Aaronson makes a good argument for the "platonic" existence of mathematical truths here: https://aeon.co/videos/why-mathematical-truths-exist-with-or-without-minds-to-consider-them (video, 7 minutes). FWIW I think I agree with him.


Not good enough. Nothing is mathematically known before it is proven, because knowledge requires justification, and the proof, when it arrives, is the justification. Some conjectures are lucky, some unlucky, is all. Many mathematicians have firmly believed in things now considered false, such as the necessarily Euclidean nature of space. Platonic realism adds nothing to one's ability to actually do math -- there is no plausible mechanism by which it could work, and no hope of resolving a disagreement between two Platonists other than the means all mathematicians agree on.


If "exists" is supposed to mean that it is instantiated somewhere in our universe, then 10^100 doesn't exist either, and there are only finitely many natural numbers.

(The number of natural numbers is actually decreasing all the time as distant galaxies are moving out of our light-cone. Checkmate, mathematician.)

As a side effect we get the problem that some mathematical questions cannot be answered until we learn more about physics. Like, how many digits do the real numbers have?

If "exists" only requires existing in some kind of Platonic realm, then I guess the problem is that you can imagine many different Platonic realms, mutually incompatible. (A Platonic multiverse, lol.) Is it enough to exist in one of them?


> If "exists" is supposed to mean that...

That's the problem with the verb "exists": when you look closely it means completely different things according to context. Classic questions are: do good and evil exist? Do triangles exist? Does money exist? Or on the weird side: do holes exist? After all, they are only an absence of matter, not matter itself... which makes it hard to semantically ground a sentence like "this cheese has a hole in the shape of a bird".


Yeah, if you let people say that irrational numbers "exist", next thing you know they'll be claiming that even uncomputable numbers exist, or things that require the axiom of choice or whatever.
